AWS’s Neurosymbolic AI Offers Safe, Explainable Agent Automation for Regulated Industries


AWS is betting that making its Automated Reasoning Checks on Bedrock generally available will boost confidence in AI applications and agents for enterprises and regulated industries. 

By using methods like automated reasoning, which relies on math-based validation to establish ground truth, AWS hopes to ease enterprises into neurosymbolic AI, an approach the company sees as a key future advancement and a distinct differentiator.

Automated Reasoning Checks let enterprise users verify the accuracy of responses and detect model hallucinations. AWS introduced the feature on Bedrock at its December re:Invent conference, asserting that it can catch nearly all hallucinations. Initially, a limited number of users accessed it through Amazon Bedrock Guardrails, which lets organizations establish responsible AI policies.
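
A guardrail with an Automated Reasoning policy attached is invoked like any other Bedrock guardrail. The sketch below, in Python with boto3, shows the general call pattern via the ApplyGuardrail API; the guardrail ID, version, and checked text are placeholders, and attaching the Automated Reasoning policy itself is assumed to have been done separately when the guardrail was created.

    import boto3

    client = boto3.client("bedrock-runtime", region_name="us-east-1")

    # Placeholder identifiers for a guardrail created ahead of time,
    # assumed here to have an Automated Reasoning policy attached.
    response = client.apply_guardrail(
        guardrailIdentifier="gr-example-id",
        guardrailVersion="1",
        source="OUTPUT",  # validate a model response, not user input
        content=[{"text": {"text": "Employees accrue 1.5 vacation days per month."}}],
    )

    # "GUARDRAIL_INTERVENED" means the guardrail blocked or rewrote the content.
    if response["action"] == "GUARDRAIL_INTERVENED":
        print("Flagged:", response.get("outputs", []))
    else:
        print("Response passed the configured checks.")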

Byron Cook, vice president and distinguished scientist in AWS’s Automated Reasoning Group, told VentureBeat that the preview rollout demonstrated the feature works in enterprise environments and helped organizations understand the value of AI that combines symbolic reasoning with generative AI’s neural networks.

Cook said some customers allowed AWS to evaluate the data and documents they used to annotate answers as right or wrong, and the tool’s judgments proved comparable to those of humans working from a rule book. He pointed out that while human readings of ground truth are prone to interpretation, automated reasoning is far less susceptible to that problem.

“It was incredible to observe people with logical backgrounds debating truth on internal communication channels, only to find that the tool was right within a few messages,” he said. 

AWS added new features to Automated Reasoning Checks for the general availability release, including:

  • Support for large documents up to 80k tokens or 100 pages 
  • Simplified policy validation with saved tests for repeated runs
  • Automated scenario generation from pre-saved definitions
  • Natural language suggestions for policy feedback
  • Customizable validation settings

Cook explained that Automated Reasoning Checks confirm an AI system’s output is true by proving the model didn’t hallucinate a solution or response, a guarantee that reassures regulators and regulated enterprises wary of generative AI’s non-deterministic responses.
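
AWS has not published the internals of Automated Reasoning Checks, but the underlying idea, encoding a policy as formal logic and letting a solver prove whether a claim is consistent with it, can be sketched with the open-source Z3 SMT solver (pip install z3-solver). The rule and claim below are invented for illustration.

    from z3 import Solver, Real, Implies, sat

    tenure_years = Real("tenure_years")
    vacation_days = Real("vacation_days")

    solver = Solver()
    # Invented policy rule: more than 5 years of tenure guarantees
    # at least 20 vacation days.
    solver.add(Implies(tenure_years > 5, vacation_days >= 20))

    # Claim extracted from a model's answer:
    # "a 10-year employee gets 15 vacation days."
    solver.add(tenure_years == 10, vacation_days == 15)

    # If the rule plus the claim is unsatisfiable, the solver has proved,
    # not guessed, that the answer violates the policy.
    verdict = solver.check()
    print("consistent with policy" if verdict == sat else "provably violates policy")

Unlike asking a second model to judge the first, a solver’s verdict comes with a mathematical proof, which is the property that appeals to regulators.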

Neurosymbolic AI and proving truth

Cook emphasized the role of Automated Reasoning Checks in affirming neurosymbolic AI concepts. 

Neurosymbolic AI combines the neural networks that underpin language models with symbolic AI’s structured logic. Neural networks identify patterns in data, while symbolic AI applies explicit rules and logic. Although neural networks and deep learning are foundational to today’s models, their pattern-based responses are prone to hallucination, a concern for enterprises. Symbolic AI, for its part, requires rules to be written by hand, which limits its flexibility.
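
As a toy illustration of that split, the hypothetical sketch below mocks the neural half with a stub (in practice, an LLM call) and implements the symbolic half as an explicit, hand-written rule; every name in it is invented.

    def neural_propose(question: str) -> dict:
        """Stand-in for the neural half: fluent but unverified output."""
        return {"tenure_years": 10, "vacation_days": 15}

    def symbolic_check(answer: dict) -> bool:
        """The symbolic half: an explicit rule, applied deterministically."""
        if answer["tenure_years"] > 5:
            return answer["vacation_days"] >= 20
        return True

    proposal = neural_propose("How many vacation days after 10 years?")
    print("accepted" if symbolic_check(proposal) else "rejected: violates policy")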

Influential AI figures, like Gary Marcus, assert that neurosymbolic AI is crucial for artificial general intelligence. 

Cook and AWS are keen to introduce neurosymbolic AI concepts to enterprises. In a podcast, VentureBeat’s Matt Marshall discussed AWS’s focus on automated reasoning checks and its use of math and logic to reduce generative AI hallucinations.

Currently, few companies offer productized neurosymbolic AI.
