Agentic AI and Neurosymbolic AI: Jacob Andra Interviews Dr. Alexandra Pasi of Lucidity Sciences
Two major ideas are shaping the next era of artificial intelligence: agentic AI and neurosymbolic AI. Talbot West CEO Jacob Andra and Lucidity Sciences CEO Dr. Alexandra Pasi bring complementary perspectives to both.
They unpack the confusion surrounding the term “agentic.” The most common misuses fall into three categories.
1. Digital employee. This use assumes an AI can fully replace a human role. In practice, jobs consist of overlapping tasks that depend on judgment, context, and social understanding. Substituting a human one-to-one with an AI system oversimplifies work and introduces risk.
2. AI interacting with humans. Many products describe themselves as agentic simply because they interact with people. Yet a chatbot or outbound assistant is not necessarily intelligent or autonomous. Interface does not equal agency.
3. Autonomous executor. Another common assumption is that any AI performing tasks independently qualifies as agentic. Yet autonomy alone is not a marker of intelligence: plenty of non-AI systems, such as scripted automations, execute tasks without human supervision.
Jacob proposes a definition that is specific enough for real-world planning: an AI function able to complete a task as part of a larger ensemble or capability. This definition treats agentic systems as modular and composable. Each agent performs a defined function within a coordinated network of systems. This approach moves the conversation from vague marketing language to measurable performance outcomes.
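A minimal sketch of that modular framing (all names and tasks invented for illustration): each agent handles one defined function, and a simple coordinator composes them into a larger capability.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical sketch (not from the episode): each "agent" is a narrowly
# scoped function with a defined input/output contract, coordinated by a
# simple ensemble runner.

@dataclass
class Agent:
    name: str
    run: Callable[[dict], dict]  # reads shared state, returns its updates

def extract_invoice_fields(state: dict) -> dict:
    # Stand-in for a model-backed extraction step over state["raw_text"].
    return {"vendor": "Acme Co", "amount": 1250.00}

def validate_amount(state: dict) -> dict:
    # Deterministic check layered on top of the extraction agent's output.
    return {"amount_ok": 0 < state.get("amount", -1) < 10_000}

def run_ensemble(agents: List[Agent], state: dict) -> dict:
    # The "larger ensemble": each agent owns one defined task; together they
    # form a measurable pipeline rather than a monolithic "digital employee".
    for agent in agents:
        state.update(agent.run(state))
    return state

pipeline = [Agent("extractor", extract_invoice_fields),
            Agent("validator", validate_amount)]
print(run_ensemble(pipeline, {"raw_text": "Invoice from Acme Co for $1,250"}))
```

Because each agent exposes a defined function, its performance can be measured and swapped independently, which is the point of the composable definition.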
From there, the discussion turns to large language models. Both Jacob and Alexandra acknowledge their extraordinary power but also their limitations. LLMs have made AI accessible to everyone through natural language, allowing rapid knowledge retrieval, summarization, and idea generation. At the same time, language itself is a constraint. Human language was not built for exact quantitative reasoning or precise logical relationships. LLMs lose reliability when they are asked to maintain long context or handle tightly coupled data. Both agree that these models should be viewed primarily as interface layers that help people and organizations communicate with structured information systems.
The conversation then transitions to neurosymbolic AI, which combines neural networks and symbolic reasoning into a single architecture. The neural components are probabilistic and pattern-oriented. They generalize and infer. The symbolic components operate on defined rules and logical constraints. They ensure structure, coherence, and traceability. Combined, they yield an intelligent system that is both adaptive and verifiable.
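A toy sketch of the hybrid pattern, assuming a stubbed model in place of a real neural component (all names and thresholds invented): the probabilistic part proposes, the symbolic part constrains and explains.

```python
# Hypothetical sketch: a probabilistic component proposes an answer, and a
# symbolic layer of explicit rules accepts or rejects it with a traceable reason.

def neural_propose(document: str) -> dict:
    # Stand-in for a learned model: pattern-based, probabilistic output.
    return {"category": "capital_expense", "amount": 48_000.0, "confidence": 0.82}

RULES = [
    ("amount must be positive",
     lambda p: p["amount"] > 0),
    ("capital expenses must be at least 5,000",
     lambda p: p["category"] != "capital_expense" or p["amount"] >= 5_000),
    ("confidence must clear the review threshold",
     lambda p: p["confidence"] >= 0.7),
]

def symbolic_check(proposal: dict):
    # Deterministic constraints: every rejection names the rule it violated,
    # which is what makes the hybrid system traceable and auditable.
    violations = [name for name, rule in RULES if not rule(proposal)]
    return (len(violations) == 0, violations)

proposal = neural_propose("...unstructured filing text...")
accepted, violations = symbolic_check(proposal)
print("accepted" if accepted else f"rejected: {violations}")
```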
Dr. Pasi explains how this concept has deep roots in earlier AI research. In some early mathematics experiments, language models were paired with formal systems like Lean to verify every logical step. In modern enterprise applications, this same hybrid pattern provides a way to reconcile innovation with control. It creates a bridge between the flexibility of learning models and the accountability required by governance and compliance.
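For a sense of what formal verification adds, here is a minimal Lean example (illustrative only, not taken from those experiments): a proposed proof step either type-checks or is rejected by the kernel, so a model-suggested step cannot slip through unverified.

```lean
-- Illustrative only: a statement and a one-step proof. If a language model
-- proposed a step that did not actually follow, Lean's checker would reject
-- it rather than let the error pass silently.
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b
```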
Jacob shares two Talbot West use cases that illustrate these ideas. The first involves enterprise evaluation and roadmapping. Many organizations have complex, organically grown processes and data flows that are difficult to map or optimize.
The second example is BizForesight, a platform to help business owners understand and improve company value. It combines document ingestion, interviews, and machine learning within a defined symbolic framework. The symbolic layer enforces valuation logic and methodological integrity, while the neural layer interprets unstructured data and provides adaptive recommendations.
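A rough sketch of that split, with an invented earnings-multiple rule standing in for the actual valuation logic (figures and ranges are placeholders, not BizForesight's method):

```python
# Hypothetical illustration: a learned component extracts figures from
# unstructured documents, while a fixed earnings-multiple rule governs how
# those figures may be combined.

def neural_extract_financials(documents: list) -> dict:
    # Stand-in for ML-driven ingestion of filings, statements, and interviews.
    return {"annual_earnings": 400_000.0, "industry_multiple": 3.5}

def symbolic_valuation(inputs: dict) -> float:
    # The symbolic layer enforces methodological integrity: required fields,
    # sane ranges, and a fixed formula, regardless of what the model outputs.
    earnings = inputs["annual_earnings"]
    multiple = inputs["industry_multiple"]
    if earnings <= 0:
        raise ValueError("earnings must be positive")
    if not 1.0 <= multiple <= 10.0:
        raise ValueError("industry multiple outside the accepted range")
    return earnings * multiple

estimate = symbolic_valuation(neural_extract_financials(["...uploaded docs..."]))
print(f"Estimated value: ${estimate:,.0f}")
```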