Content provided by Jacob Andra. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Jacob Andra or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ko.player.fm/legal.
Neurosymbolic AI and the Shortcomings of LLMs: Jacob Andra and Stephen Karafiath

35:04
Send us a text

Large language models have captured headlines, but they represent only a fraction of what AI can accomplish. Talbot West co-founders Jacob Andra and Stephen Karafiath explore the fundamental limitations of LLMs and why neurosymbolic AI offers a more robust path forward for enterprise applications.
LLMs sometimes display remarkable contextual awareness, like when ChatGPT proactively noticed specific tile flooring in a photo's background and offered unsolicited cleaning advice. These moments suggest genuine intelligence. But as Jacob and Stephen explain, push these systems harder and the cracks appear.
The hosts examine specific failure modes that emerge when deploying LLMs at scale. Jacob documents persistent formatting errors where models swing between extremes—overusing lists, then refusing to use them at all, even when instructions explicitly define appropriate use cases. These aren't random glitches. They reveal systematic overcorrection behaviors where LLMs bounce off guardrails rather than operating within defined bounds.
More troubling are the logical inconsistencies. When working with large corpora of information, LLMs demonstrate what Jacob calls cognitive fallacies: errors that mirror human reasoning failures but stem from different causes. The models cannot maintain complex instructions across extended tasks. They hallucinate citations, fabricate data, and contradict themselves when context windows stretch too far. Even the latest reasoning models cannot eliminate certain habits, like the infamous em-dash overuse, no matter how explicitly you prompt against it.
Stephen introduces the deny-affirm construction as another persistent pattern: "It's not X, it's Y" formulations that plague AI-generated content. Tell the model to avoid this construction and watch it appear anyway, sometimes in the very next paragraph. These aren't bugs to be patched. They're symptoms of fundamental architectural limitations.
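Because the deny-affirm pattern is so mechanical, it is also easy to flag in post-processing. The following is a minimal sketch, not anything discussed in the episode: a hypothetical regex-based detector for "It's not X, it's Y" constructions in generated text. The pattern and function names are illustrative assumptions.

```python
import re

# Hypothetical detector for the deny-affirm construction ("It's not X, it's Y").
# The regex covers common pronoun openers; real usage would need a broader pattern.
DENY_AFFIRM = re.compile(
    r"\b(?:it|this|that)\s*(?:'s| is)\s+not\s+(?:just\s+|merely\s+|about\s+)?"
    r"[^.;,]+[,;]\s*(?:it|this|that)\s*(?:'s| is)\b",
    re.IGNORECASE,
)

def flag_deny_affirm(text: str) -> list[str]:
    """Return each matched span so a reviewer can inspect it in context."""
    return [m.group(0) for m in DENY_AFFIRM.finditer(text)]

sample = "It's not a bug, it's a symptom of the architecture."
print(flag_deny_affirm(sample))  # ["It's not a bug, it's"]
```

A detector like this can only flag the symptom after the fact; as the hosts note, the construction keeps reappearing because it reflects how the model writes, not a correctable bug.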
The solution lies in neurosymbolic AI, which combines neural networks with symbolic reasoning systems. Jacob and Stephen use an extended biological analogy: LLMs are like organisms without skeletons. A paramecium works fine at microscopic scale, but try to build something elephant-sized from the same squishy architecture and it collapses under its own weight. The skeleton—knowledge graphs, structured data, formal logic—provides the rigid structure necessary for complex reasoning at scale.
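To make the skeleton analogy concrete, here is a toy illustration of the general neurosymbolic pattern: a neural component proposes, and a symbolic layer verifies against structured knowledge before anything is accepted. All names and the fact store are hypothetical assumptions for illustration; this is not the CHAI architecture or any method described in the episode.

```python
# Toy neurosymbolic gate: an LLM-style component proposes a claim, and a
# symbolic "skeleton" (a tiny fact store) verifies it before acceptance.

KNOWLEDGE_GRAPH = {
    ("paramecium", "has_skeleton"): False,
    ("elephant", "has_skeleton"): True,
}

def neural_propose(question: str) -> tuple[str, str, bool]:
    # Stand-in for a model call; here it returns a confident hallucination.
    return ("elephant", "has_skeleton", False)

def symbolic_verify(claim: tuple[str, str, bool]) -> bool:
    # Accept a claim only if the fact store contains a matching entry.
    subject, predicate, value = claim
    known = KNOWLEDGE_GRAPH.get((subject, predicate))
    return known is not None and known == value

claim = neural_propose("Do elephants have skeletons?")
if symbolic_verify(claim):
    print("accepted:", claim)
else:
    print("rejected by the symbolic layer:", claim)  # the error is caught
```

The design point is the division of labor: the neural side stays flexible and generative, while the symbolic side supplies the rigid, checkable structure that keeps reasoning from collapsing at scale.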
Learn more about neurosymbolic approaches: https://talbotwest.com/ai-insights/what-is-neurosymbolic-ai
About the hosts:
Jacob Andra is CEO of Talbot West and serves on the board of 47G, a Utah-based public-private aerospace and defense consortium. He pushes the limits of what AI can accomplish in high-stakes use cases and publishes extensively on AI, enterprise transformation, and policy, covering topics including explainability, responsible AI, and systems integration.
Stephen Karafiath is co-founder of Talbot West, where he architects and deploys AI solutions that bridge the gap between theoretical capabilities and practical business outcomes. His work focuses on identifying the specific failure modes of AI systems and developing robust approaches to enterprise implementation.
About Talbot West:
Talbot West delivers Fortune 500-level AI consulting and implementation to midmarket and enterprise organizations. The company specializes in practical AI deployment through its proprietary APEX (AI Prioritization and Execution) framework and Cognitive Hive AI (CHAI) architecture, which emphasizes modular, explainable AI systems over monolithic black-box models.
Visit talbotwest.com to learn how we help organizations cut through AI hype and implement practical AI solutions.
