Episode 59: Patterns and Anti-Patterns For Building with AI
Manage episode 508113793 series 3317544
John Berryman (Arcturus Labs; early GitHub Copilot engineer; co-author of Relevant Search and Prompt Engineering for LLMs) has spent years figuring out what makes AI applications actually work in production. In this episode, he shares the “seven deadly sins” of LLM development — and the practical fixes that keep projects from stalling.
From context management to retrieval debugging, John explains the patterns he’s seen succeed, the mistakes to avoid, and why it helps to think of an LLM as an “AI intern” rather than an all-knowing oracle.
We talk through:
- Why chasing perfect accuracy is a dead end
- How to use agents without losing control
- Context engineering: fitting the right information in the window
- Starting simple instead of over-orchestrating
- Separating retrieval from generation in RAG
- Splitting complex extractions into smaller checks
- Knowing when frameworks help — and when they slow you down
A practical guide to avoiding the common traps of LLM development and building systems that actually hold up in production.
LINKS:
- Context Engineering for AI Agents, a free upcoming lightning lesson from John and Hugo
- The Hidden Simplicity of GenAI Systems, a previous lightning lesson from John and Hugo
- Roaming RAG – RAG without the Vector Database, by John
- Cut the Chit-Chat with Artifacts, by John
- Prompt Engineering for LLMs by John and Albert Ziegler
- Relevant Search by John and Doug Turnbull
- Arcturus Labs
- Watch the podcast on YouTube
- Upcoming Events on Luma
🎓 Learn more:
- Hugo's course (this episode was a guest Q&A from the course): Building LLM Applications for Data Scientists and Software Engineers — https://maven.com/s/course/d56067f338