Content provided by mstraton8112. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by mstraton8112 or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ko.player.fm/legal.

How Adobe Built A Specialized Concierge EP 55

13:15
Manage episode 518602442 series 3658923
The Human Touch: Building Reliable AI Assistants with LLMs in the Enterprise

Generative AI assistants are demonstrating significant potential to enhance productivity, streamline information access, and improve the user experience in enterprise contexts. These systems serve as intuitive, conversational interfaces to enterprise knowledge, leveraging the capabilities of Large Language Models (LLMs). The domain-specific assistant Summit Concierge, for instance, was developed for Adobe Summit to handle a wide range of event-related queries, from session recommendations to venue logistics, aiming to reduce the burden on support staff and provide scalable, real-time access to information.

While LLMs excel at generating fluent, coherent responses, rapidly building a reliable, task-aligned assistant presents several critical challenges. These systems face hurdles such as data sparsity in "cold-start" scenarios and the risk of hallucinations or inaccuracies when handling specific or time-sensitive information. Ensuring the AI consistently produces trustworthy, contextually grounded answers is essential for user trust and adoption.

To address these challenges, the developers adopted a human-in-the-loop development paradigm. This hybrid approach integrates human expertise into data curation, response validation, and quality monitoring, enabling rapid iteration and reliability without extensive pre-collected data. Techniques included prompt engineering, documentation-aware retrieval, and synthetic data augmentation to bootstrap the assistant. For quality assurance, human reviewers continuously validated and refined responses; by using LLM judges to auto-select uncertain cases for review, this process significantly reduced the manual annotation needed during evaluation.
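The episode names documentation-aware retrieval without detailing the pipeline. A minimal, hypothetical sketch of the idea: rank event-documentation snippets by term overlap with the user's query, then ground the LLM prompt in the top match so answers stay tied to source material. The snippet contents, prompt wording, and function names here are invented for illustration, not Adobe's actual implementation.

```python
# Minimal sketch of documentation-aware retrieval (illustrative data).
# Real systems would use embedding similarity; simple term overlap
# is enough to show the grounding pattern.

def tokenize(text: str) -> set[str]:
    """Lowercase terms with trailing punctuation stripped."""
    return {w.strip(".,!?").lower() for w in text.split()}

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k docs sharing the most terms with the query."""
    q = tokenize(query)
    ranked = sorted(docs, key=lambda d: len(q & tokenize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Ground the prompt in retrieved context to curb hallucination."""
    context = "\n".join(retrieve(query, docs))
    return (
        "Answer using ONLY the context below. If the answer is not in "
        "the context, say you don't know.\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

# Hypothetical event-documentation snippets
docs = [
    "Session 'Scaling GenAI' runs Tuesday 10:00 in Hall B.",
    "The keynote takes place Monday 09:00 in the Main Auditorium.",
    "Lunch is served 12:00-13:30 in the Expo Pavilion.",
]
prompt = build_prompt("Where is the keynote taking place?", docs)
```

The instruction to answer only from the provided context is what makes the retrieval step pay off: the model is steered toward grounded answers or an explicit "don't know" rather than a fluent guess.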
The real-world deployment of Summit Concierge demonstrated the practical benefits of combining scalable LLM capabilities with lightweight human oversight. This strategy offers a viable path to reliable, domain-specific AI assistants at scale, confirming that agile, feedback-driven development can deliver robust AI solutions even under strict timelines.
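The summary describes LLM judges auto-selecting uncertain cases so human reviewers only see a fraction of the traffic. A hedged sketch of that triage pattern, with the judge stubbed out as a precomputed groundedness score (in practice it would be an LLM call); the threshold, field names, and example drafts are assumptions for illustration.

```python
# Sketch of judge-based triage: only drafts the judge scores below a
# confidence threshold are queued for human review, shrinking the
# manual-annotation workload during evaluation.
from dataclasses import dataclass

@dataclass
class Draft:
    question: str
    answer: str
    judge_score: float  # 0.0 (ungrounded) .. 1.0 (fully grounded)

def triage(drafts: list[Draft],
           threshold: float = 0.7) -> tuple[list[Draft], list[Draft]]:
    """Split drafts into auto-approved and needs-human-review queues."""
    approved = [d for d in drafts if d.judge_score >= threshold]
    review = [d for d in drafts if d.judge_score < threshold]
    return approved, review

# Hypothetical drafts with judge scores already attached
drafts = [
    Draft("When does registration open?", "Registration opens at 08:00.", 0.92),
    Draft("Is there a vegan lunch option?", "Yes, at every station.", 0.41),
]
approved, review = triage(drafts)
```

The threshold is the knob that trades review load against risk: raising it routes more borderline answers to humans, lowering it leans harder on the judge.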

57 episodes

