
Content provided by Massive Studios. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Massive Studios or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ko.player.fm/legal.

Validation and Guardrails for LLMs

The Cloudcast · 27:27

Shreya Rajpal (@ShreyaR, CEO @guardrails_ai) talks about the need to provide guardrails and validation for LLMs, along with common use cases and Guardrails AI's new Hub.
SHOW: 797
CLOUD NEWS OF THE WEEK - http://bit.ly/cloudcast-cnotw
NEW TO CLOUD? CHECK OUT OUR OTHER PODCAST - "CLOUDCAST BASICS"
SHOW SPONSORS:

SHOW NOTES:

Topic 1 - Welcome to the show. Before we dive into today’s discussion, tell us a little bit about your background.
Topic 2 - Our topic today is the validation and accuracy of AI with guardrails. Let’s start with the why… Why do we need guardrails for LLMs today?
Topic 3 - Where and how do you control (maybe validate is a better word) outputs from LLMs today? What are your thoughts on the best way to validate outputs?
Topic 4 - Will this workflow work with both closed-source (ChatGPT) and open-source (Llama 2) models? Would this process apply to training/fine-tuning, or more to inference? Would this potentially replace the humans in the loop that we see today, or is this completely different?
Topic 5 - What are some of the most common early use cases and practical examples? PII detection comes to mind, violation of ethics or laws, off-topic/out of scope, or simply just something the model isn’t designed to provide?
Topic 6 - What happens if it fails? Does this create a loop scenario to try again?
Topic 7 - Let's talk about Guardrails AI specifically. Today you offer an open-source marketplace of Validators in the Guardrails Hub, correct? As we mentioned earlier, almost everyone's implementation and the guardrails they want to implement will be different. Is the best way to think about this as building blocks, using validators that are pieced together? Tell everyone a little bit about the offering. (A sketch of this composition pattern follows the topic list below.)
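
For listeners who want a concrete picture of the "building blocks" idea raised in Topics 6 and 7, here is a minimal sketch using the open-source Guardrails AI Python package. It assumes the ToxicLanguage and DetectPII validators have already been installed from the Guardrails Hub; the validator names, parameters, and entity lists shown are illustrative and may differ in the current Hub catalog.

```python
# Minimal sketch: composing Guardrails Hub validators into a single Guard.
# Assumes the validators were installed first, e.g.:
#   guardrails hub install hub://guardrails/toxic_language
#   guardrails hub install hub://guardrails/detect_pii
from guardrails import Guard
from guardrails.hub import DetectPII, ToxicLanguage

# Validators are pieced together like building blocks (Topic 7).
# Each validator's on_fail policy decides what happens when a check
# fails (Topic 6): "exception" stops the pipeline, "fix" repairs or
# redacts the output locally, and "reask" loops back to the LLM.
guard = Guard().use_many(
    ToxicLanguage(threshold=0.5, validation_method="sentence", on_fail="exception"),
    DetectPII(pii_entities=["EMAIL_ADDRESS", "PHONE_NUMBER"], on_fail="fix"),
)

# Validate an LLM output before it reaches the user.
result = guard.validate("Contact me at jane@example.com for details.")
print(result.validation_passed, result.validated_output)
```

The on_fail policy is also where the retry loop asked about in Topic 6 would live: an on_fail="reask" setting sends the validation failure back to the model and asks it to correct itself, rather than raising an error or fixing the output locally.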
FEEDBACK?


Chapters

1. Validation and Guardrails for LLMs (00:00:00)

2. [Ad] Out-of-the-box insights from digital leaders (00:18:14)

3. (Cont.) Validation and Guardrails for LLMs (00:18:52)

912 episodes
