Content provided by Dr. Andrew Clark and Sid Mangalik. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Dr. Andrew Clark and Sid Mangalik or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ko.player.fm/legal.

New paths in AI: Rethinking LLMs and model risk strategies

Duration: 39:51

Are businesses ready for large language models as a path to AI? In this episode, the hosts reflect on the past year of what has changed and what hasn’t changed in the world of LLMs. Join us as we debunk the latest myths and emphasize the importance of robust risk management in AI integration. The good news is that many decisions about adoption have forced businesses to discuss their future and impact in the face of emerging technology. You won't want to miss this discussion.

  • Intro and news: The veto of California's AI Safety Bill (00:00:03)
    • Can state-specific AI regulations really protect consumers, or do they risk stifling innovation? (Gov. Newsom's response)
    • The veto highlights the critical need for risk-based regulations that don't rely solely on the size and cost of language models
    • Arguments to be made for a cohesive national framework that ensures consistent AI regulation across the United States
  • Are businesses ready to embrace large language models, or are they underestimating the challenges? (00:08:35)
    • The myth that acquiring a foundational model is a quick fix for productivity woes
    • The essential role of robust risk management strategies, especially in sensitive sectors handling personal data
    • Review of model cards, OpenAI's system cards, and the importance of thorough testing, validation, and stricter regulations to prevent a false sense of security
    • Transparency alone is not enough; objective assessments are crucial for genuine progress in AI integration
  • From hallucinations in language models to ethical energy use, we tackle some of the most pressing problems in AI today (00:16:29)
    • Reinforcement learning with annotators and the controversial use of other models for review
    • Yann LeCun's energy-based models and retrieval-augmented generation (RAG) offer intriguing alternatives that could reshape modeling approaches
  • The ethics of advancing AI technologies: considering the parallels with past monumental achievements and the responsible allocation of resources (00:26:49)
    • There is good news about developments and lessons learned from LLMs, but there is still a long way to go.
    • Our original prediction for LLMs in episode 2 still rings true: “Reasonable expectations of LLMs: Where truth matters and risk tolerance is low, LLMs will not be a good fit”
    • With the increased hype and awareness around LLMs came varying levels of interest in how all model types and their impacts are governed in a business.

What did you think? Let us know.

Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:

  • LinkedIn - Episode summaries, shares of cited articles, and more.
  • YouTube - Was it something that we said? Good. Share your favorite quotes.
  • Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.

Chapters

1. New paths in AI: Rethinking LLMs and model risk strategies (00:00:00)

2. News: Veto of CA AI Safety Bill (00:00:03)

3. Challenges of LLMs and generative AI - One year later (00:08:31)

4. Navigating the Future of AI (00:16:29)

5. Ethics and Innovation in AI (00:26:49)

6. Addressing Responsible AI Concerns (00:37:10)

24 episodes
