
Sara Hooker - Why US AI Act Compute Thresholds Are Misguided

1:05:41
 
This content is provided by Machine Learning Street Talk (MLST). All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Machine Learning Street Talk (MLST) or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ko.player.fm/legal.

Sara Hooker is VP of Research at Cohere and leader of Cohere for AI. We discuss her recent paper critiquing the use of compute thresholds, measured in FLOPs (floating point operations), as an AI governance strategy.

We explore why this approach, recently adopted in both US and EU AI policies, may be problematic and oversimplified. Sara explains the limitations of using raw computational power as a measure of AI capability or risk, and discusses the complex relationship between compute, data, and model architecture.
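For a rough sense of what these thresholds mean in practice: training compute for dense transformer models is commonly approximated with the 6 × parameters × tokens rule of thumb, and the US Executive Order and EU AI Act draw their lines at 10^26 and 10^25 FLOPs respectively. The sketch below uses hypothetical model configurations, chosen purely to show how such a line slices the space; it is not taken from the paper.

```python
# Illustrative sketch only: the common 6 * N * D approximation of dense
# transformer training compute, compared against the US Executive Order
# (1e26 ops) and EU AI Act (1e25 FLOPs) thresholds. Model configurations
# below are hypothetical.

US_EO_THRESHOLD = 1e26      # reporting threshold in the 2023 US Executive Order
EU_AI_ACT_THRESHOLD = 1e25  # "systemic risk" presumption in the EU AI Act

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute with the 6 * N * D heuristic."""
    return 6 * params * tokens

# Hypothetical models with different parameter and data budgets.
models = {
    "model_A (70B params, 2T tokens)":    training_flops(70e9, 2e12),
    "model_B (400B params, 15T tokens)":  training_flops(400e9, 15e12),
    "model_C (1.8T params, 15T tokens)":  training_flops(1.8e12, 15e12),
}

for name, flops in models.items():
    print(f"{name}: {flops:.2e} FLOPs | "
          f"over EU line: {flops > EU_AI_ACT_THRESHOLD} | "
          f"over US line: {flops > US_EO_THRESHOLD}")
```

The point of the episode's critique is visible even in this toy calculation: the thresholds rank models by a single compute number, which says nothing on its own about the data, architecture, or post-training choices that actually shape capability and risk.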

Equally important, we go into Sara's work on "The AI Language Gap." This research highlights the challenges and inequalities in developing AI systems that work across multiple languages. Sara discusses how current AI models, predominantly trained on English and a handful of high-resource languages, fail to serve the linguistic diversity of our global population. We explore the technical, ethical, and societal implications of this gap, and discuss potential solutions for creating more inclusive and representative AI systems.

We broadly discuss the relationship between language, culture, and AI capabilities, as well as the ethical considerations in AI development and deployment.

YT Version: https://youtu.be/dBZp47999Ko

TOC:

[00:00:00] Intro

[00:02:12] FLOPs paper

[00:26:42] Hardware lottery

[00:30:22] The Language gap

[00:33:25] Safety

[00:38:31] Emergent

[00:41:23] Creativity

[00:43:40] Long tail

[00:44:26] LLMs and society

[00:45:36] Model bias

[00:48:51] Language and capabilities

[00:52:27] Ethical frameworks and RLHF

Sara Hooker

https://www.sarahooker.me/

https://www.linkedin.com/in/sararosehooker/

https://scholar.google.com/citations?user=2xy6h3sAAAAJ&hl=en

https://x.com/sarahookr

Interviewer: Tim Scarfe

Refs

The AI Language gap

https://cohere.com/research/papers/the-AI-language-gap.pdf

On the Limitations of Compute Thresholds as a Governance Strategy

https://arxiv.org/pdf/2407.05694v1

The Multilingual Alignment Prism: Aligning Global and Local Preferences to Reduce Harm

https://arxiv.org/pdf/2406.18682

Cohere Aya

https://cohere.com/research/aya

RLHF Can Speak Many Languages: Unlocking Multilingual Preference Optimization for LLMs

https://arxiv.org/pdf/2407.02552

Back to Basics: Revisiting REINFORCE Style Optimization for Learning from Human Feedback in LLMs

https://arxiv.org/pdf/2402.14740

Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

EU AI Act

https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.pdf

The bitter lesson

http://www.incompleteideas.net/IncIdeas/BitterLesson.html

Neel Nanda interview

https://www.youtube.com/watch?v=_Ygf0GnlwmY

Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet

https://transformer-circuits.pub/2024/scaling-monosemanticity/

Chollet's ARC challenge

https://github.com/fchollet/ARC-AGI

Ryan Greenblatt on ARC

https://www.youtube.com/watch?v=z9j3wB1RRGA

Disclaimer: This is the third video from our Cohere partnership. We were not told what to say in the interview, and nothing was edited out of it.
