Content provided by Machine Learning Street Talk (MLST). All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Machine Learning Street Talk (MLST) or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ko.player.fm/legal.

Nora Belrose - AI Development, Safety, and Meaning

2:29:50
 

Manage episode 450673952 series 2803422

Nora Belrose, Head of Interpretability Research at EleutherAI, discusses critical challenges in AI safety and development. The conversation begins with her technical work on concept erasure in neural networks through LEACE (LEAst-squares Concept Erasure), while highlighting how neural networks' progression from simple to complex learning patterns could have important implications for AI safety.
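The core idea behind least-squares concept erasure can be sketched in a few lines: remove the linear component of a representation along which a concept is decodable, so that no linear probe can recover the concept better than chance. The following is a simplified illustration of that idea, not the published LEACE algorithm (which uses a whitening-based oracle transformation); the function name and data setup are hypothetical.

```python
import numpy as np

def erase_concept(X, z):
    """Simplified linear concept erasure (illustration only, not LEACE).

    Projects each (centered) feature vector onto the subspace orthogonal
    to the cross-covariance direction between features X and binary
    concept labels z. Afterwards, the least-squares linear predictor of
    z from the returned features is constant (chance-level).
    """
    Xc = X - X.mean(axis=0)          # center features
    zc = z - z.mean()                # center labels
    w = Xc.T @ zc                    # cross-covariance direction
    norm = np.linalg.norm(w)
    if norm == 0:                    # concept already linearly absent
        return Xc
    w /= norm
    # remove the component of every row along w
    return Xc - np.outer(Xc @ w, w)
```

After erasure, the cross-covariance between the features and the concept labels is exactly zero, which is what guarantees a linear probe carries no signal. The full LEACE result is stronger and more surgical (it provably defeats all linear classifiers while minimally perturbing the representation), but the projection above captures the geometric intuition discussed in the episode.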

Many fear that advanced AI will pose an existential threat -- pursuing its own dangerous goals once it's powerful enough. But Belrose challenges this popular doomsday scenario with a fascinating breakdown of why it doesn't add up.

Belrose also provides a detailed critique of current AI alignment approaches, particularly examining "counting arguments" and their limitations when applied to AI safety. She argues that the Principle of Indifference may be insufficient for addressing existential risks from advanced AI systems. The discussion explores how emergent properties in complex AI systems could lead to unpredictable and potentially dangerous behaviors that simple reductionist approaches fail to capture.

The conversation concludes by exploring broader philosophical territory, where Belrose discusses her growing interest in Buddhism's potential relevance to a post-automation future. She connects concepts of moral anti-realism with Buddhist ideas about emptiness and non-attachment, suggesting these frameworks might help humans find meaning in a world where AI handles most practical tasks. Rather than viewing this automated future with alarm, she proposes that Zen Buddhism's emphasis on spontaneity and presence might complement a society freed from traditional labor.

SPONSOR MESSAGES:

CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments.

https://centml.ai/pricing/

Tufa AI Labs is a brand-new research lab in Zurich founded by Benjamin Crouzier, focused on ARC and AGI. They recently acquired MindsAI, the current winners of the ARC challenge. Interested in working on ARC or getting involved in their events? Go to https://tufalabs.ai/

Nora Belrose:

https://norabelrose.com/

https://scholar.google.com/citations?user=p_oBc64AAAAJ&hl=en

https://x.com/norabelrose

SHOWNOTES:

https://www.dropbox.com/scl/fi/38fhsv2zh8gnubtjaoq4a/NORA_FINAL.pdf?rlkey=0e5r8rd261821g1em4dgv0k70&st=t5c9ckfb&dl=0

TOC:

1. Neural Network Foundations

[00:00:00] 1.1 Philosophical Foundations and Neural Network Simplicity Bias

[00:02:20] 1.2 LEACE and Concept Erasure Fundamentals

[00:13:16] 1.3 LISA Technical Implementation and Applications

[00:18:50] 1.4 Practical Implementation Challenges and Data Requirements

[00:22:13] 1.5 Performance Impact and Limitations of Concept Erasure

2. Machine Learning Theory

[00:32:23] 2.1 Neural Network Learning Progression and Simplicity Bias

[00:37:10] 2.2 Optimal Transport Theory and Image Statistics Manipulation

[00:43:05] 2.3 Grokking Phenomena and Training Dynamics

[00:44:50] 2.4 Texture vs Shape Bias in Computer Vision Models

[00:45:15] 2.5 CNN Architecture and Shape Recognition Limitations

3. AI Systems and Value Learning

[00:47:10] 3.1 Meaning, Value, and Consciousness in AI Systems

[00:53:06] 3.2 Global Connectivity vs Local Culture Preservation

[00:58:18] 3.3 AI Capabilities and Future Development Trajectory

4. Consciousness Theory

[01:03:03] 4.1 4E Cognition and Extended Mind Theory

[01:09:40] 4.2 Thompson's Views on Consciousness and Simulation

[01:12:46] 4.3 Phenomenology and Consciousness Theory

[01:15:43] 4.4 Critique of Illusionism and Embodied Experience

[01:23:16] 4.5 AI Alignment and Counting Arguments Debate

(TRUNCATED, TOC embedded in MP3 file with more information)


233 episodes
