Content provided by Liv Boeree. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Liv Boeree or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ko.player.fm/legal

#17 - Dan Hendrycks - Are AI worries overblown?

1:40:50
 

The rate of AI progress is accelerating, so how can we minimize the risks of this incredible technology, while maximizing the rewards?

Today I am speaking to leading AI researcher Dan Hendrycks — Dan is the founder of the Center for AI Safety and lead advisor to Elon Musk's X.AI. He was also the architect behind the "Mitigating Risks" letter signed by Demis Hassabis, Sam Altman, Bill Gates, Yoshua Bengio, and many others.

In this conversation we discuss everything from immediate issues like deepfakes to upcoming risks like malicious use, centralisation of power, regulatory capture, and more. In other words, how do we ensure AI ends up a win/win for humanity instead of a lose/lose?

Chapters

00:00:00 - Intro

00:02:14 - Are current laws sufficient?

00:09:41 - Types of AI Risk

00:23:30 - Arms Races

00:39:10 - What happens inside an AI?

00:46:39 - Rogue AI

00:52:22 - Sentient AI

01:07:36 - Risks from Centralization

01:14:45 - Open Source

01:23:02 - AI speeding up systemic risks

01:29:54 - Synthetic Data & Simulations

01:36:52 - What Dan is excited about in AI

Links

♾️ An Overview of Catastrophic Risk Paper

https://arxiv.org/pdf/2306.12001.pdf

♾️ Center for AI Safety

https://www.safe.ai/ai-risk

♾️ Representation Engineering

https://www.ai-transparency.org/

♾️ Liv's TED talk on AI & Moloch

https://www.youtube.com/watch?v=WX_vN1QYgmE

♾️ Norbert Wiener

https://en.wikipedia.org/wiki/Norbert_Wiener

♾️ Reinforcement Learning Textbook

https://inst.eecs.berkeley.edu/~cs188/sp20/assets/files/SuttonBartoIPRLBook2ndEd.pdf

♾️ Richard Posner - Economics Engine

https://plato.stanford.edu/entries/legal-econanalysis/

♾️ More Than a Toy: Random Matrix Models Predict How Real-World Neural Representations Generalize

https://arxiv.org/abs/2203.06176

The Win-Win Podcast:

Poker champion Liv Boeree takes to the interview chair to tease apart the complexities of one of the most fundamental parts of human nature: competition. Liv is joined by top philosophers, gamers, artists, technologists, CEOs, scientists, athletes and more to understand how competition manifests in their world, and how to change seemingly win-lose games into Win-Wins.

Watch the previous episode with Boyan Slat of the Ocean Cleanup here:

https://youtu.be/QEYbLN-LC5k

Credits

♾️ Hosted by Liv Boeree

♾️ Produced & Edited by Raymond Wei

♾️ Audio Mix by Keir Schmidt


35 episodes
