AI Alignment as a Solvable Problem | Leopold Aschenbrenner & Richard Hanania

1:02:08

In the popular imagination, the AI alignment debate is between those who say everything is hopeless, and others who tell us there is nothing to worry about.

Leopold Aschenbrenner graduated valedictorian from Columbia in 2021 when he was 19 years old. He is currently a research affiliate at the Global Priorities Institute at Oxford, and previously helped run Future Fund, which works on philanthropy in AI and biosecurity.

He contends that, contrary to popular perceptions, there aren’t that many people working on the alignment issue. Not only that, but he argues that the problem is actually solvable. In this podcast, he discusses what he believes are some of the most promising paths forward. Even if there is only a small probability that AI is dangerous, a small chance of existential risk is something to take seriously.

AI is not all potential downsides. Near the end, the discussion turns to the possibility that it may supercharge a new era of economic growth. Aschenbrenner and Hanania discuss fundamental questions of how well GDP numbers still capture what we want to measure, the possibility that regulation strangles AI to death, and whether the changes of the coming decades will be on the scale of the internet or something more significant.

Listen in podcast form here, or watch on YouTube.

Links:

* Leopold Aschenbrenner, “Nobody’s on the Ball on AGI Alignment.”

* Collin Burns, Haotian Ye, Dan Klein, and Jacob Steinhardt, “Discovering Latent Knowledge in Language Models Without Supervision.”

* Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov, “Locating and Editing Factual Associations in GPT.”

* Leopold’s Tweets:

* Using GPT-4 to interpret GPT-2.

* What a model says is not necessarily what it’s “thinking” internally.


This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.cspicenter.com