
Content provided by CSPI. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by CSPI or its podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described at https://ko.player.fm/legal.

Waiting for the Betterness Explosion | Robin Hanson & Richard Hanania

Duration: 1:42:06
 
Episode 357787426 · Series 2853093

Robin Hanson joins the podcast to talk about the AI debate. He explains his reasons for being skeptical about “foom,” or the idea that there will emerge a sudden superintelligence that will be able to improve itself quickly and potentially destroy humanity in the service of its goals. Among his arguments are:

* We should start with a very low prior on something like this happening, given the history of the world. We already have “superintelligences” in the form of firms, for example, and they only improve slowly and incrementally.

* There are different levels of abstraction with regards to intelligence and knowledge. A machine that can reason very fast may not have the specific knowledge necessary to know how to do important things.

* We may be erring in thinking of intelligence as a general quality, rather than as more domain-specific.

Hanania presents various arguments made by AI doomers, and Hanson responds to each in turn, ultimately giving a less than 1% chance that something like the scenario imagined by Eliezer Yudkowsky and others will come to pass.

He also discusses why he thinks it is a waste of time to worry about the control problem before we know what any supposed superintelligence will even look like. The conversation includes a discussion about why so many smart people seem drawn to AI doomerism, and why you shouldn’t worry all that much about the principal-agent problem in this area.

Listen in podcast form or watch on YouTube. You can also read a transcript of the conversation here.

Links:

* The Hanson-Yudkowsky AI-Foom Debate

* Previous Hanson appearance on CSPI podcast, audio and transcript

* Eric Drexler, Engines of Creation

* Eric Drexler, Nanosystems

* Robin Hanson, “Explain the Sacred”

* Robin Hanson, “We See the Sacred from Afar, to See It the Same.”

* Articles by Robin on AI alignment:

* “Prefer Law to Values” (October 10, 2009)

* “The Betterness Explosion” (June 21, 2011)

* “Foom Debate, Again” (February 8, 2013)

* “How Lumpy AI Services?” (February 14, 2019)

* “Agency Failure AI Apocalypse?” (April 10, 2019)

* “Foom Update” (May 6, 2022)

* “Why Not Wait?” (June 30, 2022)

Get full access to Center for the Study of Partisanship and Ideology at www.cspicenter.com/subscribe


67 episodes
