Content provided by the Future of Life Institute. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by the Future of Life Institute or its podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ko.player.fm/legal.
Roman Yampolskiy on the Uncontrollability, Incomprehensibility, and Unexplainability of AI

1:12:01
Manage episode 287851833 series 1334308
Future of Life Institute에서 제공하는 콘텐츠입니다. 에피소드, 그래픽, 팟캐스트 설명을 포함한 모든 팟캐스트 콘텐츠는 Future of Life Institute 또는 해당 팟캐스트 플랫폼 파트너가 직접 업로드하고 제공합니다. 누군가가 귀하의 허락 없이 귀하의 저작물을 사용하고 있다고 생각되는 경우 여기에 설명된 절차를 따르실 수 있습니다 https://ko.player.fm/legal.
Roman Yampolskiy, Professor of Computer Science at the University of Louisville, joins us to discuss whether we can control, comprehend, and explain AI systems, and how this constrains the project of AI safety.

Topics discussed in this episode include:
- Roman's results on the unexplainability, incomprehensibility, and uncontrollability of AI
- The relationship between AI safety, control, and alignment
- Virtual worlds as a proposal for solving multi-multi alignment
- AI security

You can find the page for this podcast here: https://futureoflife.org/2021/03/19/roman-yampolskiy-on-the-uncontrollability-incomprehensibility-and-unexplainability-of-ai/

You can find FLI's three new policy-focused job postings here: https://futureoflife.org/job-postings/

Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT

Timestamps:
0:00 Intro
2:35 Roman's primary research interests
4:09 How theoretical proofs help AI safety research
6:23 How impossibility results constrain computer science systems
10:18 The inability to tell if arbitrary code is friendly or unfriendly
12:06 Impossibility results clarify what we can do
14:19 Roman's results on unexplainability and incomprehensibility
22:34 Focusing on comprehensibility
26:17 Roman's results on uncontrollability
28:33 Alignment as a subset of safety and control
30:48 The relationship between unexplainability, incomprehensibility, and uncontrollability with each other and with AI alignment
33:40 What does it mean to solve AI safety?
34:19 What do the impossibility results really mean?
37:07 Virtual worlds and AI alignment
49:55 AI security and malevolent agents
53:00 Air gapping, boxing, and other security methods
58:43 Some examples of historical failures of AI systems and what we can learn from them
1:01:20 Clarifying impossibility results
1:06:55 Examples of systems failing and what these demonstrate about AI
1:08:20 Are oracles a valid approach to AI safety?
1:10:30 Roman's final thoughts

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
204 episodes

