
Content provided by Daniel Filan. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Daniel Filan or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ko.player.fm/legal.

10 - AI's Future and Impacts with Katja Grace

2:02:58

Manage episode 298221323 series 2844728

When trying to ensure that AI does not cause an existential catastrophe, it's likely important to understand how AI will develop in the future, and why exactly it might or might not cause such a catastrophe. In this episode, I interview Katja Grace, researcher at AI Impacts, who has done work surveying AI researchers about when they expect superhuman AI to be reached, collecting data about how rapidly AI tends to progress, and thinking about the weak points in arguments that AI could be catastrophic for humanity.

Topics we discuss:

  • 00:00:34 - AI Impacts and its research
  • 00:08:59 - How to forecast the future of AI
  • 00:13:33 - Results of surveying AI researchers
  • 00:30:41 - Work related to forecasting AI takeoff speeds
    • 00:31:11 - How long it takes AI to cross the human skill range
    • 00:42:47 - How often technologies have discontinuous progress
    • 00:50:06 - Arguments for and against fast takeoff of AI
  • 01:04:00 - Coherence arguments
  • 01:12:15 - Arguments that AI might cause existential catastrophe, and counter-arguments
    • 01:13:58 - The size of the super-human range of intelligence
    • 01:17:22 - The dangers of agentic AI
    • 01:25:45 - The difficulty of human-compatible goals
    • 01:33:54 - The possibility of AI destroying everything
  • 01:49:42 - The future of AI Impacts
  • 01:52:17 - AI Impacts vs academia
  • 02:00:25 - What AI x-risk researchers do wrong
  • 02:01:43 - How to follow Katja's and AI Impacts' work

  • The transcript
  • "When Will AI Exceed Human Performance? Evidence from AI Experts"
  • AI Impacts page of more complete survey results
  • Likelihood of discontinuous progress around the development of AGI
  • Discontinuous progress investigation
  • The range of human intelligence


30 episodes

