This content is provided by EPIIPLUS 1 Ltd / Azeem Azhar. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by EPIIPLUS 1 Ltd / Azeem Azhar or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined at https://ko.player.fm/legal.
Are we ready for human-level AI by 2030? Anthropic's co-founder answers

52:06

Anthropic's co-founder and chief scientist Jared Kaplan discusses AI's rapid evolution, the shorter-than-expected timeline to human-level AI, and how Claude's "thinking time" feature represents a new frontier in AI reasoning capabilities.

In this episode you'll hear:

  • Why Jared believes human-level AI is now likely to arrive in 2-3 years instead of by 2030
  • How AI models are developing the ability to handle increasingly complex tasks that would take humans hours or days
  • The importance of constitutional AI and interpretability research as essential guardrails for increasingly powerful systems

Our new show

This was originally recorded for "Friday with Azeem Azhar", a new show that takes place every Friday at 9am PT and 12pm ET on Exponential View. You can tune in through my Substack linked below. The format is experimental and we'd love your feedback, so feel free to comment or email your thoughts to our team at [email protected].

Timestamps:

(00:00) Episode trailer

(01:27) Jared's updated prediction for reaching human-level intelligence

(08:12) What will limit scaling laws?

(11:13) How long will we wait between model generations?

(16:27) Why test-time scaling is a big deal

(21:59) There’s no reason why DeepSeek can’t be competitive algorithmically

(25:31) Has Anthropic changed their approach to safety vs speed?

(30:08) Managing the paradoxes of AI progress

(32:21) Can interpretability and monitoring really keep AI safe?

(39:43) Are model incentives misaligned with public interests?

(42:36) How should we prepare for electricity-level impact?

(51:15) What Jared is most excited about in the next 12 months

Jared's links:

Azeem's links:

Produced by supermix.io


Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

