
Mike McCormick: AI Acceleration vs. Risks, Funding Global Resilience, AGI Scenarios, U.S. vs. China

1:42:00
 

Join Tommy Shaughnessy as he speaks with Mike McCormick, founder of Halcyon, about the urgent intersection of AI acceleration and safety. Mike shares his path from venture capital to launching a hybrid nonprofit–fund model focused on securing advanced AI systems. They dive into mechanistic interpretability, global competition for AGI, and what a safe superintelligence future could look like. Can we build superintelligence safely? How do we balance innovation with existential risk? And what happens to humanity when AGI arrives?

Halcyon Futures: https://halcyonfutures.org

🎯 Key Highlights

▸ Leaving VC to focus entirely on AI safety and security

▸ Why Halcyon flipped the model: nonprofit first, fund second

▸ Multi-layered “defense in depth” approach to AI biosecurity & cyber risk

▸ The acceleration vs. safety debate — finding middle ground

▸ Good Fire case study: career grants for interpretability research

▸ The 2×2 dilemma — speed vs. slowdown, centralization vs. decentralization

▸ U.S.–China dynamics and fast takeoff scenarios

▸ AI underwriting: how insurance can drive safety standards

▸ Founder-market fit and mission orientation in AI startups

▸ Risk, diffusion, and the uncertain path to AGI

💡 Subscribe for more crypto & infrastructure insights! 🔔

🧠 Follow the Alpha

▸ Mike's Twitter: @MikeMcCormick_

▸ Halcyon's Twitter: @HalcyonFutures

🔗 Connect with Delphi

🌐 Portal: https://delphidigital.io/

🐦 Twitter: https://x.com/delphi_digital

💼 LinkedIn: https://www.linkedin.com/company/delphi-digital/

🎧 Listen on

Spotify: https://open.spotify.com/show/62PR1RigLG2YN5Pelq6UY9?si=18ac7ccf36ab4753&nd=1&dlsi=50105fd66e6c4124

Apple Podcasts: https://podcasts.apple.com/us/podcast/the-delphi-podcast/id1438148082

YouTube: https://www.youtube.com/channel/UC9Yy99ZlQIX9-PdG_xHj43Q

Timestamps

00:00 — Mike’s background & pivot to AI safety

03:00 — The realization: AGI could change everything

05:00 — Why VC wasn’t enough to solve the problem

06:00 — Halcyon’s hybrid model and early mission

08:00 — AI security concerns: misuse, bio, and control

12:00 — Defense in depth: pre-training → deployment

15:00 — The creativity vs. restriction trade-off

17:30 — Pause AI vs. Build Baby Build

20:00 — Speed vs. centralization: the 2×2 framework

24:00 — Good Fire: career grants & interpretability

27:00 — Writing to neurons: alignment and insight

30:00 — How insurance markets can enforce safety

36:00 — Mission-driven founders & conviction filters

44:00 — Geopolitical race: U.S., China, and compute

50:00 — Diffusion limits, adoption, and energy costs

57:00 — Mass unemployment and meaning after AGI

01:05:00 — What “winning” AGI means for humanity

01:12:00 — Critical thinking, sycophantic AI, and engagement traps

01:20:00 — UBI, adaptation, and new work paradigms

01:30:00 — Three AGI futures: scale, shift, or stall

01:36:00 — 20% catastrophic risk & asteroid analogy

01:40:00 — Final message: talent is upstream of everything

Disclaimer

This podcast is strictly informational and educational and is not investment advice or a solicitation to buy or sell any tokens or securities or to make any financial decisions. Do not trade or invest in any project, tokens, or securities based upon this podcast episode. The host and members at Delphi Ventures may personally own tokens or art that are mentioned on the podcast.
