#195 – Sella Nevo on who's trying to steal frontier AI models, and what they could do with them

2:08:29

"Computational systems have literally millions of physical and conceptual components, and around 98% of them are embedded into your infrastructure without you ever having heard of them. And an inordinate amount of them can lead to a catastrophic failure of your security assumptions. And because of this, the Iranian secret nuclear programme failed to prevent a breach, most US agencies failed to prevent multiple breaches, most US national security agencies failed to prevent breaches. So ensuring your system is truly secure against highly resourced and dedicated attackers is really, really hard." —Sella Nevo

In today’s episode, host Luisa Rodriguez speaks to Sella Nevo — director of the Meselson Center at RAND — about his team’s latest report on how to protect the model weights of frontier AI models from actors who might want to steal them.

Links to learn more, highlights, and full transcript.

They cover:

  • Real-world examples of sophisticated security breaches, and what we can learn from them.
  • Why AI model weights might be such a high-value target for adversaries like hackers, rogue states, and other bad actors.
  • The many ways that model weights could be stolen, from using human insiders to sophisticated supply chain hacks.
  • The current best practices in cybersecurity, and why they may not be enough to keep bad actors away.
  • New security measures that Sella hopes can mitigate the growing risks.
  • Sella’s work using machine learning for flood forecasting, which has significantly reduced injuries and costs from floods across Africa and Asia.
  • And plenty more.

Also, RAND is currently hiring for roles in technical and policy information security — check them out if you're interested in this field!

Chapters:

  • Cold open (00:00:00)
  • Luisa’s intro (00:00:56)
  • The interview begins (00:02:30)
  • The importance of securing the model weights of frontier AI models (00:03:01)
  • The most sophisticated and surprising security breaches (00:10:22)
  • AI models being leaked (00:25:52)
  • Researching for the RAND report (00:30:11)
  • Who tries to steal model weights? (00:32:21)
  • Malicious code and exploiting zero-days (00:42:06)
  • Human insiders (00:53:20)
  • Side-channel attacks (01:04:11)
  • Getting access to air-gapped networks (01:10:52)
  • Model extraction (01:19:47)
  • Reducing and hardening authorised access (01:38:52)
  • Confidential computing (01:48:05)
  • Red-teaming and security testing (01:53:42)
  • Careers in information security (01:59:54)
  • Sella’s work on flood forecasting systems (02:01:57)
  • Luisa’s outro (02:04:51)

Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
