Content provided by Soroush Pour. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Soroush Pour or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ko.player.fm/legal.

Ep 9 - Scaling AI safety research w/ Adam Gleave (CEO, FAR AI)

Duration: 1:19:12
 
Episode 382307174 · Series 3428190

We speak with Adam Gleave, CEO of FAR AI (https://far.ai). FAR AI’s mission is to ensure AI systems are trustworthy & beneficial. They incubate & accelerate research that's too resource-intensive for academia but not ready for commercialisation. They work on everything from adversarial robustness, interpretability, preference learning, & more.
We talk to Adam about:
* The founding story of FAR as an AI safety org, and how it's different from the big commercial labs (e.g. OpenAI) and academia.
* Their current research directions & how they're progressing
* Promising agendas & notable gaps in AI safety research
Hosted by Soroush Pour. Follow me for more AGI content:
Twitter: https://twitter.com/soroushjp
LinkedIn: https://www.linkedin.com/in/soroushjp/
== Show links ==
-- About Adam --
Adam Gleave is the CEO of FAR, one of the most prominent not-for-profits focused on research towards AI safety & alignment. He completed his PhD in artificial intelligence (AI) at UC Berkeley, advised by Stuart Russell, a giant in the field of AI. Adam did his PhD on trustworthy machine learning and has dedicated his career to ensuring advanced AI systems act according to human preferences. Adam is incredibly knowledgeable about the world of AI, having worked directly as a researcher and now as leader of a sizable and growing research org.
-- Further resources --
* Adam
  * Website: https://www.gleave.me/
  * Twitter: https://twitter.com/ARGleave
  * LinkedIn: https://www.linkedin.com/in/adamgleave/
  * Google Scholar: https://scholar.google.com/citations?user=lBunDH0AAAAJ&hl=en&oi=ao
* FAR AI
  * Website: https://far.ai
  * Twitter: https://twitter.com/farairesearch
  * LinkedIn: https://www.linkedin.com/company/far-ai/
  * Job board: https://far.ai/category/jobs/
* AI safety training bootcamps:
  * ARENA: https://www.arena.education/
  * See also: MLAB, WMLB, https://aisafety.training/
* Research
  * FAR's adversarial attack on KataGo: https://goattack.far.ai/
* Ideas for impact mentioned by Adam
  * Consumer report for AI model safety
  * Agency model to support AI safety researchers
  * Compute cluster for AI safety researchers
* Donate to AI safety
  * FAR AI: https://www.every.org/far-ai-inc#/donate/card
  * ARC Evals: https://evals.alignment.org/
  * Berkeley CHAI: https://humancompatible.ai/
Recorded Oct 9, 2023
