
Owain Evans - AI Situational Awareness, Out-of-Context Reasoning

2:15:46

Owain Evans is an AI Alignment researcher, research associate at the Center for Human-Compatible AI (CHAI) at UC Berkeley, and is now leading a new AI safety research group.

In this episode we discuss two of his recent papers, “Me, Myself, and AI: The Situational Awareness Dataset (SAD) for LLMs” and “Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from Disparate Training Data”, alongside some Twitter questions.

LINKS

Patreon: https://www.patreon.com/theinsideview
Manifund: https://manifund.org/projects/making-52-ai-alignment-video-explainers-and-podcasts
Ask questions: https://twitter.com/MichaelTrazzi
Owain Evans: https://twitter.com/owainevans_uk

OUTLINE

(00:00:00) Intro

(00:01:12) Owain's Agenda

(00:02:25) Defining Situational Awareness

(00:03:30) Safety Motivation

(00:04:58) Why Release A Dataset

(00:06:17) Risks From Releasing It

(00:10:03) Claude 3 on the Longform Task

(00:14:57) Needle in a Haystack

(00:19:23) Situating Prompt

(00:23:08) Deceptive Alignment Precursor

(00:30:12) Distribution Over Two Random Words

(00:34:36) Discontinuing a 01 sequence

(00:40:20) GPT-4 Base On the Longform Task

(00:46:44) Human-AI Data in GPT-4's Pretraining

(00:49:25) Are Longform Task Questions Unusual?

(00:51:48) When Will Situational Awareness Saturate?

(00:53:36) Safety And Governance Implications Of Saturation

(00:56:17) Evaluation Implications Of Saturation

(00:57:40) Follow-up Work On The Situational Awareness Dataset

(01:00:04) Would Removing Chain-Of-Thought Work?

(01:02:18) Out-of-Context Reasoning: the "Connecting the Dots" paper

(01:05:15) Experimental Setup

(01:07:46) Concrete Function Example: 3x + 1

(01:11:23) Isn't It Just A Simple Mapping?

(01:17:20) Safety Motivation

(01:22:40) Out-Of-Context Reasoning Results Were Surprising

(01:24:51) The Biased Coin Task

(01:27:00) Will Out-Of-Context Reasoning Scale

(01:32:50) Checking If In-Context Learning Works

(01:34:33) Mixture-Of-Functions

(01:38:24) Inferring New Architectures From arXiv

(01:43:52) Twitter Questions

(01:44:27) How Does Owain Come Up With Ideas?

(01:49:44) How Did Owain's Background Influence His Research Style And Taste?

(01:52:06) Should AI Alignment Researchers Aim For Publication?

(01:57:01) How Can We Apply LLM Understanding To Mitigate Deceptive Alignment?

(01:58:52) Could Owain's Research Accelerate Capabilities?

(02:08:44) How Was Owain's Work Received?

(02:13:23) Last Message
