Content provided by LessWrong. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by LessWrong or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://ko.player.fm/legal.

“Frontier Models are Capable of In-context Scheming” by Marius Hobbhahn, Alex Meinke, Bronson Schoen

14:46
This is a brief summary of what we believe to be the most important takeaways from our new paper and from our findings shown in the o1 system card. We also specifically clarify what we think we did NOT show.
Paper: https://www.apolloresearch.ai/research/scheming-reasoning-evaluations
Twitter about paper: https://x.com/apolloaisafety/status/1864735819207995716
Twitter about o1 system card: https://x.com/apolloaisafety/status/1864737158226928124
What we think the most important findings are
Models are now capable enough to do in-context scheming reasoning
We say an AI system is “scheming” if it covertly pursues misaligned goals, hiding its true capabilities and objectives. We think that in order to scheme, models likely need to be goal-directed, situationally aware, and capable enough to reason about scheming as a strategy. In principle, models might acquire situational awareness and stable long-term goals during training, and then scheme in pursuit of those goals. We only study the simpler case of “in-context scheming,” where [...]
---
Outline:
(00:28) What we think the most important findings are
(00:33) Models are now capable enough to do in-context scheming reasoning
(03:53) The scheming actions can sometimes be quite subtle
(05:05) Models sometimes double down on their deception
(06:35) Models sometimes scheme without any goal nudging
(07:34) Sonnet-3.5 and Opus-3 can sandbag in pursuit of helpfulness
(08:22) Sonnet-3.5 can sandbag without relying on CoT reasoning
(09:28) What we are not claiming:
(10:33) Takeaways
(10:36) Chain-of-thought monitoring
(11:42) Chain-of-thought access
(13:27) Inability safety cases now have to be more specific
The original text contained 7 images which were described by AI.
---
First published:
December 5th, 2024
Source:
https://www.lesswrong.com/posts/8gy7c8GAPkuu6wTiX/frontier-models-are-capable-of-in-context-scheming
---
Narrated by TYPE III AUDIO.
---

402 episodes
