Content provided by Daniel Filan. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Daniel Filan or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://ko.player.fm/legal.

35 - Peter Hase on LLM Beliefs and Easy-to-Hard Generalization

Duration: 2:17:24

How do we figure out what large language models believe? In fact, do they even have beliefs? Do those beliefs have locations, and if so, can we edit those locations to change the beliefs? Also, how are we going to get AI to perform tasks so hard that we can't figure out if they succeeded at them? In this episode, I chat with Peter Hase about his research into these questions.
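
As a concrete (if crude) illustration of the belief question, the sketch below probes which of two competing completions a model assigns more probability to. This is a toy probe only, assuming Hugging Face transformers and GPT-2 as the tooling; the episode and the papers linked below discuss far more careful notions of belief detection and belief editing than simple next-token probabilities.

```python
# A crude probe of what a model "believes": compare the total
# log-probability the model assigns to competing completions of a
# factual prompt. Illustrative only; not the method from Peter's papers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def completion_logprob(prompt: str, completion: str) -> float:
    """Sum of log-probs of the completion's tokens, given the prompt."""
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    full_ids = tok(prompt + completion, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits  # shape: [1, seq_len, vocab]
    logprobs = torch.log_softmax(logits, dim=-1)
    total = 0.0
    # The token at position `pos` is predicted by the logits at `pos - 1`.
    for pos in range(prompt_ids.shape[1], full_ids.shape[1]):
        total += logprobs[0, pos - 1, full_ids[0, pos]].item()
    return total

prompt = "The Eiffel Tower is in the city of"
for city in (" Paris", " Rome"):
    print(f"{city.strip()}: {completion_logprob(prompt, city):.2f}")
```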

Patreon: https://www.patreon.com/axrpodcast

Ko-fi: https://ko-fi.com/axrpodcast

The transcript: https://axrp.net/episode/2024/08/24/episode-35-peter-hase-llm-beliefs-easy-to-hard-generalization.html

Topics we discuss, and timestamps:

0:00:36 - NLP and interpretability

0:10:20 - Interpretability lessons

0:32:22 - Belief interpretability

1:00:12 - Localizing and editing models' beliefs

1:19:18 - Beliefs beyond language models

1:27:21 - Easy-to-hard generalization

1:47:16 - What do easy-to-hard results tell us?

1:57:33 - Easy-to-hard vs weak-to-strong

2:03:50 - Different notions of hardness

2:13:01 - Easy-to-hard vs weak-to-strong, round 2

2:15:39 - Following Peter's work

Peter on Twitter: https://x.com/peterbhase

Peter's papers:

Foundational Challenges in Assuring Alignment and Safety of Large Language Models: https://arxiv.org/abs/2404.09932

Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs: https://arxiv.org/abs/2111.13654

Does Localization Inform Editing? Surprising Differences in Causality-Based Localization vs. Knowledge Editing in Language Models: https://arxiv.org/abs/2301.04213

Are Language Models Rational? The Case of Coherence Norms and Belief Revision: https://arxiv.org/abs/2406.03442

The Unreasonable Effectiveness of Easy Training Data for Hard Tasks: https://arxiv.org/abs/2401.06751

Other links:

Toy Models of Superposition: https://transformer-circuits.pub/2022/toy_model/index.html

Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV): https://arxiv.org/abs/1711.11279

Locating and Editing Factual Associations in GPT (aka the ROME paper): https://arxiv.org/abs/2202.05262

Of nonlinearity and commutativity in BERT: https://arxiv.org/abs/2101.04547

Inference-Time Intervention: Eliciting Truthful Answers from a Language Model: https://arxiv.org/abs/2306.03341

Editing a classifier by rewriting its prediction rules: https://arxiv.org/abs/2112.01008

Discovering Latent Knowledge Without Supervision (aka the Collin Burns CCS paper): https://arxiv.org/abs/2212.03827

Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision: https://arxiv.org/abs/2312.09390

Concrete problems in AI safety: https://arxiv.org/abs/1606.06565

Rissanen Data Analysis: Examining Dataset Characteristics via Description Length: https://arxiv.org/abs/2103.03872

Episode art by Hamish Doodles: hamishdoodles.com
