Content provided by LessWrong. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by LessWrong or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ko.player.fm/legal.
“Condensation” by abramdemski
Condensation: a theory of concepts is a model of concept-formation by Sam Eisenstat. Its goals and methods resemble John Wentworth's natural abstractions/natural latents research.[1] Both theories seek to provide a clear picture of how to posit latent variables, such that once someone has understood the theory, they'll say "yep, I see now, that's how latent variables work!".
The goal of this post is to popularize Sam's theory and to give my own perspective on it; however, it will not be a full explanation of the math. For technical details, I suggest reading Sam's paper.
Brief Summary
Shannon's information theory focuses on the question of how to encode information when you have to encode everything. You get to design the coding scheme, but the information you'll have to encode is unknown (and you have some subjective probability distribution over what it will be). Your objective is to minimize the total expected code-length.
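As a minimal illustration of this idea (not from the episode itself): under Shannon's source coding theorem, an optimal code assigns roughly -log2(p) bits to a message of probability p, so the minimum expected code length is the entropy of your subjective distribution. The distribution below is a made-up example.

```python
import math

# A subjective distribution over the messages we might have to encode.
p = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

# Optimal code lengths: about -log2(p) bits per message.
code_lengths = {msg: -math.log2(prob) for msg, prob in p.items()}

# Expected code length under the optimal code = entropy of p.
expected_length = sum(prob * code_lengths[msg] for msg, prob in p.items())

print(code_lengths)     # a: 1.0, b: 2.0, c: 3.0, d: 3.0 (bits)
print(expected_length)  # 1.75 bits
```

Because these probabilities are exact powers of two, the optimal lengths are whole numbers of bits and the bound is achieved exactly; for general distributions, practical codes get within one bit of the entropy per message.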
Algorithmic information theory similarly focuses on minimizing the total code-length, but it uses a "more objective" distribution (a universal algorithmic distribution), and a fixed coding scheme (some programming language). This allows it to talk about the minimum code-length of specific data (talking about particulars rather than average [...]
---
Outline:
(00:45) Brief Summary
(02:35) Shannon's Information Theory
(07:21) Universal Codes
(11:13) Condensation
(12:52) Universal Data-Structure?
(15:30) Well-Organized Notebooks
(18:18) Random Variables
(18:54) Givens
(19:50) Underlying Space
(20:33) Latents
(21:21) Contributions
(21:39) Top
(22:24) Bottoms
(22:55) Score
(24:29) Perfect Condensation
(25:52) Interpretability Solved?
(26:38) Condensation isn't as tight an abstraction as information theory.
(27:40) Condensation isn't a very good model of cognition.
(29:46) Much work to be done!
The original text contained 15 footnotes which were omitted from this narration.
---
First published:
November 9th, 2025
Source:
https://www.lesswrong.com/posts/BstHXPgQyfeNnLjjp/condensation
---
Narrated by TYPE III AUDIO.