Content provided by Daniel Filan. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Daniel Filan or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ko.player.fm/legal.

31 - Singular Learning Theory with Daniel Murfet

2:32:07

What's going on with deep learning? What sorts of models get learned, and what are the learning dynamics? Singular learning theory is a theory of Bayesian statistics broad enough in scope to encompass deep neural networks, and it may help answer these questions. In this episode, I speak with Daniel Murfet about this research program and what it tells us.
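
As background for the discussion of the local learning coefficient (LLC) below, here is a minimal sketch of the SGLD-based LLC estimator from the "Quantifying degeneracy" and "Estimating the Local Learning Coefficient at Scale" papers linked below, applied to a classic toy singular model. The toy model, hyperparameters, and helper-function names are illustrative assumptions for this sketch, not code from the episode or those papers; for simplicity it uses full-batch Langevin dynamics where the papers use minibatch SGLD.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy singular model: f(x; w) = w1 * w2 * x, with the true parameter on
# the singular set {w1 * w2 = 0}.  A regular 2-parameter model would have
# learning coefficient d/2 = 1; for this model the known value is 1/2.
n = 1000
x = rng.normal(size=n)
y = 0.1 * rng.normal(size=n)  # true function is y = 0, plus noise

def avg_loss(w):
    """Average squared-error loss L_n(w) over the dataset."""
    resid = y - w[0] * w[1] * x
    return 0.5 * np.mean(resid ** 2)

def grad_avg_loss(w):
    """Gradient of L_n with respect to w = (w1, w2)."""
    resid = y - w[0] * w[1] * x
    return np.array([-np.mean(resid * w[1] * x),
                     -np.mean(resid * w[0] * x)])

def estimate_llc(w_star, eps=5e-3, gamma=1.0, steps=20_000, burn_in=5_000):
    """Estimate the local learning coefficient at w_star as
        lambda_hat = n * beta * (E_posterior[L_n(w)] - L_n(w_star)),
    where the expectation is over a localized posterior at inverse
    temperature beta = 1 / log(n), sampled with Langevin dynamics
    (full-batch here; SGLD would use minibatch gradients)."""
    beta = 1.0 / np.log(n)
    w = w_star.copy()
    losses = []
    for t in range(steps):
        # Gradient of the negative log-posterior: tempered likelihood term
        # plus a Gaussian localization of strength gamma pinning w near w_star.
        grad_U = beta * n * grad_avg_loss(w) + gamma * (w - w_star)
        w = w - 0.5 * eps * grad_U + np.sqrt(eps) * rng.normal(size=w.shape)
        if t >= burn_in:
            losses.append(avg_loss(w))
    return n * beta * (np.mean(losses) - avg_loss(w_star))

w_star = np.array([0.0, 0.0])  # a point on the singular set
print(f"estimated LLC at the origin: {estimate_llc(w_star):.2f} (theory: 0.5)")
```

The printed estimate is sensitive to the step size, localization strength, and chain length, so it will only roughly match the theoretical value of 1/2; making this estimator behave well for large networks is the subject of the "Estimating the Local Learning Coefficient at Scale" paper linked below.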

Patreon: patreon.com/axrpodcast

Ko-fi: ko-fi.com/axrpodcast

Topics we discuss, and timestamps:

0:00:26 - What is singular learning theory?

0:16:00 - Phase transitions

0:35:12 - Estimating the local learning coefficient

0:44:37 - Singular learning theory and generalization

1:00:39 - Singular learning theory vs other deep learning theory

1:17:06 - How singular learning theory hit AI alignment

1:33:12 - Payoffs of singular learning theory for AI alignment

1:59:36 - Does singular learning theory advance AI capabilities?

2:13:02 - Open problems in singular learning theory for AI alignment

2:20:53 - What is the singular fluctuation?

2:25:33 - How geometry relates to information

2:30:13 - Following Daniel Murfet's work

The transcript: https://axrp.net/episode/2024/05/07/episode-31-singular-learning-theory-dan-murfet.html

Daniel Murfet's twitter/X account: https://twitter.com/danielmurfet

Developmental interpretability website: https://devinterp.com

Developmental interpretability YouTube channel: https://www.youtube.com/@Devinterp

Main research discussed in this episode:

- Developmental Landscape of In-Context Learning: https://arxiv.org/abs/2402.02364

- Estimating the Local Learning Coefficient at Scale: https://arxiv.org/abs/2402.03698

- Simple versus Short: Higher-order degeneracy and error-correction: https://www.lesswrong.com/posts/nWRj6Ey8e5siAEXbK/simple-versus-short-higher-order-degeneracy-and-error-1

Other links:

- Algebraic Geometry and Statistical Learning Theory (the grey book): https://www.cambridge.org/core/books/algebraic-geometry-and-statistical-learning-theory/9C8FD1BDC817E2FC79117C7F41544A3A

- Mathematical Theory of Bayesian Statistics (the green book): https://www.routledge.com/Mathematical-Theory-of-Bayesian-Statistics/Watanabe/p/book/9780367734817

- In-context learning and induction heads: https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html

- Saddle-to-Saddle Dynamics in Deep Linear Networks: Small Initialization Training, Symmetry, and Sparsity: https://arxiv.org/abs/2106.15933

- A mathematical theory of semantic development in deep neural networks: https://www.pnas.org/doi/abs/10.1073/pnas.1820226116

- Consideration on the Learning Efficiency Of Multiple-Layered Neural Networks with Linear Units: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4404877

- Neural Tangent Kernel: Convergence and Generalization in Neural Networks: https://arxiv.org/abs/1806.07572

- The Interpolating Information Criterion for Overparameterized Models: https://arxiv.org/abs/2307.07785

- Feature Learning in Infinite-Width Neural Networks: https://arxiv.org/abs/2011.14522

- A central AI alignment problem: capabilities generalization, and the sharp left turn: https://www.lesswrong.com/posts/GNhMPAWcfBCASy8e6/a-central-ai-alignment-problem-capabilities-generalization

- Quantifying degeneracy in singular models via the learning coefficient: https://arxiv.org/abs/2308.12108

Episode art by Hamish Doodles: hamishdoodles.com
