OpenAI Researcher Dan Roberts on What Physics Can Teach Us About AI
Manage episode 446324987 series 3586723
In recent years there’s been an influx of theoretical physicists into the leading AI labs. Do they have unique capabilities suited to studying large models or is it just herd behavior? To find out, we talked to our former AI Fellow (and now OpenAI researcher) Dan Roberts.
Roberts, co-author of The Principles of Deep Learning Theory, is at the forefront of research that applies the tools of theoretical physics to another type of large, complex system: deep neural networks. Dan believes that DNNs, and eventually LLMs, are interpretable in the same way a large collection of atoms is, at the system level. He also thinks the current emphasis on scaling laws will, over time, be balanced by new ideas and architectures as scaling reaches its economic limits.
Hosted by: Sonya Huang and Pat Grady, Sequoia Capital
Mentioned in this episode:
- The Principles of Deep Learning Theory: An Effective Theory Approach to Understanding Neural Networks, by Daniel A. Roberts, Sho Yaida, Boris Hanin
- Black Holes and the Intelligence Explosion: Extreme scenarios of AI focus on what is logically possible rather than what is physically possible. What does physics have to say about AI risk?
- Yang-Mills & The Mass Gap: An unsolved Millennium Prize problem
- AI Math Olympiad: Dan is on the prize committee
66 episodes