
This content is provided by Machine Learning Street Talk (MLST). All podcast content, including episodes, graphics, and episode descriptions, is uploaded and provided directly by Machine Learning Street Talk (MLST) or its podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described at https://ko.player.fm/legal.

#114 - Secrets of Deep Reinforcement Learning (Minqi Jiang)

2:47:15

Patreon: https://www.patreon.com/mlst

Discord: https://discord.gg/ESrGqhf5CB

Twitter: https://twitter.com/MLStreetTalk

In this exclusive interview, Dr. Tim Scarfe sits down with Minqi Jiang, a leading PhD student at University College London and Meta AI, as they delve into the fascinating world of deep reinforcement learning (RL) and its impact on technology, startups, and research. Discover how Minqi made the crucial decision to pursue a PhD in this exciting field, and learn from his valuable startup experiences and lessons.

Minqi shares his insights into balancing serendipity and planning in life and research, and explains the role of objectives and Goodhart's Law in decision-making. Get ready to explore the depths of robustness in RL, two-player zero-sum games, and the differences between RL and supervised learning.

As they discuss the role of environment in intelligence, emergence, and abstraction, prepare to be blown away by the possibilities of open-endedness and the intelligence explosion. Learn how language models can generate their own training data, the limitations of RL, and the future of Software 2.0 and its interpretability concerns.

From robotics and open-ended learning applications to learning potential metrics and MDPs, this interview is a goldmine of information for anyone interested in AI, RL, and the cutting edge of technology. Don't miss out on this incredible opportunity to learn from a rising star in the AI world!

TOC

Tech & Startup Background [00:00:00]

Pursuing PhD in Deep RL [00:03:59]

Startup Lessons [00:11:33]

Serendipity vs Planning [00:12:30]

Objectives & Decision Making [00:19:19]

Minimax Regret & Uncertainty [00:22:57]

Robustness in RL & Zero-Sum Games [00:26:14]

RL vs Supervised Learning [00:34:04]

Exploration & Intelligence [00:41:27]

Environment, Emergence, Abstraction [00:46:31]

Open-endedness & Intelligence Explosion [00:54:28]

Language Models & Training Data [01:04:59]

RLHF & Language Models [01:16:37]

Creativity in Language Models [01:27:25]

Limitations of RL [01:40:58]

Software 2.0 & Interpretability [01:45:11]

Language Models & Code Reliability [01:48:23]

Robust Prioritized Level Replay [01:51:42]

Open-ended Learning [01:55:57]

Auto-curriculum & Deep RL [02:08:48]

Robotics & Open-ended Learning [02:31:05]

Learning Potential & MDPs [02:36:20]

Universal Function Space [02:42:02]

Goal-Directed Learning & Auto-Curricula [02:42:48]

Advice & Closing Thoughts [02:44:47]

References:

- Why Greatness Cannot Be Planned: The Myth of the Objective by Kenneth O. Stanley and Joel Lehman

https://www.springer.com/gp/book/9783319155234

- General Intelligence Requires Rethinking Exploration

https://arxiv.org/abs/2106.06860

- The Case for Strong Emergence (Sabine Hossenfelder)

https://arxiv.org/abs/2102.07740

- The Game of Life (Conway)

https://www.conwaylife.com/

- Toolformer: Language Models Can Teach Themselves to Use Tools (Meta AI)

https://arxiv.org/abs/2302.04761

- POET: Paired Open-Ended Trailblazer (Uber AI)

https://arxiv.org/abs/1901.01753

- Schmidhuber's Artificial Curiosity

https://people.idsia.ch/~juergen/interest.html

- Gödel Machines

https://people.idsia.ch/~juergen/goedelmachine.html

- PowerPlay

https://arxiv.org/abs/1112.5309

- Robust Prioritized Level Replay

https://openreview.net/forum?id=NfZ6g2OmXEk

- Emergent Complexity and Zero-shot Transfer via Unsupervised Environment Design

https://arxiv.org/abs/2012.02096

- Excel: Evolving Curriculum Learning for Deep Reinforcement Learning

https://arxiv.org/abs/1901.05431

- Go-Explore: A New Approach for Hard-Exploration Problems

https://arxiv.org/abs/1901.10995

- Learning with AMIGo: Adversarially Motivated Intrinsic Goals

https://www.researchgate.net/publication/342377312_Learning_with_AMIGo_Adversarially_Motivated_Intrinsic_Goals

- Pattern Recognition and Machine Learning (Bishop)

https://www.microsoft.com/en-us/research/uploads/prod/2006/01/Bishop-Pattern-Recognition-and-Machine-Learning-2006.pdf

- Reinforcement Learning: An Introduction (Sutton & Barto)

https://web.stanford.edu/class/psych209/Readings/SuttonBartoIPRLBook2ndEd.pdf
