BI 123 Irina Rish: Continual Learning
Support the show to get full episodes, full archive, and join the Discord community.
Irina is a faculty member at MILA-Quebec AI Institute and a professor at Université de Montréal. She has worked on both ends of the neuroscience/AI interface, using AI for neuroscience applications and using neural principles to help improve AI. We discuss her work on biologically plausible alternatives to back-propagation, which use "auxiliary variables" in addition to the normal connection-weight updates. We also discuss the world of lifelong learning, which seeks to train networks in an online manner so they improve on tasks as those tasks are introduced. Catastrophic forgetting is an obstacle in modern deep learning: a network forgets old tasks when it is trained on new ones. Lifelong learning strategies, like continual learning, transfer learning, and meta-learning, seek to overcome catastrophic forgetting, and we talk about some of the inspirations from neuroscience being used to help lifelong learning in networks.
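For listeners who want a concrete feel for catastrophic forgetting before the episode, here is a minimal, self-contained PyTorch sketch (not code from the show or from Irina's work). It trains one small network sequentially on two synthetic tasks, shows accuracy on the first task collapsing, and then shows a naive replay buffer partially restoring it; all task definitions and numbers are made up for illustration.

```python
# Illustrative sketch of catastrophic forgetting and a naive replay-buffer
# mitigation. Hypothetical toy tasks; not the methods discussed in the episode.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(shift):
    # Synthetic binary classification; each task shifts the input distribution.
    x = torch.randn(512, 2) + shift
    y = (x[:, 0] + x[:, 1] > 2 * shift).float().unsqueeze(1)
    return x, y

def train(model, datasets, epochs=200, lr=0.05):
    # Full-batch SGD over each (x, y) dataset in the list, every epoch.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for x, y in datasets:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

def accuracy(model, x, y):
    with torch.no_grad():
        return ((model(x) > 0).float() == y).float().mean().item()

def fresh_model():
    return nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))

task_a, task_b = make_task(0.0), make_task(3.0)

# Sequential training: learning task B overwrites what was learned on task A.
model = fresh_model()
train(model, [task_a])
print("Task A accuracy after A:", accuracy(model, *task_a))
train(model, [task_b])
print("Task A accuracy after B (forgetting):", accuracy(model, *task_a))

# Naive replay: keep a small buffer of task A and mix it into task B training.
model = fresh_model()
train(model, [task_a])
replay_buffer = (task_a[0][:64], task_a[1][:64])
train(model, [task_b, replay_buffer])
print("Task A accuracy after B with replay:", accuracy(model, *task_a))
```

Replay is only one of many strategies; the episode also touches on regularization- and architecture-based approaches inspired by neuroscience.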
- Irina's website.
- Twitter: @irinarish
- Related papers:
- Lifelong learning video tutorial: DLRL Summer School 2021 - Lifelong Learning - Irina Rish.
0:00 - Intro
3:26 - AI for Neuro, Neuro for AI
14:59 - Utility of philosophy
20:51 - Artificial general intelligence
24:34 - Back-propagation alternatives
35:10 - Inductive bias vs. scaling generic architectures
45:51 - Continual learning
59:54 - Neuro-inspired continual learning
1:06:57 - Learning trajectories