#60 Geometric Deep Learning Blueprint (Special Edition)


The last decade has witnessed an experimental revolution in data science and machine learning, epitomised by deep learning methods. Many high-dimensional learning tasks previously thought to be beyond reach -- such as computer vision, playing Go, or protein folding -- are in fact tractable given enough computational horsepower. Remarkably, the essence of deep learning is built from two simple algorithmic principles: first, the notion of representation or feature learning and second, learning by local gradient-descent type methods, typically implemented as backpropagation.
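As a toy illustration of those two principles (ours, not something discussed in the episode), the sketch below learns a hidden-layer representation of a simple function purely by gradient descent, with the backward pass written out by hand. The architecture, data, and hyperparameters are all illustrative choices:

```python
# Two principles in miniature: a learned feature representation (one hidden
# layer) trained by local gradient descent, with backpropagation done by hand.
import numpy as np

rng = np.random.default_rng(0)

# Toy task (illustrative): learn y = sin(x) on [-pi, pi]
X = rng.uniform(-np.pi, np.pi, size=(256, 1))
y = np.sin(X)

# One hidden layer: features h = tanh(X W1 + b1), output h W2 + b2
W1 = rng.normal(0, 0.5, size=(1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, size=(32, 1)); b2 = np.zeros(1)

lr = 0.05
for step in range(2000):
    # Forward pass: representation, then prediction
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y                      # dLoss/dpred for 0.5 * squared error

    # Backward pass (chain rule, i.e. backpropagation)
    dW2 = h.T @ err / len(X)
    db2 = err.mean(axis=0)
    dh = err @ W2.T * (1 - h**2)        # through the tanh nonlinearity
    dW1 = X.T @ dh / len(X)
    db1 = dh.mean(axis=0)

    # Local gradient-descent update
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print("final MSE:", float(np.mean(err**2)))
```

In practice, frameworks such as PyTorch or JAX derive the backward pass automatically, but the update is the same local gradient step.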

While learning generic functions in high dimensions is a cursed estimation problem, most tasks of interest are not generic: they come with strong regularities arising from the low-dimensionality and structure of the physical world.
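A back-of-the-envelope calculation (ours, not from the episode) shows why the generic problem is cursed: estimating an arbitrary function on the unit cube to resolution epsilon means covering the cube at that scale, so the number of samples grows exponentially in the dimension:

```latex
N(\epsilon, d) \approx \left(\frac{1}{\epsilon}\right)^{d},
\qquad \text{e.g. } \epsilon = 0.1,\; d = 12 \;\Rightarrow\; N \approx 10^{12}.
```

Exploiting the regularities of real tasks is what makes learning tractable despite this.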

Geometric Deep Learning unifies a broad class of ML problems from the perspectives of symmetry and invariance. These principles not only underlie the breakthrough performance of convolutional neural networks and the recent success of graph neural networks but also provide a principled way to construct new types of problem-specific inductive biases.
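As a concrete, illustrative instance of this principle (not taken from the proto-book), the snippet below checks the translation equivariance that underlies CNNs: convolving a circularly shifted signal gives the shifted convolution. The analogous statement for graph neural networks is equivariance to node permutations.

```python
# Check that 1-D circular convolution commutes with translation,
# i.e. conv(shift(x)) == shift(conv(x)).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=16)        # a signal
w = rng.normal(size=5)         # a convolution filter

def circular_conv(signal, kernel):
    """Circular (periodic) cross-correlation of signal with kernel."""
    n = len(signal)
    return np.array([
        sum(kernel[k] * signal[(i + k) % n] for k in range(len(kernel)))
        for i in range(n)
    ])

shift = lambda s, t: np.roll(s, t)

lhs = circular_conv(shift(x, 3), w)   # translate, then convolve
rhs = shift(circular_conv(x, w), 3)   # convolve, then translate
assert np.allclose(lhs, rhs)          # equivariance holds exactly
print("translation equivariance verified")
```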

This week we spoke with Prof. Michael Bronstein (Head of Graph ML at Twitter), Dr. Petar Veličković (Senior Research Scientist at DeepMind), Dr. Taco Cohen, and Prof. Joan Bruna about their new proto-book Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges.

See the table of contents for this (long) show at https://youtu.be/bIZB1hIJ4u8
