Content provided by Machine Learning Street Talk (MLST). All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Machine Learning Street Talk (MLST) or its podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ko.player.fm/legal.

Is ChatGPT an N-gram model on steroids?

32:57
Manage episode 434317798 series 2803422
DeepMind Research Scientist / MIT scholar Dr. Timothy Nguyen discusses his recent paper on understanding transformers through n-gram statistics. Nguyen explains his approach to analyzing transformer behavior using a kind of "template matching" (N-grams), providing insights into how these models process and predict language.

MLST is sponsored by Brave:

The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval-augmented generation. Try it now - get 2,000 free queries monthly at http://brave.com/api.

Key points covered include:

A method for describing transformer predictions using n-gram statistics without relying on internal mechanisms.

The discovery of a technique to detect overfitting in large language models without using holdout sets.

Observations on curriculum learning, showing how transformers progress from simpler to more complex rules during training.

Discussion of distance measures used in the analysis, particularly the variational distance.

Exploration of model sizes, training dynamics, and their impact on the results.
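To make the first and fourth points concrete, here is a minimal sketch of the general idea: build a hash table of n-gram statistics from a corpus, turn a context's counts into a predictive distribution, and compare it to a model's next-token distribution via the (total) variational distance. This is an illustrative toy, not the paper's actual pipeline; the corpus, the uniform stand-in for the transformer's distribution, and all function names are assumptions for the example.

```python
from collections import Counter, defaultdict

def ngram_table(tokens, n):
    """Hash table mapping (n-1)-token contexts to next-token counts."""
    table = defaultdict(Counter)
    for i in range(len(tokens) - n + 1):
        context = tuple(tokens[i:i + n - 1])
        table[context][tokens[i + n - 1]] += 1
    return table

def ngram_dist(table, context, vocab):
    """Predictive distribution over vocab for a given context
    (uniform fallback when the context was never seen)."""
    counts = table[tuple(context)]
    total = sum(counts.values())
    return {w: counts[w] / total if total else 1 / len(vocab) for w in vocab}

def variational_distance(p, q):
    """Total variation distance: half the L1 distance between distributions."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(w, 0.0) - q.get(w, 0.0)) for w in keys)

# Toy corpus; in the paper's setting this would be the training data.
tokens = "the cat sat on the mat the cat ran".split()
vocab = sorted(set(tokens))
table = ngram_table(tokens, n=3)
p = ngram_dist(table, ["the", "cat"], vocab)  # trigram rule for context "the cat"
# Stand-in for a transformer's next-token distribution over the same vocab:
q = {w: 1 / len(vocab) for w in vocab}
print(round(variational_distance(p, q), 3))  # prints 0.667
```

The hash-table form is what makes the "template matching" description cheap to evaluate: each n-gram rule is just a context lookup, and the distance score says how well that rule accounts for the model's prediction.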

We also touch on philosophical aspects of describing versus explaining AI behavior, and the challenges in understanding the abstractions formed by neural networks. Nguyen concludes by discussing potential future research directions, including attempts to convert descriptions of transformer behavior into explanations of internal mechanisms.

Timothy Nguyen earned his B.S. and Ph.D. in mathematics from Caltech and MIT, respectively. He held positions as Research Assistant Professor at the Simons Center for Geometry and Physics (2011-2014) and Visiting Assistant Professor at Michigan State University (2014-2017). During this time, his research expanded into high-energy physics, focusing on mathematical problems in quantum field theory. His work notably provided a simplified and corrected formulation of perturbative path integrals.

Since 2017, Nguyen has been working in industry, applying his expertise to machine learning. He is currently at DeepMind, where he contributes to both fundamental research and practical applications of deep learning to solve real-world problems.

Refs:

The Cartesian Cafe

https://www.youtube.com/@TimothyNguyen

Understanding Transformers via N-Gram Statistics

https://www.researchgate.net/publication/382204056_Understanding_Transformers_via_N-Gram_Statistics

TOC

00:00:00 Timothy Nguyen's background

00:02:50 Paper overview: transformers and n-gram statistics

00:04:55 Template matching and hash table approach

00:08:55 Comparing templates to transformer predictions

00:12:01 Describing vs explaining transformer behavior

00:15:36 Detecting overfitting without holdout sets

00:22:47 Curriculum learning in training

00:26:32 Distance measures in analysis

00:28:58 Model sizes and training dynamics

00:30:39 Future research directions

00:32:06 Conclusion and future topics
