Sakana AI - Chris Lu, Robert Tjarko Lange, Cong Lu

1:37:54
 
Content provided by Machine Learning Street Talk (MLST). All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Machine Learning Street Talk (MLST) or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined here: https://ko.player.fm/legal.

We speak with Sakana AI, who are building nature-inspired methods that could fundamentally transform how we develop AI systems.

The guests include Chris Lu, a researcher who recently completed his DPhil at Oxford University under Prof. Jakob Foerster's supervision, where he focused on meta-learning and multi-agent systems. Chris is the first author of the DiscoPOP paper, which demonstrates how language models can discover and design better training algorithms. Also joining is Robert Tjarko Lange, a founding member of Sakana AI who specializes in evolutionary algorithms and large language models. Robert leads research at the intersection of evolutionary computation and foundation models, and is completing his PhD at TU Berlin on evolutionary meta-learning. The discussion also features Cong Lu, currently a Research Scientist on Google DeepMind's Open-Endedness team, who previously helped develop The AI Scientist and Intelligent Go-Explore.

SPONSOR MESSAGES:

***

CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. Check out their super fast DeepSeek R1 hosting!

https://centml.ai/pricing/

Tufa AI Labs is a brand-new research lab in Zurich started by Benjamin Crouzier, focused on o-series-style reasoning and AGI. They are hiring a Chief Engineer and ML engineers, and host events in Zurich.

Go to https://tufalabs.ai/

***

* DiscoPOP - A framework where language models discover their own optimization algorithms (the general discovery loop is sketched below)

* EvoLLM - Using language models as evolution strategies for optimization

* The AI Scientist - A fully automated system that conducts scientific research end-to-end

* Neural Attention Memory Models (NAMMs) - Evolved memory systems that make transformers both faster and more accurate
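
To make the DiscoPOP and EvoLLM items above concrete, here is a minimal, hypothetical sketch of the outer loop those approaches share: a language model proposes candidate loss functions as code, each candidate is scored on a downstream task, and the scored history is fed back into the next proposal. The `propose_candidate` stub, the toy least-squares task, and the random-search evaluator below are illustrative assumptions, not the actual DiscoPOP or EvoLLM implementations.

```python
# Hypothetical sketch of LLM-driven objective discovery (not Sakana AI's code).
# An outer loop asks a "proposer" for candidate loss functions written as code,
# evaluates each candidate on a toy task, and keeps the scored history that a
# real system would feed back into the language model's prompt.
import math
import random


def propose_candidate(history):
    """Stand-in for an LLM call that returns source code for a loss function.

    A real system would send `history` (past candidates and their scores) to a
    language model and parse a new candidate program from its reply.
    """
    pool = [  # fixed hand-written candidates so the sketch runs offline
        "def loss(err): return err ** 2",                # squared error
        "def loss(err): return abs(err)",                # absolute error
        "def loss(err): return math.log(1 + err ** 2)",  # robust log loss
    ]
    return pool[len(history) % len(pool)]


def evaluate(candidate_src, data):
    """Score a candidate loss by fitting a 1-D linear model under it.

    Uses crude random search instead of gradient descent, and reports the
    fitted model's mean absolute error on the same data (lower is better).
    """
    namespace = {"math": math}
    exec(candidate_src, namespace)  # trusted toy code only
    loss = namespace["loss"]
    best_w, best_obj = 0.0, float("inf")
    for _ in range(2000):
        w = random.uniform(-5.0, 5.0)
        obj = sum(loss(y - w * x) for x, y in data) / len(data)
        if obj < best_obj:
            best_w, best_obj = w, obj
    return sum(abs(y - best_w * x) for x, y in data) / len(data)


if __name__ == "__main__":
    random.seed(0)
    data = [(x, 2.0 * x + random.gauss(0, 0.1)) for x in range(20)]
    history = []
    for step in range(3):  # outer discovery loop
        src = propose_candidate(history)
        score = evaluate(src, data)
        history.append((src, score))
        print(f"step {step}: score={score:.4f}  candidate={src}")
    best_src, best_score = min(history, key=lambda h: h[1])
    print(f"best candidate (score {best_score:.4f}): {best_src}")
```

In the papers themselves, the proposer is an actual language model conditioned on the scored history, and the evaluator is a full training-and-validation run (preference optimization benchmarks for DiscoPOP, black-box fitness functions for EvoLLM); only the loop structure is what this sketch is meant to illustrate.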

TRANSCRIPT + REFS:

https://www.dropbox.com/scl/fi/gflcyvnujp8cl7zlv3v9d/Sakana.pdf?rlkey=woaoo82943170jd4yyi2he71c&dl=0

Robert Tjarko Lange

https://roberttlange.com/

Chris Lu

https://chrislu.page/

Cong Lu

https://www.conglu.co.uk/

Sakana

https://sakana.ai/blog/

TOC:

1. LLMs for Algorithm Generation and Optimization

[00:00:00] 1.1 LLMs generating algorithms for training other LLMs

[00:04:00] 1.2 Evolutionary black-box optimization using neural network loss parameterization

[00:11:50] 1.3 DiscoPOP: Non-convex loss function for noisy data

[00:20:45] 1.4 External entropy injection for preventing model collapse

[00:26:25] 1.5 LLMs for black-box optimization using abstract numerical sequences

2. Model Learning and Generalization

[00:31:05] 2.1 Fine-tuning on teacher algorithm trajectories

[00:31:30] 2.2 Transformers learning gradient descent

[00:33:00] 2.3 LLM tokenization biases towards specific numbers

[00:34:50] 2.4 LLMs as evolution strategies for black-box optimization

[00:38:05] 2.5 DiscoPOP: LLMs discovering novel optimization algorithms

3. AI Agents and System Architectures

[00:51:30] 3.1 ARC challenge: Induction vs. transformer approaches

[00:54:35] 3.2 LangChain / modular agent components

[00:57:50] 3.3 Debate improves LLM truthfulness

[01:00:55] 3.4 Time limits controlling AI agent systems

[01:03:00] 3.5 Gemini: Million-token context enables flatter hierarchies

[01:04:05] 3.6 Agents follow own interest gradients

[01:09:50] 3.7 Go-Explore algorithm: archive-based exploration

[01:11:05] 3.8 Foundation models for interesting state discovery

[01:13:00] 3.9 LLMs leverage prior game knowledge

4. AI for Scientific Discovery and Human Alignment

[01:17:45] 4.1 Encoding alignment & aesthetics via reward functions

[01:20:00] 4.2 AI Scientist: automated open-ended scientific discovery

[01:24:15] 4.3 DiscoPOP: LLM for preference optimization algorithms

[01:28:30] 4.4 Balancing AI knowledge with human understanding

[01:33:55] 4.5 AI-driven conferences and paper review
