LW - You should go to ML conferences by Jan Kulveit

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: You should go to ML conferences, published by Jan Kulveit on July 24, 2024 on LessWrong.
This is a second, kind-of-obvious point to make, but if you are interested in AI, AI safety, or cognition in general, it is likely worth going to top ML conferences such as NeurIPS, ICML or ICLR. In this post I cover some reasons why, and some anecdotal stories.
1. Parts of AI alignment and safety are now completely mainstream
Looking at the "Best paper awards" at ICML, you'll find these safety-relevant or alignment-relevant papers:
Stealing part of a production language model by Carlini et al.
Probabilistic Inference in Language Models via Twisted Sequential Monte Carlo by Zhao et al.
Debating with More Persuasive LLMs Leads to More Truthful Answers by Khan et al.
Genie: Generative Interactive Environments by Bruce et al.
which amounts to about one-third (!). "Because of safety concerns" is part of the motivation for hundreds of papers.
While the signal-to-noise ratio is even worse than on LessWrong, in total the amount you can learn is higher - my personal guess is that there is maybe 2-3x as much prosaic-AI-safety-relevant work at conferences as what you get by just following LessWrong, the Alignment Forum and safety-oriented communication channels.
2. Conferences are an efficient way to screen general ML research without spending a lot of time on X
Almost all papers are presented in the form of posters. In the case of a big conference, this usually means many thousands of posters presented in huge poster sessions.
My routine for engaging with this firehose of papers (a rough sketch of the resulting filter follows the list):
1. For each session, read all the titles. Usually, this prunes it by a factor of ten (e.g. from 600 papers to 60).
2. Read the abstracts. Prune to things which I haven't noticed before and which seem relevant. For me, this is usually a further factor of ~3-5.
3. Visit the posters. Posters with the paper authors present are actually a highly efficient way to digest research:
Sometimes, you suspect there is some assumption or choice hidden somewhere making the result approximately irrelevant - just asking can often resolve this in a matter of tens of seconds.
Posters themselves don't undergo peer review, which makes the communication more honest, with less hedging.
Usually authors of a paper know significantly more about the problem than what's in the paper, and you can learn more about negative results, obstacles, or directions people are excited about.
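As a rough illustration of the funnel this routine produces, here is a minimal sketch in Python. It is not from the original post: the paper records and the title/abstract relevance checks are hypothetical placeholders, and only the rough ~10x and ~3-5x pruning ratios come from the text above.

    # Minimal sketch of the two-stage pruning funnel described above.
    # Assumption: each paper is a dict with "title" and "abstract" keys;
    # title_keep and abstract_keep stand in for whatever quick relevance
    # judgments you make while skimming.
    def screen_session(papers, title_keep, abstract_keep):
        # Stage 1: titles only - in practice roughly a 10x cut (e.g. 600 -> 60).
        by_title = [p for p in papers if title_keep(p["title"])]
        # Stage 2: abstracts of the survivors - a further ~3-5x cut (e.g. 60 -> 12-20).
        by_abstract = [p for p in by_title if abstract_keep(p["abstract"])]
        # Stage 3 (actually visiting the posters) happens in person, not in code.
        return by_abstract

Starting from ~600 posters per session, this leaves on the order of 12-20 posters actually worth visiting.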
A clear disadvantage of conferences is the time lag; by the time papers are presented, some of the main results are old and well known, but in my view a lot of the value is in the long tail of results which are sometimes very useful but not attention-grabbing.
3. ML research community as a control group
My vague impression is that in conceptual research, mainstream ML research lags behind the LW/AI safety community by something between 1 and 5 years, rediscovering topics discussed here. Some examples:
ICML poster & oral presentation
The Platonic Representation Hypothesis is an independent version of Natural abstractions discussed here for about 4 years.
A Roadmap to Pluralistic Alignment deals with Self-unalignment problem and Coherent extrapolated volition
Plenty of research on safety protocols like debate, IDA,...
Prior work published in the LW/AI safety community is almost never cited or acknowledged - in some cases because it is more convenient to claim the topic is completely novel, but I suspect in many cases researchers are genuinely not aware of the existing work, which makes their contribution a useful control: if someone starts thinking about these topics, unaware of the thousands of hours spent on them by dozens of people, what will they arrive at?
4. What 'experts' think
The ML research community is the intellectual home of many people expressing public opinions about AI risk. In my view, b...