LW - AI #80: Never Have I Ever by Zvi

Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #80: Never Have I Ever, published by Zvi on September 10, 2024 on LessWrong.
(This was supposed to be on Thursday but I forgot to cross-post)
Will AI ever make art? Fully do your coding? Take all the jobs? Kill all the humans?
Most of the time, the question comes down to a general disagreement about AI capabilities. How high on a 'technological Richter scale' will AI go? If you feel the AGI and think capabilities will greatly improve, then AI will also be able to do any particular other thing, and arguments that it cannot are almost always extremely poor. However, if frontier AI capabilities level off soon, then it is an open question how far we can get that to go in practice.
A lot of frustration comes from people implicitly making the claim that general AI capabilities will level off soon, usually without noticing they are doing that. At its most extreme, this is treating AI as if it will only ever be able to do exactly the things it can already do. Then, when it can do a new thing, you add exactly that new thing.
Realize this, and a lot of things make a lot more sense, and are a lot less infuriating.
There are also continuous obvious warning signs of what is to come, that everyone keeps ignoring, but I'm used to that. The boat count will increment until morale improves.
The most infuriating thing unrelated to all that was the DOJ going after Nvidia. It sure looked like the accusation was that Nvidia was too good at making GPUs. If you dig into the details, you do see accusations of what would be legitimately illegal anti-competitive behavior, in which case Nvidia should be made to stop doing that. But one cannot shake the feeling that the core accusation is still probably too much winning via making too good a product. The nerve of that Jensen.
Table of Contents
1. Introduction.
2. Table of Contents.
3. Language Models Offer Mundane Utility. Sorry, what was the question?
4. Language Models Don't Offer Mundane Utility. A principal-agent problem?
5. Fun With Image Generation. AI supposedly making art, claims AI never will.
6. Copyright Confrontation. OpenAI asks for a mix of forgiveness and permission.
7. Deepfaketown and Botpocalypse Soon. How to fool the humans.
8. They Took Our Jobs. First it came for the unproductive, and the call centers.
9. Time of the Season. If no one else is working hard, why should Claude?
10. Get Involved. DeepMind frontier safety, Patel thumbnail competition.
11. Introducing. Beijing AI Safety and Governance, Daylight Computer, Honeycomb.
12. In Other AI News. Bigger context windows, bigger funding rounds.
13. Quiet Speculations. I don't want to live in a world without slack.
14. A Matter of Antitrust. DOJ goes after Nvidia.
15. The Quest for Sane Regulations. A few SB 1047 support letters.
16. The Week in Audio. Dario Amodei, Dwarkesh Patel, Anca Dragan.
17. Rhetorical Innovation. People feel strongly about safety. They're against it.
18. The Cosmos Institute. Philosophy for the age of AI.
19. The Alignment Checklist. What will it take?
20. People Are Worried About AI Killing Everyone. Predicting worries doesn't work.
21. Other People Are Not As Worried About AI Killing Everyone. What happened?
22. Five Boats and a Helicopter. It's probably nothing.
23. Pick Up the Phone. Chinese students talk about AI, safety and regulation.
24. The Lighter Side. Do we have your attention now?
Language Models Offer Mundane Utility
Prompting suggestion reminder, perhaps:
Rohan Paul: Simply adding "Repeat the question before answering it." somehow makes the models answer the trick question correctly.
Probable explanations:
Repeating the question in the model's context significantly increases the likelihood of the model detecting any potential "gotchas."
One hypothesis is that maybe it puts the model into more of a completion mode vs answering from a c...
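A minimal sketch of what testing this looks like, assuming the OpenAI Python SDK; the model name, the trick question, and the ask helper below are illustrative placeholders, not anything from the quoted thread:
```python
# Sketch of the "repeat the question" prompt tweak, assuming the OpenAI Python SDK.
# The model name and trick question are placeholders chosen for illustration.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

TRICK_QUESTION = "A farmer has 3 cows. All but 2 die. How many cows are left?"

def ask(question: str, repeat_first: bool) -> str:
    """Send the question, optionally prefixed with the repeat-the-question instruction."""
    prefix = "Repeat the question before answering it. " if repeat_first else ""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; swap in whichever model you are testing
        messages=[{"role": "user", "content": prefix + question}],
    )
    return response.choices[0].message.content

print("Baseline:   ", ask(TRICK_QUESTION, repeat_first=False))
print("With repeat:", ask(TRICK_QUESTION, repeat_first=True))
```
Running both calls and comparing the answers is the whole experiment; any chat-style API would serve equally well here.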