LW - The last era of human mistakes by owencb

Content provided by The Nonlinear Fund. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by The Nonlinear Fund or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ko.player.fm/legal.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The last era of human mistakes, published by owencb on July 25, 2024 on LessWrong.

Suppose we had to take moves in a high-stakes chess game, with thousands of lives at stake. We wouldn't just find a good chess player and ask them to play carefully. We would consult a computer. It would be deeply irresponsible to do otherwise. Computers are better than humans at chess, and more reliable. We'd probably still keep some good chess players in the loop, to try to catch possible computer error. (Similarly, we still have pilots for planes, even though the autopilot is often safer.) But by consulting the computer we'd remove the opportunity for humans to make a certain type of high-stakes mistake.

A lot of the high-stakes decisions people make today don't look like chess, or flying a plane. They happen in domains where computers are much worse than humans. But that's a contingent fact about our technology level. If we had sufficiently good AI systems, they could catch and prevent significant human errors in whichever domains we wanted them to. In such a world, I think they would come to be employed for just about all suitable and important decisions. If some actors didn't take advice from AI systems, I would expect them to lose power over time to actors who did. And if public institutions were making consequential decisions, I expect that it would (eventually) be seen as deeply irresponsible not to consult computers.

In this world, humans could still be responsible for taking decisions (with advice). And humans might keep closer to sole responsibility for some decisions. Perhaps deciding what, ultimately, is valued. And many less consequential decisions, but still potentially large at the scale of an individual's life (such as who to marry, where to live, or whether to have children), might be deliberately kept under human control[1]. Such a world might still collapse. It might face external challenges which were just too difficult. But it would not fail because of anything we would parse as foolish errors.

In many ways I'm not so interested in that era. It feels out of reach. Not that we won't get there, but that there's no prospect for us to help the people of that era to navigate it better. My attention is drawn, instead, to the period before it. This is a time when AI will (I expect) be advancing rapidly. Important decisions may be made in a hurry. And while automation-of-advice will be on the up, it seems like wildly unprecedented situations will be among the hardest things to automate good advice for. We might think of it as the last era of consequential human mistakes[2].

Can we do anything to help people navigate those? I honestly don't know. It feels very difficult (given the difficulty, at our remove, of even identifying the challenges properly). But it doesn't feel obviously impossible.

What will this era look like? Perhaps AI progress is blisteringly fast and we move from something like the world of today straight to a world where human mistakes don't matter. But I doubt it.
On my mainline picture of things, this era - the final one in which human incompetence (and hence human competence) really matters - might look something like this:

- Cognitive labour approaching the level of human thinking in many domains is widespread, and cheap
- People are starting to build elaborate ecosystems leveraging its cheapness …
  - … since if one of the basic inputs to the economy is changed, the optimal arrangement of things is probably quite different (cf. the ecosystem of things built on the internet);
  - … but that process hasn't reached maturity.
- There is widespread access to standard advice, which helps to avoid some foolish errors, though this is only applicable to "standard" situations, and it isn't universal to seek that advice
- In some domains, AI performance is significantly bet...