
497. Spotting The Difference Between AI Innovation and AI Snake Oil feat. Arvind Narayanan

46:11
 
This content is provided by Greg La Blanc. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Greg La Blanc or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://ko.player.fm/legal.

Where is the line between fact and fiction in the capabilities of AI? Which predictions and promises about the future of AI are reasonable, and which are hype manufactured for the benefit of the industry or of the company making the outsized claims?

Arvind Narayanan is a professor of computer science at Princeton University, the director of the Center for Information Technology Policy, and an author. His latest book is AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference.

Greg and Arvind discuss common misconceptions about AI technology, emphasizing how its capabilities are overestimated and why the distinction between predictive and generative AI matters. Arvind points out the ethical and practical issues of deploying AI in fields like criminal justice and HR. The two also explore the challenges of regulation, the historical context of technological hype, and the role academia can play in shaping AI's future. Finally, Arvind reflects on his earlier work on Bitcoin and cryptocurrency technologies and shares insights into the complexities and future of AI and blockchain.

*unSILOed Podcast is produced by University FM.*

Show Links:

Recommended Resources:

Guest Profile:

His Work:

Episode Quotes:

What can the AI community learn from medicine about testing?

28:51: Let's talk about what we can learn from medicine and what maybe we shouldn't take from them. I think that the [medical] community internalized a long time ago that the hard part of innovation is not the building, but the testing. And the AI community needs to learn that. Traditionally, in machine learning, the building was the hard part, and everybody would evaluate on the same few sets of benchmarks. And that was okay because they were mostly solving toy problems as they were building up the complexities of these technologies. Now, we're building AI systems that need to do things in the real world. And the building, especially with foundation models, you build once and apply it to a lot of different things. Right? That has gotten a lot easier—not necessarily easier in terms of technical skills, but in terms of the relative amount of investment you need to put into that, as opposed to the testing—because now you have to test foundation models in a legal setting, medical setting, [and] hundreds of other settings. So that, I think, is one big lesson.

Replacing broken systems with AI can escalate the problem

08:36: Just because one system is broken doesn't mean that we should replace it with another broken system instead of trying to do the hard work of thinking about how to fix the system. And fixing it with AI is not even working because, in the hiring scenario, what's happening is that candidates are now turning to AI to apply to hundreds of positions at once. And it's clearly not solving the problem; it's only escalating the arms race. And it might be true that human decision-makers are biased; they're not very accurate. But at least, when you have a human in the loop, you're forced to confront this shittiness of the situation, right? You can't put this moral distance between yourself and what's going on, and I think that's one way in which AI could make it worse because it's got this veneer of objectivity and accuracy.

Foundation models lower costs and could shift AI research back to academia

27:22: The rise of foundation models has meant that they've kind of now become a layer on top of which you can build other things, and that is much, much less expensive. Then, building foundation models themselves—especially if it's going to be the case that scaling is going to run out—we don't need to look for AI advances by building 1 billion models and 10 billion models; we can take the existing foundation models for granted and build on top of them. Then, I would expect that a lot of research might move back to academia. Especially the kind of research that might involve offbeat ideas.
