
Content provided by Carter Phipps. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Carter Phipps or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ko.player.fm/legal.

Avi Tuschman: Can Wikipedia Save Social Media?

1:26:45
 
Manage episode 295079054 series 2933485

Misinformation. Disinformation. Fake news. Conspiracy theories. These viruses of the information age proliferate with frightening speed on social media channels like Facebook, Twitter, and YouTube, sometimes with serious consequences. Over the past few years, as the scope of the problem has become unavoidable, there has been much debate over how to deal with it, and increasing pressure to do so. Should government regulate these platforms? Should the tech companies regulate themselves? Or is there another way? Avi Tuschman, a Silicon Valley entrepreneur and pioneer in the field of psychometric AI, believes there is. Last year, he published a paper outlining a bold and creative proposal for creating a third-party reviewing system based on a website everyone knows and loves: Wikipedia. Wikipedia, as he points out, is a remarkable success. It’s accurate to an extraordinary degree. Researchers all over the world rely on it. And its success is due to a unique formula: a distributed group of non-employee volunteers who write and edit the information on the site and, in conjunction with AI processes, make sure it conforms to the site’s high standards. In his paper, entitled Rosenbaum’s Magical Entity: How to Reduce Misinformation on Social Media, he suggests that we should use “the same open-source, software mechanisms and safeguards that have successfully evolved on Wikipedia to enable the collaborative adjudication of verifiability.”

It’s a proposal that potentially avoids many of the politically tricky consequences of getting government involved in regulating public platforms run by private companies. But how exactly would it work? Where does free speech come in? How much fact-checking do we want on our social media sites? And where do we draw the line between discourse that is merely unconventional and that which is outright conspiratorial? To unpack these questions and more, I invited Avi Tuschman to join me on Thinking Ahead for what turned out to be a thought-provoking conversation.


43 episodes

