Content provided by Carter Phipps. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Carter Phipps or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

Avi Tuschman: Can Wikipedia Save Social Media?

Duration: 1:26:45

Misinformation. Disinformation. Fake news. Conspiracy theories. These viruses of the information age proliferate with frightening speed on social media channels like Facebook, Twitter, and YouTube, sometimes with serious consequences. Over the past few years, as the scope of the problem has become unavoidable, there has been much debate over how to deal with it, and increasing pressure to do so. Should government regulate these platforms? Should the tech companies regulate themselves? Or is there another way? Avi Tuschman, a Silicon Valley entrepreneur and pioneer in the field of psychometric AI, believes there is. Last year, he published a paper outlining a bold and creative proposal for a third-party reviewing system based on a website everyone knows and loves: Wikipedia. Wikipedia, as he points out, is a remarkable success. It is accurate to an extraordinary degree, and researchers all over the world rely on it. Its success is due to a unique formula: a distributed group of non-employee volunteers who write and edit the information on the site and, in conjunction with AI processes, make sure it conforms to the site’s high standards. In his paper, entitled Rosenbaum’s Magical Entity: How to Reduce Misinformation on Social Media, he suggests that we should use “the same open-source, software mechanisms and safeguards that have successfully evolved on Wikipedia to enable the collaborative adjudication of verifiability.”

It’s a proposal that potentially avoids many of the politically tricky consequences of getting government involved in regulating public platforms run by private companies. But how exactly would it work? Where does free speech come in? How much fact-checking do we want on our social media sites? And where do we draw the line between discourse that is merely unconventional and that which is outright conspiratorial? To unpack these questions and more, I invited Avi Tuschman to join me on Thinking Ahead for what turned out to be a thought-provoking conversation.

43 episodes
