
We Are Asking the Wrong Questions of YouTube and Facebook After New Zealand

7:03
Archived series ("Inactive feed" status)

When? This feed was archived on December 05, 2017 20:59 (7y ago). Last successful fetch was on July 01, 2019 14:07 (5y ago)

Why? Inactive feed status. Our servers were unable to retrieve a valid podcast feed for a sustained period.

What now? You might be able to find a more up-to-date version using the search function. This series will no longer be checked for updates. If you believe this to be in error, please check that the publisher's feed link below is valid, then contact support to request that the feed be restored or to raise any other concerns.

Content provided by My Newsbeat and Newsbeat Radio. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by My Newsbeat and Newsbeat Radio or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://player.fm/legal.
The New York Times

Late Saturday night, Facebook shared some dizzying statistics that begin to illustrate the scale of the online impact of the New Zealand massacre as the gunman’s video spread across social media. According to the social network, the graphic, high-definition video of the attack was uploaded by users 1.5 million times in the first 24 hours. Of those 1.5 million copies, Facebook’s automatic detection systems blocked 1.2 million. That left roughly 300,000 copies ricocheting around the platform to be viewed, liked, shared and commented on by Facebook’s more than two billion users.

YouTube dealt with a similar deluge. As The Washington Post reported Monday, YouTube took “unprecedented steps” to stanch the flow of copies of the video that were mirrored, re-uploaded and, in some cases, repackaged and edited to elude moderation filters. In the hours after the shooting, one YouTube executive revealed that new uploads of the attacker’s livestream appeared on the platform “as quickly as one per second.”

The volume of the uploads is staggering for what it says about the power of the platforms and about our collective desire to share horrific acts of violence. How footage of the murder of at least 50 innocent people was broadcast and distributed globally dredges up some deeply uncomfortable questions for the biggest social networks, including the existential one: Is the ability to connect at such speed and scale a benefit or a detriment to the greater good?

The platforms are not directly to blame for an act of mass terror, but the shooter’s online presence is a chilling reminder of the power of their influence. As Joan Donovan, the director of the Technology and Social Change Research Project at Harvard, told me in the wake of the shooting, “if platform companies are going to provide the broadcast tools for sharing hateful ideologies, they are going to share the blame for normalizing them.”

Numerical disclosures of any kind are unusual for Facebook and YouTube, and there’s credit due to the platforms for marshaling resources to stop the video from spreading. On one hand, the stats could be interpreted as a rare bit of transparency on the part of the companies — a small gesture to signal that they understand their responsibility to protect their users and rein in the monster of scale that they built.

But Facebook and YouTube’s choice to pull back the curtain is also a careful bit of corporate messaging. YouTube chose to share just one vague stat, while Facebook never mentioned how many views, shares or comments those 300,000 videos received before they were taken down. It’s less an open book and more an attempt to show their work and assuage critics that, despite claims of negligence, the tech giants are, in fact, “on it.” Most troubling, it’s also a bid to reframe the conversation around content moderation rather than around the role the platforms play in fostering and emboldening online extremism.

We shouldn’t let them do it. Content moderation is important and logistically thorny, but it is not the existential question. Through new monitoring systems, constant tweaking of algorithmic filters, robust investments in human intervention and comprehensive trust and safety policies written by experts, companies can continue to get better at protecting users from offensive content.

But for those in the press and Silicon Valley to obsess over the granular issue of how fast the social networks took down the video is to focus on the symptoms instead of the disease. The horror of the New Zealand massacre should be a wake-up call for Big Tech and an occasion to interrogate the architecture of social networks that incentivize and reward the creation of extremist communities and content.

Focusing only on moderation means that Facebook, YouTube and other companies, such as Reddit, don’t have to answer for the ways in which their platforms are meticulously engineered to encourage the creation of incendiary content, rewarding it with eyeballs, likes and, in some cases, ad dollars. Nor do they have to answer for how that reward system creates a feedback loop that slowly pushes unsuspecting users further down a rabbit hole toward extremist ideas and communities.

On Facebook or Reddit, this might mean the ways in which people are encouraged to share propaganda, divisive misinformation or violent images in order to amass likes and shares. It might mean the creation of private communities in which toxic ideologies are allowed to fester, unchecked. On YouTube, the same incentives have created cottage industries of shock jocks and livestreaming communities dedicated to bigotry cloaked in amateur philosophy.

The YouTube personalities and the communities that spring up around the videos become important recruiting tools for the far-right fringes. In some cases, new features like “Super Chat,” which allows viewers to donate to YouTube personalities during livestreams, have become major fund-raising tools for the platform’s worst users — essentially acting as online telethons for white nationalists.

Part of what’s so unsettling about the New Zealand shooting suspect’s online persona is how it lays bare the way these forces can occasionally come together for violent ends. His supposed digital footprint isn’t just upsetting because of its content but because of how much of it appears designed to delight fellow extremists. The decision to call the attack a “real life effort post” reflects an eerie migration of conspiratorial hate from the pages of online forums into the real world — a grim reminder of how online communities may be emboldening their most violent and unstable individuals and nudging them toward action.

Stewards of our broken online ecosystem need to accept responsibility — not just for moderating the content but for the cultures and behaviors they can foster. Accepting that responsibility will require a series of hard conversations on the part of the tech industry’s most powerful companies. It’ll involve big questions about the morality of the business models that turned these start-ups into money-printing behemoths, and even tougher questions about whether connectivity at scale is a universal good or an untenable phenomenon that’s slowly pushing us toward disturbing outcomes. And while these are hardly the conversations Facebook or YouTube want to have, they’re the ones we desperately need now.

1022 episodes
