
Opinion | We’re Asking the Wrong Questions of YouTube and Facebook After New Zealand

Late Saturday night, Facebook shared some dizzying statistics that begin to illustrate the scale of the online impact of the New Zealand massacre as the gunman’s video spread across social media.

According to the social network, the graphic, high-definition video of the attack was uploaded by users 1.5 million times in the first 24 hours. Of those 1.5 million copies of the video, Facebook’s detection systems automatically blocked 1.2 million. That left roughly 300,000 copies ricocheting around the platform to be viewed, liked, shared and commented on by Facebook’s more than two billion users.

YouTube dealt with a similar deluge. As The Washington Post reported Monday, YouTube took “unprecedented steps” to stanch the flow of copies of the video that were mirrored, re-uploaded and, in some cases, repackaged and edited to elude moderation filters.

In the hours after the shooting, one YouTube executive revealed that new uploads of the attacker’s livestream appeared on the platform “as quickly as one per second.”

The volume of the uploads is staggering, both for what it says about the power of the platforms and about our collective desire to share footage of horrific acts of violence. How footage of the murder of at least 50 innocent people was broadcast and distributed globally dredges up some deeply uncomfortable questions for the biggest social networks, including the existential one: Is the ability to connect at such speed and scale a benefit or a detriment to the greater good?

The platforms are not directly to blame for an act of mass terror, but the shooter’s online presence is a chilling reminder of the power of their influence. As Joan Donovan, the director of the Technology and Social Change Research Project at Harvard, told me in the wake of the shooting, “If platform companies are going to provide the broadcast tools for sharing hateful ideologies, they are going to share the blame for normalizing them.”

Numerical disclosures of any kind are unusual for Facebook and YouTube. And there’s credit due to the platforms for marshaling resources to stop the video from spreading. On the one hand, the stats could be interpreted as a rare bit of transparency on the part of the companies — a small gesture to signal that they understand their responsibility to protect their users and rein in the monster of scale that they built.

But Facebook and YouTube’s choice to pull back the curtain is also a careful bit of corporate messaging. YouTube chose to share just one vague stat, while Facebook never mentioned how many views, shares or comments those 300,000 videos received before they were taken down. It’s less an open book and more an attempt to show their work and assuage critics that, despite claims of negligence, the tech giants are, in fact, “on it.”

Most troubling, it’s also a bid to reframe the conversation toward content moderation rather than addressing the role the platforms play in fostering and emboldening online extremism.

We shouldn’t let them do it.

Content moderation is important and logistically thorny, but not existential. Through the implementation of new monitoring systems, the constant tweaking of algorithmic filters, robust investments in human intervention and comprehensive trust and safety policies written by experts, companies can continue to get better at protecting users from offensive content. But for those in the press and Silicon Valley to obsess over the granular issue of how fast social networks took down the video is to focus on the symptoms instead of the disease.

The horror of the New Zealand massacre should be a wake-up call for Big Tech and an occasion to interrogate the architecture of social networks that incentivize and reward the creation of extremist communities and content.

Focusing only on moderation means that Facebook, YouTube and other platforms, such as Reddit, don’t have to answer for the ways in which their products are meticulously engineered to encourage the creation of incendiary content, rewarding it with eyeballs, likes and, in some cases, ad dollars. Or how that reward system creates a feedback loop that slowly pushes unsuspecting users further down a rabbit hole toward extremist ideas and communities.

On Facebook or Reddit this might mean the ways in which people are encouraged to share propaganda, divisive misinformation or violent images in order to amass likes and shares. It might mean the creation of private communities in which toxic ideologies are allowed to fester, unchecked. On YouTube, the same incentives have created cottage industries of shock jocks and livestreaming communities dedicated to bigotry cloaked in amateur philosophy.

The YouTube personalities and the communities that spring up around the videos become important recruiting tools for the far-right fringes. In some cases, new features like “Super Chat,” which allows viewers to donate to YouTube personalities during livestreams, have become major fund-raising tools for the platform’s worst users — essentially acting as online telethons for white nationalists.

Part of what’s so unsettling about the New Zealand shooting suspect’s online persona is how it lays bare how these forces can occasionally come together for violent ends. His supposed digital footprint isn’t just upsetting because of its content but because of how much of it appears designed to delight fellow extremists. The decision to call the attack a “real life effort post” reflects an eerie migration of conspiratorial hate from the pages of online forums into the real world — a grim reminder of how online communities may be emboldening and nudging their most violent and unstable individuals.

Stewards of our broken online ecosystem need to accept responsibility — not just for moderating the content but for the cultures and behaviors they can foster. Accepting that responsibility will require a series of hard conversations on the part of the tech industry’s most powerful companies. It’ll involve big questions about the morality of the business models that turned these start-ups into money-printing behemoths. And even tougher questions about whether connectivity at scale is a universal good or an untenable phenomenon that’s slowly pushing us toward disturbing outcomes.

And while these are hardly the conversations Facebook or YouTube want to have, they’re the ones we desperately need now.


Charlie Warzel, a New York Times Opinion writer at large, covers technology, media, politics and online extremism. @cwarzel
