
Opinion | Can We Block a Shooter’s Viral Aspirations?

In “We’re Asking the Wrong Questions of YouTube and Facebook After New Zealand,” Charlie Warzel wrote about the online impact of the video allegedly streamed by the gunman during the shootings in Christchurch and the need for social media and tech companies to take more responsibility for the content posted by extremists on their platforms.

Readers responded with questions and concerns about policing online content. Mr. Warzel replied to some of them; their comments and his responses follow. They have been edited for length and clarity. — Rachel L. Harris and Lisa Tarchak, senior editorial assistants.

Bob, Concord, Mass.: What would be so bad about deleting the accounts of those who upload videos that clearly violate the Terms of Use agreement? If users knew there were consequences to sharing certain material, there’d be significantly less sharing of these videos. Of course, eliminating accounts would reduce the overall number of users on the platform and could be perceived as negatively affecting shareholder value.

Charlie Warzel: As worried as tech companies are about getting into the business of de-platforming, what you’re describing here is, essentially, standard-practice Terms of Service enforcement. It’s frequently lost in Big Tech’s free-speech debates that every user of most of these platforms has entered into an agreement to abide by that company’s rules of conduct, and in doing so has granted the companies broad, sweeping permission to suspend or permanently remove offending users and content.

That’s all to say that the platforms are well within their rights to impose harsh bans on those who upload and attempt to disseminate terroristic propaganda. And, in many respects, the companies already do this. Last year, Twitter announced it had suspended 1.2 million accounts for terrorist content since 2015.

The Christchurch video is potentially more complicated for the big tech platforms. The unprecedented nature of the footage — livestreamed, high definition, horrifically graphic — and its inherent newsworthiness meant that a cross section of trolls, curious onlookers and even some media organizations felt inclined to repost the footage in the immediate aftermath. It’s important as well to remember that it’s “people, not algorithms” sharing this stuff. And the tech platforms, which tend to bristle at policing speech of any kind, would most likely take issue with the blanket suspension of journalists who may have recklessly shared or reposted the video.

But perhaps the fallout from Christchurch will pressure tech companies to draw a clear line in the sand around videos that undoubtedly fall into the buckets of terroristic content or graphic mass violence. Very strict rules around redistributing images of mass murder, say, might compel those with the loudest megaphones to think twice before sharing. And there’s a decent argument to be made that what’s needed in our current media ecosystem is a little more friction and restraint. The platforms, of course, are the main contributors to our frictionless sharing environment. It’s a potentially intractable problem and, as our editorial board argued just last week, “it’s telling that the platforms must make themselves less functional in the interests of public safety.”

Antony Shugaar, La Jolla, Calif.: Why is this kind of footage not every bit as illegal as child pornography — possession, posting and viewing? After all, it’s the harm done in the making of both kinds of videos that’s at issue, as well as the corrosive effects of their presence in society, to say nothing of the effects on the victims.

Warzel: It’s been particularly revealing watching this situation play out overseas. Last Monday, New Zealand’s Office of Film and Literature Classification declared it illegal to view, possess or distribute the video. Similarly, Australian telecom companies took aggressive action against sites that were still hosting footage of the attack, blocking browser access to sites like 4chan, 8chan, the message board Voat and the video platform LiveLeak.

Obviously, free speech concerns in the United States complicate any discussion around internet service providers blocking access to major websites, and I highly doubt we’ll see anything in America approximating such swift unilateral action. That said, you’re hinting at a fascinating argument that’s likely to gain steam around instituting legal penalties for distributing acts of terroristic violence or propaganda.

However, issues of what constitutes possession in a digital world can be contentious (I’m thinking about edge cases like accidental downloads and dissemination through file-sharing services, for instance). This is murky legal territory that’s out of my depth, but I think you’d need very strict legal definitions, like those in child pornography laws, of what constitutes a terroristic, violent act that’s not protected by the First Amendment and how to distinguish that from newsworthy footage of such an act. Is the intent of the video taken into account? Does it matter who uploaded or made it? Is anyone really ready and willing to tackle those questions?

Rather than the government setting limits on speech, it seems far more likely we’ll see stricter guidelines on moderation from the platforms themselves, or from other private companies, like web hosts. After last year’s mass shooting at a synagogue in Pittsburgh, for example, the web hosting provider Joyent suspended service for the social network Gab in response to anti-Semitic and conspiratorial posts left there by the shooting suspect.

Tom, New York: Facebook and YouTube are rightfully being examined more critically today by both the general public and legislators. I am wondering, though, what actions could be taken toward platforms like 4chan and 8chan, which also do significant damage and have often been the breeding ground or host platform for hate speech and radicalization. Forums operate differently from sites like Facebook and YouTube, obviously, but what best practices and regulations have evolved or could evolve to curb their impact on at-risk users or willful actors?

Warzel: The 4chans and 8chans of the world are a distinct and tricky challenge. They lack the Big Tech business model and their owners are largely free from the scrutiny that the public-facing platforms now receive. They possess nowhere near the dizzying scale of Facebook and Google and yet they’re important, influential engines for hate online. Forums like these, on top of solidifying communities around toxic ideologies, are also breeding grounds for many of the memes and viral content that are then distributed to the larger social networks, like Reddit, Twitter and Facebook. Last year, for example, a group called the Network Contagion Research Institute found that 4chan’s “Politically Incorrect” message board “served as the most prolific source for many of the most offensive memes that eventually spread widely across the internet.”

Since these communities are run privately by owners who tend to see their sites as raucous experiments in maximalist free speech, it’s unclear that anything could really change. There’s always the possibility of internet service providers blocking access or web hosting companies terminating the sites’ contracts to exert pressure, but that’s hypothetical at best.

That said, as conspiratorial hate jumps off the forum pages and into the real world, it’s possible that these communities will get greater scrutiny from law enforcement. Last week, public schools in Charlottesville, Va., were shut down for two days after threats of an “ethnic cleansing” massacre were posted to 4chan and 8chan. The posts spurred a vigorous investigation by the local police as well as the F.B.I., and on Friday a 17-year-old suspect was arrested in connection with the posts. It’s one isolated incident, but the emergence of actual investigations of credible online threats as well as real-world consequences could have a noticeable effect on the worst variety of trolls.

And if the internet’s seediest message board communities do choose to police their own platforms, the only viable solution is a corps of dedicated and empowered moderators. Some communities on sites like Reddit have had a modicum of success with this, but even the most dedicated teams of “mods” tend to lack the support and resources they need from their site’s administrators. Forums like 4chan and 8chan do sometimes have moderators, but, as BuzzFeed News recently reported, their “quixotic quest to clean the internet’s deepest pit of misery is a meme there.”

Mark Gardiner, Kansas City, Mo.: An across-the-board, built-in lag of even a few minutes on social media sites would have enabled Facebook, YouTube, etc. to eliminate most of the viral propagation of this vile video. A lag of five minutes from the time a user posts content until it goes live, and a similar lag on reposting, would not hurt meaningful social communication and would reduce brainless “virality.” It would also greatly improve the ability of social media companies to prevent obviously antisocial uses of their platforms.

Would the New Zealand psychopath have gone on that rampage if he knew it would not be livestreamed? Maybe not. And even if he did, he would not have been able to inspire millions of other white nationalists.

Warzel: I’ve seen this “tape delay” idea debated in the last few days, and it’s an interesting one. In practice, though, it would be quite difficult to carry out. For example, do you add an upload lag to all videos or only to those from certain accounts? If it’s all videos, does that mean artificial intelligence ought to flag them for potential violence during the delay? On Wednesday evening, Facebook argued that its flagging systems, which are adequate for screening and catching nudity and certain violent imagery, would most likely flag plenty of innocuous videos as well, producing false positives.

So what about human moderators? The sophistication of the internet’s worst communities seems to necessitate human moderation to separate the innocent pranks from the insidious trolling. Well-trained moderators with adequate time to pore over videos could distinguish satire from hate speech and weigh the cultural standards and norms that might make a video innocuous in one region and deeply offensive in another. But, as some great reporting has revealed recently, moderators tend to be outside contractors subjected daily to torrents of psychologically traumatizing content, often without the support or pay they deserve. Rather than spend time with a video, they’re forced to pass judgment in a matter of seconds. Still, they’re far more expensive than an algorithm and far less efficient, which is why tech companies tend to prefer deeply imperfect A.I. solutions.

Hovering over all these issues is the platforms’ gargantuan scale. More than 400 hours of content are uploaded to YouTube every minute. Facebook disclosed in 2015 that its videos at the time reached eight billion views a day. The platforms used to tout these metrics as triumphs of connectivity, but it seems they’ve also become incredible liabilities. As a former colleague of mine — who was once a website comment moderator — wrote recently, “sites like Facebook, YouTube and Twitter have failed to support clear and repeatable moderation guidelines for years, while their platforms have absorbed more and more of our basic social functions.”

The Times is committed to publishing a diversity of letters to the editor. We’d like to hear what you think about this or any of our articles. Here are some tips. And here’s our email: [email protected].

Follow The New York Times Opinion section on Facebook, Twitter (@NYTopinion) and Instagram.

Charlie Warzel, a New York Times Opinion writer at large, covers technology, media, politics and online extremism. @cwarzel
