Charlie Warzel: It’s been a rough week for YouTube, even by tech backlash standards.
Just this week the company has come under fire for creating a “digital playground” for pedophiles. Then, it attempted to purge extremist content from its site, only to accidentally delete, as part of the crackdown, a number of videos on Nazi history made by professors. And finally, there’s been the harassment debate over the conservative commentator Steven Crowder’s mocking of the Vox staffer Carlos Maza.
Quick summary: Mr. Crowder used slurs about Mr. Maza’s ethnicity (he’s Cuban-American) and sexual orientation (he’s gay). Mr. Crowder said the comments were not meant to be offensive. Mr. Maza has openly called for YouTube to take action against Mr. Crowder. YouTube, after days of deliberation, said the comments did not qualify as harassment. A day later, after public outcry, YouTube backtracked and demonetized Mr. Crowder’s channel, noting he’d violated rules. To make matters worse, the whole thing happened via a series of replies on Twitter. Sarah, I don’t have a question here so much as a statement: Help.
Sarah Jeong: Here’s my take: Yikes.
What a mess. I thought this Washington Post interview with Tarleton Gillespie, an academic who studies content moderation, was illuminating. For people who aren’t extremely online, all of this is going to seem truly alien. For others, it’s all too familiar. I’m reminded yet again of this 2014 piece, “The Future of the Culture Wars Is Here, and It’s Gamergate,” on a video games controversy that exploded into a misogynistic maelstrom of death threats (and persists today in strains of the alt-right).
YouTube bounced back and forth between “he’s demonetized” and “he’s not demonetized” like four times on one of its official Twitter accounts. Both of us have covered platform moderation, so we know this kind of inexplicable reasoning and inconsistency is all too common, but it’s still wild to watch it play out when the stakes are so high. It’s driving home for a lot of people who don’t necessarily follow tech the way that we do that the internet platforms are the primary battlegrounds of the culture wars, and that content moderation is going to be at the heart of it.
Charlie: When trying to understand YouTube’s position, I liked Felix Salmon’s idea that platforms can do rules-based or principles-based enforcement. If you’re going to do rules-based enforcement, you need people to trust the decisions that are handed down. They need to be transparent and consistent. YouTube has consistently undermined that trust with reversals. There is little consistency and even less transparency. So it fails at the first of the two options. And when it comes to principles, it either lacks them or is too afraid of alienating users to express them publicly.
The end result is that YouTube loses all trust, makes almost nobody happy and basically only empowers the worst actors by giving them more reason to have grievances. Even when YouTube does try to explain the logic of its rules, as it eventually did this week, it inadvertently lays out a map for how bad actors can sidestep takedowns, giving bad-faith creators workarounds to harass or profit off bigotry. You know you’re in a real dark place when even the good decisions YouTube makes are troubling.
Sarah: YouTube is entitled to shoot entirely from the hip, but instead it leans on a process and a system of written rules. Those rules and processes mimic the law. The problem is, they don’t mimic it very well. What drives me nuts is that more or less all of the platforms have lawyers crafting these policies and processes. They all took the same law school classes I did. They know legal theory.
Charlie: O.K. I’ll bite: What does legal theory have to do with any of this?
Sarah: Most laws are a mix of rules and standards. Rules are rigid, and the most rigid are referred to as “bright-line” rules because they’re so straightforward to interpret: If you steal a loaf of bread, your hand gets chopped off. A standard is more flexible. There are multistep tests and the weighing of various factors. First Amendment law, for instance, has a lot of standards in it.
The problem with bright-line rules is that they often lead to injustice because they’re not flexible enough. On the face of it, it seems like standards should be better, right? But standards are harder to enforce, so you’re more likely to get delays in the courts and inconsistency in decisions. So the more vague and flexible a law is, the more it takes into account the totality of the circumstances, the more it’s actually likely to lead to injustice. Standards are also harder to predict, so there’s social uncertainty about what’s acceptable and what’s not. There’s a reason the most-watched Supreme Court cases involve standards.
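To make that distinction concrete, here is a purely illustrative sketch in Python. Nothing in it reflects any real platform’s policy; the factors, weights and threshold are invented for the example. The point is only that a rule can be applied mechanically, while a standard weighs the totality of the circumstances and is therefore harder to predict.

```python
# Purely hypothetical sketch: a "bright-line rule" vs. a "standard" for a moderation call.
# None of these factors, weights or thresholds come from any real platform's policy.

def bright_line_rule(video):
    """Rule: rigid and mechanical. Easy to apply, blind to context."""
    # e.g. "any slur is a violation" -- no weighing, no discretion
    return "violation" if video["contains_slur"] else "no violation"

def standard(video):
    """Standard: weighs the totality of the (invented) circumstances."""
    factors = {
        "contains_slur": 2,
        "targets_an_individual": 3,
        "is_newsworthy_commentary": -2,
        "pattern_of_repeated_conduct": 3,
    }
    score = sum(weight for name, weight in factors.items() if video.get(name))
    # The line is fuzzier here: similar cases can land on either side of it.
    return "violation" if score >= 4 else "no violation"

example = {
    "contains_slur": True,
    "targets_an_individual": True,
    "is_newsworthy_commentary": True,
    "pattern_of_repeated_conduct": False,
}
print(bright_line_rule(example))  # violation -- predictable, but inflexible
print(standard(example))          # no violation -- flexible, but hard to predict
```

The toy rule is trivially predictable and the toy standard is not, which is exactly the trade-off being described; neither one spares you the hard judgment calls.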
Charlie: And in the platform sense, we’re seeing neither, really. It’s the guise of bright-line rules with contradictory flexibility and inconsistency, rendering them mostly worthless.
Sarah: The platforms are often really vague about their rules and decision-making processes because they think bad actors are going to game them. It’s similar to how they handle spam.
Charlie: The problem is that one is selling Viagra and the other is a political minefield.
Sarah: I’d argue that spam is a political minefield! Just maybe, um, not as high stakes. But yes, the platforms mimic the law without any of the trappings that make law work. Think about how different this would all be if the platforms issued written rulings the way judges do, explaining their reasoning and interpretation of their policies.
Charlie: It’s always so striking to me how these companies and their rules feel mostly formulated from the point of view of a democratic state or country. These are publicly traded platforms for viral advertising — not a state! There’s no democracy here. But even in this there’s a huge lack of consistency. They draw up terms of service and rules and talk about protecting speech but also retreat to the neutral-platform position when it’s convenient, suggesting they’re a bit more like a utility, maybe — only without the pesky regulations.
Max Read over at New York magazine put it nicely a few years ago in a piece about Twitter’s inability to enforce its rules. “Maybe most importantly, stop trying to strike a Liberal-Democratic balance between free-speech rights and freedom-from-harassment rights and pick the one that’s more important.” I think that’s the tension we’re seeing and the reason we’re all pretty miserable. Another big issue here is that YouTube has helped create an ecosystem of influencer “debate” culture that it doesn’t really understand.
Sarah: Oh, I agree that they don’t understand at all what they’ve spawned. I don’t think anyone does.
Charlie: Mr. Crowder can, in this case, hide behind YouTube’s peculiar creator culture. He can say: Mr. Maza and Vox have a big audience. They sometimes criticize me or my movement. I can respond in my own, slur-filled way. Creators with big audiences creating content to respond to other creators with big audiences is a big part of the way YouTube works. One that I bet it instinctively wants to protect. Or that it doesn’t know how to protect without adding to harassment.
Sarah: Well, the entire system is designed to make creators desperate, and to make them behave in increasingly erratic ways. The creators are pushed to constantly generate content, as cheaply and as quickly as possible. It’s potentially a very lucrative business, but it’s entirely dependent on a mysterious and ever-shifting algorithm that rewards and punishes videos in search and recommendations, seemingly at random.
And on the audience’s side? Recommendation and search behave like radicalization engines, pushing people further toward extremism. It’s not intentionally designed to amplify the worst mob impulses of humanity, but here we are.
Charlie: Big creators also have an innate understanding of their audience. Engaging, and engaging with maximum public confrontation, is a learned behavior on YouTube.
Sarah: But that kind of behavior is, in addition, financially rewarded. And rewarded with visibility by the algorithm! You act like that, and you rise to the top. It becomes what everyone sees. It becomes the tone of the “conversation.”
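As a toy model of that dynamic (an invented ranking function, not YouTube’s actual recommendation system), an engagement-weighted ranker will surface the most confrontational video simply because confrontation generates the most comments and shares:

```python
# Toy model only: an invented engagement-weighted ranker, not any real platform's algorithm.

videos = [
    {"title": "Calm explainer",             "views": 900,  "comments": 40,  "shares": 10},
    {"title": "Measured critique",          "views": 1200, "comments": 80,  "shares": 25},
    {"title": "RESPONSE: he got DESTROYED", "views": 800,  "comments": 600, "shares": 300},
]

def engagement_score(video):
    # Hypothetical weights: active engagement (comments, shares) counts far more
    # than passive views, which is what lets confrontational content outrank the rest.
    return 0.1 * video["views"] + 2.0 * video["comments"] + 5.0 * video["shares"]

for video in sorted(videos, key=engagement_score, reverse=True):
    print(f"{engagement_score(video):7.1f}  {video['title']}")
```

The ranker has no concept of confrontation or harassment; it only optimizes engagement, and the most confrontational video happens to generate the most of it, so that is what everyone sees.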
Charlie: Right. YouTube created all of that. And yet, it doesn’t fully understand it. Like you said, who could? Which is why — to me — you have to adopt a principles-based solution to moderation.
Sarah: The thing that troubles me the most is that even if YouTube were doing the best it could possibly do, wouldn’t we all still live in an algorithmic hell?
Charlie: Yes? Maybe?
Sarah: I think we still would.
Charlie: What’s the answer? The platforms are never going to self-regulate. It’s in YouTube and Google’s best interest to maintain something similar to the status quo and continue to rack up views and advertising dollars. These P.R. disasters, as Facebook has shown, don’t usually have a large effect on the bottom line. It seems we’re in for more opaque, inconsistent rules-based enforcement and speech debates. We may get rid of swastika-wearing neo-Nazi channels (and that’s true progress!), but those are the cut-and-dried calls. The real platform speech wars tend to be fought on the margins, by the players who can dress up conspiracy or bigotry and tap-dance on the terms of service without getting thrown off.
Sarah: I’m not sure what the answer is here. Look, I’m all for regulation, but regulation on the basis of speech is bound to be a disaster — and probably unconstitutional to boot. And the people who are currently in power aren’t just weirdly comfortable with white nationalism; they also have a strong antipathy toward journalism and free speech. It would be a self-own to give the government more power over speech. My “hope” here is that breaking up Google and YouTube would do a lot to stem the network effects that come from being the only game in town.
Charlie: Since those days are a long way away (if they ever come), I suppose we’ll just have more of these conversations to look forward to. Thanks, Sarah!
Charlie Warzel, a New York Times Opinion writer at large, covers technology, media, politics and online extremism. He welcomes your tips and feedback: [email protected] | @cwarzel