Addressing Transgressive Content on the Internet: A Metaphor for Hate

Preamble:

This is the first in a three-part series of blog posts about dealing with transgressive content on the internet. This post is primarily about a metaphor for transgressive content that I think is useful for shaping policy options. The second post is about understanding the various layers of the internet, and the risks and benefits of tackling content issues at each one, and the third explores the processes for addressing content issues at the infrastructure layer of the internet.

Part 1. Pollution

An interesting and frequently unpleasant part of my job requires me to think about all the terrible things people put on the internet, and how my company should respond when those terrible things rely on our systems.

I’ve been thinking about a metaphor for hate on the internet for a while, and it seeped into an interview I gave to the Mozilla-run podcast IRL. It seemed worthwhile to expand on that thought.

Put simply, hate manifested in internet content is the toxic by-product of the internet platform industry; in other words, it is the pollution of the internet. Consequently, it may be helpful to apply some of the thinking and approaches we use for pollution to the way we think about transgressive content.

Unfettered Speech

It seems that on any platform that allows essentially unfettered speech, a small but unbelievably unpleasant portion of that speech inevitably becomes hateful. Try using any online platform, be it a forum, a newspaper comment section, or a subreddit, and you will encounter people being assholes. This is fundamentally a problem with humans, and while hateful speech is nothing new, the access, reach, and amplifying effects of current platforms are unlike anything that has come before.

Platforms like Facebook, Twitter, YouTube, and Instagram encourage and require people to generate content. Having a user base that’s engaged and creating content allows platforms to sell ads with increasing levels of sophistication and monetize user data, which is to say, build a business. In order to allow the broadest array of people to engage and create, these platforms have largely tried to be content-neutral.

This content-generation machine is the engine of some of the largest companies on the internet. And while that engine generates economic growth and cool, useful services, it also produces hate speech and misinformation and normalizes extremism.

Platforms have treated hate the way industry treated pollution until at least the late 1970s: as an externality they are not responsible for.

Negative Externalities

The social costs of problematic content are varied and likely immeasurable. Racism, homophobia, sexism, and bullying online keep people in the margins, stifle important conversations, and are just plain awful. The platforms that host this content grant it an air of legitimacy, at least to an undiscerning eye. Anti-vaccination misinformation presents a genuine public health risk, with outbreaks of preventable diseases threatening our most vulnerable. At the same time, the normalization of hate and conspiracy has repeatedly bled from online to offline, with murderous results. The link between a number of mass shootings and particularly heinous online forums is clear and bright.

Platforms claim neutrality, but their attempts to wave away responsibility have been repeatedly undone by algorithms that surface problematic content in order to keep users engaged, including users too young or too inexperienced to discern fact from fiction and hype from manipulation.

Cleaning up the Mess

The major content platforms are currently struggling with the social costs of their laissez-faire moderation policies. They have traditionally attempted to be as hands-off as possible, seeing themselves as responsible only for illegal content (like child sexual abuse material), but public backlash and the potential for liability appear to be slowly changing this perspective. In 2019, Twitter banned all political advertising, and Facebook began limiting the reach of anti-vaccination content.

The political, social, and legal liabilities of sustaining a business on transgressive content appear to be catching up with the major platforms, and there are signs they have begun to grapple with the issue. Facebook has announced an oversight board, though more details are needed to know how the board will operate.

At the end of the day, if your business is sustained by the generation of content and people’s interaction with it, then you have a responsibility for its effects. You cannot claim neutrality when your platform is causing material human harm. This cleanup will not be easy, cheap, or free of controversy. Over-moderation is a risk and will occur. Profit margins will decrease as platforms are required to spend more on laborious manual moderation.

And while the guidelines for moderation are still lacking, they are slowly getting better. No longer are YouTube’s terms of service a recipe for how to bully without consequences. I should also note that despite the above pessimism, I think it’s likely better that the platforms take it upon themselves to clean this up rather than relying on governments to legislate standards.

Extremes to the Margins

Ending the normalization of extreme ideas will help, but there will always be extremists, and there is no consensus on whether it’s better to have our racists in the light where we can see them or banished to the corners. And while they may find themselves unwelcome on mainstream platforms, they will likely always be able to find a home online.

That said, there is some evidence that deplatforming transgressive elements is effective, and ultimately a reduced reach will lessen the pollutive impact of hate online.