Platform police
Society is messy. Social media companies are learning the hard way as they struggle to clean up messes that happen on their watch.
Hey Trustees,
The leading story from last week’s issue discussed how, in the wake of two mass shootings, news outlets focused on the role technology platforms played in enabling the El Paso gunman to spread his racist manifesto.
That coverage may have distracted from other important conversations, but it also reflected a growing sentiment:
66% of American adults say social media companies have a responsibility to remove offensive content from their platforms.
However, only 31% trust companies to figure out what content to remove.
So, this week, let’s take a look at some of the challenges companies face when trying to moderate content online.
But first, some news:
Forevergram

Article: Rob Price / Business Insider
What happened: marketing company Hyp3r secretly collected and saved Instagram users’ location data indefinitely, violating Instagram’s policies. By using a combination of data collection techniques and exploiting an Instagram security flaw, Hyp3r was able to capture photos that were meant to disappear after 24 hours and location data that shouldn’t have been available at all.
Why it matters: Hyp3r belonged to an exclusive list of vetted “Facebook Marketing Partners,” which raises questions about how much due diligence Facebook did on these companies and adds to its seemingly endless struggle to secure users’ data.
What’s next: the Irish Data Protection Commission, which heads up the European Union’s privacy enforcement for Facebook, has opened a whopping 11 cases against the company under Europe’s new GDPR law. The commission said it expects to reach some initial decisions by the end of September, which could mean more fines and other penalties for Facebook.
YouTubig to fail

Article: Elizabeth Dwoskin / Washington Post
What happened: content moderators accused YouTube of giving special treatment to video creators who bring in significant money for the company in cases when their videos violated YouTube’s policies. The moderators described “a demoralizing work environment marked by ad hoc decisions, constantly shifting policies and a widespread perception of arbitrary standards when it came to offensive content.”
Why it matters: this accusation comes as YouTube is facing criticism over allowing problematic content to spread on its platform, addressing issues only in response to crises, and not paying or treating moderators well. If true, granting exceptions to certain creators for financial reasons would directly contradict YouTube’s public statements that it is working to clean up the platform.
What’s next: in yet another case of serendipitous timing, the week’s news tees up my main story perfectly. Read on for more.

Platform police
Social media companies are increasingly coming under fire for not doing enough, doing too much, or simply doing the wrong things to moderate “objectionable” content published by users on their platforms.
Rewind to 2012, when people still described social media as a force for good. Arab Spring protesters relied on it to help them overthrow dictators in the Middle East. President Obama’s campaign used it to better engage and connect with voters. More people sharing more content more quickly meant only good things.
Now fast forward and note the shift in rhetoric: social media is frequently accused of undermining democracy, spreading misinformation, and poisoning online discourse. In Myanmar, Facebook enabled violence against the Rohingya (while the company both underreacted and overreacted to the situation). Cambridge Analytica weaponized ill-gotten Facebook user data to manipulate voters. More people sharing more content more quickly, it turns out, can have some toxic results.
Just last week:
Online forum 8chan drew ire for not taking down a racist manifesto posted on the site by a user believed to be the El Paso gunman.
In response, the White House met with tech companies Friday, asking for tools to identify violent extremists before they act, an approach companies appeared hesitant about (because they’ve all seen Minority Report).
However, CNN reported that the Trump administration is also working on an executive order that would curtail legal protections for social media companies that take down content, the president’s latest effort to punish tech companies for alleged — but unproven — systemic political bias against conservatives.
People want someone to do something about all the internet garbage, but they don’t trust the current custodians (tech companies) and can’t even agree on what a “cleaner” digital world might look like.
I spoke with Michael Bossetta, a political and data scientist at Lund University in Sweden who specializes in the impact of social media on politics, to get some insight into where we’re headed. While he did point to some useful case studies, our conversation mostly confirmed that this is a messy topic involving lots of questions and few clear answers or paths forward.
So, with this issue, I thought it would be more helpful to highlight some of those questions and let people think through things as if they were in charge. At the very least, next time you see a news story about content moderation, you’ll be able to cut through the noise and pinpoint which aspect(s) of the problem are actually relevant.
Before we dive in, here’s some context to keep in mind:
Everything in moderation, except moderators
At first, companies said artificial intelligence would save us. However, as Bossetta noted:
A lot of research into this area recently shows the power of human moderation because interpretation has to be culturally specific.
Our cultures, languages, images, etc. are all incredibly complex, dependent on context and constantly evolving. Machine learning algorithms are improving, but they’re still far from perfect and ultimately still rely heavily on humans to design and train them.
The takeaway: recognizing that humans will play a significant role for the foreseeable future, companies have dramatically expanded their content moderation operations. Facebook now employs at least 30,000 people for this work, while YouTube announced plans in 2017 to grow its team to 10,000.
Section 230
Section 230 of the Communications Decency Act often comes up in stories about content moderation (here’s a helpful explainer). It says two key things:
Interactive computer service providers (e.g. Facebook, YouTube, and Twitter) are not considered publishers of content provided by their users.
Providers can’t be held liable for censoring (or allowing others to censor) content they consider to be “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected” (emphasis added).
Section 230 really paved the legal road for social media platforms as we know them (imagine if a young Facebook had to worry about getting sued every time a user published something legally dubious). It’s why you hear companies twist themselves into pretzels explaining how they’re not, in fact, publishers — they don’t want to lose those legal protections.
I also emphasized “constitutionally protected” because, as private companies, platforms are not bound by the First Amendment. If they wanted to completely ban all Antifa or neo-Nazi content, they legally could. The First Amendment prohibits the government — not private entities — from restricting free speech, and even then there are exceptions. Other laws governing digital content create legal liability for platforms in some specific cases, like copyright infringement and child pornography, but those cases are the minority.
The takeaway: except in rare circumstances, social media companies cannot be held legally liable (in the United States) for content posted by users or for their content moderation decisions, and they are not obligated to protect users’ freedom of speech under the First Amendment.
The value of values
Often, content moderation decisions involve endorsing certain values — or favoring one at the expense of another. Despite having legal immunity, companies have been reluctant to take stands on values for two key reasons:
Capitalism: platforms become more (financially) valuable as they add users. By adopting a policy that bans, say, anti-immigrant content, companies risk alienating users who are anti-immigration, thus cutting into their bottom line.
Libertarianism: as Bossetta pointed out:
Tech founders came to build these platforms with their own values, which is sort of this libertarian/left promoting of open connectivity and free speech to a radical degree.
The takeaway: platforms have historically had strong financial and ideological motivations for taking a hands-off approach to content moderation. It’s only recently that public pressure has forced them to take a more active role, which in turn involves taking stronger stands on values.

Contentious questions
So, with all that in mind… let’s try a little thought experiment: say we’re launching a new social media network and get to design our content moderation system and policies from scratch. How might we think through this? Below are just some of the questions that we might want to answer:
Do we even need rules?
We could say: “screw it, no rules, our platform is a free-for-all.” That might help us get new users quickly, but as our community becomes larger and more diverse, there will inevitably be bad actors and arguments among users. If we let trolls run free and disputes go unresolved, do we risk scaring away well-behaved users who don’t want to put up with it? Or worse, do we put members of our own community at risk by exposing them to cyberstalkers, scam artists, pedophiles and other malicious actors? Still, if we crack down too tightly and the conversation becomes overly sanitized, users — some well-meaning and others we could do without — may defect to fringe sites like Gab and 8chan that heavily prioritize free speech.
Where should we draw the line between permissible and objectionable content?
Okay, so we decide we have to draw the line somewhere. We try to start with something really egregious: shouldn’t child pornography be a clear-cut case? Well, there are always exceptions: we could end up censoring photos with important political and historical context that nevertheless include nude children, a situation Facebook found itself in. Human expression can’t be neatly bisected into “permissible” and “objectionable” — there’s a lot of gray area and context is important.
Should we rethink the line knowing kids may see this content?
Our adult users may have more capacity to deal with objectionable content. However, it’s hard to completely shield kids in today’s digital environment, as recent incidents involving YouTube and Facebook have shown. Do we redraw the line even more cautiously or hope that we’ve designed effective age gates?
How do we account for differences across cultures, regions, or even time?
A 50-year-old Muslim in India may have a very different idea of objectionable than a 21-year-old humanist in Norway. Whose standard do we go with? Or take the swastika, which for thousands of years meant “good fortune” until the Nazi party appropriated it as a symbol of its ideology. Do we ban the swastika forever on the grounds that it constitutes hate speech? Should we exempt certain uses? How do we train our moderators or algorithms to recognize those appropriate uses?
How do we discourage or prevent objectionable content in the first place?
Tech companies are facing pressure to design products in ways that actively promote better digital habits instead of subconsciously steering users toward addiction, depression, isolation, radicalization, or dehumanizing others online. How much effort should we spend trying to head off bad behavior preemptively, knowing some things will still slip through the cracks?
What should our response be when someone crosses the line?
Despite our best efforts, someone just crossed the line. How do we review the facts and confirm it’s a violation? Do we kick the offender off our platform for good, suspend them, or something else? There’s growing evidence that tougher punishments don’t deter criminals — so would a ban or suspension even work? If not, how else can we disincentivize future bad behavior?
How do we make sure we’re consistent with our responses?
Facebook, YouTube and Twitter have all been accused of applying their policies inconsistently, which really gets people riled up. No one wants a referee to call a lopsided game. If our platform grows to millions of users, can we still guarantee that our thousands of moderators or imperfect algorithms will view similar cases through similar lenses?
What should the appeal process look like?
Inevitably, our moderators and algorithms will get it wrong at some point (maybe they didn’t pick up on someone’s sarcasm). Appeals can help us identify deficiencies in our policies and give them legitimacy. But appeals also take time and resources, and without a clear, consistent and robust process, things could backfire.
How do we draw the line between legitimate users and bad actors?
Bad actors will take advantage of our platform — using bots or just good old-fashioned social engineering — to defraud the elderly, manipulate elections, harass their ex-partners, and spread racism. Assuming we have the technological ability to identify them, how do we decide which accounts to ban? We could have a “one person, one account” policy, but what about users who need a separate business account? We could ban all bots, but what about bots that provide positive value? Besides, we’re still trying to grow — would getting rid of 15% of users play well with our investors?
How do we prevent bad actors from getting on the platform in the first place?
Maybe we try to lock down our borders and only let in the “good ones.” But how do we decide who the good ones are? And how do we collect enough information on our users to figure out whether they meet those criteria while still respecting their privacy? If we make the identity verification process too stringent, people won’t want to or won’t be able to join.
Should we try to prevent unfairly loud voices from shouting down others? How?
We’ll also have to deal with the power law: a disproportionate amount of content will be created by a select few, strongly opinionated people. Some might be legitimate, like influencers. Others might use bots to artificially amplify their voice or harass others, like influencers. Bossetta called this the “spiral of silence,” where users who are bullied for expressing an opinion become fearful of speaking up in the future. Do we put our thumbs on the scale or allow the loudest voices to dominate the conversation?
It’s worth pointing out that social media platforms already do this — Facebook’s newsfeed, YouTube’s recommended videos and Twitter’s top tweets are all examples of algorithmically moderated content. Each time they tweak these algorithms, companies are making deliberate decisions about whose voices should be louder or quieter. Sometimes they target specific types of content (e.g. Facebook deemphasizing news content); other times they target specific behavior (e.g. Google penalizing sites that use keyword stuffing).
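To make that “thumbs on the scale” idea concrete, here’s a minimal sketch in Python of a feed-ranking score where a single parameter decides which voices surface first. The post fields, weights, and accounts are all made up for illustration; this is not any platform’s actual ranking system.

```python
# Hypothetical sketch: one knob in a feed-ranking score changes whose
# posts surface first. All fields and weights are invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    engagement: float   # likes/shares, normalized to 0-1
    is_news: bool       # a content-type signal
    spam_score: float   # a behavior signal: 0 (clean) to 1 (spammy)

def rank_score(post: Post, news_weight: float = 1.0) -> float:
    """Higher score = shown higher in the feed."""
    score = post.engagement
    if post.is_news:
        score *= news_weight          # deemphasize (or boost) a content type
    score *= 1.0 - post.spam_score    # penalize a behavior, e.g. spamming
    return score

posts = [
    Post("local_paper", engagement=0.8, is_news=True,  spam_score=0.0),
    Post("meme_page",   engagement=0.7, is_news=False, spam_score=0.1),
    Post("spam_bot",    engagement=0.9, is_news=False, spam_score=0.9),
]

# The same posts under two different editorial choices, baked into one number:
for weight in (1.0, 0.5):   # 0.5 roughly means "deemphasize news"
    ordered = sorted(posts, key=lambda p: rank_score(p, news_weight=weight), reverse=True)
    print(f"news_weight={weight}:", [p.author for p in ordered])
```

Dropping news_weight from 1.0 to 0.5 reorders the exact same posts; that invisible, deliberate choice is the kind of editorial decision described above.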
Who should be our gatekeepers?
Once we set policies around which content is objectionable and which users are bad actors, we need to decide who gets to enforce those policies:
Users?
We could take the Reddit approach and allow users to (mostly) police themselves by setting basic ground rules, allowing community moderators to add additional rules, and giving them the authority to remove content that crosses the line. But will that lead to situations where users wish we had been more proactive?
Moderators?
Maybe we want slightly more control, so we train our own moderators. Well, if we become as successful as YouTube and our users start uploading 500 hours of content per minute, won’t that get expensive? We could minimize those costs, like Facebook, but won’t that just lead to low wages, harsh working conditions and mental health struggles? Still, Facebook survived criticism in 2014, 2017, and again in 2018, so maybe the bad press will just pass?
Algorithms?
Eventually, we give up on humanity and just tell the machines to take over. Then, we remember that neither of us actually knows how to write algorithms for the machines to use, so we find the smartest (human) minds in artificial intelligence. Except, oops, AI researchers are even more disproportionately white and male than tech workers generally. By going this route, will we just be perpetuating or exacerbating existing biases? Also, machine learning programs must first learn from “training data” — datasets with the correct answers already filled in, which teach the program to identify the correct answer in future situations. Who will fill in those answers for us? You guessed it, more humans.
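To see what that human-labeled “training data” looks like in practice, here’s a toy sketch (assuming Python with scikit-learn installed; the posts and labels are invented, and a real system would need vastly more data, review, and nuance than this suggests):

```python
# Toy sketch of supervised content classification: humans fill in the
# "correct answers" (labels), and the model learns to imitate them.
# The example posts and labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Training data: each label encodes a human moderator's judgment,
# along with whatever context or biases that human brings.
posts = [
    "Congrats on the new job!",
    "Check out my band's new single",
    "You people don't belong in this country",
    "I will find where you live",
    "Great game last night",
    "Nobody would miss you if you disappeared",
]
labels = ["ok", "ok", "remove", "remove", "ok", "remove"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)   # the machine learns from the humans' answers

print(model.predict(["you don't belong here"]))  # likely ["remove"]
print(model.predict(["congrats on the win"]))    # likely ["ok"]
```

Every label in that list is a human judgment call, so whatever context the labelers miss, and whatever biases they carry, gets baked directly into the “automated” system.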
Government?
Annoyed that we haven’t found a silver bullet, aware of our every mistake, and unconvinced by our PR efforts highlighting how much garbage we’ve already taken down, the government decides to intervene. Of course, it has to worry about the First Amendment, doesn’t have great ideas itself, and doesn’t really understand how our platform works.
Alright, you get it. This is a tough problem and there are way more questions than answers. But that’s okay, the important part is that people engage in these conversations and continue to think critically about who gets to set the rules of the road and what those rules should be. It’s especially important when you detest the content being censored — would you still endorse a platform’s decision if it involved objectionable content you agreed with?
For the faithful readers who make it this far each week, I’m no longer going to make you put in more work by asking for feedback. Instead, I want to say thanks by offering a quick tech tip each week.
Tech tip of the week: switch quickly between apps
Instead of clicking and dragging windows around your screen, just hold down ALT (Windows) or COMMAND (Mac) and then press TAB to cycle between apps.

Next week on Trusty: Fakebook <3 News
Facebook wants to make up with news. News doesn’t trust Facebook. Citizens don’t trust Facebook. Citizens don’t trust news.
It’s going to be a wild ride.
As always, subscribe and share if you haven’t already!