This week Twitter approved Elon Musk’s plan to take the microblogging site and social network private in a deal worth $44 billion. In a press release announcing the agreement, Musk said that “Free speech is the bedrock of a functioning democracy, and Twitter is the digital town square where matters vital to the future of humanity are debated.”
It capped a week of high stakes for everyone from financial analysts to Twitter users and Twitter employees. Free speech experts were apoplectic. St. John's University law professor Kate Klonick quickly graded Musk with an F-.
Professor Klonick knows what she's talking about. In 2018 she wrote a paper about moderating online speech titled "The New Governors." This week's Twitter saga gave us the perfect opportunity to learn more about the rich history of social media content moderation laid out in that fascinating paper. Professor Klonick breaks it down into three parts: the background, the companies, and the practice.
Let's start with the background. In 1991, well before the Internet became mainstream, there was a free speech case against CompuServe, one of the first online service providers. The company won because it had not altered the user's content at all. A different company, Prodigy, lost a similar case in 1995 because it had made changes to the user's content. The following year, in Section 230 of the Communications Decency Act, the US Congress gave online companies broad immunity for what their users posted on their services, in order to promote the growth of the internet.
In 1997, in a similar case against America Online, the court sided with the company, citing Section 230. But the court's decision did not just help the company's business; it also held that the government should not intrude on the free speech of online users.
Fast forward to today's social media companies. For her paper, Professor Klonick interviewed the people working on speech policy at Facebook, YouTube, and Twitter. She found that the three companies have a keen awareness of the special role they play with regard to free speech. They know they are not replacing services traditionally provided by the government, as happens in company towns, nor are they in the same position as broadcast companies or newspapers.
They are not town squares after all, despite what Musk says. They are instead a special kind of intermediary between a user’s speech and the public. The courts, at least in the US, are not mandating that they moderate what their users say. They do it anyway because, if they want to attract more users, they need to offer them a friendly environment.
So how have social media companies done at moderating their users' speech? That is the third section of Professor Klonick's paper. She draws on the legal distinction between "standards" and "rules": at first the companies applied broad standards to every case, saying, for example, "be kind to others"; over time they shifted to rules, such as banning anyone who promotes organizations on a specific list of terrorist groups.
To enforce these rules, social media companies employ thousands of reviewers in elaborate triage processes to sort through millions of flagged pieces of content every day. That is what keeps social media usable. The main reason Twitter users are alarmed is that they fear Musk, who is considered to have a strong libertarian streak, will now undo Twitter's content moderation practices, opening the platform up to all kinds of abuse.
As Professor Klonick says, Twitter has always been the free speech wing of the free speech party. That is what made it appealing to Mr. Musk. It has also kept Twitter much smaller than Facebook and YouTube, which is what made his bid possible. As we have said in the past, online free speech is a human story, not a technology story. We are about to find out whether a different solution is humanly possible.