The Internet’s Most Important Law Explained
EDITORIAL: This article concerns the following statute:
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
[47 U.S.C. § 230(c)(1)]
We use this statute in our Disclaimer, Section 2.
__________
The Internet’s most important—and misunderstood—law, explained
Section 230 is the legal foundation of social media, and it’s under attack.
There’s at least one thing that Joe Biden and Donald Trump seem to agree on: that federal law gives unfair legal immunity to technology giants.
In an interview with The New York Times published in January, Biden argued that “we should be worried about” Facebook “being exempt” from lawsuits. The Times, he noted, “can’t write something you know to be false and be exempt from being sued.” But under a 1996 law known as Section 230, Biden claimed, Facebook can do just that.
“Section 230 should be revoked immediately,” Biden said.
Just last month, Trump very publicly expressed a similar view.
“Social media giants like Twitter receive an unprecedented liability shield based on the theory that they are a neutral platform, not an editor with a viewpoint,” he said during an Oval Office signing ceremony for an executive order designed to rein in big technology companies.
They aren’t the only politicians who feel this way, of course. Within days of the president’s executive order, Sen. Josh Hawley (R-Mo.) raised this issue in a letter to Twitter CEO Jack Dorsey, responding to Twitter’s decision to apply a “fact check” to one of Trump’s tweets.
“Twitter’s decision to editorialize regarding the content of political speech raises questions about why Twitter should continue receiving special status and special immunity from publisher liability under Section 230 of the Communications Decency Act,” Hawley wrote.
There’s a sliver of truth to these descriptions of Section 230. The law really does give websites a breadth of immunity that wasn’t available to anyone before the Internet. But all three comments fundamentally misrepresent how Section 230 works.
Biden is wrong to suggest that Section 230 treats Facebook differently from The New York Times. If someone posts a defamatory comment in the comment section of a Times article, the company enjoys exactly the same legal immunity that Facebook gets for user posts. Conversely, if Facebook published a defamatory article written by an employee, it would be just as liable as the Times.
Meanwhile, Trump and Hawley are wrong to suggest that Section 230 requires online platforms to be neutral. In reality, the law was written to encourage online platforms to filter user-submitted content, not to discourage them from doing so. It contains no requirement of neutrality, political or otherwise.
But while these criticisms miss the mark, others have raised legitimate concerns about Section 230’s extraordinary breadth. The law has shielded a lot of genuinely abhorrent behavior, and a growing number of critics are calling for it to be narrowed.
Section 230 fixed an emerging problem in the law
To understand Section 230, you have to understand how the law worked before Congress enacted it in 1996. At the time, the market for consumer online services was dominated by three companies: Prodigy, CompuServe, and AOL. Along with access to the Internet, these companies offered proprietary services such as real-time chats and online message boards.
Prodigy distinguished itself from rivals by advertising a moderated, family-friendly experience. Employees would monitor its message boards and delete posts that didn’t meet the company’s standards. And this difference proved to have an immense—and rather perverse—legal consequence.
In 1994, an anonymous user made a series of potentially defamatory statements about a securities firm called Stratton Oakmont, claiming on a Prodigy message board that a pending stock offering was fraudulent and its president was a “major criminal.” The company sued Prodigy for defamation in New York state court.
Prodigy argued that it shouldn’t be liable for user content. To support that view, the company pointed to a 1991 ruling that shielded CompuServe from liability for a potentially defamatory article. The judge in that case analogized CompuServe to a bookstore, and courts had long held that a bookstore isn’t liable, under defamation, obscenity, or other laws, for a book it sells unless it is aware of the book’s contents.
But in his 1995 ruling in the Prodigy case, Judge Stuart Ain refused to apply that rule to Prodigy.
“Prodigy held itself out as an online service that exercised editorial control over the content of messages posted on its computer bulletin boards, thereby expressly differentiating itself from its competition and expressly likening itself to a newspaper,” Ain wrote. Unlike bookstores, newspapers exercise editorial control and can be sued any time they print defamatory content.
The CompuServe and Prodigy decisions each made some sense in isolation. But taken together, they had a perverse result: the more effort a service made to remove objectionable content, the more likely it was to be liable for content that slipped through the cracks. If these precedents had remained the law of the land, website owners would have had a powerful incentive not to moderate their services at all. If they tried to filter out defamation, hate speech, pornography, or other objectionable content, they would have increased their legal exposure for illegal content they didn’t take down.
Section 230 allows platforms to act more like publishers
Online services were still a small industry in 1995, so few people in Washington, DC, were paying attention. But a pair of young, tech-savvy congressmen, Ron Wyden (D-Ore.) and Chris Cox (R-Calif.), recognized the significance of the Prodigy decision and began work on legislation to fix the problem. Their strategy was to give the owners of online services broad immunity for user-submitted content, whether or not they engaged in content-based filtering. The pair hoped that if they guaranteed sites they wouldn’t get sued for their moderation decisions, most sites would voluntarily engage in filtering.
At the time, Congress was working on a much broader overhaul of telecommunications law. Civil liberties groups and the fledgling online industry were worried about a proposal called the Communications Decency Act, which would make it a federal crime to knowingly send pornography to someone under 18.
Wyden and Cox cleverly positioned their legislation as an alternative to the Communications Decency Act. Civil liberties advocates liked Section 230 because it would promote online freedom of speech. But the congressmen also had a strong pitch to anti-porn crusaders: the bill would remove a legal impediment to porn filtering by overriding the Prodigy rule. According to Jeff Kosseff, author of The Twenty-Six Words That Created the Internet, the definitive history of Section 230, Wyden and Cox’s legislation received little attention and hardly any opposition.
Rather than choosing between these two rival approaches to the porn issue, Congress threw both of them together in the final Communications Decency Act, which was passed as part of the larger 1996 Telecommunications Act. A year later, the Supreme Court declared the anti-porn portions of the Communications Decency Act unconstitutional—but it left Section 230 standing.
“There is nothing in the law about political neutrality”
Given this history, it’s deeply ironic that so many elected officials claim that online platforms are at risk of losing their Section 230 immunity if they start making too many editorial decisions. Kosseff argues this gets things exactly backwards.
“Without Section 230, under common law and the First Amendment, there was emerging this distinction between publishers and platforms,” Kosseff told me in a September interview. Wyden and Cox “wanted the platforms to be able to make editorial judgments without suddenly becoming liable for everything.”
In his Oval Office speech last month, Donald Trump said that social media platforms get immunity “based on the theory that they are a neutral platform, not an editor with a viewpoint.” That almost perfectly describes the law as it existed before 1996, but as Kosseff puts it, “the whole point of Section 230 was to allow online services to have the discretion to block content that they deem objectionable.”
And despite a number of politicians’ claims, Kosseff added, “there’s no mention I can find of a requirement for neutrality.” The statute’s authors “do talk about a need to promote political discourse,” he said. “But I don’t see anything saying to receive 230 protections you must be neutral.”
This is a point that Wyden, now a US Senator from Oregon, has made repeatedly. “There is nothing in the law about political neutrality,” he tweeted during last month’s Section 230 debate.
Section 230 is the foundation of the social Internet
The breadth of Section 230’s immunity is historically unprecedented. While earlier legal doctrines had limited the liability of bookstores and other distributors, none granted the kind of total immunity that online providers now enjoy. A bookstore could still be liable if there was proof it knew it was selling an obscene or defamatory book. By contrast, Internet providers are immune even if they know about illegal content on their sites and leave it online.
Eric Goldman, a law professor at Santa Clara University, argues that this rule made the modern Internet possible.
It’s hard to imagine sites like Yelp, Reddit, or Facebook existing in their current form without a law like Section 230. Yelp, for example, is regularly threatened with litigation by business owners over allegedly defamatory reviews. Section 230 allows Yelp to largely ignore these threats. Without it, Yelp would need a large staff to conduct legal analysis of potentially defamatory reviews, a cost that could have prevented the company from getting off the ground 15 years ago.
Section 230 puts the US at odds with most other countries. Different countries take different approaches to intermediary liability, but Kosseff writes that most hold online services responsible for user content in at least some circumstances. Courts in Europe, for example, have ruled that a news site can be held responsible if a reader posts a defamatory comment.
Advocates of Section 230 argue that it has contributed to America’s dominance of the Internet economy. American Internet startups that host user content have had an inherent advantage over their overseas rivals because they haven’t had to worry about the legal complications user-generated content creates for companies in other countries.
“I don’t know that we’d even have social media now” without Section 230, Goldman told Ars in a September phone interview. “There’s an entire class of online interactions that didn’t exist and exists only because of the legal protection.”
This is why Section 230 tends to be popular among advocates of free speech. The law allows sites like Facebook and Twitter, not to mention the comment section of Ars Technica, to publish ordinary users’ thoughts without review or censorship. In a world without Section 230’s protections, users might have far fewer opportunities for such unfettered self-expression.
“Four men came in four minutes”
In 2015, Matthew Herrick met a guy through the gay dating app Grindr. A few months later, the couple had a bitter breakup. According to Herrick, the ex-boyfriend retaliated by posting fake Grindr profiles with Herrick’s name and photo, luring men to Herrick’s home and workplace.
Carrie Goldberg, Herrick’s attorney, told Ars that “his ex-boyfriend created multiple accounts and then directed men to my client’s home to have sex with him. People he didn’t know, people who were not invited, people who believed that he had rape fantasies.”
According to a court filing, one fake profile falsely described Herrick as “looking for a group of hung tops to come over and destroy my ass.”
“There would be intruders in the stairwell at his apartment building waiting for him,” Goldberg told Ars in a September phone interview. “They would follow him when he was outside walking his dog. On one day, four men came in four minutes.”
Herrick estimated that around 1,100 people visited him over a six-month period.
Herrick reported the fake profiles to Grindr numerous times, but the ex-boyfriend allegedly kept making new ones. Herrick asked Grindr to prevent the creation of new fake profiles, but the abuse didn’t stop. According to court filings, Herrick filed at least 14 police reports and sought a protective order without success.
So Herrick, represented by Goldberg, sued Grindr. They argued that the app was a dangerously defective product.
“If the whole purpose of your app is to facilitate in-person sexual encounters, you can be pretty sure it’s going to be abused by rapists, child predators, and stalkers,” Goldberg told Ars. Herrick believes that Grindr lacks effective tools to deal with abuse of its system. “This is no different than a car designed without brakes,” she added.
Grindr disputes Goldberg’s characterization of the case. “We worked closely with law enforcement and took extensive steps to delete and ban fraudulent accounts,” a Grindr spokesman wrote in an email statement. Those steps included “investigating hundreds of email addresses, profile names and accounts, conducting voluntary daily searches, and searching account profile contents for any potential reference of the physical addresses, phone numbers and other relevant information identified by Mr. Herrick and/or law enforcement.”
Ultimately, Herrick never got to litigate these details, because the courts held that Section 230 gave Grindr total immunity. Herrick’s suffering was the result of people reading and responding to content his ex-boyfriend posted on Grindr, and a trial court ruled that Section 230 shields Grindr from any liability arising from user-posted content, no matter how abhorrent. The 2nd Circuit Court of Appeals affirmed the ruling.
Section 230 protects some vile content and behavior
I went into graphic detail about the Grindr case because it illustrates just how broad Section 230’s immunity is. Discussions of Section 230 often focus on defamation, which harms people by damaging their reputations. But as the Grindr example shows, online services can contribute to other kinds of harm, including threats to people’s physical safety.
This is just one of many examples. Goldberg, who specializes in representing victims of sexual abuse, argues that “the protections that platforms are given permeates just about every case of ours that involves harassment and stalking. There’s absolutely no incentive for these tech industries to take seriously any harm that happens to their users.”
Section 230 can protect online forums where users post revenge pornography, coordinate mass harassment campaigns, and spread terrorist propaganda. It’s hard to imagine a site like 4Chan—an anonymous message board known for hosting a range of vile content—surviving for 17 years without Section 230.
But Goldman argues that even stories like Herrick’s don’t provide a compelling reason to change Section 230.
“This is not a case where that person was anonymous. We know who did it,” Goldman told Ars. “The question is what do we need to do to make that bad actor stop and to properly punish him for his bad actions.”
In an ideal world, reporting harassment to the police and requesting a restraining order would be enough to stop it. But at least in this case, that process failed. Goldberg believes victims like Herrick should have recourse against technology platforms, too.
Hawley’s bill: An “unconstitutional mess”
A lot of people think it’s time to narrow the immunity provided by Section 230, but they don’t agree about how to do it. I’ll wrap up by examining four widely discussed reform proposals.
Last year, Sen. Hawley introduced legislation to require large technology companies (defined as companies with more than 30 million US users, 300 million global users, or $500 million in global revenue) to be politically neutral in order to earn Section 230 protections.
Under Hawley’s plan, a large technology company would need to convince four of the Federal Trade Commission’s five commissioners that its content moderation policies were “content neutral.” If two or more commissioners decided the company’s policies were politically biased, it would lose Section 230 protections, a potentially catastrophic outcome for a company like Facebook or Twitter that publishes content from hundreds of millions of users.
Goldman argues that people shouldn’t take this bill too seriously. “I don’t think anyone has interpreted this bill as a serious effort to reform Section 230,” he told Ars in a recent email.
If Congress passed Hawley’s legislation, it would face an immediate challenge under the First Amendment. Even some conservative commentators believe it goes too far. National Review legal commentator David French, for example, described it as an “unconstitutional mess.” Goldman agrees, predicting that it wouldn’t survive First Amendment scrutiny.
The end of immunity?
A second possible reform is the outright repeal of Section 230. While defenders of Section 230 argue this would be catastrophic, Goldberg disagrees.
“It’s not like there’s going to be a huge race to the courtroom if we lose Section 230,” she told Ars. “If you sue a tech company, there’s still going to be a pretty difficult burden showing that the tech company caused a harm.”
Goldberg points out that the ordinary rules of tort law have served other parts of the economy for centuries. In her view, the same rules would work perfectly well online.
Whatever you think of this argument, it’s hard to imagine Congress repealing Section 230 outright. Companies worth billions of dollars have been built on Section 230. Even if the common-law process of legal precedents would produce sensible rules eventually, it could create a lot of chaos and uncertainty in the meantime. Few members of Congress are going to want to take that risk.
Still, Democratic presidential nominee Joe Biden seems to be advocating some version of this idea when he says that Facebook’s Section 230 immunity should be “revoked.” The former vice president hasn’t gone into detail about exactly how he would revoke Section 230 and what, if anything, would take its place.
Narrow exceptions
A narrower version of the same idea would be for Congress to carve out specific exceptions to Section 230 for particular types of harmful content. Congress has already done this once: in 2018, it created a narrow exception for sites that promote prostitution and sex trafficking. Congress could pass more laws like this, making websites liable for content that promotes terrorism, violates civil rights laws, or has other harmful effects.
But there’s a danger that this strategy could be either under- or over-inclusive. On the one hand, it might take Congress a long time to recognize the need for a new exception, leaving victims without recourse for years or even decades.
On the other hand, if Congress creates too many exceptions, it could render Section 230’s protections a lot less useful for online service operators. The law’s clarity and simplicity currently allow most lawsuits to be dismissed quickly; if courts had to determine whether alleged misconduct fell into any one of numerous exceptions, the cost of mounting a Section 230 defense could rise dramatically.
The case for—and against—reasonableness
Danielle Citron, a law professor at the University of Maryland, and the Brookings Institution’s Ben Wittes have proposed a more holistic alternative that would avoid turning Section 230 into Swiss cheese. They would revise Section 230 so that immunity is only available to platforms that take “reasonable steps to prevent or address unlawful uses of its services.” The law wouldn’t spell out what “reasonable” means, instead leaving it up to judges to flesh this out on a case-by-case basis.
While that vagueness might seem like a downside, it would have benefits. One is that it would give online service providers an incentive to improve the quality of their moderation over time. If most sites in a particular market category, like dating sites or online classifieds, adopted a measure to discourage illegal activity, courts could find that companies refusing to follow suit had behaved unreasonably and should lose Section 230 immunity.
In other words, Citron and Wittes’ proposal would make liability law online work more like it does offline. Ordinary tort law gives courts significant latitude to decide whether companies took reasonable steps to avoid harming their customers and others, and the proposal would bring that same flexibility to the law governing online intermediaries.
A reasonableness standard could also provide flexibility to treat different types of defendants differently. Facebook should probably be doing more to stop harmful content than someone with a personal blog that accepts comments. A court could find that precautions that are reasonable for an individual blogger, or for a startup with only a handful of employees, are not sufficient for a technology giant like Facebook.
Citron points out that online intermediaries like Google and Facebook have far more power than they did 20 years ago. “With power comes responsibility,” she writes. “Laws should change to ensure that such policy is wielded responsibly. Content intermediaries have moral obligations to their users and others affected by their sites.”
But Goldman isn’t convinced.
“I see the proposal as functionally equivalent to repealing Section 230,” he said in a recent email. In a 2019 analysis, he argued that the Citron and Wittes proposal “would make Section 230 litigation far less predictable, and it would require expensive and lengthy factual inquiries into all evidence probative of the reasonableness of defendant’s behavior.” Fear of litigation could cause some online providers to take down lawful speech proactively, harming online freedom of speech.