Section 230, the internet free speech law ***** wants to change, explained


President ***** has signed an executive order that could threaten social media companies like Facebook and Twitter. | Brendan Smialowski / AFP via Getty Images

The pillar of internet free speech is *****’s latest target.

You may have never heard of it, but Section 230 of the Communications Decency Act is the legal backbone of the internet. The law was created in the mid-1990s to protect internet platforms from liability for many of the things third parties say or do on them. And now it’s under threat by one of its biggest beneficiaries: President *****, who hopes to use it to fight back against the social media platforms he believes are unfairly censoring him and other conservative voices.

Section 230 says that internet platforms that host third-party content — think of tweets on Twitter, posts on Facebook, photos on Instagram, reviews on Yelp, or a news outlet’s reader comments — are not liable for what those third parties post (with a few exceptions). For instance, if a Yelp reviewer were to post something defamatory about a business, the business could sue the reviewer for libel, but it couldn’t sue Yelp. Without Section 230’s protections, the internet as we know it today would not exist. If the law were taken away, many websites driven by user-generated content would likely go dark.

The gravity of the situation might be lost on the president. ***** is using this threat to bully social media platforms like Twitter into letting him post whatever he wants after Twitter put a warning label that links to a fact-checking site on two of his recent tweets. To illustrate why there’s much more at stake than *****’s tweets, here’s a look at how Section 230 went from an amendment to a law about internet porn to the pillar of internet free speech to *****’s latest weapon against perceived anti-conservative bias in the media.

Section 230’s salacious origins

In the early ’90s, the internet was still in its relatively unregulated infancy. There was a lot of porn floating around platforms like AOL and the World Wide Web where anyone, including our nation’s impressionable children, could see it. This alarmed some lawmakers. In an attempt to regulate the situation, lawmakers in 1995 introduced a bipartisan bill called the Communications Decency Act, which would extend to the internet the laws governing obscene and indecent use of telephone services. It would also make websites and platforms responsible for any indecent or obscene things their users posted.

In the midst of this was a lawsuit between two companies you might recognize: Stratton Oakmont and Prodigy. The former is featured in The Wolf of Wall Street, and the latter was a pioneer of the early internet. But in 1995, Stratton Oakmont sued Prodigy for defamation after an anonymous user claimed on a Prodigy bulletin board that the financial company’s president engaged in fraudulent acts. As the New York Times explained the court’s decision:

The New York Supreme Court ruled that Prodigy was “a publisher” and therefore liable because it had exercised editorial control by moderating some posts and establishing guidelines for impermissible content. If Prodigy had not done any moderation, it might have been granted free speech protections afforded to some distributors of content, like bookstores and newsstands.

Fearing that the Communications Decency Act would stop the burgeoning internet in its tracks and mindful of the court’s decision, then-Rep. (now Sen.) Ron Wyden and Rep. Chris Cox authored an amendment that said that “interactive computer services” were not responsible for what their users posted, even if those services engaged in some moderation of that third-party content. The internet companies, in other words, were mere platforms, not publishers.

“What I was struck by then is that if somebody owned a website or a blog, they could be held personally liable for something posted on their site,” Wyden explained to Vox’s Emily Stewart last year. “And I said then — and it’s the heart of my concern now — if that’s the case, it will kill the little guy, the startup, the inventor, the person who is essential for a competitive marketplace. It will kill them in the crib.”

Section 230 also allows those services to “restrict access” to any content they deem objectionable. In other words, the platforms themselves get to choose what is and what is not acceptable content, and they can decide to host it or moderate it accordingly. That means the free speech argument frequently employed by people who are suspended or banned from these platforms — that the Constitution says they can write whatever they want — doesn’t apply, no matter how many times Laura Loomer tries to test it. As Harvard Law professor Laurence Tribe has pointed out, the First Amendment argument is generally misused in this context: the amendment restricts government censorship of speech, not moderation decisions made by private companies.

Wyden likens the dual nature of Section 230 to a sword and a shield for platforms: They’re shielded from liability for user content, and they have a sword to moderate it as they see fit.

The Communications Decency Act was signed into law in 1996. The indecency and obscenity provisions, which made it a crime to transmit such speech if it could be viewed by a minor, were immediately challenged by civil liberties groups. The Supreme Court would ultimately strike them down, saying they were too restrictive of free speech. Section 230 stayed, and the law that was initially meant to restrict free speech on the internet instead became the law that protected it.

This protection has allowed the internet to thrive. Think about it: Websites like Facebook, Reddit, and YouTube have millions and even billions of users. If these platforms had to monitor and approve every single thing every user posted, they simply wouldn’t be able to exist. No website or platform can moderate at such an incredible scale, and no one wants to open themselves up to the legal liability of doing so.

That doesn’t mean Section 230 is perfect. Some argue that it gives platforms too little accountability, allowing some of the worst parts of the internet — think 8chan or sites that promote racism — to flourish along with the best. Simply put, internet platforms have been happy to use the shield to protect themselves from lawsuits, but they’ve largely ignored the sword to moderate the bad stuff their users upload.

Recent challenges

In recent years, Section 230 has come under threat. In 2018, two bills — the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) and the Stop Enabling Sex Traffickers Act (SESTA) — were signed into law, carving out an exception to Section 230: platforms could now be deemed responsible for prostitution ads posted by third parties. The bills were ostensibly meant to make it easier for authorities to go after websites used for sex trafficking, but they did so by chipping away at Section 230’s protections. The law was vulnerable.

Amid all of this was a growing public sentiment that social media platforms like Twitter and Facebook were becoming too powerful. In the minds of many, Facebook even influenced the outcome of the 2016 presidential election by offering up its user data to shady outfits like Cambridge Analytica. There were also allegations of anti-conservative bias. Right-wing figures who once rode the internet’s relative lack of moderation to fame and fortune were being held accountable for various infringements of hateful content rules and kicked off the very platforms that helped create them. Alex Jones and his expulsion from Facebook and other social media platforms is perhaps the most illustrative example of this.

Republican Sen. Ted Cruz, demonstrating a profound misunderstanding of Section 230, claimed in a 2018 op-ed that the law required the internet platforms it was designed to protect to be “neutral public forums.” Lawmakers have been trying to write that supposed requirement into law ever since.

Republican Rep. Louie Gohmert introduced the Biased Algorithm Deterrence Act in 2019, which would legally classify any social media service that used algorithms to moderate content without the user’s permission or knowledge as a publisher, not a platform, thereby removing Section 230’s protections. (Remember the Stratton Oakmont v. Prodigy case? This bill would have hearkened back to that era.) Later that year, Republican Sen. Josh Hawley introduced the Ending Support for Internet Censorship Act, which would require that, in order to be granted Section 230 protections, social media companies show the Federal Trade Commission (FTC) that their content moderation practices were politically neutral.

Neither of those bills went anywhere, but the implications were obvious: Emboldened by FOSTA-SESTA, the two sex-trafficking bills from 2018, lawmakers not only wanted to chip away at Section 230 but were actively testing out ways to do it.

Their latest attempt — and the most likely to succeed — is a bipartisan bill introduced in March called the Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act, from Sens. Lindsey Graham and Richard Blumenthal. Here, the lawmakers used the prevention of child pornography as an avenue to both erode Section 230 and end encryption by requiring companies to follow a set of “best practices” or else lose their Section 230 immunity from child pornography charges.

Some privacy advocates worry that these best practices would extend to requiring tech companies to provide law enforcement with access to all user content. A law like this would effectively force websites to comply with those “best practices,” as they’d be sued out of existence if they didn’t. The bill has bipartisan support, with Hawley and Democrat Dianne Feinstein among its nine cosponsors.

*****’s executive order

And now President *****, who has benefited greatly from social media, is trying to dial back Section 230’s protections through an executive order. ***** signed the order roughly 48 hours after Twitter applied a new policy of flagging potentially false or misleading content to two of the president’s tweets. At the signing ceremony, ***** referred to Twitter’s actions as “editorial decisions,” and Attorney General Bill Barr referred to social media companies as “publishers.”

Barr is not a fan of Section 230, and his Department of Justice has been looking into the law and how he believes it allows “selective” removal of political speech. Barr added that he thinks there is bipartisan support that Section 230 “has been stretched way beyond its original intention” and was allowing “behemoths” that controlled massive amounts of information to censor it and “act as editors and publishers.”

“They’ve had unchecked power to censure, restrict, edit, shape, hide, alter virtually any form of communication between private citizens or large public audiences,” ***** said in the Oval Office on Thursday. “We cannot allow that to happen, especially when they go about doing what they’re doing.”

It’s unclear if *****’s executive order could stop social media companies from doing much at all. The White House did not immediately release the full text of the executive order after ***** signed it. A draft of the order says that platforms that engage in anything beyond “good faith” moderation of content should be considered publishers and therefore not entitled to Section 230’s protections. It also calls on the Federal Communications Commission (FCC) to propose regulations that clarify what constitutes “good faith”; the FTC to take action against “large internet platforms” that “restrict speech”; and the attorney general to work with state attorneys general to see if those platforms violate any state laws regarding unfair business practices.

While the draft order talks a big game, legal experts don’t seem to think much — or even any — of it can be backed up, citing First Amendment concerns. It’s also unclear whether the FCC has the authority to regulate Section 230 in this way, or if the president can change the scope of a law without any congressional approval.

Needless to say, Section 230’s creator isn’t thrilled.

“I have warned for years that this administration was threatening 230 in order to chill speech and bully companies like Facebook, YouTube, and Twitter into giving him favorable treatment,” Wyden said in a statement. “Today ***** proved me right. I expect those companies, and every American who participates in online speech, to resist this illegal act by all possible means. Giving in to bullying by this president may be the single most unpatriotic act an American could undertake.”

“As the co-author of Section 230, let me make this clear: There is nothing in the law about political neutrality,” Wyden added. “It does not say companies like Twitter are forced to carry misinformation about voting, especially from the president. Efforts to erode Section 230 will only make online content more likely to be false and dangerous.”


via Vox – Recode
