*****’s Justice Department proposes legislation that could end the internet as we know it


President ***** has signed an executive order that could threaten social media companies like Facebook and Twitter. | Brendan Smialowski / AFP via Getty Images

The pillar of internet free speech is *****’s latest target.


The ***** administration’s Justice Department has made its latest attempt to chip away at a law that allows the internet as we know it to exist: a proposal to Congress to create legislation that would require social media platforms to moderate users consistently or else lose immunity from lawsuits granted by a law called Section 230.

You may have never heard of it, but Section 230 of the Communications Decency Act is the legal backbone of the internet. The law was created nearly 25 years ago to protect internet platforms from liability for many of the things third parties say or do on them. And now it’s under threat from one of its biggest beneficiaries: President *****. In combination with an executive order issued in May, the Justice Department’s proposed law could ensure that the president can say whatever he wants on social media, and it would advance his accusation that platforms are biased against conservatives. (Evidence suggests the contrary.)

Section 230 says that internet platforms that host third-party content — think of tweets on Twitter, posts on Facebook, photos on Instagram, reviews on Yelp, or a news outlet’s reader comments — are not liable for what those third parties post (with a few exceptions). For instance, if a Yelp reviewer were to post something defamatory about a business, the business could sue the reviewer for libel, but it couldn’t sue Yelp. Without Section 230’s protections, the internet as we know it today would not exist. If the law were taken away, many websites driven by user-generated content would likely go dark.

*****’s attacks on Section 230 began in earnest in May after Twitter put a warning label and fact-check on two of his tweets that contained misinformation about mail-in voting, but the ramifications of changing the law would extend far beyond a few tweets. Here’s a look at how Section 230 went from an amendment to a law about internet porn to the pillar of internet free speech to *****’s latest weapon against perceived anti-conservative bias in the media.

Section 230’s salacious origins

In the early ’90s, the internet was still in its relatively unregulated infancy. There was a lot of porn floating around platforms like AOL and the World Wide Web where anyone, including our nation’s impressionable children, could see it. This alarmed some lawmakers. In an attempt to regulate the situation, lawmakers in 1995 introduced a bipartisan bill called the Communications Decency Act, which would extend to the internet the laws governing obscene and indecent use of telephone services. It would also make websites and platforms responsible for any indecent or obscene things their users posted.

In the midst of this was a lawsuit between two companies you might recognize: Stratton Oakmont and Prodigy. The former is featured in The Wolf of Wall Street, and the latter was a pioneer of the early internet. But in 1995, Stratton Oakmont sued Prodigy for defamation after an anonymous user claimed on a Prodigy bulletin board that the financial company’s president engaged in fraudulent acts. As the New York Times explains the court’s decision:

The New York Supreme Court ruled that Prodigy was “a publisher” and therefore liable because it had exercised editorial control by moderating some posts and establishing guidelines for impermissible content. If Prodigy had not done any moderation, it might have been granted free speech protections afforded to some distributors of content, like bookstores and newsstands.

Fearing that the Communications Decency Act would stop the burgeoning internet in its tracks and mindful of the court’s decision, then-Rep. (now Sen.) Ron Wyden and Rep. Chris Cox authored an amendment that said that “interactive computer services” were not responsible for what their users posted, even if those services engaged in some moderation of that third-party content. The internet companies, in other words, were mere platforms, not publishers.

“What I was struck by then is that if somebody owned a website or a blog, they could be held personally liable for something posted on their site,” Wyden explained to Vox’s Emily Stewart last year. “And I said then — and it’s the heart of my concern now — if that’s the case, it will kill the little guy, the startup, the inventor, the person who is essential for a competitive marketplace. It will kill them in the crib.”

Section 230 also allows those services to “restrict access” to any content they deem objectionable. In other words, the platforms themselves get to choose what is and what is not acceptable content, and they can decide to host it or moderate it accordingly. That means the free speech argument frequently employed by people who are suspended or banned from these platforms — that the Constitution says they can write whatever they want — doesn’t apply, no matter how many times Laura Loomer tries to test it. As Harvard Law professor Laurence Tribe has pointed out, the First Amendment argument is generally misused in this context: it restricts government censorship of speech, not moderation by private companies.

Wyden likens the dual nature of Section 230 to a sword and a shield for platforms: They’re shielded from liability for user content, and they have a sword to moderate it as they see fit.

The Communications Decency Act was signed into law in 1996. The indecency and obscenity provisions, which made it a crime to transmit such speech if it could be viewed by a minor, were immediately challenged by civil liberty groups. The Supreme Court would ultimately strike them down, saying they were too restrictive of free speech. Section 230 stayed, and the law that was initially meant to restrict free speech on the internet instead became the law that protected it.

This protection has allowed the internet to thrive. Think about it: Websites like Facebook, Reddit, and YouTube have millions and even billions of users. If these platforms had to monitor and approve every single thing every user posted, they simply wouldn’t be able to exist. No website or platform can moderate at such an incredible scale, and no one wants to open themselves up to the legal liability of doing so.

That doesn’t mean Section 230 is perfect. Some argue that it gives platforms too little accountability, allowing some of the worst parts of the internet — think 8chan or sites that promote racism — to flourish along with the best. Simply put, internet platforms have been happy to use the shield to protect themselves from lawsuits, but they’ve largely ignored the sword to moderate the bad stuff their users upload.

Recent challenges

In recent years, Section 230 has come under threat. In 2018, two bills — the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) and the Stop Enabling Sex Traffickers Act (SESTA) — were signed into law, changing parts of Section 230 so that platforms could be held responsible for prostitution ads posted by third parties. The bills were ostensibly meant to make it easier for authorities to go after websites used for sex trafficking, but they did so by carving out an exception to Section 230. The law was now vulnerable.

Amid all of this was a growing public sentiment that social media platforms like Twitter and Facebook were becoming too powerful. In the minds of many, Facebook even influenced the outcome of the 2016 presidential election by offering up its user data to shady outfits like Cambridge Analytica. There were also allegations of anti-conservative bias. Right-wing figures who once rode the internet’s relative lack of moderation to fame and fortune were being held accountable for violations of hateful content rules and kicked off the very platforms that helped create them. Alex Jones and his expulsion from Facebook and other social media platforms is perhaps the most illustrative example of this.

Republican Sen. Ted Cruz, demonstrating a profound misunderstanding of Section 230, claimed in a 2018 op-ed that the law required the internet platforms it was designed to protect to be “neutral public forums.” Lawmakers have tried to introduce legislation that would fulfill that promise ever since.

Republican Rep. Louie Gohmert introduced the Biased Algorithm Deterrence Act in 2019, which would strip Section 230’s protections from any social media service that used algorithms to moderate content without the user’s permission or knowledge, legally classifying it as a publisher rather than a platform. (Remember the Stratton Oakmont v. Prodigy case? This bill would have hearkened back to that era.) Later that year, Republican Sen. Josh Hawley introduced the Ending Support for Internet Censorship Act, which would require social media companies to show the Federal Trade Commission (FTC) that their content moderation practices were politically neutral in order to be granted Section 230 protections.

Neither of those bills went anywhere, but the implications were obvious: Emboldened by FOSTA-SESTA, the two sex-trafficking bills from 2018, lawmakers not only wanted to chip away at Section 230 but were actively testing out ways to do it.

More likely to succeed is a bipartisan bill introduced in March called the Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act, from Sens. Lindsey Graham and Richard Blumenthal. Here, the lawmakers used the prevention of child pornography as an avenue to erode Section 230 by requiring companies to follow a set of “best practices” developed by a newly established commission or else lose their Section 230 immunity from civil lawsuits over child pornography postings. Some privacy advocates fear that the proposed law would extend to requiring tech companies to provide law enforcement with access to all user content. The bill has bipartisan support, with Hawley and Democrat Dianne Feinstein among its cosponsors.

*****’s executive order

President *****, who has benefited greatly from social media, is trying to dial back Section 230’s protections through an executive order. Back in May, ***** signed his “Executive Order on Preventing Online Censorship” roughly 48 hours after Twitter applied a new policy of flagging potentially false or misleading content to two of the president’s tweets. At the signing ceremony, ***** referred to Twitter’s actions as “editorial decisions,” and Attorney General William Barr referred to social media companies as “publishers.”

“They’ve had unchecked power to censure, restrict, edit, shape, hide, alter virtually any form of communication between private citizens or large public audiences,” ***** said at the time. “We cannot allow that to happen, especially when they go about doing what they’re doing.”

The order says that platforms that engage in anything beyond “good faith” moderation of content should be considered publishers and therefore not entitled to Section 230’s protections. It also calls on the Federal Communications Commission (FCC) to propose regulations that clarify what constitutes “good faith”; the FTC to take action against “large internet platforms” that “restrict speech”; and the attorney general to work with state attorneys general to see if those platforms violate any state laws regarding unfair business practices.

While the order talks a big game, legal experts don’t seem to think much — or even any — of it can be backed up, citing First Amendment concerns. It’s also unclear whether the FCC has the authority to regulate Section 230 in this way, or whether the president can change the scope of a law without any congressional approval.

Barr’s proposals

Barr is not a fan of Section 230, and his Department of Justice has been looking into the law and how he believes it allows “selective” removal of political speech. This has included a set of recommendations from the Justice Department in June and the legislation proposal sent to Congress on Wednesday. The proposal includes the addition of a “good faith” section requiring platforms to spell out their moderation rules, follow them to the letter, explain any moderation decisions to the user whose content is being moderated, and provide the user with the chance to respond. There are also additional carve-outs that would remove civil lawsuit immunity for material that violates anti-terrorism, child sex abuse, cyberstalking, and antitrust laws.

“For too long Section 230 has provided a shield for online platforms to operate with impunity,” Barr said in a statement. “Ensuring that the internet is a safe, but also vibrant, open and competitive environment is vitally important to America. We therefore urge Congress to make these necessary reforms to Section 230 and begin to hold online platforms accountable both when they unlawfully censor speech and when they knowingly facilitate criminal activity online.”

It’s not clear how Barr determined that platforms are “unlawfully” censoring speech, as First Amendment protections do not extend to private businesses.

***** and Barr will be meeting with some Republican state attorneys general on Wednesday, where they will, according to reports, discuss ways state laws can be used to further dictate how and when social media platforms can moderate their users’ speech.

Needless to say, Section 230’s creator isn’t thrilled.

“As the co-author of Section 230, let me make this clear: There is nothing in the law about political neutrality,” Wyden said. “It does not say companies like Twitter are forced to carry misinformation about voting, especially from the president. Efforts to erode Section 230 will only make online content more likely to be false and dangerous.”

Open Sourced is made possible by Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.


