Section 230 of the Communications Decency Act of 1996, often called “the twenty-six words that created the Internet,” is the legal foundation of the modern web. Congress enacted it in response to liability concerns that threatened to stall the Internet’s growth and chill efforts to promote free expression online. The statute is a United States federal law that regulates internet speech and content: it expressly exempts both providers and users of “interactive computer services” from liability for content produced by third parties. This means that websites hosting user-generated content, such as social media platforms, forums, and other services, cannot be held liable for the material their users submit. The law does not, however, shield platforms from responsibility for their own content, including anything they produce or actively promote. As technology and online media companies have expanded rapidly over the past three decades, Section 230 has never been more controversial: critics claim it shields online platforms from liability too broadly.
Others counter that it is essential to protecting online free expression and fostering innovation and growth in the digital economy. Concerns about the size, influence, and dominance of Big Tech companies are growing, and critics contend that Section 230 has allowed these businesses to expand and dominate the digital economy without adequate oversight or accountability. Google, Facebook, and Twitter currently face Supreme Court litigation over internet speech policies, specifically over whether platforms can be held responsible for hosting and promoting content related to terrorism. In Gonzalez v. Google, the plaintiffs seek to hold Google’s YouTube liable for the killing of an American student, Nohemi Gonzalez, in an ISIS attack in Paris, claiming the assailants may have been influenced by YouTube recommendations that allegedly supported terrorism. The Supreme Court has never before decided a case interpreting Section 230. Twitter v. Taamneh raises similar questions about Big Tech companies’ responsibility to monitor terrorism-related posts, though on different grounds: it concerns the Justice Against Sponsors of Terrorism Act (JASTA), which allows any U.S. national injured by terrorism to sue anyone who “aids and abets, by knowingly providing substantial assistance,” an act of international terrorism. The plaintiffs claim that ISIS used Twitter to promote its agenda and recruit followers, much as in Gonzalez v. Google, and that Twitter was therefore “knowingly providing substantial assistance.” Because there is no precedent on Section 230, the justices expressed concern during the February oral arguments in both cases about unintended consequences, and about how the Court might craft a ruling that holds companies accountable for harmful recommendations while preserving innocuous ones.
Justices also raised concerns about future challenges to freedom of speech and a wave of lawsuits if they narrow Section 230 and rule against Google.
The First Amendment prohibits the government from censoring what people say or how they express their beliefs, subject to narrow exceptions for categories such as defamation or incitement to violence. An example is Berenson v. Twitter, in which Alex Berenson sued Twitter after his account was suspended for speculative statements about the COVID-19 vaccine. Berenson argued that Section 230, by allowing Twitter to moderate objectionable user content “in good faith” without fear of civil liability, effectively lets social media companies act both as a state entity and as an editor that can promote and censor content without liability. The court, however, did not find sufficient evidence for this specific claim.
On the other hand, Section 230 shields web platforms from being held liable for user-generated content, and the First Amendment’s free speech guarantees prevent the government from ordering online companies to erase or restrict specific speech. It is essential to note, however, that Section 230 also permits internet platforms to moderate their own content, meaning they can remove or restrict material that violates their terms of service or community standards. In other words, internet platforms can impose their own criteria for expression on their services, subject to generally applicable laws such as anti-discrimination statutes.
Section 230 has been a crucial legal provision for the growth of the Internet and online communication. Debates over the appropriate balance between protecting free speech online and holding platforms accountable for harms caused by user-generated content will likely continue, both in the Supreme Court and beyond. Reforms to Section 230 may be necessary to address these concerns and to ensure that online platforms are responsible stewards of online speech and content.