The limits of the liability protections provided by Section 230 of the Communications Decency Act, which generally immunizes online service providers from liability for content generated by their users, have been hotly contested in recent years. Enter Twitter v. Taamneh and Gonzalez v. Google: two cases seeking to hold internet service providers liable under the Anti-Terrorism Act for the activities of their third-party users. After years of criticism of Section 230 by politicians on both sides, these cases gave the Court the opportunity to narrow the scope of the statute and limit the protections it provides to social media platforms for content posted by their users. And while many believed the future of the statute, and indeed of the internet, rested with the Supreme Court and its decisions in these cases, the Court ultimately sidestepped the issue by declining to confront the Section 230 claims altogether. Instead, it found the plaintiffs’ claims under the Anti-Terrorism Act insufficient, freeing Twitter and Google from liability for allegedly failing to prevent ISIS from using their platforms to recruit and spread its message.
Section 230 of the 1996 Communications Decency Act
Section 230 of the 1996 Communications Decency Act shields providers of interactive computer services, most notably social media platforms, from liability for content posted by their users. The provision prohibits such platforms from being treated as the publisher or speaker of information provided by another user. Thus, while an individual could sue a newspaper over an article containing defamatory statements, under Section 230 that individual could not sue Twitter over a post containing the same defamatory language.
The law was enacted in 1996 to encourage internet development while fostering an environment where users could freely connect and express themselves. Over the past two decades, however, the law has drawn criticism from both Republicans and Democrats in Congress. Many Republicans arguing for regulation contend that the law gives tech companies the ability to unfairly censor certain partisan viewpoints, chilling free speech. Democratic lawmakers, on the other hand, argue that Section 230 encourages the dissemination of harmful content while allowing tech companies to avoid accountability. Although more than forty bills aimed at reforming Section 230 have been introduced in Congress since 2020, none has passed.
Twitter v. Taamneh
In Twitter v. Taamneh, the family of Nawras Alassaf – a man killed by ISIS in a 2017 terrorist attack at a nightclub in Istanbul – filed suit against Facebook, Google, and Twitter under Section 2333 of the Anti-Terrorism Act. Alassaf’s family alleged that Twitter, among others, knew that its platform was being used in ISIS recruiting, fundraising, and organizing efforts but failed to take action to prevent such terrorist use. Plaintiffs alleged that Twitter’s inaction amounted to aiding and abetting terrorism and that, as a result, Twitter should be held secondarily liable. Plaintiffs further argued that Twitter’s recommendation algorithm allowed ISIS to reach even more users and thus converted Twitter from a “passive” provider into an “active” abettor.
The Supreme Court held that the mere creation of a social media platform, coupled with general awareness of a bad actor’s presence on it, is not enough to show that Twitter “knowingly and substantially” assisted ISIS in its attack. The Court compared social media to other forms of communication, noting that terrorists could similarly use the phone, email, or the internet to spread their message, yet those service providers would not incur liability or be found to have aided and abetted the terrorists merely by providing their services to the public. Likewise, the Court rejected the argument that the use of algorithms matching users with recommended content converted Twitter into an active abettor.
Gonzalez v. Google
The same day it issued its decision in Twitter v. Taamneh, the Supreme Court decided Gonzalez v. Google, a case with nearly identical facts. The family of Nohemi Gonzalez, a 23-year-old killed by ISIS in a 2015 terrorist attack while studying abroad in Paris, sought to hold Google liable for her death under the Anti-Terrorism Act. Like the family in Taamneh, Gonzalez’s parents claimed that Google, which owns YouTube, aided and abetted international terrorism by allowing ISIS to recruit members and plan terrorist attacks. Their argument rested on the idea that Google was at fault not for allowing the material to exist on its platform, but for serving targeted recommendations of that material in YouTube’s “up next” video feed, thereby assisting ISIS in spreading its message. Unlike Taamneh, however, Gonzalez squarely presented the question whether Section 230 immunity applies when an online platform makes targeted recommendations of content to its users.
Rather than addressing the scope of Section 230 protections, the Supreme Court cited its decision in Twitter v. Taamneh and found there was no need to reach Section 230 because “much (if not all)” of the plaintiffs’ complaint failed under Taamneh and under prior rulings of the Ninth Circuit, which had found that Section 230 would “bar most of the Gonzalez plaintiffs’ claims.”
Understanding the Future of Section 230
By declining to address the scope of Section 230, the Supreme Court avoided interpreting the law either narrowly or broadly, either of which, many feared, would have resulted in an influx of litigation. The decision was celebrated by free speech advocates and others advocating on behalf of technology companies. Despite the rulings, however, the issues surrounding Section 230 are unlikely to go away.
Rather, Section 230 concerns are likely to be pushed to Congress or to state legislatures. Several states, including Texas and Florida, have taken Section 230 issues into their own hands by attempting to ban “viewpoint” censorship, targeting major technology platforms’ alleged suppression of Republican viewpoints. Notably, both the Texas and Florida laws impose a “must-carry” provision requiring social media platforms to carry content regardless of partisan viewpoint, thus restricting the platforms’ ability to moderate user-generated content. Both laws, however, have been blocked as violating the First Amendment rights of the platforms and remain in limbo until the Supreme Court decides whether to hear the cases.
In addition, given the ambiguity that remains around Section 230, there is significant inconsistency among the circuit courts as to the extent of immunity that content providers can expect. Thus, to limit exposure, social media platforms and other internet service providers with existing recommendation algorithms should aim to keep content suggestions neutral. Likewise, providers without such algorithms should carefully consider whether to implement the technology, recognizing the potential challenges that may arise from system-generated suggestions of user-generated content.
The Gordon & Rees Technology Litigation Practice Group would like to acknowledge Summer Associate Isabela Possino, who served as a contributing author of this article.