
A Supreme Court Case that Could Reshape the Internet: Section 230, Terrorism, and Social Media Algorithms



Writer: Nathan Sever

Editor: Natalie Bouzas

Fall 2023


November 13th, 2015: Terrorists storm the Bataclan concert hall on the Boulevard Voltaire in Paris and open fire on the crowd. Momentary confusion gives way to a terrifying realization. Concertgoers rush for the doors, trampling over one another. Eighty-nine people would not make it out. That same evening, suicide bombers detonate explosives outside the Stade de France, and gunmen shoot up bars, restaurants, and other public spaces across the city. With over 130 dead (“Paris Attacks”), the night marked the worst terror attack in the country’s history.


Among the victims of that tragic night was Nohemi Gonzalez, a 23-year-old American student at California State University, Long Beach, studying abroad at the Strate College of Design (Garrity). After ISIS, an Islamic terrorist organization, claimed responsibility for the attacks through a YouTube video, Gonzalez’s father sued Google (YouTube’s parent company), Twitter, and Facebook. The assertion: these companies were liable for aiding and abetting international terrorism by “failing to take meaningful or aggressive action” against terrorists using their services. The Gonzalez family argued, specifically, that Google’s and the social media platforms’ recommendation algorithms had suggested content – including ISIS recruiting videos, fundraising appeals, and propaganda – to users based on their viewing history (“Gonzalez v. Google LLC”). They alleged that Google had thereby assisted ISIS in recruiting members, planning attacks, and pursuing its goal of intimidation.


Gonzalez v. Google, along with its companion case Twitter v. Taamneh, was litigated at the district and circuit court levels before being appealed to the Supreme Court for the term ending in 2023. Both cases concerned highly similar facts (Taamneh arose from a 2017 terrorist attack in Istanbul, Turkey), but Gonzalez raised an added question: whether interactive computer services remain immune when their algorithms make targeted recommendations to users. Taamneh focused solely on online platforms’ potential liability for aiding and abetting.


Section 2333(a) of the Anti-Terrorism Act (ATA) provides that any U.S. national injured by an act of international terrorism may sue to recover the damages they sustain (Johnson and Castro). Section 2333(d)(2) assigns liability to “any person who aids and abets, by knowingly providing substantial assistance… [to] the person who committed such an act of international terrorism.” Judging by the statute’s text, the victims’ claims appear plausible. Twitter (and Facebook) certainly knew that ISIS and other terrorist groups had been using their services for years, and it is reasonable to suspect that those platforms were immensely influential in recruiting new members and spreading intimidation. Above all, there was surely more that these companies could have done to limit terrorists’ access to their platforms – steps that might have made the 2017 attack in Turkey harder to carry out – right?


The issue is not that simple, though. We must next turn our attention to Section 230(c)(1) of the Communications Decency Act, which states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (Johnson and Castro). Essentially, this statute legally distinguishes companies like Twitter (now X), Instagram, and YouTube, among others, from the content their users post. This is why X cannot be held liable for defamation or libel spread by its users, or why Snapchat cannot be held liable when one user harasses another online. The statute provides a shield that allows such companies to host diverse speech and promote new ideas online, while still affording them the power to moderate content as they see fit (“Twitter, Inc. v. Taamneh”). Additionally, Section 230(c)(2) immunizes companies from liability for voluntary, good-faith action taken to restrict access to objectionable content (“Twitter, Inc. v. Taamneh”). This provision, however, is not at issue in Taamneh and Gonzalez.


Putting aside the ATA’s conditions for assigning secondary liability, under Section 230, Twitter cannot be held responsible for even its worst users’ posts and activities, including those of terrorists. As a private entity, Twitter could voluntarily choose to ban such users or remove their content, but it is under no legal obligation to do so.


Resolving the aiding-and-abetting question in Taamneh, Justice Clarence Thomas, writing for a unanimous Court, held that the “nexus” between Twitter and the terrorist attack was so “far removed” that there was no evidence Twitter provided substantial assistance to the specific attack in Turkey (United States, p. 6). Because the ATA assigns liability only for injuries caused by “an act of international terrorism,” Thomas reasoned that it is the act of terrorism itself, not merely the person or party responsible for it, that the defendant must aid and abet (United States, p. 3). The record showed no evidence of Twitter’s involvement in the 2017 attack in Turkey, thereby absolving the company of liability.


The question in Gonzalez then becomes more interesting: are interactive computer services’ algorithms – designed to recommend and direct users to content they might find appealing – also immune under Section 230? The Supreme Court declined to answer. Gonzalez was remanded (sent back) to the Ninth Circuit Court of Appeals for reconsideration under the ATA in light of the Taamneh decision. Judging by the unanimous ruling in Taamneh, however, it seems unlikely that a majority of the Justices would view Gonzalez in a drastically different light – one in which algorithmic recommendations fall outside Section 230 and can create liability under the ATA. Nevertheless, the question remains open, and the issue is unresolved.

        

Perhaps more important than the legal issues in the case itself are its potential ramifications for the future. Specifically, a decision against Google in Gonzalez (or against Twitter in Taamneh) could substantially reshape the internet and contemporary online discourse. Seth Kreimer of the University of Pennsylvania Law School has long observed that online intermediaries like Twitter and Google are the “weakest link” in free speech protection (Lakier and Douek). Because these platforms host such a wide and diverse range of speech, they have little incentive to stand behind any one particular post or speaker. In fact, without Section 230, legal fees are estimated to range from $100,000 to $500,000 per case (Johnson and Castro). If such companies became subject to massive liability for failing to remove a handful of posts (amid the millions created daily), content moderation on online platforms would tighten drastically, which could inadvertently sweep in many kinds of uncontroversial speech.


Such a strong incentive would make online platforms hyper-cautious about the speech they host, leading companies to defer to federal authorities and law enforcement agencies that request the removal of particular content. This dynamic was at the heart of the controversy over how Twitter (now X) removed certain users and limited the reach of certain speech at the request of the Biden Administration to combat COVID-19 “misinformation” (WSJ Editorial Board). Just a few weeks before oral arguments in Taamneh, the House Oversight Committee held a hearing on Twitter’s alleged suppression of the New York Post’s Hunter Biden laptop story at the height of the 2020 presidential election (Lakier and Douek). In these situations, online platforms have a lot to lose and little to gain. An adverse decision in Gonzalez that weakens Section 230 protections would only exacerbate this dynamic. It is easy, therefore, to envision how the fallout of such a revision could hamper journalistic independence or curtail creative freedom in the entertainment industry.

   

In recent years, Section 230 has drawn increasing criticism from both sides of the political spectrum. Liberals decry how the law limits moderation of “hate speech” and misinformation, while conservatives assert that it allows online platforms to “censor” conservative voices without consequence. While Taamneh closes several doors on aiding-and-abetting liability, Gonzalez leaves cracks open: several questions about the scope of Section 230 remain unanswered. Are interactive computer systems’ algorithms protected? What is the government’s role in moderating online speech? Is there a First Amendment issue at play, and how should it be considered? As previously mentioned, the unanimous decision in Taamneh suggests a change in Section 230 protections is unlikely, at least for now. Nevertheless, the debate remains open and will likely continue to grow as the digital “marketplace of ideas” continues to expand.


 

References


Garrity, Patrick, et al. “American Student Nohemi Gonzalez Identified as Victim in Paris


“Gonzalez v. Google LLC.” Oyez, www.oyez.org/cases/2022/21-1333.

Johnson, Ashley, and Daniel Castro. “Overview of Section 230: What It Is, Why It Was Created, and What It Has Achieved.” Information Technology and Innovation Foundation (ITIF), 4 Jan. 2023, itif.org/publications/2021/02/22/overview-section-230-what-it-why-it-was-created-and-what-it-has-achieved/.


Lakier, Genevieve, and Evelyn Douek. “The Amendment the Court Forgot in Twitter v. Taamneh.”


“Paris Attacks: What Happened on the Night.” BBC News, BBC, 9 Dec. 2015.


“Twitter, Inc. v. Taamneh.” Ballotpedia, ballotpedia.org/Twitter,_Inc._v._Taamneh.


United States, Supreme Court of the United States, Thomas, Clarence, and Ketanji Brown Jackson. Twitter, Inc. v. Taamneh, 18 May 2023, www.supremecourt.gov/opinions/22pdf/21-1496_d18f.pdf.


WSJ Editorial Board. “Opinion | The White House and Twitter Censorship.” The Wall Street Journal.



