Web platform legal shield faces crucial test as US Supreme Court prepares for oral argument over content moderation

17 February 2023 15:00 by Madeline Hughes, Mike Swift


In a first-ever review of a foundational law that underpins the modern Internet, the US Supreme Court is poised to hear oral argument about whether social media platforms such as YouTube and Twitter should be held responsible for hosting extremist content.

The outcome of the justices’ decision in the twin cases could put chinks in the broad legal shield interactive online platforms have enjoyed since 1996 under Section 230 of the Communications Decency Act, which says that apart from a few narrow exceptions, online platforms can’t be legally liable for content posted by their users.

The succinct law is so important that one prominent legal scholar described it in a book title as “The Twenty-Six Words That Created the Internet.” Other commentators in the run-up to next week’s oral argument have dubbed Section 230 the “Internet’s Magna Carta.”

Intended to give companies in the early days of the Internet the legal cover they needed to rid their platforms of false, defamatory, violent or copyright-violating content without becoming mired in costly lawsuits, Section 230 has been credited as one reason why multinational social media giants like Facebook, YouTube, Reddit, Snap and Twitter grew to adulthood in the US, rather than in Europe or elsewhere. But the law has never before been tested by the top US court.

As such, the stakes of twin oral arguments the Supreme Court will hear Tuesday and Wednesday in Gonzalez v. Google and Twitter v. Taamneh are high for the US Internet industry, leading some commentators to opine that the Supreme Court could be poised to deliver the most important ruling so far in the 30-year history of the commercial Internet.

Whether that actually happens depends on whether the justices decide to rule broadly or narrowly on Section 230 and on the question at issue in the Twitter case: platform liability under the Anti-Terrorism Act, or ATA. The Supreme Court, of course, has unfettered latitude to go wide or narrow.

Another key aspect to watch next week will be any questions asked by conservative Justice Clarence Thomas, who in 2020 urged his colleagues to look for an opportunity to consider narrowing Section 230 by reversing the “too-common practice of reading extra immunity into statutes where it does not belong”.

The stakes are so high for the platforms because next week’s oral argument will touch on the artificial intelligence algorithms that the likes of YouTube, Instagram and TikTok use to generate revenue by promoting content that attracts and holds the attention of billions of people, like an infinite cloud of moths drawn to a digital flame.

Section 230 directs that no “interactive computer service shall be treated as the publisher or speaker” of content posted by others to that platform, but the justices are expected to grapple with the question of where to draw the boundary between hosting user-generated content and promoting it through algorithmic amplification.

Both the Gonzalez and Taamneh cases take aim at the lax regulation of online spaces: in each, the family of a person killed in an ISIS attack argues the platforms bear responsibility for the death because content promoting the terrorist organization appeared on their websites.

The families of Nohemi Gonzalez, killed in Paris, and Nawras Alassaf, killed in Turkey, will attempt to hold the platforms responsible for that content in back-to-back arguments at the Supreme Court.

The two cases are similar yet markedly different: Gonzalez's family is attempting to hold Google — the owner of YouTube — responsible for the algorithms the family says prioritize extremist content, while Alassaf’s family argues in Twitter v. Taamneh that Twitter is responsible under the antiterrorism law for terrorism-related content posted by users.

Both cases have garnered significant attention, with parties from a variety of backgrounds filing nearly 100 amicus briefs between them.

Gonzalez v. Google

The Google lawsuit centers on the question: Do algorithms perform an editorial function, and if so, would that make platforms that use them liable for promoting certain content? Gonzalez’s family says recommending extremist videos could contribute to radicalization.

The Gonzalez case has garnered the most interest of the two, with 78 amicus briefs submitted by Internet-based companies, trade groups, nonprofits and politicians.

Google argues a ruling against the company would “upend the Internet”. Companies behind online applications such as Match Group and ZipRecruiter, along with other social media websites, agree that algorithms are essential to a functioning Internet where people can sort through the vast amounts of information uploaded to those sites.

“In 2023, the world is on pace to share 120 zettabytes of data online — 60 million times the amount of information stored in every US academic library combined. To deal with that staggering abundance of content, websites use computer programs called algorithms to sift through billions of pieces of content and publish information in a form most useful to particular users,” Google’s lawyers wrote.

The original authors of Section 230, Senator Ron Wyden, a Democrat from Oregon, and former Representative Christopher Cox, a California Republican, agreed with Google, saying the law’s intent was to ensure platforms were not liable for content posted by third parties.

“Whenever a platform’s content moderation is less than perfect, the platform could be said to send an implicit message that users would like to see the harmful content remaining on the site,” Wyden and Cox wrote. “If that were sufficient to deny immunity, platforms would be subject to liability for their decisions to present or not to present particular third-party content — the very actions that Congress intended to insulate from liability.”

Twitter v. Taamneh

Alassaf's family contends Twitter should be held liable under the ATA for aiding and abetting terrorists by hosting their content on its website. The family doesn’t argue that the terrorists involved in killing Alassaf themselves had accounts on the site.

“The complaint asserted that the defendants recommended and disseminated a large volume of written and video terrorist material created by ISIS, and described the nature of that material and the manner in which the defendants thus assisted ISIS’s efforts to recruit terrorists, raise money, and terrorize the public,” according to the plaintiffs’ brief filed with the Supreme Court.

The US Department of Justice has also weighed in on the Taamneh case, saying the US Court of Appeals for the Ninth Circuit erred in holding that the plaintiffs plausibly alleged Twitter knowingly provided substantial assistance to the terrorists behind the Turkey attack.

“Rather, plaintiffs allege that defendants knew that ISIS and its affiliates used defendants’ widely available social media platforms, in common with millions, if not billions, of other people around the world, and that defendants failed to actively monitor for and stop such use. Those allegations do not plausibly allege that defendants knowingly provided substantial assistance to the Reina attack,” it argued in support of reversal.

Both cases reached the Supreme Court via the Ninth Circuit. But the Ninth Circuit didn't address Section 230 liability in the Taamneh lawsuit against Twitter. Instead, the panel found that the Taamneh plaintiffs sufficiently alleged under the ATA that the tech companies’ support for ISIS was “substantial,” even though the companies took steps to remove ISIS accounts and videos.

In its last word on the Taamneh case last week, Twitter told the justices that, under any “commonsense notion” of abetting, it, Google and Facebook cannot be held to have abetted a terrorist act, even if the plaintiffs’ allegation that the companies knew ISIS had accounts on their platforms is accepted as true.

“Plaintiffs likewise do not dispute that no Defendant knew of, yet failed to remove, any account or post used to plan, prepare for, or commit the Reina attack or any other terrorist attack,” Twitter told the justices. “What Plaintiffs allege instead — that Defendants were generally aware that their billions of users included ISIS adherents who were misusing their routine services and that Defendants should have done more to find and prevent that misuse — does not constitute aiding and abetting an act of international terrorism, under the statutory text, common law principles, or any commonsense notion of what it means to ‘abet’ a criminal act.”

One underlying truth the justices will have to confront next week is that the Internet of 1996, when Section 230 became law, was nothing like the Internet of 2023 — an all-encompassing medium where platforms are increasingly the vector of transmission for dangerous conspiracy theories, teenagers’ mental health problems, live-broadcast mass shootings and a growing list of other societal problems.

The buzzing dial-up connections to the primitive online bulletin boards of the 1990s are a far cry from the immersive experience that virtual reality and streaming video are poised to become, if they are not already there. A single video about body image is not likely to cause a teenager to become anorexic, but an algorithmically driven, constant stream of such content directed for months to the Instagram or TikTok feed of someone who shows an initial interest in that content — driven by AI technology that didn’t exist in the mid-1990s — could lead to much more powerful harms.

Whether it decides to draw a clear line on the liability of interactive platforms for algorithmic content moderation or issues a narrow opinion that effectively kicks the issue to US lawmakers, the Supreme Court itself is vulnerable to a key criticism: It should have intervened much sooner than this.
