One year ago the United States Supreme Court heard arguments in Gonzalez v. Google and Twitter v. Taamneh, two cases that drew attention to “Section 230,” the law that represents the surviving elements of the 1996 Communications Decency Act. Its first part, the famous “twenty-six words that created the internet,” states that:
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
Historically, all the way through last year's SCOTUS ruling (or lack thereof), Section 230(c)(1) has been used to create immunity from liability for providers and users of an "interactive computer service" who publish information provided by third-party users. In plain English, internet platforms are not liable for third-party content. Social media companies claim that they have no editorial responsibility for this content.
This week, on February 26, 2024, the Court heard two new cases: Moody v. NetChoice (a.k.a. the State of Florida case) and NetChoice v. Paxton (a.k.a. the State of Texas case). At the center of both are laws that Florida and Texas enacted in response to conservative complaints about “Silicon Valley censorship.” These laws, currently blocked, come at the issue from two different angles: the Texas law prohibits the platforms from removing any content based on a user’s viewpoint, while the Florida law prevents the platforms from permanently barring candidates for political office, as happened to former President Trump after the events of January 6, 2021.
The view of the internet platforms is represented by the two trade associations challenging the state laws: NetChoice and the Computer & Communications Industry Association. They claim that the actions being characterized as censorship are editorial choices protected by the First Amendment, which prohibits government restrictions on speech based on content and viewpoint. The counterargument is that platforms should act as “common carriers,” required to transmit everyone’s messages without discriminating among their users, thereby ensuring that users have access to many points of view.
Last year, federal appeals courts reached conflicting rulings on the constitutionality of the two laws, creating a circuit split, one of the classic routes by which a question reaches the Supreme Court. The Eleventh Circuit upheld an injunction blocking Florida’s law, holding that “Social media platforms exercise editorial judgment that is inherently expressive,” which represents “First Amendment-protected activity.” The Fifth Circuit, on the other hand, reversed a lower court’s order blocking the Texas law, arguing that the platforms’ acts of “censorship” are not speech and are therefore not protected.
As in the cases from last year, the arguments are long and rich in legal detail and nuance. But I would like to remind you that Section 230 has a second part. Section 230(c)(2) provides "Good Samaritan" protection from civil liability when platforms in good faith remove or moderate third-party material they deem "obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected."
Therefore, the same law that exempted the platforms from liability related to user-generated content in last year’s cases also exempts the platforms from liability when moderating content. These two aspects of Section 230 are the fundamental underpinnings of how social media works today.
While it is difficult to infer the final ruling from the questions asked during oral argument, most justices seem to agree that the First Amendment protects the right of platforms to moderate content, much as newspapers make editorial decisions or theaters choose which productions to run. Not only the nature of the ruling but also how broad or narrow it is will have far-reaching consequences beyond the scope of these two cases. History has shown that this Court shies away from fundamentally altering the status quo, deferring instead to Congress for further legislative action in tech policy. It will be instructive to see whether and how the justices drive a wedge between last year’s platform argument of no editorial involvement with third-party content and this year’s position characterizing content moderation as editorial work. Will social media platforms be able to have their cake and eat it too? Most importantly, too broad a First Amendment-based ruling would make any subsequent regulation in this space very difficult.
The justices will continue to wrestle with these arguments, and a ruling is expected before the summer and the end of this term. How much political bias will weigh into their decision remains to be seen, but, once again, we are facing a judicial moment that could alter the course of how social media operates in our country.
With the presidential election just around the corner, the stakes could not be higher.