NetChoice explained: How SCOTUS wisely avoided two extreme approaches to regulating social media

On social media regulation, Supreme Court exercised needed moderation

Last year, Justice Kagan quipped that Supreme Court justices are “not, like, the nine greatest experts on the internet.” In keeping with that humility, Justice Kagan penned a wise and cautious decision for the Court on July 1, 2024, clarifying how the First Amendment applies to social media companies. The bottom line: while platforms have robust First Amendment rights to curate content in their main feeds, they are by no means immune from regulation.

This careful approach should foreclose Orwellian speech diktats while maintaining space for key regulation of large social media platforms. Striking that balance is a win for free speech and our democracy.

Background

After Facebook, then-Twitter, YouTube, and other social media companies deplatformed former President Donald Trump in 2021, Florida and Texas passed laws to restrict large platforms’ content moderation. Florida’s law prohibited large platforms from “censor[ing]” or “disfavoring” posts based on content or source and required platforms to “consistent[ly]” apply their moderation policies. Texas’s law prohibited large platforms from “censor[ing]” a user based on viewpoint. Both laws required platforms to make certain disclosures, including notifying users when they are censored, and enabled users to sue platforms for monetary damages if they violate the restrictions.

The purpose of these laws was no secret. In signing the Florida bill, Governor DeSantis issued a press release, titled “Governor Ron DeSantis Signs Bill to Stop the Censorship of Floridians by Big Tech.” Governor Abbott asserted the Texas bill was a necessary response to “a dangerous movement by social media companies to silence conservative viewpoints and ideas.” Both governors were echoing the state legislators who drafted and passed the bills. 

Following the laws’ passage, NetChoice sued both states to have the bills struck down. The district courts in both cases enjoined the bills. On appeal, the Eleventh Circuit upheld the Florida injunction in a thoughtful opinion, while the Fifth Circuit issued a wild and wildly contrasting decision, holding the platforms have no First Amendment right to moderate content. 

When the cases arrived at the Supreme Court, NetChoice and Florida each took extreme positions. Florida, for example, adopted the Fifth Circuit’s position that platforms’ content moderation is not protected by the First Amendment. At the opposite pole, the platforms argued that the First Amendment bars nearly all regulation of the content platforms carry.

Decision

The decision thankfully rejected both extremes in a unanimous, narrow holding. Technically, the only thing the Court did was hold that neither lower court correctly applied the standard for facial challenges and remand the cases for further development of exactly what the state laws require, so that the platforms’ facial challenges can be resolved. Specifically, the Court held the lower courts must (1) assess the scope of the laws (including the full range of platforms and platform activities they apply to), (2) decide which applications of the laws are constitutional and which are not, and (3) determine whether the laws’ unconstitutional applications substantially outweigh their constitutional applications.

Almost sub silentio, the Court appears to have resolved one significant issue: it held that Zauderer review applies to platform disclosure requirements. Zauderer review is a relatively low standard of First Amendment scrutiny developed to assess commercial disclosure requirements of “purely factual and uncontroversial information” based on whether they are “unduly burdensome.” Although the Fifth and Eleventh Circuits held that Zauderer should apply to the platform disclosure requirements at issue in the NetChoice cases, there were serious questions about whether the Court would agree. The Court did, and that issue is now likely resolved.

Critically, the Court’s opinion did not stop at its narrow holding. Instead, in dicta, the Court offered three guiding principles on how the First Amendment applies in this context: 

  • First, the Court explained “the First Amendment offers protection when an entity engaging in expressive activity, including compiling and curating others’ speech, is directed to accommodate messages it would prefer to exclude.”
  • Second, the Court established that the First Amendment applies even if “a compiler includes most items and excludes just a few.” (The Court did not address whether the First Amendment kicks in if a compiler excludes no items – i.e., unmoderated platforms – but its opinion doesn’t foreclose the Court subsequently concluding they are covered.)
  • Third, there is no legitimate government interest in “forcing a private speaker to present views it wished to spurn in order to rejigger the expressive realm” in the name of “improving, or better balancing, the marketplace of ideas.” Subsumed within that prohibition, “a State may not interfere with private actors’ speech to advance its own vision of ideological balance.”

Foreshadowing how those principles will apply to the NetChoice cases, the Court explained that domestic social media companies are covered by the First Amendment when they “[d]ecid[e] on the third-party speech that will be included in or excluded from a compilation — and then organiz[e] and present[] the included items.” And it concluded that Texas (and we think the same is true for Florida) articulated only an impermissible justification for restricting that protected activity: namely, it “wants [platforms] to create a different expressive product, communicating different values and priorities.” The upshot is twofold: (1) content-moderated feeds such as Facebook’s News Feed and YouTube’s “Home” page are likely generally covered by the First Amendment, and (2) laws like Texas’s and Florida’s that regulate those feeds with the intent of protecting views the government favors in order to “improve” the marketplace of ideas will fall. We say “generally” because the Court hedged.

Impact

The most immediate impact of the Court’s decision is that the core of the Texas and Florida laws will fall. That’s a good thing. Both laws were clearly aimed at propping up conservative speakers and views in ways that violate the north star of First Amendment doctrine — i.e., that “no official, high or petty, can prescribe what shall be orthodox in politics, nationalism, religion, or other matters of opinion.” The decision should establish a bulwark against the government’s most blatant attempts to conscript large social media companies into distributing speech it prefers and restricting speech it does not.

On the other hand, just because the First Amendment generally applies to platforms’ primary feeds does not mean the feeds are unregulable; it only means that regulations must pass constitutional muster. The Court noted that the “whole project of the First Amendment” is to ensure “a well-functioning sphere of expression, in which citizens have access to information from many sources.” Furthermore, the Court specifically noted that “[m]any possible interests relating to social media can meet” the intermediate First Amendment scrutiny test applicable to content-neutral regulations, and nothing in its decision “puts regulation of [social media platforms] off-limits as to a whole array of subjects.” The majority specifically pointed to competition law and alluded to the issue of adolescents’ mental health.

Beyond antitrust and child safety laws, the decision’s broader contours also hold space for future regulation in other areas. Consider a few.

Transparency Regulations. Most straightforwardly, the Court’s determination that Zauderer review applies to disclosure requirements should offer ample space for platform transparency requirements. That is a hopeful development for future transparency policymaking, including Congressional efforts such as the Platform Accountability and Transparency Act.

Public Accommodations Requirements. Another avenue of regulation the decision preserves is applying public accommodation laws to large platforms. Public accommodation laws have traditionally prevented private parties from discriminating against individuals based on protected characteristics, including race, religion, national origin, and disability. Today, large social media platforms are the hubs of our digital democracy, and platforms should not have absolute control over who has access to the digital public square. There may already be some agreement here. Justice Kagan inquired about this point at oral argument, to which Solicitor General Prelogar responded that the United States believes generally applicable public accommodation regulations may be extended to platforms. Interestingly, Justices Thomas and Alito likely agree. Justice Thomas suggested large platforms may be subjected to a broad array of common-carrier regulations, and Justice Alito dropped a footnote making clear he believes preventing invidious discrimination on large platforms is a compelling government interest. There is, however, disagreement on how far such laws may extend, with General Prelogar noting that any platform public accommodation requirements may need to yield to First Amendment concerns when they change a platform’s message.

Candidate Access Requirements. In addition, the decision may enable the government to extend some version of candidate access requirements to the social media realm. Current candidate access laws include requirements that broadcasters grant candidates equal opportunity to use their stations and that newspapers not charge political advertisers more than commercial advertisers or other political advertisers. In the social media context, candidate access laws could offer an important check to ensure candidates are not treated disparately in accessing the largest platforms’ offerings, which are fundamental to campaigning and voter engagement. That said, we strongly believe broadcasters’ candidate access requirements shouldn’t be extended wholesale to platforms. Where the cost of running advertisements naturally limits the potential for abuse of access requirements in broadcast media, social media platforms offer far cheaper ads, costless organic content, and constant contact with the electorate, making any access requirements much more ripe for abuse.

Privacy and Process-Based Restrictions. Broadly, the majority’s dicta offers policymakers strong encouragement to pursue lawmaking driven by interests other than forcing platforms to host content they would otherwise exclude. This should push would-be reformers to redouble efforts that already take this approach, such as passing comprehensive federal privacy reform. At the same time, it raises the question of what else — what government interests and policy approaches — qualifies as one of the “[m]any possible interests relating to social media” that can survive intermediate scrutiny. For those of us engaged in the project of imagining how our online information ecosystem can better “be a well-functioning sphere of expression,” Justice Kagan’s assessment that there are “many possible interests” sounds both a hopeful note and a call to action.

Apart from preserving avenues of regulation, the decision will likely force policymakers to regulate with greater precision in this space. It makes clear the challenge facing policymakers in defining “social media platforms” and identifying with specificity which of the social media giants’ many functions they aim to regulate. It is the very breadth of the Texas and Florida laws’ definitions that all the justices identified as driving the need to remand the cases — in acknowledgement of the need to engage with “the ever-growing number of apps, services, functionalities, and methods for communication and connection” that might be implicated by the state laws. While some applications of the laws may well pass constitutional muster, others will not. Omnibus laws that reach all platforms and activities, like Texas’s and Florida’s, will likely have a harder time withstanding constitutional scrutiny in all of their applications.

Open Questions

While the majority invites future policymaking endeavors, it also leaves open an array of questions lower courts, policymakers, and, eventually, the Supreme Court itself will inevitably have to navigate. 

Perhaps most importantly (and critical to the viability of any future regulation), the decision leaves open the question of what level of First Amendment scrutiny — strict or intermediate — applies to regulations of social media platforms. The Court simply noted that, even assuming without deciding that intermediate scrutiny applies, the Texas law would fail. Where the lower courts and ultimately the Supreme Court land on that question will have an enormous impact on the regulability of social media companies. 

The majority’s dicta raises important line-drawing questions about what constitutes “expressive activity” or an “expressive product” online. The Court explicitly states and repeats that expressive activity includes “presenting a curated compilation of speech originally created by others” and points to platforms’ content-moderation policies as evidence of the “views and priorities” that platforms strive to convey in their “expressive products.”

But what else counts? For example, as the Court itself anticipates, “[c]urating a feed and transmitting direct messages, one might think, involve different levels of editorial choice, so that the one creates an expressive product and the other does not.” According to the majority, “it is not hard to see how the answers might differ” between Facebook’s News Feed and messaging services on whether a regulation violates the First Amendment. But what if a direct messaging service is subject to the same content-moderation standards the Court uses to justify treating platforms’ main feeds as “curated,” as in the case of direct messages on Instagram? Even as to feeds, is a platform’s application of any type of content-moderation regime sufficient to render its moderated products expressive — or is there a minimum bar platforms must clear to demonstrate that a content-moderation regime conveys the platform’s “views and priorities”?

Notably, in a footnote, the majority caveated that it was not addressing whether the First Amendment pertains to “feeds whose algorithms respond solely to how users act online — giving them the content they appear to want, without any regard to independent content standards.” And Justices Barrett and Alito specifically raised the prospect that the First Amendment may well not protect feeds solely moderated to deliver content the user seems to want or that deploy artificial intelligence to moderate feeds.

These are just a few of the many open questions that platforms, policymakers, and lower courts will have to navigate in the coming years.

Causes for Concern

Apart from the open questions, there are some real causes for concern. 

First, under long-standing precedent, the traditional rule is that laws restricting speech based on the viewpoint it expresses are “presumptively unconstitutional.” And in determining whether a law is content- or viewpoint-based, the Court has made clear that a law may be content- or viewpoint-based either on its face or in its purpose. Even if a law is not viewpoint discriminatory on its face, clear indications that it was motivated by a desire to discriminate on the basis of viewpoint typically render the statute viewpoint-based and presumptively unconstitutional.

Yet, in NetChoice, the Court took a different approach. The majority observed that the stated goal of the Texas law’s main sponsor was to stop “West Coast Oligarchs” from “silenc[ing] conservative viewpoints and ideas.” On those grounds, the Court could easily have struck the statute down as an unconstitutional viewpoint-based restriction of speech. But the Court didn’t do that. That is quite concerning, and may suggest a watering down of the strong medicine the Court has traditionally applied to viewpoint-based restrictions of speech.

Second, as noted above, both Justices Alito and Thomas invoked “common-carrier doctrine” and urged the lower courts to consider whether platforms may be considered common carriers. That might be beneficial in some respects, but, as Professor Blake Reid has laid out, there really is no coherent common carrier doctrine ready to be applied to social media platforms. And if courts begin to deploy “talismanic invocations of ‘common carriage’” rather than seriously engaging with the novel nature of digital platforms, they could upend our digital public sphere. 

For example, as the majority points out, enforcing Texas’ prohibition on viewpoint discrimination could well force platforms to carry content platforms regularly exclude, such as posts that “advocate for terrorism” or “encourage teenage suicide and self-injury” — or what some refer to as “lawful but awful.” Imposing First-Amendment-like limits on what content platforms can and can’t remove under the guise of “common carrier doctrine” — as Texas’ law does — would mean platforms “simply will not be able to moderate effectively.” Such an outcome would be bad for large platforms, but worse for our democracy. We hope courts will think long and hard before bluntly invoking “common carrier doctrine” to uphold all manner of government regulation. The far better approach is for courts to assess any regulation of platforms in context, sensitive to the “complex technical, social, and cultural features of the internet and how they interact with our evolving understanding of the First Amendment.”

Takeaway: A Welcome Dose of Humility and Caution

In anticipation of the oral argument, many worried the Court’s decision in the NetChoice cases would “change the internet forever.” It’s safe to say that fate has been avoided: the Court erred on the side of caution and avoided extreme positions, even when proffered by the parties or lower courts.

While admittedly not a conclusive decision, NetChoice walks a careful line at the intersection between the First Amendment and the internet — and in so doing, sounds a hopeful note for this area of jurisprudence. As Justice Kagan acknowledges in the majority, the internet and social media platforms have brought a “dizzying transformation in how people communicate” — and this pace of development and change will likely continue. 

But this change shouldn’t tempt us to pursue policymaking or judicial rulings that neglect the principles surrounding free speech that are bedrock to our democracy — we can’t afford for this to be an all-or-nothing area of the law. While some have read NetChoice as First Amendment law out of control, we read the decision as heeding Justice Robert Jackson’s call to “temper . . . doctrinaire logic with a little practical wisdom.” If nothing else, NetChoice holds space for us to continue to strive towards a place of balance — seeking policies new and old by which to hold platforms accountable for their role as the digital squares of our democracy without throwing out the First Amendment along the way. 
