Three Powerful Ways States Can Combat Big Tech’s Power

The power to regulate the major internet platforms does not belong exclusively to the federal government. The states have many weapons in their arsenals—almost all of which have remained unused. The following brief provides a legal roadmap for state legislatures, attorneys general, and all others eager and willing to respond to Big Tech’s threats to our democracy.

Executive Summary

Over the last few years, Congress has debated legislation to regulate Big Tech, the Department of Justice has filed antitrust suits against major internet platforms, and the Federal Trade Commission has examined the possibility of investigations and enforcement proceedings. These efforts have produced few, if any, tangible results. But they have underscored the need to ensure that the major social media and search platforms refrain from discriminating against individuals on the basis of their religious, social, and political views and provide meaningful disclosure about their opaque and seemingly inconsistent content moderation policies.

Texas and Florida have stepped forward with groundbreaking legislation that directly tackles social media, and industry has challenged these laws in court. The following policy brief outlines the major legal approaches to requiring fair treatment from the dominant social media platforms that are likely to survive court challenges: common carriage and public accommodation law.

In addition, this policy brief examines the regulatory and enforcement proceedings that states can initiate under so-called “Little Federal Trade Commission Acts.” With this approach, state attorneys general or agencies could take action against platforms’ unfair or deceptive trade practices. In particular, in a manner similar to the Federal Trade Commission’s work against “Joe Camel,” states could combat platforms’ failure to disclose their products’ addictive and psychologically injurious effects on children.

I.  Common Carriers and Industries “Affected with the Public Interest”

Starting in the 19th century and throughout the 20th century, states played a large role in regulating communication industries within their borders, notably telegraphs and telephones. States’ regulatory jurisdiction came from their power to regulate the wide category of industries “affected with the public interest.” This judicially created category includes transportation and communication firms known as common carriers.[mfn]Munn v. Illinois, 94 U.S. 113 at 130 (1876).[/mfn] These types of firms, courts reasoned, played a central role in economic and social life. Government, even in the highly business-protective Lochner era, had vast powers to regulate industries affected with the public interest.

Common carriage regulation formed the basis of the public utility model that emerged in the early part of the 20th century.[mfn]See Joseph D. Kearney and Thomas W. Merrill, “The Great Transformation of Regulated Industries Law,” Columbia Law Review 98, no. 6 (October 1998): 1323, 1330-31.[/mfn] That model gives states the power to regulate carrier prices, interconnection between companies, disclosure, and the terms and conditions of service for intrastate communications. For most of the 20th century, intrastate communications operated under this comprehensive regulatory model.

The Telecommunications Act of 1996, as interpreted by the Supreme Court in AT&T Corp. v. Iowa Utilities Board, 525 U.S. 366 (1999), blessed an expansion of federal jurisdiction to regulate intrastate communications. However, when the federal government fails to act, states retain their authority. This dynamic is at work with current internet regulation.

With the Federal Communications Commission’s (FCC) Restoring Internet Freedom Order, 33 FCC Rcd 311 (Jan. 4, 2018), Chairman Ajit Pai withdrew the federal government from common carriage-type network neutrality regulation. Currently, the FCC does not regulate internet communications firms as common carriers but instead imposes minimal regulation via Title I of the Communications Act. Courts, e.g., Mozilla Corp. v. Federal Communications Commission, 940 F.3d 1 (D.C. Cir. 2019), have ruled that because the FCC chose not to regulate the internet, states retain the power to regulate internet communications via their traditional common carriage authority. This is an opening that several states have taken advantage of—most notably Vermont and California by passing network neutrality laws.[mfn]See California Internet Consumer Protection and Net Neutrality Act of 2018, available at https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180SB822 and Vermont Statutes, S. Bill 289 (Act 169), available at https://legislature.vermont.gov/bill/status/2018/S.289.[/mfn]

With the federal government currently staying its hand, states have the power to impose some type of common carriage obligations on the internet and social media platforms. This does not mean that states must impose the full gamut of public utility regulation from rate regulation to universal service. Rather, the states are free to choose from the diverse palette of common carriage regulation.

Specifically, states could simply impose non-discrimination requirements on social media. Non-discrimination is one of the oldest obligations of common carriers and requires them to accept all customers.[mfn]Verizon v. F.C.C., 740 F.3d 623 at 630 (D.C. Cir. 2014) (“By virtue of their designation as common carriers, providers of basic services were subject to the duties that apply to such entities, including that they . . . engage in no unjust or unreasonable discrimination in charges, practices, classifications, regulations, facilities, or services”) (quotations omitted).[/mfn] Airlines and telephone companies still operate under this requirement. In the era of competitive telephony, local telephone companies had to interconnect with all long-distance companies.

Non-discrimination requirements do not mean that firms cannot refuse service to those who refuse to follow reasonable rules (an airline, for example, can remove disruptive passengers), but rather that they must accept all who follow their posted and clearly defined terms of service. And they cannot enforce their rules in ways that discriminate against individuals or groups, nor can they adopt terms of service that would contradict legal obligations for non-discrimination.

As applied to social media, these requirements would likely include prohibitions on de-platforming and discriminatory content moderation.  To adjudicate claims of discrimination properly, such a regime would also have to include disclosure requirements.

There are some drawbacks to this approach. First, there would be First Amendment challenges, but they are unlikely to succeed. While then-Judge Brett Kavanaugh’s dissent in United States Telecom Association v. FCC, 855 F.3d 381 (D.C. Cir. 2017), would suggest that any antidiscrimination requirement on platforms violates the First Amendment, this position is an outlier; cases such as Turner Broadcasting System, Inc. v. FCC, 512 U.S. 622 (1994), and Denver Area Ed. Telecommunications Consortium, Inc. v. FCC, 518 U.S. 727 (1996), suggest the Supreme Court would uphold such a law.

Second, the FCC under the Biden Administration may change course and impose common carrier obligations on the internet pursuant to Title II. Such regulation would arguably preempt state regulation in this area. Third, the statute would have to define discrimination carefully and give plaintiffs access to the data needed to prove allegations of biased treatment.

However, if drafted carefully and framed as a narrow state anti-discrimination statute, such a bill could rest on a wide range of justifications, of which common carrier status is only one. If framed in the language of anti-discrimination, the bill could avoid preemption should the FCC reassert its authority in this area.

Texas’s social media law, H.B. 20, which State Senator Bryan Hughes (R) introduced and Governor Abbott signed into law on September 9, 2021, explicitly relies upon common carriage to impose its requirements (§ 1(3)). It prohibits a social media company from censoring a user “based on the viewpoint of the user or another person; the viewpoint represented in the user’s expression or another person’s expression; or a user’s geographic location in this state or any part of this state” (§ 143A.002). The law serves as a good model for the common carriage approach and would likely withstand constitutional challenge for the reasons set forth above.

II. Public Accommodation

Another legislative approach to responding to social media platforms would rely upon public accommodation law. Public accommodation regulation is very similar in effect to the common-carrier-type statute described above but has a different legal foundation. States have the power to declare businesses, both online and brick-and-mortar, to be places of public accommodation that cannot discriminate on the basis of political party or affiliation. States can impose this requirement on online platforms and businesses that offer intrastate services.

The states and the federal government enacted public accommodation laws in the 1950s and 1960s, and municipalities and other local government entities have adopted them in recent years. These laws helped end Jim Crow segregation and discrimination in business and other everyday aspects of life. They have two essential features: (1) designating a business or other entity as a “public accommodation” that must offer its services without discrimination and (2) defining the protected characteristics on the basis of which a public accommodation may not discriminate.

First, as a general matter, a public accommodation is a business or other private organization or entity that offers goods or services to the general public and typically includes restaurants, hotels, stores, gas stations, theaters, libraries, buses, banks, and stadiums. The 1964 Civil Rights Act defines the term as “any inn, hotel, motel, or other establishment which provides lodging to transient guests, . . . any restaurant, cafeteria, lunchroom, lunch counter, soda fountain, or other facility principally engaged in selling food for consumption on the premises, including, but not limited to, any such facility located on the premises of any retail establishment; or any gasoline station; any motion picture house, theater, concert hall, sports arena, stadium or other place of exhibition or entertainment.”[mfn]Prohibition against discrimination or segregation in places of public accommodation, 42 U.S.C. § 2000a (1964).[/mfn] States, however, have expanded the list to include insurance companies, all retail establishments, and a host of other businesses and activities.[mfn]States can be quite broad in the use of the term. For instance, New Jersey defines the term in the following way: “A place of public accommodation” shall include, but not be limited to: any tavern, roadhouse, hotel, motel, trailer camp, summer camp, day camp, or resort camp, whether for entertainment of transient guests or accommodation of those seeking health, recreation, or rest; any producer, manufacturer, wholesaler, distributor, retail shop, store, establishment, or concession dealing with goods or services of any kind; any restaurant, eating house, or place where food is sold for consumption on the premises; any place maintained for the sale of ice cream, ice and fruit preparations or their derivatives, soda water or confections, or where any beverages of any kind are retailed for consumption on the premises; any garage, any public conveyance operated on land or water or in the air or any stations and terminals thereof; any bathhouse, boardwalk, or seashore accommodation; any auditorium, meeting place, or hall; any theatre, motion-picture house, music hall, roof garden, skating rink, swimming pool, amusement and recreation park, fair, bowling alley, gymnasium, shooting gallery, billiard and pool parlor, or other place of amusement; any comfort station; any dispensary, clinic, or hospital; any public library; and any kindergarten, primary and secondary school, trade or business school, high school, academy, college and university, or any educational institution under the supervision of the State Board of Education or the Commissioner of Education of the State of New Jersey. N.J. Stat. Ann. § 10:5-5.[/mfn]

Second, while the 1964 Civil Rights Act prohibits discrimination in places of public accommodation on account “of race, color, religion, or national origin,”[mfn]42 U.S.C.A. § 2000a.[/mfn] many states include broader categories. Eighteen states prohibit discrimination based on marital status, twenty-five on sexual orientation, twenty-four on gender identity, twenty on age, three on veteran status, six on military status, and five on pregnancy or childbirth.[mfn]“State Public Accommodation Laws,” National Conference of State Legislatures, updated June 25, 2021, https://www.ncsl.org/research/civil-and-criminal-justice/state-public-accommodation-laws.aspx.[/mfn] Notably, several jurisdictions, such as the District of Columbia, arguably California, and the city of Seattle, prohibit discrimination on the basis of “political affiliation.”

Public accommodation laws typically provide two kinds of protections: (1) they prohibit discrimination in the provision of service on the basis of protected characteristics such as race, gender, religion, sexual orientation, or disability; and (2) they prohibit third parties from discriminatorily interfering with someone seeking to patronize a business.

Five states already explicitly apply their public accommodation laws to online services,[mfn]California, Colorado, New Mexico, New York, and Oregon.[/mfn] and many other states’ laws would arguably apply online. According to a recent study, seventeen states are likely to apply their statutes to online firms, but their courts have not yet addressed the legal question.[mfn]“State Public Accommodation Laws,” National Conference of State Legislatures.[/mfn] Therefore, it would be consistent with existing law for a legislature to make social media platforms places of public accommodation that cannot discriminate on the basis of protected characteristics, including political affiliation or belief.

The First Amendment limits applications of public accommodation law. The Supreme Court has said that public accommodations laws “are well within the State’s usual power to enact when a legislature has reason to believe that a given group is the target of discrimination, and they do not, as a general matter, violate the First or Fourteenth Amendments.”[mfn]Boy Scouts of Am. v. Dale, 530 U.S. 640 at 658 (2000).[/mfn] But when the application of a public accommodation law would impose a “serious burden” on an organization’s rights of expressive association, the First Amendment forbids the law’s application. The line between those activities the First Amendment protects and those it does not is not always easy to draw: the Boy Scouts and the organizers of an Irish-American parade enjoyed protection from application of the public accommodation laws,[mfn]Boy Scouts of Am. v. Dale; Hurley v. Irish–American Gay, Lesbian and Bisexual Group of Boston, Inc., 515 U.S. 557 at 581 (1995).[/mfn] but the Jaycees and Rotarians did not.[mfn]Roberts v. United States Jaycees, 468 U.S. 609 (1984); Board of Directors of Rotary Int’l v. Rotary Club of Duarte, 481 U.S. 537 (1987).[/mfn]

In deciding whether an entity has First Amendment protection against application of public accommodation laws, courts balance the associational interest in freedom of expression against the state’s interest in nondiscrimination. It is not clear to what degree the First Amendment protects social media firms from carriage requirements, as courts have not explicitly weighed platforms’ expressive rights against general anti-discrimination requirements. On the other hand, common carriers, which are often communications networks, generally do not enjoy strong First Amendment protections.[mfn]F.C.C. v. League of Women Voters of California, 468 U.S. 364 at 378 (1984) (“Unlike common carriers, broadcasters are ‘entitled under the First Amendment to exercise the widest journalistic freedom consistent with their public [duties]’”) (quotation omitted); Henry H. Perritt, Jr., “Tort Liability, the First Amendment, and Equal Access to Electronic Networks,” Harvard Journal of Law & Technology 5, no. 2 (Spring 1992): 65, 67 (“If the network holds itself out as a common carrier, it is treated as a conduit and does not have First Amendment rights of its own.”).[/mfn] Further, social media companies often claim that their content moderation and de-platforming decisions receive liability protection from section 230(c)(1), which shields firms from liability for content spoken or published by third parties. It is not clear how social media platforms can claim First Amendment protection for editorial control over content while simultaneously disclaiming that content as the speech of third parties.

III. Unfair and Deceptive Trade Practices and Consumer Protection

Another way to address Big Tech abuses would be for states to bring unfair and deceptive trade practice actions to require disclosure and to make platforms uphold their representations and promises about fairness, freedom, and the lack of censorship on their platforms. For years, Facebook, Twitter, and the other major platforms have promised their users free speech and fair treatment. Relying upon these representations, individuals and firms built public personae and businesses. The platforms then changed the rules and de-platformed tens of thousands of users, essentially eliminating billions of dollars of value created by these firms and individuals.

Further, as social scientists such as Jonathan Haidt have long documented, social media inflicts psychological harm on children.[mfn]Greg Lukianoff and Jonathan Haidt, The Coddling of the American Mind: How Good Intentions and Bad Ideas Are Setting Up a Generation for Failure (New York: Penguin Press, 2018).[/mfn] Recent evidence shows that at least some of the platforms knew about the harm they were causing, but, like cigarette companies and “Joe Camel,” they continued to market to children.

The Federal Trade Commission Act prohibits “unfair or deceptive acts or practices in or affecting commerce.”[mfn]15 U.S.C. § 45(a)(1).[/mfn] Over forty states have laws that mirror the FTC Act’s protections, the so-called “Little FTC Acts.” The wording of these laws typically copies the FTC Act, the Uniform Deceptive Trade Practices Act, or the Uniform Consumer Sales Practices Act. While Little FTC Acts often proceed from common law concepts, they usually allow causes of action that expand on historical fraud or misrepresentation actions. For unfair and deceptive practices committed in the conduct of trade or commerce, they offer a range of potential remedies, including actual damages, enhanced damages, injunctive or declaratory relief, attorneys’ fees, court costs, and rescission.

The “Little FTC Acts” allow the states, like the federal government, to take action specifically to protect children. Advertising and marketing to children are judged under a more protective standard that recognizes children’s limited ability to distinguish true from false and to make reasoned decisions.[mfn]Roscoe B. Starek, III, “The ABCs at the FTC: Marketing and Advertising to Children” (speech at the Minnesota Institute of Legal Education, July 25, 1997), https://www.ftc.gov/public-statements/1997/07/abcs-ftc-marketing-and-advertising-children.[/mfn] The FTC used its authority to regulate advertising to children in the famous Joe Camel complaint.[mfn]Federal Trade Commission, “Joe Camel Advertising Campaign Violates Federal Law, FTC Says,” press release, May 28, 1997, https://www.ftc.gov/news-events/press-releases/1997/05/joe-camel-advertising-campaign-violates-federal-law-ftc-says.[/mfn]

The social media and internet platforms market to and serve children. They extract valuable information and data from children in exchange for entertainment that can often be addictive. As leading psychologists and social scientists, such as Jonathan Haidt of New York University and Jean M. Twenge of San Diego State University, have demonstrated, social media has harmed our children and has been linked to epidemic levels of depression, mental illness, loneliness, low school performance, and other harms.[mfn]Jonathan Haidt and Jean Twenge, “Social Media Use and Mental Health: A Review” (unpublished manuscript, 2021), New York University, https://docs.google.com/document/u/1/d/1w-HOfseF2wF9YIpXwUUtP65-olnkPyWcgF5BiAtBEy0/mobilebasic#h.xi8mrj7rpf37.[/mfn] Enticing children to engage on social media through marketing, without parental approval, is an unfair trade practice. Similarly, now that reports have emerged that platforms such as Facebook knew of the harm they were causing, the responsibility, even culpability, of the platforms is increased.[mfn]“Facebook’s Documents About Instagram and Teens, Published,” Wall Street Journal, September 29, 2021, https://www.wsj.com/articles/facebook-documents-instagram-teens-11632953840.[/mfn]

States could use their Little FTC Acts, perhaps with some amendments, to prohibit tech platforms from arbitrarily applying or selectively enforcing their terms of service against users and from abusively marketing their goods to children. In doing so, states could make real improvements in children’s lives.

Such consumer protection laws would be consistent with the interpretation of Section 230 that Justice Clarence Thomas and many lower courts have favored. They would not hold platforms liable for hosting third-party content, or treat them as publishers or speakers. They would simply ensure that platforms do not moderate content deceptively or otherwise in bad faith—in harmony with subsection (c)(2)’s existing limitations.

Whether new legislative language is needed would vary from state to state given the specific wording of each state’s Little FTC Act. Consider Illinois’s law:

(a) A person engages in a deceptive trade practice when, in the course of his or her business, vocation, or occupation, the person:

(1) passes off goods or services as those of another;

(2) causes likelihood of confusion or of misunderstanding as to the source, sponsorship, approval, or certification of goods or services;

(3) causes likelihood of confusion or of misunderstanding as to affiliation, connection, or association with or certification by another;

(4) uses deceptive representations or designations of geographic origin in connection with goods or services;

(5) represents that goods or services have sponsorship, approval, characteristics, ingredients, uses, benefits, or quantities that they do not have or that a person has a sponsorship, approval, status, affiliation, or connection that he or she does not have;

(6) represents that goods are original or new if they are deteriorated, altered, reconditioned, reclaimed, used, or secondhand;

(7) represents that goods or services are of a particular standard, quality, or grade or that goods are a particular style or model, if they are of another;

(8) disparages the goods, services, or business of another by false or misleading representation of fact;

(9) advertises goods or services with intent not to sell them as advertised;

(10) advertises goods or services with intent not to supply reasonably expectable public demand, unless the advertisement discloses a limitation of quantity;

(11) makes false or misleading statements of fact concerning the reasons for, existence of, or amounts of price reductions;

(12) engages in any other conduct which similarly creates a likelihood of confusion or misunderstanding.[mfn]815 Ill. Comp. Stat. Ann. 510/2(a).[/mfn]

Certainly, legal action against Big Tech platforms’ discrimination and deceptive advertising to children could be brought under subparagraph (5), which covers businesses that “represen[t] that goods or services have sponsorship, approval, characteristics, ingredients, uses, benefits, or quantities that they do not have,” or under the catch-all subparagraph (12).

Conclusion

Recent debates concerning the reform of Big Tech have occurred mostly within Washington, D.C., circles. So far, these debates have failed to yield reform of Section 230 of the Communications Decency Act, successful antitrust actions, or effective FTC investigations or enforcement proceedings. Particularly in light of the complexity of the challenges Big Tech poses, policymakers must be willing to pursue novel approaches and use all available avenues.

State regulation offers a powerful way for policymakers to respond to internet platforms’ discriminatory de-platforming and content-moderation decisions. The well-established legal principles of common carriage and public accommodation offer a basis for states to place non-discrimination and disclosure requirements on the major internet platforms. In addition, unfair trade practice laws offer an avenue for regulatory scrutiny of the platforms’ false or misleading public representations concerning the nature of their offerings, as well as of their failure to disclose their products’ harmful effects on children.