Policy Issues / Healthy Communities

Primer: A Society in Flux: Copycats, Creativity, and the Future of Artificial Intelligence Governance

AI policy cannot be left to those who build or profit from the algorithms alone. It must rest on law, transparency, and respect for human reason.

I. Introduction: A Civilizational Transition in Technology

Artificial intelligence (AI) is a transformative force reshaping economies, politics, and even the basic patterns of human life. Advocates of rapid adoption often speak in terms of efficiency, productivity, and global competition. Yet, when examined closely, the present trajectory of AI reveals a more complicated and unsettling picture. Labor markets are being disrupted, with workers in virtually every industry facing displacement. Whether these transformations arrive at an unstoppable pace, as some contend, or unfold over decades, as others predict, social bonds are already being strained as human relationships are supplanted by algorithmic simulations. Mental health is at risk as individuals report cognitive dulling and emotional disorientation after extended engagement with generative systems. And at the cultural level, the very idea of shared truth is threatened by the rise of synthetic media that can effortlessly mimic human voices, images, and narratives, creating what has been called the “largest intellectual property theft in American history.”1

II. The Legislative and Executive Foundations of AI Governance

A. Congressional Action: The AI Accountability and Personal Data Protection Act

Policymakers have begun to recognize that a proper response to this new landscape requires more than financial optimism or industry self-regulation. Senators Josh Hawley and Richard Blumenthal have introduced the AI Accountability and Personal Data Protection Act, which would prohibit companies from training artificial intelligence systems on copyrighted works or personal data without express prior consent.2 The bill would create a new federal tort prohibiting the appropriation or exploitation of an individual’s personal data or copyrighted works without such consent.3 It defines “covered data” broadly to include personal information, biometric identifiers, and creative works, along with AI-generated content that imitates or derives from them.4 Training a generative AI system on such material without permission would constitute misuse and give rise to tort liability.5 To enforce these protections, the bill would create a private right of action in federal and state court, with remedies including compensatory damages, treble profits, punitive damages, injunctive relief, and attorneys’ fees.6 The bill also makes clear that arbitration clauses and class-action waivers cannot block individuals from seeking redress, ensuring that Americans can collectively defend their rights against large technology firms.7

B. Executive Policy: OMB Leadership and Responsible Procurement

At the executive level, the administration has also taken important steps to establish a framework for responsible AI use. In April 2025, the Office of Management and Budget (OMB) issued a memorandum that set requirements for federal agencies acquiring AI systems.8 The memo instructs agencies to include contractual protections against misuse of government data, ensure that privacy safeguards are in place from the beginning, avoid vendor lock-in by encouraging competition and interoperability, and implement ongoing performance monitoring and sunsetting reviews for underperforming systems.9 It also directs agencies to prioritize American innovation by buying American-made AI.10 OMB’s guidelines recognize that building out AI in government requires trust, accountability, and responsible oversight.

C. Policy Philosophy

These dual policy tracks show that legislation in Congress and procurement reform in the executive branch are complementary: They establish the principle that AI must be disciplined by consent, accountability, and transparency. Yet there remains a broader question about the direction AI is pushing our society.

III. The Technological Crossroads: LLMs, ART, and the Future of Creativity

Most of today’s systems are large language models (LLMs), trained to predict the next word in a sequence or to replicate patterns from massive collections of human-generated data. They are powerful mimics, capable of producing convincing prose, images, and sounds. But they do not reason, hypothesize, or innovate. They are not instruments of discovery but, more often than not, engines of regurgitation and copycatism. If America builds its AI future solely on these copycats, we risk constructing a society of mimicry rather than progress, of derivative outputs rather than creative inputs.

A. ART as an Alternative

One possible alternative is the use of Automatic Reasoning and Tool (ART) models. Such systems do not merely imitate existing human work; they can engage in reasoning, hypothesis testing, and the intelligent use of external tools. ART models can partner with human beings to solve pressing challenges in medicine, energy, and governance. A sound policy approach would regulate misuse while channeling investment, incentives, research, and governance toward reasoning systems, whether LLM, ART, or otherwise, that genuinely expand human capacity rather than encourage redundancy, and would ensure that the United States stays ahead in AI power and economic competition with other countries.

B. The Broader Policy Vision: Human Dignity and Trust

Good public policy can build on the accountability platform being proposed by Congress and the procurement standards established by OMB and extend them into the broader cultural and civilizational context. The consent-first framework of the aforementioned bill recognizes that individuals should not have their work or their personal lives expropriated without permission. The procurement safeguards from OMB show that the government itself understands the need for intellectual property protection, privacy-by-design, competition, and monitoring in order to preserve public trust. The next step is to apply these principles across society. That means ensuring that AI policy protects work and vocation, not just in creative products but in all forms of labor. It means attending to the mental health consequences of engaging with generative systems and regulating their use in multimodal settings. It means recognizing the threats to community and family life posed by AI companions and social simulations. And it means guarding against synthetic media that manufactures “alternative” realities, including deepfakes that could be used to generate fake evidence in legal settings, destroying “truth” as we know it.11

The stakes are civilizational. Policy must ask whether technology will deepen human dignity or hollow it out. Will AI sharpen human cognition or dull it? Will it reinforce community or substitute simulation? Will it strengthen the integrity of truth or diffuse it through fabrication? The fight over copyright and data rights is about the kind of society Americans will inhabit in the coming decades. On that front, “the law of copyright has developed in response to significant changes in technology.”12 

Proponents of AI technology are eager to accelerate its adoption, and it is vital not to lose sight of the need for responsible safeguards. Fortunately, the administration’s own procurement rules already demonstrate a wise recognition of risk and a constructive approach to management. By building on those rules and supporting legislative reforms, the United States can pursue an AI policy that does more than advance software and algorithms; it can secure a future where technology is accountable, work is dignified, communities are strong, truth is preserved, and innovation takes the form not of mimicry but of real reasoning and problem-solving.

IV. The Copyright Law Surrounding AI

The intellectual property clause in the U.S. Constitution protects human creativity by granting authors limited exclusive rights in original works of authorship to “promote the Progress of Science and the useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.”13 Congress implemented that constitutional command through the Copyright Act of 1976, which confers on authors the exclusive rights to reproduce, distribute, publicly perform, and display their works, and to prepare derivative works.14 Copyright protections extend to the “original works of authorship,”15 and originality and human authorship are the twin pillars of those protections. Copyright exists not to reward labor for its own sake but to encourage the production of creative works for public benefit by protecting original expression.16 The “sine qua non of copyright is originality,” which must contain “at least some minimal degree of creativity.”17 Modern copyright jurisprudence has limited copyright to human beings through the human-authorship requirement as the bright line rule.18 LLMs recombine human expression at scale and often produce outputs that imitate recognizable expressive arrangements rather than merely evoke unprotectable “style,” placing such outputs at risk of being treated as unauthorized “derivative works.”19 The doctrinal line between mimicry and independent creativity tracks the basic premise that copyright protects expression and not ideas or styles.20 Even commentators who foresee qualitatively different future systems acknowledge that today’s AI falls short on agency and continual learning; that gap is precisely why a policy fix can protect human authorship now.21

V. Why Copyright and Data Rights Matter

The copyright fight is about not only authors’ or artists’ work but also nearly every aspect of the society America will inhabit in the AI age:

  • Work and Vocation: AI displaces both blue- and white-collar workers, undermining the meaning of work as vocation and contribution.

  • Cognition and Mental Health: Generative AI fosters dependency, dulls resilience, and carries documented risks of anxiety and despair.

  • Human Relationships: AI companions and algorithmic environments encourage isolation, replacing real community with a hollow simulation.

  • A Sense of Shared Truth: Synthetic media corrodes civic life. Deepfakes, algorithmic propaganda, and machine-generated news undermine trust in courts, elections, the media, and educational systems.

A. LLMs or ART: Divergent Paths

LLMs are engines of mimicry. They generate convincing text, images, or sounds by predicting what should come next in a sequence based on patterns in enormous amounts of existing human data. They can draft essays, summarize reports, and even mimic a writer’s voice, but they do not understand the meaning of what they produce. Their “intelligence” is statistical, not conceptual. They do not reason, plan, or verify. In this sense, LLMs are more often mirrors of the past than instruments of discovery. Their dominance entrenches a culture of imitation in a fabricated world in which creativity is replaced by fluent repetition. This distinction matters because imitation at scale has social costs. When AI systems simply remix existing human expression, they crowd out the creative labor that copyright law was designed to protect. The market fills with synthetic material that competes directly with authentic human work, hollowing out both professional opportunity and cultural originality. Left unchecked, reliance on LLMs risks civilizational stagnation and an economy and culture optimized for regurgitation rather than invention.
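The statistical, non-conceptual character of this "intelligence" can be made concrete with a toy sketch. The model below is a deliberately simplified illustration, not how production LLMs are built (they use neural networks over billions of parameters), but it captures the core point: prediction from observed patterns, with no understanding of meaning.

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str) -> dict:
    """Count which word follows which in the training text -- pure pattern recall."""
    words = corpus.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows: dict, word: str) -> str:
    """Return the statistically most likely next word. No reasoning occurs here."""
    if word not in follows:
        return "<unk>"
    return follows[word].most_common(1)[0][0]

# The "model" can only ever echo arrangements present in its training data.
model = train_bigram("the law protects the author and the law guides the law")
print(predict_next(model, "the"))   # "law" -- the most frequent follower
```

Scaled up by many orders of magnitude, this is why such systems produce fluent text yet cannot verify, plan, or originate: the output is always a recombination of what was already there.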

ARTs (automatic reasoning and tools), by contrast, are emerging forms of AI designed not to mimic but to reason. Instead of predicting words, they can plan steps toward a goal, use external tools such as databases and calculators, and adjust their reasoning when new information appears. In short, they combine language understanding with problem-solving and verification, capabilities that more closely resemble human thought. Early examples of this direction are beginning to take shape. DeepMind’s AlphaGeometry uses symbolic reasoning to prove geometric theorems, demonstrating genuine problem-solving instead of statistical guessing.22 Anthropic and others have developed “tool-use” models that can browse the web, perform calculations, generate images, and even test their own hypotheses before producing an answer, demonstrating the shift from pure language mimicry toward reasoning and verification.23 These are early examples, but they show how AI can move our society, technology, and economy forward and not just imitate the past.
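The plan, tool-use, and verification loop described above can be sketched in miniature. The "tool" here is a toy arithmetic evaluator, not any vendor's actual API; the point is the structure: the agent delegates the computation to an external tool and checks the result rather than guessing a plausible-looking answer from text patterns.

```python
import ast
import operator

# Toy external tool: safely evaluate an arithmetic expression.
SAFE_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
            ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculator(expression: str) -> float:
    """Walk the parsed expression tree, allowing only basic arithmetic."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in SAFE_OPS:
            return SAFE_OPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expression, mode="eval"))

def answer_with_tool(question: str) -> str:
    """Plan -> call tool -> verify -> answer, instead of predicting likely text."""
    expr = question.removeprefix("What is ").rstrip("?")  # plan: extract the task
    result = calculator(expr)                             # act: delegate to the tool
    assert calculator(expr) == result                     # verify: re-check the result
    return f"{expr} = {result}"

print(answer_with_tool("What is 12 * (3 + 4)?"))   # 12 * (3 + 4) = 84
```

A real ART-style system replaces the toy calculator with databases, search, theorem provers, or simulators, but the policy-relevant distinction is the same: the answer is computed and checked, not merely imitated.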

Protecting human creativity through copyright enforcement and consent-based data use would discourage the industrial-scale imitation that fuels AI development and instead incentivize the creation of reasoning systems that add real value. Strong copyright rules and strategic federal investment can steer the market away from models that exploit human work without permission and toward those that genuinely extend human capacity.

Only by protecting creativity, enforcing intellectual-property rights, and directing public and private investment toward reasoning systems can America capture AI’s real potential. The goal is not to stop progress but to ensure that progress takes the form of discovery, not duplication.

B. The Civilizational Stakes

The central questions are not technical at all; they are about society as a whole:

  • Will work retain dignity or be hollowed out? AI has accelerated job displacement across creative, clerical, and analytical fields by replicating human output at negligible cost. Writers, designers, translators, and even coders now compete against models trained on their own work.24 An estimated 13.7 percent of U.S. workers report having lost a job to a robot or AI-driven automation.25 Given the rapid advancements in AI now underway, that number is expected to reach 30 percent by 2030.26 When human labor becomes raw input for machine reproduction, work loses its moral dignity and economic worth. Labor is no longer valued for creativity or skill but for being replaceable, infinitely sampled, and automated. Over time, this erodes the “social contract” between effort and reward, weakening the foundation of middle-class life.

  • Will cognition be sharpened or deadened? Constant reliance on generative AI dulls human reasoning. LLMs offer effortless answers that discourage reflection, synthesis, or verification. Studies already suggest that overreliance on AI tools impairs attention, memory, and problem-solving.27 The more people offload thought to predictive systems, the more cognition atrophies, especially in younger people.28 Intellectual effort, which is what used to define human learning, is displaced by machine prediction, creating a generation of users adept at prompting but not at thinking.

  • Will community bonds be deepened or replaced by simulations? AI companions, chatbots, and algorithmically curated platforms simulate empathy while isolating users from real relationships. By offering emotionally responsive but artificial interactions, LLM-driven systems substitute connection with mimicry, producing the illusion of friendship without accountability or intimacy. This deepens loneliness and can cause actual harm, especially among young men and the elderly,29 and frays the social fabric that depends on genuine reciprocity. Communities built around shared humanity are replaced by algorithms.

  • Will truth be defended or dissolve into synthetic noise? LLMs can generate endless plausible falsehoods involving deepfakes, fabricated news, counterfeit documents, and synthetic evidence.30 As these systems flood the information space, it becomes harder to distinguish authentic journalism, legal records, or historical facts from machine invention. The Department of Homeland Security has warned that synthetic media and deepfakes pose significant risks to trust in courts, elections, and even identity verification for national security.31 The foundation of a free society, shared reality, could erode into noise and suspicion. Whoever controls the algorithms controls the narrative; whoever controls the narrative controls belief itself.
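The authentication problem raised in the last point is tractable in principle. The sketch below is an illustrative toy, loosely in the spirit of content-credential efforts (the key handling is simplified; real systems use public-key infrastructure rather than a shared secret): a publisher signs the exact bytes of a piece of media, and any later alteration breaks verification.

```python
import hashlib
import hmac

# Hypothetical signing key for illustration only; real provenance systems
# (e.g., content-credential standards) use per-publisher certificates and PKI.
SECRET_KEY = b"publisher-signing-key"

def sign_media(media: bytes) -> str:
    """Produce a provenance tag bound to the exact published bytes."""
    return hmac.new(SECRET_KEY, media, hashlib.sha256).hexdigest()

def verify_media(media: bytes, tag: str) -> bool:
    """True only if the bytes are unchanged since signing."""
    return hmac.compare_digest(sign_media(media), tag)

original = b"authentic courtroom photo bytes"
tag = sign_media(original)
print(verify_media(original, tag))                     # True
print(verify_media(b"deepfaked variant bytes", tag))   # False
```

No scheme of this kind proves content is true, only that it has not been altered since a known party signed it; that is precisely why disclosure and authentication mandates for high-stakes contexts matter as policy, not just as engineering.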

Copyright and data rights are the front lines of this societal choice. They determine whether society values human labor and originality or reduces them to free raw material for machines.

VI. Policy Recommendations

The following section addresses several of the concrete legal and governance challenges raised in this paper, including copyright uncertainty, procurement standards, synthetic media authentication, and the need for consent-first data practices, but it does not address every structural or societal issue posed by AI. Some of the deeper questions, including long-term labor displacement, the reshaping of human judgment, and the cultural consequences of a society dependent on AI, require further discussion and policy development. We do not argue for one AI model or architecture over another; rather, we note that there is room for robust public discourse on this subject. These recommendations, therefore, represent a foundation of actionable steps that Congress and the executive branch can take now while leaving space for the ongoing deliberation, debate, and innovation necessary to meet the larger challenges ahead.

A. Legislative Reform and AI Copyright Jurisprudence

Congress should consider legislation that protects human dignity and hard-working Americans. One example could be the AI Accountability and Personal Data Protection Act, which would codify a consent-first rule by prohibiting the use of personal or copyrighted data in AI training or generation without express prior consent and by creating a private right of action with robust remedies and no forced arbitration.32 The bill’s structure directly responds to the judiciary’s reaffirmation that only human authors can claim copyright protection: “Congress has the constitutional authority and the institutional ability to accommodate fully the varied permutations of competing interests that are inevitably implicated by such new technology.”33 The D.C. Circuit in Thaler v. Perlmutter underscores why legislation is necessary: As a matter of statutory law, the Copyright Act “requires . . . work to be authored in the first instance by a human being,” and the term “author” refers to people, not machines.34 The court emphasized that expansions of copyright to new technologies occur at Congress’s direction, not by judicial revision of settled terms, which places the onus for modernizing the law squarely on the legislature.35 At the same time, Thaler confirms that human-authored works created with AI tools remain protectable, so long as a person and not the machine is the author of the expression.36

B. Executive Implementations

At the executive level, the OMB memorandum mentioned above requires agencies to embed intellectual-property and privacy protections in AI contracts, avoid vendor lock-in through interoperability, and conduct ongoing performance and risk assessments, providing a governance model that the private sector can emulate.37 Executive Order 14,179, Removing Barriers to American Leadership in Artificial Intelligence, complements that approach by directing agencies to prioritize U.S.-developed AI technologies in procurement, streamline outdated regulatory barriers, and strengthen coordination across agencies to promote domestic innovation.38 The order aligns with the White House’s AI Action Plan, which outlines steps to advance trustworthy AI, safeguard democratic institutions, and protect against synthetic media and algorithmic misinformation.39 

The administration’s Genesis Mission, launched by executive order on November 24, 2025, provides a model for how federal leadership can channel AI toward reasoning and discovery rather than imitation.40 By directing the Department of Energy to build the American Science and Security Platform, an integrated system for AI-driven hypothesis testing and autonomous experimentation, the order operationalizes a national framework for ART models.41 This initiative demonstrates how government can invest in AI systems that expand human capacity and scientific progress while maintaining security, transparency, and accountability.

At the same time, the executive branch should respect the judiciary’s role in clarifying unsettled areas of copyright law. Absent a federal statute, the administration should firmly reject efforts by technology companies to interfere in active litigation or shield commercial AI training practices from judicial review. Copyright law has long evolved through case law in response to new technologies. It is the role of courts and not industry pressure to determine the legal boundaries of how copyrighted material may be used in AI development.

C. Proposed Actions

Several policy recommendations can be based on this foundation:

1. Congress can enact a national consent-first standard for the use of creative or personal data in AI training and generation to preserve human authorship and autonomy, ensuring that legislative reform aligns with Thaler’s recognition that only Congress may expand copyright’s scope.42

2. Policymakers can prioritize disclosures for AI-generated media in high-stakes contexts—elections, courts, and journalism—so that synthetic content can be authenticated at scale.43

3. OMB’s procurement template—IP clauses, privacy-by-design, interoperability, and performance monitoring—can be extended as best practices for government agencies.44

4. Congress can clarify that AI mimicry of protected expression with outputs that substantially replicate the selection and arrangement of an author’s creative elements falls within the definition of “derivative works” under 17 U.S.C. § 101, ensuring that imitation cannot substitute for genuine creativity. Recognizing AI-generated mimicry as a derivative work under the Copyright Act strengthens the consent-first framework at the core of responsible AI policy. Under 17 U.S.C. §§ 101 and 106(2), the right to prepare derivative works belongs exclusively to the original author, and unauthorized recasting or adapting of protected expression constitutes infringement. Treating AI outputs that replicate or closely imitate creative expression as derivative therefore would reinforce the principle that human creators, not algorithms, retain control over how their works are used and adapted. This approach would not inhibit innovation but channel it: When reproduction without consent carries legal risk, investment flows naturally toward AI systems that reason, hypothesize, and generate new ideas rather than merely repackage existing ones.

5. AI policy stakeholders can establish requirements for explainability and auditability so that AI systems can be understood, verified, and trusted. Modern AI models, especially large-scale neural networks, operate as “black boxes”: They generate outputs through internal processes that are opaque even to their own developers. Their reasoning cannot be traced step by step, their decision pathways cannot be independently validated, and their internal logic cannot be inspected for consistency or safety. This opacity creates profound risks in national security, law enforcement, infrastructure, and other critical areas. Without a method to audit how a model reached a conclusion, institutions cannot reliably distinguish accurate reasoning from error, bias, or manipulation. Courts have recognized that unexplainable or untestable algorithmic decision-making can raise constitutional concerns when affected individuals cannot meaningfully challenge the basis of an automated determination.45

6. Both public and private investment can prioritize AI that reasons and partners with human beings so that innovation favors genuine problem-solving over mimicry.46
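The auditability requirement described in the list above has a simple engineering counterpart. The sketch below is illustrative only (the rule name and eligibility thresholds are hypothetical, not any agency's actual system): every automated determination is logged with its inputs, the rule applied, and the outcome, so a reviewer can later reconstruct and challenge it, which is exactly what the due-process concerns in the cited case law demand.

```python
import datetime

# Inspectable audit trail of every automated determination.
AUDIT_LOG: list[dict] = []

def auditable(rule_name: str):
    """Decorator that records inputs, rule, and outcome for each decision."""
    def wrap(fn):
        def inner(**inputs):
            outcome = fn(**inputs)
            AUDIT_LOG.append({
                "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "rule": rule_name,
                "inputs": inputs,
                "outcome": outcome,
            })
            return outcome
        return inner
    return wrap

@auditable("benefits-eligibility-v1")   # hypothetical rule identifier
def eligible(income: int, dependents: int) -> bool:
    # A transparent, testable rule -- unlike an opaque model score.
    return income < 30_000 or dependents >= 2

print(eligible(income=25_000, dependents=0))   # True
print(AUDIT_LOG[-1]["rule"])                   # benefits-eligibility-v1
```

For a hand-written rule the log is trivially explainable; the policy point is that black-box neural models cannot currently offer an equivalent step-by-step account, which is why explainability must be imposed as a requirement rather than assumed as a property.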

This unified framework, grounded in statute, reinforced by judicial precedent, and informed by executive policy, offers a path for AI governance that protects authors, preserves what is real, and channels technological power toward reasoning rather than replication.

VII. Conclusion: Truth, Power, and Freedom of Thought

America faces a choice. We can allow copycat systems to dominate, consigning society to mimicry, labor displacement, and cultural erosion, or we can build a framework that protects human dignity. Several government policy fronts have already shown that accountability is possible. The task now is to extend those protections from government to all of society. The stakes are clear: AI is about not only efficiency or innovation but also whether technology strengthens our society or hollows it out.

AI policy should encompass not only technical rules and economic structures but also the deeper questions of how AI shapes human judgment, civic life, cultural meaning, and national strength. It should include governance, ethics, investment, research, education, workforce transition, security, competition, and the preservation of human agency. Above all, it can concern itself with how a free people choose to direct powerful technologies to elevate human creativity, protect individual dignity, and strengthen the institutions that sustain self-government.

We all have personalized algorithms. Words matter. The way you see the world now depends on the machine curating it for you. If you only follow Congressman Hakeem Jeffries on social media, for example, and that is the only way you read or consume any kind of news, then you will likely believe Republicans alone voted to shut down the government in October 2025.47 That’s largely because of your personal algorithm, which is built to give you only one kind of news. That is the algorithm’s power to shape the narrative.

The algorithm is quietly distorting the flow of information, showing you what confirms your views and hiding what might challenge them. It feeds into your fears, your distrust, your habits. It shapes what you think about and how you think about it. AI now enforces this dynamic. It writes the headlines, curates the stories, and even generates the images you see. AI does not just describe the world but defines it. It decides which voices matter, which are buried, and which truths survive the feed. Whoever controls AI controls knowledge. Whoever controls knowledge controls the narrative.

That is why AI policy cannot be left to those who build or profit from the algorithms alone. It must rest on law, transparency, and respect for human reason. Americans can still trust their understanding of human nature and instinct for truth, fairness, and reason, but they cannot trust the conventional wisdom of Washington or Big Tech, those who control and regulate AI and hold the keys to the kingdom of the future economy. A nation that gives up that control of meaning gives up its freedom of thought.

Endnotes

1.  Press Release, Sen. Josh Hawley, Chairman Hawley Exposes Big Tech’s Complicity in Piracy to Train AI Models & Willfulness to Bankrupt U.S. Creative Community (July 16, 2025), https://www.hawley.senate.gov/chairman-hawley-exposes-big-techs-complicity-in-piracy-to-train-ai-models-willfulness-to-bankrupt-u-s-creative-community/.

2.  AI Accountability and Personal Data Protection Act, S. __, 119th Cong. (2025), https://www.hawley.senate.gov/wp-content/uploads/2025/07/Hawley-AI-Accountability-and-Personal-Data-Protection-Act.pdf.

3.  Id. § 3(a).

4.  Id. § 2(4).

5.  Id. § 3(a).

6.  Id. § 3(b)(1)–(2).

7.  Id. § 3(c)(1)–(2).

8.  Russell Vought, Dir., Office of Mgmt. & Budget, Exec. Office of the President, Memorandum M-25-22, Driving Efficient Acquisition of Artificial Intelligence in Government (Apr. 3, 2025), https://www.whitehouse.gov/wp-content/uploads/2025/02/M-25-22-Driving-Efficient-Acquisition-of-Artificial-Intelligence-in-Government.pdf.

9.  Id.

10.  Id.; see also Removing Barriers to American Leadership in Artificial Intelligence, Exec. Order No. 14,179, 90 Fed. Reg. 8741 (Jan. 31, 2025), https://www.federalregister.gov/documents/2025/01/31/2025-02172/removing-barriers-to-american-leadership-in-artificial-intelligence (highlighting the president’s “Buy American” agenda in artificial intelligence leadership).

11.  THE WHITE HOUSE, America’s AI Action Plan 12–13 (July 2025), https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf (showing that the administration has made it a priority to remove the risk of AI deepfakes being used as synthetic media and being used in the legal system).

12.  Sony Corp. of America v. Universal City Studios, Inc., 464 U.S. 417, 430 (1984).

13.  U.S. Const. art. I, § 8, cl. 8.

14.  17 U.S.C. §§ 101–106.

15.  Id. § 102(a).

16.  Feist Publ’ns, Inc. v. Rural Tel. Serv. Co., 499 U.S. 340, 349–50 (1991).

17.  Id. at 345.

18.  Thaler v. Perlmutter, No. 23-5233, slip op. at 18–22 (D.C. Cir. Mar. 18, 2025).

19.  See 17 U.S.C. § 101.

20.  See Feist, 499 U.S. at 348–51.

21.  Dean Ball, How I Approach AI Policy, HYPERDIMENSIONAL (Sep. 18, 2025), https://www.hyperdimensional.co/p/how-i-approach-ai-policy.

22.  Trieu Trinh and Thang Luong, AlphaGeometry: An Olympiad-Level AI System for Geometry, GOOGLE DEEPMIND (Jan. 17, 2024), https://deepmind.google/blog/alphageometry-an-olympiad-level-ai-system-for-geometry/.

23.  Anthropic, Claude Now Uses Tools to Browse the Web, Do Math, and Generate Images, CLAUDE BLOG (May 30, 2024), https://claude.com/blog/tool-use-ga.

24.  Timothy Prestianni, 59 AI Job Statistics: Future of U.S. Jobs, NATIONAL UNIVERSITY (May 30, 2025), https://www.nu.edu/blog/ai-job-statistics/.

25.  Id.

26.  Id.

27.  Michael Gerlich, AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking, 15 Societies 6 (2025), https://mdpi-res.com/d_attachment/societies/societies-15-00006/article_deploy/societies-15-00006-v2.pdf.

28.  Id.

29.  Claudia Cox, Three Young Men Have Now Taken Their Lives After Disturbing Messages with AI Chatbots, THE TAB (Sep. 2025), https://thetab.com/2025/09/03/three-young-men-have-now-taken-their-lives-after-disturbing-messages-with-ai-chatbots; see also Jeff Horwitz, Meta’s Flirty AI Chatbot Invited a Retiree to New York, REUTERS (Aug. 14, 2025), https://www.reuters.com/investigates/special-report/meta-ai-chatbot-death/.

30.  John Villasenor, Artificial Intelligence, Deepfakes, and the Uncertain Future of Truth, BROOKINGS (Feb. 14, 2019), https://www.brookings.edu/articles/artificial-intelligence-deepfakes-and-the-uncertain-future-of-truth/.

31.  U.S. Dep’t of Homeland Sec., Increasing Threats of Deepfake Identities (Feb. 2024), https://www.dhs.gov/sites/default/files/publications/increasing_threats_of_deepfake_identities_0.pdf.

32.  AI Accountability and Personal Data Protection Act.

33.  Sony, 464 U.S. at 431.

34.  See Thaler, slip op. at 16–17.

35.  Id. at 21.

36.  Id. at 18–19.

37.  Vought, supra note 8.

38.  Removing Barriers to American Leadership in Artificial Intelligence, Exec. Order No. 14,179.

39.  THE WHITE HOUSE, America’s AI Action Plan.

40.  THE WHITE HOUSE, Launching the Genesis Mission (Nov. 24, 2025), https://www.whitehouse.gov/presidential-actions/2025/11/launching-the-genesis-mission/.

41.  Id.

42.  AI Accountability and Personal Data Protection Act, S. __, 119th Cong. § 3 (2025); Thaler, slip op. at 21.

43.  THE WHITE HOUSE, America’s AI Action Plan.

44.  Vought, supra note 8.

45.  See Houston Fed’n of Teachers v. Houston Indep. Sch. Dist., 251 F. Supp. 3d 1168 (S.D. Tex. 2017) (holding that the use of a proprietary “black-box” algorithm to make consequential employment decisions plausibly violated procedural due process because affected individuals could not test or challenge the basis of the system’s conclusions).

46.  See Feist, 499 U.S. at 349–50.

47.  Press Release, Rep. Hakeem Jeffries, U.S. House of Representatives, Leader Jeffries on CNN: Republicans Refuse to Reopen the Government Because of Their Unwillingness to Provide Affordable Healthcare (Oct. 23, 2025), https://jeffries.house.gov/2025/10/23/leader-jeffries-on-cnn-republicans-refuse-to-reopen-the-government-because-of-their-unwillingness-to-provide-affordable-healthcare/.