Blackburn Releases Discussion Draft of National Policy Framework for Artificial Intelligence

March 18, 2026

TRUMP AMERICA AI Act Would Unleash AI Innovation While Protecting Children, Creators, Conservatives, and Communities from Harm

WASHINGTON, D.C. – Today, U.S. Senator Marsha Blackburn (R-Tenn.) released a discussion draft of her legislative framework to codify President Trump’s executive order to create one rulebook for artificial intelligence (AI) that protects children, creators, conservatives, and communities from harm while ensuring the United States wins the global race for AI supremacy:  

“Instead of pushing AI amnesty, President Trump rightfully called on Congress to pass federal standards and protections to solve the patchwork of state laws that has hindered AI innovation,” said Senator Blackburn. “Now, Congress must answer his call to establish one federal rulebook for AI to protect children, creators, conservatives, and communities across the country and ensure America triumphs over foreign adversaries in the global race for AI dominance. The TRUMP AMERICA AI Act is the solution America needs.”

TRUMP AMERICA AI ACT

Below is a summary of how The Republic Unifying Meritocratic Performance Advancing Machine intelligence by Eliminating Regulatory Interstate Chaos Across American Industry (TRUMP AMERICA AI) Act would protect the “4 Cs” (children, creators, conservatives, and communities) from exploitation, abuse, and censorship and ensure American AI companies can innovate without cumbersome regulation. This framework includes bipartisan legislation Senator Blackburn has previously introduced, the Kids Online Safety Act and NO FAKES Act, to protect children and creators.

Protecting Children

  • Places a duty of care on AI developers in the design, development, and operation of AI platforms to prevent and mitigate foreseeable harm to users.
  • Sunsets Section 230.   
  • Requires covered online platforms, including social media platforms, to implement tools and safeguards to protect users under the age of 17 against online harms. Specifically, this would:
    • Require covered platforms to exercise reasonable care in the design and use of features that increase minors’ online activity to prevent and mitigate harm to minors (e.g., mental health disorders and severe harassment).   
    • Require covered platforms to provide certain safeguards to minors, such as protections for minors’ data; tools for parents of minors, such as access to minors’ privacy settings; and a mechanism for account holders and visitors to report harm to minors on the platform. 
    • Prohibit covered platforms from conducting market or product research on children under the age of 13 and permit research on those under the age of 17 only with parental consent. 
    • Require covered platforms to provide users notice when using algorithms and permit users to switch to an algorithm that does not rely on user-specific data.  
  • Establishes requirements for companies providing AI chatbot and companion services to protect kids.
  • Enables the U.S. Attorney General, state attorneys general, and private actors to file suit to hold AI system developers liable for harms caused by the AI system for defective design, failure to warn, express warranty, and unreasonably dangerous or defective product claims. If an AI system deployer substantially modifies an AI system or intentionally misuses an AI system contrary to its intended use, the deployer could also be held liable. 

Protecting Creators

  • Makes clear that an AI model's unauthorized reproduction, copying, or processing of copyrighted works for the purpose of training, fine-tuning, developing, or creating AI does not constitute fair use under the Copyright Act. 
  • Protects the voice and visual likenesses of individuals and creators from the proliferation of digital replicas without their consent. Specifically, this would: 
    • Hold individuals or companies liable if they distribute an unauthorized digital replica of an individual’s voice or visual likeness.
    • Hold platforms liable for hosting an unauthorized digital replica if the platform has knowledge of the fact that the replica was not authorized by the individual depicted. 
  • Helps creators, musicians, artists, writers, and others access the courts to protect their copyrighted works if and when they are used to train generative AI models. Specifically, this would: 
    • Promote transparency about when and how copyrighted works are used to train generative AI models by enabling copyright holders to obtain this information through an administrative subpoena.  
    • Ensure that subpoenas are granted only upon a copyright owner's sworn declaration that they have a good faith belief their work was used to train the model, and that their purpose is to protect their rights.  
  • Sets new federal transparency guidelines for marking, authenticating, and detecting AI-generated content. Specifically, this would:
    • Require the National Institute of Standards and Technology (NIST) to develop guidelines and standards for content provenance information, watermarking and synthetic content detection. 
    • Direct NIST to develop cybersecurity measures to prevent tampering with provenance and watermarking on AI content. 
    • Require providers of AI tools used to generate creative or journalistic content to allow owners of that content to attach provenance information to it and prohibit its removal. 
    • Authorize the Federal Trade Commission (FTC) and state attorneys general to enforce the bill's requirements.

Protecting Conservatives

  • Combats the consistent pattern of bias against conservative figures demonstrated by AI systems by requiring third-party audits to prevent discrimination based on political affiliation.
  • Codifies President Trump's executive order preventing woke AI in the federal government by only allowing agency heads to procure large language models that are truthful in responding to user prompts seeking factual information and that are neutral and do not manipulate responses in favor of ideological biases.

Protecting Communities

  • Requires certain companies and federal agencies to issue quarterly reports to the U.S. Department of Labor (DOL) on AI-related job effects, including layoffs and job displacement. 
  • Directs the U.S. Secretary of Energy to enter into agreements with owners and operators of data centers to protect consumers from rate increases and adverse impacts of data center development. If a covered entity declines to enter into such agreements, they will be deemed ineligible for such federal incentives and assistance as the Secretary shall identify. 
  • Establishes an “Advanced Artificial Intelligence Evaluation Program” within the U.S. Department of Energy to evaluate advanced AI systems and collect data on the likelihood of adverse AI incidents, such as loss-of-control scenarios and weaponization by adversaries.

Empowering AI Innovation 

  • Promotes partnerships between government, business, and academia to advance AI research. Specifically, this would:
    • Authorize the Center for AI Standards and Innovation at NIST, which develops guidelines and standards with the private sector and federal agencies.
    • Create new AI testbeds with National Laboratories.
    • Create grand challenge prize competitions to spur private sector AI innovation.
    • Create international alliances on AI standards, research, and development.
  • Establishes the National Artificial Intelligence Research Resource (NAIRR) to remove barriers to essential tools and infrastructure that power artificial intelligence research and development. Specifically, this would:  
    • Make computing resources, massive datasets, and advanced infrastructure required to perform cutting-edge research in AI available to students, researchers, non-profits, small businesses, and academic institutions as a shared resource. 
    • Establish a formal governance structure for NAIRR to oversee operations, manage federal and private resource contributions, select an independent operating entity through a transparent bidding process, and ensure adherence to strict standards of privacy, ethics, scientific integrity, and national security. 
    • Require that NAIRR be built using donated resources from both federal agencies and the private sector.

Click here to read the discussion draft of the TRUMP AMERICA AI Act.  

Click here to read the section-by-section summary of the TRUMP AMERICA AI Act.
