US President Donald Trump’s administration today released its National Policy Framework for Artificial Intelligence: Legislative Recommendations, a document that reads less like the AI safety blueprints that states are increasingly adopting and more like a playbook for asserting federal control over AI governance. It is part of a coordinated push with congressional allies, most notably Republican Senator Marsha Blackburn, to translate federal preemption of state regulations into law.

“We need one national AI framework, not a 50-state patchwork,” Michael Kratsios, science and technology adviser to Trump, told one publication.

“The administration’s proposal represents a smart starting point for a pro-innovation AI policy legislative framework,” Adam Thierer, a senior fellow at the R Street Institute, said in an interview. “It is now time for Congress to do its job and step up to the plate with a sensible policy framework based on those principles.”

An effort to block fragmented state laws

At its core, the White House framework is an attempt to prevent a state-by-state regulatory system from becoming the default architecture of AI governance. States are already moving ahead, drafting and passing their own AI laws in the absence of federal action. The longer that continues, the more those rules solidify into a fragmented regulatory system that will be difficult to unwind.

The centerpiece of the White House strategy is federal preemption, the legal mechanism that would allow Congress to override state AI laws and establish a single national standard. Blackburn’s companion legislation translates that idea into statutory form, envisioning a single federal rulebook that would replace the emerging patchwork of state policies. If enacted, it would not simply harmonize regulation; it would strip states of primary authority over AI policy and consolidate that power in Washington.
A strategy tailored for a stalled Congress

Congress has spent years debating AI regulation without producing a comprehensive framework. The White House is attempting to break that deadlock by pairing its proposal with children’s online safety measures, one of the few areas where bipartisan agreement is still possible.

The framework organizes its substantive proposals around what it calls the “4 Cs”: children, creators, conservatives, and communities, borrowing from Blackburn’s draft legislation. The first pillar of the framework states, “AI services and platforms must take measures to protect children, while empowering parents to control their children’s digital environment and upbringing.”

The focus on children translates into obligations around safety and exposure to harmful content. The emphasis on creators reflects growing concern over how AI systems use copyrighted material and replicate human likenesses. The inclusion of “conservatives” points to ongoing debates about bias and perceived censorship in AI outputs. And the broader category of communities serves as a catch-all for localized or societal impacts.

Deregulation, with consequences

Throughout the strategy document, the administration stresses that AI policy should be “minimally burdensome,” favoring a lighter-touch approach to regulation. One of the strategy’s pillars states, “The United States must lead the world in AI by removing barriers to innovation, accelerating deployment of AI applications across sectors, and ensuring broad access to the testing environments needed to build world-class AI systems.”

Blackburn’s proposal moves toward a system in which liability plays a central role, opening the door to legal claims against AI developers and platforms when harm occurs. In doing so, it shifts enforcement away from regulators and toward the courts.
A liability-driven system produces standards through litigation rather than rulemaking, and it favors companies with the resources to absorb legal risk, potentially accelerating consolidation in the AI sector.

The First Amendment strategy

One of the most consequential elements of the framework is its emphasis on protecting AI outputs as a form of speech. The administration suggests that certain types of regulation, particularly those that would require altering or constraining outputs, may raise First Amendment concerns.

According to one of the strategy’s pillars, “American creators, publishers, and innovators should be protected from AI-generated outputs that infringe their protected content, without undermining lawful innovation and free expression.” Another pillar states, “The federal government must defend free speech and First Amendment protections, while preventing AI systems from being used to silence or censor lawful political expression or dissent.”

These statements reflect a strategic move to anchor AI policy within constitutional doctrine. If courts accept this framing, it could significantly limit the scope of future regulation, particularly in areas such as misinformation, bias mitigation, and content moderation.

Congress is the weakest link

For all its ambition, the framework depends on a single institution, Congress, which has so far remained divided and slow-moving even as AI technologies race ahead. While the executive branch can set direction, coordinate with allies, and apply pressure through enforcement and funding mechanisms, it cannot, on its own, establish a binding national standard or fully preempt state law. Progressives and Democrats in Congress oppose the effort to create a federal preemption or moratorium on state AI legislation.
They say that any legislation enacted at the federal level, rather than blocking state action, should “focus on setting a strong federal floor of protections, including prohibitions on the most dangerous uses of AI, while preserving state authority to go further in addressing new harms.”

That tension, between a federal framework that overrides states and one that builds on them, is likely to define the next phase of the AI policy fight in Washington.

This article originally appeared on CIO.com.