America's approach to governing artificial intelligence has, for the past year, resembled a particularly chaotic relay race. States sprint ahead with legislation. Washington watches, then intervenes. The baton — control over the rules — is now very much in dispute.

The numbers tell the story plainly. More than 100 state AI laws were enacted in 2025 alone. Colorado passed the first comprehensive state AI statute, requiring companies to take reasonable care to prevent algorithmic discrimination. California and Texas each enacted major frameworks that took effect on January 1st of this year. Dozens of other states produced narrower rules on automated hiring tools, facial recognition, deepfakes and chatbots in schools. For a company operating nationally, compliance became a full-time puzzle.

Then, on December 11th 2025, President Trump signed an executive order titled "Ensuring a National Policy Framework for Artificial Intelligence." Its core ambition: pre-empt state AI laws that federal officials deem inconsistent with national policy. The administration had already, in January 2025, rescinded Joe Biden's 2023 AI safety executive order, signalling that Washington's new posture would favour development over precaution. The December order extended that logic — if states were going to slow AI down with their own rules, the federal government would push them aside.

The case for pre-emption is not absurd. A patchwork of 50-plus regulatory regimes is genuinely costly. Multinationals complain, with some justice, that complying with California's transparency rules, Texas's governance requirements and Colorado's anti-discrimination standards simultaneously is expensive and often contradictory. Legal uncertainty chills investment. A single national standard, the argument goes, would let American companies compete more effectively against Chinese rivals unburdened by similar constraints.

But the case against is equally substantial, and rather more uncomfortable for the administration to acknowledge. Federal pre-emption only simplifies the picture if Washington actually fills the gap it creates. At present, there is no comprehensive federal AI law. Congress has tried and failed repeatedly to pass one. The executive order gestures at "national policy" without specifying what that policy is. Pre-empting state rules while offering nothing in their place does not produce clarity — it produces a void.

That void has consequences. Colorado's AI Act, for instance, targets a real problem: automated systems that make consequential decisions about employment, housing and credit can encode historical biases and entrench them at scale. If federal pre-emption blocks Colorado's law before its June 2026 implementation date, and Congress provides no equivalent protection, affected individuals are simply left without recourse. The free market will not spontaneously solve algorithmic discrimination.

There is also a constitutional question that courts will eventually have to answer. Executive orders cannot pre-empt state law in the same way that federal statutes can. The Trump administration's order can instruct federal agencies not to enforce rules that conflict with national AI policy, and it can shape how federal contracts and procurement treat AI governance. But actually displacing state consumer protection or civil rights statutes requires an act of Congress or a Supreme Court ruling. Several of the state laws passed in 2025 are built on existing civil rights and consumer protection foundations, making pre-emption legally complicated even if politically attractive.

The international dimension adds further pressure. The EU AI Act, whose first requirements became applicable in 2025, is gradually tightening its grip on companies selling into European markets. Its full high-risk AI compliance deadline was extended from August 2026 to December 2027 — a concession to industry's practical difficulties — but the direction of travel is clear. European companies and their American competitors selling in Europe will face substantial obligations regardless of what Washington does. Multinationals will therefore maintain compliance infrastructure anyway. The argument that state laws impose uniquely crippling burdens grows weaker when the alternative is not zero regulation but simply fewer American regulators.

What is actually being contested here is not primarily a question of regulatory efficiency. It is a question of values. The Trump administration believes that AI development should be accelerated, that safety concerns are overstated by regulators and that American technological primacy requires deregulatory boldness. Many state legislatures believe, with equal sincerity, that powerful automated systems making decisions about people's lives require democratic accountability, and that Washington cannot be trusted to provide it.

Both positions are internally coherent. The problem is that resolving the tension requires legislation, not executive orders. Without a federal statute setting minimum standards, pre-emption arguments lack a legal anchor. Without political consensus on what those standards should be, legislation will not pass. And without legislation, the courts will spend years adjudicating the boundary between federal and state authority while companies navigate uncertainty and affected individuals wait for protection.

The relay race, in other words, is nowhere near finished. The baton is still being fought over. And the crowd watching — companies trying to plan, consumers hoping for protection, courts preparing for litigation — is growing impatient with a contest that shows no sign of producing a winner.