The Complicated Politics of Trump's New AI Executive Order
January 29, 2026
By: Vikram Venkatram, Mina Narayanan, and Jessica Ji
The administration’s attempt to suppress state AI regulation risks legal backlash, bipartisan resistance, and public distrust, undermining the innovation it seeks to advance.
The Trump administration has released an executive order (EO) aiming to preempt states’ ability to regulate artificial intelligence (AI). The order challenges the constitutionality of states’ AI laws, withholds federal funds from states deemed to have onerous AI regulatory regimes, facilitates the drafting of national AI standards, and directs the creation of a federal AI policy framework that could eventually become law. It could chill state AI regulatory activity while simultaneously provoking an onslaught of legal challenges from states and intense bipartisan backlash, dimming the prospect of turning a preemptive national policy framework into federal law.
The move comes as states develop legislation to address AI risks salient for many voters: child safety, mental health, labor and economic concerns, and energy demands, to name a few. At least twice, policymakers have tried and failed to enact a moratorium that would bar states from enforcing AI laws for a predetermined number of years, whether by conditioning access to federal funds or through other means. The new executive order goes even further by directing a task force to challenge state AI laws and to facilitate the creation of a national AI framework. The EO appears to be the administration's attempt to overcome strong opposition across party lines to preempting state AI legislation, opposition that reflects a shared prioritization of, and demand for, AI governance.
A History of Failed Efforts to Block State AI Laws
Prior attempts to restrict state AI legislation took several forms. First, Senator Ted Cruz (R-TX) championed adding a 10-year moratorium to President Donald Trump’s expansive tax and immigration bill last summer. The moratorium cleared the House but not the Senate after outcry from lawmakers and consumer protection groups. Although some similar language made it into the July AI Action Plan, the White House’s AI strategy document, the Plan’s policy recommendations were non-binding. Most recently, a rumored attempt to include a moratorium in the National Defense Authorization Act was swiftly blocked by leaders of the Armed Services Committees to speed reconciliation after the government shutdown in the fall. The administration seemingly resorted to an executive order after failing to clear these legislative hurdles.
The primary reason for federally preempting state AI laws appears to be the administration’s bet that AI deregulation will continue to juice the economy and maintain America’s global dominance in AI. AI companies and some political supporters are wary of overregulation, pointing out that trigger-happy lawmakers could cripple the trajectory of American AI development by creating a fragmented regulatory environment that slows both AI development and adoption. Regulation skeptics hope that if states hold off on passing AI laws, the AI industry can continue its unfettered race toward the technology’s promised benefits.
Fractures Within the Republican Coalition on AI Regulation
The failure of multiple efforts to block state AI laws before the December EO raises interesting questions about the different factions shaping AI policy. AI regulation has emerged as a wedge issue among Republicans, dividing even President Trump's base and its ideological allies. Republicans who are anti-Big Tech (such as Senator Josh Hawley [R-MO]), pro-states' rights (such as former Representative Marjorie Taylor Greene [R-GA]), or focused on child safety (such as Senator Marsha Blackburn [R-TN]) are hesitant to completely nullify states' ability to regulate a powerful consumer technology. Others worry that the executive order would consolidate power under David Sacks, the White House's Special Advisor for AI and Crypto. Steve Bannon, influential among some Trump voters, opposed the moratorium twice on the grounds that it overly empowers Big Tech.
Republican lawmakers are not alone. Conservative and liberal legislators alike are increasingly concerned about the concentration of power in the tech industry and recognize the need to protect the US populace from existing and future AI harms. While members within this coalition may differ on which AI risks to prioritize, they acknowledge the value of governance in shaping the trajectory of AI systems.
At present, states are leading in developing legal AI governance mechanisms. The fact that AI-related bills have already passed in several states (Colorado, Texas, California, and Tennessee, to name a few), with New York likely soon to follow, reflects growing momentum and demand that will be difficult to stop, even with a comprehensive moratorium.
The Limits of Federal Leadership Through Executive Action
President Trump claims the EO will establish federal leadership in AI regulation, echoing critics who argue that a patchwork of state-level laws will cause corporate confusion and impede AI innovation. However, in the absence of federal laws, states are driving the development of effective AI governance that may actually stimulate innovation, benefit US national security, and build foundational AI governance infrastructure. Section 8 of the EO partly tries to reconcile this by exempting from federal preemption state laws related to child safety, AI compute and data center infrastructure incentives, and government procurement and use of AI. These exemptions read as an attempt to appease critics of a blanket ban on state laws and to acknowledge the importance of these topics to the administration's Republican allies.
However, critics will likely view this olive branch as deeply inadequate. Lawmakers of all political stripes are concerned about many issues beyond those exempted in the order. Codifying a federal AI framework based on the EO will still likely face opposition in Congress, ultimately undermining a key pillar of the order.
The Political Risks of Preempting the States
Beyond Congress, restricting states' ability to regulate AI could be politically costly. Polling shows that US citizens across the political spectrum broadly support AI regulation and oppose barring states from enacting it. Preemption could stoke voters' growing fears about job losses from AI, risks to children and vulnerable groups, the energy demands of data center buildouts, and the ceding of too much power to the tech industry. If Americans feel that the administration is moving full steam ahead on AI without establishing necessary guardrails, they may be reluctant to adopt the technology and realize its full potential, undercutting the administration's desire to supercharge innovation. Ultimately, the order may prove more of a political liability than an innovation booster.
Hobbling states’ ability to govern themselves will likely have significant negative political repercussions. The question is: how much is the administration willing to risk in order to get its way on AI?
About the Authors: Vikram Venkatram, Mina Narayanan, and Jessica Ji
Vikram Venkatram is a research analyst at Georgetown University’s Center for Security and Emerging Technology (CSET), specializing in emerging issues in biotechnology. His work on biosecurity, dual-use neuroscience, and global surveillance has been featured by the U.S. Department of Defense, the Atlantic Council, and TRADOC’s Mad Scientist Network. He previously interned at the Atlantic Council and the U.S. Department of State and holds an M.A. and B.S.F.S. from Georgetown University.
Mina Narayanan is a research analyst at Georgetown University's Center for Security and Emerging Technology (CSET), working on AI governance and safety. Prior to joining CSET, Mina worked at the U.S. Department of State in the Bureau of Consular Affairs. Mina holds a BS in Software Engineering from Auburn University and an MS in Public Policy and Management from Carnegie Mellon University.
Jessica Ji is a senior research analyst on the CyberAI Project at Georgetown University’s Center for Security and Emerging Technology (CSET). Prior to joining CSET, Jessica was a software engineer at Expedia Group. Jessica holds a master’s in Security Studies from Georgetown University and a BA in Computer Science from Princeton University.
Image: TSViPhoto/Shutterstock
