The Trump administration’s approach to artificial intelligence regulation has fundamentally shifted America’s AI policy landscape, prioritizing innovation and competitiveness over safety oversight. Since taking office in January, President Trump has systematically dismantled predecessor policies while positioning the United States for what his administration frames as a critical technological race with China.
This transformation culminated Wednesday with the release of the AI Action Plan, a 20-page policy document that emphasizes “promoting innovation, reducing regulatory burdens and overhauling permitting” while notably avoiding contentious issues like copyright protections for AI training data. The plan emerged alongside Trump’s speech at a summit featuring leaders from tech companies including Hadrian, Palantir, and Y Combinator.
Understanding how we reached this point requires examining the administration’s methodical policy reversals and new initiatives over the past six months. Each action reveals a consistent pattern: removing what the White House characterizes as “unnecessarily burdensome requirements” while accelerating AI development and deployment across government and industry.
*January 23*
On his first day back in office, Trump eliminated President Biden’s October 2023 executive order on AI, which had established extensive safety requirements and oversight mechanisms for AI development. The original Biden order required companies developing powerful AI systems to share safety test results with the government, established standards for AI safety and security, and created frameworks for protecting consumer privacy.
Trump’s replacement executive order stripped away these requirements, focusing instead on sustaining “America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” The new document conspicuously omitted terms that appeared throughout Biden’s order, including “safety,” “consumer,” “data,” and “privacy.”
This reversal signaled a fundamental philosophical shift. Where Biden’s administration treated AI safety as essential for responsible innovation, Trump’s team positioned safety requirements as obstacles to progress. Peter Slattery, a researcher on MIT’s FutureTech team, warned at the time that “the Trump administration’s willingness to overlook the potential dangers of AI could prove to be shortsighted: a high-profile failure could spark a crisis of public confidence, slowing the progress that the administration hopes to accelerate.”
The same week, Trump launched Project Stargate, a massive data center investment initiative partnering with OpenAI and several international investors. This $500 billion commitment demonstrated the administration’s focus on expanding AI infrastructure rather than regulating its development. Meanwhile, several AI companies, including Anthropic, quietly adjusted their public safety commitments to align with the new administration’s priorities.
*February 6 to March 15*
The administration opened a public comment period through the National Science Foundation (NSF), a federal agency that funds scientific research, inviting input on AI policy development. This process attracted more than 10,000 submissions from citizens, corporations, and advocacy groups, revealing sharp divisions within the AI industry about appropriate regulation levels.
OpenAI advocated for minimal copyright enforcement and exclusively federal regulation, arguing that state-by-state approaches would create compliance burdens that could stifle innovation. The company specifically opposed allowing individual states to create their own AI oversight rules, preferring a single national framework.
Anthropic took a different position, urging national testing requirements that would mandate safety evaluations before deploying powerful AI systems. This stance reflected ongoing tensions within the AI industry between companies prioritizing rapid deployment and those emphasizing careful development practices.
These competing visions highlighted a central challenge: balancing innovation speed with adequate oversight in an industry where technological capabilities advance faster than regulatory frameworks can adapt.
*March 3*
The Department of Government Efficiency (DOGE), Trump’s new cost-cutting initiative led by Elon Musk, targeted AI research staff and funding at multiple federal agencies. The cuts particularly affected the US AI Safety Institute (US AISI), a research organization established under Biden’s executive order to develop AI safety standards and testing protocols.
Staff reductions also hit the NSF, which administers grants to universities and colleges for AI research. These cuts alarmed experts about America’s AI talent pipeline, particularly since the administration simultaneously emphasized the need to compete with China in AI development.
The irony wasn’t lost on observers: Trump’s first administration had originally directed the National Institute of Standards and Technology (NIST), which houses US AISI, to focus on emerging AI technologies. Now his second administration was reducing the very research capacity it had previously built.
Universities reported uncertainty about ongoing AI projects as grant funding became less predictable. This created a contradiction within Trump’s broader AI strategy—cutting research investment while demanding technological leadership.
*April 23*
Trump signed two executive orders addressing AI’s impact on the workforce: one establishing AI-focused apprenticeship programs and another promoting AI education from kindergarten through professional development. The education order urged “educators, industry leaders, and employers who rely on an AI-skilled workforce to partner to create educational programs that equip students with essential AI skills.”
However, the Department of Government Efficiency simultaneously eliminated several education grants specifically designed to advance AI literacy in schools. Danaë Metaxa, a professor whose AI education grant was terminated just three days after Trump’s education executive order, highlighted this contradiction on social media: “There is something especially offensive about this EO from April 23 about the need for AI education… Given the termination of my grant on exactly this topic on April 26.”
The disconnect illustrated broader challenges in the administration’s approach—announcing ambitious goals while cutting funding for programs that could achieve them. Private companies increasingly filled gaps left by reduced federal support, offering their own AI training programs to maintain workforce competitiveness.
*June 3*
The administration completed its overhaul of federal AI oversight by renaming and restructuring the US AI Safety Institute. The organization became the “US Center for AI Standards and Innovation” (CAISI), with a mandate focused on promoting industry growth rather than ensuring safety compliance.
Elizabeth Kelly, who had led US AISI under the Biden administration, departed in February and joined Anthropic as head of Beneficial Deployment. Her replacement signaled the administration’s intention to eliminate what it viewed as Biden-era obstacles to AI development.
Commerce Secretary Howard Lutnick explained the transformation: “For far too long, censorship and regulations have been used under the guise of national security. Innovators will no longer be limited by these standards.” CAISI would “evaluate and enhance US innovation of these rapidly developing commercial AI systems while ensuring they remain secure to our national security standards.”
The new organization maintained some testing and standards development functions but eliminated requirements for companies to report safety test results or undergo independent evaluations. This shift meant AI companies would largely self-regulate their deployment practices.
Notably absent from CAISI’s mandate was any mention of “red-teaming”—the practice of deliberately testing AI systems for potential harmful outputs or behaviors. Previous safety frameworks had encouraged or required such testing to identify risks before public deployment.
*June through July*
During negotiations over Trump’s tax legislation, Congress temporarily included provisions that would have prevented states from passing AI regulations for five to ten years. The proposal would have withheld federal broadband funding from states that enacted AI oversight laws, effectively centralizing all AI regulation at the federal level.
This approach aligned with OpenAI’s March policy recommendations but faced resistance from states that had already begun developing their own AI oversight frameworks. California, New York, and several other states had enacted or were considering legislation addressing AI use in hiring, housing, and other consumer-facing applications.
The provision was ultimately removed from the final tax bill, leaving states free to continue developing their own AI regulations. However, the attempt demonstrated the administration’s preference for minimal, federally-controlled oversight rather than a patchwork of state-level requirements that could create compliance complexity for AI companies.
*July 14*
The Department of Defense finalized $200 million in contracts with Google, OpenAI, xAI, and Anthropic, marking a significant expansion of AI integration into military operations. These contracts built on earlier announcements, including Anthropic’s June launch of Claude Gov, a government-specific version of its AI assistant designed for cybersecurity and administrative applications.
OpenAI had similarly announced a comprehensive government initiative in June, consolidating its various federal contracts under a single umbrella program. The military contracts represented a notable shift from previous approaches that emphasized extensive testing and evaluation before deploying AI systems in sensitive applications.
The rapid contract approval process suggested that reduced safety oversight requirements were facilitating faster integration of commercial AI tools into government operations. While military applications typically involve rigorous security evaluations, the lack of transparency around these contracts made it difficult to assess what testing protocols were being applied.
*July 14*
In a significant reversal of both his previous policy and Biden administration restrictions, Trump authorized Nvidia to resume selling advanced AI chips to Chinese companies. These semiconductors provide the computational power necessary for training and running sophisticated AI systems, making them critical infrastructure for AI development.
The policy change, advocated by Nvidia CEO Jensen Huang and AI Czar David Sacks, reflected a strategic calculation: revenue from Chinese sales would fund research and development efforts that could maintain American technological advantages. This approach prioritized economic benefits over the previous strategy of limiting Chinese AI capabilities through export controls.
The reversal demonstrated the administration’s confidence that American AI companies could maintain competitive advantages through innovation rather than restricting competitors’ access to essential technologies. However, it also provided Chinese AI companies with the same advanced hardware that powers American AI systems.
*July 15*
Trump announced a massive $92 billion investment in Pennsylvania’s AI and energy infrastructure, targeting data centers and power generation facilities necessary for large-scale AI operations. The investment reflected the administration’s strategy of concentrating AI development within the United States while creating jobs in key political constituencies.
Data centers require enormous amounts of electricity to power the servers that train and run AI systems. By investing in both computing infrastructure and energy generation, the administration aimed to ensure that American AI companies could scale their operations without facing power constraints that might push development overseas.
The Pennsylvania announcement followed similar infrastructure commitments in other states, suggesting a broader strategy of using federal investment to anchor AI development in American locations while reducing regulatory barriers that might discourage private investment.
The Trump administration’s AI policy approach creates a regulatory environment that prioritizes rapid development and deployment over extensive safety oversight. For businesses, this means fewer compliance requirements but also less government guidance on responsible AI practices.
Companies developing AI systems face reduced federal oversight but must navigate an increasingly complex landscape of state regulations and international standards. The administration’s focus on competing with China may accelerate government adoption of AI tools while creating pressure for faster innovation cycles.
The policy shifts also transfer responsibility for AI safety from government agencies to private companies and industry self-regulation. This approach may accelerate innovation but could leave businesses and consumers with fewer protections against AI systems that malfunction or produce harmful outputs.
As AI capabilities continue advancing rapidly, the administration’s bet on innovation over regulation will face real-world tests. The ultimate measure of success will be whether American AI leadership can be maintained without the safety frameworks that previous policies attempted to establish.