With the conclusion of the 2024 U.S. presidential election, significant changes are on the horizon for AI regulation, impacting how AI is developed and governed. In this blog post, I explore how Trump’s return to the White House could reshape AI regulations based on expert analyses from recent articles.
Trump’s AI Stance and National Priorities
AI technologies were at an early stage during Trump’s first presidency (2017-2021), but AI capabilities have expanded dramatically in recent years, meaning Trump’s policy framework must adapt to new realities. His rhetoric has shown a mix of awe and caution about AI’s evolution. In a June 2024 interview, for example, he called AI a “superpower” with potentially “alarming” capabilities and emphasized the importance of the U.S. maintaining a competitive edge over China.
Undoing Biden’s AI Executive Order
In October 2023, President Biden issued an extensive Executive Order on AI regulation aimed at safeguarding civil rights, ensuring privacy, strengthening national security, and fostering competition while simultaneously promoting innovation. It also introduced risk management frameworks and audit and accountability mandates for developers of advanced AI systems.
Trump promised to repeal this Executive Order during his campaign. He views it as a hindrance to innovation and a vehicle for “radical leftwing ideas.”
Dan Hendrycks from the Center for AI Safety has suggested that Trump’s administration might roll back sections focused on racial bias and social equity. But bipartisan elements like national security could be retained or even expanded. Samuel Hammond from the Foundation for American Innovation noted that Trump might preserve parts of Biden’s order involving cybersecurity and risk evaluations.
This shift could accelerate AI growth and competitiveness, but it raises concerns about the risks that come with reduced oversight.
The Uncertain Future of the AI Safety Institute
The U.S. AI Safety Institute (AISI) was founded in late 2023 to guide government efforts on AI safety and collaborate with private companies. While some Republicans view the AISI as a potential obstacle to innovation, there is substantial bipartisan support for its mission, and many Congressional Republicans recognize its role in sustaining U.S. leadership in AI safety.
Furthermore, coalitions of tech companies, universities, and nonprofits, including big names like OpenAI, Carnegie Mellon University, and Encode Justice, are advocating for its continuation. Efforts are already underway to secure legislative support for the AISI.
The AISI could act as a counterbalance to Trump’s deregulation agenda and ensure that some safety measures remain intact.
Divergent Views Within Trump’s Inner Circle
There are notable divisions among Trump’s allies on AI regulations:
- Pro-Innovation and Minimal Regulation: Vice President J.D. Vance and venture capitalists like Peter Thiel and Marc Andreessen advocate for reduced oversight to foster innovation. Vance has dismissed existential AI risks as overblown.
- Safety-Focused Views: Elon Musk, a significant Trump supporter and CEO of Tesla and xAI, has consistently voiced concerns about AI’s existential risks. He has estimated that there is a 10-20% chance that advanced AI could “go bad.”
Whether Trump’s administration aligns with Musk’s safety-oriented approach or Vance’s pro-innovation stance will have a major influence on the future of AI regulation.
Navigating Open-Source AI and Policy Dilemmas
The issue of open-source AI poses another challenge for Trump’s administration. Open-source models can drive innovation, but they also present misuse risks. Recently, Chinese researchers reportedly adapted earlier versions of Meta’s Llama model for military purposes.
The GOP is divided on this issue: some members advocate for open-source models as a tool for rapid innovation, while others call for restrictions to prevent misuse by adversaries like China.
Trump has taken a tough public stance on China. In the past, he has balanced strict export controls with strategic negotiations with China to serve U.S. interests, and he may extend a similar transactional approach to AI.
The Drive for Economic Growth and Technological Leadership
The Trump administration’s focus on economic growth and maintaining U.S. technological dominance will shape AI policies.
Trump has hinted that environmental regulations could pose obstacles to AI development. His administration may look to ease such rules to facilitate infrastructure expansion. Dean Ball, a research fellow at George Mason University, noted the likely emphasis on building more data centers and boosting chip production to support AI growth.
Trump has criticized aspects of the CHIPS Act, which incentivizes U.S. semiconductor manufacturing. But most analysts believe he is unlikely to repeal it. Strengthening export controls to limit China’s access to advanced chips is expected to remain a key component of Trump’s strategy. Scott Singer of the Carnegie Endowment for International Peace pointed out that Trump’s administration will likely double down on these controls, reinforcing bipartisan consensus on their importance.
This approach would let U.S. companies operate in a more relaxed regulatory environment, boosting productivity and economic returns, though it has potential downsides, including increased risks of biased outcomes, data privacy concerns, and limited safeguards against the misuse of AI.
The influence of tech industry leaders like Elon Musk points toward fewer regulations and more entrepreneurial freedom. This would benefit private-sector growth but could sideline broader ethical and social considerations.
An Ambitious Yet Uncertain Path
President Trump’s return to office brings a complex and divided AI policy landscape. The tension between fostering rapid growth and ensuring ethical use will be a central challenge for Trump’s AI policy framework.
Trump is likely to prioritize U.S. dominance in AI through deregulation and infrastructure expansion. But bipartisan pressures and national security considerations will influence how much attention he gives to safety and ethical concerns. The international AI community will also observe how these shifts impact global competitiveness and collaborative efforts in AI safety and ethics.
This is a pivotal moment that will define the future of AI policy and regulation on the global stage.