If this quarter is any indicator of the year to come, the future of AI policy is turbulent. The first quarter of 2025 opened a pivotal year: between California’s assertiveness, federal deregulation, and global divergence, the entire landscape of AI governance seems to be shifting before our eyes. California is pushing aggressive privacy and automation policies. Nationally, the U.S. federal government is moving toward deregulation and encouraging innovation. Meanwhile, AI governance continues to evolve globally through new laws, multilateral summits, and growing geopolitical scrutiny.

California

  1. California Attorney General Issues AI Advisories

In January, California Attorney General Rob Bonta issued two advisories concerning AI technologies. One outlines how existing consumer protection, civil rights, and data privacy laws apply to AI systems and emphasizes that developers and users must comply with those regulations to prevent bias and discrimination. The second highlights AI’s influence on healthcare and makes clear that both general and sector-specific state laws concerning patient privacy and autonomy apply to the development, distribution, and deployment of AI systems in the health sector. These advisories signal the state’s commitment to holding AI developers and users accountable under existing and future laws, particularly in high-impact sectors such as healthcare, and to ensuring that AI integration does not compromise legal or ethical standards. It’s fascinating to see California doubling down on healthcare AI regulation at a time when federal policy is pulling back.

Learn more: Arnon, Tadmor-Levy – Quarterly AI Update | Q1 2025

  2. California Civil Rights Council Finalizes AI Employment Regulations

On March 21st, the California Civil Rights Council adopted final regulations governing automated decision-making systems in employment. The rules address the use of AI tools in hiring, firing, promotion, and other employment decisions, requiring employers to test for bias, maintain records, and ensure that AI criteria are job-related and non-discriminatory. They impose new compliance obligations on employers using AI, with the goal of preventing algorithmic discrimination and promoting fairness in employment practices.

Learn more: California Labor & Employment Law Blog – New AI Laws May Go Into Effect As Early As July 1, 2025

  3. California Courts Develop Model Policy for Generative AI Use

In February, the Judicial Council of California previewed a new model policy providing guidelines for court use of generative AI. This includes reviewing AI-generated material for accuracy and disclosing substantial AI-generated content provided to the public. This initiative represents a proactive approach to integrating AI in the judicial system and aims to foster accountability, transparency, and privacy protection.

Learn more: California Courts Newsroom – Council Receives Preview of New Model Policy That Provides Guidelines, Safeguards on Use of Generative AI

National

  1. Executive Order 14179: Trump Administration Resets U.S. AI Strategy

In January, President Trump signed Executive Order 14179, revoking the Biden administration’s AI safety directives. The new EO focuses on removing regulatory barriers and promoting American AI leadership and innovation. It calls for an AI Action Plan within 180 days and emphasizes economic competitiveness and national security. The change in federal administration brings a sharp pivot in federal AI policy. Rather than focusing on precaution and regulation, the administration seeks to increase innovation and speed. While some believe that deregulation can greatly expand the industry’s output and keep America at the forefront of AI development, critics warn it could sideline ethical and safety concerns just as generative AI enters widespread use.

Learn more: White House Executive Order – Jan 2025

  2. FDA Issues Draft Guidance on AI-Enabled Medical Devices

On January 7th, the FDA published guidance on the use of AI-powered medical devices, recommending disclosures on AI model use, data handling, cybersecurity, and model training processes. This is a significant move toward ensuring transparency, safety, and bias mitigation in AI-driven healthcare technologies.

Learn more: Arnon, Tadmor-Levy – Quarterly AI Update | Q1 2025

  3. Court Ruling: No Fair Use for AI Training Using Copyrighted Material

On February 11th, a Delaware federal court ruled against Ross Intelligence in a copyright suit brought by Thomson Reuters, rejecting the argument that training an AI model on copyrighted material constituted fair use. The decision limits AI companies’ ability to use copyrighted materials for model training, especially when they compete directly with the content owners.

Learn more: Arnon, Tadmor-Levy – Quarterly AI Update | Q1 2025

  4. Public Comments Pour in on U.S. AI Action Plan

In February, the White House’s Request for Information (RFI) on the AI Action Plan received over 8,700 public comments from nonprofits, think tanks, academics, and industry groups to inform a national strategy due mid-2025. The high engagement shows public demand for thoughtful, inclusive AI governance and could set the tone for how the U.S. balances innovation with accountability.

Learn more: Inside Government Contracts – March 2025

  5. NIST Launches “AI Standards Zero Drafts” Initiative

In March, the National Institute of Standards and Technology (NIST) launched the Zero Drafts Project, a new effort to develop baseline AI standards covering transparency, synthetic content, testing and evaluation, and terminology. As federal regulation pulls back, these voluntary standards could shape industry norms and future legislation.

Learn more: Global Policy Watch – March 2025

  6. DeepSeek Banned on U.S. Navy Devices

In January, the U.S. National Security Council launched an investigation into DeepSeek, a Chinese AI chatbot, over concerns about data collection and national security. The U.S. Navy banned its use, and allied nations followed with similar restrictions. The episode highlights growing scrutiny of foreign AI tools amid rising tensions between the U.S. and China, and illustrates how AI is increasingly becoming a national security priority.

Learn more: CNBC – U.S. Navy bans use of DeepSeek due to ‘security and ethical concerns’

Global

  1. EU AI Act: Key Provisions Take Effect

On February 2nd, the first provisions of the European Union’s AI Act took effect, including bans on manipulative AI techniques, social scoring, unauthorized facial recognition, biometric categorization, and certain uses of real-time surveillance in public spaces. The EU continues to lead the world in rights-based AI regulation. These rules signal to global developers that ethical design is not optional, and they could affect any company working with European users.

Learn more: Securiti – February 2025

  2. European Parliament Research Center Publishes Report on Algorithmic Discrimination

On February 26th, the EPRC published a report addressing how the EU AI Act and the GDPR interact with respect to processing sensitive personal data for bias monitoring, endorsing such processing under “substantial public interest” exceptions. The report clarifies that bias detection in AI systems may lawfully involve sensitive data, helping operationalize fairness obligations under European law.

Learn more: Arnon, Tadmor-Levy – Quarterly AI Update | Q1 2025

  3. Paris AI Action Summit: Countries Pledge to Develop Inclusive AI

In February, over 100 countries gathered at the Paris AI Action Summit, where 58 nations signed a declaration for inclusive and sustainable AI development. However, the U.S. and U.K. did not sign, citing concerns over national sovereignty and security. Vice President JD Vance gave a speech at the summit reiterating the administration’s focus on an innovation-first AI policy. While the summit reflects a broad global push for shared AI governance, it also exposes widening ideological divides, with some countries declining to sign on.

Learn more: Wikipedia – AI Action Summit

All in all, Q1 showed just how fragmented and fast-moving AI policy has become. It is clear that we are headed into a year where local, national, and international debates over AI ethics, rights, and security will only get more tense and unpredictable.
