If Q1 was about federal resets and international declarations, Q2 2025 was about implementation and resistance. While the EU AI Act shifted from legislation to execution, the U.S. saw states and local governments step into AI regulation. California, for example, continued asserting its leadership through targeted transparency laws and budget allocations. Meanwhile, Canada created a cabinet-level AI ministry, Japan doubled down on agile governance, and momentum continued for multilateral frameworks. Globally, corporate pushback against compliance complexity grew louder.

Here’s a breakdown of the most important developments in AI policy from April to June 2025.

California

  1. Major Advances in SB 896 Implementation

Passed in October 2024, SB 896 established landmark AI transparency requirements for California state agencies, including mandatory disclosures, risk assessments, and public reporting on high-impact automated decision systems. While the law took effect in January 2025, Q2 brought key implementation milestones.

In April, the state’s Joint Policy Working Group released a draft Frontier AI Policy Report, outlining recommendations for risk mitigation, transparency, and adverse event tracking. A final version is expected by mid-2025. Meanwhile, state cybersecurity agencies launched the first annual risk assessments of frontier AI threats to critical infrastructure. Several departments also began piloting AI-use notices in public-facing communications, as required by the law.

Learn more: California Government: Draft Report on Frontier AI Policy (PDF), Clark Hill: AI Implementation Progress in California

  2. Significant AI Investments in Q2 2025 Budget Revisions

Governor Newsom’s May 2025 revised budget allocated substantial funding to modernize government using AI. This includes $85 million for generative AI tools in areas like health inspections and DMV systems. Newsom also sided with tech lobbying groups, over the objections of a state fiscal advisory agency, in committing $25 million to a Silicon Valley microchip design center in partnership with the federal government. These allocations underscore California’s commitment to building AI capacity in public agencies despite budget shortfalls.

Learn More: CA Budget and Policy Center, Politico: Newsom makes room for AI priorities despite budget woes

  3. City of San José Publishes Updated Generative AI Guidelines

San José’s Information Technology Department released an updated version of its Generative AI Guidelines on April 24, 2025. This is the first major update since the initial 2023 release. The revision strengthens transparency requirements by clarifying:

  1. All AI-generated content and prompts are public records and must be fact-checked and cited.
  2. City staff must log AI usage, attend training sessions, and engage through the GovAI Coalition to share best practices.
  3. Use of generative AI must be tracked via an internal form, and the guidelines will be reviewed quarterly, with a detailed change log included.

Learn More: City of San Jose Gen AI Guidelines

Editor’s Note: I interviewed Ms. Leila Doty, founding member of the San José GovAI Coalition, earlier this year. I’m currently interning with the City’s AI and Privacy team, where I have seen firsthand how these guidelines are evolving. These experiences have deepened my understanding of local government approaches to AI transparency and governance.

National

  1. Federal Preemption Fails

A 10‑year federal moratorium on state and local AI laws passed the House in May as part of the “One Big Beautiful Bill”. As I covered in my post last month, the moratorium was introduced to prevent “patchwork” state rules that could slow down innovation. In early July, the Senate voted overwhelmingly, 99–1, to strip it from the final reconciliation package, preserving states’ regulatory autonomy. A compromise to reduce the ban to five years with exemptions also failed.

With states retaining full authority to enact AI laws, the U.S. AI regulatory landscape remains decentralized, increasing pressure for a coherent federal AI framework.

Learn more: Winston & Strawn: No moratorium on state AI laws in federal budget bill

  2. Executive Branch Eases Procurement Rules

In April, the Office of Management and Budget (OMB) issued Memo M-25-21, relaxing Biden-era AI safeguards. Agencies can now procure “American-made” AI systems with limited documentation or third-party assessment. This is part of the broader deregulatory shift under Executive Order 14192 that I covered in February 2025 here.

Learn more: US Resist News: OMB’s new AI guidance reflects deregulatory turn

  3. Copyright: Courts Affirm Fair Use While Congress Pushes for Transparency

In late June, tensions over AI training data reached a turning point in both the courts and Congress. Two major court decisions affirmed the legal grounding for AI training.

  1. A Northern District of California judge ruled that Meta’s use of copyrighted books that were acquired from shadow libraries constituted fair use.
  2. A separate ruling in favor of Anthropic confirmed that using lawfully obtained books to train AI models also fell within fair use.

Even though these rulings set important precedents for training generative AI systems, the issue is far from resolved.

At a Senate Judiciary subcommittee hearing in mid-June, bestselling author David Baldacci testified that his 44 novels were used to train AI models without consent. “Someone had backed up a truck to my imagination and stolen everything I had ever created,” he told lawmakers. Senators across the aisle echoed concerns and called it “the largest intellectual property theft in American history.”

The courts may have offered AI developers a temporary green light, but Congress is already laying the groundwork for stronger guardrails that could reshape how and what AI systems are allowed to learn.

Learn more: Baldacci’s Senate Testimony, AI Law Digest

Global

  1. EU AI Act: Continued Operational Phase

The EU AI Act continued its transition from law to implementation in Q2. The European Commission published a final Code of Practice for General-Purpose AI, offering a voluntary framework for developers of models like GPT-4 and Gemini. It covers:

  • Copyright transparency
  • Independent risk assessments
  • AI-generated content labeling

Companies that follow the code will benefit from legal certainty ahead of binding enforcement in August 2025. Yet some big European firms like Siemens, SAP, and Airbus have called for revisions, citing compliance burdens.

Learn more: EU unveils AI code of practice for companies, Big firms urge EU to soften AI Act rules

  2. Canada Creates Cabinet-Level AI Ministry

In May 2025, Prime Minister Mark Carney appointed Evan Solomon as Canada’s first-ever Minister of Artificial Intelligence and Digital Innovation. This new portfolio establishes a dedicated cabinet role focused on shaping a national AI strategy, overseeing public sector modernization via AI, and developing ethical frameworks for digital innovation. The minister is expected to release a national framework this fall. It marks a significant shift from AI oversight being lumped into broader industry or innovation ministries.

Learn more: Canada appoints first AI minister

  3. Japan Reaffirms Its “Soft-Law” AI Governance

Japan continues its commitment to a flexible, innovation-first approach to AI regulation. The Act on the Promotion of R&D and Utilization of AI-Related Technologies, passed in late May and effective June 4, serves as a framework law that encourages voluntary compliance, risk mitigation, and cross-sector coordination rather than imposing prescriptive mandates. 

A mid-June Cabinet interim report reinforced this vision, calling for sector-specific rules, ongoing risk monitoring, and international interoperability instead of imposing rigid controls.

Learn more: Japan’s Innovation First Blueprint

  4. Australia Urged to Go Slow on AI Rules

In early June, Australia’s Business Council urged lawmakers to avoid European-style AI regulation and instead focus on digital infrastructure, data strategy, upskilling programs, and targeted frameworks for high-risk AI use. The government is expected to announce its AI roadmap later in 2025.

The BCA advocates for a three-year roadmap starting July 2025 that includes an AI Safety Institute and sector-fit oversight to balance innovation and accountability.

Learn more: BCA’s three year AI plan for Australia
