If Q1 was about federal resets and global divergence, and Q2 brought state‑level accelerations and EU implementation, then Q3 2025 has been a mix of local and global execution.

Here’s a breakdown of the most important developments in AI policy from July to September 2025.

California

  1. California Court Clarifies AI Training Boundaries

In June 2025, a federal court in California ruled that AI developers such as Anthropic can use lawfully obtained books for training purposes under fair use, but that pirated content is not protected. In September, Anthropic agreed to a $1.5 billion settlement and the destruction of the infringing material, marking the largest copyright-related resolution in the AI sector to date.

This settlement is especially noteworthy because it comes at a time when many AI companies are being sued for using copyrighted content without permission. Meanwhile, more media companies and platforms are choosing to partner with the same AI firms to share their data in exchange for a share of the benefits.

Learn More: US Courts Document, The Verge

  2. CPPA Finalizes Automated Decision-Making Regulations

On July 24, 2025, the California Privacy Protection Agency (CPPA) finalized new regulations under the California Consumer Privacy Act (CCPA) specifically governing “automated decision-making technology” (ADMT). These rules require enterprises to be transparent about how automated systems are used and mandate enhanced data governance practices.

Under the regulations, ADMT is broadly defined as any technology that processes personal information to replace or substantially replace human decision-making. In the employment context, this can include application screening tools, performance evaluation analytics, productivity monitoring software, and/or any system used to influence employment decisions such as hiring, promotion, discipline, scheduling, compensation, or termination.

Learn More: CDF Labor Law Blog

National

  1. State Law Moratorium Removed

In a decisive 99–1 Senate vote in July 2025, the proposed 10‑year federal moratorium on state‑level AI laws was stripped from the reconciliation package. This leaves states free to regulate AI independently. I covered this moratorium in my “big beautiful bill” blog post.


This reinforces the fragmented U.S. landscape: federal action remains limited, and states are now more clearly empowered. Expect more state bills and regulatory competition, and watch how tech firms respond to a patchwork of state laws.


Learn More: Time

  2. Federal AI Action Plan Unveiled

In July 2025, the White House released America’s AI Action Plan, laying out over 100 policy actions organized around innovation, safety, and global leadership.

While not strictly a regulation, the plan signals federal priorities and may shape future rule‑making. Track the agencies charged with implementation—these will indicate where rules are likely to land.


Learn More: The White House

Global

  1. UN Takes an Institutional Step

On August 26, 2025, the United Nations General Assembly adopted Resolution A/RES/79/118 establishing two new mechanisms: an independent scientific panel on AI and a global multi-stakeholder dialogue on AI governance.

This represents a significant shift from voluntary statements to institutionalized global governance. Watch how this influences non‑US jurisdictions and whether the US, which has stayed silent so far, becomes more conspicuously absent. I have covered this in detail in my earlier post.

Learn More: UN Resolution

  2. EU Holds Firm on AI Act Implementation Timeline

Despite industry pressure to delay enforcement, the European Commission confirmed in Q3 that no grace period will be granted for the EU AI Act. The legal deadlines set out in the Act remain unchanged: prohibitions on certain AI practices are already in effect, obligations for general-purpose AI models will apply from August 2025, and requirements related to high-risk AI systems will take effect in August 2026. 

While the Commission acknowledged industry concerns and is offering support measures such as an AI service desk and a voluntary code of practice, it emphasized that the legal text and its deadlines are binding and cannot be postponed. Companies should prepare for compliance according to the established timeline, as no formal extension or grace period will be granted.

Learn More: Reuters, Dig Watch

  3. France’s CNIL Releases Practical GDPR Guidance for AI Development

In September 2025, France’s data protection authority CNIL issued comprehensive recommendations to help organizations comply with the GDPR when developing AI systems, particularly where personal data is used for their training. 

The guidance complements the EU AI Act and covers lawful data reuse, transparency, data minimization, and security best practices. Organizations are encouraged to conduct Data Protection Impact Assessments, document their processes, and adopt safeguards such as anonymization, pseudonymization, and regular risk assessments to mitigate privacy risks throughout the AI lifecycle.

Learn More: CNIL Blog
