As we conclude our monthly AI policy updates for 2024, it’s time to reflect on the most significant developments in AI Governance. In this December edition, I will take a broader view and highlight key developments at local, national, and global levels throughout 2024.

Local: California’s AI Regulations & Tech Industry Shifts

During California’s 2023-24 legislative session, Governor Gavin Newsom enacted 17 artificial intelligence laws addressing key issues such as deepfakes, AI watermarking, digital replicas of entertainers, and transparency in training data [4]. Here are the major legislative and industry highlights.

  • The California AI Transparency Act (SB942) [2]
    • This act was introduced in January 2024 and signed into law in September 2024. It imposes civil penalties of $5,000 per day for violations and requires generative AI providers to:
      • Make AI detection tools available at no cost to the user.
      • Provide users with a “latent” disclosure embedded in AI-generated content and an option to include a “manifest” disclosure. Latent disclosures reside in the metadata of AI-generated content, while manifest disclosures must be clearly visible, conspicuous, appropriate, and permanent.
      • Enter into contracts with licensees to maintain latent disclosure capabilities.
  • Generative AI Accountability Act (SB 896) [7]
    • Enacted on September 29, 2024, this law establishes oversight and accountability measures for the use of generative AI in state agencies.
  • Artificial Intelligence Training Data Transparency (AB 2013) [1]
    • Signed in September 2024, this law requires AI developers to publicly disclose details about the datasets used to train their models (effective January 2026).
  • Governor Gavin Newsom vetoed Senate Bill 1047 [1]
    • The vetoed bill aimed to prevent catastrophic harms caused by advanced AI models. I have covered this in detail in my post here.
  • AB-2655 and AB-2355 passed to address AI-generated deepfakes in elections [3]
    • CA AB-2655 (Defending Democracy from Deepfake Deception Act) requires large online platforms to remove materially deceptive AI-generated election content during the 120 days before an election and to provide mechanisms for reporting such content. The act took effect immediately upon signing in September 2024.
    • AB-2355 mandates disclosure of AI-generated or substantially altered content in political advertisements created or distributed by committees. The act takes effect on January 1, 2025.
  • Protection against unauthorized use of digital replicas AB2602 [12]
    • AB 2602 safeguards individuals from the unauthorized creation and distribution of digital replicas, especially in AI-generated content, by mandating clear contractual terms for the use of digital replicas and requiring legal representation for performers.
    • The law prevents misuse in entertainment, advertising, and deepfake technology, ensuring individuals retain control over their voice and likeness.
  • OpenAI and Anthropic Agree to Government Oversight [9]
    • OpenAI and Anthropic entered into agreements with the U.S. AI Safety Institute to allow testing of their AI models before public release. This collaboration aims to enhance safety evaluations and risk mitigation in AI development.
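To make SB 942’s latent/manifest distinction concrete, here is a minimal, hypothetical sketch in Python. The field names (`ai_disclosure`, `provider`) and structure are my own illustrations, not taken from the statute or from any real provenance standard such as C2PA:

```python
# Hypothetical sketch of an SB 942-style "latent" disclosure: a
# machine-readable marker embedded in content metadata, plus the kind of
# free detection tool the act requires providers to offer. Names are
# illustrative only.

def add_latent_disclosure(metadata: dict, provider: str) -> dict:
    """Return a copy of the metadata with an embedded AI disclosure."""
    tagged = dict(metadata)
    tagged["ai_disclosure"] = {
        "provider": provider,
        "ai_generated": True,
    }
    return tagged

def detect_ai_content(metadata: dict) -> bool:
    """A detection tool inspects the latent disclosure, if present."""
    return bool(metadata.get("ai_disclosure", {}).get("ai_generated", False))

meta = add_latent_disclosure({"title": "sunset.png"}, provider="ExampleAI")
print(detect_ai_content(meta))              # True
print(detect_ai_content({"title": "a.jpg"}))  # False
```

A manifest disclosure, by contrast, would be rendered visibly on the content itself (for example, an on-image watermark) rather than hidden in metadata.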

National: U.S. AI Policy & Ethics Developments

In 2024, 693 pieces of AI legislation were introduced across 45 states, a significant increase from 191 bills in 2023 [1]. Thirty-one states passed some form of AI legislation by the end of 2024 [1]. Check out my post on state-level AI regulations and their impact. Here are the key callouts.

  • Follow-Ups on Biden’s Executive Order from October 2023 [14][15]
    • Throughout 2024, various federal agencies implemented the directives of Executive Order 14110 (Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence), with notable progress in October and November 2024. 
      • Federal agencies successfully implemented 13 specific requirements outlined in the EO, laying the groundwork for comprehensive AI governance across the government.
      • The White House issued a National Security Memorandum (NSM) focusing on the use of AI in national security systems. It mandates voluntary safety tests for frontier AI models and sector-specific AI testing for various risks, and it outlines prohibited uses and risk management practices for AI in national security. It also suggests potential amendments to the Federal Acquisition Regulation to streamline procurement of safe and reliable AI systems.
      • The Office of Management and Budget (OMB) issued Memorandum M-24-18, setting requirements for federal agencies procuring AI systems. This memo emphasizes vendor monitoring, documentation, testing, risk mitigation, and incident reporting for AI systems impacting rights and safety. It also establishes guidelines for generative AI and AI-based biometric systems.
      • The Department of Labor released guidance titled “AI and Worker Well-Being,” providing principles and best practices for employers and AI developers. Recommendations include notifying employees about AI use, protecting employee data, and conducting AI audits to prevent adverse impacts.
      • The Department of Treasury finalized the outbound investment rule on AI, which restricts U.S. investments in national security-related technologies in countries of concern, particularly China. The rule prohibits or requires notice for certain AI-related transactions involving Chinese-owned entities, especially AI systems designed for military or government intelligence purposes or those exceeding specific computational thresholds. 
      • The National Institute of Standards and Technology (NIST) published the final version of NIST AI 100-4, “Reducing Risks Posed by Synthetic Content.” The report identifies methods such as provenance data tracking and synthetic content detection to ensure digital content transparency and mitigate risks associated with AI-generated content.
      • The Department of Education’s Office for Civil Rights released guidance on “Avoiding the Discriminatory Use of Artificial Intelligence.” The document outlines how federal civil rights laws apply to AI use in educational settings and provides examples of potential violations, such as biased facial recognition technology and AI systems that fail to accommodate students with disabilities.
      • The Department of Homeland Security (DHS) released the “Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure”, providing voluntary guidelines for AI data security, model robustness, and human oversight in critical infrastructure. Developed with input from AI experts and government agencies, the framework assigned responsibilities to AI developers, cloud providers, infrastructure operators, and public sector entities to ensure safe AI deployment. 
      • The U.S. AI Safety Institute (AISI) announced the formation of the Testing Risks of AI for National Security (TRAINS) Taskforce. Chaired by AISI, the taskforce includes representatives from the National Institutes of Health, Department of Defense, Department of Energy, and Department of Homeland Security, with additional federal agencies expected to join. The taskforce aims to coordinate AI research and testing in national security and public safety, ensuring adversaries cannot exploit AI to undermine U.S. security. 
  • August 2024: AI in U.S. Elections Becomes a National Concern [16]
    • During the 2024 U.S. presidential election, public concerns grew over AI’s role in spreading misinformation as social media platforms struggled to moderate AI-generated political content.
    • A survey from the Harvard Kennedy School indicated that four out of five respondents were worried about AI-driven misinformation affecting the electoral process. The paper details specific examples of election-related fraud, along with its survey methods and results.
  • DOJ vs. Google: Major Developments in AI and Tech Antitrust Case
    • In November 2024, the US Department of Justice (DOJ) escalated its antitrust case against Google, proposing remedies such as the divestiture of Chrome and restrictions on Google’s AI operations to curb its market dominance. Google has pushed back, arguing these measures are excessive. A court hearing is set for April 2025, with a ruling expected by August 2025. Learn more about this case in my post.
  • BIAS Act Introduction (November 1, 2024) [17][18]
    • On November 1, 2024, Representative Summer Lee (D-PA) and Senator Edward J. Markey (D-MA) introduced the BIAS Act, requiring federal agencies involved in AI to establish civil rights offices to address algorithmic bias and discrimination. 
    • These offices are required to submit regular reports to Congress detailing their monitoring and mitigation efforts as well as policy recommendations.
    • The Act also proposes an interagency working group, led by the DOJ’s Civil Rights Division, to coordinate AI bias assessments across federal agencies.
    • Check out my post on Challenges in Mitigating Bias in AI Models to get more technical context on the issues.

In 2024, the US made major advancements in AI governance through new regulations, agency frameworks, and legislative efforts aimed at AI safety, transparency, and competition. However, with a new administration taking office in 2025, it remains uncertain which policies will continue, be revised, or face repeal.

Global: AI Governance Around the World

In 2024, significant global developments in AI regulations occurred, reflecting a concerted effort by nations and international bodies to establish frameworks ensuring the ethical and safe deployment of artificial intelligence.

  • EU AI Act Adoption and Entry into Force [20][21]
    • The EU AI Act completed its legislative journey in 2024, making it the world’s first comprehensive legislation governing artificial intelligence. It officially entered into force across all 27 EU Member States on August 1, 2024, starting the clock for organizations to prepare for compliance.
    • The Act introduces a risk-based classification system and imposes transparency requirements. I have covered the EU AI Act in detail in my blog post.
    • The Digital Services Act (DSA), implemented on November 4, 2024, complements AI regulations by standardizing transparency reporting for online platforms and establishing harmonized content moderation rules.
    • Additionally, on November 14, 2024, the EU AI Code of Practice was released to guide providers of general-purpose AI models on compliance obligations, risk management, and transparency standards under the AI Act.
  • UK, US, and EU Sign Cross-Border AI Treaty [23]
    • In September 2024, the Council of Europe adopted the “Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law,” the first legally binding international treaty on AI.
    • Unlike the EU AI Act, this treaty focuses more on governmental applications of AI, emphasizing the protection of human rights and the prevention of AI misuse.
  • China Continues to Regulate AI Companies [22][23]
    • China enforced various AI regulations throughout 2024, including the Interim Measures for the Administration of Generative AI Services and the Administrative Provisions on Deep Synthesis of Internet-based Information Services.
    • Regulators continued to require that foundation models be registered with the government before public release.
    • In September 2024, China’s National Information Security Standardization Technical Committee published the ‘Artificial Intelligence Security Governance Framework,’ emphasizing a people-centered approach and the principle of developing AI for good. The framework highlights key principles, including innovation, inclusivity, risk identification, and preventive measures in AI governance.
  • UN Report on Governing AI for Humanity [23]
    • In November 2024, the UN AI Advisory Body published its final report on “Governing AI for Humanity.” The report highlights the escalating risks associated with AI and proposes a framework for international cooperation. 
    • The proposed mechanisms are designed to complement existing efforts and foster inclusive global AI governance arrangements that are agile, adaptive, and effective in keeping pace with AI’s rapid evolution.
  • GEMA Lawsuit Against OpenAI Sets Legal Precedent [24]
    • In November 2024, the German music rights organization GEMA filed a lawsuit against OpenAI, alleging that ChatGPT reproduced song lyrics without authorization. The lawsuit, currently before the Munich Regional Court, marks a significant legal action addressing copyright infringement concerns related to AI-generated content.
  • AI and Military Applications Spark Global Debate [25]
    • In December 2024, the United Nations Security Council convened to discuss the rapid evolution of artificial intelligence (AI) and its implications for global peace and security. 
    • The session emphasized the need for a unified framework to prevent fragmented governance and to ensure responsible AI use in conflicts.
  • Singapore’s 2024 AI Governance and Security Initiatives [25] [26]
    • The AI Verify Foundation and Infocomm Media Development Authority (IMDA) introduced a draft framework to promote ethical AI development, focusing on transparency, accountability, and responsible deployment of generative AI.
    • The Cyber Security Agency of Singapore (CSA) launched guidelines to enhance AI system security, addressing threats such as adversarial attacks and supply chain risks while advocating for “AI security by design.”
  • Significant Step in Global Cooperation [25]
    • On November 20, 2024, the International Network of AI Safety Institutes (INASI) was launched during a two-day conference in San Francisco. 
    • The network’s initial members include Australia, Canada, the European Union, France, Japan, Kenya, the Republic of Korea, Singapore, the United Kingdom, and the United States. 
    • The INASI focuses on three critical areas: managing synthetic content risks, testing foundation models, and conducting risk assessments for advanced AI systems. 
    • More than $11 million in global research funding commitments were announced to address the network’s new joint research agenda on mitigating risks from synthetic content.
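The EU AI Act’s risk-based classification, mentioned in the EU AI Act bullet above, can be pictured as a simple tier lookup. This is a toy sketch: the four tiers come from the Act, but the use-case-to-tier mapping here is a simplified illustration, not legal guidance:

```python
# Toy illustration of the EU AI Act's four risk tiers. The example use
# cases and their tier assignments are simplified for illustration.

RISK_TIERS = {
    "unacceptable": ["social scoring by public authorities"],  # prohibited
    "high": ["CV screening for hiring", "credit scoring"],     # strict obligations
    "limited": ["customer service chatbot"],                   # transparency duties
    "minimal": ["spam filter"],                                # largely unregulated
}

def classify(use_case: str) -> str:
    """Return the (simplified) risk tier for a given AI use case."""
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    return "minimal"  # most AI systems fall into the minimal tier

print(classify("credit scoring"))   # high
print(classify("spam filter"))      # minimal
```

In the real Act, the tier determines the obligations: unacceptable-risk systems are banned outright, high-risk systems face conformity assessments and documentation duties, and limited-risk systems mainly owe transparency disclosures.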

Conclusion: A Year of Ethical AI Milestones

As we reflect on these developments, it’s clear that 2024 has been a pivotal year for AI policy and regulation. While California and the U.S. introduced stricter AI safety measures, global bodies like the EU and UN set new international AI standards. Meanwhile, corporate shifts in Silicon Valley raised fresh ethical concerns about AI’s role in society.

The surge in legislative activity reflects the growing recognition of AI’s impact and the need for responsible governance. However, with Donald Trump taking office in late January 2025, uncertainty remains over whether his administration will revise or roll back recent policies. As we move into 2025, these trends are likely to continue, with further refinement and implementation of this year’s groundwork.

Citations:

[1] https://www.cooley.com/news/insight/2024/2024-10-16-californias-new-ai-laws-focus-on-training-data-content-transparency 

[2] https://www.mayerbrown.com/en/insights/publications/2024/09/new-california-law-will-require-ai-transparency-and-disclosure-measures 

[3] https://www.cozen.com/news-resources/publications/2025/california-passes-flurry-of-year-end-ai-legislation 

[4] https://complianceconcourse.willkie.com/articles/california-enacts-17-ai-bills-in-2024/ 

[5] https://www.bytebacklaw.com/2024/10/california-privacy-and-ai-legislation-update-october-7-2024/ 

[6] https://www.whitecase.com/insight-alert/raft-california-ai-legislation-adds-growing-patchwork-us-regulation 

[7] https://www.gov.ca.gov/2024/09/29/governor-newsom-announces-new-initiatives-to-advance-safe-and-responsible-ai-protect-californians/ 

[8] https://iapp.org/resources/article/california-privacy-ai-legislation-tracker/ 

[9] https://www.cnbc.com/2024/08/29/openai-and-anthropic-agree-to-let-us-ai-safety-institute-test-models.html 

[10] https://www.vox.com/future-perfect/364384/its-practically-impossible-to-run-a-big-ai-company-ethically 

[11] https://natlawreview.com/article/california-ai-law-hit-constitutional-challenge-x-corp-attempts-take-down 

[12]

[13] https://www.ansi.org/standards-news/all-news/2024/09/9-9-24-us-ai-safety-institute-signs-agreements-with-anthropic-and-openai 

[14] https://www.gao.gov/products/gao-24-107332 

[15] https://www.cov.com/en/news-and-insights/insights/2024/11/white-house-issues-national-security-memorandum-on-artificial-intelligence-ai 

[16] https://misinforeview.hks.harvard.edu/article/the-origin-of-public-concerns-over-ai-supercharging-misinformation-in-the-2024-u-s-presidential-election 

[17] https://www.congress.gov/bill/118th-congress/house-bill/10092/text 

[18] https://summerlee.house.gov/posts/release-rep-summer-lee-introducing-the-eliminating-bias-act-with-senator-ed-markey-to-combat-algorithmic-discrimination-in-federal-agencies 

[19] https://www.dhs.gov/sites/default/files/2024-11/24_1114_dhs_ai-roles-and-responsibilities-framework-508.pdf 

[20] https://www.dlapiper.com/en-us/insights/publications/ai-outlook/2024/eu-publishes-its-ai-act-key-considerations-for-organizations 

[21] https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai 

[22] https://www.reedsmith.com/en/perspectives/2024/08/navigating-the-complexities-of-ai-regulation-in-china 

[23] https://www.eversheds-sutherland.com/en/estonia/insights/global-ai-regulatory-update-november-2024 

[24] https://www.gema.de/en/news/ai-and-music/ai-lawsuit

[25] https://press.un.org/en/2024/sc15946.doc.htm

[26] https://www.imda.gov.sg/resources/press-releases-factsheets-and-speeches/press-releases/2024/public-consult-model-ai-governance-framework-genai
