November 2024 has been a month of significant advancements in AI policy and practice. From new global collaborations to local initiatives, here’s a summary of the key AI developments:
1. Global AI Safety Network Launched
Description: On 20 November 2024, the US Department of Commerce and the US Department of State launched the International Network of AI Safety Institutes (INASI) at an inaugural convening in San Francisco to promote global cooperation on AI safety. Over $11 million in funding was committed to research on synthetic content risks, with contributions from the US, Australia, and the Republic of Korea. INASI conducted its first joint testing exercise on Meta’s Llama 3.1 model, providing insights into AI safety testing and informing future evaluations. The Network also proposed a six-pillar framework for AI risk assessments to align global safety practices.
Why It Matters: This event brought together representatives from government, industry, academia, and civil society to lay the foundation for international collaboration on AI safety. That collaboration could lead to more robust, globally accepted safety protocols.
Learn More: TIME
2. EU Advances AI Governance with Transparency Reporting Regulation and New Code of Practice
Description: On 4 November 2024, the European Commission adopted an Implementing Regulation under the Digital Services Act (DSA) to standardize transparency reporting. The regulation harmonizes reporting formats and content for various providers, including Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs), with mandatory semi-annual or annual reporting schedules. Full implementation begins in July 2025, with the first harmonized reporting cycle slated for 2026.
Subsequently, on 14 November 2024, the European AI Office published a draft of the Code of Practice for general-purpose AI models with systemic risks. The Code is being developed collaboratively across four working groups, aligns with the EU AI Act (in force since August 2024), and focuses on transparency, risk mitigation, technical governance, and adherence to EU principles. It outlines systemic risks such as cybersecurity threats, disinformation, and election interference, requiring providers to adopt proportionate risk management strategies and ensure copyright compliance. The Code also introduces an Acceptable Use Policy (AUP) to regulate security, privacy, and usage monitoring.
Why It Matters: These initiatives exemplify the EU’s commitment to ethical AI development and offer a template for other regions to follow. Their eventual implementation could influence global AI policy and help standardize responsible AI development.
Learn More: European Commission
3. GEMA Sues OpenAI Over Copyright Infringement in AI Training
Description: On 13 November 2024, GEMA, Germany’s collecting society for composers, lyricists, and music publishers, filed a lawsuit against OpenAI and OpenAI Ireland Ltd., alleging the unlicensed use of song lyrics to train ChatGPT. GEMA claims ChatGPT reproduces its members’ lyrics when prompted, circumventing the licensing fees other services typically pay. The lawsuit challenges OpenAI’s compliance with copyright law, including the effect of the opt-out GEMA declared on behalf of its members and whether the text and data mining exception under German and EU law applies.
Why It Matters: This groundbreaking lawsuit is the first of its kind globally to be brought by a major rights organization over copyright infringement in generative AI. Its outcome could set a legal precedent for how AI companies handle copyrighted material and reshape the framework for AI training data.
Learn More: GEMA
4. DHS Releases Framework for AI Roles in Critical Infrastructure
Description: On 14 November 2024, the Department of Homeland Security (DHS) unveiled the “Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure.” Developed in collaboration with stakeholders across the AI supply chain, the framework provides voluntary guidelines for securing data, designing robust AI models, and maintaining human oversight in critical infrastructure applications. It emphasizes transparency, accountability, risk mitigation, and continuous monitoring to address emerging AI risks and protect U.S. infrastructure.
Why It Matters: This initiative underscores the growing importance of AI in critical infrastructure while setting best practices to balance innovation and security. It also highlights the need for collaboration between public and private entities to address national risks effectively.
Learn More: DHS Press Release
5. Eliminating Bias in Algorithmic Systems (BIAS) Act Introduced
Description: On 1 November 2024, the Eliminating Bias in Algorithmic Systems (BIAS) Act was introduced in the U.S. Congress. The bill would require federal agencies using or overseeing AI to establish Civil Rights Offices to address algorithmic bias. These offices would submit biennial reports to Congress detailing risks, mitigation actions, and recommendations. The Act would also create an interagency working group, led by the Department of Justice, to coordinate efforts across federal agencies. The legislation specifically targets sectors such as healthcare, finance, and law enforcement, where AI has disproportionately harmed marginalized communities.
Why It Matters: The BIAS Act would represent a significant step toward combating algorithmic discrimination, promoting fairness, and protecting civil rights in AI systems used in critical sectors.
Learn More: Federal Legislation Details, Justice Department Updates
6. NIST Releases Report on Reducing Risks from Synthetic Content
Description: On 20 November 2024, NIST published its AI 100-4 report, focusing on managing risks associated with synthetic content. The report evaluates standards, tools, and practices for authentication, detection, labeling, and harm mitigation. It highlights digital transparency techniques, including watermarking and metadata tracking (a minimal illustration of metadata-based provenance appears after this item), and reviews the societal costs of AI misuse, such as non-consensual intimate imagery and misinformation. The report also identifies gaps in current tools and calls for comprehensive mitigation strategies.
Why It Matters: This report provides critical insights into addressing the growing risks of synthetic content, offering actionable solutions to enhance transparency, security, and accountability in AI applications.
Learn More: NIST Report, Synthetic Media Risk Assessment
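To make the idea of metadata-based provenance concrete, here is a minimal sketch, not drawn from the NIST report, of embedding and reading a toy provenance record in a PNG’s text chunks using Pillow. The “provenance” key, the record fields, and the function names are illustrative assumptions only; production schemes such as C2PA manifests or cryptographic watermarks bind provenance far more robustly, since plain metadata can be stripped or edited.

```python
# Illustrative sketch only: a toy provenance record stored in PNG metadata.
# Real provenance standards (e.g. C2PA) use signed manifests; plain text
# chunks like this can be trivially stripped or altered.
import hashlib
import json

from PIL import Image
from PIL.PngImagePlugin import PngInfo


def attach_provenance(src_path: str, dst_path: str, generator: str) -> None:
    """Embed a simple provenance record (generator name + pixel hash) into a PNG.

    dst_path is assumed to end in .png so Pillow writes PNG output.
    """
    img = Image.open(src_path)
    record = {
        "generator": generator,  # e.g. the model or tool that produced the image (illustrative field)
        "pixel_sha256": hashlib.sha256(img.tobytes()).hexdigest(),  # hash of decoded pixel data
    }
    meta = PngInfo()
    meta.add_text("provenance", json.dumps(record))  # "provenance" key is an assumption, not a standard
    img.save(dst_path, pnginfo=meta)


def read_provenance(path: str) -> dict | None:
    """Return the embedded provenance record, or None if absent."""
    raw = Image.open(path).info.get("provenance")
    return json.loads(raw) if raw else None
```

Detection and labeling pipelines of the kind the report surveys would treat such metadata as one signal among several: its presence can support a provenance claim, but its absence proves nothing about an image’s origin.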
7. X Sues California Over Deepfake Election Content Law
Description: On 14 November 2024, X (formerly Twitter) filed a lawsuit against California to block the “Defending Democracy from Deepfake Deception Act” (AB 2655), set to take effect on 1 January 2025. The law requires platforms to label or remove deceptive, AI-generated election content. X claims the law infringes on First Amendment protections, arguing it could lead to over-censorship and suppress legitimate political content. X contends its existing policies on manipulated media already address these issues while balancing free expression.
Why It Matters: This lawsuit raises critical questions about the balance between regulating harmful AI-generated content and protecting free speech, potentially setting legal precedents for AI-related election laws.
Learn More: California Assembly Bill Details, X Press Release