Latest AI Governance Updates - September ’25 Edition
September 18, 2025

As artificial intelligence scales across borders and industries, AI governance and regulation are evolving rapidly. The last few months produced several important developments - from global initiatives at the United Nations to detailed state legislation in the U.S. This article unpacks the most consequential updates and provides context for policymakers, enterprise leaders and researchers.
Sources
- [1] A step in the right direction: UN establishes new mechanisms to advance global AI governance
- [2] High-level Meeting to Launch the Global Dialogue on AI Governance
- [3] EU’s General-Purpose AI Obligations Are Now in Force, With New Guidance
- [4] Colorado Legislature Passes a Five-Month Delay for Colorado’s AI Act
- [5] California Privacy and AI Legislation Update: September 15, 2025
- [6] Utah Becomes First State To Enact AI-Centric Consumer Protection Law
- [7] Fostering Effective Policy for a Brave New AI World: A Conversation with Rishi Bommasani
- [8] Global AI Governance: Five Key Frameworks Explained
UN launches new global AI governance mechanisms
On 26 August 2025, the UN General Assembly adopted resolution A/RES/79/325, creating two new mechanisms for global AI governance: an Independent International Scientific Panel on AI and a Global Dialogue on AI Governance[1]. The Scientific Panel will connect researchers to policymakers, while the Global Dialogue provides a forum for governments, industry, academia and civil society to discuss AI risks, ethics and best practices. The first high‑level meeting of the dialogue is scheduled for 2026[2]. This step signals growing international consensus that AI governance requires coordinated global oversight.
Why this matters
International frameworks set the tone for national regulation and industry standards. A global dialogue can harmonise definitions, promote interoperability and reduce regulatory fragmentation – all key goals for companies deploying AI systems across borders. The dialogue also opens an avenue for civil society and academia to influence policy.
EU AI Act obligations for general‑purpose AI models
The European Union’s AI Act reached a milestone on 2 August 2025 when obligations for general‑purpose AI (GPAI) models[3] came into force. The European Commission simultaneously released a Code of Practice and a disclosure template to help GPAI providers comply. Key points include:
- Threshold definition – GPAI obligations apply to providers whose models are trained using more than 10^23 floating‑point operations (FLOPs) of compute[3]. This threshold clarifies which models fall under the regime.
- Modifiers become providers – developers who modify or fine‑tune a GPAI model using at least one‑third of the original model’s training compute are deemed providers and share compliance obligations[3]; see the sketch after this list.
- Transparency requirements – the voluntary Code of Practice requires providers to summarise model properties, intended uses, training data sources, known limitations and risk mitigation measures[3]. Signing the code allows providers to demonstrate compliance while the formal rules are finalised.
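Taken together, the first two points reduce to a simple compute check. The snippet below is a minimal, illustrative sketch assuming the 10^23 FLOP presumption and the one‑third‑of‑original‑compute rule described above; the function names and example figures are ours, not terminology from the Act or the Commission’s guidance.

```python
# Minimal sketch of the compute criteria described above (illustrative only):
# a model is presumed GPAI when its training compute exceeds ~10^23 FLOPs, and
# a downstream modifier is treated as a provider when the modification uses at
# least one third of the original model's training compute.

GPAI_FLOPS_THRESHOLD = 1e23   # presumption threshold for GPAI status
MODIFIER_FRACTION = 1 / 3     # share of the original training compute

def is_presumed_gpai(training_flops: float) -> bool:
    """Return True if the model's training compute exceeds the GPAI threshold."""
    return training_flops > GPAI_FLOPS_THRESHOLD

def modifier_is_provider(original_flops: float, modification_flops: float) -> bool:
    """Return True if a fine-tune or modification uses at least one third
    of the original model's training compute."""
    return modification_flops >= MODIFIER_FRACTION * original_flops

# Hypothetical example: a 2e24-FLOP base model fine-tuned with 8e23 FLOPs.
base_flops, finetune_flops = 2e24, 8e23
print(is_presumed_gpai(base_flops))                      # True  -> GPAI regime applies
print(modifier_is_provider(base_flops, finetune_flops))  # True  -> modifier shares provider duties
```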
For companies operating in Europe or building models destined for EU markets, understanding these GPAI obligations is critical. Failing to meet transparency expectations could limit market access or invite penalties once the Act is fully implemented.
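As a concrete illustration of the transparency items listed above, a provider might maintain its public model summary as structured metadata along the lines of the sketch below. This is a hypothetical layout of our own, not the Commission’s disclosure template; the field names and example values are invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical structure for the kind of public summary the Code of Practice
# asks GPAI providers to publish. Field names and values are illustrative only
# and do not reproduce the Commission's official disclosure template.
@dataclass
class GPAIModelSummary:
    model_name: str
    intended_uses: list[str]
    training_data_sources: list[str]   # broad categories rather than raw datasets
    known_limitations: list[str]
    risk_mitigations: list[str]
    training_compute_flops: float      # supports the 10^23 FLOP presumption check

summary = GPAIModelSummary(
    model_name="example-gpai-model",
    intended_uses=["text summarisation", "code assistance"],
    training_data_sources=["licensed corpora", "publicly available web text"],
    known_limitations=["may produce factual errors", "limited non-English coverage"],
    risk_mitigations=["pre-release red-teaming", "usage-policy enforcement"],
    training_compute_flops=2e24,
)
print(summary.model_name, summary.training_compute_flops)
```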
U.S. state laws: Colorado, California, Texas and Utah
Colorado delays its AI Act
Colorado’s ambitious Artificial Intelligence Act was originally set to take effect on 1 February 2026 and imposes broad obligations on providers of high‑risk AI systems. However, the Colorado legislature passed SB25B‑004 on 27 August 2025, delaying the law’s effective date to 30 June 2026[4]. The bill was passed during a special session that Governor Jared Polis convened by executive order, citing concerns about regulatory complexity and implementation costs[4]. Businesses now have extra time to prepare for compliance while lawmakers consider amendments.
California’s AI legislative blitz
California’s legislature closed its 2025 session with a wave of AI‑related bills awaiting Governor Gavin Newsom’s signature. Key measures include:
- SB 53 – Transparency in Frontier AI Act – Requires developers of large‑scale AI systems (above a certain compute threshold) to publish risk‑management frameworks, safety and adversarial‑testing results, and to establish whistleblower protections[5].
- SB 7 – Employment ADS Notice and Fairness – Employers using automated decision systems for hiring, promotion or termination must provide notice and allow individuals to access and correct data[5].
- SB 771 – Social Media Algorithm Accountability – Creates a private right of action against platforms whose algorithms cause addictive behaviour or harm; damages are capped[5].
- SB 361 – Data Broker Disclosures – Requires data brokers to disclose when they sell or share sensitive data, including data used to train generative AI models[5].
- AB 566 – Opt‑out Preference Signal – Mandates browsers to support universal opt‑out signals and requires online businesses to honour them[5].
- AB 1043 – Digital Age Assurance Act – Requires operating systems and app stores to collect age information and provide age signals to age‑restricted apps[5].
- AB 853 – AI Transparency Act – Requires devices and platforms that capture audio, video or images to attach provenance information so users can verify whether content is real[5].
These bills focus on transparency, consumer protection and safety for high‑impact applications. If enacted, they would make California a leader in comprehensive AI regulation.
Texas and Utah take lighter approaches
While Colorado and California pursue comprehensive regimes, other states are more restrained. A legal analysis from Quinn Emanuel describes four state approaches: Colorado’s risk‑based framework, California’s transparency‑oriented framework, Texas’s targeted pro‑innovation approach and Utah’s minimalist disclosure regime. Texas’s Responsible AI Governance Act (effective 1 January 2026) regulates only certain uses of AI (e.g., discrimination, behavioural manipulation), requires proof of intent and sets a high bar for enforcement. Utah’s AI Policy Act[6] (in force since 1 May 2024) merely requires businesses to disclose to consumers when they are interacting with AI and offers a regulatory sandbox for experimentation. These divergent state laws mean companies must monitor multiple compliance regimes.
Evidence‑based AI policy: bridging research and regulation
On 8 September 2025, Stanford HAI published an interview with Rishi Bommasani, a researcher who helped coin the term “foundation models” and now serves on the EU AI Act implementation board. Bommasani argues that academia must bridge the divide between AI research and policy[7]. He notes that the concept of foundation models has informed both the EU AI Act and the U.S. government’s approach to open‑source models[7]. Bommasani calls for evidence‑based AI policy, safe harbours for third‑party testing and incentives for credible research[7]. His perspective underscores the need for rigorous evidence in shaping AI governance.
Global governance frameworks and ethical guidelines
Beyond legislation, organisations are aligning with voluntary frameworks. A commentary from Bradley highlights the major international standards: the OECD AI Principles and their updated definition of an AI system, UNESCO’s Recommendation on the Ethics of AI, the U.S. NIST AI Risk Management Framework, and emerging certification standards like ISO/IEC 42001 and the IEEE 7000 series[8]. These frameworks stress transparency, fairness, accountability and human rights. Companies with multinational operations should benchmark against them to stay ahead of regulatory requirements.
More on AI Governance from AryaXAI
At AryaXAI, we believe that proactive governance and explainability are the foundation of safe AI deployment. Our recent articles offer actionable guidance:
- AI Governance Reimagined: Why Context Comes Before Control – This article explains why context – who is affected, where AI is deployed and why it is used – must guide AI governance strategies. It argues that infrastructure like model cards and bias audits alone is insufficient without understanding the social, legal and operational context.
- Building Safer AI: A Practical Guide to Risk, Governance and Compliance – Offers a field‑tested blueprint for translating alignment and fairness principles into operational safeguards. It emphasises mapping principles to technical requirements, failure‑mode analysis, quantifiable safeguards and real‑time monitoring.
- AI and Ethics: Risks, Responsibilities and Regulations – Discusses ethical dilemmas in AI (bias, privacy, transparency) and explores the UNESCO Ethics of AI Recommendation. It underscores the importance of integrating ethics into AI governance and aligning with global standards.
Exploring these resources will deepen your understanding of AI governance and provide practical tools for aligning your organisation’s AI deployment with emerging regulations.