AI Regulations in the US: Mapping the 2025 Landscape

Article | By Ketaki Joshi | 10 minute read | May 26, 2025

As artificial intelligence (AI) systems continue to evolve and find applications across every sector—from healthcare to finance to governance—regulatory bodies in the US have started rolling out a range of laws, executive orders, and frameworks to ensure AI is developed and deployed responsibly.

In this post, we explore the evolving regulatory landscape in the United States by highlighting notable AI-related regulations, categorized by their product lifecycle relevance—covering data privacy, fairness, explainability, model risk management, and monitoring.

1. Data Privacy and Consumer Rights

A significant portion of AI regulations in the US focuses on protecting consumer data and giving users more control over how their data is used.

In the absence of comprehensive federal AI legislation, state and local laws—including ordinances enacted by municipalities—play a crucial role in filling regulatory gaps: they address AI-related issues directly, help safeguard individuals from algorithmic harm, and set privacy standards at the regional level.

Montana Consumer Data Privacy Act (MTCDPA) 

The Montana Consumer Data Privacy Act (MTCDPA), enacted as Senate Bill 384, came into effect on October 1, 2024. This legislation positions Montana among a growing number of US states implementing comprehensive data privacy laws. It aims to enhance consumer rights and establish clear obligations for businesses handling personal data.

Key Provisions of the MTCDPA:

  • Scope of Applicability: The MTCDPA applies to entities conducting business in Montana or offering products or services to Montana residents, provided they process the personal data of at least 50,000 consumers annually, or process the personal data of at least 25,000 consumers and derive more than 25% of gross revenue from the sale of personal data.
  • Consumer Rights: Montana residents are granted several rights under the MTCDPA (a request-routing sketch follows these provisions), including:
    • The right to confirm whether a controller is processing their personal data.
    • The right to access their personal data.
    • The right to correct inaccuracies in their personal data.
    • The right to delete their personal data.
    • The right to obtain a copy of their personal data in a portable format.
    • The right to opt out of processing their personal data for purposes of targeted advertising, the sale of personal data, or profiling in furtherance of decisions that produce legal or similarly significant effects.
  • Sensitive Data Handling: Processing of sensitive data, such as racial or ethnic origin, religious beliefs, health information, and precise geolocation data, requires the consumer's explicit consent. For children under 13, parental consent is mandatory, aligning with the Children's Online Privacy Protection Act (COPPA).
  • Business Obligations: Data controllers are required to implement reasonable data security measures, conduct data protection assessments for processing activities that present a heightened risk of harm to consumers, and enter into contracts with processors that outline data processing instructions and confidentiality obligations. 
  • Enforcement: The Montana Attorney General has exclusive authority to enforce the MTCDPA. Upon identifying a violation, the Attorney General must issue a notice, after which the entity has 60 days to cure the violation. This cure period is set to expire on April 1, 2026.
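
These rights recur, with small variations, across the state privacy laws covered in this section. As a purely illustrative sketch, the Python below shows one way a controller might route verified consumer requests to the matching obligation; every name in it (RequestType, handle_request, the in-memory db) is an assumption of this post, not anything the statutes prescribe.

```python
# Hypothetical routing of verified consumer rights requests (MTCDPA/VCDPA-style).
# All names and the in-memory "db" are illustrative, not a real library or API.
from dataclasses import dataclass
from enum import Enum, auto


class RequestType(Enum):
    CONFIRM = auto()        # confirm whether a controller is processing data
    ACCESS = auto()         # access the personal data
    CORRECT = auto()        # correct inaccuracies
    DELETE = auto()         # delete personal data
    PORTABLE_COPY = auto()  # obtain a copy in a portable format
    OPT_OUT = auto()        # opt out of targeted ads, sale, or profiling


@dataclass
class ConsumerRequest:
    consumer_id: str
    request_type: RequestType
    correction: dict | None = None  # field updates for CORRECT requests


def handle_request(db: dict, req: ConsumerRequest) -> str:
    """Route a verified consumer request to the matching obligation."""
    record = db.get(req.consumer_id)
    if req.request_type is RequestType.CONFIRM:
        return "processing" if record else "not processing"
    if record is None:
        return "no personal data held"
    if req.request_type in (RequestType.ACCESS, RequestType.PORTABLE_COPY):
        return str(record)  # in practice: export in a machine-readable format
    if req.request_type is RequestType.CORRECT:
        record.update(req.correction or {})
        return "corrected"
    if req.request_type is RequestType.DELETE:
        del db[req.consumer_id]
        return "deleted"
    if req.request_type is RequestType.OPT_OUT:
        record["opted_out"] = True
        return "opted out of targeted advertising, sale, and profiling"
    return "unsupported request type"


db = {"c-1": {"email": "a@example.com", "opted_out": False}}
print(handle_request(db, ConsumerRequest("c-1", RequestType.OPT_OUT)))
```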

Virginia Consumer Data Protection Act (VCDPA)

The Virginia Consumer Data Protection Act (VCDPA), effective from January 1, 2023, is a comprehensive data privacy law that grants Virginia residents significant rights over their personal data and imposes obligations on businesses handling such data.

Applies to businesses that:

  • Control or process the personal data of 100,000+ Virginia consumers annually, or
  • Control or process the personal data of 25,000+ consumers and derive over 50% of gross revenue from the sale of personal data.

Consumer Rights:

Under the VCDPA, Virginia residents have the right to:

  • Confirm whether a controller is processing their personal data and access such data.
  • Correct inaccuracies in their personal data.
  • Delete personal data provided by or obtained about them.
  • Obtain a copy of their personal data in a portable and, to the extent technically feasible, readily usable format.
  • Opt out of the processing of personal data for purposes of targeted advertising, the sale of personal data, or profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer.

Business Obligations:

Businesses subject to the VCDPA must:

  • Provide consumers with a clear and accessible privacy notice detailing the categories of personal data processed, the purposes of processing, and how consumers can exercise their rights.
  • Implement reasonable data security practices to protect the confidentiality, integrity, and accessibility of personal data.
  • Conduct data protection assessments for processing activities that present a heightened risk of harm to consumers, such as processing sensitive data or personal data for targeted advertising.
  • Obtain consumer consent before processing sensitive data, which includes data revealing racial or ethnic origin, religious beliefs, mental or physical health diagnosis, sexual orientation, or citizenship or immigration status.

California Consumer Privacy Act (CCPA)

A foundational law governing how consumer data is collected and processed, with implications for AI profiling. The CCPA grants California residents significant rights over their personal data and imposes obligations on businesses handling such information.

Key Consumer Rights:

  • Right to Know: Consumers can request details about the personal information a business collects, uses, shares, or sells.
  • Right to Delete: Consumers can request the deletion of their personal information held by a business, with certain exceptions.
  • Right to Opt-Out: Consumers can direct businesses to stop selling or sharing their personal information.
  • Right to Non-Discrimination: Businesses cannot discriminate against consumers for exercising their CCPA rights.
  • Right to Correct: Consumers can request corrections to inaccurate personal information.
  • Right to Limit Use of Sensitive Information: Consumers can limit the use and disclosure of sensitive personal information, such as race, health data, or precise geolocation.

The CCPA also has important implications for AI in healthcare services, particularly in ensuring data privacy, compliance, and ethical standards in medical settings.

Applicability:

The CCPA applies to for-profit businesses that collect personal information from California residents and meet at least one of the following criteria (condensed into a small check after this list):

  • Annual gross revenues exceeding $25 million.
  • Buy, receive, sell, or share the personal information of 100,000 or more California consumers or households.
  • Derive 50% or more of annual revenue from selling or sharing California consumers' personal information.
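
Condensing those criteria, coverage is a three-way disjunction. The snippet below is a simplified sketch that ignores the for-profit requirement and statutory exemptions:

```python
# Simplified sketch of the CCPA applicability test: a business is covered if
# it meets at least one threshold. Exemptions and definitional nuance omitted.
def ccpa_covered(annual_gross_revenue: float,
                 ca_consumers_or_households: int,
                 revenue_share_from_selling_or_sharing: float) -> bool:
    return (annual_gross_revenue > 25_000_000
            or ca_consumers_or_households >= 100_000
            or revenue_share_from_selling_or_sharing >= 0.50)

# Example: modest revenue, but a large California user base.
print(ccpa_covered(5_000_000, 250_000, 0.10))  # True, via the consumer threshold
```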

Related measures touching on data privacy and consumer rights include California's healthcare utilization review requirements, the Illinois Supreme Court Policy on Artificial Intelligence, the proposed American Privacy Rights Act (APRA), and the Stop Spying Bosses Act; several of these appear later in this post.

Colorado AI Act

The Colorado AI Act represents a significant step forward in regulating artificial intelligence systems at the state level, with a strong emphasis on consumer protection, transparency, and accountability. This comprehensive law addresses the development and deployment of AI systems across sectors such as employment, education, and personal or professional services, holding both AI developers and deployers to high standards.

A key feature of the Colorado AI Act is its focus on high-risk AI systems—those with the potential to significantly impact individuals' rights, safety, or access to essential services. The Act requires AI developers to provide clear disclosures about their AI systems, including information on the data used for training and any known or reasonably foreseeable biases. This transparency is designed to help consumers and organizations better understand the capabilities and limitations of AI technologies.

To further safeguard against potential risks, the Colorado AI Act establishes a robust risk management framework. AI developers and deployers must conduct thorough risk assessments, implement mitigation strategies, and regularly monitor their AI systems for unintended consequences or algorithmic discrimination. These requirements are particularly stringent for high-risk AI, where the stakes for individuals and society are greatest.

Enforcement of the Colorado AI Act rests exclusively with the Colorado Attorney General, who can treat violations as unfair trade practices subject to civil penalties, ensuring that organizations take their obligations seriously. By setting clear standards and consequences, the Act aims to foster trustworthy AI innovation while protecting the public from the misuse of artificial intelligence. As one of the first state laws of its kind, the Colorado AI Act is likely to influence future AI legislation and regulatory frameworks across the United States.

2. Explainability and Transparency

California AI Transparency Act (Effective January 1, 2026)

Requires generative AI systems to disclose AI-generated content to consumers. The Act applies to "Covered Providers," defined as entities that create, code, or otherwise produce GenAI systems with over 1 million monthly visitors or users that are publicly accessible within California.

Key Requirements

  1. Manifest Disclosures: Covered Providers must offer users the option to include a clear and conspicuous disclosure in AI-generated or altered image, video, or audio content, indicating that the content is AI-generated. This disclosure must be permanent or extraordinarily difficult to remove. 
  2. Latent Disclosures: In addition to manifest disclosures, providers are required to embed hidden metadata in AI-generated content, conveying information such as the provider's identity, the system used, and the creation date. This metadata should be consistent with industry standards and, to the extent technically feasible, be permanent or extraordinarily difficult to remove (a metadata-embedding sketch follows this list).
  3. AI Detection Tool: Providers must make available, at no cost to users, an AI detection tool capable of identifying whether content was created or altered by their GenAI system. This tool should allow users to assess the content and provide relevant provenance data, excluding personal information.
  4. Third-Party Licensee Compliance: If a Covered Provider licenses its GenAI system to a third party, the licensee must maintain the system's capability to include the required disclosures. Should a provider become aware that a licensee has modified the system to prevent the inclusion of these disclosures, the provider must revoke the license within 96 hours, and the licensee must cease using the system upon revocation.
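
To make the latent-disclosure idea concrete, here is a minimal sketch that writes provenance fields into PNG text chunks with Pillow. This is illustrative only: the field names and provider identity are invented, plain text chunks are easy to strip (so they would not meet the Act's "extraordinarily difficult to remove" bar), and real systems would more plausibly adopt an industry provenance standard such as C2PA.

```python
# Illustrative latent disclosure: embed provenance metadata in a generated
# image as PNG text chunks, plus a matching detection check. Field names,
# provider, and system identifiers are hypothetical.
from datetime import datetime, timezone

from PIL import Image
from PIL.PngImagePlugin import PngInfo


def save_with_latent_disclosure(img: Image.Image, path: str) -> None:
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("provider", "ExampleCorp")      # hypothetical provider identity
    meta.add_text("system", "example-gen-v1")     # hypothetical generating system
    meta.add_text("created", datetime.now(timezone.utc).isoformat())
    img.save(path, pnginfo=meta)


def detect_disclosure(path: str) -> dict:
    """Crude stand-in for the Act's detection tool: read back embedded chunks."""
    with Image.open(path) as im:
        return dict(getattr(im, "text", {}))


if __name__ == "__main__":
    save_with_latent_disclosure(Image.new("RGB", (64, 64)), "sample.png")
    print(detect_disclosure("sample.png"))  # {'ai_generated': 'true', ...}
```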

Utah S.B. 226 – Artificial Intelligence Consumer Protection Amendments

Utah's Senate Bill 226 amends the state's Artificial Intelligence Policy Act (UAIPA) to refine disclosure requirements for generative AI (GenAI) in consumer interactions, focusing on high-risk scenarios and establishing a safe harbor for compliant entities. The bill aims to balance consumer protection with innovation by targeting disclosure obligations to high-risk AI interactions and providing a compliance pathway for businesses through the safe harbor provision.

Key Provisions:

  • Disclosure Requirements:
    • Upon Direct Inquiry: Entities must disclose the use of GenAI when a consumer makes a clear and unambiguous request to determine if they are interacting with AI.
    • High-Risk Interactions: For regulated occupations (e.g., healthcare, legal, financial services), if the GenAI interaction involves collecting sensitive personal information or providing personalized advice that could influence significant personal decisions, a prominent disclosure is required at the outset of the interaction (a disclosure-logic sketch follows this list).
  • Safe Harbor Provision:
    • Entities that provide clear and conspicuous disclosures at the beginning and throughout the GenAI interaction—indicating that the user is interacting with AI—may qualify for a safe harbor, protecting them from certain enforcement actions.
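
As a hedged sketch, the disclosure logic might look like the following in a GenAI chat endpoint. The inquiry heuristic, context labels, and disclosure wording are all invented for illustration and are not drawn from the bill's text.

```python
# Hypothetical SB 226-style disclosure logic: disclose on direct inquiry, and
# disclose up front for high-risk interactions in regulated occupations.
HIGH_RISK_CONTEXTS = {"healthcare", "legal", "financial"}

DISCLOSURE = "You are interacting with generative AI, not a human."


def is_direct_inquiry(user_message: str) -> bool:
    """Naive stand-in for detecting 'am I talking to an AI?' questions."""
    text = user_message.lower()
    return "are you" in text and any(w in text for w in ("ai", "human", "bot"))


def respond(user_message: str, context: str, model_reply: str,
            session_started: bool) -> str:
    """Assemble a reply, prepending disclosures where the rules would apply."""
    parts = []
    if not session_started and context in HIGH_RISK_CONTEXTS:
        parts.append(DISCLOSURE)   # prominent disclosure at the outset
    if is_direct_inquiry(user_message):
        parts.append(DISCLOSURE)   # disclose upon clear and unambiguous inquiry
    parts.append(model_reply)
    return "\n".join(dict.fromkeys(parts))  # drop duplicate disclosures, keep order


print(respond("Are you a human?", "healthcare", "Here is some information...",
              session_started=False))
```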

Preventing Deep Fake Scams Act (H.R. 1734) – 119th Congress (2025–2026)

Introduced on February 27, 2025, the Preventing Deep Fake Scams Act aims to address the growing threat of deepfake technology in the financial sector. It proposes the establishment of a federal task force to study and report on the use of artificial intelligence (AI) and deepfakes in financial services.

Key Provisions

  • Establishment of a Task Force: The Act calls for the creation of a Task Force on Artificial Intelligence in the Financial Services Sector. This task force will be chaired by the Secretary of the Treasury and include representatives from major financial regulatory agencies, such as the Federal Reserve, FDIC, CFPB, OCC, NCUA, and FinCEN.
  • Objectives of the Task Force: Within one year of the Act's enactment, the task force is required to submit a comprehensive report to Congress. The report should:
    • Assess current uses of AI and deepfake technologies in financial services.
    • Identify risks associated with AI misuse, including fraud and identity theft.
    • Recommend best practices for safeguarding consumer data.
    • Propose legislative and regulatory measures to mitigate identified risks.
  • Public Consultation: The task force must solicit public feedback within 90 days of the Act's enactment to inform its report.

This legislation recognizes both the benefits and risks of AI in financial services and seeks to proactively address potential threats posed by deepfake technologies.

3. Model Risk Management and Monitoring

Virginia Executive Order 30 (2024): Safe Use of Artificial Intelligence

Issued by: Governor Glenn Youngkin | Effective: January 18, 2024

This executive order sets a framework for the responsible use of AI across Virginia's state government, education, and law enforcement.

Key Highlights:

  • AI Use Standards: The Virginia Information Technologies Agency (VITA) will publish policies requiring risk assessments, documentation, data protection, and disclosure when AI influences public decisions.
  • Education Integration: AI curriculum and guidelines to be developed for K-12 and higher education.
  • Law Enforcement Oversight: AI use standards in policing must be established by October 2024.
  • AI Task Force: A state-level task force will guide AI adoption and oversight.
  • Funding: $600,000 allocated for state agency AI pilot programs.

Artificial Intelligence Risk Management Framework (AI RMF 1.0) by NIST

Released in January 2023, the NIST AI Risk Management Framework (AI RMF 1.0) provides a structured approach to help organizations identify, assess, and manage risks associated with artificial intelligence (AI) systems. Developed by the National Institute of Standards and Technology (NIST), the framework is designed to enhance the trustworthiness of AI systems by addressing potential harms to individuals, organizations, and ecosystems.

Core Functions

The AI RMF is organized around four key functions:

  1. Govern: Establishes a culture of risk management by integrating policies, processes, and accountability structures across the organization.
  2. Map: Identifies and contextualizes risks throughout the AI lifecycle, considering factors such as data quality, model behavior, and operational environment.
  3. Measure: Assesses and tracks the trustworthiness characteristics of AI systems, including validity, reliability, safety, security, explainability, privacy, and fairness.
  4. Manage: Prioritizes and addresses identified risks through mitigation strategies, continuous monitoring, and stakeholder engagement.

The AI RMF is voluntary and adaptable, intended for organizations of all sizes and sectors that develop, deploy, or use AI systems. By providing clear risk management guidelines, it aims to promote a common understanding of AI risks and encourage responsible practices across the AI ecosystem.
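
The framework prescribes no particular tooling or schema, but a simple risk register makes the four functions concrete. The sketch below is one possible shape under that caveat; every field and method name is an assumption of this post, not something NIST specifies.

```python
# Illustrative-only risk register organized around the AI RMF's four
# functions (Govern, Map, Measure, Manage). The schema is invented.
from dataclasses import dataclass, field


@dataclass
class Risk:
    description: str                                          # Map
    metrics: dict[str, float] = field(default_factory=dict)   # Measure
    mitigations: list[str] = field(default_factory=list)      # Manage
    owner: str = "unassigned"                                 # Govern


@dataclass
class RiskRegister:
    policy_refs: list[str] = field(default_factory=list)      # Govern
    risks: list[Risk] = field(default_factory=list)

    def map_risk(self, description: str, owner: str) -> Risk:
        risk = Risk(description=description, owner=owner)
        self.risks.append(risk)
        return risk

    def measure(self, risk: Risk, metric: str, value: float) -> None:
        risk.metrics[metric] = value

    def manage(self, risk: Risk, mitigation: str) -> None:
        risk.mitigations.append(mitigation)


register = RiskRegister(policy_refs=["internal AI policy v2"])
r = register.map_risk("credit model may underperform on thin-file applicants",
                      owner="model-risk-team")
register.measure(r, "subgroup_auc_gap", 0.07)
register.manage(r, "add fairness monitoring with monthly review")
print(r.metrics, r.mitigations)
```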

Responsible AI Disclosure Act of 2024

The Responsible AI Disclosure Act of 2024 is federal legislation that directs federal financial agencies to study standardized descriptions for vendor-provided artificial intelligence (AI) systems, with the aim of enhancing transparency and accountability in AI technologies used within the financial sector.

Key Provisions:

  • Study and Report: The bill requires federal financial agencies to carry out a study and report on standardized descriptions for vendor-provided AI systems. This includes establishing current and recommended definitions and standards for categorizing AI data and methodologies used to train AI models commonly utilized by regulated entities.
  • Coordination Among Agencies: The heads of each covered agency are authorized to coordinate with other covered agencies and consult with other relevant government agencies to carry out the study.
  • Assessment of Practices: The study will assess current and recommended practices for regulated entities and vendors to identify data used to train AI models utilized by regulated entities.

NIST AI 800-1: Managing Misuse Risk for Dual-Use Foundation Models

The US Artificial Intelligence Safety Institute (AISI) at NIST released the second public draft of NIST AI 800-1 on January 15, 2025. This document provides voluntary guidelines for identifying, assessing, and mitigating misuse risks associated with dual-use foundation models—AI systems capable of both beneficial and harmful applications.

Key Updates in the Second Draft

  • Enhanced Evaluation Practices: A new appendix offers detailed methodologies for measuring misuse risks, aiding developers in implementing effective safeguards.
  • Domain-Specific Guidance: Expanded appendices address misuse risks in chemical, biological, and cybersecurity domains, providing targeted strategies for these high-risk areas.
  • Marginal Risk Framework: The draft emphasizes assessing the incremental risk a model introduces compared to existing technologies, promoting a nuanced understanding of potential harms (a toy calculation follows this list).
  • Consideration for Open Models: Guidelines now include recommendations tailored for open-source model developers, recognizing the unique challenges and responsibilities in this space.
  • Supply Chain Risk Management: The document extends its focus beyond developers to include other stakeholders in the AI supply chain, encouraging a holistic approach to risk mitigation.
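
The marginal-risk idea reduces to a small worked example: evaluate a misuse task with and without model assistance and look at the uplift over the baseline. The task names and success rates below are fabricated purely to illustrate the arithmetic.

```python
# Toy illustration of "marginal risk": score misuse evaluations against a
# baseline of existing tools rather than in isolation. All numbers are made up.
def marginal_risk(success_with_model: float, success_baseline: float) -> float:
    """Incremental uplift a model provides over existing tools (0-1 scale)."""
    return max(0.0, success_with_model - success_baseline)


evals = {
    # task: (success rate with model assistance, success rate with web search alone)
    "synthesis-planning": (0.42, 0.38),
    "vuln-exploitation":  (0.55, 0.30),
}

for task, (with_model, baseline) in evals.items():
    print(f"{task}: marginal uplift = {marginal_risk(with_model, baseline):.2f}")
```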

The public comment period for this draft closed on March 15, 2025. Feedback from various stakeholders, including industry experts and academic institutions, has been instrumental in refining the guidelines.

4. Fairness and Governance

Illinois Supreme Court Policy on Artificial Intelligence

The Illinois Supreme Court has established a comprehensive policy to guide the ethical and responsible use of artificial intelligence (AI) within the state's judicial system. This initiative, developed by the Illinois Judicial Conference's AI Task Force, aims to balance technological innovation with the preservation of judicial integrity and public trust.

Key Highlights:

  • Accountability: Judges, attorneys, and court personnel are responsible for ensuring that AI-generated content used in legal proceedings is accurate and complies with existing legal and ethical standards.
  • Ethical Compliance: The policy reaffirms that the Illinois Rules of Professional Conduct and the Code of Judicial Conduct fully apply to AI usage, maintaining that AI should not compromise due process, equal protection, or access to justice.
  • Use of AI: The use of AI tools by legal professionals is permitted and encouraged, provided such use adheres to legal and ethical obligations.
  • Disclosure: There is no requirement for parties to disclose the use of AI in legal filings, acknowledging the widespread integration of AI in various technologies.
  • Data Protection: The policy cautions against using AI applications in ways that could compromise sensitive or confidential information, including personal identifying information and protected health information.

5. Emerging Areas: Generative AI and National Security

US Department of Commerce: Generative AI and Open Data Guidelines (January 2025)

In January 2025, the US Department of Commerce released the Generative Artificial Intelligence and Open Data: Guidelines and Best Practices, a comprehensive framework aimed at enhancing the accessibility and utility of public data for generative AI applications. Developed by the Commerce Data Governance Board's AI and Open Government Data Assets Working Group, the guidelines are designed to optimize the Department's open data assets for integration with AI systems.

Key Objectives:

  • Enhancing AI Accuracy: Ensuring that generative AI models produce accurate and properly sourced responses when utilizing Commerce data.
  • Promoting Authoritative Data: Prioritizing the use of official Commerce datasets over non-authoritative sources in AI-generated outputs.

Core Recommendations:

  1. Documentation: Provide comprehensive metadata and contextual information to facilitate AI understanding and processing.
  2. Data and Metadata Formats: Adopt standardized, machine-readable formats to ensure interoperability and ease of use by AI systems (a metadata sketch follows this list).
  3. Data Storage and Dissemination: Implement efficient storage solutions and dissemination practices to support AI access and utilization.
  4. Data Licensing and Usage: Clearly define licensing terms to govern the use of data by AI applications, ensuring compliance and ethical use.
  5. Data Integrity and Quality: Maintain high data quality standards to ensure reliability and trustworthiness in AI applications.
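
As a sketch of recommendations 1 and 2, dataset documentation can be published as standardized, machine-readable metadata. The example below loosely follows the schema.org Dataset vocabulary; the dataset, values, and URL are placeholders, and the guidelines do not mandate this particular vocabulary.

```python
# Hedged sketch: dataset documentation as machine-readable metadata, loosely
# following the schema.org Dataset vocabulary. The dataset itself is invented.
import json

dataset_metadata = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Example coastal temperature observations",   # hypothetical
    "description": "Hourly sea-surface temperatures from fixed buoys.",
    "license": "https://creativecommons.org/publicdomain/zero/1.0/",
    "temporalCoverage": "2020-01-01/2024-12-31",
    "distribution": [{
        "@type": "DataDownload",
        "encodingFormat": "text/csv",
        "contentUrl": "https://example.gov/data/sst.csv",  # placeholder URL
    }],
}

print(json.dumps(dataset_metadata, indent=2))
```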

Generative AI Training Data Disclosure (AB 2013) (Effective January 1, 2026)

Requires developers of generative AI systems made publicly available to Californians to post documentation describing the datasets used to train those systems.

Safe and Secure Innovation for Frontier Artificial Intelligence Models (SB 1047)

Proposed baseline safety standards for powerful foundation models, with carve-outs for smaller startups; the bill was vetoed by Governor Newsom in September 2024.

Managing Misuse Risk for Dual-Use Foundation Models (NIST Draft) 

Offers risk management strategies for models capable of both beneficial and harmful uses (discussed in Section 3 as NIST AI 800-1).

6. International and Global AI Regulations

As artificial intelligence technologies continue to advance, the international community has recognized the need for robust and harmonized approaches to AI regulation. Countries and organizations around the world are actively developing their own frameworks for regulating AI, each reflecting unique cultural, social, and economic priorities.

The European Union's AI Act stands out as a landmark piece of AI legislation, setting a high bar for AI governance globally. The EU AI Act introduces a risk-based approach to AI regulation, categorizing AI systems according to their potential impact on fundamental rights and safety. High-risk AI systems—such as those used in critical infrastructure, health care, or law enforcement—are subject to strict requirements for transparency, human oversight, and accountability. The Act also mandates comprehensive risk assessment and documentation throughout the AI lifecycle, promoting responsible AI development and deployment.

Other countries, including Japan and South Korea, have also established their own AI regulations, with a strong focus on promoting responsible innovation and ensuring that AI systems align with societal values. These frameworks often emphasize the importance of transparency, ethical standards, and the protection of individual rights in the use of artificial intelligence.

Global efforts to establish common standards for AI governance are ongoing, with international organizations and governments collaborating to address cross-border challenges and promote trustworthy AI. As AI technologies become increasingly integrated into both private and public sectors, the development of coherent and effective AI regulations at the international level will be essential for fostering innovation while safeguarding public interests.

7. Legislative Momentum and Future Direction

Several bills have been introduced but not yet enacted, signaling growing bipartisan interest:

Recent legislative activity includes a range of AI bills at both the federal and state levels, reflecting the increasing urgency to regulate artificial intelligence. The White House Blueprint for an AI Bill of Rights, released under the Biden administration, outlines key principles for ethical AI development, including privacy, fairness, and accountability, and the Biden administration advanced AI policy more broadly through executive orders and advocacy for responsible AI practices. In contrast, President Trump issued the "Removing Barriers to American Leadership in AI" Executive Order in January 2025, emphasizing a reduction in regulatory constraints to promote US AI leadership; the first Trump administration had earlier shaped US AI policy through initiatives like the National Artificial Intelligence Initiative Act of 2020, with a focus on innovation and a free-market approach. Together, these bills and policy frameworks highlight the evolving landscape of AI regulation and the significance of government efforts in shaping the future of artificial intelligence.

The American Privacy Rights Act (APRA)

Aims to establish a national standard for data privacy and AI accountability.

AI Accountability Act 

Proposes studies on how AI accountability can be embedded at every product lifecycle stage.

SAFE Innovation Framework for Artificial Intelligence

Introduced in June 2023, the SAFE Innovation Framework outlines a bipartisan approach to AI policy, aiming to balance innovation with safety and democratic values.

Core Principles:

  • Security: Ensure AI systems bolster national security and economic stability, while preventing misuse by adversaries.
  • Accountability: Mandate transparency and responsibility from AI developers and users, addressing concerns like intellectual property and liability. This includes providing human alternatives to AI-driven decisions, so that users can stop or override automated outcomes and human oversight is preserved for safety and accountability. For this purpose, AI is defined as a machine-based system that makes predictions, recommendations, or decisions influencing environments in pursuit of human-defined objectives.
  • Foundations: Align AI development with democratic values, safeguarding civil rights and electoral integrity.
  • Explainability: Promote AI systems that provide clear, understandable decisions, addressing the technical challenges of transparency.
