Global Trends on AI Regulation: Transparent and Explainable AI at the Core

By Ketaki Joshi | 10 minute read | December 19, 2024


The accelerating capabilities of artificial intelligence, particularly in generative AI (GenAI) and large language models (LLMs), have elevated the need for AI regulations to the forefront of policymakers’ agendas worldwide. GenAI is a transformative technology that powers content generation and automation, driving innovation across industries and reshaping workflows.

Generative AI refers to systems that can create new content, such as text, images, audio, or video, across multiple modalities. Increasingly, agentic systems built on these models can also carry out tasks like writing, research, or booking travel with limited human intervention. Key application areas include text, image, music, video, and art generation, with tools like DALL-E and Stable Diffusion making it straightforward to produce realistic images and videos. Under the hood, these capabilities rest on deep learning, most notably large foundation models trained for content generation.

Training the foundation models behind generative AI is compute-intensive and time-consuming, requiring significant resources and effort. Architectures such as generative adversarial networks (which pit two neural networks against each other), variational autoencoders, and recurrent neural networks played a crucial role in advancing the field, while transformers, which process entire sequences in parallel, have since revolutionized natural language processing and content creation. Generative AI is now widely used for voice cloning, writing, and code generation, supporting developers and content creators alike, and its applications extend to customer-service chatbots, AI assistants, medical image generation, and molecular design. These models are typically trained on large volumes of internet data, with data scientists playing a vital role in developing, evaluating, and ensuring their trustworthiness. GenAI tools are transforming industries and the global economy as organizations leverage them for competitive advantage, and synthetic data is increasingly used alongside real-world data to improve model performance.

Generative AI is reshaping the creative process, enabling new forms of art and writing, some of it generated entirely by AI. The landscape is evolving rapidly, with further advances expected in the months and years ahead. For policymakers, the emphasis is on promoting innovation while putting safeguards in place to manage the associated risks, with transparency and explainability emerging as key focus areas. The rapid adoption of AI by businesses and governments worldwide only heightens the urgency for clear regulatory frameworks.

India

India embraces a pro-innovation approach to AI, focusing on unlocking its full potential while implementing adequate safeguards to manage associated risks. Accordingly, the Ministry of Electronics and Information Technology (MeitY) has published a blueprint for a new Digital India Act, which contains specific provisions for regulating high-risk AI systems. The blueprint advocates defining and governing such systems through legal and institutional quality-testing frameworks.

The proposed act underscores AI explainability as a crucial requirement for regulating high-risk AI systems in several respects:

Algorithmic Transparency: AI models, particularly those used in high-risk scenarios, must be explainable to ensure accountability. This entails clearly articulating how decisions are made to demonstrate fairness, reduce bias, and prevent harmful outcomes.

Quality Testing Frameworks: Explainability will be integral to the legal and institutional frameworks that assess the reliability, robustness, and compliance of AI systems. These frameworks typically evaluate the quality and diversity of training data and may use synthetic data to enhance model robustness and privacy. Data augmentation is also employed to expand and diversify datasets, supporting more reliable AI system performance (a brief augmentation sketch follows below).

Risk Mitigation for Zero-Day Threats: Explainable AI tools assist in identifying vulnerabilities and diagnosing potential risks in AI systems, making them essential for security and resilience.

Justifications in Ad-Targeting & Content Moderation: Explainability ensures that AI systems can provide clear, understandable reasons for ad placements or flagged/moderated content, aligning with ethical and regulatory standards.
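
To make the data augmentation point above concrete, here is a minimal sketch of one common approach: jittering numeric features to expand a small tabular dataset before testing a model on it. The dataset, feature meanings, and noise scale are illustrative assumptions, not anything prescribed by the Digital India Act blueprint.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical records: [age, annual income]
original = np.array([[35.0, 52000.0],
                     [41.0, 61000.0]])

def augment(rows, copies=3, noise_scale=0.02):
    """Return the original rows plus noisy copies (proportional Gaussian jitter)."""
    augmented = [rows]
    for _ in range(copies):
        noise = rng.normal(0.0, noise_scale, size=rows.shape)
        augmented.append(rows * (1.0 + noise))
    return np.vstack(augmented)

print(augment(original).round(1))
```

In practice, a quality-testing framework would pair such augmentation with checks that the expanded data preserves the statistical properties and label semantics of the original records.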

NITI Aayog, the Indian government’s public policy think tank, has released frameworks on Responsible AI (RAI) in partnership with the World Economic Forum. The latest paper in the series, “Responsible AI for All: Adopting the Framework – A use case approach on Facial Recognition Technology,” tackles the dual challenges of accuracy and interpretability in AI systems. It emphasizes how increasingly complex algorithms, while yielding more accurate results, frequently compromise explainability.

The paper also highlights:

  • Challenges of balancing accuracy and explainability.
  • Principles like Self-Explainable Systems, Meaningful Explanations, and Accurate Decision Justifications to build trust and accountability.

European Union

In March 2024, the European Parliament approved the AI Act, the world’s first comprehensive AI law. The act entered into force in August 2024, with most of its provisions applying from August 2026.

The AI Act treats transparency as essential for high-risk AI systems, given their complexity, so that users can understand and use them effectively. It prioritizes transparency through measures such as:

  • Design Transparency: Clear documentation of AI operations, potential risks, and fundamental rights considerations.
  • Interaction Clarity: Informing users when they are interacting with an AI system or viewing AI-generated content, particularly in areas such as emotion detection and content manipulation. These disclosure obligations extend to AI-generated web pages and other digital content, so that people understand the origin and nature of what they are engaging with and can make informed decisions.

These transparency measures aim to empower individuals to understand and navigate AI systems and content.

Australia

In September 2024, the Australian Government issued a Policy for the responsible use of AI in government, marking a significant step toward positioning itself as a global leader in the safe and ethical use of AI. The policy underscores the need for AI to be used in an ethical, responsible, transparent, and explainable manner. It mandates that Australian Public Service (APS) officers must be able to explain, justify, and take ownership of AI-driven advice and decisions. A key focus is the development and deployment of AI applications for public services, ensuring these technologies deliver value across government functions.

Earlier, in June 2024, Australia’s Data and Digital Ministers issued a National Framework for the assurance of artificial intelligence in government, outlining how governments can apply eight AI Ethics Principles to their AI assurance processes:

  1. Human, societal and environmental wellbeing
  2. Human-centred values
  3. Fairness
  4. Privacy protection and security
  5. Reliability and safety
  6. Transparency and explainability
  7. Contestability
  8. Accountability

Key measures for AI transparency and explainability include:

  1. Disclose AI Usage: Governments must inform users when AI is employed, maintaining a public register detailing the AI system’s purpose, intended use, and limitations (a sketch of such a register entry follows this list).
  2. Reliable Data Practices: Governments should adhere to legal and policy standards for recording decisions, testing, and data assets, enabling scrutiny, knowledge continuity, and accountability. Robust software development practices are essential to ensure transparency and accountability in government AI systems.
  3. Clear Explanations: Governments must provide understandable explanations for AI outcomes, including:
  • Inputs, variables, and their influence on system reliability.
  • Testing results and human validation.
  • Implementation of human oversight. When explainability is limited, governments should document reasons for using AI and apply increased oversight.
  4. Human Accountability: In administrative decision-making, humans remain accountable for AI-influenced decisions, which must be explainable.
  5. Frontline Staff Support: Staff should be trained to explain AI outcomes, prioritizing human-to-human communication, especially for vulnerable groups or those uncomfortable with AI use.
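
As a rough illustration of the ‘Disclose AI Usage’ measure, the sketch below shows the kind of fields a public register entry might capture. The schema and example values are assumptions made for illustration; the framework does not prescribe a specific format.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AIRegisterEntry:
    system_name: str
    purpose: str
    intended_use: str
    limitations: list
    human_oversight: str

entry = AIRegisterEntry(
    system_name="Benefit-claim triage assistant",  # hypothetical system
    purpose="Prioritise incoming claims for human review",
    intended_use="Decision support only; no automated refusals",
    limitations=["Not validated for non-English submissions"],
    human_oversight="All recommendations reviewed by a case officer",
)

# Serialize the entry as it might appear in a machine-readable public register.
print(json.dumps(asdict(entry), indent=2))
```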

United States and Generative AI Tools

In June 2024, the National Institute of Standards and Technology (NIST) published the ‘Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile’ as a companion to the AI RMF 1.0, addressing cross-sectoral risks of generative AI (GAI). Key trustworthy-AI characteristics include fairness (with bias mitigation), safety, validity, reliability, explainability, and interpretability.

The framework highlights risks like GAI outputs producing confabulated logic or citations, misleading users into overtrusting the system. Suggested measures include applying explainable AI (XAI) techniques, such as gradient-based attributions, counterfactual prompts, and model compression, to continuously improve system transparency and mitigate risks. Retrieval-augmented generation (RAG) can also be used to enhance the transparency and accuracy of generative AI outputs by integrating external data sources, providing more reliable and current information. It also emphasizes validating, documenting, and contextualizing AI outputs to support responsible use and governance, recommending interpretability methods to align GAI decisions with their intended purpose. Providing access to relevant documentation and model explanations is crucial for stakeholders and regulators to review and understand AI systems.
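
As a concrete illustration of one of these techniques, the sketch below computes a simple gradient-based attribution (saliency) for a toy PyTorch model: it measures how sensitive the model’s score is to each input feature. The model, feature names, and input values are invented for illustration and are not drawn from the NIST profile.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy scoring model standing in for a deployed system.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
feature_names = ["income", "age", "tenure", "utilisation"]  # hypothetical features

x = torch.tensor([[0.4, 0.7, 0.1, 0.9]], requires_grad=True)
score = model(x).squeeze()   # scalar score for this single input
score.backward()             # gradient of the score w.r.t. each input feature

# Larger absolute gradients indicate features the score is most sensitive to.
for name, grad in zip(feature_names, x.grad[0].tolist()):
    print(f"{name:12s} attribution = {grad:+.4f}")
```

For a production generative model, per-token attributions, counterfactual prompting, or retrieval-augmented generation would be applied on top of the deployed system rather than a toy network, but the underlying idea of tracing an output back to its inputs is the same.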

US President Joe Biden issued an executive order (EO) on the ‘Safe, Secure, and Trustworthy Development and Use of AI’ in October 2023. The Executive Order calls on independent regulatory agencies to fully utilize their authority to protect consumers from risks associated with AI. It emphasizes the need for transparency and explainability, requiring AI models to be transparent and mandating that regulated entities can explain their AI usage.

The Blueprint for an AI Bill of Rights by the White House Office of Science and Technology Policy (OSTP) provides a non-binding set of guidelines for the design, development, and deployment of artificial intelligence (AI) systems. The document acknowledges the challenges posed by the opaque and complex decision-making processes of automated systems, which can undermine accountability. It stresses that these complexities must not be used as an excuse to avoid explaining critical decisions to those affected. Clear and valid explanations are identified as a baseline requirement to uphold transparency and accountability in automated decision-making.

Global Trends and Challenges in Explainable AI (XAI) and AI Adoption

Global regulations are struggling to keep pace with the rapid evolution of AI technologies and their applications. The lack of a universally accepted definition of AI complicates efforts to regulate the field. Additionally, the absence of standardized metrics for measuring explainability poses difficulties for enforcing compliance. Cutting-edge advancements in AI technology, such as generative AI, LLMs, and AI Agents, further drive regulatory challenges by introducing new complexities and capabilities at a rapid pace.

Other key challenges include:

  • AI systems are multi-faceted: AI systems are inherently diverse, with applications varying significantly across industries. There are many families of generative and other learning models, each with its own architecture and purpose, spanning text, image, and audio generation. Historically, early generative models such as Markov chains played a foundational role in probabilistic text generation, laying the groundwork for more advanced approaches (a toy example follows this list), and natural language processing has been central to enabling these models to process and generate human language. This diversity, especially among generative models, complicates explainability, as different types of models pose distinct challenges to understanding and interpreting their outputs. Hence, a ‘one size fits all’ approach to regulating AI risks being overly stringent in some contexts while insufficient in others, highlighting the need for tailored regulatory frameworks that account for industry-specific requirements and challenges.
  • Trade-off between accuracy and interpretability: AI systems such as deep neural networks and large language models (LLMs) are inherently complex. Achieving higher accuracy and output quality often means greater model complexity, which in turn makes these systems harder to explain. Even when explanations are available, simplifying them for non-technical stakeholders remains a significant challenge, as it risks losing critical nuances of the underlying decision-making process. Modern generative AI, including the Generative Pre-trained Transformer (GPT) architectures that underpin many advanced applications, pushes this complexity further.
  • Navigating Proprietary Algorithms: Many AI systems are built on proprietary algorithms, creating tensions between the need for transparency and the protection of intellectual property rights.
  • Cultural Considerations: Global AI regulations must respect the diverse legal frameworks, societal values, and cultural norms across countries. As AI capabilities develop unevenly worldwide, there are challenges in policy and governance, with advancements and regulatory efforts often concentrated in more developed regions, potentially creating disparities in global AI standards and practices.
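
For readers unfamiliar with the early generative models mentioned in the first bullet above, the sketch below builds a first-order Markov chain over a tiny corpus and samples text from it. The corpus and sampling length are placeholders chosen purely for illustration.

```python
import random
from collections import defaultdict

corpus = "the model learns the next word from the previous word".split()

# First-order transition table: word -> list of observed next words.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

random.seed(42)
word = "the"
output = [word]
for _ in range(8):
    candidates = transitions.get(word)
    if not candidates:   # dead end: this word was never followed by anything
        break
    word = random.choice(candidates)
    output.append(word)

print(" ".join(output))
```

Modern transformer-based models replace this single-step lookup with learned representations over long contexts, which is exactly what makes them both far more capable and far harder to explain.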

Conclusion: Implications for Large Language Models

The need for robust regulations has never been more urgent. As AI technologies continue to shape industries and societies, it is imperative for global regulatory frameworks to evolve alongside them. Jurisdictions such as India, the EU, Australia, and the US are taking crucial steps toward integrating explainability into their AI regulations, focusing on transparency, fairness, and accountability.

While there are considerable challenges in establishing effective AI regulations, these trends highlight the ever-growing need for explainable AI (XAI) to ensure that AI predictions are not only accurate but also understandable and justifiable to both technical and non-technical stakeholders. As regulations continue to catch up with rapid advancements in AI, it is up to organizations to ensure that transparency and accountability are prioritized in their AI deployments. This will not only ensure regulatory compliance but also build trust with users, mitigate risks, and ultimately drive the ethical deployment of AI technologies.
