EU AI Act is here: How can organizations stay compliant
2 Min Read
August 7, 2024
The world's first comprehensive AI law, the European Artificial Intelligence Act (AI Act), entered into force on 1 August 2024.
The AI Act will become fully applicable two years after entry into force, on 2 August 2026, with the following exceptions:
- The prohibitions, the definitions, and the provisions related to AI literacy apply 6 months after entry into force, on 2 February 2025;
- The rules on governance and the obligations for general-purpose AI models become applicable 12 months after entry into force, on 2 August 2025;
- The obligations for high-risk AI systems that are classified as high-risk because they are embedded in regulated products listed in Annex I (the list of Union harmonisation legislation) apply 36 months after entry into force, on 2 August 2027.
Definitions and key points
The Act defines an 'AI system' in the following terms:
"Artificial intelligence system (AI system)" means "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;"
Annex I of the Commission's original proposal listed the techniques through which software qualifies as an AI system:
(a) "Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
(b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
(c) Statistical approaches, Bayesian estimation, search and optimization methods."
The regulation categorizes AI systems based on the risk they pose to users, differentiating between uses of AI that create (i) an unacceptable risk, (ii) a high risk, and (iii) low or minimal risk.
A separate risk classification applies to general-purpose AI (GPAI) models. The Act defines a GPAI model as follows:
"General-purpose AI model" means "an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market".
The AI Act also considers systemic risks that could arise from general-purpose AI models, including large generative AI models. Such models can be used for a variety of purposes and are becoming the foundation for many AI systems in the EU. If widely used, some of these models could carry systemic risks depending on their capabilities. Currently, general-purpose AI models trained using a cumulative amount of compute greater than 10^25 floating-point operations (FLOPs) are presumed to pose systemic risks.
The rules and obligations collectively aim to establish a comprehensive framework for the responsible and ethical development, deployment, and use of AI technologies.
Preparing Your Organization for AI Regulation
The Act focuses heavily on prioritising and safeguarding the fundamental rights of people. It acknowledges that AI, due to characteristics such as opacity and dependency on data, can impact fundamental rights outlined in the EU Charter. The Act lays out that transparency and traceability in AI systems, along with robust post-deployment controls, will enable effective redress for individuals affected by potential fundamental rights violations.
Several key themes emerge from the Act that can guide organizations in taking strategic steps to ensure compliance, mitigate risks, and leverage opportunities.
Transparency
The AI Act underscores transparency as crucial for high-risk AI systems: it counters their complexity and ensures users can understand and use them effectively. Accompanying documentation with clear instructions is mandated, including information about potential risks to fundamental rights and of discrimination.
Integrating transparency measures during the design and development of an AI system is therefore vital: it provides the insight needed to explain model decisions, such as how a prediction changes when an input variable changes and how much weight each input carries. Such transparency measures provide the evidence that backs the model's predictions and makes it trustworthy, responsible, and auditable.
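As a purely illustrative sketch (the Act does not prescribe any particular technique), the snippet below shows one common way to surface how much weight each input carries, using permutation importance from scikit-learn on a placeholder classifier and synthetic data:

```python
# Illustrative sketch only: surfacing feature influence with permutation
# importance. The model, dataset, and feature names are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# How much does shuffling each feature degrade held-out performance?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {importance:.3f}")
```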
Documentation
The AI Act mandates detailed technical documentation for high-risk AI systems to ensure transparency and accountability. This documentation must be created before the system is placed on the market or put into use, kept up to date, and include essential information such as system characteristics, capabilities, limitations, algorithms, data details, training, testing and validation processes, and risk management documentation.
Organizations must regularly review and update the technical documentation for their high-risk AI systems. Proactively implementing logging mechanisms that record AI system operations for traceability and audit purposes ensures readiness for compliance evaluations by competent authorities or notified bodies.
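Below is a minimal, hypothetical sketch of such a logging mechanism: each prediction is appended to an audit log as a JSON line, together with the model version and inputs an auditor might need. The field names and log location are assumptions, not requirements taken from the Act.

```python
# Hypothetical record-keeping sketch: append each prediction as a JSON line
# with the context an auditor might need. Field names and the log path are
# assumptions, not terms taken from the AI Act.
import json
from datetime import datetime, timezone

LOG_PATH = "predictions.jsonl"  # hypothetical audit-log location

def log_prediction(model_version: str, inputs: dict, output, confidence: float) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "confidence": confidence,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example usage with made-up values
log_prediction("credit-risk-v1.2.0", {"income": 48000, "age": 37}, "approve", 0.91)
```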
Technical robustness
The AI Act emphasizes that high-risk AI systems must prioritize technical robustness. These systems must be resilient to errors, faults, inconsistencies, and unexpected situations. Failing to safeguard against these risks can result in safety issues or infringements of fundamental rights due to erroneous decisions or biased outputs from the AI system.
To adhere to these requirements, organizations must develop and implement strategies to mitigate risks associated with AI systems, including safety and ethical risks. Implementing MLOps and ML observability practices enables the early detection of issues such as underlying bias, model or data drift, and performance degradation, allowing for quick resolution before they impact end-users.
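As an illustration of one such observability practice, the sketch below computes the Population Stability Index (PSI) between a training-time baseline and live production values for a single numeric feature. The simplified equal-width binning and the 0.2 rule of thumb are assumptions, not prescriptions from the Act.

```python
# Illustrative drift check: Population Stability Index (PSI) between a
# training-time baseline and live production values for one numeric feature.
import numpy as np

def population_stability_index(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    # Shared bin edges spanning both samples (simplified equal-width binning)
    edges = np.histogram_bin_edges(np.concatenate([baseline, current]), bins=bins)
    base_frac = np.histogram(baseline, edges)[0] / len(baseline)
    curr_frac = np.histogram(current, edges)[0] / len(current)
    # Avoid division by zero and log(0) in sparsely populated bins
    base_frac = np.clip(base_frac, 1e-6, None)
    curr_frac = np.clip(curr_frac, 1e-6, None)
    return float(np.sum((curr_frac - base_frac) * np.log(curr_frac / base_frac)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)    # feature values seen during training
production = rng.normal(0.3, 1.0, 10_000)  # live feature values (shifted)
psi = population_stability_index(baseline, production)
print(f"PSI = {psi:.3f}")  # common rule of thumb: > 0.2 suggests significant drift
```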
Human oversight
Article 14 of the AI Act emphasizes that high-risk AI systems must be designed to allow effective human oversight while the AI system is in use. This oversight aims to prevent or minimize risks to health, safety, or fundamental rights that may arise when using the AI system as intended or under foreseeable misuse conditions.
Human oversight makes it much easier for organizations to govern AI behaviour so that it reflects organisational, societal, and business preferences. AI systems must be designed with oversight mechanisms that enable the individuals overseeing the system to comprehend its operation, identify biases, interpret outputs accurately, and intervene when necessary.
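A minimal sketch of one possible oversight mechanism follows, assuming an organization-defined confidence threshold below which decisions are routed to a human reviewer rather than acted on automatically; the threshold and field names are illustrative, not values from the Act.

```python
# Hypothetical human-oversight gate: predictions below an assumed confidence
# threshold are routed to a reviewer instead of being acted on automatically.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.80  # assumed internal policy value, not a legal figure

@dataclass
class Decision:
    output: str
    confidence: float
    needs_human_review: bool

def decide(model_output: str, confidence: float) -> Decision:
    return Decision(
        output=model_output,
        confidence=confidence,
        needs_human_review=confidence < CONFIDENCE_THRESHOLD,
    )

for output, confidence in [("approve", 0.95), ("reject", 0.62)]:
    decision = decide(output, confidence)
    route = "human review queue" if decision.needs_human_review else "automated path"
    print(f"{decision.output} (confidence {decision.confidence:.2f}) -> {route}")
```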
Monitoring and reporting obligations
Article 61 of the AI Act mandates providers of high-risk AI systems to establish a post-market monitoring system, scaled to the nature of the AI technologies and associated risks. This system must actively gather, document, and analyze relevant data on the performance of these systems throughout their lifespan.
Tracking system performance and functioning both before and after production deployment enables a proactive approach to investigating model issues and surfacing their root causes. It gives organizations granular visibility into the model and lets them trace observed effects back to their causes.
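The sketch below illustrates one assumed design for such tracking: accuracy computed over a rolling window of recent predictions, so that degradation surfaces soon after it begins rather than at the next audit.

```python
# Assumed post-market monitoring sketch: accuracy over a rolling window of
# recent predictions, so degradation is visible soon after it starts.
from collections import deque

class RollingAccuracyMonitor:
    def __init__(self, window_size: int = 500):
        self.outcomes = deque(maxlen=window_size)  # 1 = correct, 0 = incorrect

    def record(self, prediction, ground_truth) -> None:
        self.outcomes.append(1 if prediction == ground_truth else 0)

    @property
    def accuracy(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else float("nan")

monitor = RollingAccuracyMonitor(window_size=100)
monitor.record("approve", "approve")   # ground truth typically arrives later in practice
monitor.record("reject", "approve")
print(f"rolling accuracy: {monitor.accuracy:.2f}")
```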
Mitigating Bias and Ensuring Safety
High-risk AI systems are allowed on the market or into service within the Union only if they meet specific mandatory requirements. Article 15 focuses on high-risk AI systems that continue to learn after being placed on the market. These systems must be developed so that potentially biased outputs arising from feedback loops are addressed with appropriate mitigation measures whenever such outputs are used as inputs for future operations. There are additional obligations regarding testing, risk management, and human oversight of these systems.
After model deployment, mechanisms such as alert systems, performance benchmarks, and logs can be set up to maintain the required accuracy and model performance. Flagging performance outliers uncovers retraining opportunities and helps ensure there is no underlying bias in the data. Such mechanisms can help organizations reduce the time to detection and the time to resolution of issues. Additionally, models should be trained and tested on sufficiently representative datasets to minimise the risk of unfair biases.
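As a simplified, hypothetical example of such a check, the snippet below compares positive-outcome rates across two subgroups and raises an alert when the gap exceeds an internal tolerance; the tolerance value and metric choice are assumptions, not figures from the Act.

```python
# Simplified, hypothetical bias check: compare positive-outcome rates across
# two subgroups and alert when the gap exceeds an assumed internal tolerance.
TOLERANCE = 0.10  # assumed organizational threshold, not a value from the Act

def positive_rate(predictions: list) -> float:
    return sum(predictions) / len(predictions)

# Hypothetical post-deployment predictions (1 = favourable outcome) per subgroup
group_a = [1, 1, 0, 1, 1, 0, 1, 1]
group_b = [1, 0, 0, 0, 1, 0, 0, 1]

gap = abs(positive_rate(group_a) - positive_rate(group_b))
if gap > TOLERANCE:
    print(f"ALERT: subgroup outcome gap {gap:.2f} exceeds tolerance {TOLERANCE:.2f}")
```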
Complying with the EU AI Act will take organizations a great deal of preparation, especially those developing high-risk AI systems and general-purpose AI models. Organizations should therefore start preparing sooner rather than later, since the penalties for non-compliance are significant.
To get more insights on how your organization can adopt AI monitoring and explainability templates, connect with our team today.