In an era marked by the rapid ascent of artificial intelligence (AI), governments and regulatory bodies worldwide are navigating the complex terrain of AI governance. At the forefront of these efforts is the European Union’s proposed “Artificial Intelligence Act.” This comprehensive regulation aims to harmonize AI rules across the EU, fostering innovation while safeguarding fundamental rights and values. 


Understanding the AI Act: A European approach to excellence and trust 

With the AI Act, the European Commission is shifting towards a legislative approach to AI regulation, seeking to keep pace with the burgeoning field of AI technology. It strives to strike a balance between promoting innovation and ensuring responsible AI development. Anticipated for adoption by the end of 2023, the AI Act embodies a human-centric approach, designed to inspire trust in AI systems while upholding compliance with the law and respecting human rights. The overarching goal is to encourage the uptake of AI’s benefits while mitigating its associated risks and fostering a unified market for AI applications.


The scope of the AI Act: who will it apply to?

This far-reaching regulation applies to AI system providers within and outside the EU who offer services within the EU or place their systems on the EU market. Users of AI systems will also be subject to the law upon its adoption. Even AI providers and users located outside the EU may come under its scope if their system’s output is intended for use within the EU.

The broad definition of AI is chosen deliberately to encompass current and future technologies, focusing on the system’s key functional characteristics. The Commission will continually update a list of techniques and approaches used in AI system development to ensure that emerging technologies fall within the regulatory ambit. 


How the AI Act operates: categorizing AI systems  

The AI Act categorizes AI systems into four primary groups: 

  • Unacceptable risk systems 
  • High-risk systems 
  • Limited-risk systems 
  • Low and minimal risk systems 

This risk-based approach forms the foundation of the proposed legislation. The level of obligations imposed on providers or users depends on the risk level associated with the AI system in question. Unacceptable risk systems face total prohibition, while high-risk systems are subject to varying degrees of obligations. Limited-risk systems are primarily subject to transparency requirements, while low and minimal risk systems are encouraged to follow an ethical code without obligatory mandates.


Obligations for high-risk AI systems: navigating responsibility  

Among the categories defined by the European Union’s proposed Artificial Intelligence Act, high-risk AI systems stand out as a focal point of regulation. These systems, characterized by their potential to significantly impact individuals and society, are subject to a set of rigorous obligations designed to ensure safety, accountability, and transparency. 

At the heart of these obligations is a risk-based approach, acknowledging that not all AI systems pose the same level of risk. High-risk AI systems, often found in sectors such as healthcare, transportation, and critical infrastructure, are entrusted with responsibilities that go hand in hand with their potential impact. 


Risk-management system: Providers of high-risk AI systems are required to establish robust risk-management systems. These systems are designed to identify, assess, and mitigate potential risks associated with the AI technology they deploy. The goal is to ensure that these systems operate safely and reliably, minimizing any adverse effects on individuals and society. 

Technical documentation: Transparency is a cornerstone of responsible AI. High-risk AI providers must maintain detailed technical documentation that outlines the functioning and capabilities of their systems. This documentation serves as a reference point for regulatory authorities, fostering accountability and traceability. 

Conformity assessment: Before high-risk AI systems can be placed on the market or put into service, they must undergo a thorough conformity assessment. This assessment evaluates whether the system complies with the requirements set forth in the AI Act. It serves as a critical checkpoint to ensure that high-risk AI systems meet the necessary safety and ethical standards. 

Registration obligations: To enhance oversight and accountability, providers of high-risk AI systems are required to register their systems with the relevant authorities. This step further strengthens the regulatory framework, allowing authorities to track the deployment and use of these systems effectively. 

Corrective action: Should a high-risk AI system fail to meet the requirements established in the risk-management system, corrective action is mandatory. Providers must take immediate steps to rectify any issues, ensuring that their AI systems align with safety and ethical standards. 

The obligations imposed on high-risk AI systems are robust and comprehensive, reflecting the EU's commitment to ensuring that AI technology benefits society without compromising safety or ethics. These measures not only protect individuals and businesses but also pave the way for responsible AI innovation in Europe. 


Enforcement of the AI Act: oversight and coordination 

Enforcement of the AI Act falls under the purview of national supervisory authorities. To ensure coordination and uniform application of the regulation, the European Commission proposes the creation of a European Artificial Intelligence Board (EAIB). This board's role is to coordinate between national supervisory authorities and the Commission, addressing any issues arising from the regulation. It will provide guidance to these entities, ensuring consistent compliance. The EAIB will comprise national supervisory authorities, represented by their heads or high-level officials, along with the European Data Protection Supervisor, and will be chaired by the Commission. 


Impact on businesses: complying with the AI Act 

The AI Act imposes obligations on all AI system providers and users during development and operation, and once their systems are on the market. It applies uniformly across all EU Member States and extends its reach to certain AI systems located outside the EU if their output is intended for use within the EU.

In conjunction with the AI Act, businesses should also consider the EU’s proposed AI Liability Directive. This directive addresses the complexities of establishing liability for AI-related damages across Member States, linking such damage to the responsibility of users or providers.

As we approach the anticipated adoption of the European Union's Artificial Intelligence Act in 2023, businesses and stakeholders in AI technology must prepare to navigate this transformative landscape. The AI Act represents a vital step towards responsible AI development and innovation in Europe, where trust, accountability, and progress converge. 


Embracing responsible AI in Europe 

In summary, the European Union’s proposed Artificial Intelligence Act represents a significant leap forward in the responsible governance of AI technology. Its key objectives include fostering innovation, ensuring compliance with the law, and upholding fundamental human rights. However, as with any transformative regulatory framework, there are accompanying concerns, such as finding the right balance between innovation and regulation, achieving global harmonization, and addressing the compliance costs faced by businesses. 

At Grant Thornton, we understand the challenges and opportunities presented by the AI Act. As leaders in AI strategy and compliance, we are here to support your organization in navigating this evolving landscape. Our team of experts can help you prepare for and implement the changes brought about by the AI Act, ensuring that your business not only complies with regulations but also leverages AI technology to its fullest potential. 


Take action today 

If you’re seeking guidance on how to align your AI strategy with the AI Act, or if you require assistance with compliance, feel free to reach out to Grant Thornton. Together, we can embrace the responsible use of AI in Europe, promoting innovation, safeguarding rights, and shaping a brighter future for AI technology.

Contact us today to embark on a journey towards responsible and successful AI adoption in the European Union. Let’s pioneer a future where trust, accountability, and innovation coexist harmoniously in the realm of artificial intelligence.