In line with the European Commission’s shift towards a legislative approach to AI, a regulation has been proposed to harmonise the rules on artificial intelligence across the European Union. The proposed law seeks to keep the rapid development of these new technologies under control, while at the same time allowing room for innovation and more efficient operations.

The proposed regulation is expected to be adopted by the end of 2023.

The Commission has taken a human-centric approach, intended to maintain trust in AI systems while ensuring compliance with the law and respect for fundamental human rights. The aim is to encourage the uptake of the benefits of AI while mitigating its risks, and to facilitate the creation of a single market for AI applications, thereby avoiding market fragmentation.

 

What will the AI Act apply to? 

The proposed regulation will apply to any provider, whether established within or outside the EU, that places an AI system on the market or puts it into service in the EU. Users of AI systems will themselves also be subject to the law once it is adopted. Even where the provider or user of an AI system is not located within the EU, the proposed legislation will still apply if the output generated by the system is to be used within the EU.

A wide definition of AI has been chosen, aimed at encompassing both present and future technologies, with the emphasis placed on the key functional characteristics of the system. The list of techniques and approaches used in the development of AI systems will be continuously updated by the Commission through delegated legislation, to ensure that new technologies fall within the scope of the Act.

 

How will the AI Act work? 

Under the AI Act, AI systems will be classified into four main categories:

  1. Unacceptable-risk systems
  2. High-risk systems
  3. Limited-risk systems
  4. Low- and minimal-risk systems

The proposed legislation adopts a risk-based approach: the obligations with which the provider or user must comply therefore depend on the level of risk posed by the AI system in question.

Unacceptable-risk systems will be prohibited outright. High-risk systems will be further subdivided, with obligations imposed on each system accordingly. The obligations which limited-risk systems will be expected to meet mainly concern transparency. Low- and minimal-risk systems will only be encouraged to abide by a code of ethics, to be published by the European Commission in due course, but no obligations will be imposed on them.

The obligations imposed on high-risk systems are the most significant, and include: 

  • A risk-management system 
  • Technical documentation 
  • A conformity assessment, to be carried out before the system is placed on the market or put into service
  • Registration obligations 
  • Corrective action, to be taken where the system does not meet the requirements of the risk-management system

 

How will the AI Act be enforced? 

National supervisory authorities will be responsible for enforcing the AI Act. The European Commission has therefore proposed the creation of a European Artificial Intelligence Board (EAIB), whose role will be to coordinate between the national supervisory authorities and the European Commission on issues arising from the regulation.

The Board will also provide guidance to both the national supervisory authorities and the Commission, while ensuring the consistent application of the regulation. The EAIB is to be composed of the national supervisory authorities, each represented by its head or a high-level official, together with the European Data Protection Supervisor, and will be chaired by the Commission.

 

What will be the impact of the AI Act on your business? 

The AI Act will impose obligations on all providers and users of AI systems, applicable both during the development stage of a system and while it is in operation or on the market. It will be enforceable across all EU Member States, and some AI systems located outside the EU may also fall within the scope of the Act if their output is intended to be used within the EU.

In this light, businesses must also consider the EU’s AI Liability Directive, which is to be adopted alongside the AI Act. The directive seeks to address the difficulties of establishing liability for harm stemming from an AI system across Member States, by creating a link between the damage caused by the AI system and the fault of the system’s user or provider.

 

Take action today 

If you’re seeking guidance on how to align your AI strategy with the AI Act, or if you require assistance with compliance, feel free to reach out to Grant Thornton. Together, we can embrace the responsible use of AI in Europe, promoting innovation, safeguarding rights, and shaping a brighter future for AI technology.

We encourage you to watch this space for further information about the scope of the Artificial Intelligence Act as well as how it can impact your business.