
Habemus ordinationem: the AI Act is here!

Reading time: 4 minutes

White smoke ascended from Brussels on Friday 8 December: after 36 hours of negotiations, the European Parliament (EP) and the Council of Ministers (i.e. the member states) reached consensus on the content of the AI Act. This outcome was by no means a given, but the pressure to reach an agreement was huge: elections for the EP will be held next year, followed by the appointment of a new European Commission. Had negotiations stalled this year, the realisation of an AI Act would have been set back by at least another year.

The final sticking points: real-time facial recognition and foundation models

Before the final negotiations could begin, two matters on which the EP and the member states still needed to agree were open: the EP wanted a complete ban on real-time facial recognition, whereas the member states wanted to leave a little elbow room. And possibly even more important: back in November, Germany, France and Italy argued that so-called foundation models should only be subject to self-regulation, whereas the EP wanted to label them high-risk systems.

Why do we need an AI Act?

In recent years, AI development, which began in the 1950s, has made astonishing progress, partly due to huge advances in computing power and the immense growth in data storage capacity. This has created great opportunities, but also great risks for people and society. There are countless cases in which the implementation of AI has led to unwanted consequences. To curtail these risks and encourage responsible use of AI, the European Commission submitted a proposal in 2021 to regulate the development and implementation of AI.

The result: four categories of risk

In accordance with the European Commission’s proposal, the forthcoming AI regulation distinguishes four categories of risk:

  1. Unacceptable risk:
    These systems have such a high risk profile that they are prohibited outright. This category includes social scoring systems, emotion recognition in schools and work environments, the large-scale, untargeted scraping of facial images, and systems designed to manipulate behaviour.
  2. High risk:
    These systems will be strictly regulated, with obligations distributed between developers and deployers. Examples include real-time facial recognition (which will be permitted only rarely, for law enforcement against specific, serious crimes) and AI used in critical infrastructure, essential public and private services, the administration of justice and democratic processes.
  3. Limited risk:
    AI systems that interact with people, like chatbots.
  4. Minimal risk:
    Applications that pose little or no risk, like AI used in video games and spam filters.
Source: European Commission

The problem with this tiered approach is that it is all or nothing: either extremely strict rules apply (high-risk systems), or hardly any. Since each AI application carries its own risk profile, the need for more nuance is to be expected. Even for systems thought to carry limited or minimal risk, thinking about responsible implementation is important.

Requirements for high-risk AI systems

Although the final text has not yet been released, the European Commission’s original proposal gives us a good idea of the requirements for high-risk systems. Depending on whether you are a developer or a deployer of AI systems, the expected requirements include:

  • Having an operational quality management system in place for the AI systems in question;
  • Maintaining technical documentation and records;
  • Performing fundamental rights impact assessments;
  • Performing conformity assessments;
  • Ensuring sufficient quality of input data;
  • Meeting strict transparency requirements.

Specific requirements for foundation models

Foundation models are systems developed for a wide variety of tasks, which may serve as the basis for a broad range of (sometimes high-risk) AI applications. Examples include large language models such as GPT, Gemini and Llama. Specific transparency requirements will apply to these models, with additional rules for high-impact foundation models: those trained on vast quantities of data and requiring huge amounts of computing power.

What now?

Now, the negotiated text still needs to be officially adopted by both the EP and the Council. This is expected to happen within a couple of months, putting the AI Act’s entry into force in the first half of 2024. Most provisions will have a transition period of two years, which means that the AI Act will not fully apply until 2026. Furthermore, past experience with other (European) legislation tells us that there will be a few i’s that need dotting, and that the establishment of case law will take time. In addition, as with the GDPR, national supervisory authorities will need time to settle on various interpretations.

In the meantime, each and every AI system will have to be placed in one of the four risk categories. This will, no doubt, lead to a lot of debate, because a lower risk qualification is beneficial to deployers as well as users. However, as I mentioned earlier: even if your system is labelled a ‘limited’ risk, you should be mindful of responsible application.
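To make that categorisation exercise concrete, here is a minimal sketch, in Python, of what an internal AI inventory with risk tiers might look like. The four RiskCategory values mirror the tiers described above; everything else (the AISystem fields, the example entries and their classifications) is purely hypothetical and illustrative, not something prescribed by the AI Act.

```python
from dataclasses import dataclass
from enum import Enum


class RiskCategory(Enum):
    """The four risk tiers of the AI Act (labels are illustrative)."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strictly regulated
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # little or no obligations


@dataclass
class AISystem:
    """One entry in a hypothetical internal AI inventory."""
    name: str
    purpose: str
    category: RiskCategory
    owner: str  # who in the organisation is accountable


# Hypothetical entries; the categorisation of any real system
# must follow the final text of the AI Act, not this sketch.
inventory = [
    AISystem("resume-screener", "pre-selects job applicants",
             RiskCategory.HIGH, "HR"),
    AISystem("support-chatbot", "answers customer questions",
             RiskCategory.LIMITED, "Customer Service"),
    AISystem("spam-filter", "filters inbound e-mail",
             RiskCategory.MINIMAL, "IT"),
]

# List the systems needing attention first: higher risk tiers at the top.
tier_order = list(RiskCategory)
for system in sorted(inventory, key=lambda s: tier_order.index(s.category)):
    print(f"{system.category.value:>12}: {system.name} (owner: {system.owner})")
```

However simple, an inventory like this forces the two questions every organisation will face: which tier does each system fall into, and who is accountable for it.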

What effect will the AI Act have on your organisation?

It looks like the AI Act will be officially adopted somewhere around mid-2024. If your organisation is already using AI (such as prediction models, foundation models or large language models), you will need to comply with the requirements of the AI Act by 2026. That may seem like a long time, but experience has shown that compliance with new legislation takes effective planning and action. Preparing your organisation by listing your AI applications and putting the required safeguards in place will be extremely time-consuming. It is imperative for management to understand that this is a change process that cannot be carried out at the drop of a hat. If you would like to learn more, or have a chat about this, please do not hesitate to contact me.
