The AI Act - what will new EU legislation on artificial intelligence bring?

On 8 December 2023, after three days of negotiations, the European Parliament and the Council of the European Union reached political agreement on the EU Regulation on Artificial Intelligence ("AI Act") - the first comprehensive piece of legislation to regulate artificial intelligence ("AI") in the EU. The declared aim of the AI Act is to contain the risks to health, safety and fundamental rights associated with the use of AI systems and to protect democracy, the rule of law and the environment.

The final version of the AI Act is not yet available. However, the most important key points and changes compared to the original Commission proposal from April 2021 [KWR article Artificial Intelligence: New EU legislation] can already be found in the press releases of the Parliament and the Council, as well as in the European Commission's Q&A on the AI Act of 12 December 2023.

Who does the AI Act concern?

The AI Act concerns both public and private actors - providers, developers and importers - inside and outside the EU if the AI system is placed on the market in the EU or affects people in the EU.

Certain exceptions exist for providers of free and open-source AI models, as well as for certain research, development and prototype development activities.

The risk-based approach

As not all AI systems pose an unacceptable or high risk - most come with only a low risk - the AI Act follows a four-tier, risk-based approach:

  • Forbidden AI

A particularly controversial point in the negotiations between Parliament and Council had to do with systems which pose an unacceptable risk to security and fundamental rights, thus falling into the category of prohibited applications. Whereas Parliament wanted to include a significantly larger number of systems, Council favoured more freedom in the use of AI for national security and law enforcement.

Prohibited applications now include, inter alia:

- facial recognition in public spaces for law enforcement purposes, with exceptions for measures taken with prior judicial authorisation and in cases of certain criminal offences (e.g. to prevent an imminent terrorist threat or human trafficking);

- social scoring for private and public purposes;

- exploitation of "weak points" such as age, disability, or social or economic circumstances;

- biometric categorisation based on sensitive characteristics (including political views, sexual orientation, religious or philosophical beliefs and race).

  • High-risk AI

These AI systems pose a high risk to security or fundamental rights and are listed in an annex to the AI Act. This annex is continuously updated and reviewed by the Commission. Applications covered include general education and occupational training, employment and human resources management, credit scoring, law enforcement and judicial administration.

AI systems which are mentioned in this annex but do not in fact pose a significant risk to security or fundamental rights are exempted.

High-risk AI systems must fulfil a number of obligations in order to be placed on the market or put into operation in the EU:

- Conformity assessment: the requirements for trustworthy AI, for example regarding data quality, transparency or cyber security, must be demonstrably fulfilled and the system must bear the European conformity marking (CE);

- Quality and risk management system: the risks for users and affected persons must be minimised and the new requirements must be complied with;

- Registration in a public database of the EU: this is required for use by public authorities or on behalf of public authorities;

- Fundamental rights impact assessment: the result must be communicated to national authorities and contain certain information, such as a description of the operator's processes in which the high-risk AI system is to be used, the categories or groups of persons concerned and the risks of harm to them.

  • General-purpose AI

Using a two-stage approach, the AI Act also regulates "general-purpose AI" which has been trained on large amounts of data. General-purpose AI can perform tasks of very different types and can be integrated into a variety of downstream AI systems (e.g. the GPT model on which ChatGPT is based):

- For general-purpose AI (first stage), the Act primarily envisages transparency requirements. For example, providers must maintain technical documentation and provide sufficient information about their model so that downstream providers integrating the general-purpose AI into their systems are able to fulfil their own obligations under the AI Act. Moreover, certain information on the handling of copyright and training data must be made available.

- For general-purpose AI with "high impact" which poses a "systemic risk" (second stage), model evaluations and tests must be carried out to assess, mitigate and monitor potential security risks at EU level. A systemic risk is presumed to exist if the general-purpose AI has been trained with a cumulative computing power of more than 10^25 floating-point operations.
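The compute threshold above is a simple numeric test; purely as an illustration (the function name and inputs are hypothetical, not taken from the Act), the presumption could be sketched as:

```python
# Illustrative sketch only: the AI Act presumes a systemic risk where a
# general-purpose model's cumulative training compute exceeds 10^25 FLOPs.
SYSTEMIC_RISK_FLOP_THRESHOLD = 10 ** 25

def presumed_systemic_risk(cumulative_training_flops: float) -> bool:
    """Return True if training compute exceeds the 10^25 FLOP threshold."""
    return cumulative_training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_systemic_risk(3e25))  # a model trained with 3 x 10^25 FLOPs
print(presumed_systemic_risk(1e24))  # well below the threshold
```

Note that this is only a legal presumption: the Commission may also designate models as systemically risky on other grounds, so exceeding the threshold is a sufficient, not a necessary, trigger.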

  • Minimal-risk AI

All other AI systems - which the Commission considers to include the majority of AI systems currently being used or likely to be used in the EU in the future - can be developed and used in compliance with generally applicable legislation.

What are the penalties for breaching the AI Act?

Breaches of the ban on certain forms of AI are penalised with fines of 7% of the global annual turnover of the infringing group of companies or EUR 35 million, whichever amount is higher.

Penalties for other material breaches can be up to the higher of 3% of global annual turnover or EUR 15 million.
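The "whichever is higher" rule in both penalty tiers is simply the maximum of a turnover percentage and a fixed amount. As a hedged sketch (the function and the sample turnover figure are illustrative, not from the Act):

```python
def ai_act_fine_ceiling(global_annual_turnover_eur: float,
                        turnover_pct: float,
                        fixed_amount_eur: float) -> float:
    """Return the applicable fine ceiling: the higher of a percentage of
    global annual group turnover or a fixed euro amount."""
    return max(global_annual_turnover_eur * turnover_pct, fixed_amount_eur)

# Prohibited-AI breach for a group with EUR 1 billion turnover:
# 7% of turnover (EUR 70m) exceeds the EUR 35m floor.
print(ai_act_fine_ceiling(1_000_000_000, 0.07, 35_000_000))

# Other material breach for a smaller group with EUR 200m turnover:
# 3% of turnover (EUR 6m) is below EUR 15m, so the fixed amount applies.
print(ai_act_fine_ceiling(200_000_000, 0.03, 15_000_000))
```

The fixed amounts thus act as a floor for smaller companies, while the percentage drives the ceiling for large groups.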

Individuals can submit complaints about AI systems, on the basis of which the competent authorities can initiate market surveillance.

When do the new provisions of the AI Act take effect?

The final version of the AI Act still requires numerous technical details to be clarified; moreover, the draft has to undergo legal review, it has to be formally adopted by Parliament and Council and translated into all official EU languages. The publication of the final text in the Official Journal of the EU and its entry into force 20 days after publication are not expected to take place before the summer of 2024.

The provisions will take effect in stages over the following two years, until mid-2026: the ban on certain categories of impermissible AI applies after six months, followed after twelve months by the provisions concerning general-purpose AI with high impact and systemic risks and the obligations to be fulfilled by high-risk AI.

Outlook

Even though the final version of the AI Act is not yet known, companies should already start thinking about the impact of the AI Act on their business model and begin setting up an AI governance process in light of the Artificial Intelligence Act. Our New Technologies and IP Team will be happy to assist you and keep you up to date on all key developments.
