
The AI Regulation: New requirements from February 2, 2025

Artificial intelligence is now a constant companion in the everyday lives of companies and private individuals. With the EU AI Regulation (“AI Regulation”), there is now also a legal basis governing the use of AI. The Regulation is the world’s first comprehensive set of rules for regulating artificial intelligence (AI) and has officially been in force since August 1, 2024. The following article is intended to give you a first, cursory overview of the topic:

Most of the requirements of the AI Regulation will only apply after a transitional period from August 2, 2026, but there are the following exceptions:

  • Chapters I and II apply from February 2, 2025;
  • Chapter III Section 4, Chapter V, Chapter VII, Chapter XII and Article 78 apply from August 2, 2025, with the exception of Article 101.

Chapters I and II of the AI Regulation, which will apply from February 2, 2025, contain key provisions on the regulation of artificial intelligence. The aim of the regulation is to create uniform standards for the development and use of AI systems across Europe in order to ensure their safety, transparency and fairness. In particular, the AI Regulation introduces new obligations for providers and operators of AI systems that directly affect companies.

The AI Regulation distinguishes between different risk categories of AI systems – from minimal, limited and high risk to unacceptable risk. Unacceptable practices under Article 5 of the AI Regulation (Chapter II) include systems that pose a threat, for example because they deceive people or exploit certain characteristics of individuals for manipulation – such as manipulative techniques that impair the free will of individuals, or the use of certain real-time biometric identification systems in publicly accessible spaces. These practices are strictly prohibited.

All other AI systems are permitted, provided that certain requirements are met. This is regulated in Chapter I: Article 4 of the AI Regulation obliges providers and operators of AI systems to ensure that the respective users have a sufficient level of “AI competence”.

What exactly is it about?

1. Prohibition of unlawful practices

According to Article 5 of the AI Regulation, certain applications and practices of AI systems are prohibited. These include:

  • The use of manipulative techniques aimed at unfairly influencing people’s behavior.
  • The use of AI systems to monitor and classify people based on social behavior (“social scoring”).
  • The use of biometric systems for real-time identification in public spaces by law enforcement authorities (except in narrowly defined exceptional cases).

2. Imparting “AI competence”

Providers and operators of AI systems are obliged to ensure that users acquire a sufficient level of “AI competence”. The scope of the required “AI competence” always depends on the respective context of AI use. It must therefore be examined on a case-by-case basis how and to what level of detail the skills must be taught.

Concrete examples of skills to be imparted to users could include:

  • Information on how the respective AI tool works,
  • Information about possible risks when using the results,
  • Clarification of further obligations under the AI Regulation for the respective AI system (e.g. additional transparency obligations for so-called high-risk systems),
  • Informing users that AI is being used,
  • In the case of high-risk AI systems:
    • Imparting skills for monitoring the AI system,
    • Imparting skills for reviewing the decisions made by the AI.

Proof of competence can be provided in different ways. For example, there are already offerings that certify the general AI competence of employees. However, there are also more pragmatic approaches tailored to the specific use of AI systems.

In principle, it should be noted that the transfer of skills is required per AI system. It may therefore make sense to build up AI skills “layer by layer”. It should also be noted that the scope of the competence to be imparted increases in proportion to the identified risk of the system.

In any case, you as a company must take action and keep an eye on what at first glance appears to be an unexpected requirement for skills transfer!

Consequences for companies

From February 2, 2025, companies that develop or use AI systems will have to conduct a brief “AI inventory”:

  • Identification and elimination of AI systems in the company that could be classified as prohibited practices,
  • Development and implementation of skills training measures for employees and users,
  • Establishment of processes for classifying the AI systems used into a risk class (minimal, limited, high).
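For illustration only, the inventory steps above could be tracked in a simple internal register. The following Python sketch shows one possible way to record AI systems with a risk class and flag those needing action; all names, categories and fields are our own illustrative assumptions and are not prescribed by the AI Regulation:

```python
from dataclasses import dataclass
from enum import Enum

class RiskClass(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"  # Article 5 practices - must be eliminated

@dataclass
class AISystem:
    name: str
    purpose: str
    risk: RiskClass
    staff_trained: bool = False  # "AI competence" imparted per system

def compliance_gaps(inventory):
    """Return systems needing action: prohibited ones to eliminate,
    and permitted ones whose users still lack training."""
    prohibited = [s for s in inventory if s.risk is RiskClass.PROHIBITED]
    untrained = [s for s in inventory
                 if s.risk is not RiskClass.PROHIBITED and not s.staff_trained]
    return prohibited, untrained
```

Such a register is, of course, no substitute for a legal assessment of each system; it merely makes the case-by-case review described above repeatable.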

Conclusion

From February 2, 2025, the AI Regulation imposes obligations on your company to ensure that AI systems in your company are handled in accordance with the rules. Although your company will initially have to invest time and effort in implementation, this effort will steadily decrease with a consistent approach. At the same time, there are decisive advantages: employees learn to better understand and protect the rights of third parties as well as the company’s own rights, for example to its own know-how. This can help avoid legal disputes and promote a positive reputation for the company in the long term.

We will be happy to help you assess and classify the extent to which you are subject to the provisions of the AI Regulation and work with you to find pragmatic solutions. Feel free to contact our Digitalization & AI competence team!
