
New AI Act of the European Union aims to create trust in technology

The European Union recently adopted uniform rules for the use of artificial intelligence – and hopes to set a global standard. At the invitation of Hof University of Applied Sciences, Laura Jugel, Legal Officer at the EU Commission, informed researchers and students about the new AI law, which she herself played a key role in drafting and whose practical implementation she is currently working on.

Laura Jugel, Legal Officer of the EU Commission. Image: European Commission 2024

In May 2024, the 27 EU member states adopted stricter rules for artificial intelligence (AI) in the European Union with the “AI Act”. They thus approved plans to regulate the use of the technology in areas such as video surveillance, speech recognition and the analysis of financial data. The law, which will not be fully applicable until summer 2026, is the first of its kind in the world. It could set a global standard for the regulation of AI.

Technology neutrality of the legislation

As Laura Jugel, Legal Officer at the EU Commission, first explained in an online presentation, the law, which is part of the EU’s digital strategy, is intended to strengthen excellence in the field of AI and, in particular, consumer confidence in a technology that is currently the subject of much debate:

“AI can bring many benefits, for example better healthcare, safer transportation, more efficient manufacturing or a cheaper and more sustainable energy supply. But of course, everyone is also aware that AI could be used for discrimination or manipulation. That’s why it was important to define these risks and deal with them legislatively.”

Laura Jugel

The regulation, which is based on Article 114 TFEU, is purely a product safety law: it deals with how AI technology is placed on the market and used there – but not with how research and development in this area is carried out. “The legislation is intended to safeguard the basic health and safety needs of EU citizens as well as the protection of fundamental rights,” says Jugel.

Classification into risk categories

The new regulations lay down obligations for providers and users that depend specifically on the risk posed by the respective AI system. The AI Regulation thus follows a risk-based approach, with rules ranging from outright bans on certain uses to provider obligations such as documentation and transparency requirements. Which rules apply in an individual case therefore depends on the intended use of the AI and a dedicated risk assessment.

Source: EU Commission 2024

If there is an “unacceptable risk” to the values and rights to be protected, there will be a complete ban on use. This applies, for example, to the well-known “social scoring”, in which AI evaluates the social behavior of individuals – with consequences, for example, for personal creditworthiness or other areas of life. This technology is already in use in the People’s Republic of China, but is completely banned in the EU under the new law. Also prohibited under the new law are:

  • Cognitive behavioral manipulation of individuals or certain vulnerable groups (for example, through voice-controlled toys that encourage dangerous behavior in children)
  • Biometric identification and categorization of natural persons
  • Biometric real-time remote identification systems (facial recognition) used for law enforcement purposes in public spaces
  • Systems that use profiling to make predictions about criminality
  • Emotion recognition systems in the workplace and in education
  • Untargeted ‘scraping’ of facial images to build databases

The regulation focuses on “high-risk” applications, such as those used in medicine or recruiting. The AI Regulation introduces rules for classifying systems based on their intended use. High-risk systems basically fall into two categories:

  1. AI systems that are used as safety components in products that fall under EU product safety regulations, such as toys, aviation, vehicles, medical devices and elevators, or are such products themselves.
  2. AI systems that are to be used for “high-risk” use cases and must therefore be registered in an EU database. This includes use cases from the following areas:

  • Management and operation of critical infrastructure
  • Education and training
  • Employment, management of employees and access to self-employment
  • Use of essential private and public services
  • Law enforcement
  • Management of migration, asylum and border controls

Source: EU Commission 2024

All high-risk AI systems must meet technical requirements, are reviewed before being placed on the market and are evaluated throughout their life cycle. Citizens should have the right to lodge complaints about such applications with the competent national authorities.
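To make the risk-based logic described above more tangible, here is a minimal, purely illustrative Python sketch. It is not an official tool: the tier names, keyword lists and the triage function are assumptions made for this example, loosely based on the categories named in the article; an actual assessment is always case-by-case.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practice"
    HIGH = "high-risk: registration, conformity assessment"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Hypothetical keyword sets, loosely following the examples given in the article.
PROHIBITED_USES = {"social scoring", "emotion recognition at work", "untargeted face scraping"}
HIGH_RISK_AREAS = {"critical infrastructure", "education", "employment",
                   "essential services", "law enforcement", "migration and border control"}
TRANSPARENCY_USES = {"chatbot", "deepfake generation"}

def triage(intended_use: str) -> RiskTier:
    """Very rough triage by intended use; purely illustrative."""
    if intended_use in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if intended_use in HIGH_RISK_AREAS:
        return RiskTier.HIGH
    if intended_use in TRANSPARENCY_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage("employment"))      # RiskTier.HIGH
print(triage("social scoring"))  # RiskTier.UNACCEPTABLE
```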

At the third level, transparency obligations in particular are regulated, for example for chatbots or deepfakes. Content that has been generated or modified with the help of AI – images, audio or video files – must therefore be marked as AI-generated in a machine-readable form and, in certain cases, also in a visibly recognizable way.
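The article does not prescribe what such a machine-readable marker looks like. As a purely illustrative sketch (the schema and field names are assumptions for this example, not part of the regulation), a provider might attach a small provenance record to generated content, for instance as a sidecar file:

```python
import json
from datetime import datetime, timezone

def provenance_record(model_name: str, content_id: str) -> str:
    """Return a machine-readable marker stating that content is AI-generated.
    The schema below is invented for illustration only."""
    record = {
        "ai_generated": True,
        "generator": model_name,
        "content_id": content_id,
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, indent=2)

# Example: write the marker next to a generated image as a sidecar file.
with open("generated_image.json", "w") as f:
    f.write(provenance_record("example-image-model", "example-content-id"))
```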

In addition to the rules for AI systems, the AI Regulation also introduces rules for providers of so-called general-purpose AI models, often referred to as language models or foundation models. “In future, the sources of the training data for these language models will have to be documented in a publicly available summary. This is particularly relevant with regard to copyright law,” says lawyer Jugel.
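The regulation is not quoted here as prescribing a specific format for that summary. As an illustrative sketch only (the structure and field names are assumptions for this example), a machine-readable version of such a public training-data summary could look like this:

```python
import json

# Illustrative only: a structured public summary of training-data sources,
# as a provider of a general-purpose AI model might publish it.
training_data_summary = {
    "model": "example-language-model",
    "data_sources": [
        {"name": "public web crawl", "period": "2022-2023",
         "licensing": "mixed, filtered for opt-outs"},
        {"name": "licensed news archive", "period": "2020-2023",
         "licensing": "commercial licence"},
    ],
    "copyright_contact": "rights@example.org",
}

print(json.dumps(training_data_summary, indent=2))
```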

Next steps

The new AI Act will be fully applicable 24 months after it comes into force. However, some parts will apply earlier: for example, the ban on AI systems that pose unacceptable risks will apply just six months after it comes into force.

Accompanying codes of conduct are also to be completed nine months after entry into force. The rules for general-purpose AI systems will apply after 12 months.
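The staggered deadlines above can be read as simple month offsets from the date of entry into force. The following sketch computes the resulting dates; the entry-into-force date used here is an assumption for illustration (check the Official Journal for the authoritative date), and only the offsets named in the article are used.

```python
import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months, clamping to the last day of the target month."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

entry_into_force = date(2024, 8, 1)  # assumption for illustration only

milestones = {
    "ban on unacceptable-risk systems": 6,
    "codes of conduct completed": 9,
    "rules for general-purpose AI apply": 12,
    "fully applicable": 24,
}

for label, months in milestones.items():
    print(f"{label}: {add_months(entry_into_force, months)}")
```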

Source: EU Commission 2024

The EU Commission is setting up three advisory bodies for the upcoming enforcement of the law: the “AI Board”, in which the member states are represented, is to ensure coordination at EU level. A “Scientific Panel” consisting of scientists will provide technical input. An advisory forum will additionally provide the necessary input from stakeholders in the business sector.

Laura Jugel was invited as part of an event organized by Verband der Hochschullehrer Bayerns (vhb).


Rainer Krauß
