EU AI Act: first regulation on artificial intelligence

The use of artificial intelligence in the EU is regulated by the AI Act, the world’s first comprehensive AI law. Find out how it protects you.

[Illustration: a man faces a computer-generated figure with programming language in the background. The illustration was itself generated by AI.]

As part of its digital strategy, the EU wanted to regulate artificial intelligence (AI) to ensure better conditions for the development and use of this innovative technology. AI can create many benefits, such as better healthcare, safer and cleaner transport, more efficient manufacturing, and cheaper and more sustainable energy.

AI regulation in Europe: the first comprehensive framework


In April 2021, the European Commission proposed the first EU artificial intelligence law, establishing a risk-based AI classification system. AI systems that can be used in different applications are analysed and classified according to the risk they pose to users. The higher the risk level, the stricter the compliance requirements.

What Parliament wanted in AI legislation


Parliament's priority was to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes.


Parliament also wanted to establish a technology-neutral, uniform definition for AI that could be applied to future AI systems.

AI Act: different rules for different risk levels


The new rules establish obligations for providers and users depending on the level of risk posed by the AI system. While many AI systems pose only minimal risk, they still need to be assessed.


Unacceptable risk


Banned AI applications in the EU include:

  • Cognitive behavioural manipulation of people or specific vulnerable groups: for example voice-activated toys that encourage dangerous behaviour in children
  • Social scoring AI: classifying people based on behaviour, socio-economic status or personal characteristics
  • Biometric identification and categorisation of people
  • Real-time and remote biometric identification systems, such as facial recognition in public spaces


Some exceptions may be allowed for law enforcement purposes. “Real-time” remote biometric identification systems will be allowed in a limited number of serious cases, while “post” remote biometric identification systems, where identification occurs after a significant delay, will be allowed to prosecute serious crimes and only after court approval.


High risk


AI systems that negatively affect safety or fundamental rights will be considered high risk and will be divided into two categories:


1) AI systems that are used in products falling under the EU’s product safety legislation. This includes toys, aviation, cars, medical devices and lifts.


2) AI systems falling into specific areas that will have to be registered in an EU database:

  • Management and operation of critical infrastructure
  • Education and vocational training
  • Employment, worker management and access to self-employment
  • Access to and enjoyment of essential private services and public services and benefits
  • Law enforcement
  • Migration, asylum and border control management
  • Assistance in legal interpretation and application of the law



All high-risk AI systems will be assessed before being put on the market and also throughout their lifecycle. People will have the right to file complaints about AI systems to designated national authorities.

 

Transparency requirements


Generative AI, like ChatGPT, will not be classified as high-risk, but will have to comply with transparency requirements and EU copyright law:

  • Disclosing that the content was generated by AI
  • Designing the model to prevent it from generating illegal content
  • Publishing summaries of copyrighted data used for training

 

High-impact general-purpose AI models that might pose systemic risk, such as the more advanced AI model GPT-4, would have to undergo thorough evaluations and any serious incidents would have to be reported to the European Commission.

Content that is either generated or modified with the help of AI - images, audio or video files (for example deepfakes) - needs to be clearly labelled as AI-generated so that users are aware when they come across such content.

Encouraging AI innovation and start-ups in Europe


The law aims to support AI innovation and start-ups in Europe, allowing companies to develop and test general-purpose AI models before public release.

That is why it requires that national authorities provide companies with a testing environment for AI that simulates conditions close to the real world. This will help small and medium-sized enterprises (SMEs) compete in the growing EU artificial intelligence market.

Implementation


The Parliament has set up a working group to oversee the implementation and enforcement of the AI Act. MEPs want to make sure that the adopted AI rules contribute to the development of the digital sector in Europe.

The group cooperates with the European Commission’s EU AI office, which was set up to clarify key provisions of the act.

EU AI Act compliance timeline


In June 2024, the EU adopted the world’s first rules on AI. The Artificial Intelligence Act will be fully applicable 24 months after entry into force, but some parts will be applicable sooner:

  • The ban of AI systems posing unacceptable risks started to apply on 2 February 2025
  • Codes of practice will apply nine months after entry into force
  • Rules on general-purpose AI systems that need to comply with transparency requirements will apply 12 months after the entry into force


High-risk systems will have more time to comply with the requirements as the obligations concerning them will become applicable 36 months after the entry into force.

More on the EU’s digital measures