Artificial intelligence (AI) is increasingly impacting every aspect of our lives, from self-driving cars to cybersecurity. It is seen as central to the current digital transformation of society, and it has become an EU priority. While it poses challenges, it also presents opportunities. The EU is currently negotiating the EU AI Act, which seeks to ensure trustworthy AI innovation within the EU. This Just the Facts looks at AI technology, what the EU AI Act is, how it will work, the current state of negotiations on it and the next steps.
What is Artificial Intelligence?
According to the European Parliament, “AI is the ability of a machine to display human-like capabilities such as reasoning, learning, planning and creativity” and is set to define future technologies. “AI enables technical systems to perceive their environment, deal with what they perceive, solve problems and act to achieve a specific goal.”
- AI can be used to provide personalised recommendations while online shopping, based on previous searches and purchases.
- AI systems can help recognise and fight cyberattacks and other cyber threats by analysing input data, recognising patterns and tracing attacks back to their source.
- Certain AI tools can detect fake news and disinformation by mining social media information, looking for words that are sensational or alarming and identifying which online sources are deemed authoritative.
What is the EU AI Act?
In April 2021, the European Commission published the EU Artificial Intelligence Act (EU AI Act), a proposed law that aims to regulate AI systems in the EU. The Act will apply to providers of AI systems within the EU borders, users of AI systems in the EU, and certain AI systems produced externally but utilised in the EU.
The Act takes a risk-based approach, categorising AI systems as high-risk, limited-risk, minimal-risk, or no-risk, with requirements getting progressively stricter as the risk level rises. High-risk AI systems are those used in critical infrastructure, employment, law enforcement and other sensitive areas. These high-risk systems, such as self-driving cars, will face rigorous obligations related to data quality, documentation, transparency, human oversight, accuracy and more, before being placed on the market.
The EU AI Act also creates new EU-wide rules for voluntary codes of conduct that can govern lower-risk AI uses. It would require all AI systems, even those not deemed high-risk, to meet baseline requirements related to transparency, provision of information to users, human oversight, and other issues. At present, the EU and US are working on developing the voluntary AI Code of Conduct through the US-EU Trade & Tech Council.
Limited-risk systems like chatbots will need some transparency and oversight safeguards, while minimal-risk systems, such as AI-enabled video games, may follow voluntary codes of conduct. No-risk AI systems, such as spam filters, will not face any new requirements. Certain uses of AI, like social scoring and mass surveillance, will be banned entirely under the Act.
National competent authorities in each Member State will supervise compliance and can issue fines for violations. An EU AI Board will facilitate coordination on oversight. There will also be mechanisms like regulatory sandboxes where companies can test innovative AI under supervision before full deployment.
While the requirements aim not to stifle but to encourage beneficial AI, the Act includes provisions to enable valuable data sharing and access for startups and researchers. In summary, oversight will be proportional to an AI system's level of risk, but violations will be penalised. Ultimately, the EU AI Act seeks to facilitate trustworthy AI innovation within the EU.
Progress of Negotiations
Since the European Commission published the proposals in April, the EU institutions have differed on key aspects of the EU AI Act, such as governance structures, requirements for high-risk systems, treatment of biometric surveillance, and access for researchers.
The final agreed-upon definition of AI is important, as it will determine the scope of systems covered by the AI Act. A broader definition means more systems will face requirements and oversight under the law.
European Commission’s proposal:
- Defines AI systems as software that is developed with machine learning, logic- and knowledge-based approaches, or statistical approaches, among others.
- The system must be able to, “for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with.”
European Parliament’s negotiating position (adopted in June 2023):
- Uses the same core definition as the Commission but adds that the system must be able to “adapt its behaviour by analysing how the environment is affected by its previous actions.”
- Also introduces the requirement that the AI system must pose a significant risk of harming health, safety, and fundamental rights before falling under the regulation.
- The European Parliament created a document with all three EU institutional positions side-by-side.
The European Parliament's expansive definition covers more AI systems and adds a risk-based approach: it encompasses the Commission's definition while adding criteria around adapting behaviour and risk levels. This provides flexibility to regulate AI proportionately based on the level of risk.
Council of the EU’s common position (adopted in December 2022):
- Defines AI systems much more narrowly as just software developed with machine learning approaches “which can, for a given set of human-defined objectives, generate outputs, such as content, predictions, recommendations or decisions” that influence environments.
- Does not include the requirements around adapting behaviour or posing significant risk that were introduced by the Parliament.
The Council’s narrow machine learning-focused definition would be less adaptive to future AI advances. The current Spanish Presidency of the Council of the EU began preparing for new discussions to bring the Council’s position closer to the Parliament’s.
Dara Calleary, Minister of State for Enterprise, Trade and Employment stated in Dáil Éireann in June 2023 that “Ireland very much welcomes” the Act and that “it is important that this regulation is flexible and future-proofed in order to ensure that it continues to protect the safety and fundamental rights of the individual while also ensuring that innovation for good continues in this area.”
Negotiations between the European Commission, the European Parliament, and the Council of the EU are in their final stages, with trilogue negotiations happening in June and July this year. With the European Parliament elections taking place in June 2024 and wider pressures to close open legislative files before election campaigning begins, there is an ambition to reach agreement on the EU AI Act by the end of 2023 or early in 2024. As there will likely be a two-year implementation period, the Act's requirements are unlikely to take effect before 2025 or 2026.