Aline Beltrame de Moura*
The European Union Artificial Intelligence Act (AIA)[1] is a key regulation establishing a uniform legal framework in the European Union (EU) governing the development, placing on the market, putting into service, and use of Artificial Intelligence (AI) systems. The regulation aims to ensure that AI is used ethically, safely, and in accordance with fundamental rights.
The regulation of AI is part of a broader framework proposed by the EU Digital Strategy[2], which aims to create better conditions for the development and use of this innovative technology. Digital transformation is the integration of digital technologies into business operations and public services, as well as the impact of these technologies on society.
The regulation of AI in the EU began to take shape in 2017 with various recommendations and the creation of an expert group. In 2019, the Ethics Guidelines for Trustworthy AI were published[3], highlighting the need for an ethical approach that respects human rights. In April 2021, the European Commission presented the proposal for the Artificial Intelligence Regulation, which was formally adopted by the European Parliament in March 2024 and is now in the process of being published in the Official Journal of the European Union. This regulatory framework contains 180 recitals, 113 articles, and 13 annexes.
The AIA has four main objectives:
1. Safety and Fundamental Rights: Ensure that AI systems in the EU market are safe and comply with existing fundamental rights legislation.
2. Legal Certainty: Facilitate investment and innovation in AI by providing a clear regulatory framework.
3. Improved Governance: Ensure the effective implementation of existing legislation and safety requirements for AI systems, involving various actors at both national and EU levels.
4. Development of the Single Market: Promote the legal, safe, and reliable use of AI applications and prevent market fragmentation.
To achieve these objectives, the regulation establishes a horizontal, balanced, and proportionate normative approach to AI, imposing the minimum necessary requirements to mitigate risks without hindering technological development or disproportionately increasing market costs. The legal framework is robust and flexible, with broad requirements based on principles that can endure over time.
The EU has adopted a risk-based approach, evaluating AI systems in a differentiated manner. This approach ensures that regulation is proportional to the risks presented by the systems, avoiding overly restrictive regulations for low-risk systems and guaranteeing robust protection for those that pose greater dangers. Thus, the AIA categorizes AI systems into four levels: unacceptable, high, limited, and minimal. This categorization determines the level of supervision and the requirements that developers and operators of these systems must meet.
Certain uses of AI deemed a clear threat to the rights and safety of European citizens, because they are contrary to Union values, fall into the "unacceptable risk" category and are prohibited by the AIA. These include practices such as social scoring, the exploitation of the vulnerabilities of specific groups of persons, real-time remote biometric identification in public spaces, and the use of subliminal manipulative techniques. Enforcing these prohibitions is crucial to maintaining public trust in emerging technologies and ensuring they are developed and used in a way that respects fundamental rights and promotes general well-being.
High-risk AI systems are those that pose a danger to people’s health, safety, or fundamental rights. These systems are subject to strict requirements and must undergo conformity assessments both before being introduced to the market and throughout their lifecycle. Additionally, citizens will have the right to file complaints against AI systems with the competent national authorities.
Examples of these systems include:
1. Remote biometric identification and categorization based on sensitive attributes;
2. Security components in critical infrastructures;
3. Systems used in education for admission and student evaluation;
4. Hiring tools and performance evaluation in the workplace;
5. Systems for granting essential services and creditworthiness evaluation;
6. Migration management and border control tools;
7. AI systems used in justice administration and democratic processes.
Limited-risk systems, such as chatbots and emotion recognition systems, are subject to transparency requirements. Providers must clearly inform users when they are interacting with AI or when content has been generated by AI. This includes clearly labeling AI-generated creations to avoid confusion and to protect copyright.
This regulation is fundamental to protecting user rights, fostering trust in technology, and ensuring the ethical and responsible use of AI. These requirements contribute to a safer and more transparent digital environment, benefiting both users and technology developers.
Minimal-risk AI systems do not pose a significant danger and are usually used for non-critical tasks. Examples include spam filters, music or movie recommendation systems, and virtual assistants. Although they are subject to fewer regulations, they must comply with basic transparency requirements and best practices, as well as existing laws such as the General Data Protection Regulation (GDPR)[4].
The AIA establishes a governance system at the Member State level and a cooperation mechanism at the EU level, including the creation of the EU AI Office and a European Artificial Intelligence Board. These bodies facilitate cooperation, the uniform implementation of the regulation, and the promotion of innovation. Additionally, the creation of "regulatory sandboxes", controlled testing spaces for experimenting with new AI technologies in a safe environment, is encouraged.
International cooperation is essential for the effectiveness of the AIA. The regulation has an extraterritorial dimension, also applying to AI systems developed outside the EU but intended for the European market. This prevents the creation of “data havens” and ensures uniform protection of European citizens’ rights. However, systems used for military and defense purposes, as well as those intended for scientific research, are excluded from the scope of the AIA.
Legislators worldwide have recognized the relevance and urgency of regulating artificial intelligence, as the opportunities it offers are numerous, but the associated risks can be even greater. In this context, the European Artificial Intelligence Act stands out as one of the most recent and complex pieces of technology regulation.
By establishing a clear and flexible legal framework, the EU ensures that AI is developed and used in an ethical, safe, and responsible manner. This approach not only benefits European citizens but also sets an important precedent for global AI regulation, promoting a future where technology serves human well-being without compromising fundamental values.
[1] Legislative Resolution of the European Parliament, dated March 13, 2024, on the proposal for a Regulation of the European Parliament and of the Council establishing harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD)). Pending publication in the Official Journal of the European Union.
[2] Decision (EU) 2022/2481 of the European Parliament and of the Council of December 14, 2022, establishing the Digital Decade Policy Programme 2030.
[3] Available at: https://digital-strategy.ec.europa.eu/es/library/ethics-guidelines-trustworthy-ai.
[4] Regulation (EU) 2016/679 of the European Parliament and of the Council on the protection of natural persons with regard to the processing of personal data and on the free movement of such data.
*Aline Beltrame de Moura
Professor of Law at the Federal University of Santa Catarina (Brazil). Jean Monnet Chair of European Union Law. Coordinator of the Jean Monnet Module (2018-2021), the Jean Monnet Network – BRIDGE Project (2020-2023), and the Jean Monnet Policy Debate – BRIDGE Watch (2023-2026). All projects are co-financed by the Erasmus+ Programme of the European Commission. Coordinator of the Latin American Center for European Studies (LACES) and Editor-in-Chief of the Latin American Journal of European Studies. PhD in International Law from the Università degli Studi di Milano (Italy).