European Parliament regulates artificial intelligence.
- Ivo Almeida
- Mar 14, 2024
- 3 min read
The European Parliament (EP) took a significant step today by formally approving the law that will regulate the use of artificial intelligence in the European Union, the Artificial Intelligence Act (AI Act). It was approved with 523 votes in favor, 46 against, and 49 abstentions.
Finally, we have the world's first binding law on artificial intelligence, aimed at reducing risks, creating opportunities, combating discrimination, and bringing transparency. Unacceptable AI practices will be banned in Europe, and the rights of workers and citizens will be increasingly protected.
The Artificial Intelligence Regulation directly responds to the proposals of citizens during the Conference on the Future of Europe.
The regulation, as approved, aims to protect fundamental rights, democracy, and the rule of law, positioning Europe as a leader in this essential endeavor.
The AI Act is a proposal built on four fundamental pillars:
- Safeguards for general-purpose artificial intelligence;
- Limits on the use of biometric identification systems by law enforcement authorities;
- Prohibition of social scoring and of AI used to manipulate or exploit user vulnerabilities;
- Consumer rights to lodge complaints and receive meaningful explanations.
Banned applications
The new rules prohibit certain AI applications that threaten citizens' rights, including biometric categorization systems based on sensitive characteristics and the indiscriminate collection of facial images from the internet or closed-circuit television to create facial recognition databases. Emotion recognition in the workplace and schools, social scoring, predictive policing (when based solely on profiling a person or evaluating their characteristics), and AI that manipulates human behavior or exploits people's vulnerabilities will also be prohibited.
Exceptions to the application of the proposal
There are, however, exceptions for law enforcement purposes. The use of remote biometric identification systems by law enforcement authorities is, in principle, prohibited except in exhaustively listed and narrowly defined situations. "Real-time" remote biometric identification can only be deployed if stringent safeguards are met, including limits on its use in time and geography and specific prior judicial or administrative authorization. Such uses may include, for example, the targeted search for a missing person or the prevention of a terrorist attack. The "deferred" use of remote biometric identification is considered a high-risk use case, requiring judicial authorization linked to a criminal offense.
High-risk systems
The proposal also addresses "high-risk systems": clear obligations are set out for other AI systems classified as high-risk because of their potential for significant harm to health, safety, fundamental rights, the environment, democracy, and the rule of law.
Examples of high-risk AI uses include critical infrastructures, education and vocational training, employment, essential public and private services (including healthcare and banking), certain law enforcement systems, migration and border management, justice, and democratic processes (e.g., influencing elections).
These systems must assess and mitigate risks, maintain usage records, be transparent and accurate, and ensure human oversight. Citizens will have the right to lodge complaints about AI systems and receive explanations about decisions based on high-risk AI systems affecting their rights.
Transparency requirements
General-purpose AI systems, as well as the AI models on which such systems are based, must meet certain transparency requirements, including compliance with EU copyright legislation and publishing detailed information about the training data used. The most powerful general-purpose AI models that may pose systemic risks will have to comply with additional requirements, such as conducting model assessments, evaluating and mitigating systemic risks, and reporting incidents.
Additionally, artificial or manipulated image, audio, or video content ("deepfakes") must be clearly labeled as such.
Support measures for innovation and SMEs
At the national level, regulatory sandboxes and real-world testing environments, accessible to SMEs and startups, will need to be established so that innovative AI can be developed and trained before being brought to market.
The regulation still needs to undergo a final review by legal linguists and is expected to be definitively adopted before the end of the legislative term (through the so-called corrigendum procedure). The legislation must also still be formally endorsed by the Council.