Brussels – August 1 is a day that will go down in the history of the European Union. “Today, the Artificial Intelligence Act comes into force. Europe’s pioneering framework for innovative and safe AI,” European Commission president Ursula von der Leyen announced in a post on X, emphasizing the importance of the news: “It will drive AI development that Europeans can trust. And provide support to European SMEs and startups to bring cutting-edge AI solutions to market.”
Social Democrats in the European Parliament are also jubilant, thanking the co-rapporteur for the EU AI Act and former Democratic Party delegation leader, Brando Benifei, for the work done in the last legislature: “Unacceptable practices of artificial intelligence will be banned in Europe and the rights of workers and citizens will be protected,” including through “human control and its responsible use.” As Benifei confirmed in an interview with Eunews after the near-unanimous approval of the text by MEPs in the plenary session on March 13, further legislative work on even more specific aspects, such as its use in the workplace, may be taken up in the 10th legislature of the EU Parliament. “We Europeans have always advocated an approach that puts people and their rights at the center of everything we do,” von der Leyen said in launching the new EU Regulation today. “We create new guardrails, not only to protect people and their interests but also to give businesses and innovators clear rules and certainty.”
At this point, the focus is on the dates: The new legislation will be fully applicable in 24 months, meaning that by August 2, 2026, all member countries will have to comply with its provisions, with some exceptions. Bans on prohibited practices will apply after six months, codes of practice after nine, and rules on general-purpose AI (including governance) after 12. Obligations for high-risk systems will be delayed further, applying after 36 months. In the meantime, given the timing of the Regulation’s entry into force and the potential impact of technologies that will continue to develop, the EU AI Pact was launched in mid-November 2023 so that companies can voluntarily anticipate the AI Act’s requirements and ease the transition to the new rules.
The risk scale of artificial intelligence
The EU Regulation provides a horizontal level of user protection, with a four-level risk scale to regulate artificial intelligence applications: minimal, limited, high, and unacceptable. There will be very light transparency requirements for systems with limited risk, like disclosing that content is AI-generated. For those with high risk, there will be a pre-market fundamental rights impact assessment, including a requirement to register with an ad-hoc EU database and requirements on the data and technical documentation to be submitted to demonstrate product compliance.
The Regulation places at the unacceptable level – and therefore bans – cognitive behavior manipulation systems, untargeted scraping of facial images from the Internet or CCTV footage to create facial recognition databases, emotion recognition in the workplace and educational institutions, ‘social scoring’ by governments, biometric categorization to infer sensitive data (political, religious, or philosophical beliefs, sexual orientation), and some instances of predictive policing for individuals.
Exceptions, governance and foundation models
The Regulation includes an emergency procedure that will allow law enforcement to deploy a high-risk artificial intelligence tool that has not passed the evaluation procedure, subject to a specific mechanism safeguarding fundamental rights. There are also exemptions for the use of real-time remote biometric identification systems in publicly accessible spaces, subject to judicial authorization and for strictly defined lists of offenses. ‘Post-remote’ use is permitted only for the targeted search of a person convicted of or suspected of committing a serious crime. Real-time use, “limited in time and place,” is allowed for targeted searches of victims (kidnapping, trafficking, sexual exploitation), prevention of a “specific and current” terrorist threat, and for locating or identifying a person suspected of committing specific crimes (terrorism, human trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organization, environmental crimes).
Among the new arrangements already in place is the ad-hoc AI Office within the European Commission, which supervises general-purpose artificial intelligence systems integrated into other high-risk systems, flanked by an advisory forum for stakeholders (representatives from industry, SMEs, start-ups, civil society, and academia). To account for the wide range of tasks that artificial intelligence systems can perform – generation of video, text, and images, lateral-language conversation, computation, or computer code generation – and the rapid expansion of their capabilities, ‘high-impact’ foundation models (a type of generative artificial intelligence trained on a broad spectrum of generalized, label-free data) will have to comply with several transparency requirements before being placed on the market, from drafting technical documentation to complying with EU copyright law to publishing detailed summaries of the content used for training.
Innovation and Sanctions
To support innovation, regulatory sandboxes for artificial intelligence (controlled test environments) will allow innovative systems to be developed, tested, and validated, including under real-world conditions. To ease the administrative burden on smaller companies and protect them from the pressures of dominant market players, the Regulation provides “limited and clearly specified” support actions and exemptions.
Finally, regarding sanctions, any natural or legal person may file a complaint with the relevant market surveillance authority regarding non-compliance with the EU Artificial Intelligence Act. In the event of a violation of the Regulation, the company will have to pay either a percentage of its annual global turnover in the previous financial year or a predetermined amount, whichever is higher: 35 million euros or 7 percent for violations of prohibited applications, 15 million euros or 3 percent for breaches of the law’s obligations, and 7.5 million euros or 1.5 percent for providing incorrect information. More proportionate ceilings will apply to small and medium-sized enterprises and start-ups.
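The “whichever is higher” rule above can be illustrated with a short sketch. The function name and tier labels below are hypothetical, chosen only for this example; the figures are the ceilings stated in the Regulation.

```python
def ai_act_fine_ceiling(turnover_eur: float, violation: str) -> float:
    """Return the maximum fine ceiling under the AI Act for a given
    global annual turnover and violation category (illustrative only)."""
    # (fixed amount in euros, share of global annual turnover)
    tiers = {
        "prohibited_practice": (35_000_000, 0.07),
        "other_obligation": (15_000_000, 0.03),
        "incorrect_information": (7_500_000, 0.015),
    }
    fixed, pct = tiers[violation]
    # The Regulation applies whichever of the two amounts is higher.
    return max(fixed, turnover_eur * pct)
```

For a large company with 1 billion euros in turnover, 7 percent (70 million euros) exceeds the 35-million-euro floor, so the percentage governs; for a firm with 100 million euros in turnover, the fixed 15-million-euro ceiling for ordinary breaches is the higher figure.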
English version by the Translation Service of Withub