European Union member states have given a final go-ahead for the regulation of artificial intelligence (AI) as institutions around the world race to introduce curbs for the technology.
The AI Act has been approved by the EU Council, marking a significant milestone in the establishment of comprehensive regulations for artificial intelligence technology.
“The adoption of the AI act is a significant milestone for the European Union,” Mathieu Michel, Belgium’s secretary of state for digitization, said in a Tuesday statement.
“With the AI act, Europe emphasizes the importance of trust, transparency, and accountability when dealing with new technologies while at the same time ensuring this fast-changing technology can flourish and boost European innovation,” Michel added.
The AI Act adopts a risk-oriented strategy towards artificial intelligence, implying that various uses of the technology are handled differently based on the potential risks they present to society.
The law prohibits applications of AI whose risk level is deemed “unacceptable.” These include so-called “social scoring” systems that rank citizens by aggregating and analyzing their data, predictive policing, and emotion recognition in the workplace and schools.
AI used in autonomous vehicles and medical devices falls into the high-risk category, assessed according to the potential risks it poses to the health, safety, and fundamental rights of individuals. The high-risk category also covers AI applications in financial services and education, where biased algorithms carry significant risk.
US Big Tech Firms In The Spotlight
Matthew Holman, a partner at law firm Cripps, said the rules will have major implications for any person or entity developing, creating, using, or reselling AI in the EU — with U.S. tech firms firmly in the spotlight.
“The EU AI Act is unlike any law anywhere else on earth,” Holman said. “It creates for the first time a detailed regulatory regime for AI.”
“U.S. tech giants have been watching this developing law closely,” Holman added. “There has been a lot of funding into public-facing generative AI systems which will need to ensure compliance with the new law that is, in some places, quite onerous.”
The EU Commission will have the power to fine companies that breach the AI Act as much as 35 million euros ($38 million) or 7% of their annual global revenues — whichever is higher.
The change in EU law comes after OpenAI’s November 2022 launch of ChatGPT. Officials realized at the time that existing legislation lacked the detail needed to address the advanced capabilities of emerging generative AI technology and the risks around the use of copyrighted material.
A Long Road To Implementation
The EU has established tough regulations for generative AI systems, commonly known as “general-purpose” AI. These regulations encompass various obligations such as compliance with EU copyright law, transparent disclosures regarding the training of models, regular testing, and robust cybersecurity measures.
However, these regulations won’t take effect immediately, according to Dessi Savova, a partner at Clifford Chance: the restrictions on general-purpose systems won’t apply until 12 months after the AI Act enters into force.
Even then, commercially available generative AI systems such as OpenAI’s ChatGPT, Google’s Gemini, and Microsoft’s Copilot are granted a “transition period” of 36 months from that date to bring their technology into compliance with the legislation.
“Agreement has been reached on the AI Act — and that rulebook is about to become a reality,” Savova told CNBC via email. “Now, attention must turn to the effective implementation and enforcement of the AI Act.”
After being signed by the presidents of the European Parliament and the Council, the legislative act will be published in the EU’s Official Journal in the coming days and enter into force twenty days after this publication. The new regulation will apply two years after it enters into force, with some exceptions for specific provisions.
Background
The AI Act is a key element of the EU’s policy to foster the development and uptake across the single market of safe and lawful AI that respects fundamental rights. The Commission (Thierry Breton, commissioner for internal market) submitted the proposal for the AI Act in April 2021. Brando Benifei (S&D / IT) and Dragoş Tudorache (Renew Europe / RO) were the European Parliament’s rapporteurs on this file and a provisional agreement between the co-legislators was reached on 8 December 2023.