The global conversation about regulating Artificial Intelligence (AI) has been ongoing, and the European Union (EU) has been working on a law regulating AI for some time now.
In June 2023, the European Parliament approved a draft of the law, and in December 2023 negotiators from the Parliament and the Council reached a provisional agreement on the Artificial Intelligence Act (AI Act).
Earlier this week, on Wednesday, Members of the European Parliament (MEPs) endorsed the AI Act. The primary objective of this regulation is to safeguard fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI while promoting innovation. So, let's review the key points of the AI Act.
Back in June, Margrethe Vestager, the EU's Competition Commissioner, highlighted that discrimination is a significant concern posed by AI, even more so than the potential threat to humanity.
One of the main rules of the AI Act therefore prohibits certain AI applications that could threaten citizens' rights. This includes biometric categorization systems based on sensitive attributes like political beliefs, religion, sexual orientation, or race. The ban also extends to the untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases. Other prohibited practices include monitoring emotions in workplaces and schools, social scoring based on behavior or personal traits, AI systems that manipulate human behavior to circumvent free will, and the exploitation of vulnerabilities related to age, disability, or socio-economic status.
Police can still use real-time biometric identification (RBI) systems in public spaces for law enforcement, but only with prior judicial authorization and for specific crimes.
"Post-remote" RBI is strictly limited to targeted searches for individuals convicted or suspected of serious crimes. "Real-time" RBI, subject to stringent conditions, is restricted in time and location for targeted searches of victims (abduction, trafficking, sexual exploitation), preventing specific terrorist threats, or localizing persons suspected of specific crimes (terrorism, trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organization, environmental crime).
MEPs have introduced a mandatory fundamental rights impact assessment, applicable to high-risk AI systems in the insurance and banking sectors. This assessment considers potential harm to health, safety, fundamental rights, the environment, democracy, and the rule of law. The high-risk classification also applies to certain systems in law enforcement, education, employment, migration and border management, as well as to justice and democratic processes, including the risk of AI systems influencing elections and voter behavior. Citizens have the right to file complaints about AI systems and to receive explanations for decisions based on high-risk AI systems that affect their rights.
So it is confirmed that AI systems used in migration, asylum and border control management remain in the high-risk category, and the legislation adds that AI systems in these fields should never be used by Member States or Union institutions, bodies and offices as a means to circumvent their international obligations under the UN Convention relating to the Status of Refugees. Nor should they violate the principle of non-refoulement or obstruct safe and effective legal entry into the Union, including access to international protection.
Generative AI, which includes systems producing text, images, video, and audio from simple prompts, is covered by the provisions on general-purpose AI (GPAI). GPAI systems, and the models they are based on, must meet transparency requirements proposed by Parliament: creating technical documentation, complying with EU copyright law, and publishing detailed summaries of the content used for training (the Act is less clear, though, about already-trained models). Open-source models made available to the public (so models below the level of ChatGPT's GPT-4) are exempt from some of these transparency obligations.
For high-impact GPAI models posing systemic risks, Parliament negotiators have secured more stringent obligations. These models must undergo evaluations, assess and mitigate systemic risks, conduct adversarial testing, report serious incidents to the Commission, and report on energy efficiency.
Additionally, artificial or manipulated images, audio, or video content ("deepfakes") must be clearly labeled as such, even when generated for artistic, creative, or satirical work. This is an interesting point: it makes one wonder about publications that currently use AI images to illustrate their articles without explicitly telling readers they are AI-generated, and it also raises questions about deepfakes more broadly. The Act regulates deepfakes through transparency, stating in Article 50(4) that creators must disclose the artificial origin of deepfakes and the techniques used. In previous posts we looked at the creative use of deepfakes in a music video. Yet when deepfakes are used in other contexts, such as politics (think of elections), or to generate pornographic content involving minors or non-consensual pornography, the AI Act should perhaps go further, adding a clause on the harm they can cause and criminalising them to deter malicious use.
To ensure that businesses, particularly small and medium-sized enterprises (SMEs), can develop AI solutions without undue pressure from industry giants, the agreement calls for regulatory sandboxes and real-world testing, set up by national authorities, so that companies can develop and train innovative AI systems before placing them on the market.
Some companies may have concerns about the regulations: as discussed in a previous post last year, OpenAI's Sam Altman has taken an ambiguous stance, expressing both a desire for regulation and concern that AI companies could be overregulated.
Some also complain that the EU's computing-power threshold for training AI models is set lower than in similar proposals in the US. Under the AI Act, when the cumulative amount of computing power used for training, measured in FLOPs (floating-point operations), exceeds 10^25, an AI model faces strict requirements to prove it does not pose systemic risks (a requirement that may push some European companies to relocate to the US to avoid these restrictions).
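To get a feel for the scale of that threshold, one can estimate a training run's total compute using the widely cited rule of thumb of roughly 6 FLOPs per model parameter per training token (an approximation from the scaling-law literature, not something the Act itself specifies) and compare it to 10^25:

```python
# Rough check of whether a training run crosses the AI Act's 10^25 FLOP
# threshold, using the common ~6 * parameters * tokens estimate for
# transformer training compute (an approximation, not part of the Act).

AI_ACT_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Approximate total training compute in floating-point operations."""
    return 6 * n_parameters * n_tokens

def crosses_threshold(n_parameters: float, n_tokens: float) -> bool:
    """True if the estimated compute exceeds the AI Act threshold."""
    return estimated_training_flops(n_parameters, n_tokens) > AI_ACT_THRESHOLD_FLOPS

# Example: a 70-billion-parameter model trained on 2 trillion tokens
print(estimated_training_flops(70e9, 2e12))  # ~8.4e23 FLOPs
print(crosses_threshold(70e9, 2e12))         # False: an order of magnitude below 1e25
```

On this estimate, only runs well beyond today's typical large-model scale would trip the threshold, which is why the limit mainly concerns frontier-scale developers.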
Fines for non-compliance range from €7.5 million or 1% of a company’s worldwide turnover (whichever is higher) for providing incorrect information to regulators, to €15 million or 3% of worldwide turnover for breaching certain provisions of the act, to €35 million or 7% of turnover for deploying or developing banned AI tools. Smaller companies and startups will face more proportionate fines.
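The "whichever is higher" rule can be made concrete with a short sketch (the tier values are taken from the figures above; the function and dictionary names are ours):

```python
# Sketch of the AI Act's fine structure: the applicable fine is the higher
# of a fixed amount and a percentage of worldwide annual turnover.
# Tier values are from the article above; names here are illustrative.

def ai_act_fine(turnover_eur: float, fixed_eur: float, pct: float) -> float:
    """Return the applicable fine: whichever of the two amounts is higher."""
    return max(fixed_eur, pct * turnover_eur)

# (fixed amount in EUR, share of worldwide turnover)
TIERS = {
    "incorrect_information": (7_500_000, 0.01),
    "certain_provisions":    (15_000_000, 0.03),
    "banned_practices":      (35_000_000, 0.07),
}

fixed, pct = TIERS["banned_practices"]
print(ai_act_fine(2_000_000_000, fixed, pct))  # 7% of EUR 2bn, i.e. ~EUR 140 million
print(ai_act_fine(100_000_000, fixed, pct))    # 7% is only EUR 7m, so the EUR 35m floor applies
```

Note how the fixed amount acts as a floor for smaller firms while the percentage dominates for large ones, which is also why the Act provides for more proportionate caps for SMEs and startups.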
During the plenary debate, Internal Market Committee co-rapporteur Brando Benifei stated that the AI Act, the world's first binding law on artificial intelligence, aims "to reduce risks, create opportunities, combat discrimination, and bring transparency." Benifei also emphasized the importance of protecting the rights of workers and citizens.
Civil Liberties Committee co-rapporteur Dragos Tudorache (Renew, Romania) warned that the AI Act is just the beginning. He highlighted the need to address broader societal implications of AI, such as its impact on democracy, education, labor markets, and warfare.
The regulation is undergoing final lawyer-linguist checks and is expected to be adopted before the end of the legislative session. It will enter into force twenty days after publication in the Official Journal and become fully applicable 24 months later. However, bans on prohibited practices will apply six months after entry into force, codes of practice nine months after, general-purpose AI rules including governance twelve months after, and obligations for providers and deployers of high-risk systems thirty-six months after.
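The staggered timeline above is just date arithmetic from the entry-into-force date. A quick sketch (the entry-into-force date below is a placeholder assumption, since publication in the Official Journal had not yet happened at the time of writing):

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day clamped to 28 to avoid overflow)."""
    years, month_index = divmod(d.month - 1 + months, 12)
    return date(d.year + years, month_index + 1, min(d.day, 28))

# Placeholder assumption: actual entry into force is 20 days after
# publication in the Official Journal, which is not yet known.
entry_into_force = date(2024, 8, 1)

milestones = {
    "bans on prohibited practices": add_months(entry_into_force, 6),
    "codes of practice": add_months(entry_into_force, 9),
    "general-purpose AI rules (incl. governance)": add_months(entry_into_force, 12),
    "fully applicable": add_months(entry_into_force, 24),
    "high-risk provider/deployer obligations": add_months(entry_into_force, 36),
}

for milestone, when in milestones.items():
    print(f"{milestone}: {when.isoformat()}")
```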
In the meantime, the newly launched European AI Office will play a crucial role in implementing the AI Act, especially for general-purpose AI. Working with the European Artificial Intelligence Board, formed by Member State representatives, and the European Centre for Algorithmic Transparency (ECAT), it will ensure consistency among EU Member States, set up advisory bodies, facilitate information exchange, evaluate AI models, develop codes of practice, investigate violations, and provide guidance and tools for compliance.
But just as with fast fashion laws, regulation should be complemented by education: while we wait for the rules to be finalised and become law, we should learn more about AI systems and develop critical thinking skills, questioning an image, a video, or the content of an article before automatically accepting it as real and genuine, while also acknowledging the potential harms AI can cause. Opening debates, especially in workplaces and educational institutions, can help prevent the misuse of AI, ensuring it isn't used in ways that harass or harm others.