The global discourse on regulating Artificial Intelligence (AI) has been ongoing, with the European Union (EU) taking a notably proactive stance in contrast to the United States. For several years, the EU has been developing the AI Act, an innovative legislative framework designed to govern AI.
Considered one of the first comprehensive laws in this field, the AI Act strives to set a universal benchmark for AI technology. Encompassing a broad spectrum of applications, it addresses automated medical diagnoses, specific drone functionalities, the proliferation of AI-generated deepfake videos, and AI-powered chatbots like ChatGPT.
Last week, after hours of debate, it was announced that European Parliament and Council negotiators had reached a provisional agreement on the Artificial Intelligence Act.
In June, Margrethe Vestager, the EU's Competition Commissioner, stated that discrimination driven by Artificial Intelligence is a bigger threat, and a more pressing concern, than the possible extinction of the human race. Notably, the European Union has framed the AI discourse around human rights, freedoms and discrimination.
The primary objective of this regulation is indeed to safeguard fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI while promoting innovation. The rules establish obligations for AI based on its potential risks and level of impact, extending their reach to tech giants such as X, TikTok, and Google.
The key issue leading to conflicts and negotiations centered around AI-driven surveillance. Indeed, while such systems can offer assistance, they also pose potential threats to citizens' rights and democracy. The focal point involved prohibiting the use of AI technology to predict or pre-determine individuals likely to commit a crime, without impeding the role and operations of authorities.
Addressing this concern, the European Parliament implemented a ban on the utilization of biometric categorization systems incorporating sensitive attributes (such as political, religious, philosophical beliefs, sexual orientation and race). Additionally, the ban covers untargeted scraping of facial images from the internet or CCTV footage for the creation of facial recognition databases. Other banned practices include emotion recognition in workplaces and educational institutions, social scoring based on social behavior or personal characteristics, AI systems manipulating human behavior to override free will, and the exploitation of vulnerabilities in individuals (related to age, disability, social or economic status).
Police will retain the ability to employ remote biometric identification (RBI) systems in publicly accessible spaces for law enforcement, subject to prior judicial authorization and limited to specific lists of crimes.
"Post-remote" RBI is strictly limited to targeted searches for individuals convicted or suspected of serious crimes. "Real-time" RBI, subject to stringent conditions, is restricted in time and location for targeted searches of victims (abduction, trafficking, sexual exploitation), preventing specific terrorist threats, or localizing persons suspected of specific crimes (terrorism, trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organization, environmental crime).
Members of the European Parliament (MEPs) incorporated a mandatory fundamental rights impact assessment, applicable to high-risk AI systems in the insurance and banking sectors, considering their potential harm to health, safety, fundamental rights, environment, democracy, and the rule of law. High-risk classification also extends to AI systems influencing elections and voter behavior. Citizens retain the right to file complaints regarding AI systems and receive explanations for decisions impacted by high-risk AI systems affecting their rights.
Agreement was reached on the regulation of general-purpose AI (GPAI) systems, aligning with the transparency requirements initially proposed by Parliament. This involves the creation of technical documentation, adherence to EU copyright law, and the dissemination of detailed training content summaries.
For high-impact GPAI models posing systemic risks, Parliament negotiators secured more stringent obligations. These models meeting specific criteria must undergo evaluations, assess and mitigate systemic risks, conduct adversarial testing, report serious incidents to the Commission, ensure cybersecurity, and report on energy efficiency. Until harmonized EU standards are established, GPAIs with systemic risk can rely on codes of practice for compliance.
To ensure businesses, particularly SMEs, can develop AI solutions without undue pressure from industry giants, the agreement advocates for regulatory sandboxes and real-world testing, established by national authorities, allowing companies to develop and train AI before market placement.
Creative minds may be interested in decisions around AI and copyright. European Commissioner Thierry Breton has emphasized that the EU's copyright rules are effective and well suited to the AI age, and therefore do not require revision.
The EU's existing copyright framework is comprehensive, particularly regarding data's crucial role in training AI models: developers may copy and analyze publicly available data from the internet unless rights holders object. Publishers are already effectively using this opt-out right to negotiate agreements with AI developers. Some media entities in France, for example, have been working toward such agreements: Radio France, TF1, Les Échos, and France Médias Monde have blocked OpenAI from using their content, exercising their rights under the EU Copyright Directive to negotiate fair remuneration.
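The article does not specify how these outlets blocked OpenAI. In practice, a common technical mechanism for signalling such an opt-out is disallowing OpenAI's GPTBot crawler in a site's robots.txt file; the following is only an illustrative sketch of that approach, not a description of what these publishers actually deployed:

```
# robots.txt — placed at the site root.
# Block OpenAI's GPTBot crawler from the whole site,
# signalling an opt-out from content scraping.
User-agent: GPTBot
Disallow: /

# All other crawlers remain unaffected.
User-agent: *
Allow: /
```

Compliance with robots.txt is voluntary for the crawler, which is one reason publishers pair such technical measures with the legal opt-out rights described above.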
Lead MEPs Brando Benifei (S&D, Italy) and Dragos Tudorache (Renew, Romania), along with Secretary of State for Digitalisation and Artificial Intelligence Carme Artigas who facilitated the negotiations, and Commissioner Thierry Breton, held a joint press conference following the negotiations. Co-rapporteur Tudorache, who has led the European Parliament's four-year battle to regulate AI, highlighted the EU's pioneering role, emphasizing that the AI Act sets rules providing strong safeguards against technology abuses by public authorities, but also protecting vulnerable sectors of the economy like SMEs, empowering innovation.
The agreed-upon text is poised for formal adoption by both the Parliament and Council to become EU law. Committees within Parliament, specifically the Internal Market and Civil Liberties, are scheduled to vote on the agreement in an upcoming meeting.
Other countries will now have to take these rules into account, and AI companies are expected to extend EU obligations to markets beyond the continent, since complying with a single set of EU rules is more efficient than training separate models for different markets.
This agreement positions the EU ahead of the US, China, and the UK in regulating Artificial Intelligence. It is also part of the EU's strategy to avoid past mistakes, when tech giants such as Facebook grew into multi-billion dollar corporations without content regulation obligations covering issues such as election interference, child sex abuse material, and the propagation of hate speech. The EU legislation on AI is not expected to take effect until at least 2025.