Artificial Intelligence (AI) continues to dominate the headlines: Paul McCartney recently told BBC Radio 4's Today program that he has been working on the "last Beatles record" with the help of AI, using machine learning to isolate John Lennon's voice from old cassette demos and improve its quality. Meanwhile, in Germany, around 300 people attended a Lutheran church service generated almost entirely by AI, in which ChatGPT-powered avatars led prayers, music, sermons, and blessings (it is worth mentioning that the congregation didn't warm to their cold and distant delivery...).
Yet there is also more technical news about AI. The European Union (EU) has taken the lead on regulation by crafting the AI Act, a pioneering piece of legislation that will govern Artificial Intelligence. As one of the world's first comprehensive laws in this domain, the AI Act aspires to set a global standard for AI technology. It covers a wide range of applications, from automated medical diagnoses and certain types of drones to AI-generated videos commonly known as deepfakes and AI-powered bots like ChatGPT. With these rules, the EU seeks to strike a delicate balance between fostering innovation and guarding against the ethical and societal risks of AI.
Governing AI is not easy, first and foremost because the technology is developing rapidly, and second because it is already in the hands of ordinary people worldwide. Its use is most visible in the creative industries, but it is also being applied in other sectors, including healthcare, even though no formal regulations restricting AI are yet in place.
Despite the numerous social, ethical, and economic concerns surrounding AI, pressing the pause button, as a letter published by the Future of Life Institute suggested a few months ago, may not be the most viable solution. Many websites and companies have already integrated AI tools into their systems, making it crucial to act swiftly and assume responsibility. The EU has taken a proactive approach in this regard.
The European Parliament has now approved draft legislation that includes a ban on police use of live facial recognition technology in public spaces. Center-right MEPs from the European People's Party (EPP) had initially planned to oppose a complete ban on real-time facial recognition on European streets, but the protest did not materialize, partly because several politicians were absent, attending the funeral of former Italian Prime Minister Silvio Berlusconi. The final vote was 499 in favor, 28 against, and 93 abstentions.
MEPs will now enter detailed negotiations with EU member states before the AI Act is finalized. The longest debate will probably revolve around biometric data, with facial recognition among the most contentious issues. The EPP argues that the technology is crucial for combating crime, strengthening counter-terrorism intelligence, and helping find missing children. However, its arguments tend to prioritize security concerns without adequately addressing the need for limits that protect individuals from the risks of mass surveillance, for instance in workplaces or schools, and from discrimination. After all, even with AI legislation in place, authorities would still be able to use biometric data, including CCTV footage, to pursue criminals, as they do today.
On copyright, the upcoming legislation will impose obligations on developers of AI systems, who will be required to disclose and publish the works, including those of scientists, musicians, illustrators, photographers, and journalists, used for training purposes. They will also need to demonstrate that their training methods complied with existing copyright laws. Failure to meet these requirements could have severe consequences, from being ordered to withdraw their applications immediately to fines of up to 7% of revenue.
What's noteworthy is that the EU is spearheading these efforts and setting the global agenda on AI regulation. The United States, by comparison, appears to be lagging behind: there is a prevailing sense that American lawmakers are uncertain about how to proceed, even as they heed warnings from tech experts about impending risks.
The discussion in the US has largely revolved around the notion of AI gaining sentience, going rogue and turning against humanity, evoking a scenario reminiscent of science fiction movies. Yet, this focus on a hypothetical apocalypse overlooks the present impact of AI on various critical issues, such as surveillance, discrimination, and job displacement.
For example, AI algorithms wield significant influence over social media feeds, but they also cause discrimination in housing and mortgage lending systems. AI-powered surveillance technologies have disproportionately targeted and, in some cases, misidentified individuals from Black and brown communities. Last but not least, numerous companies have already integrated AI tools into their products or organizations to cut labor costs. At the beginning of May, film and TV writers in the US began nationwide industrial action, demanding structural changes in studio operations and pay increases. The Writers Guild of America (WGA) called for regulations on AI usage, aiming to prevent its use to write or rewrite literary material or to serve as source material, and urged a prohibition on using writers' work to train AI models. The Alliance of Motion Picture and Television Producers (AMPTP), however, rejected these proposals, declining to make immediate commitments on the matter. This stance has raised concerns, as it sounds like a veiled attempt to reduce the number of human writers as quickly as possible.
There is currently a notable disparity between the approaches of the EU and the US to AI legislation. While the EU is actively pursuing legislative measures, the US government has announced initiatives aimed at harnessing the potential of AI: $140 million has been invested in AI research institutes, a blueprint for an AI bill of rights has been released, and the government is soliciting public input on how best to regulate the use of AI. Despite these efforts, however, specific regulatory guidelines have yet to materialize.
While discussions about the risks of AI and maintaining American leadership in the field persist in the US, the country is falling behind on legislative and regulatory action, or else turning for guidance to the very tech companies that are calling for regulation. This approach is not entirely unprecedented: federal and local US governments have previously collaborated with major tech companies like Meta and Twitter to shape regulations. In 2020, Washington state passed the country's first bill regulating facial recognition. That legislation, authored by a state senator who was also a Microsoft employee, was criticized by civil rights groups for lacking crucial safeguards. Consequently, tech companies often end up with rules that afford them considerable leeway to create self-regulatory mechanisms aligned with their business interests.
Maybe that’s what Sam Altman, CEO of OpenAI, the maker of ChatGPT, expects from the EU as well. Last month, Altman urged members of the US Congress to regulate AI, telling a committee hearing: "I think if this technology goes wrong, it can go quite wrong." Yet in mid-May he suggested that OpenAI might consider exiting Europe if it couldn't comply with the forthcoming EU regulations, which he criticized as excessive, only to backtrack at the end of May, causing further confusion. In essence, Altman appears to advocate for regulation, as long as it aligns with his preferences.
Overall, the contrasting approaches of the EU and the US in AI regulation reflect their respective priorities, with the EU taking a more proactive stance and also highlighting issues such as discrimination.
In a recent BBC interview, Margrethe Vestager, the EU's Competition Commissioner, said that discrimination is the bigger threat posed by AI, and a more pressing concern than the possible extinction of the human race. She emphasized the need for "guardrails", particularly where AI could affect people's livelihoods, such as mortgage applications or access to social services. If a bank uses AI to decide someone's eligibility for a mortgage, or if a municipality's social services rely on AI, it is crucial to ensure there is no discrimination based on gender, race, or residential area.
Writing legislation on AI is no easy task: it takes months of discussions, debates, reports, and drafts. Even if the EU achieves its ambitious goal of agreeing on the law by the end of this year, implementation is not expected until at least 2026, which means the EU will have to seek an interim, voluntary agreement with tech companies. There is another challenge ahead: by the time the law takes effect, some of its norms may already be outdated, and AI, given its rapid advancement, may have caused significant harm by then. The legislation must therefore be adaptable enough to keep pace with the technology, regulate practices such as data scraping, and take into account the impact AI may have on the labour market.
Indeed, a recent report by the McKinsey Global Institute projects that AI could contribute up to $4.4 trillion in value to the global economy each year. The 68-page report also estimates that around half of today's work activities could be automated between 2030 and 2060, and highlights the potential of generative AI to reshape the anatomy of work by augmenting individual workers through the automation of certain tasks and activities.
The debate over AI legislation will therefore continue throughout the coming year all over the world, not just in the European Parliament (in the meantime, you can stay up to date by subscribing to the EU's newsletter on the AI Act).
Meanwhile, countries outside the EU should start considering their own regulatory measures promptly. In the United Kingdom, Prime Minister Rishi Sunak has shown enthusiasm for the transformative potential of AI in public services, from reducing teachers' lesson-planning workload to enabling quicker and more accurate diagnoses for NHS patients. However, his enthusiasm may also be read as a push for cost-saving measures that could lead to job cuts, and such systems require proper training and a careful weighing of pros and cons before implementation. Indeed, after Brexit, the UK risks falling behind if it simply rides the wave of excitement without thorough assessment and appropriate legislation.