General opinion on Artificial Intelligence (AI) remains ambivalent: progressing faster than expected and by now able to generate images, videos, text, voices and music, AI is proving an asset in healthcare.
At the same time it is a cause for concern: artists worry about copyright issues and plagiarism; teachers and lecturers fear its impact on students' learning; journalists dread the fake content that may be generated with AI; and policy makers and governments fear the catastrophic scenarios it may open up, from spreading misinformation to political destabilization and playing a key role in elections (in a way that has already happened: remember the Cambridge Analytica investigation, which revealed that the personal data of millions of Facebook users were harvested and used to create political and psychological profiles and to manufacture targeted ads, influencing users' behavior?).
This week, the Center for AI Safety issued a statement affirming that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war". The statement was signed by hundreds of executives and academics from different fields, including the chief executives of Google's DeepMind, ChatGPT developer OpenAI and AI startup Anthropic.
In the last few months there have been frequent calls by industry experts to regulate AI: in April, a letter posted on the website of the non-profit organisation Future of Life Institute, co-signed by Elon Musk, Apple co-founder Steve Wozniak and former presidential candidate Andrew Yang among others, demanded a six-month pause in Artificial Intelligence research (probably to give some of the signatories, Musk included, time to develop their own AI systems…).
At the moment, the calls for regulation of this technology highlight the fact that AI poses a risk to society: in May, Geoffrey Hinton, "the godfather of AI", resigned from Google due to concerns about AI's "existential risk". According to these warnings, Artificial Intelligence indeed has the potential to exert a substantial impact on job markets, jeopardize the well-being of millions of people, spread disinformation and increase discrimination (all things that we, as human beings, excel at doing...).
The European Union (EU) is working on legislation that could become the first set of regulations governing AI. The proposed legislation may require generative AI companies to disclose the copyrighted material used to train the systems with which they generate text and images.
Yet the announcement wasn't welcomed by all Artificial Intelligence experts. On 24th May, Sam Altman, CEO of OpenAI, the maker of ChatGPT, announced that the company might consider leaving Europe if it could not comply with the upcoming Artificial Intelligence regulations by the European Union, which Altman - who is actually also one of the signatories of the "Mitigating the risk of extinction" statement - criticized as "over-regulating". On 26th May, causing more confusion (he seems to want to mitigate the risk of extinction without complying with any norms), Altman backtracked and stated on Twitter, "We are excited to continue to operate here and, of course, have no plans to leave."
Actually, it's not only the experts behind AI who seem to be generating some degree of confusion about it: the general consensus advises against the extensive use of AI, citing the associated risks. Yet, paradoxically, numerous companies are actively embracing these transformative systems.
In January, for example, Getty Images filed a lawsuit in the United States against Stability AI, creator of the open-source AI art generator Stable Diffusion. The stock photography giant accused Stability AI of engaging in a "brazen infringement of Getty Images' intellectual property on a staggering scale." Allegedly, Stability AI illicitly copied over 12 million images from Getty Images' extensive database, "without permission ... or compensation ... as part of its efforts to build a competing business". This was deemed a violation of both the copyright and the trademark protections that Getty Images fiercely safeguards. Then, in March, Getty Images announced it was collaborating with global tech company NVIDIA to develop two generative AI models, using NVIDIA Picasso to customize text-to-image and text-to-video foundation models and to develop visuals using fully licensed visual content.
There are also companies using AI as a means to restructure their organizations and cut human-related costs. At the beginning of May, film and TV writers in the US began nationwide industrial action, asking for structural changes to studio operations and pay increases.
One of their pressing concerns also revolved around the potential misuse of Artificial Intelligence. Worried about companies using AI to replace human creativity, the Writers Guild of America (WGA) called for regulations on AI usage, aiming to prevent its use in writing or rewriting literary material or as source material, and urged a prohibition on using writers' work to train AI models. However, the Alliance of Motion Picture and Television Producers (AMPTP) rejected these proposals, opting not to make immediate commitments on the matter. This stance has raised concerns, as it sounds like a veiled attempt to reduce the number of human writers as quickly as possible.
But there are other examples of companies using AI to cut costs: at the beginning of May, IBM CEO Arvind Krishna told Bloomberg News that the company would pause hiring for certain roles, as roughly 7,800 jobs could be replaced by AI in the coming years. Krishna estimated that 30 percent of non-customer-facing roles could be replaced by AI and automation within five years.
In May, BuzzFeed shut down its news division, dismissed 15 percent of its workforce and announced it would be "leaning into" AI and embracing more "AI inspired content". The explanation behind this choice? Reducing budgets and cutting down on other expenses. BuzzFeed will also expand its Creator program, which brings influencers and advertisers together with the company brand.
Translators, too, have become casualties of the cost-cutting frenzy: even before this year's great advent of AI, there were already companies that, to speed up processes, provided their translators with automatic translations that could be incorporated into their texts if deemed correct, or dismissed or edited otherwise (in those cases, though, the translation was still paid in full; the automatic translation was just used as a draft).
Besides, there have always been dedicated translation programs, employed by professional translators (and allowed by most agencies and clients), that speed up the work by providing automatic translations of repeated terminology (the most famous is Trados). Yet many agencies have started using AI without admitting it, which is unfair not only towards translators but towards clients as well, who may not want their documents to be divulged.
Nowadays, rather than translating or proofreading jobs, you get offered "post-editing of Machine Translation" jobs, claimed to be "more time consuming than proofreading but not as long as translation" (as a project manager at the London-based branch of Language Line Solutions explained to me) and therefore paid less than a translation but more than a proofreading session.
Yet not all attempts at replacing humans with AI have been successful. A recent example is the US-based National Eating Disorders Association (NEDA), which, a few months ago, terminated its entire staff and replaced them with an AI-assisted chatbot named Tessa. The latter was allegedly created with the support of psychology researchers and Cass AI, a company that develops AI chatbots focused on mental health. The six paid employees, responsible for supervising approximately 200 volunteers, were let go a few days after they successfully formed a union.
Now, while it may be easy to replace a human being with a chatbot to deal with very basic (and boring) customer service issues, it is utterly demented to think that a chatbot can deliver tailored advice for an eating disorder helpline. Indeed, Tessa started providing harmful information, including advice on how to lose weight and limit calories. Eventually, Tessa met the same fate as her human colleagues and was taken down. After all, the support and information that a human being - one who may have gone through the same issues the callers are going through - can provide is simply invaluable.
Ellen Fitzsimmons-Craft, a psychologist at Washington University in St. Louis who played a role in the development of the chatbot, explained that "Tessa" was conceived as a means to increase the accessibility of eating disorder prevention. In a post on NEDA's website (which has since been removed), Fitzsimmons-Craft expressed her thoughts on the chatbot, admitting that the decision to opt for the bot was dictated by costs. "Programs that require human time and resources to implement them are difficult to scale, particularly in our current environment in the US where there is limited investment in prevention," she explained.
It therefore becomes clear that, at the moment, AI is not directly causing extinction; rather, human beings are causing extinction through AI, by replacing workers with AI tools.
Any kind of progress implies the loss of certain jobs and the creation of others, and it is inevitable that AI will do the same; but this doesn't mean that we have to eliminate jobs before we understand which ones will be lost, or in which ways certain systems can be implemented.
We are indeed at a confusing stage in which new professions linked to Artificial Intelligence are still being created, while AI systems are being used to cut positions and costs, rather than to understand how AI can be integrated into certain industries and how certain tasks (data entry and analysis, or administrative operations, for example) can be optimized through it.
Let's face it, not all tasks can be automated: yes, you can translate a basic message or letter into any language with ChatGPT; but you can't ask it to translate a film's subtitles, because it will never be able to grasp the human nuances in the target language and, not being able to actually see the film, it will not provide credible translations of its dialogue.
In the same way, a chatbot can speed up certain procedures in fields like e-shopping, but it can't be used to provide psychological support for people in urgent need. In healthcare, AI can analyse millions of scans in a relatively short time, but it will still need a human to come up with a diagnosis (mind you, in treating women's pain, usually dismissed by male doctors as caused by mysterious psychological/psychosomatic ailments, AI could actually prove less biased…). In a nutshell, AI still lacks human skills and the human touch, but it possesses the capacity to revolutionize our existence for the better.
AI can be a true marvel if used responsibly: it may not be advisable to use ChatGPT to write an essay, but the system can provide writers, researchers and translators with a vast range of synonyms and antonyms, acting like an instant thesaurus, helping them to avoid repetition or to summarise longer documents and reduce them to bullet points.
At the moment it is therefore of the utmost importance to understand how we can harness the extraordinary capabilities of AI, use them for our collective benefit and identify which areas can be regulated with dedicated norms and laws. Another crucial aspect to address is the establishment of comprehensive training programs at various levels, aimed at equipping individuals with the skills required to thrive in an economy driven by AI.
At this stage it is hard to say whether civilization will end because of AI or because of human beings: after all, between wars, violence, murders, pollution and pandemics, it becomes apparent that we, as stewards of humanity, fall short of being exemplary preservers of our own species.
The key, as also suggested in a previous post about AI and the labour market, lies in attaining a harmonious equilibrium between humans and machines. The secret is comprehending how we can harness this technology to advance our own interests, generating progress instead of succumbing to panic, while monitoring those in positions of authority to make sure they exploit AI for optimization and not for the annihilation of jobs.