What did we learn in the last few days? That Artificial Intelligence (AI) is definitely progressing faster than human intelligence. Last week, the by now infamous "Pope in a puffer jacket" images, which should have remained a hilarious, mushroom-induced and circumscribed moment of madness, suddenly turned into a pivotal fashion topic, with people speculating about the Pope wearing Balenciaga.
After people realized the Pope does not wear Balenciaga and that the images had been produced by the AI text-to-image generator Midjourney, this huge case of mass disinformation became an endless debate about AI being a major threat to humanity. After all, AI can by now generate convincing images, videos, texts and even voices, so faking somebody - from a politician to a celebrity - saying something dangerous and inappropriate is extremely easy.
As a consequence, there has recently been a high level of paranoia surrounding Artificial Intelligence: a letter posted on the website of the non-profit organisation Future of Life Institute, co-signed by Elon Musk, Apple co-founder Steve Wozniak and former presidential candidate Andrew Yang among others, called for a six-month pause in AI research.
"We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4," read the letter. The reason of this call was stated at the beginning of the letter: "AI systems with human-competitive intelligence can pose profound risks to society and humanity."
Now, there are actually several issues with the letter: first of all, the Future of Life Institute (FLI) is primarily funded by the Musk Foundation, so this is not a letter written by an independent organization.
Besides, the definition "more powerful than GPT-4" is rather generic. What does it mean "more powerful"? In which ways? We're not sure, but it sounds like Trump's "super duper missile". The other point is that signatories included Xi Jinping and Meta's chief AI scientist Yann LeCun, who actually claimed on Twitter he didn't support the contents of the letter. So, it looks like while AI is generating fake images of the Pope, FLI is generating fake signatories.
But more steps were taken against Artificial Intelligence yesterday: OpenAI took ChatGPT offline in Italy after the Garante per la protezione dei dati personali (Italy's Data Protection Authority) issued an immediate temporary ban over a suspected breach of privacy rules.
If you're in Italy and try to access the site, you will get the following message on your screen: "We regret to inform you that we have disabled ChatGPT for users in Italy at the request of the Italian Garante. We are issuing refunds to all users in Italy who purchased a ChatGPT Plus subscription in March. We are also temporarily pausing subscription renewals in Italy so that users won't be charged while ChatGPT is suspended."
The Garante accused Microsoft-backed OpenAI of a data breach. According to the Garante, ChatGPT doesn't provide users with an information notice about the data collected by OpenAI, and there is no legal basis justifying the massive collection and processing of personal data used to train the algorithms on which the platform relies.
Besides, according to the Garante, the lack of an age verification mechanism exposes children to responses that are absolutely inappropriate to their age and awareness, even though the service is allegedly addressed to users aged over 13 according to OpenAI's terms of service (and even though the service doesn't allow you to use certain terms; if you invite it to employ foul language, for example, it will tell you something along the lines of: "As an AI language model, I cannot use swear words or any kind of offensive language. My purpose is to provide helpful and informative responses while maintaining a respectful and professional tone").
And so Italy has become the first Western country to take action against a chatbot powered by Artificial Intelligence, placing it alongside China, Hong Kong, Iran, Russia and parts of Africa, where residents are unable to create OpenAI accounts.
It is interesting to note how most people seem to be more afraid of other types of damage that AI-powered tools such as ChatGPT may cause. In the past few months, for example, lecturers and teachers in other countries such as Australia complained about students using ChatGPT in schools, wondering if it should be banned from educational institutions as it may have a negative impact on student learning and generate plagiarism, but the discourse wasn't so heated in Italy. There were indeed no major scandals at university level in Italy with thousands of students getting ChatGPT to write their dissertations or other similar cases.
Cut and paste is an exercise that the Italian government seems to allow, actually: there was indeed a very bizarre human "cut and paste" exercise a couple of weeks ago, when Claudio Anastasio, appointed by PM Giorgia Meloni (allegedly on the recommendation of Rachele Mussolini, a city councilor in Rome and granddaughter of Benito Mussolini) to head state-owned company 3-I, sent an e-mail to his board of directors. Anastasio took Benito Mussolini's 1925 speech to Parliament, in which the dictator claimed "political, moral and historical responsibility" for the assassination by fascist squads of socialist MP Giacomo Matteotti, replaced the word "fascism" with "3-I" and sent it around. The entire story was rather worrying, considering that 3-I is in charge of the digital transformation of public services in Italy and that the speech in question marked the beginning of totalitarianism in Italy. Anastasio resigned in the end, but the government is still full of proud "heirs of the Duce", as Ignazio La Russa, president of the Senate and co-founder of Fratelli d'Italia, stated in September: "We are all heirs of the Duce" (I personally dissent: I'm no heir of the Duce).
So ChatGPT was banned not on "AIgiarism" grounds, but on privacy grounds. Bizarrely, the Garante never banned social media in Italy on the same grounds. The Garante investigated WhatsApp and eventually ordered it to stop sharing user data with parent company Facebook in 2018, but never blocked it. As for protecting minors, well, you should be 13 years old to use Facebook or Instagram in Italy and 16 to use TikTok, but, obviously, there are kids younger than that using these social networks. You must also be 16 years old to use WhatsApp, yet, in most cases, kids younger than that have chat groups on WhatsApp with their school friends that also include some of their teachers.
Usually you ban what scares you and what you don't know because you're too afraid to learn about it. ChatGPT is pretty scary for some: yes, it can be damaging for students, and yes, it can still hallucinate and produce incorrect texts, but banning it means stopping everybody on Italian territory from experimenting with it, learning from its mistakes and understanding its powers and limits, eventually leaving them behind compared to users in other countries.
Most people currently use AI-powered tools as a hobby, to create art or entertain themselves: we have indeed seen how crochet and knitting enthusiasts use them to create patterns with hilarious results; others use ChatGPT for coding purposes or even to come up with recipes (the latest iteration of the AI behind the bot, GPT-4, should even be able to provide recipe suggestions based on a photograph of the contents of your fridge).
Besides, the new ChatGPT plugins can enable GPT-4 to look up data on the web and provide new opportunities for businesses as well, as users may be able to employ the plugins for a variety of applications, from organising a trip to finding a restaurant.
At the moment the priority is not stopping Artificial Intelligence, but training it in a more ethical way, developing new legal and copyright rules to protect one's original work and understanding in which fields, including education and productivity, it can improve our lives. So far we know, for example, that AI can help detect breast cancer, while researchers from the Massachusetts Institute of Technology (MIT) have developed a new AI model that outperforms human pathologists in identifying brain cancer. The system analyzes large volumes of data and accurately classifies glioma, a type of brain cancer, into its two subtypes - oligodendroglioma and astrocytoma - in just a few minutes. The AI model has an accuracy rate of over 94%, compared to the 75% rate achieved by human pathologists. The AI system also proved to be more consistent, while human experts' accuracy obviously varies depending on their level of expertise.
Everything can be bad if used for the wrong reasons: you can use WhatsApp to contact your friends based in other countries, but you can also use it to stalk and threaten somebody; you can use TikTok to post hilarious videos of your epic fails and gather millions of followers, but you can also use it to offend someone, encourage body dysmorphia and push somebody towards anorexia. And you can use a knife to kill or to cut a loaf of bread. A tool is just a tool; it is the way we employ it that makes the difference.
Time will tell who's right or wrong in this dispute between the Italian Garante and ChatGPT. In the meantime, OpenAI now has 20 days to respond to the request of Italy's Data Protection Authority, or it risks a fine of up to 20 million euros or up to 4% of its annual global turnover.
So, yes, in the last few days we have got confirmation that Artificial Intelligence (AI) is definitely progressing faster than human intelligence. There is, instead, no end to human stupidity: ChatGPT may be banned in Italy, but everybody on Italian soil can obviously use a VPN, pretend to be in another country and continue accessing ChatGPT.
Blissfully ignorant of having been banned in Italy, ChatGPT itself first suggested doing so when I asked it what to do in the eventuality it was banned. When I asked again a few hours later, it was still blissfully ignorant of the ban, but it suggested complying with local laws and regulations and even reaching out to legal experts for guidance. Who knows, maybe Artificial Intelligence may not be the villain after all.