At the end of 2022, writing about Artificial Intelligence (AI) felt like embarking on a journey into the future of technology. However, 2023 unfolded as a tumultuous ride filled with lawsuits, corporate disputes, dubious fashion experiments and scams that harmed consumers.
Frequently, AI found itself cast as the villain and the menace. This trend seems to persist, as shown by the fabricated explicit images of Taylor Swift that quickly spread on social media in January.
Taylor Swift, Time magazine's 2023 Person of the Year, the world's most-streamed musical artist and celebrated for her groundbreaking tour, became the focal point of attention on social media when AI-generated "deepfake" images of her alarmed the public and caught the attention of the White House. As the images spread, X, formerly known as Twitter, had to take measures to block searches related to Taylor Swift.
A bipartisan group of US senators introduced a bill dubbed the Disrupt Explicit Forged Images and Non-Consensual Edits Act of 2024 (the "Defiance Act"). Even before this, in May 2023, Democratic congressman Joseph Morelle had proposed the Preventing Deepfakes of Intimate Images Act, aiming to criminalize the dissemination of nonconsensual, sexualized AI-generated images. Republican congressman Tom Kean Jr co-sponsored Morelle's bill while introducing his own AI Labeling Act in November 2023, which would require all AI-generated content (including more innocuous chatbots used in customer service settings, for example) to be labelled as such.
While the Defiance Act was prompted by the Swift incident (Morelle's proposal, as noted, predates it), these legislative initiatives address broader concerns regarding the proliferation of AI-generated deepfake pornography and its potential impact on millions of individuals, particularly women, and they mark significant developments in the legal landscape.
Legally speaking, there have been other interesting stories involving AI: the estate of stand-up comedian George Carlin filed a lawsuit against Dudesy, the media company behind a fake comedy special created using AI to mimic Carlin's style and material. The lawsuit, filed in a federal court in Los Angeles, alleges violations of Carlin's right of publicity and copyright, asserting that the defendants had no permission or licenses to use Carlin's likeness and copyrighted materials for their audio special "George Carlin: I'm Glad I'm Dead", in which Carlin, who passed away in 2008, can be heard commenting on current events.
Apart from featuring an AI-generated Carlin, the programme includes a voice claiming to be the AI engine used by Dudesy, which stated that it had listened to 50 years of material and done its best to imitate "his voice, cadence and attitude as well as the subject matter I think would have interested him today". The case underscores the ethical and legal considerations surrounding the use of AI in creative endeavors (ethical AI studies should become one of the subjects taught in any degree focused on new technologies), highlighting the importance of obtaining proper authorization and respecting intellectual property rights.
Josh Schiller, an attorney for the plaintiffs, made an important point here, highlighting that this case "is not just about AI, it's about the humans that use AI to violate the law, infringe on intellectual property rights, and flout common decency". Indeed, the authors could have asked the family for permission before using AI to generate their own version of the stand-up comedian. In the same way, Taylor Swift's images weren't generated by Artificial Intelligence itself, but by people using it in unethical ways.
Lawsuits by artists claiming copyright infringement against the companies that design AI systems are also continuing.
At the beginning of January, Riot Games storyboard artist Jon Lam shared a link on X to a Google spreadsheet naming thousands of artists whose work Midjourney allegedly used to imitate their styles. The tweet also featured messages, reportedly from Midjourney developers, discussing strategies to create plausible deniability regarding their unauthorized use of artists' works. One message suggested, "all you have to do is just use those scraped datasets and then conveniently forget what you used to train the model".
The roughly 16,000 artists named in the document include Yayoi Kusama, Frida Kahlo, HR Giger, Picasso, Egon Schiele, Mark Rothko, Francis Bacon, and Andy Warhol. Also listed are British artists such as Bridget Riley, Damien Hirst, Tracey Emin, David Hockney, and Anish Kapoor, who have sought the legal support of US lawyers to explore participation in a class action lawsuit against Midjourney and other AI companies (some of them are also considering initiating legal proceedings in the UK).
This list, comprising 24 pages of names, serves as Exhibit J in a class action lawsuit filed by ten American artists in California against Midjourney, Stability AI, Runway AI, and DeviantArt.
This lawsuit is the continuation of another filed last year (Andersen et al. v. Stability AI Ltd. et al.), which saw most of its claims dismissed by a federal district court judge at the end of 2023.
In that case the plaintiffs claimed that text-to-image tools like Stable Diffusion were trained on their works to produce similar images. They filed various claims, including copyright infringement, violation of the Digital Millennium Copyright Act (DMCA), and violations of the right of publicity.
The defendants, including Stability AI Ltd., DeviantArt, Inc., and Midjourney, Inc., filed separate motions to dismiss, with DeviantArt also filing a special motion to strike. In October last year, the judge granted the motions to dismiss and deferred the motion to strike, dismissing most of the claims: some works had not been registered with the US Copyright Office (a prerequisite for suing for infringement), and there was insufficient evidence of substantial similarity between the AI-generated images and the original works.
Yet the lawsuit wasn't entirely thrown out: in its order following the hearing, as stated on the Stable Diffusion Litigation site, "the Court denied Stability AI's attempt to dismiss plaintiffs' most significant claim, namely the direct copyright-infringement claim for misappropriation of billions of images for AI training. The Court also denied Midjourney's attempt to dismiss the class allegations". The lawsuit was therefore amended and refiled in November, adding several plaintiffs (and Runway AI to the list of defendants; Exhibit J is part of this amended complaint).
As an artist, there are a few things you can try to protect your copyright. The website haveibeentrained.com allows you to check whether your work has been used in generative AI programs (at the time of writing this post it is not working, though), and its Do Not Train Registry lets you ask for works to be excluded from training datasets. The University of Chicago's Glaze program is also gaining popularity: it aims to protect artists from programs like Midjourney and Stable Diffusion by modifying digital image data so that an image looks the same to human eyes but drastically different to AI models. Additionally, the judge's dismissal of some claims in October underscores how important it is for artists in the US to register their works with the Copyright Office before pursuing infringement claims.
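To give a rough idea of the mechanism behind a tool like Glaze, the sketch below adds a small, bounded perturbation to an image's pixel data. Be aware this is only an illustration, not Glaze's actual algorithm: Glaze optimizes its perturbation against the feature extractors of real image models, whereas this function (with hypothetical file names) simply injects low-amplitude noise to show what "looks the same to human eyes but different in the data" means in practice.

```python
# A minimal sketch of the "cloaking" idea behind tools like Glaze:
# perturb the pixel data so the image looks unchanged to a human viewer
# but is numerically different for a model ingesting it. This is NOT
# Glaze's real algorithm (which optimizes the perturbation adversarially);
# it is an illustration only, with hypothetical file names.
import numpy as np
from PIL import Image

def cloak_image(in_path: str, out_path: str, strength: float = 3.0) -> None:
    """Add low-amplitude, bounded noise to an image's pixels."""
    img = np.asarray(Image.open(in_path).convert("RGB"), dtype=np.float32)

    # Deterministic pseudo-random perturbation bounded by `strength`
    # (in 0-255 pixel units); +/-3 levels is below what most eyes notice.
    rng = np.random.default_rng(seed=42)
    perturbation = rng.uniform(-strength, strength, size=img.shape)

    cloaked = np.clip(img + perturbation, 0, 255).astype(np.uint8)
    Image.fromarray(cloaked).save(out_path)

# Hypothetical usage:
# cloak_image("artwork.png", "artwork_cloaked.png")
```

A change of a few intensity levels per channel is invisible to most viewers, yet it measurably alters the data a model receives; Glaze's contribution is shaping that change so it actually disrupts style mimicry.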
This year appears poised to be marked by a significant increase in legal disputes and conflicts centered on AI, so be ready to protect your work by learning how to demonstrate ownership of the disputed content.
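One simple habit that can help here (a personal record, not legal proof, and no substitute for registering with the US Copyright Office as discussed above) is to log a cryptographic hash of each finished work together with a timestamp, so you can later show that a specific file existed in your hands on a specific date. A minimal sketch, assuming Python and a hypothetical log file name:

```python
# Record that you held a given file at a given time by storing its
# SHA-256 digest alongside a UTC timestamp. This is an informal personal
# record, not legal proof of authorship and not a substitute for
# registration with the US Copyright Office.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def record_ownership(path: str, log_path: str = "ownership_log.json") -> dict:
    """Append the file's SHA-256 digest and a UTC timestamp to a JSON log."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    entry = {
        "file": path,
        "sha256": digest,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    log = Path(log_path)
    entries = json.loads(log.read_text()) if log.exists() else []
    entries.append(entry)
    log.write_text(json.dumps(entries, indent=2))
    return entry

# Hypothetical usage:
# record_ownership("artwork_cloaked.png")
```

Because any change to the file changes its hash, a logged digest lets you demonstrate that the exact version of a work predates a disputed copy.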