We often hear about Artificial Intelligence (AI) programs and the possibilities AI can offer to fields such as fashion and art. We have come a long way since the early days of AI and machine learning, and, while some companies that have been widely experimenting with AI claim it can provide great experiences, everything is fluid when it comes to new technologies: innovative advances in other fields may lead to even more intriguing applications. That said, there is also something scary about AI.
Take for example the exhibition that will soon be opening at Milan's Osservatorio Fondazione Prada (Galleria Vittorio Emanuele II) just in time for the next fashion week.
Curated by Kate Crawford, AI researcher and Distinguished Research Professor at New York University, and by artist and researcher Trevor Paglen, "Training Humans" (12th September 2019 - 24th February 2020) is a photographic exhibition featuring a wide range of images portraying human beings. Scientists employ these pictures to train artificial intelligence systems to see and categorise the world.
The exhibition is indeed a journey of discovery through AI, almost a historical interrogation: it starts with the images used in the first computerized facial recognition lab, funded by the Central Intelligence Agency (CIA) in the U.S. from 1963 onward, and also includes large visual databases used for visual object recognition research, such as ImageNet (2009).
As stated by Trevor Paglen, "when we first started conceptualizing this exhibition over two years ago, we wanted to tell a story about the history of images used to 'recognize' humans in computer vision and AI systems. We weren't interested in either the hyped, marketing version of AI nor the tales of dystopian robot futures."
Kate Crawford adds in a press release, "We wanted to engage directly with the images that train AI systems, and to take those images seriously as a part of a rapidly evolving culture. They represent the new vernacular photography that drives machine vision. To see how this works, we analyzed hundreds of training sets to understand how these 'engines of seeing' operate."
To allow visitors to discover how these systems operate, the artist and the curator looked at a series of key developments: in the 1990s, for example, a more advanced generation of computer vision systems was created by the U.S. Department of Defense Counterdrug Technology Development Program Office.
For a database known as Face Recognition Technology (FERET), they developed a collection of portraits of 1,199 people, for a total of 14,126 images, in order to have a "standard benchmark" - which allows researchers to develop algorithms on a common database of images.
In more recent years, the Internet and in particular social media fuelled the proliferation of images, and AI researchers moved from using government-owned collections, such as FBI mugshots of deceased criminals, to gathering photographs from the web - a practice that became the norm. Many people in the AI field (from both private and public sectors) began harvesting millions of publicly available images without asking permission or consent from the photographers or the subjects of the photos.
The application of labels to these images is often done in labs or by low-paid Amazon Mechanical Turk (MTurk) workers, who usually work remotely from their own homes on mind-numbing, spirit-crushing, alienating and repetitive tasks that computers can't do yet - jobs that can even cause physical issues such as carpal tunnel syndrome, as these workers spend their days ticking boxes and clicking endlessly on their mouse to classify images of people and objects. This labelling produces a regime of classifications, with people tagged by race, gender, age, emotion, and sometimes personal character.
As underlined by Paglen, "this exhibition shows how these images are part of a long tradition of capturing people's images without their consent, in order to classify, segment, and often stereotype them in ways that evoke colonial projects of the past."
The exhibition curators prompt visitors to think not just about the representation of the human beings whose images are harvested, interpreted and codified through training datasets, but also about the way these photographs are labelled and classified: creating new biases and boundaries, making assumptions and errors, generating new prejudices and ideologies and, last but not least, perpetuating a dark history of social classification and of post-colonial and racist systems of population segmentation.
Researchers at the University of Tennessee, Knoxville created for example a dataset of 20,000 faces to classify people by race, gender, and age. According to it, gender is binary and race can be represented by the categories White, Black, Asian, Indian, and Others.
Besides, AI systems are even used to measure people's facial expressions to assess everything from mental health to whether someone should be hired or whether a person is going to commit a crime.
Through the images on display, visitors will be given the chance to look at the AI technologies used in our society, from facial recognition and gait detection to biometric surveillance and even emotion recognition, and wonder who has the power to build and benefit from these systems.
"What we hope is that 'Training Humans' gives us at least a moment to start to look back at these systems and understand, in a more forensic way, how they see and categorize us," Crawford concludes.
If you want to discover more about AI systems, you can also check out Crawford's installation "Anatomy of an AI System", created with Vladan Joler and part of La Triennale di Milano's event "Broken Nature: Design Takes on Human Survival" (until 1st September 2019); Crawford's new book Atlas of AI (Yale University Press) will instead be published next year.
Only one question remains: will Miuccia Prada be inspired by the many possibilities of Artificial Intelligence systems for her next catwalk show and collection? Time will tell.
Image credits for this post
All images courtesy of Fondazione Prada. The images used in this exhibition are all drawn from publicly available training sets. If you see an image of yourself and you would like to have it removed from the event, please contact Fondazione Prada: [email protected]
1.
UTK Face, 2017
Zhifei Zhang, Yang Song, and Hairong Qi
Researchers at the University of Tennessee, Knoxville created this dataset of 20,000 faces to classify people by race, gender, and age. According to the dataset, gender is binary and race can be represented by the categories White, Black, Asian, Indian, and Others.
2 - 7.
FERET Dataset, 1993-1996
National Institute of Standards and Technology
Dataset funded by the United States military's Counterdrug Technology Development Program for use in facial recognition research.
8 and 9.
CASIA Gait and Cumulative Foot Pressure, 2001
Shuai Zheng, Kaiqi Huang, Tieniu Tan and Dacheng Tao
Created at the Center for Biometrics and Security Research at the Chinese Academy of Sciences, the dataset is designed for research into recognizing people by the signature of their gait.
10 - 13.
SDUMLA-HMT, 2011
Yilong Yin, Lili Liu, and Xiwei Sun
The iris prints come from a larger multimodal dataset developed at Shandong University in Jinan, China, which includes faces, irises, finger veins, and fingerprints for use in biometric applications.