Algorithms are everything in our times. In the age of big data, automated algorithms that can analyse large volumes of information in just a matter of seconds are the key to power.
But studies and research carried out in the last few years have shown that computerised algorithms often tend to favour one race over others. Yet it is also true that technology is not per se discriminatory and an algorithm is not "born" racist. Machine learning-based systems are indeed trained on the world we know and on the data they are fed. When an algorithm shows such "preferences" we talk about "algorithmic bias".
Sociologist Ruha Benjamin highlights, for example, in her volume Race After Technology (2019) that robots (and technologies) can be racist as they are designed in a world drenched in racism, so they "learn to speak the coded language of their human parents – not only programmers but all of us online who contribute to 'naturally occurring' datasets on which AI learn."
If you consider, for example, that the Artificial Intelligence sector is dominated by men, most of them white, you easily come to one simple conclusion – they are the people who shape and inform this technology.
A quick example is the algorithm Amazon experimented with a while back. The multinational company tried to employ Artificial Intelligence to create a tool to quickly screen resumes. While this was supposed to be an efficient way to scan job applications, it generated controversy and Amazon stated it never used it in the end. The screening algorithm was trained on resumes the company had collected over a decade, but those resumes mainly came from men, which meant the algorithm automatically learnt to penalise and exclude women (you can discover more about algorithmically driven data failures affecting people of colour and women in Safiya Umoja Noble's Algorithms of Oppression: How Search Engines Reinforce Racism).
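To make the mechanism concrete, here is a minimal Python sketch using invented data and a scikit-learn logistic regression – not Amazon's actual system, just an illustration of the general principle: when the historical decisions used as training labels already disadvantage a gender-correlated feature, the model dutifully learns to penalise that feature too.

```python
# A toy illustration, not Amazon's actual system: all data here is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Two hypothetical features per applicant: years of experience and a
# proxy feature that correlates with gender (e.g. "attended a women's college").
experience = rng.normal(5, 2, n)
gender_proxy = rng.integers(0, 2, n)

# Historical hiring decisions used as training labels: in this invented
# history, qualified applicants were hired mostly when the proxy was absent.
hired = ((experience > 4) & (gender_proxy == 0)).astype(int)

X = np.column_stack([experience, gender_proxy])
model = LogisticRegression().fit(X, hired)

print("learned coefficients:", model.coef_)
# The coefficient on the gender proxy comes out strongly negative: the model
# has simply absorbed the bias encoded in its training labels.
```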
There have been studies focused on racist algorithms in the health-care sector too: last October, for example, a study revealed that decision-making software employed by some hospitals in the United States was discriminating against black patients with complex medical needs. The algorithm assigned black patients lower risk scores than equally sick white patients, effectively favouring the latter (according to the study, 17.7% of patients assigned to receive extra care by the algorithm were black, but the researchers calculated the proportion would rise to 46.5% if the algorithm were unbiased). Other studies have shown unfair and biased decisions made by algorithms in other fields, including criminal justice and education.
The latest example of algorithmic racism relates to fashion and occurred last week, when Instagram banned a picture of model and influencer Nyome Nicholas-Williams taken by photographer Alexandra Cameron. Nicholas-Williams, 28, a plus-size model who has worked for brands such as Adidas, Boots and Dove, has an impressive Instagram feed that she uses to spread body positivity and self-esteem.
In the picture taken by Cameron and banned by Instagram, Nicholas-Williams is portrayed with her eyes closed and arms wrapped around her breasts, looking like the portrait of bliss, something we genuinely need in our chaotic world currently gripped by the fear and anxiety of a global deadly pandemic.
Followers of the plus-size model loved the image and commented positively on it, but the post was taken down by the platform. Users complained and shared the censored photo of Nicholas-Williams under the hashtag #IwanttoseeNyome, and eventually the photo was restored.
Now it is rather sad that a picture meant to boost female self-esteem, and one that wasn't even that revealing (no nipples in sight, and nipples are the main cause of controversy on Instagram...), ended up being banned, while many pictures of scantily dressed thin white women (actresses, models, influencers and ordinary people) have never been the subject of any banning controversy. In this case it felt as if the algorithm, probably calibrated on white women with specific body standards, found a plus-size black woman's body offensive.
The platform boasts over 15,000 people all over the world reviewing posts and trying to spot genuinely offensive material, but Instagram has often been accused of discriminating against black people. Instagram head Adam Mosseri admitted in June that the platform has to examine its "algorithmic bias", while Vishal Shah, the company's vice-president of product, announced in July that an internal Equity Team is working to eliminate any bias in its systems and policies. Yet, while the platform launched its #ShareBlackStories campaign to promote black voices, Instagram users are still reporting that their posts in support of Black Lives Matter are often suppressed and flagged as a political or election issue.
As stated above, Nicholas-Williams and Cameron (who has shot a series of beautiful portraits of the model that will hopefully become part of an exhibition or an art event) had their original posts from the shoot restored, but this is just a small victory as what happened to the model and the photographer is also happening to other users.
Mosseri recently posted a message on the platform stating that the site stands "in solidarity with the Black community" and claiming it will focus on key points such as harassment, account verification, content distribution and algorithmic bias, starting with the Black community and then trying to understand how to better serve other underrepresented groups. Still, it would be interesting to know (though impossible, as nobody will ever reveal such data...) how the specific algorithms behind these platforms are designed and, above all, what kind of data were employed to build and train them.
Will algorithms be able to perform fairly in future? Surely there will be ways to build fairer systems: by experimenting more to find which variables cause bans, and by rerunning algorithms with other variables and different datasets suggested by a wider range of advisors from different backgrounds and communities (could we have, for example, Nicholas-Williams as one of the advisors suggesting variables?) rather than just by a select group of young white men, advisors who could work closely with the people developing the algorithms (a rough sketch of what such a check could look like follows below). Otherwise, such systems will continue to be technically accurate, but terribly unfair and unethical as well.
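As a hint of what that kind of check might involve, here is a minimal Python sketch: it simply compares how often a moderation model flags posts from different groups of users on the same test set. All names and numbers are hypothetical; a real audit would of course be far more sophisticated.

```python
# A minimal, hypothetical audit: compare how often a moderation model flags
# posts from different groups of users. All names and numbers are invented.
from collections import defaultdict

def flag_rate_by_group(predictions, groups):
    """Share of posts flagged by the model for each demographic group."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for flag, group in zip(predictions, groups):
        total[group] += 1
        flagged[group] += int(flag)
    return {g: flagged[g] / total[g] for g in total}

# 1 = post flagged/banned by the model, 0 = left untouched.
predictions = [1, 1, 1, 0, 0, 0, 0, 0]
groups = ["black", "black", "black", "black",
          "white", "white", "white", "white"]

print(flag_rate_by_group(predictions, groups))
# {'black': 0.75, 'white': 0.0}: a gap like this is the signal to go back,
# change the variables and datasets, and rerun before deploying the model.
```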