According to a recent analysis, a popular AI tool that creates images from text prompts is rife with gender and racial stereotypes when rendering people in “high-paying” and “low-paying” jobs.
Stable Diffusion, a free AI model, was asked to render 5,100 images from written prompts related to job titles in 14 fields and three crime-related categories, as part of a test of the tool by Bloomberg.
The outlet then analyzed the results using the Fitzpatrick Skin Scale – a six-point scale that dermatologists use to assess the amount of pigment in someone’s skin.
Bloomberg found that the images generated for every “high-paying” job – such as architect, doctor, lawyer, CEO and politician – were dominated by lighter skin tones, rated one to three on the skin scale.
Meanwhile, darker skin tones made up the vast majority of the images for “low-paying” jobs, such as janitors, dishwashers, fast-food workers, and social workers.
The stereotypes were even worse when Bloomberg asked Stable Diffusion to categorize work-related photos by gender.
The AI tool generated nearly three times as many images of men as women, with only four of the 14 occupations — cashier, teacher, social worker and housekeeper — dominated by women.
Of the 300 images created for each of the 14 positions, all but two of those for the keyword “engineer” were perceived as male, Bloomberg reported, while no images were generated for the keyword “janitor.”
Crime prompts asked the AI tool to render images of drug dealers, terrorists, and prisoners. The overwhelming majority of the images for both drug dealers and prisoners skewed toward darker skin tones.
The results for terrorists showed men with dark facial hair, often wearing head coverings – clearly drawing on stereotypes of Muslim men, Bloomberg stated.
“All AI models have inherent biases that are representative of the datasets they’re trained on,” a spokesperson for London-based startup StabilityAI, which distributes Stable Diffusion, told The Post.
They added that because Stable Diffusion is open source – meaning the software can learn from new algorithms and datasets – platforms like it will someday “improve bias assessment techniques and develop solutions beyond basic prompt modification.”
“We intend to train open-source models on country- and culture-specific datasets that will serve to mitigate bias caused by overrepresentation in general datasets,” the spokesperson said.
Stable Diffusion is part of the fast-growing generative AI industry, which also includes paid services such as DeepAI, Midjourney, and Dall-E from OpenAI, the company behind ChatGPT.
Stable Diffusion is already being used by startups such as Deep Agency, an AI-powered virtual photo studio and modeling agency in the Netherlands that allows brands to generate images of people for mainstream advertising.
Deep Agency is still in beta, according to its website, which demonstrates its capabilities with two models who appear to be real people posing. However, “these models don’t exist,” the site notes.
Graphic design platform Canva, which boasts 125 million active users, has also introduced a Stable Diffusion integration, enabling all kinds of companies and marketers to incorporate unique AI-generated images into their design and advertising work.
Canva’s head of AI products, Danny Wu, told Bloomberg that the company’s users have already created 114 million images using Canva’s Stable Diffusion integration, a figure he estimates will soon be “eliminated.”
“The challenge of ensuring AI technology is fair and representative, especially as these tools become more widely used, is a really important issue that we’re actively working on,” he said.
The need to eliminate bias from generative AI tools has become more pressing as police forces seek to use the technology to create photorealistic composite images of suspects.
“Showing someone a machine-generated image can reassure them that it’s that person, even if they might not be that person – even if it’s a completely faked image,” Nicole Napolitano, director of research strategy at the Center for Policing Equity, told Bloomberg.
Napolitano said police departments with limited budgets are already adopting AI-powered technology, despite the lack of regulation.
She also cited the hundreds of wrongful arrests that have been made due to lapses in technology such as facial recognition systems and biased artificial intelligence models.