Making Room for Inclusion in AI-Generated Art: Combating Biases within Algorithms
By now, most of us have discovered the wonderful breakthroughs in AI-generated art. From a simple text prompt, AI engines such as Midjourney, Stable Diffusion, and DALL-E can generate entire scenes, and now short videos. Soon, we'll have longer videos and likely entire procedurally generated 3D metaverses. Unfortunately, the biases that have long been inherent in AI are now more visible than ever.
AI image generation follows a pattern similar to most common AI: a team assembles large data sets (statistics, text, speech, images, video) and then spends time training the AI. This usually happens by having a team identify the target objects in each image (e.g., a cat sitting on a sofa, a cat sitting on a table) or by statistically analyzing which words tend to appear next to each other (e.g., "peanut butter" might be 90% likely to be followed by "jelly" but only 15% likely to be followed by "honey"). For generating art, these AI engines have been fed thousands of paintings, drawings, and photographs. These images were tagged, sometimes manually but increasingly automatically, as the AI learned to recognize objects (the AI knows what a cat looks like). These recognitions accumulate, so now an AI "knows" what a cat looks like, "knows" what a whiteboard looks like, and "knows" the visual patterns that are unique to Matisse, Dali, or Banksy.
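The word co-occurrence idea above can be sketched with a toy bigram model. This is a minimal illustration over a hypothetical five-sentence corpus, not how any production engine is actually trained; real systems estimate these statistics over billions of tokens.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus; real training data is billions of tokens.
corpus = [
    "peanut butter and jelly",
    "peanut butter and jelly sandwich",
    "peanut butter and honey",
    "a cat sitting on a sofa",
    "a cat sitting on a table",
]

# Count how often each word follows another (bigram counts).
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for first, second in zip(words, words[1:]):
        follows[first][second] += 1

def next_word_probability(first: str, second: str) -> float:
    """Estimate P(second | first) from the bigram counts."""
    total = sum(follows[first].values())
    return follows[first][second] / total if total else 0.0

print(next_word_probability("and", "jelly"))  # "and" -> "jelly" in 2 of 3 cases
print(next_word_probability("and", "honey"))  # "and" -> "honey" in 1 of 3 cases
```

The same counting logic is also why bias creeps in: if the training text pairs certain words (or the training images pair certain tags) far more often than reality warrants, the model reproduces that skew.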
These patterns, and the data sets from which they are derived, are the problem. Three truths combine here: (1) AI has been trained manually by human developers, (2) humans are inherently, unconsciously biased, and (3) white men have been the majority identity among developers. Given all three, it makes sense that AI-generated media favors whiteness and men. Let's look at some examples of how AI favors these identities, using simple, generic prompts in Midjourney.
Just as in the real world, these biases reinforce societal norms that white men are the general standard and white women are the standard for women. Digitalization should be advancing us in all regards, including in diversity, equity, and inclusion efforts, not perpetuating harmful and outdated stereotypes. The good news is that biases can be overcome with forethought and intentional action. At a programmatic level, it is key for teams building AI engines to identify and understand how biases creep in, both from a cultural stereotyping perspective and from the underlying statistics from which the AI draws its output.
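One concrete way to examine those underlying statistics is to audit the tag distribution of the training set itself. The sketch below uses hypothetical annotation records and an arbitrary skew threshold (the `records` data, the tag names, and the 0.2 cutoff are all assumptions for illustration); a real audit would read tags from the dataset's actual metadata and choose a reference distribution deliberately.

```python
from collections import Counter

# Hypothetical annotation records; a real audit would load these from
# the training set's metadata.
records = [
    {"tags": {"person", "man", "white"}},
    {"tags": {"person", "man", "white"}},
    {"tags": {"person", "man", "white"}},
    {"tags": {"person", "woman", "white"}},
    {"tags": {"person", "man", "black"}},
]

def demographic_shares(records, group_tags):
    """Share of images tagged 'person' that carry each demographic tag."""
    people = [r for r in records if "person" in r["tags"]]
    counts = Counter(t for r in people for t in r["tags"] & group_tags)
    return {tag: counts[tag] / len(people) for tag in group_tags}

shares = demographic_shares(records, {"man", "woman"})
# Flag any group far from an even split (0.2 is an arbitrary threshold).
skewed = {tag: share for tag, share in shares.items() if abs(share - 0.5) > 0.2}
print(shares)  # {'man': 0.8, 'woman': 0.2} (order may vary)
print(skewed)
```

A report like this makes the skew visible before training begins, which is exactly the kind of forethought the paragraph above calls for.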
It’s early days, still. AI will improve in accuracy and creative ability. As AI continues to march into more areas of our lives, we will see increasingly important roles for the “AI ethicist”, “AI bias manager”, and “AI culturalist”. With some effort, AI bias can be counteracted.
Stay ahead of trends with insights from iterate.ai experts and advisors