Promoting Fairness in Generative AI: Strategies to Tackle Bias Head-On
Written by Amir Bahadori with research from Sinead O’Connor and help from AI
In today's digital landscape, where AI plays a pivotal role in shaping generative designs, the issue of bias looms large. As a UX design agency committed to creating inclusive and user-centric AI designs, we embarked on a journey to unravel the complexities of AI bias, understand its origins, and discover effective ways to mitigate its impact.
Our journey led us to examine three prominent generative AI image generators, scrutinizing the extent of bias within these systems and gauging their relative performance in this domain. At the end of this blog, you'll find a groundbreaking framework, a designer's ally in the pursuit of reducing bias through thoughtful prompting, that will help your teams mitigate bias in their designs.
[Image: AI-generated faces of men and women. Source: Reddit]
Understanding AI Bias: A Multifaceted Challenge
At the heart of AI lies a multifaceted challenge—bias. It takes various forms, from cultural and demographic biases to linguistic and ideological nuances. To design AI that resonates with diverse audiences, it's essential to comprehend the intricate nature of bias and its sources.
In the world of AI, bias is not a one-size-fits-all concept. It comes in different forms, each with its own set of challenges and implications for design. Let's dive into the various dimensions of AI bias:
Cultural Bias: One of the primary challenges in AI design is addressing cultural bias. AI models are often trained on data primarily composed of dominant languages, which can lead to a skewed understanding of languages and dialects in underrepresented regions. As AI designers, we must ensure that our designs are culturally sensitive and relevant to a global audience.
Demographic Bias: Demographic bias arises when certain groups are overrepresented or underrepresented in the training data, leading to skewed outcomes. To combat this, AI designers need to enrich their training data to ensure diversity and equal representation.
Linguistic Bias: Linguistic bias stems from the language used in training data. It can lead to biased responses and content generation. As designers, we must be vigilant in crafting language that is inclusive and free from linguistic biases.
Confirmation Bias: AI models are not immune to confirmation bias, which occurs when data collection is biased toward certain viewpoints or beliefs. This can lead to AI systems reinforcing existing beliefs and contributing to misinformation. Ethical AI design requires careful consideration of data sources and collection methods.
Temporal Bias: Temporal bias arises when AI models are trained on data from specific time periods. This can hinder their ability to provide accurate insights on current events and trends. Designers should be aware of the temporal limitations of AI systems and adapt their designs accordingly.
Ideological & Political Bias: AI models can inadvertently learn and perpetuate biases associated with political or ideological viewpoints present in their training data. As designers, we must strive for neutrality and fairness in our AI designs, avoiding favoritism or bias towards any particular ideology.
SOURCE: Should ChatGPT be Biased? Challenges and Risks of Bias in Large Language Models (2023)
The Roots of AI Bias: Unmasking the Origins
To tackle bias, we must first understand where it comes from. The origins of AI bias are multifaceted, and they include:
Biased Prompts and Stereotypes: AI often reinforces stereotypes through biased prompts. Even seemingly innocent prompts can lead to AI generating content that perpetuates stereotypes. As AI designers, we must be mindful of the language we use when interacting with AI systems.
Real-world Inequalities: Bias in AI often mirrors real-world inequalities present in training data. To address this, we need to take proactive steps to ensure that our training data is representative and free from bias.
Complex Language-Vision Models: The intricacies of language-vision models can magnify the impact of bias. These models are capable of generating images and text based on the data they've been trained on, which means that biases present in the training data can be amplified. It's essential for designers to understand how these models work and take measures to mitigate bias.
Labeling and Annotation: In (semi)supervised learning scenarios, biases may emerge from the subjective judgments of human annotators providing labels or annotations for the training data.
Product Design Decisions Biases: These can arise from prioritizing certain use cases or designing user interfaces for specific demographics or industries, inadvertently reinforcing existing biases and excluding different perspectives.
SOURCE: Should ChatGPT be Biased? Challenges and Risks of Bias in Large Language Models (2023)
“If training data is produced out of a system of inequality, don’t use it to build models that make important social decisions unless you ensure the model doesn’t perpetuate inequality.” -Meredith Broussard
SOURCE: More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech
AI Bias in Action: Examining Different Models
Our research examined three major AI image generators. In our first case study, where we generated images of "intellectuals," we found that each had a unique bias profile.
Midjourney v5.2: This model excels in producing impressive imagery but tends to exhibit extreme bias across various social contexts when relying on generic prompts, often favoring a specific demographic as the default persona.
Stable Diffusion XL 1.0: It generates remarkable imagery with a more illustrative and less realistic style as the default. However, it also tends to display extreme bias, frequently leaning towards a particular demographic as the output.
DALL-E 2: While it may be less technically proficient in rendering people, specifically faces and hands, it displays some hints of diversity by not consistently favoring a single demographic as the default. Instead, it often includes individuals from various backgrounds in each set.
Our exploration of other use cases yielded similar outcomes.
Mitigating Bias in AI: Applying a Semiotics Lens to Prompt Writing for Designers
Our breakthrough occurred when we merged the field of semiotics, which explores the interpretation of visual symbols and their significance, with the practice of prompt writing. We initiated this process by feeding generative image models a prompt and then tactfully countering any inherent biases through deliberate language.
A compelling example from our case study illustrates this point: when we prompted image generators with the term "intellectual," the resulting images exhibited a notable bias toward older, Caucasian men. However, by employing a semiotic perspective to analyze the generated images, we also pinpointed more neutral elements that conveyed the essence of the term "intellectual." These included items such as bookcases and glasses, which helped mitigate the bias and enriched the overall interpretation.
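To make that analysis step concrete, here is a minimal Python sketch. It is purely illustrative, and the specific elements and labels are assumptions drawn from our "intellectual" example rather than a shipped tool: tag each element observed in the default outputs as bias-reinforcing or neutral, then keep the neutral cues for deliberate prompting.

```python
# Illustrative sketch: separate the visual elements observed in default
# "intellectual" outputs into persona-bound cues and bias-neutral cues.

observed_elements = {
    "older Caucasian man": "bias-reinforcing",  # the recurring default persona
    "grey beard": "bias-reinforcing",
    "glasses": "neutral",       # conveys "intellectual" for any demographic
    "bookcase": "neutral",
    "tweed jacket": "neutral",
}

# The neutral cues are the ones worth reusing in deliberate prompts.
neutral_cues = [element for element, label in observed_elements.items()
                if label == "neutral"]
print("Bias-neutral cues:", ", ".join(neutral_cues))
```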
The Role of Semiotics in AI Design
A valuable framework for designers derived from the field of semiotics is the RDE framework, which stands for Residual, Dominant, and Emergent. Semiotics provides insights into the context in which a brand exists and what it symbolizes in the eyes of the world. The significance we attribute to signs can evolve over time as cultures adapt. The RDE framework helps identify and categorize the prevailing codes that exist:
Residual: These codes have endured for a significant period but may appear outdated or uninspiring. However, they possess the potential for revitalization.
Dominant: Dominant codes mirror the prevailing norms and moods of today. They are heavily employed, resonating strongly with users and consumers.
Emergent: These codes feel fresh, innovative, and captivating, representing new expressions and innovative styles.
Adapted from MRS: Semiotics Masterclass
By using the RDE framework within semiotics, designers can gain a deeper understanding of the evolving significance of signs and symbols, enabling them to make informed decisions about how to craft meaningful prompts while deliberately confronting any inherent bias in the model.
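As a sketch of how this might look in practice (our own illustration; the specific codes below, such as "AR headset" or "tattoos," are hypothetical), a team could annotate the codes they spot in a batch of generated images with RDE labels and count where the outputs cluster. A heavy skew toward residual and dominant codes signals that the prompts are recycling familiar tropes:

```python
# Illustrative sketch: tally RDE labels across hand-annotated generated images.
from collections import Counter

# Each generated image is annotated by hand with the codes it contains.
image_codes = [
    {"tweed jacket": "residual", "glasses": "dominant", "AR headset": "emergent"},
    {"bookcase": "dominant", "glasses": "dominant"},
    {"tattoos": "emergent", "tweed jacket": "residual"},
]

# Count how often each RDE category appears across the whole set.
category_counts = Counter(label for image in image_codes for label in image.values())
print(category_counts)  # Counter({'dominant': 3, 'residual': 2, 'emergent': 2})
```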
For more information on semiotics and its application in branding, you can refer to the article "Semiotics: A Different Way to Look at Your Brand" by B2B International.
In Conclusion: The Road to Less Bias in AI Design
In our pursuit of less-biased AI design, we've delved into the intricacies of bias and the strategies to mitigate its impact. As designers, we play a crucial role in creating AI systems that are fair, inclusive, and devoid of harmful biases.
We propose a method where we break down images generated from default prompts, differentiating between elements that reinforce bias and those that are neutral in their bias implications. Additionally, we suggest research to identify dominant patterns and create a moodboard of images that deviate from them, allowing room for emergent expressions. Furthermore, we've provided a prompt-writing cheat sheet to aid individuals and teams in obtaining less-biased results from popular visual AI models. Applying the meaningful elements from the example above in Midjourney, you can write something like:
/imagine an intellectual, [insert description of person or persons] wearing glasses, and surrounded by books.
The resulting images are likely to take you somewhere unexpected, and more fun, if we're honest.
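If you want to audit a concept systematically rather than one prompt at a time, a small script can pair the neutral semiotic elements with a range of person descriptions. This is a minimal sketch under our own assumptions; Midjourney takes prompts through its /imagine command, so there is no API call here, just strings to paste in:

```python
# Illustrative sketch: generate /imagine prompt variants that pair one concept
# and its bias-neutral cues with a range of person descriptions.

def midjourney_prompts(concept, elements, people):
    """Return one pasteable prompt string per person description."""
    suffix = ", ".join(elements)
    return [f"/imagine {concept}, {person}, {suffix}" for person in people]

prompts = midjourney_prompts(
    "an intellectual",
    ["wearing glasses", "surrounded by books"],
    ["an elderly Black woman", "a young Latino man", "a nonbinary person in their 30s"],
)
for prompt in prompts:
    print(prompt)
```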
By being conscious of what we're putting out into the AI universe, involving human expertise, and employing Confetti Lab's semiotic approach to prompt writing, we can successfully navigate the complex terrain of AI bias and produce AI designs that feel fresh, fair, and emergent.