Interface Support for Evaluating Disability Bias in AI-Generated Images
Authors: Kelly Avery Mack, Lucy Jiang, Lotus Zhang, Leah Findlater
CHI '26: Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems
Article No.: 1191, Pages 1 - 27
https://doi.org/10.1145/3772318.3791922
Published: 13 April 2026
Abstract
Generative text-to-image (T2I) models often output images that reflect stereotypes of people with disabilities. One way to mitigate the risk of these biases is to intervene at the user level, supporting T2I users themselves in identifying biases and acting accordingly.
To understand how to design such support and its potential effectiveness, we implemented two interventions: (1) an education module to inform users of disability stereotypes in T2I images, and (2) AI-generated feedback about potential stereotypes in a given image.
We evaluated these interventions, alone and in combination, through a controlled experiment (N = 103) and a qualitative study (N = 10).
Our results demonstrate that interface-based interventions can help users identify stereotypes, but that users do not always wish to avoid them.
Participants wanted image subjects to “look” disabled, which sometimes inadvertently perpetuated stereotypes.
Our findings suggest concrete ways for T2I interfaces to support users in prompting for and assessing images.