Abstract:
Recently, computer vision datasets and models have been diagnosed with multiple social biases,
including gender, race, and age. Exposing these biases has proven to be the first step toward diluting their effects in algorithms. In this work, we discover and expose the entire chain of
differently-abled bias in generative models, well-known general-purpose datasets, and widely
used search engines.
Social bias (gender, race, and age) discovery is aided by readily available human attribute detectors trained on large datasets. Developing automatic methods for the discovery of concepts with
limited representation (training data), e.g., disability bias, is an interesting challenge. We propose a novel framework for efficient discovery of biases related to underrepresented groups.
Finally, we motivate the creation of large-scale differently-abled datasets using a real-life example of hotel booking websites. Our experiments reveal that although hotels gain an advantage in
search rankings by leveraging claims of inclusive amenities, the browsing experience of differently-abled individuals is not catered for.
We hope our findings will guide users, researchers, and tech giants to tackle this bias in the same way as
gender, race, and age stereotypes.