Princeton Visual AI Lab

Gender Artifacts in Visual Datasets

Nicole Meister*   Dora Zhao*   Angelina Wang   Vikram V. Ramaswamy   Ruth Fong   Olga Russakovsky
Princeton University
{nmeister, dorothyz}@alumni.princeton.edu

Paper

Code

Abstract

Gender biases are known to exist within large-scale visual datasets and can be reflected or even amplified in downstream models. Many prior works have proposed methods for mitigating gender biases, often by attempting to remove gender expression information from images. To understand the feasibility and practicality of these approaches, we investigate what gender artifacts exist within large-scale visual datasets. We define a gender artifact as a visual cue that is correlated with gender, focusing specifically on those cues that are learnable by a modern image classifier and have an interpretable human corollary. Through our analyses, we find that gender artifacts are ubiquitous in the COCO and OpenImages datasets, occurring everywhere from low-level information (e.g., the mean value of the color channels) to the higher-level composition of the image (e.g., pose and location of people). Given the prevalence of gender artifacts, we claim that attempts to remove gender artifacts from such datasets are largely infeasible. Instead, the responsibility lies with researchers and practitioners to be aware that the distribution of images within datasets is highly gendered and hence to develop methods that are robust to these distributional shifts across groups.

Citation


    @article{meister2022artifacts,
      author  = {Nicole Meister and Dora Zhao and Angelina Wang and Vikram V. Ramaswamy and Ruth Fong and Olga Russakovsky},
      title   = {Gender Artifacts in Visual Datasets},
      journal = {CoRR},
      volume  = {abs/2206.09191},
      year    = {2022}
    }
  

Identifying Gender Artifacts

In our work, we explore to what extent gendered information can truly be removed from the dataset. To do so, we develop a framework to identify gender artifacts, or visual cues that are correlated with gender. While there can be infinitely many potential correlations in an image, we focus specifically on those that are learnable (i.e., result in a learnable difference for a machine learning model) and interpretable (i.e., have an interpretable human corollary, such as color or shape). To discover gender artifacts, we train "gender artifact models": classifiers trained on selectively manipulated versions of the dataset (e.g., grayscale images, occluded backgrounds) to predict perceived gender expression. This approach goes beyond prior work, which primarily analyzes annotated attributes in the image, and unveils a broader set of gender artifacts.
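As a rough illustration of this setup, the sketch below fine-tunes an ImageNet-pretrained classifier on one manipulated variant of the dataset to predict a binary perceived-gender-expression label. This is a minimal sketch, not the released code; the ManipulatedCOCODataset loader, the variant names, and the hyperparameters are hypothetical placeholders.

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import models

    # Hypothetical dataset of (image, perceived-gender-expression label) pairs in
    # which every image has already been converted into one manipulated variant
    # (e.g., grayscale, person occluded); see the variant sketch further below.
    from data import ManipulatedCOCODataset  # assumed helper, not part of the released code

    def train_gender_artifact_model(variant="grayscale", epochs=5, lr=1e-4):
        device = "cuda" if torch.cuda.is_available() else "cpu"

        # ImageNet-pretrained backbone with a new binary classification head.
        model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
        model.fc = nn.Linear(model.fc.in_features, 2)
        model = model.to(device)

        loader = DataLoader(ManipulatedCOCODataset(split="train", variant=variant),
                            batch_size=64, shuffle=True, num_workers=8)
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        criterion = nn.CrossEntropyLoss()

        model.train()
        for _ in range(epochs):
            for images, labels in loader:
                images, labels = images.to(device), labels.to(device)
                optimizer.zero_grad()
                loss = criterion(model(images), labels)
                loss.backward()
                optimizer.step()
        return model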

Our goal is to understand which features the model picks up on as predictive, and how variations of the dataset that remove or add particular gender artifacts increase or decrease the model's performance.
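To make this comparison concrete, one could evaluate a trained gender artifact model on each dataset variant and compare how well it still separates the two perceived-gender-expression labels. The sketch below uses AUC as the comparison metric; the data loader and label encoding are assumed to match the training sketch above.

    import torch
    from sklearn.metrics import roc_auc_score

    @torch.no_grad()
    def evaluate_variant(model, loader):
        # Score a trained gender artifact model on one dataset variant.
        # AUC well above 0.5 on a manipulated variant suggests that gender
        # artifacts the model can exploit remain in that variant.
        device = next(model.parameters()).device
        model.eval()
        scores, labels = [], []
        for images, targets in loader:
            probs = torch.softmax(model(images.to(device)), dim=1)[:, 1]
            scores.extend(probs.cpu().tolist())
            labels.extend(targets.tolist())
        return roc_auc_score(labels, scores)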

We analyze some of the higher-level perceptual components of the image by changing the resolution and color of the image (1). Then, we turn to disentangling gender artifacts arising from the person versus the background (2). Finally, we distinguish between two components of the background, the objects and the "scene", by studying the role of contextual objects as gender artifacts (3).
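As one possible implementation, the sketch below shows how such dataset variants might be produced from COCO images: grayscale and low-resolution copies for the perceptual analyses, and person- or background-occluded copies built from the union of COCO person segmentation masks. The function names, variant names, and the choice of masking with zeros are illustrative assumptions, not the paper's exact procedure.

    import numpy as np
    from PIL import Image
    from pycocotools.coco import COCO

    def person_mask(coco: COCO, img_id: int):
        # Union of all "person" instance masks for one COCO image (HxW, values 0/1).
        ann_ids = coco.getAnnIds(imgIds=img_id, catIds=coco.getCatIds(catNms=["person"]))
        anns = coco.loadAnns(ann_ids)
        masks = [coco.annToMask(a) for a in anns]
        return np.clip(sum(masks), 0, 1) if masks else None

    def make_variant(image: Image.Image, variant: str, mask: np.ndarray = None):
        # Produce one manipulated copy of an RGB image; `mask` marks person pixels
        # and is only needed for the occlusion variants.
        if variant == "grayscale":
            # Remove color information while keeping luminance.
            return image.convert("L").convert("RGB")
        if variant == "lowres":
            # Downsample then upsample to strip high-frequency detail.
            w, h = image.size
            return image.resize((max(w // 8, 1), max(h // 8, 1))).resize((w, h))
        arr = np.array(image)
        if variant == "no_person":
            # Occlude the person to isolate artifacts in the background.
            arr[mask.astype(bool)] = 0
        elif variant == "no_background":
            # Occlude everything except the person to isolate person artifacts.
            arr[~mask.astype(bool)] = 0
        else:
            raise ValueError(f"unknown variant: {variant}")
        return Image.fromarray(arr)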

It is important to note that we do not condone the use of automated gender recognition in practice and discuss additional important ethical considerations in our paper's Introduction (Sec 1) and Set-up (Sec 3.3).

Implications

Acknowledgements

This material is based upon work supported by the National Science Foundation under Grant No. 1763642, Grant No. 2112562, and the Graduate Research Fellowship to A.W., as well as by the Princeton Engineering Project X Fund. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

The webpage template was adapted from this project page.

Contact

Nicole Meister (nmeister@alumni.princeton.edu)
Dora Zhao (dorothyz@alumni.princeton.edu)