Meta, formerly known as Facebook, has recently released a new AI benchmark called FACET (FAirness in Computer Vision EvaluaTion) to evaluate the “fairness” of computer vision models. This benchmark aims to assess biases in AI models that classify and detect objects, including people, in photos and videos.
Benchmarking Bias in Computer Vision Models
FACET is a dataset of 32,000 images containing 50,000 people, each labeled by expert human annotators. The annotations cover demographic attributes, physical attributes, and classes related to occupations and activities, enabling a comprehensive evaluation of biases present in computer vision models.
Meta emphasizes the importance of using FACET to benchmark fairness not only in vision tasks but also in other multimodal tasks. The release of FACET aims to enable researchers and practitioners to assess the disparities in their own models and monitor the effectiveness of fairness mitigations.
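To make the idea of "assessing disparities" concrete, here is a minimal sketch of the kind of question a FACET-style benchmark asks: does a detection model's recall differ across perceived demographic groups? The group names, data, and functions below are invented for illustration and do not reflect FACET's actual schema or tooling.

```python
# Hypothetical sketch of a per-group recall disparity check.
# All data and names here are illustrative, not FACET's real schema.
from collections import defaultdict

def recall_by_group(records):
    """Per-group recall from (group, was_detected) records."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, detected in records:
        totals[group] += 1
        hits[group] += int(detected)
    return {g: hits[g] / totals[g] for g in totals}

def max_disparity(recalls):
    """Largest recall gap between any two groups."""
    values = recalls.values()
    return max(values) - min(values)

# Toy detection results: (perceived group, detected by the model?)
records = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]
recalls = recall_by_group(records)
print(recalls)                 # per-group recall
print(max_disparity(recalls))  # gap between best- and worst-served group
```

A disparity near zero suggests the model serves the groups comparably on this metric; a large gap is the kind of signal FACET is designed to surface so that practitioners can investigate and mitigate it.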
Improving Fairness in AI
Benchmarking for biases in computer vision models is not a new concept, and Meta has previously released similar benchmarks. However, FACET is designed to be more comprehensive, allowing for deeper evaluations and answering specific questions about biases against different attributes.
Despite concerns about Meta’s track record in responsible AI, the company claims that FACET surpasses previous benchmarks. The aim is to address questions like whether models show biases when classifying people based on gender presentation or certain physical attributes.
About FACET
To create FACET, Meta employed human annotators from several geographical regions, including North and Latin America, the Middle East, Africa, and Southeast and East Asia. These experts labeled images for demographic attributes, physical attributes, and classes, and their annotations were combined with labels from the Segment Anything 1 Billion dataset.
It’s uncertain whether individuals photographed for FACET were aware that their images would be used for this purpose. Furthermore, the recruitment process and compensation for the annotators are not clearly defined in the blog post.
Key Takeaway
Meta has released FACET, an AI benchmark for evaluating biases in computer vision models. The dataset comprises 32,000 images with 50,000 labeled individuals, allowing for comprehensive evaluations of biases in AI models.
Despite potential concerns about Meta’s approach to responsible AI, FACET aims to address biases in computer vision models and enable researchers and practitioners to monitor and mitigate fairness concerns. This benchmark provides a tool for evaluating biases in vision and multimodal tasks, contributing to the ongoing efforts to improve the fairness of AI systems.