
Meta Publishes FACET Dataset to Assess AI Fairness

 

Meta Platforms Inc. this week released FACET, a benchmark dataset designed to help researchers test computer vision models for bias.

FACET is being launched alongside an update to the open-source DINOv2 toolbox. DINOv2, first released in April, is a set of artificial intelligence models designed to support computer vision projects. With the update, DINOv2 is now available under a commercial licence.

Meta's new FACET dataset is intended to help researchers determine whether a computer vision model is producing biased results. The company explained in a blog post that measuring AI fairness is difficult with current benchmarking methodologies. FACET, according to Meta, will make that work easier by providing a large evaluation dataset that researchers can use to audit several types of computer vision models.

"The dataset is made up of 32,000 images containing 50,000 people, labelled by expert human annotators for demographic attributes (e.g., perceived gender presentation, perceived age group), additional physical attributes (e.g., perceived skin tone, hairstyle) and person-related classes (e.g., basketball player, doctor),” Meta researchers explained in the blog post. "FACET also contains person, hair, and clothing labels for 69,000 masks from SA-1B."

Researchers can test a computer vision model for fairness flaws by running it on FACET's images. They can then analyse whether the model's accuracy varies across images depicting people with different demographic attributes; such gaps in accuracy can indicate that the AI is biased. According to Meta, FACET can be used to detect fairness imperfections in four different types of computer vision models. Researchers can use the dataset to uncover bias in neural networks optimised for classification, the task of assigning an image to a category. It also simplifies the evaluation of object detection models, which are designed to automatically locate items of interest in images.
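The core of such an audit is comparing accuracy across the annotated groups. The sketch below shows the idea; the predictions, labels and group values are placeholders, and the real FACET tooling defines its own data format.

from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Return per-group accuracy so gaps between groups can be inspected."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        correct[group] += int(pred == label)
    return {g: correct[g] / total[g] for g in total}

# Example: a large accuracy gap between groups is a signal of possible bias.
preds  = ["doctor", "nurse", "doctor", "doctor"]
labels = ["doctor", "doctor", "doctor", "doctor"]
groups = ["group_a", "group_a", "group_b", "group_b"]
print(accuracy_by_group(preds, labels, groups))
# {'group_a': 0.5, 'group_b': 1.0}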

FACET is especially useful for auditing AI applications that perform instance segmentation and visual grounding, two specialised object detection tasks. Instance segmentation is the task of identifying the exact pixels that belong to each object of interest in an image, rather than simply enclosing the object in a bounding box. Visual grounding models, in turn, are neural networks that can scan a photo for an object that a user describes in natural language.
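For readers unfamiliar with instance segmentation output, the following sketch uses torchvision's pretrained Mask R-CNN (which is not part of FACET) to show the per-object pixel masks that a FACET audit would score across demographic groups.

import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

# A dummy RGB image tensor with values in [0, 1]; in practice this would be a real photo.
image = torch.rand(3, 480, 640)

with torch.no_grad():
    outputs = model([image])[0]

# Each detected instance comes with a bounding box, a class label,
# a confidence score and a pixel-level mask of shape [1, H, W].
for box, label, score, mask in zip(
    outputs["boxes"], outputs["labels"], outputs["scores"], outputs["masks"]
):
    if score > 0.5:
        print(label.item(), round(score.item(), 2), tuple(mask.shape))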

"While FACET is for research evaluation purposes only and cannot be used for training, we’re releasing the dataset and a dataset explorer with the intention that FACET can become a standard fairness evaluation benchmark for computer vision models,” Meta’s researchers added.

Along with the introduction of FACET, Meta changed the licence of its DINOv2 family of open-source computer vision models to the Apache 2.0 licence, which permits developers to use the software in both academic and commercial applications.

Meta's DINOv2 models are designed to extract data points of interest from photos. Engineers can use the extracted data to train other computer vision models. According to Meta, DINOv2 is significantly more accurate than a previous-generation neural network it built for the same task in 2021.
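In practice, that extraction step produces a feature embedding per image. Here is a brief sketch of using DINOv2 as a feature extractor, assuming the torch.hub entry points Meta published ('facebookresearch/dinov2', 'dinov2_vits14'); exact output sizes depend on the model variant chosen.

import torch

# Downloads the pretrained ViT-S/14 backbone on first use.
model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14").eval()

# Dummy normalized image batch; side lengths should be multiples of the
# 14-pixel patch size (224 = 16 * 14).
images = torch.rand(1, 3, 224, 224)

with torch.no_grad():
    features = model(images)  # one embedding vector per image

print(features.shape)  # e.g. torch.Size([1, 384]) for the ViT-S/14 variant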