Less biased facial recognition? Microsoft touts improvement, IBM offering help

If a picture paints a thousand words, facial recognition paints two: It's biased.

You might remember that a few years ago Google Photos automatically tagged images of black people as "gorillas," or that Flickr (owned by Yahoo at the time) did much the same, tagging people as "apes" or "animals."

Earlier this year, the New York Times reported on a study by Joy Buolamwini, a researcher at the MIT Media Lab, on facial-recognition algorithms and bias: She found that facial recognition is most accurate for white men, and least accurate for darker-skinned people, especially women.

Now—as facial recognition is being considered for use or is being used by police, airports, immigration officials and others—Microsoft says it has improved its facial-recognition technology to the point where it has reduced error rates for darker-skinned men and women by up to 20 times. For women alone, the company says it has reduced error rates by nine times.

Microsoft made improvements by collecting more data and expanding and revising the datasets it used to train its AI.

From a recent company blog post: "The higher error rates on females with darker skin highlights an industrywide challenge: Artificial intelligence technologies are only as good as the data used to train them. If a system is to perform well across all people, the training dataset needs to represent a diversity of skin tones as well as factors such as hairstyle, jewelry and eyewear."

In other words, the company that brought us Tay, the sex-crazed and Nazi-loving chatbot, wants us to know it is trying, it's really trying. (You might also remember that Microsoft took its AI experiment Tay offline in 2016 after she quickly began to spew crazy and racist things on Twitter, reflecting the stuff she learned online. The company blamed a "coordinated attack by a subset of people" for Tay's corruption.)

In related news, IBM announced that it will release the world's largest facial dataset to technologists and researchers, to help in studying bias. It's actually releasing two datasets this fall: one that has more than 1 million images, and another that has 36,000 facial images equally distributed by ethnicity, gender and age.

Big Blue also said it improved its Watson Visual Recognition service for facial analysis earlier this year, cutting its error rate nearly tenfold.

"AI holds significant power to improve the way we live and work, but only if AI systems are developed and trained responsibly, and produce outcomes we trust," IBM said in a blog post. "Making sure that the system is trained on balanced data, and rid of biases, is critical to achieving such trust."

©2018 The Mercury News (San Jose, Calif.)
Distributed by Tribune Content Agency, LLC.
