Explainer: How computers "see" faces and other objects

In this June 21, 2018 file photo, a face recognition camera is seen at the customs entry at Orlando International Airport in Orlando, Fla. (AP Photo/John Raoux)

Computers started to be able to recognize human faces in images decades ago, but now artificial intelligence systems are rivaling people's ability to classify objects in photos and videos.

That's sparking increased interest from government agencies and businesses, which are eager to bestow vision skills on all sorts of machines. Among them: drones, personal robots, in-store cameras and medical scanners that can search for disease. There are also our own phones, some of which can now be unlocked with a glance.

HOW DOES IT WORK?

Algorithms designed to detect facial features and recognize individual faces have grown more sophisticated since early efforts decades ago.

A common method has involved measuring facial dimensions, such as the distance between the nose and ear or from one corner of the eye to another. That information can then be broken down into numbers and matched to similar data extracted from other images. The closer they are, the better they match.
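To make that idea concrete, here is a minimal sketch in Python. The landmark names, coordinates and matching threshold are illustrative assumptions, not any real system's values, and real systems also normalize for face size and pose:

import math

def face_signature(landmarks):
    """Turn a dict of (x, y) landmark points into a list of pairwise distances."""
    pairs = [("left_eye", "right_eye"),
             ("nose_tip", "left_eye"),
             ("nose_tip", "mouth_center")]
    return [math.dist(landmarks[a], landmarks[b]) for a, b in pairs]

def similarity(sig_a, sig_b):
    """Smaller Euclidean distance between signatures means a closer match."""
    return math.dist(sig_a, sig_b)

known = face_signature({"left_eye": (30, 40), "right_eye": (70, 40),
                        "nose_tip": (50, 60), "mouth_center": (50, 80)})
probe = face_signature({"left_eye": (31, 41), "right_eye": (69, 40),
                        "nose_tip": (50, 61), "mouth_center": (51, 79)})

# An illustrative threshold: below it, declare the two faces a match.
print("match" if similarity(known, probe) < 5.0 else "no match")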

Such analysis is now aided by greater computing power and huge troves of digital imagery that can be easily stored and shared.

FROM FACES TO OBJECTS (AND PETS)

"Face is an old topic. It's always been pretty good. What really got everyone's attention is object recognition," says Michael Brown, a science professor at Toronto's York University who helps organize the annual Conference on Computer Vision and Pattern Recognition.

Research over the past decade has focused on the development of brain-like neural networks that can automatically "learn" to recognize what's in an image by looking for patterns in big data sets. But humans continue to help make machines smarter by labeling photos, as happens when Facebook users tag a friend.
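In rough code terms, that "learning" is a loop that repeatedly nudges a network's internal weights to better reproduce human-supplied labels. Below is a minimal sketch written with PyTorch purely for illustration (the article names no framework); random tensors stand in for a real collection of tagged photos:

import torch
from torch import nn

model = nn.Sequential(                      # a tiny convolutional network
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),                        # two made-up labels, e.g. "cat" vs "dog"
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

images = torch.rand(16, 3, 32, 32)          # 16 fake 32x32 color photos
labels = torch.randint(0, 2, (16,))         # stand-ins for human-applied tags

for step in range(100):                     # learning = repeated error correction
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)   # how wrong is the network right now?
    loss.backward()
    optimizer.step()                        # adjust weights toward fewer mistakes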

An annual image recognition competition that lasted from 2010 to 2017 drew top researchers from companies like Google and Microsoft. Among the revelations: computers can do better than humans at distinguishing between the two Welsh corgi breeds, the Pembroke and the Cardigan, in part because they're better able to quickly absorb the knowledge it takes to make those distinctions.

But computers have been confused by more abstract forms, such as statues.

THE "CODED GAZE"

The growing use of face recognition by law enforcement has highlighted longstanding concerns about racial and gender bias.

A study led by MIT computer scientist Joy Buolamwini found that systems built by companies including IBM and Microsoft were much more likely to misidentify darker-skinned people, especially women. (Buolamwini called this effect "the coded gaze.") Both Microsoft and IBM recently announced efforts to make their systems less biased by using bigger and more diverse photo repositories to train their software.
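An audit along these lines boils down to tallying a system's error rate separately for each demographic subgroup and comparing the results. A minimal sketch, with made-up records standing in for the labeled benchmarks studies like Buolamwini's actually use:

from collections import defaultdict

# Hypothetical (subgroup, predicted_correctly) pairs from some test set.
results = [("darker-skinned women", False), ("darker-skinned women", True),
           ("lighter-skinned men", True), ("lighter-skinned men", True),
           ("darker-skinned women", False), ("lighter-skinned men", True)]

totals, errors = defaultdict(int), defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: {rate:.0%} error rate")   # large gaps between groups signal bias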

© 2018 The Associated Press. All rights reserved.
