Facial recognition is going mainstream. It’s showing up in consumer products like the iPhone X’s Face ID technology, in facial scans at airports and in law enforcement agencies in the U.S. and abroad. Cisco, too, is working on a facial recognition program that’s designed to recognize people’s faces during video meetings.

But how does the technology work? Who is using it and for what? And how reliable is it? Here are five things you probably didn’t know about facial recognition — but maybe you should.

1) What are the benefits of facial recognition?

There are many. It can help find abducted people and lost children, verify a person’s identity to make a payment, fight human trafficking and even scan people’s emotional expressions as they enter or exit stores. At this year’s royal wedding in England it was used to identify celebrity attendees. Unlike fingerprints and other biometric identifiers, facial recognition is a quick way to identify someone on video or from afar. And the technology is cheap. Amazon charges about $1 for every 1,000 images scanned for customers using its facial recognition service, Rekognition.

2) Who’s using the technology?

A wide variety of organizations. Rivals to Amazon’s service include Microsoft’s Facial Recognition API, IBM Watson Visual Recognition and Face++ from Chinese startup Megvii.

Mastercard is testing facial recognition to verify a cardholder’s identity, which will allow users to simply glance at their phone to authenticate a purchase. Newer passports contain microchips that include a digital photo. When a traveler passes through airport security, a camera takes a photo and compares it to the image that’s pulled from the passport microchip to verify identity.

3) What are the main usage scenarios?

There are two types of recognition tasks: verification and identification. Verification is a one-to-one task, meaning that software uses AI to scan a stored image and compare it with a second image — acquired when a user glances at their iPhone X, for example — to determine if the two match. Identification is a one-to-many task, meaning one image is compared against a gallery or database of other images — perhaps tens of millions of them — to determine if the original “probe” image matches any of the items. This is used, for example, in a police search for criminal matches or a missing person.
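The difference between the two tasks can be sketched in a few lines of code. The sketch below uses tiny made-up "face embeddings," an illustrative similarity threshold, and fictional gallery names — real systems compare vectors with hundreds of dimensions produced by a neural network, but the one-to-one vs. one-to-many logic is the same:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy 3-dimensional "face embeddings" — purely illustrative.
stored = [0.9, 0.1, 0.3]     # enrolled template, e.g. captured at device setup
probe = [0.88, 0.12, 0.29]   # freshly captured face

# Verification (one-to-one): does the probe match the single stored template?
THRESHOLD = 0.95             # illustrative acceptance threshold
is_match = cosine_similarity(stored, probe) >= THRESHOLD
print("verified:", is_match)

# Identification (one-to-many): which gallery entry best matches the probe?
gallery = {
    "alice": [0.9, 0.1, 0.3],
    "bob": [0.1, 0.9, 0.2],
    "carol": [0.3, 0.2, 0.9],
}
best = max(gallery, key=lambda name: cosine_similarity(gallery[name], probe))
print("best match:", best)
```

In practice the gallery may hold tens of millions of entries, which is why identification is far harder to do accurately than verification: every extra entry is another chance for a false match.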

4) Is the technology foolproof? 

That depends. AI software is only as smart as the data used to train it. Facial recognition technology works through pattern recognition, and computers are trained to “see” patterns using machine learning (ML). This computer vision (what one researcher calls “the coded gaze”) is achieved by creating a training set comprising examples of faces. The more images of faces that are fed into a system, the more accurate the AI software becomes. There is nothing intrinsically objective about facial recognition. If, for example, many more images of white men than of black women are fed into the system, it will be worse at identifying the black women. A recent study by MIT Media Lab researcher Joy Buolamwini uncovered algorithmic bias in facial recognition. Buolamwini found that three leading tech companies’ commercial AI systems showed severe bias when classifying by gender and skin type.

5) Ethical concerns

Critics fear algorithmic bias can lead to discriminatory practices. A Georgetown Law School report estimates that 117 million American adults are in face recognition networks used by law enforcement. The report warns that African Americans are most likely to be singled out because they are disproportionately represented in mug-shot databases.

The ACLU and other civil rights groups are urging Axon — the maker of police body cameras — to abandon its plan to integrate facial recognition with live video from the cameras. This would potentially give patrol officers the ability to scan and recognize the faces of everyone they see.

Today, facial recognition systems are lightly regulated and are not audited for misuse. That’s why Microsoft recently urged Congress to regulate this technology.

The Quest for Algorithmic Accountability

Buolamwini notes that even flawless facial recognition technology is open to abuse in the hands of, say, authoritarian governments or overzealous marketers. China has an estimated 200 million surveillance cameras and counting — four times more than the United States. Many of these are fitted with AI, including facial recognition technology. Chinese police are even experimenting with facial-recognition glasses. Together, these technologies form the backbone of a new social credit system that serves as a public pillory in Chinese society.


Buolamwini is spearheading the development of an IEEE standard to address some of the technology’s deficiencies and to spur greater algorithmic accountability. “If we fail to make ethical and inclusive AI, we risk losing gains made in civil rights and gender equity under the guise of machine neutrality,” she says.

Facial recognition for Cisco technology

And what of Cisco’s internal efforts to build a program that recognizes the faces of video meeting participants? To train that system, the team building the program wants to capture video selfies of 5,000 unique participants — and it’s turning to employees to upload them, says Keith Griffin, a principal engineer on the team.

“Choosing the right algorithm is certainly important for our face recognition solution,” Griffin says. “However, having a diverse dataset is far more important. While algorithmic bias can exist, it is amplified significantly by data bias — which we must avoid.”

What’s next for facial recognition technology? There are more questions about its future than answers. Should you have the right to not be scanned by facial recognition systems? Should government step in to regulate the technology? Should industry regulate itself?

This much is certain: the facial recognition genie is out of the bottle — and it may be too late for you to own your own face.