AI Model Creates Invisible Digital Masks to Defend Against Unwanted Facial Recognition


Just as a chameleon changes colors to mask itself from predators, new AI-powered technology is protecting people’s photos from online privacy threats.

The innovative model, developed at Georgia Tech, creates invisible digital masks for personal photos that thwart unwanted online facial recognition while preserving image quality.

Anyone who posts photos of themselves risks having their privacy violated by unauthorized facial image collection. Online criminals and other bad actors collect facial images by web scraping to create databases.

These illicit databases enable criminals to commit identity fraud, stalking, and other crimes. The practice also opens victims to unwanted targeted ads and attacks.

The new model is called Chameleon. Unlike current models, which generate a separate mask for each of a user’s photos, Chameleon creates a single, personalized privacy protection (P-3) mask that covers all of a user’s facial photos.

A bespoke P-3 mask is created from a handful of user-submitted facial photos. Once the mask is applied, protected photos can no longer be matched by someone scanning for the user’s face; instead, the scan identifies the photos as belonging to someone else.
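Conceptually, the mask works like a small, bounded perturbation added to every photo of the same user. The Python sketch below illustrates only that general idea; the perturbation budget, array shapes, and function name are assumptions made for illustration, not details taken from the Chameleon paper.

```python
import numpy as np

# Illustrative sketch only: EPSILON, the shapes, and apply_p3_mask
# are hypothetical, not values or APIs from the Chameleon paper.
EPSILON = 8.0 / 255.0  # assumed per-pixel perturbation budget

def apply_p3_mask(photos: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Add the same bounded perturbation to every photo of one user.

    photos: floats of shape (N, H, W, 3) with values in [0, 1]
    mask:   floats of shape (H, W, 3), clipped to the budget
    """
    bounded = np.clip(mask, -EPSILON, EPSILON)
    return np.clip(photos + bounded, 0.0, 1.0)

# Three 64x64 photos of the same user share one mask.
photos = np.random.rand(3, 64, 64, 3)
mask = np.random.uniform(-EPSILON, EPSILON, size=(64, 64, 3))
protected = apply_p3_mask(photos, mask)
```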

The Chameleon model was developed by Professor Ling Liu of the School of Computer Science (SCS), Ph.D. students Sihao Hu and Tiansheng Huang, and Ka-Ho Chow, an assistant professor at the University of Hong Kong and Liu’s former Ph.D. student.

During development, the team achieved its two main goals: protecting the identity of the person in the photo and ensuring a minimal visual difference between the original and masked photos.

The researchers said current masking models often introduce a noticeable visual difference between the original and masked photos. Chameleon, by contrast, preserves much of the original photo’s quality across a variety of facial images.
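One way to read those two goals is as a trade-off in a single optimization: the mask should push a recognition model’s embedding away from the user’s true identity while staying visually close to the originals. The PyTorch sketch below is a minimal illustration under that reading; the toy embedder, loss weights, and perturbation budget are assumptions, not the team’s actual method.

```python
import torch
import torch.nn.functional as F

# Toy stand-in for a real face recognition embedder (assumption).
embedder = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(64 * 64 * 3, 128),
)

photos = torch.rand(3, 3, 64, 64)                  # one user's reference photos
mask = torch.zeros(3, 64, 64, requires_grad=True)  # one mask for all photos
identity = embedder(photos).mean(dim=0).detach()   # the user's true identity
opt = torch.optim.Adam([mask], lr=1e-2)

for _ in range(200):
    masked = (photos + mask).clamp(0.0, 1.0)
    emb = embedder(masked)
    # Goal 1: push masked embeddings away from the true identity.
    protect = F.cosine_similarity(emb, identity.expand_as(emb)).mean()
    # Goal 2: keep masked photos visually close to the originals.
    fidelity = F.mse_loss(masked, photos)
    loss = protect + 10.0 * fidelity    # trade-off weight is an assumption
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        mask.clamp_(-8 / 255, 8 / 255)  # assumed per-pixel budget
```

Because a single mask is optimized against several photos at once, the same perturbation generalizes across a user’s images rather than being recomputed for every photo.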

In several research tests, Chameleon outperformed three leading facial recognition protection models on both visual-quality and protection metrics, while also being faster and more resource-efficient.

Looking ahead, Huang said the team would like to apply Chameleon’s techniques to other problems.

“We would like to use these techniques to protect images from being used to train artificial intelligence generative models. We could protect the image information from being used without consent,” he said.

The research team aims to release Chameleon’s code publicly on GitHub so that others can build on and improve the work.

“Privacy-preserving data sharing and analytics like Chameleon will help to advance governance and responsible adoption of AI technology and stimulate responsible science and innovation,” said Liu.

The paper on Chameleon, “Personalized Privacy Protection Mask Against Unauthorized Facial Recognition,” was presented earlier this month at ECCV 2024.

News Contact

Morgan Usry, Communications Officer, School of Computer Science