A New Anti-Facial Recognition System

LowKey is a new anti-facial recognition system developed by researchers from the University of Maryland. Learn how it works.

Photo by Tobias Tullius on Unsplash

Facial recognition software has become more and more powerful with improvements in deep learning, and the privacy concerns around it have grown correspondingly. Many facial recognition systems build their databases by crawling publicly available pictures on the internet, meaning your face might be in a database somewhere without you knowing about it. One way to avoid this problem is to never post your face on the internet, but in the age of social media that may be infeasible. Another solution is to alter the image to trick the facial recognition software while maintaining image quality, so that you can still use the image. This is the approach of the "LowKey" method devised by researchers at the University of Maryland.

LowKey exploits the fact that most facial recognition systems are built on neural networks, which are known to be vulnerable to adversarial attacks: small changes to the input of a neural network that cause it to misclassify that input. The intended use case is as follows. You apply the LowKey adversarial attack to a selfie and upload the result to the internet. The LowKey image is picked up by a facial recognition database. Later, you go outside and a surveillance camera takes a picture of you (called the "probe image"). The system, however, is unable to match your probe image to the LowKey image in its database. You are safe.
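To make the idea of an adversarial attack concrete, here is a minimal sketch of the classic fast gradient sign method (FGSM) in PyTorch. Note this is a generic illustration of adversarial perturbations, not LowKey's actual attack; the model, label, and epsilon value are placeholder assumptions:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    """One-step FGSM: nudge x in the direction that increases the
    classification loss, making the model more likely to misclassify it."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Perturb each pixel by +/- epsilon along the sign of the loss gradient.
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()
```

Even a perturbation this small can flip a network's prediction while the image looks essentially unchanged to a human.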

Source: LowKey paper

The Details:

LowKey’s goal is to perform well against all facial recognition deep learning systems. However, we don’t know the architecture of some of the systems we are trying to defeat. If we train our adversarial attack to defeat one particular facial recognition network that we have access to, we cannot guarantee it will work in the field against other networks. There is no perfect solution to this problem.

The LowKey researchers decided to train their adversarial attack on an ensemble of the best current open-source facial recognition neural networks, hoping that the ensemble would give the attack better generalizability. First, for each model in the ensemble, they computed that model’s output on the original input image. Then they applied the LowKey adversarial attack to the input image and computed the model’s output on the modified image. Next, they computed the difference between the two outputs. They did this for each model in the ensemble and took the sum of the differences, which they aimed to maximize. The bigger this sum, the less likely a facial recognition network is to match the original image and the LowKey-modified image as the same person.
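In code, that first objective might look like the sketch below. The `models` list, the use of squared L2 distance on raw embeddings, and the omission of the paper's preprocessing step are all simplifying assumptions here:

```python
import torch

def ensemble_feature_distance(models, x, x_adv):
    """Sum over the ensemble of embedding distances between the
    original image x and the perturbed image x_adv."""
    total = torch.zeros(())
    for f in models:
        emb_clean = f(x).detach()  # embedding of the original image (fixed target)
        emb_adv = f(x_adv)         # embedding of the candidate LowKey image
        total = total + torch.sum((emb_adv - emb_clean) ** 2)
    return total  # the attack ascends the gradient of this quantity
```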

Second, the researchers wanted the modified image to still be recognizable to humans. To achieve this, they decided to minimize the LPIPS metric between the original and LowKey images. LPIPS (Learned Perceptual Image Patch Similarity) is a measure of similarity between two images that is calibrated against human perceptual judgments. A lower LPIPS means higher similarity.
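For reference, a standard open-source implementation of LPIPS is available as the `lpips` Python package (whether LowKey uses this exact implementation is an assumption here). A minimal usage sketch:

```python
import torch
import lpips  # pip install lpips

# LPIPS compares deep network features of two images; lower = more similar.
perceptual = lpips.LPIPS(net="alex")  # AlexNet backbone is a common default

img0 = torch.rand(1, 3, 256, 256)                           # a random "original" image
img1 = (img0 + 0.05 * torch.randn_like(img0)).clamp(0, 1)   # slightly perturbed copy

# normalize=True rescales [0, 1] inputs to the [-1, 1] range LPIPS expects.
distance = perceptual(img0, img1, normalize=True)
print(distance.item())  # small value, since the images are nearly identical
```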

LowKey therefore has two objectives: maximize the distance between the original and LowKey images as measured by the ensemble of open-source facial recognition models, and minimize the LPIPS between the same two images. In mathematical notation, the total objective can be written as:

$$\max_{x'} \; \sum_{i=1}^{n} \Big( \big\| f_i(A(x')) - f_i(A(x)) \big\|_2^2 + \big\| f_i(A(G(x'))) - f_i(A(x)) \big\|_2^2 \Big) \; - \; \alpha \,\mathrm{LPIPS}(x, x')$$

The combined objective (adapted from the LowKey paper’s notation)

Clarifications:

  • x is the original image
  • x’ is the LowKey image
  • n is the number of models in the training ensemble
  • f_i is the i-th ensemble model
  • A is an image preprocessing function
  • G is a Gaussian smoothing function
  • α is a weight that balances the LPIPS penalty against the feature distance term

Notice that there are two versions of the first objective: one with a Gaussian smoothing function applied to the LowKey image, and one without. The researchers included the smoothed version because it improved results. The total objective is maximized with gradient ascent, and the final x’ is output as the LowKey image.
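Putting the pieces together, the optimization loop might look roughly like the following sketch. The step count, learning rate, alpha, blur parameters, and the omission of the paper's preprocessing function A are all illustrative assumptions, not the paper's actual settings:

```python
import torch
import lpips
from torchvision.transforms import GaussianBlur

def lowkey_style_attack(models, x, steps=50, lr=0.01, alpha=0.05):
    """Gradient ascent on: ensemble feature distance (plain + smoothed)
    minus an LPIPS penalty that keeps the image recognizable."""
    perceptual = lpips.LPIPS(net="alex")
    blur = GaussianBlur(kernel_size=7, sigma=3.0)  # stands in for G
    x = x.detach()
    # Start from a tiny random perturbation so the distance gradient is nonzero.
    x_adv = (x + 1e-3 * torch.randn_like(x)).clamp(0, 1).requires_grad_(True)
    optimizer = torch.optim.Adam([x_adv], lr=lr)
    targets = [f(x).detach() for f in models]  # f_i(x), computed once

    for _ in range(steps):
        optimizer.zero_grad()
        feat_term = torch.zeros(())
        for f, t in zip(models, targets):
            feat_term = feat_term + torch.sum((f(x_adv) - t) ** 2)        # plain term
            feat_term = feat_term + torch.sum((f(blur(x_adv)) - t) ** 2)  # smoothed term
        objective = feat_term - alpha * perceptual(x, x_adv, normalize=True).sum()
        (-objective).backward()  # Adam minimizes, so negate for ascent
        optimizer.step()
        with torch.no_grad():
            x_adv.clamp_(0, 1)  # keep pixel values in a valid range
    return x_adv.detach()
```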

The LowKey researchers released an online web tool if you want to try it yourself. It can be found here. As an example, this is what it does to a sample image:

Source: Ali Kazal from Unsplash

Results and Limitations:

The researchers tested LowKey against two commercially available facial recognition APIs, Amazon Rekognition and Microsoft Azure Face. On both APIs, LowKey protected user faces so that they were recognized less than 3% of the time, whereas without LowKey protection the two systems recognized faces more than 90% of the time. That is a monumental difference.
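If you wanted to run a similar check yourself, a sketch using Amazon Rekognition's compare_faces API (via boto3) might look like this. The file names and similarity threshold are placeholder assumptions, and this is a single-pair comparison rather than the paper's full evaluation protocol:

```python
import boto3

client = boto3.client("rekognition")

# Hypothetical files: a LowKey-protected photo and a fresh "probe" photo.
with open("lowkey_selfie.jpg", "rb") as f:
    gallery_bytes = f.read()
with open("probe_photo.jpg", "rb") as f:
    probe_bytes = f.read()

response = client.compare_faces(
    SourceImage={"Bytes": probe_bytes},
    TargetImage={"Bytes": gallery_bytes},
    SimilarityThreshold=80,  # only report matches above 80% similarity
)

# An empty FaceMatches list means Rekognition failed to link the two photos.
print("matched" if response["FaceMatches"] else "not matched")
```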

However, it remains to be seen whether LowKey works as well against other, perhaps classified, facial recognition systems. Also, one way for facial recognition systems to get around LowKey protection would be to include LowKey images in their training data. This could set off an arms race: an anti-facial recognition algorithm like LowKey is released, facial recognition companies respond by training new models that account for it, a new algorithm is released in turn, and so on. In other words, it is possible that LowKey will one day stop being effective.

Regardless of these doubts, however, LowKey is an important step towards privacy in the internet and machine learning age. It demonstrates that an intuitively simple adversarial attack can fool current facial recognition systems while maintaining image quality. For more details, please refer to the original paper here.

