Fool Facial Recognition

Editor’s Note:

For the past three months, I have spent much of my spare time working on ways to improve both my privacy and my digital footprint on the Internet. In doing so, I realized that Big Tech’s massive collection of personal data is the epicenter of the entire Internet privacy issue. What can be done about this? When I accepted free services and social media accounts, I granted the respective Big Tech companies access to my data. Of course, I had no idea each company I signed up with would harvest a broad spectrum of my data and build a personal profile on me. I was also unaware that this information would be marketed, used to shape search engine results, and more. I was naive.

To counter all the “data harvesting” done on me by Big Tech, I decided to control the flow of information to THEM as much as possible. Early in my quest, I accepted that data previously harvested about me by Big Tech is already lost to me. Next, I accepted that some of the data Big Tech collects about me is really inconsequential. Collect it; I don’t care. My personal data is another story. There are things about me that Big Tech does not need to know. Considering the breadth and scope of everything being collected, I want to decide what Big Tech learns about me by limiting what I allow it to collect.

In my effort to control Big Tech’s scrutiny of my personal information, my plan was to focus on controlling data acquisition in three areas: Facial Recognition, Internet Usage, and Email. This became a quest, and I went down a large number of “rabbit holes”. It wasn’t for nothing: I found a number of very useful programs and reshaped the ways I use my computer to control my personal data. Except for the time invested, all of this was accomplished at very little cost. In the coming weeks, I will share articles on how I am managing my personal information in these three focus areas while protecting my privacy and lowering my digital footprint. Stay tuned. Now, let’s start with Facial Recognition.

Facial Recognition And Privacy

In terms of privacy, facial recognition is less an issue of data harvesting and more one of collateral access to your data. Microsoft, Google, Apple, and Amazon all have and use facial recognition software. It’s a great tool when you have lots of photos and want to search for pictures of Uncle Fred, or random cats, etc. In that instance, facial recognition is a time-saver. This is all well and good, but there is a deeper, more concerning issue.

Here are two articles that show the dark side of facial recognition:

From The Verge’s website article “Researchers Want to Protect Your Selfies From Facial Recognition”: “…companies like Clearview AI are scraping social media sites to build massive face databases on which they train algorithms that are then sold to police, department stores, and sports leagues.”

According to a recent article in the New York Times, Clearview AI has already scraped billions of online photos. And it’s not just the pictures that are a concern. Think about it: on their own, those photos are useless; their power comes from being linked to a database. Each picture then becomes a search key into our data profiles.
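To make that “search key” point concrete, here is a minimal Python sketch, with made-up names and numbers, of how such a lookup works. A recognition model turns a photo into a numeric “embedding”, and identifying someone is essentially a nearest-neighbor search over a database of embeddings:

```python
import numpy as np

# Toy database: each person is represented by a face "embedding"
# (a numeric vector a recognition model produces from a photo).
# Real embeddings have hundreds of dimensions; 4 keeps the sketch readable.
database = {
    "Jane Doe": np.array([0.9, 0.1, 0.3, 0.7]),
    "John Roe": np.array([0.2, 0.8, 0.5, 0.1]),
}

def identify(query_embedding, db, threshold=0.9):
    """Return the closest identity by cosine similarity, if close enough."""
    best_name, best_score = None, -1.0
    for name, emb in db.items():
        score = np.dot(query_embedding, emb) / (
            np.linalg.norm(query_embedding) * np.linalg.norm(emb)
        )
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else "unknown"

# A new street photo is embedded, then used as the search key.
street_photo_embedding = np.array([0.88, 0.12, 0.28, 0.72])
print(identify(street_photo_embedding, database))  # -> Jane Doe
```

Once a match is found, the name becomes the key into everything else the database holds about that person.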

As explained by the Project On Government Oversight (POGO): “…facial recognition’s threats to civil rights and civil liberties should be of great concern to lawmakers and the general public alike. That this surveillance tool has gone almost wholly unregulated is concerning both because of its potential for abuse, and because its algorithmic biases make it inherently more likely to misidentify people of color, particularly Black Americans, Asian Americans, Native Americans, and Pacific Islanders. Misidentifications can lead to improper police action, such as stops, searches, and even arrests targeting innocent people.”

Imagine that some guy photographs a girl he sees on the street, uploads the photo to a website, and discovers her name, last known address, phone number, where she works, what she likes to eat, her favorite music, what she reads, where she parties, her cat’s name, the routes she takes to work, and so on. The guy in this example might just be curious, or he might be a stalker, a serial rapist, a bill collector, etc. Where is there a clear, reasonable expectation of privacy on the Internet? Maybe nowhere, and that’s the problem. Another problem is that unregulated facial recognition is inherently dangerous.

A Facial Recognition Counter Measure

While doing some research on facial recognition, I ran across an application created by the SAND Lab at the University of Chicago. The app is called Fawkes (named, for you English history buffs, after Guy Fawkes). This excerpt from the SAND Lab’s website explains what the app is and how it works:

“The SAND Lab at University of Chicago has developed Fawkes, an algorithm and software tool (running locally on your computer) that gives individuals the ability to limit how unknown third parties can track them by building facial recognition models out of their publicly available photos. At a high level, Fawkes “poisons” models that try to learn what you look like, by putting hidden changes into your photos, and using them as Trojan horses to deliver that poison to any facial recognition models of you. Fawkes takes your personal images and makes tiny, pixel-level changes that are invisible to the human eye, in a process we call image cloaking. You can then use these “cloaked” photos as you normally would, sharing them on social media, sending them to friends, printing them or displaying them on digital devices, the same way you would any other photo. The difference, however, is that if and when someone tries to use these photos to build a facial recognition model, “cloaked” images will teach the model a highly distorted version of what makes you look like you. The cloak effect is not easily detectable by humans or machines and will not cause errors in model training. However, when someone tries to identify you by presenting an unaltered, “uncloaked” image of you (e.g. a photo taken in public) to the model, the model will fail to recognize you.”
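Fawkes’ actual perturbations are carefully optimized against facial recognition feature extractors, but the basic mechanics of a pixel-level change the eye cannot see are easy to demonstrate. This toy Python sketch (file names made up, and the random noise here would NOT fool a real model) just shows how small the per-pixel budget is:

```python
import numpy as np
from PIL import Image  # pip install Pillow

# Toy illustration of "imperceptible pixel changes" -- NOT Fawkes itself.
# Fawkes computes perturbations specifically optimized to mislead
# recognition models; here we only bound random noise to show how
# tiny the per-pixel change can be.
def perturb(path_in, path_out, budget=3):
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.int16)
    noise = np.random.randint(-budget, budget + 1, img.shape, dtype=np.int16)
    cloaked = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(cloaked).save(path_out)

# Each pixel shifts by at most 3/255 (~1%), far below what the eye notices.
perturb("selfie.jpg", "selfie_cloaked.jpg")
```

The crucial difference is that Fawkes chooses its changes deliberately, so a model trained on cloaked photos learns the wrong features, whereas random noise like this would simply average out during training.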

How well does it work? The SAND Lab website shows side-by-side examples of original and cloaked photos, and the cloaked versions are indistinguishable from the originals by eye.

Using the Fawkes app is easy. First, download and install either the Windows or Mac Fawkes app from the SAND Lab website. Start the app, and the small Fawkes control window will appear on your screen.

Click the Select Images button and choose the images you want to cloak, then click the Run Protection button. Finally, upload your newly cloaked images to Facebook, Twitter, Instagram, etc. The more cloaked images of yourself you upload, the harder it will be for facial recognition software to identify the real you.
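If you would rather script that Select Images / Run Protection workflow than click through the GUI, a simple batch loop does the equivalent in spirit. The sketch below reuses the hypothetical perturb() helper from the earlier example and assumes made-up folder names:

```python
from pathlib import Path

# Cloak every JPEG in ./originals and write the results to ./cloaked,
# ready to upload; uses the toy perturb() sketch from above.
src, dst = Path("originals"), Path("cloaked")
dst.mkdir(exist_ok=True)
for photo in src.glob("*.jpg"):
    perturb(photo, dst / photo.name)
    print(f"cloaked {photo.name}")
```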

The Fawkes app is free. Get it HERE.

Sources

The SAND Lab Website: http://sandlab.cs.uchicago.edu/fawkes/#code

The Verge, James Vincent: “Cloak your photos with this AI privacy tool to fool facial recognition”

ZDNet: “Fawkes protects your identity from facial recognition systems, pixel by pixel”

New York Times, Kashmir Hill: “This Tool Could Protect Your Photos From Facial Recognition”

By prometheus

Husband. Father. Grandfather. World class Geek.
