How would you like it if anyone armed with an iPhone could figure out a slew of information about you, pull up any Facebook or Instagram picture you’ve ever been in, and see any other image of you that’s been posted publicly online?
This scenario appears to be possible thanks to a shady startup called Clearview AI which, as a New York Times investigation earlier this year revealed, has been mining your pictures online to build a vast facial recognition database. At first, Clearview AI maintained that its tool was only meant to be used by law enforcement and a few private companies. But it later became clear that the company has consistently misrepresented both the extent of its work and the breadth of its aspirations. Now the company seems to be reversing its position, again.
This is concerning. Facial recognition is an incredibly powerful tool, and Clearview’s tech is trafficking in highly personal information — including, potentially, yours. (If you’re a resident of California or the European Union, feel free to request the data Clearview has on you here, but note you’ll likely need to send them a copy of government-issued photo ID.)
The controversy surrounding Clearview has since prompted probes from US senators. The very existence of facial recognition technology has always been a source of debate. Law enforcement has been using facial recognition for several years now, and companies, schools, and other organizations are increasingly making use of the AI-powered software. But Clearview’s technology represents a frightening step toward an all-powerful system that the world hasn’t seen before. The secretive company says it has created a database of more than 3 billion images that have been scraped from all corners of the internet, including social networks like Facebook, Instagram, and YouTube. From just a snapshot or video still, Clearview claims its app lets someone using the tech identify a face and match it with publicly available information about the person, all within just a few seconds.
Do we want to live in a world where this technology exists? Clearview argues that the tech can help track down dangerous people, and its site points to “child molesters, murderers, suspected terrorists.” And as the Times reported in February, the company’s facial recognition has helped identify child victims in exploitative videos posted to the web. But clearly the tech can be used for a lot more than that.
And critics say facial recognition is way too risky, enabling excessive surveillance and threatening our privacy rights. Another concern is that the technology has repeatedly been shown to be less accurate on people of color, women, and other minority groups.
So here’s how Clearview’s tool works. Say you have an image of a person, but you don’t know their name. You could input that photo into Clearview’s app, and it will turn up any image of the person that it has scraped from the internet, along with links to the websites those images came from. That could be a good amount of information.
Again, Clearview’s database reportedly includes more than 3 billion images taken from around the web. That’s much more than what law enforcement agencies typically have access to. The Times reports that the technology will work with images of faces from many different angles, while older facial recognition tools used by police departments might require the subject to be looking straight ahead, like in a mug shot.
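The search flow the article describes — take a query photo, compare it against a database of scraped faces, and return the pages each match came from — can be illustrated with a toy sketch. Clearview has not published its actual method; this is only a minimal stand-in that assumes faces have already been converted to numeric embeddings (real systems use a neural network for that step), and all vectors, URLs, and function names here are hypothetical.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical index mapping face embeddings to the public pages they
# were scraped from. The toy 3-d vectors stand in for real embeddings.
scraped_index = [
    ([0.90, 0.10, 0.30], "https://example.com/social-post-1"),
    ([0.20, 0.80, 0.50], "https://example.com/news-photo"),
    ([0.85, 0.15, 0.35], "https://example.com/social-post-2"),
]

def search(query_embedding, index, threshold=0.95):
    """Return source URLs whose stored face is close to the query face."""
    matches = []
    for embedding, url in index:
        if cosine_similarity(query_embedding, embedding) >= threshold:
            matches.append(url)
    return matches
```

A query embedding close to the first and third entries would return both of their source URLs but not the second — which is the article's point: one snapshot can surface every page a similar face appears on, regardless of who posted it. At billions of images, a real system would replace the linear scan with an approximate nearest-neighbor index.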
That means images from social media posts on platforms like Instagram could pop up — even images that are no longer, but once were, publicly available. And keep in mind: The tool doesn’t just surface pictures that you’ve taken and posted online. It will also turn up any photos posted of you, even those posted without your consent or knowledge.
“Clearview is a search engine for publicly available images,” Clearview CEO Hoan Ton-That told Recode in an email. “We do not and cannot index any images that are private or protected, such as those in a private Instagram account. A previously public image may or may not be searchable with Clearview depending on the time it was public and that would depend on the individual case.”
Heilweil, Rebecca. “The World’s Scariest Facial Recognition Company, Explained.” Vox, February 11, 2020. https://www.vox.com/recode/2020/2/11/21131991/clearview-ai-facial-recognition-database-law-enforcement.
When it comes to technology and social media, one of the most challenging topics to tackle is privacy. It is a constant concern for society, and businesses always strive to defend their services; some may genuinely care, while others may simply not want to get into trouble. This post revealed some new facts to me, such as the claim that these big data-gathering companies do not have access to images that individuals publish on their "private" accounts. But I'm not sure whether that claim is true. Personally, I feel that if we put something on the internet, or even upload it to a cloud service, we cannot be certain that it will not be accessed by some person or company.
As a graduate student interested in working on social media issues, privacy appears to be an important consideration to explore. However, privacy is a far larger and more complex subject than I can deal with in my thesis research.