Clearview AI's system gives police unprecedented surveillance power.

When London's Metropolitan Police announced at the end of January that it would begin deploying live facial recognition surveillance across the city, a global cacophony of protest erupted. Concerns, fear and trepidation surrounding facial recognition technologies, especially those that can identify people in real time, have been simmering for decades, but the Met's decision finally caused public outrage to boil over. But how did we even get to the point where a relatively unknown startup, Clearview AI, managed to build one of the tentpoles of futuristic dystopia and begin marketing it to aspiring dictatorial regimes, all while earning the wrath of national governments and tech industry titans alike?

Clearview AI was founded in 2017 by Richard Schwartz and now-CEO Hoan Ton-That. The company counts Peter Thiel and AngelList founder Naval Ravikant among its investors. Clearview's technology is actually quite simple: Its facial recognition algorithm compares the image of a person's face from security camera footage against an existing database of potential matches. Marketed primarily to law enforcement agencies, the Clearview app allows users to take and upload a picture of a person, then view all of the publicly available images of that person, as well as links to where those photos were published. Basically, if you're caught on camera anywhere in public, local law enforcement can use that image to mine your entire online presence for information about you, effectively ending any semblance of personal privacy.
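Under the hood, this kind of search generally comes down to converting each face into a numerical "encoding" and comparing those encodings against a pre-built index of photos and the pages they came from. The sketch below, written with the open-source face_recognition library, is only a minimal illustration of that general technique, not Clearview's actual implementation; the file paths, URLs and distance threshold are hypothetical.

```python
# Minimal sketch of embedding-based face matching (illustrative only, not Clearview's code).
# Requires: pip install face_recognition  (which pulls in dlib and numpy)
import face_recognition
import numpy as np

# Hypothetical gallery: local image files paired with the pages they were scraped from.
GALLERY = [
    ("photos/profile_one.jpg", "https://example.com/profile_one"),
    ("photos/profile_two.jpg", "https://example.com/profile_two"),
]

def build_index(gallery):
    """Compute one 128-dimensional face encoding per gallery image."""
    encodings, sources = [], []
    for path, url in gallery:
        image = face_recognition.load_image_file(path)
        faces = face_recognition.face_encodings(image)
        if faces:  # skip images where no face was detected
            encodings.append(faces[0])
            sources.append(url)
    return np.array(encodings), sources

def search(index, sources, probe_path, tolerance=0.6):
    """Return source URLs whose faces fall within `tolerance` of the probe face."""
    probe_image = face_recognition.load_image_file(probe_path)
    probe_faces = face_recognition.face_encodings(probe_image)
    if not probe_faces:
        return []
    distances = face_recognition.face_distance(index, probe_faces[0])
    return [url for url, d in zip(sources, distances) if d <= tolerance]

if __name__ == "__main__":
    index, sources = build_index(GALLERY)
    print(search(index, sources, "probe/security_still.jpg"))
```

The unsettling part isn't the matching math, which is widely available open-source technology; it's the size and provenance of the gallery being searched.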

However, the technology itself isn't the issue. The problem is how the company acquired its 3 billion-image database: Clearview scraped the images from our collective social media profiles. Until it got caught, the company reportedly lifted pictures from Twitter, Facebook, Venmo and millions of other websites over the past few years. Twitter recently sent a cease-and-desist letter to Clearview after its scraping was revealed, claiming that the practice violated Twitter's policies and demanding that Clearview stop lifting images from its platform immediately.
Google and YouTube made similar claims in their cease-and-desist letter. "YouTube's Terms of Service explicitly forbid collecting data that can be used to identify a person. Clearview has publicly admitted to doing exactly that, and in response we sent them a cease-and-desist letter," YouTube spokesperson Alex Joseph said in a February statement to CBS News.
Facebook and Venmo sent cease-and-desist letters as well, though as Slate points out, Peter Thiel sits on Facebook's board yet invested $200,000 in the surveillance startup anyway.

These threats of legal consequences don't appear to have made much of an impression on Clearview CEO Hoan Ton-That. In a recent CBS interview, Ton-That argued that Clearview has a First Amendment right to scrape people's online data: "The way we have built our system is to only take publicly available information and index it that way," he said. "You have to remember that this is only used for investigations after the fact. This is not a 24/7 surveillance system."
Corporate backlash against Clearview clearly hasn't dissuaded law enforcement agencies from using the surveillance system either. According to the company, more than 600 law enforcement agencies across the US use the Clearview service -- including the FBI and DHS.
The Chicago Police Department paid $50,000 for a two-year license for the system, CBS News reports, though a spokesperson for the CPD noted that only 30 officers have access to it and the system is not used for live surveillance as it is in London.

"The CPD uses a facial matching tool to sort through its mugshot database and public source information in the course of an investigation triggered by an incident or crime," it said in a statement to CBS.
Despite the CPD's assurances that it would not misuse the system, Clearview's own marketing team appears to be pushing police departments to do exactly that. In a November email to the Green Bay PD, obtained by BuzzFeed, the company actively encouraged officers to search the database for themselves, acquaintances and even celebrities.
"Have you tried taking a selfie with Clearview yet?" the email read. "It's the best way to quickly see the power of Clearview in real time. Try your friends or family. Or a celebrity like Joe Montana or George Clooney."
"Your Clearview account has unlimited searches. So feel free to run wild with your searches," the email continued.

That's not to say that the system is completely without merit. Participating law enforcement agencies are already using it to quickly track down suspects in shoplifting, identity theft and credit card fraud cases. Clearview also claims that its app helped the NYPD track down a terrorism suspect last August, though the agency disputes the company's involvement in the case. Clearview is also reportedly being used to help locate victims of child sexual abuse; however, its use in such cases remains anecdotal at best and risks hurting the same kids it aims to help.

Using Clearview to track minors, even if done with the best of lawful intentions, is a veritable minefield of privacy and data security concerns. Because the police are expected to upload investigation images to Clearview's servers, the company could potentially collect a massive amount of highly sensitive data on any number of underage sex abuse survivors. And given that the company's security measures are untested, unregulated and unverified, the public has no assurances that data will be safe if and when Clearview's systems are attacked.

What's more, Clearview's system suffers the same shortcomings as other facial recognition systems: It's not as accurate at identifying black and brown faces as it is at white ones. The company claims that its search is accurate across "all demographic groups," but the ACLU vehemently disagrees. When Clearview pitched its services to the North Miami Police Department back in October 2019, the company included a report from a three-member panel reading, "The Independent Review Panel determined that Clearview rated 100 percent accurate, producing instant and accurate matches for every photo image in the test. Accuracy was consistent across all racial and demographic groups." The study was reportedly conducted using the same methodology as the ACLU's 2018 test of Amazon's Rekognition system, a claim the ACLU rejects; the organization notes that none of the three people on the review panel had any prior experience evaluating facial recognition systems.

"Clearview's technology gives government the unprecedented power to spy on us wherever we go -- tracking our faces at protests, [Alcoholics Anonymous] meetings church, and more," ACLU Northern California attorney Jacob Snow told BuzzFeed News. "Accurate or not, Clearview's technology in law enforcement hands will end privacy as we know it."

And police abusing their surveillance powers for personal gain is nothing new. In 2016, an Associated Press investigation found that officers around the country routinely accessed secure databases to look up information on citizens for reasons that had nothing to do with their police work, including stalking ex-girlfriends. In 2013, a Florida cop looked up the personal information of a bank teller he was interested in. In 2009, a pair of FBI agents were caught surveilling a women's dressing room where teenage girls were trying on prom dresses. These are not isolated incidents. In the same year that Clearview was founded, DC cops attempted to intimidate Facebook into giving them access to the personal profiles of more than 230 presidential inauguration protesters. With Clearview available, the police wouldn't even need to contact Facebook: the company has likely already scraped, and made searchable, whatever dirt they're looking for.

"The weaponization possibilities of this are endless," Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University, told The New York Times in January. "Imagine a rogue law enforcement officer who wants to stalk potential romantic partners, or a foreign government using this to dig up secrets about people to blackmail them or throw them in jail."

Unsurprisingly, Clearview's financial backers remain unconcerned about the system's potential for abuse. "I've come to the conclusion that because information constantly increases, there's never going to be privacy," David Scalzo, founder of Kirenaga Partners and early Clearview investor, told The New York Times. "Laws have to determine what's legal, but you can't ban technology. Sure, that might lead to a dystopian future or something, but you can't ban it."

Luckily, our elected representatives are starting to take notice of the dangers that unregulated facial recognition technologies like Clearview pose to the public. A handful of California cities, including San Francisco, Oakland and Alameda, have passed moratoriums on their local governments' use of the technology. California, New Hampshire and Oregon have passed restrictions at the state level, and a number of other municipalities are considering similar steps in the near future.

Senator Edward J. Markey (D-MA) has also taken note of Clearview's behavior. In January, the senator sent a strongly worded letter to CEO Ton-That stating, "Clearview's product appears to pose particularly chilling privacy risks, and I am deeply concerned that it is capable of fundamentally dismantling Americans' expectation that they can move, assemble or simply appear in public without being identified." The senator also included a list of 14 questions for Ton-That to address by Wednesday, February 12th.

Whether Clearview bows to legal and legislative pressure here in the US remains to be seen, but don't get your hopes up. The company is already looking to expand its services to 22 countries around the world, including a number of nations that have been accused of committing human rights abuses. Those include the UAE, Qatar and Singapore, as well as Brazil and Colombia, both of which have endured years of political and social strife. There are even a few EU nations Clearview is looking to target, including Italy, Greece and the Netherlands.

Pretty soon, we won't be able to set foot in public without our presence being noticed, cataloged and tabulated. And when the government has the ability to know where anyone is at any given time, our civil liberties will irreparably erode. All so that a handful of developers and investors could make a quick buck selling our faces to the police in the name of public safety.