Clearview AI is in the limelight yet again after the Office of the Australian Information Commissioner (OAIC) and the United Kingdom’s Information Commissioner’s Office (ICO) opened a joint investigation into the company’s personal information handling practices. According to the two public bodies, the inquiry will focus on whether the firm’s “scraping” and handling of personal data violate the UK Data Protection Act 2018 and the Australian Privacy Act 1988.
“The investigation highlights the importance of enforcement cooperation in protecting the personal information of Australian and UK citizens in a globalised data environment,” said the Office of the Australian Information Commissioner and the UK’s Information Commissioner’s Office in a statement.
Three Billion Faces in the Data Registry
It is reported that Clearview AI’s facial recognition app features a database consisting of over three billion images of people’s faces that have been “scraped” from a range of popular social media platforms and sites, including Facebook, YouTube and Google. Users can take advantage of the app’s large database by uploading a photograph of an individual’s face to Clearview AI and discover where else that specific person’s face appears online.
This all comes after the CEO of Clearview AI, Hoan Ton-That, mentioned to NBC News NOW that his company is in discussions with federal and state agencies to help track those infected with COVID-19 using facial recognition. Despite the investigation, Ton-That claims that Clearview AI only collects photographs in compliance with relevant laws.
“Clearview AI searches publicly available photos from the internet in accordance with applicable laws. It is used to help identify criminal suspects,” said Ton-That in a statement provided to Engadget. “Its powerful technology is currently unavailable in the UK and Australia. Individuals in these countries can opt-out.”
Up until The New York Times exposed the company’s practices in January of this year, Clearview AI was relatively unknown. Following the report, the firm was sent cease-and-desist notices from the likes of Facebook, Twitter, YouTube, and Google over the collection of photos for its app.
In recent weeks, Clearview AI suspended its contract with the Royal Canadian Mounted Police (RCMP) after claims that the firm unlawfully collected personal data and shared it with police. The Office of the Privacy Commissioner of Canada (OPC) has also opened an investigation into the RCMP’s use of the firm’s facial recognition technology.
Lawmakers to Prohibit the Use of Facial Recognition Technologies by Law Enforcement Agencies
With governments across the globe implementing strict measures necessary to manage and prevent COVID-19, the use of facial recognition — particularly by police — has come under scrutiny. As the use of facial recognition technologies continues to increase, both employees and human rights activists are demanding that companies comply with basic human rights laws when gathering personal information, especially photographs of individuals.
On 25 June 2020, a group of lawmakers in the U.S. introduced legislation to prohibit the use of facial recognition technologies by law enforcement and government agencies without approval from Congress. Regulatory authorities in Europe are also in the midst of establishing strict rules on AI companies and the use of personal data to build public trust in the technology.