The Office of the Australian Information Commissioner (OAIC) announced this week that it would take no further action against the facial recognition company Clearview AI. The decision marks a significant victory for one of the world's most controversial technology companies, and it raises important questions about privacy regulation in the digital age.

Clearview AI, founded in 2017 by Australian citizen Hoan Ton-That, has developed a facial recognition tool trained on more than 50 billion photographs scraped from social media websites and the broader web, a practice that has prompted serious privacy concerns.
In 2021, Australia’s privacy regulator ruled that Clearview AI had breached privacy laws by scraping millions of photographs from social media platforms such as Facebook and using them to train its facial recognition tool. The OAIC ordered Clearview AI to cease collecting images and to delete the ones it had already gathered. However, despite the ruling, there has been no evidence that Clearview AI followed this order. Earlier this year, media reports suggested that the company was continuing its operations in Australia, collecting more images of Australian citizens.
Given this backdrop, why did the OAIC decide to stop pursuing action against Clearview AI? What implications does this decision have for the broader fight to protect individuals’ privacy in the era of big tech? And how might Australian law need to change to provide regulators with the tools they need to effectively oversee companies like Clearview AI?
Clearview AI claims its facial recognition tool is 99% accurate in identifying individuals in photographs. The tool was initially offered to police authorities for trials in several countries, including the United States, the United Kingdom, and Australia. It has also been used by Ukraine to identify Russian soldiers during Russia's invasion of the country. Despite these applications, the technology has been widely criticized on ethical and legal grounds.
Several countries have taken legal action against Clearview AI. In 2022, the United Kingdom's privacy watchdog fined the company A$14.5 million for violating privacy laws. However, this decision was later overturned on the grounds that UK authorities lacked jurisdiction to issue fines to a foreign company. Similarly, France, Italy, Greece, and other European Union countries have imposed fines of A$33 million or more on Clearview AI, with additional penalties levied when the company failed to comply with legal orders.
In the United States, Clearview AI faced a class-action lawsuit, which was settled in June 2023. The settlement allowed the company to continue selling its tool to US law enforcement agencies, but it prohibited sales to the private sector. These actions underscore the global concerns regarding Clearview AI’s operations and the broader issues surrounding facial recognition technology.
Australia’s Privacy Laws and Clearview AI
In Australia, the OAIC’s 2021 ruling found that Clearview AI had violated the country’s privacy laws by collecting images of Australians without their consent. The regulator ordered the company to stop collecting these images and to delete those already collected within 90 days. Unlike other jurisdictions, Australia did not impose a fine on Clearview AI. The absence of a financial penalty may have contributed to the company’s apparent disregard for the OAIC’s orders.
So far, there is no evidence to suggest that Clearview AI has complied with the OAIC’s directives. Reports indicate that the company continues to collect images of Australians, raising serious concerns about the effectiveness of Australia’s privacy laws. Under the Privacy Act, when an organization does not comply with a regulatory decision, the OAIC has the option to commence enforcement proceedings in court. However, in the case of Clearview AI, the OAIC opted not to pursue this course of action.
The OAIC's decision to cease pursuing action against Clearview AI highlights several significant shortcomings in Australia's current privacy laws. First, it underscores the limitations of the OAIC's enforcement powers under the existing legal framework. Unlike other countries that have imposed substantial fines on Clearview AI, Australia lacks strong enforcement mechanisms to compel compliance with privacy rulings. Significant penalties for privacy breaches remain rare in Australia, weakening the deterrent effect of the law.
Second, the OAIC’s decision reflects the resource constraints faced by the regulator. The OAIC has limited capacity to investigate and pursue multiple large cases simultaneously. For example, the OAIC’s investigation into the use of facial recognition technology by Bunnings and Kmart has been pending for more than two years. In this context, the decision not to continue pursuing Clearview AI may have been influenced by the need to allocate resources to other pressing investigations.
The Need for Stronger Privacy Protections
The OAIC’s decision not to pursue further action against Clearview AI has sparked renewed calls for stronger privacy protections in Australia. Privacy advocates argue that the current legal framework is insufficient to protect individuals’ rights in the face of increasingly sophisticated surveillance technologies. As technology evolves, so too must the laws that regulate its use.
There is hope that forthcoming privacy law reforms in Australia will strengthen the legal protections for individuals’ privacy and provide more robust enforcement powers to the OAIC. The Australian government has been reviewing the Privacy Act, with a view to introducing amendments that reflect the challenges of the digital age. Potential reforms may include higher penalties for breaches of privacy laws, increased powers for the OAIC to investigate and enforce compliance, and new obligations for companies that collect and use personal data.
While general privacy law reforms are necessary, they may not be sufficient to regulate high-risk technologies such as facial recognition. Experts have called for specific rules to address the unique risks these technologies pose. For example, former Australian Human Rights Commissioner Ed Santow has proposed a model law to regulate the use of facial recognition technologies. Such a law could set out clear guidelines on when and how facial recognition can be used, and establish oversight mechanisms to ensure compliance.
Internationally, other countries have begun developing special rules for facial recognition tools. The European Union’s recently adopted Artificial Intelligence Act, for instance, prohibits certain uses of facial recognition technology and sets strict rules around its development and deployment. These developments provide valuable lessons for Australia as it considers how best to regulate high-risk technologies.
The regulation of facial recognition technology is a complex issue, and many countries around the world are still grappling with how to establish appropriate rules. On the one hand, facial recognition technology has the potential to provide significant benefits, such as enhancing security, improving law enforcement capabilities, and supporting public safety initiatives. On the other hand, it raises serious privacy concerns, including the risk of mass surveillance, discrimination, and the erosion of civil liberties.
Balancing these competing interests requires a nuanced approach. Regulations need to ensure that facial recognition technology is used responsibly and ethically, with robust safeguards in place to protect individuals’ privacy. This may involve imposing strict limitations on the use of facial recognition, requiring transparency from companies that develop and deploy the technology, and ensuring that individuals have the ability to challenge and seek redress for unlawful uses of their personal data.
What’s Next for Privacy Regulation in Australia?
The Australian government should consider taking specific actions to prevent companies like Clearview AI from using the personal data of Australians to develop surveillance technologies. This could include introducing legislation that explicitly prohibits the unauthorized collection and use of personal data for facial recognition purposes. Additionally, the government could establish clear rules about when facial recognition can be used and when it is prohibited. These rules should be informed by principles of privacy, proportionality, and accountability.
The government could also look to international examples for guidance. The EU’s Artificial Intelligence Act and similar initiatives offer valuable insights into how to approach the regulation of facial recognition technology. By adopting a comprehensive legal framework that addresses the specific risks posed by facial recognition, Australia can ensure that its privacy laws remain fit for purpose in the digital age.
The OAIC’s decision to drop its case against Clearview AI is a wake-up call for Australia’s privacy regulators and lawmakers. It highlights the need for stronger legal protections and more robust enforcement mechanisms to safeguard individuals’ privacy in the face of rapidly advancing technology. As the use of facial recognition technology continues to grow, so too must the legal and regulatory frameworks that govern its use.
In the battle to protect privacy in the age of big tech, regulators need the tools and resources to hold companies accountable. This requires not only reforming existing privacy laws but also developing new rules specifically designed to address the unique challenges posed by technologies like facial recognition. By taking these steps, Australia can ensure that it is at the forefront of protecting individuals’ privacy rights in the digital era.