AI Summarization by Meta and Google Risks Defamation Issues Amid Legal Uncertainties in Australia


As artificial intelligence (AI) becomes more embedded in digital services, the latest features from Google and Meta have raised significant concerns among legal experts. Both companies now use AI to summarize user comments and reviews, offering condensed insights on restaurants, locations, and even comments on social media posts. Experts caution, however, that such innovations could expose these tech giants to defamation claims, reshaping the legal landscape around online publication and AI-generated content.

The legal debate follows a landmark 2021 decision by the Australian High Court in the Dylan Voller case, where media outlets were held liable for defamatory third-party comments on their Facebook pages. This decision, alongside recent defamation cases involving Google, highlights the potential risks for tech companies when AI-generated summaries disseminate harmful or defamatory information. As Meta and Google roll out AI-powered summarization tools, the boundaries of liability for content generation and publication are being tested in unprecedented ways.

In October 2024, Google expanded its new AI capabilities, powered by its large language model Gemini, to users in the United States and Australia. Now available in Google Maps and Google Search, the AI tool can analyze user reviews and create concise summaries of places to visit, offering recommendations based on the general sentiment of reviews. Meanwhile, Meta has introduced similar AI-generated summaries on Facebook, providing brief overviews of user comments under posts. These features are intended to streamline user experiences by distilling thousands of individual comments and reviews into concise responses.

Despite the benefits, the risk of AI-generated content inadvertently disseminating defamatory statements has not gone unnoticed. Legal professionals in Australia warn that while these summarization features may enhance user convenience, they also increase the possibility of tech companies becoming publishers of defamatory material, with significant legal and financial repercussions.

Under Australian defamation law, individuals making defamatory comments on platforms such as Facebook or Google traditionally bear the brunt of liability. However, the 2021 High Court ruling in the case of Dylan Voller—a young Indigenous man subjected to mistreatment in the Don Dale Youth Detention Centre—set a critical precedent. The court held that media companies could be liable for defamatory third-party comments posted on their social media pages, asserting that they acted as publishers by providing a platform for these comments.

This ruling has since been a guiding principle for how Australian courts interpret platform liability, establishing that hosting or amplifying defamatory content can equate to publication. For Google and Meta, which now use AI to compile and summarize user-generated content, the implications are clear: if AI-generated summaries contain or amplify defamatory statements, they could be considered publishers of that content, potentially opening them to defamation lawsuits.

Both Google and Meta have faced defamation cases in Australia in recent years, with significant financial consequences. In 2022, Google was ordered to pay former NSW deputy premier John Barilaro over $700,000 over a defamatory YouTube video targeting him, while in 2020 the company was ordered to pay $40,000 in damages after its search engine linked to a defamatory article about a Melbourne lawyer.

These cases underscore that Australian courts are increasingly willing to hold tech companies accountable for defamatory content disseminated on their platforms, even when that content is user-generated or aggregated by algorithms. As AI-driven tools like Gemini enter the scene, with their capacity to synthesize vast amounts of user content, these companies may soon face more complex legal challenges concerning their role as content curators and publishers.

Michael Douglas, a defamation expert and consultant at Bennett Law, warns that AI-powered content summarization tools like those launched by Google and Meta will almost inevitably attract defamation lawsuits. If these tools produce defamatory summaries, he points out, the platforms may be regarded as publishers of that content.

“If Meta sucks up comments and spits them out, and if what it spits out is defamatory, it is a publisher and potentially liable for defamation,” said Douglas. He further suggests that while tech companies may attempt to invoke certain defences, such as “innocent dissemination” under Australian defamation law, these defences might not hold up in cases involving AI, given the companies’ awareness of the risks associated with algorithmic aggregation.

Liability may ultimately hinge on the “digital intermediaries” provisions in state defamation laws. These provisions were recently enacted to clarify the liability of platforms, but, as Douglas notes, it is unclear whether they cover the specific role of AI in content dissemination.

The rapid advancement of AI technologies has brought new challenges that current defamation laws may not adequately address. David Rolph, a professor of law at the University of Sydney, notes that Australia’s most recent defamation reforms introduced a “serious harm” threshold designed to filter out trivial claims. But those reforms were not drafted with AI in mind, particularly AI capable of generating unique responses for each user.

“The most recent defamation law reform process obviously didn’t grapple with the new permutations and problems presented by AI,” Rolph explained. He highlighted the inherent lag in legal reform relative to technological progress, emphasizing the need for defamation law to adapt more frequently.

However, Rolph acknowledges that AI’s capacity to generate multiple, unique responses might mitigate potential harm, as fewer people may see any particular allegedly defamatory statement. Nonetheless, if a harmful AI-generated summary reaches even a small audience, the implications for liability remain serious, underscoring the necessity for clearer guidelines as technology evolves.

Both Meta and Google have responded cautiously to questions about defamation risks in their AI-driven tools. Asked about the possibility of defamatory AI-generated summaries, Miriam Daniel, vice president and head of Google Maps, explained that the Gemini model was designed to highlight balanced views by analyzing common themes across numerous reviews. Google has also emphasized its commitment to removing fake reviews and other content that violates its policies, aiming to ensure that Gemini’s summaries present a neutral picture of a location or activity.

“We look for enough common themes from enough reviewers, both positive sentiments and negative sentiments, and try to provide a balanced view when we provide the summary,” Daniel said. Google’s moderation efforts appear aimed at reducing the likelihood of defamatory content, though it is unclear how effective these safeguards will be.

Meta, too, has acknowledged the risk of AI summarization generating unintended responses. A Meta spokesperson stated that while the AI used to generate comment summaries is still in its early stages, the company is making ongoing improvements to minimize potential inaccuracies. The spokesperson added that Meta informs users that AI-generated outputs may not always be accurate, helping to manage user expectations and address potential errors preemptively.

“We share information within the features themselves to help people understand that AI might return inaccurate or inappropriate outputs,” the spokesperson noted, underscoring Meta’s cautious approach to the technology.
