By Asmita - Mar 05, 2025
Google has revealed that it received more than 250 complaints globally involving the use of its AI software to create deepfake terrorism content, along with additional reports of child exploitation material. The disclosure came in response to Australian requirements that companies report on their harm-mitigation initiatives, amid heightened global concern about AI misuse since late 2022.
Google logo via Needpix.com
Google has reported to Australian authorities that it received more than 250 complaints globally within a year about the use of its artificial intelligence (AI) software to create deepfake terrorism material. The Australian eSafety Commission described Google’s disclosure as a “world-first insight” into how users may be exploiting the technology to generate harmful and unlawful content. Google also reported receiving user alerts that its AI program, Gemini, was being used to generate child exploitation material. Under Australian regulations, technology companies must periodically update the eSafety Commission on their harm-minimization initiatives or face potential penalties. Google’s report covered April 2023 to February 2024. Since OpenAI’s ChatGPT captured public attention in late 2022, regulators worldwide have pushed for stronger safeguards to ensure that AI technologies cannot be used to facilitate terrorism, fraud, deepfake pornography, and other forms of misconduct.
In its report, Google revealed that it had received 258 user reports of AI-generated deepfake terrorist or violent extremist content produced with Gemini, along with another 86 reports alleging AI-generated child exploitation or abuse material. The regulator noted that Google did not disclose how many of these complaints were substantiated. To combat child abuse content created with Gemini, Google employed hash-matching, a technique that automatically compares newly uploaded images against previously identified ones. However, the same method was not applied to filter out terrorist or violent extremist content generated with Gemini, according to the regulator. Deepfakes are typically produced by two algorithms working in tandem: a generator and a discriminator. The generator, trained on examples of the desired output, produces the initial fake content, while the discriminator judges how realistic or fake that content appears. The process repeats, so the generator becomes better at producing realistic content and the discriminator becomes better at spotting flaws for the generator to correct.
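Google has not published details of its hash-matching pipeline, but the general idea can be illustrated with a simple perceptual “average hash”: previously identified images are reduced to compact fingerprints, and new uploads are flagged when their fingerprint is close to a known one. The Python sketch below is a minimal, hypothetical illustration using Pillow; the hash size, match threshold, and file paths are assumptions for demonstration only, and production systems use far more robust hashing.

```python
# Minimal illustration of hash-matching with a perceptual "average hash".
# Hypothetical sketch only; real matching systems are far more robust.
from PIL import Image

HASH_SIZE = 8           # 8x8 grid -> 64-bit fingerprint (assumption)
MATCH_THRESHOLD = 5     # max Hamming distance counted as a match (assumption)

def average_hash(path: str) -> int:
    """Reduce an image to a 64-bit fingerprint: grayscale, shrink, threshold at the mean."""
    img = Image.open(path).convert("L").resize((HASH_SIZE, HASH_SIZE))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for i, p in enumerate(pixels):
        if p > mean:
            bits |= 1 << i
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def matches_known_content(upload_path: str, known_hashes: set[int]) -> bool:
    """Flag an upload if its fingerprint is close to any previously identified image."""
    h = average_hash(upload_path)
    return any(hamming_distance(h, known) <= MATCH_THRESHOLD for known in known_hashes)

# Example usage (paths are placeholders):
# known = {average_hash("known_flagged_image.png")}
# if matches_known_content("new_upload.png", known):
#     print("flag for review")
```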
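The generator/discriminator loop described above is the core of a generative adversarial network (GAN). The following is a minimal, hypothetical PyTorch sketch on toy one-dimensional data, showing how the two models are trained against each other; the layer sizes, learning rates, and step counts are arbitrary assumptions, and real deepfake models are vastly larger.

```python
# Minimal GAN training loop illustrating the generator/discriminator interplay.
# Toy example on 1-D Gaussian data; all hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps random noise to fake samples.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

criterion = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

def real_batch(n=64):
    # "Real" data: samples from N(4, 1), standing in for genuine images/audio/video.
    return torch.randn(n, 1) + 4.0

for step in range(2000):
    # 1) Train the discriminator to tell real samples from the generator's fakes.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = criterion(discriminator(real), torch.ones(64, 1)) + \
             criterion(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator (its fakes should score as real).
    fake = generator(torch.randn(64, 8))
    g_loss = criterion(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should cluster near the real data's mean (~4).
print(generator(torch.randn(1000, 8)).mean().item())
```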
The eSafety Commissioner, Julie Inman Grant, stated that it is critical for companies involved in AI development to implement and assess the effectiveness of safeguards that prevent the creation of such materials. An international panel has noted that malicious actors can use general-purpose AI to generate fake content that harms individuals in a targeted way. These malicious uses include non-consensual ‘deepfake’ pornography and AI-generated child sexual abuse material, financial fraud through voice impersonation, blackmail for extortion, sabotage of personal and professional reputations, and psychological abuse. General-purpose AI makes it easier to generate persuasive content at scale, which can help actors who seek to manipulate public opinion, for instance, to affect political outcomes. Current systems have demonstrated capabilities in low- and medium-complexity cybersecurity tasks, and state-sponsored actors are actively exploring AI to survey target systems.
The regulator has previously fined Telegram and Twitter, now known as X, for what it deemed deficiencies in their reporting. X lost an appeal against its A$610,500 fine but intends to appeal again, and Telegram is also planning to contest its penalty. Experts suggest that so-called ‘generative AI’ systems are likely to do more harm than good, at least over the next 10 years, owing to the impact of deliberately deceptive deepfakes in text, images, sound, and video. This extends to the proliferation of plausible-sounding AI-generated material such as advertising copy, news articles, legislative commentary or proposals, and scholarly articles.