The launch of Veo 3, Google’s advanced AI video generator, marked a significant technological leap when it debuted in May. Capable of producing clips that at times are nearly indistinguishable from real footage, the tool unlocked a world of creative possibilities. Now, as it rolls out in Spain, a troubling report has surfaced: Veo 3 is already being used to create and spread racist videos on TikTok.
According to an investigation by Media Matters, dozens of TikTok accounts have begun publishing AI-generated videos made with Veo 3 that are rife with hate speech and racist tropes, content that is also driving high user engagement.
Most of these short clips, typically lasting about eight seconds and bearing the visible “Veo” watermark, depict Black individuals in criminal or dehumanizing ways. The content also targets immigrants and Jewish communities, clearly showing how easily this technology can be misused to distort reality.
While Google has repeatedly emphasized that its AI models include safety mechanisms — known as guardrails — to prevent abuse, in the case of Veo 3, these filters appear either too weak or easily bypassed. A core issue is that the AI wasn’t trained to fully recognize many of the racist stereotypes used in everyday contexts — such as using apes to represent Black people. Additionally, the vagueness of user prompts allows problematic content to be generated without triggering system alerts.
External testing of the tool’s safety mechanisms has also revealed how easy it is to produce harmful content. And although both Google and TikTok have explicit policies banning hate speech, effectively enforcing these guidelines at scale remains a major challenge.
TikTok says it relies on a combination of technology and human moderators to detect and remove harmful content. However, a spokesperson admitted to Ars Technica that the sheer volume of uploaded videos far exceeds its moderation capacity. They noted that over half of the accounts flagged by Media Matters had already been removed before the report was published. Still, the offensive content had already racked up thousands of views, leaving a lasting impression on viewers.
This issue isn’t limited to TikTok. Similar trends have been observed on X (formerly Twitter), where looser content moderation policies have allowed harmful material to circulate unchecked. Compounding the problem is Grok, the platform’s integrated AI, which struggles to distinguish between real and AI-generated videos, even though some users attempt to use it for that very purpose.
The most alarming concern is that this may just be the beginning. Google has announced plans to integrate Veo 3 into YouTube Shorts, a move that could greatly accelerate the spread of problematic content on one of the world’s largest video platforms. This expansion will put the moderation systems of major tech companies to the ultimate test — and reveal just how prepared they truly are to confront the consequences of their own creations.