AI-generated content is becoming more persuasive, and more dangerous, as tools improve and users grow increasingly dependent on synthetic media, according to recent internet safety reporting.
Cybersecurity experts warn that advances in generative AI are lowering the barrier to producing convincing deepfakes, with criminal misuse likely to accelerate as models become more capable and are trained on user feedback.
The issue resurfaced last week following investigations into Elon Musk’s social media platform X over sexualised deepfake images generated using its Grok chatbot. European regulators and several national authorities have begun examining how such content was created and distributed.
Security researchers say the case is part of a broader trend: AI-generated images, video and audio are increasingly being weaponised for fraud, impersonation and identity theft.
The ease of production was highlighted by a recent viral video depicting Hollywood actors Tom Cruise and Brad Pitt in a staged fight scene. The clip’s creator, filmmaker Ruairi Robinson, said it was generated from a two-line prompt, underscoring how little technical expertise is required to produce highly realistic synthetic footage.
Konstantin Levinzon, co-founder of Planet VPN, said the rapid development of AI tools has dramatically scaled deception risks.
“A particularly dangerous aspect of AI technologies is that they make it difficult to distinguish what’s real,” Levinzon said. “The internet is flooded with fake images and videos, also known as deepfakes, which can be created in just seconds with low-cost or even free tools.”
While sexualised or reputational attacks often attract headlines, corporate environments are also exposed. Levinzon noted that attackers increasingly use AI-generated video and audio to impersonate executives in business email compromise (BEC) schemes, attempting to authorise fraudulent transfers or bypass internal controls.
Financial institutions and identity verification systems are also under pressure. High-quality synthetic video and manipulated imagery can be used in attempts to defeat facial recognition or “liveness” checks, creating new operational challenges for banks and fintech providers.
The latest International Safety Report suggests the threat will intensify as AI systems produce more persuasive outputs and users become more accustomed to consuming AI-generated media. The more realistic the content — and the more people rely on it — the easier it becomes for malicious actors to exploit trust.
Levinzon cautioned that although AI-generated media can be difficult to detect, it often leaves technical artefacts.
“In deepfake videos, there are unnatural facial movements, inconsistent lighting or shadows, blurring, or distortion around the face,” he said. Detection services can analyse images and videos for signs of AI generation, including texture inconsistencies and detail errors, but such tools provide probabilistic assessments rather than definitive conclusions.
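The point about probabilistic rather than definitive conclusions can be made concrete with a toy model. The sketch below combines several weak forensic signals of the kind Levinzon describes into a single likelihood score; the signal names, weights, and bias are entirely hypothetical illustrations, not the scoring used by any real detection service.

```python
# Toy illustration of probabilistic deepfake scoring: several weak
# forensic signals (each scored 0.0-1.0) are combined into a likelihood,
# not a yes/no verdict. All names and weights below are hypothetical.
import math

def deepfake_likelihood(signals: dict) -> float:
    """Combine weak forensic signals into a probability in (0, 1)."""
    # Hypothetical weights: how strongly each artefact suggests synthesis.
    weights = {
        "texture_inconsistency": 2.0,
        "lighting_mismatch": 1.5,
        "face_boundary_blur": 1.8,
        "unnatural_motion": 2.2,
    }
    bias = -3.0  # prior leaning towards "genuine" when no signals fire
    z = bias + sum(w * signals.get(name, 0.0) for name, w in weights.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic squash into (0, 1)

# A clip with faint artefacts scores low; one with strong artefacts
# scores high -- but neither result is a certainty, only a likelihood.
clean = deepfake_likelihood({"texture_inconsistency": 0.1})
suspect = deepfake_likelihood({
    "texture_inconsistency": 0.9,
    "lighting_mismatch": 0.9,
    "face_boundary_blur": 0.9,
    "unnatural_motion": 0.9,
})
print(round(clean, 3), round(suspect, 3))
```

Even the high score is a probability, which is why human review and layered controls remain part of the recommended response.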
For critical risk leaders (CRLs), the implication is clear: technical controls alone are insufficient. Organisations must combine detection capabilities with employee awareness, identity verification protocols and layered authentication.
Experts recommend limiting the amount of high-resolution personal video content shared publicly, as attackers frequently harvest such material to train impersonation models. Multi-factor authentication (MFA) remains essential to reduce account takeover risk, particularly as compromised accounts can be used to distribute convincing fake content.
Levinzon also argued that encrypted connections, such as those provided by virtual private networks (VPNs), can reduce data exposure and lower the likelihood of targeted exploitation, although they do not directly prevent deepfake creation.
As generative AI continues to evolve, the core challenge is no longer whether synthetic content can deceive, but how quickly institutions, regulators and individuals can adapt their verification frameworks to operate in a world where seeing is no longer believing.