Investigation Finds xAI’s Grok Still Generating Sexualized Deepfakes Despite Safety Pledges


Elon Musk’s Grok chatbot is facing renewed scrutiny following reports that the artificial intelligence tool continues to generate sexualized deepfakes of real women. An investigation by NBC News identified dozens of images circulating on X that depict real individuals in suggestive poses without their consent.
The content reportedly depicts women in revealing attire, including towels, sports bras, and bunny suits. While the system blocks full nudity and images of minors, critics argue the output still constitutes a significant violation of personal privacy and consent standards.
This controversy follows a major public backlash in January 2026, which prompted xAI to implement stricter filters. At the time, the company promised to mitigate the abusive deepfakes that triggered government probes globally.
Despite these assurances, users have reportedly discovered methods to bypass safety blocks throughout March and April 2026. Some users have even transitioned from static images to short videos depicting women in suggestive contexts.
xAI maintains that it employs extensive safeguards, including real-time monitoring and prompt filtering. The company reiterated its strict prohibition of non-consensual deepfakes and expressed intent to review the new findings.
The availability of such content has kept eight government agencies worldwide on high alert. Investigations remain active in California, Australia, Canada, and Europe, while U.S. regulators under President Donald Trump’s administration assess potential legal violations.
The situation highlights the challenge AI developers face in balancing creative freedom with user safety. While supporters champion Grok’s uncensored nature, advocacy groups warn that such loopholes facilitate the victimization of real people.
Victims often remain unaware that their likenesses have been manipulated until the content has already spread across social media. The failure to contain the issue raises questions about the efficacy of AI self-regulation in the current technological landscape.
As of Tuesday, April 14, 2026, the images remain a central focus for technologists and legal experts. These investigations may set a new precedent for how AI companies are held accountable for user-generated content.