Musk says users, not Grok, are liable — but regulators aren’t convinced

With courts and governments circling, X navigates AI liability in the age of generative tools

Nathaniel Lacsina, Senior Web Editor
The controversy has revived debate over how much responsibility platforms bear for AI-generated content.
Bloomberg

In January 2026, scrutiny of X intensified after its AI chatbot Grok was found generating explicit, non-consensual images. Designed as a creative assistant, Grok quickly became a regulatory flashpoint when users prompted it to digitally alter photos in ways that violated consent and, in some cases, involved minors. The incidents triggered swift reactions from governments and rights groups, reigniting global concerns about accountability in generative AI.

India’s Ministry of Electronics and IT issued a formal notice to X, demanding an Action Taken Report within 72 hours and citing a serious failure to prevent unlawful content. At the same time, French officials referred similar cases to prosecutors, calling the outputs “manifestly illegal” and urging action under the EU’s Digital Services Act. Together, these moves signal growing international pressure on platforms to control misuse of AI tools.

Musk shifts responsibility to users

Responding to the backlash, Elon Musk stated that users—not Grok—would be legally responsible for illegal content. Posting on X, Musk said anyone prompting the chatbot to generate unlawful material would face the same consequences as uploading it directly. The company reiterated that violators would be permanently banned and that it would cooperate with law enforcement.

The controversy has revived debate over how much responsibility platforms bear for AI-generated content. EU regulators previously fined X $140 million for content moderation failures, raising questions about whether the platform’s safeguards are sufficient. Critics argue that shifting blame to users does not absolve platforms of their duty to design safer systems.

Industry-wide implications

Independent reports had earlier flagged Grok’s role in producing deepfakes and explicit imagery, exposing gaps in AI governance. As regulators in India and Europe demand clearer oversight and technical fixes, the Grok case is emerging as a key test for the AI industry. How X responds may shape future expectations for platform accountability worldwide.
