Elon Musk’s Grok AI faces backlash over inappropriate image edits

Platform pledges swift safeguards after users manipulate images of women and children

Bloomberg
Authorities in India and France have opened investigations into the misuse of AI-generated content.

Elon Musk’s AI platform, Grok, is racing to fix flaws in its image-editing tool after users reported that it could turn pictures of children or women into sexualised content.

In a statement on X on Friday, Grok acknowledged the problem. “We’ve identified lapses in safeguards and are urgently fixing them,” the company said. “CSAM (Child Sexual Abuse Material) is illegal and strictly prohibited.”

The complaints surfaced after Grok rolled out an “edit image” button in late December. The feature lets users modify images on the platform, but some exploited it to partially or fully remove clothing from pictures of women and children, prompting widespread concern.

Legal experts say companies in the United States could face criminal charges if they knowingly facilitate or fail to prevent the creation or sharing of child sexual abuse material.

Media reports from India said government officials have asked X to provide details of steps taken to remove “obscene, nude, indecent and sexually suggestive content” generated by Grok without consent.

Meanwhile, in Paris, the public prosecutor’s office has expanded its investigation into X, following allegations that Grok was being misused to generate and circulate child pornography.

Grok has promised swift fixes, emphasising that protecting users and preventing illegal content is a top priority for the company.
