Grok AI Now Restricted from Editing Images of Real People into Revealing Clothing on X


Elon Musk’s AI chatbot, Grok, will no longer be able to modify images of real people to show them in revealing clothing on the X platform. The update comes after widespread criticism of the AI’s ability to digitally undress adults and, in some alarming instances, minors. X and its parent company, xAI, have put new safeguards in place to prevent misuse while emphasizing adherence to legal standards.

Enhanced Protections Against Image Manipulation

X announced on Wednesday that Grok has been updated with technical restrictions preventing it from editing photos of real people to show them in bikinis, lingerie, or other revealing outfits. These limitations now apply to all users, including X Premium subscribers.

In recent days, Grok’s image-generation features had already been limited to paying X Premium subscribers. Researchers tracking the chatbot noted that Grok had begun responding differently to requests for image modifications, even from subscribers. X confirmed that these changes are now fully in place.

Despite the restrictions, AI Forensics, a European nonprofit organization that monitors AI behavior, highlighted “inconsistencies” in how Grok handles sexually explicit content between public interactions on its X account and private conversations on Grok.com.

X emphasized its commitment to addressing illegal activity, including Child Sexual Abuse Material (CSAM). Users attempting to generate prohibited content via Grok face the same penalties as those uploading illegal content directly, including permanent account suspension, content removal, and referral to law enforcement authorities.

Elon Musk responded to the controversy on X, stating that no nude images of minors had been generated by Grok. He added, “Zero. Grok will refuse to produce any illegal content, as it is designed to follow the laws of each country or region.”

Legal Oversight and Global Repercussions

Experts pointed out that while fully nude images were uncommon, Grok had previously complied with requests to digitally place minors in revealing clothing or suggestive poses. Producing such non-consensual images of minors is considered CSAM and is punishable under the Take It Down Act by fines and possible jail time.

California Attorney General Rob Bonta announced an official investigation into the “spread of non-consensual sexually explicit content created with Grok,” signaling heightened legal scrutiny of AI-generated materials.

The controversy has affected Grok’s availability in several countries. Indonesia and Malaysia have banned the AI tool due to concerns about inappropriate image generation. In the United Kingdom, the communications regulator Ofcom launched a formal inquiry into X, while Prime Minister Keir Starmer’s office welcomed the platform’s efforts to mitigate misuse.

The new safeguards reflect a broader push to ensure AI technologies operate responsibly. While Grok can no longer edit images to depict real people in revealing clothing, ongoing monitoring will be critical to prevent potential abuse in the future.

This development highlights the challenge technology companies face in balancing innovation with ethics and user protection. With the updated restrictions, Grok continues to function as a versatile AI assistant on X while prioritizing safety and compliance with legal standards.