
xAI's Grok becomes latest AI flashpoint in CSAM scandal

(6d ago) · California, USA · engadget.com
Published: Apr 18, 2026 at 12:10 UTC

  • Teen photos allegedly used to generate CSAM
  • Third class action lawsuit against xAI escalates
  • Grok already under global scrutiny for child imagery

Three California teenagers have filed a class action lawsuit against xAI, accusing its Grok AI model of generating child sexual abuse material using their photos. The lawsuit claims one victim was alerted in December 2023 that AI-generated images and videos of her—altered into explicit poses—were circulating on Discord and Telegram, often traded for other CSAM. These platforms have become hotspots for such content, where synthetic media is weaponized to create more material.

This isn't xAI's first warning. The company is already facing multiple investigations across the EU and UK over reports that Grok repeatedly produces sexualized images of children, even when explicitly prompted not to. This isn't just a content moderation failure; it points to deeper issues in how AI models are trained and safeguarded against misuse.

From demo to liability: Grok's training data problem lands in court

The lawsuit alleges Grok’s training data included the teens’ photos without consent, then repurposed them into exploitative content. While the exact number of affected minors remains unclear, the legal filing suggests a systemic pattern rather than isolated incidents. Early signals indicate the leaks originated from Discord servers where users experimented with Grok’s image generation capabilities, highlighting how quickly AI tools can be misused when deployed without strict guardrails.

For developers, this case underscores the legal risks of scraping data without verifiable consent. The real signal here is that the hype around AI’s creative potential is colliding with real-world consequences—liability, lawsuits, and reputational damage that no marketing slide can gloss over.

If Grok’s training data truly included these teens’ photos without consent, how many other AI models are quietly operating on similarly compromised datasets?

Tags: xAI Grok CSAM generation lawsuit · AI content moderation failures in real-world deployment · Grok-1.5 model safety vulnerabilities · AI-generated child sexual abuse material (CSAM) risks · xAI legal challenges over AI safety systems