
Ashley St. Clair, the mother of one of Elon Musk’s children, sued Musk’s artificial intelligence company xAI on Thursday, according to NBC. The suit alleges that the company was negligent and inflicted emotional distress by enabling users of its AI tool, Grok, to create deepfake photos of her in sexually explicit poses, and by failing to adequately limit such behavior after her complaints.
The lawsuit comes after weeks of mounting backlash against Grok’s ability to generate nonconsensual deepfakes, which let users remove clothing from people depicted in photos uploaded to the service, often replacing it with bikinis or underwear. St. Clair’s suit was filed in state court in New York but was quickly transferred to the federal Southern District of New York at xAI’s request.
St. Clair had notified xAI that users were creating illicit deepfake photos of her “as a child stripped down to a string bikini” and “as an adult in sexually explicit poses” and requested that the Grok service be prevented from creating the nonconsensual images, the lawsuit says.
The lawsuit alleges that even though Grok confirmed her “images will not be used or altered without explicit consent in any future generations or responses,” xAI continued to allow users to create more explicit AI-generated images of her and instead retaliated by demonetizing her X account.
X and xAI did not immediately respond to a request for comment. On Thursday, xAI sued St. Clair in federal court in Texas, saying she violated xAI’s terms of service and claiming damages of over $75,000. xAI said in its suit that any claims against the company must be filed in either federal court in the Northern District of Texas or in state courts in Tarrant County, Texas.
Last week, X limited the capabilities of the @Grok reply bot, seemingly preventing it from generating images that nonconsensually put identifiable people in revealing swimsuits or underwear. As of the time of reporting, those capabilities remained available on the standalone Grok app, the Grok website, and the dedicated Grok tab on X.
Grok has been creating a flood of sexualized AI-generated images for weeks, with the pace reaching thousands of such images per hour last week, according to researchers. Many of the images have been posted publicly on X.
The creation and spread of nonconsensual sexualized images have sparked a worldwide response, including several government investigations and calls for smartphone app marketplaces to ban or restrict X. Regulators and other tech companies, though, have stopped short of restricting the app.
California’s attorney general launched an investigation into Grok on Wednesday as Gov. Gavin Newsom posted on X that “xAI’s decision to create and host a breeding ground for predators to spread nonconsensual sexually explicit AI deepfakes, including images that digitally undress children, is vile.”
St. Clair’s suit alleges that Grok’s feature allowing users to create nonconsensual deepfakes is a design defect and that the company could have foreseen the use of the feature to harass people with unlawful images.
It says those depicted in the deepfakes, including St. Clair, suffered extreme distress.
“Defendant engaged in extreme and outrageous conduct, exceeding all bounds of decency and utterly intolerable in a civilized society,” the suit says.