Although X removed Grok’s ability to create nonconsensual digitally undressed images on the social platform, the standalone Grok app is another story. It reportedly continues to produce “nudified” deepfakes of real people. And now, Ashley St. Clair, a conservative political strategist and the mother of one of Elon Musk’s 14 children, has sued xAI over nonconsensual sexualized images of her that Grok allegedly produced.

In the court filing, St. Clair accused xAI’s Grok chatbot of creating and disseminating deepfakes of her “as a child stripped down to a string bikini, and as an adult in sexually explicit poses, covered in semen, or wearing only bikini floss.” In some cases, the chatbot allegedly produced bikini-clad deepfakes of St. Clair based on a photo of her as a 14-year-old.

“People took pictures of me as a child and undressed me. There’s one where they undressed me and bent me over, and in the background is my child’s backpack that he’s wearing right now,” she said.

“I am also seeing images where they add bruises to women, beat them up, tie them up, mutilated,” St. Clair told The Guardian. “These sickos used to have to go to the dark depths of the internet, and now it is on a mainstream social media app.”

St. Clair said that, after she reported the images to X, the social platform replied that the content didn’t violate any policies. She also claims that X left the images up for as long as seven days after she reported them.

St. Clair said xAI then retaliated against her by creating more digitally undressed deepfakes of her, thereby “making [St. Clair] the laughingstock of the social media platform.” She accused the company of then revoking her X Premium subscription, verification checkmark, and ability to monetize content on the platform. “xAI further banned [her] from repurchasing Premium,” St. Clair’s court filing states.

On Wednesday, X said it changed its policies so that Grok would no longer generate sexualized images of children or nonconsensual nudity “in those jurisdictions where it’s illegal.” However, the standalone Grok app reportedly continues to undress and sexualize photos when prompted to do so.

Apple and Google have thus far done, well, absolutely nothing. Despite the multi-week outrage over the deepfakes, neither company has removed the X or Grok apps from their app stores, even though both the App Store and Play Store have policies that explicitly prohibit apps that generate such content. Neither Apple nor Google has responded to multiple requests for comment from Engadget, including a follow-up email sent on Friday about the Grok app continuing to “nudify” photos of real women and other people.

While Apple and Google fail to act, many governments have done the opposite. On Monday, Malaysia and Indonesia banned Grok. The same day, UK regulator Ofcom opened a formal investigation into X, and California opened one of its own on Wednesday. The US Senate even passed the Defiance Act for a second time in the wake of the blowback.

“If you are a woman, you can’t post a picture, and you can’t speak, or you risk this abuse,” St. Clair told The Guardian. “It’s dangerous, and I believe this is by design.
“You are supposed to feed AI humanity and thoughts, and when you are doing things that particularly impact women, and they don’t want to participate in it because they are being targeted, it means the AI is inherently going to be biased.”

Speaking about Musk and his team, she added that “these people believe they are above the law, because they are. They don’t think they are going to get in trouble, they think they have no consequences.”