WASHINGTON, DC – In recent weeks, Grok – the AI system developed by Elon Musk’s xAI – has been generating nonconsensual, sexualized images of women and children on the social-media platform X. This has prompted investigations and formal scrutiny by regulators in the European Union, France, India, Malaysia, and the United Kingdom. European officials have described the conduct as illegal, British regulators have launched urgent inquiries, and other governments have warned that Grok’s output may violate domestic criminal and platform-safety laws.

Far from being marginal regulatory disputes, these cases go to the heart of AI governance. Governments worldwide increasingly agree on a basic premise: systems deployed at scale must be safe, controllable, and subject to meaningful oversight. Whether framed by the EU’s Digital Services Act (DSA), the OECD’s AI Principles, UNESCO’s AI ethics framework, or emerging national safety regimes, the norms are clear and unwavering. AI systems that enable foreseeable harm, particularly sexual exploitation, are incompatible with them.