Axios
OpenAI is rolling out a more permissive version of GPT-5.5, nicknamed "Spud," to vetted cyber defenders, the company said Thursday.

Why it matters: Recent security testing suggests the GPT-5.5 model is nearly as good at finding and exploiting software bugs as Anthropic's Mythos Preview. The capabilities of the new models have sparked an urgent debate in Silicon Valley and the White House about how to keep them out of the hands of bad actors.

Driving the news: OpenAI is opening a limited preview of GPT-5.5-Cyber to vetted cyber defenders who are "responsible for securing critical infrastructure," per a press release.

A source familiar with GPT-5.5-Cyber's abilities told Axios that they were roughly on par with Mythos. One major recent test put Mythos narrowly ahead.

Cyber defenders vetted and approved for the highest tier of OpenAI's Trusted Access for Cyber program will receive a version of GPT-5.5 with fewer guardrails than the publicly available model. They will be able to use it to hunt for bugs, study malware and reverse engineer attacks.

Defenders will still be blocked from certain tasks, such as credential theft and writing malware, but the new abilities are designed to help them automate common cybersecurity workflows, OpenAI noted in the press release.

Zoom in: The new GPT-5.5-Cyber model is specifically designed to help defenders write proofs of concept for the bugs they find or run simulations to test their organization's security posture.

Meanwhile, OpenAI has also made another version of GPT-5.5 available to other members of its Trusted Access for Cyber program that can help with understanding unfamiliar code, mapping affected attack surfaces or reviewing patches for software flaws.

The big picture: Advanced AI models are getting scary good at finding and exploiting flaws in technology, from operating systems to web browsers.

The U.K. AI Security Institute said last week that GPT-5.5 was able to complete a 32-step simulated corporate cyberattack in 2 out of 10 test runs. Mythos did the same in 3 out of 10 runs. Before Mythos, no AI model had successfully completed that test.

Between the lines: OpenAI and Anthropic are pursuing two different approaches to rolling out their cyber-capable models as both try to keep the technology out of the hands of malicious hackers and adversarial governments.

Anthropic has taken a more restrictive view, allowing roughly 40 organizations to access Mythos. Some of those companies are also part of the company's new Project Glasswing, where members trade information about how they're testing the model.

OpenAI is taking a more open approach: it's releasing one version of its advanced models with stricter guardrails, while also creating a version with fewer safeguards for companies that apply for access.

What to watch: The White House is actively discussing a slate of executive actions that could change how the federal government is involved in future model rollouts.

Go deeper: Frightening AI advances speed race to secure critical infrastructure