Anthropic is giving companies, including Amazon, Apple, and Microsoft, access to its unreleased Claude Mythos model to prepare cybersecurity defense

Anthropic is giving a group of Big Tech and cybersecurity firms access to a preview version of Claude Mythos—its unreleased and most advanced AI model—in an attempt to bolster cybersecurity defenses across some of the world’s most critical systems.

The company has been concerned that the new model may pose unprecedented cybersecurity risks and increase the likelihood of large-scale AI-driven cyberattacks this year.

The initiative, called “Project Glasswing,” allows firms including Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, Microsoft, and NVIDIA to use the company’s Mythos Preview for defensive security work and share their learnings with the wider industry. Anthropic is also providing access to roughly 40 more organizations responsible for building or maintaining critical software infrastructure, allowing them to use the model to scan and secure both their own systems and open-source code.

In a blog post announcing the new initiative, Anthropic said it formed Project Glasswing because it believes the capabilities of its Claude Mythos Preview could reshape the cybersecurity sector due to its strong agentic coding and reasoning skills. Anthropic said it does not plan to make the Mythos Preview generally available, but eventually wants to safely deploy Mythos-class models at scale when new safeguards are in place.

The existence of Anthropic’s Mythos model was first revealed in March, when Fortune reported that the company was developing and testing an unreleased model described in company documents as “by far the most powerful AI model” it had ever developed. In a draft blog post inadvertently made public last month, Anthropic warned that Mythos is “currently far ahead of any other AI model in cyber capabilities” and said it “presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders.”

The news of the model’s existence has already rattled the cybersecurity industry. Following Fortune’s report, shares in CrowdStrike, Palo Alto Networks, Zscaler, SentinelOne, Okta, Netskope, and Tenable all slumped between 5% and 11%, as investors worried that increasingly capable AI models could undermine demand for traditional security products. That concern had already surfaced the previous month, when Anthropic launched Claude Code Security.

In just the past few weeks, Anthropic says its Mythos Preview has identified thousands of zero-day vulnerabilities, many of them critical and difficult to detect, including some in every major operating system and web browser. Several of the vulnerabilities discovered using the model had existed undetected for years, according to the company, the oldest being a 27-year-old bug in OpenBSD—an operating system best known for its strong security.

But Anthropic has also acknowledged that the same capabilities that bolster cyber defenses can be weaponized by attackers.

“Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely,” the company said in a blog post. “The fallout—for economies, public safety, and national security—could be severe. Project Glasswing is an urgent attempt to put these capabilities to work for defensive purposes.”

While concerns about AI’s potential to automate large-scale cyberattacks have been building for a while, Anthropic’s newest model appears to represent a dangerous new level of AI performance in cyber tasks. According to a report from Axios, Anthropic has already privately warned top government officials that Mythos makes large-scale cyberattacks significantly more likely this year.

Previous models from OpenAI and Anthropic had already reached a new risk level for cyber threats. When OpenAI released GPT-5.3-Codex in February, the company said it was the first model it had classified as high-capability for cybersecurity tasks under its Preparedness Framework and the first it had directly trained to identify software vulnerabilities. Anthropic also said its most advanced model on the market, Opus 4.6, released the same week, demonstrated an ability to surface previously unknown vulnerabilities in production codebases—a capability the company acknowledged was dual-use.

Hackers have already leveraged Anthropic’s tools to enable more sophisticated and autonomous attacks. Last year, the company disclosed what it described as the first documented case of a cyberattack largely executed by AI—a Chinese state-sponsored group that used AI agents to autonomously infiltrate roughly 30 global targets, with AI handling the majority of tactical operations independently.

“The work of defending the world’s cyber infrastructure might take years; frontier AI capabilities are likely to advance substantially over just the next few months,” Anthropic said in a statement. “For cyber defenders to come out ahead, we need to act now.”

This story was originally featured on Fortune.com