Anthropic's 'Too Dangerous' AI Model Leaked by Unauthorized Users
Apr 28, 2026


42% Left — 58% Right

Estimated · Americans generally prioritize national security over corporate autonomy when it comes to AI technology, especially given widespread concerns about cybersecurity threats from foreign adversaries. Polling consistently shows majorities support government oversight of emerging technologies that could affect national defense. Moderates and independents likely view the security breach as validating concerns about private companies controlling dangerous AI capabilities, while being less concerned about geopolitical access restrictions that favor U.S. allies.


Left says

  • The unauthorized access to Mythos demonstrates the urgent need for stronger cybersecurity regulations and oversight of AI companies handling dangerous technologies
  • Anthropic's exclusion of international partners from initial access creates dangerous geopolitical tensions and undermines global cooperation on AI safety
  • The Pentagon's designation of Anthropic as a supply chain risk appears politically motivated and contradicts the administration's own use of the company's technology across other agencies
  • CISA's lack of access to Mythos while other agencies use it reflects the Trump administration's systematic weakening of critical cybersecurity infrastructure

Right says

  • The security breach validates concerns about Anthropic's ability to properly safeguard dangerous AI technology and justifies the Pentagon's supply chain risk designation
  • Anthropic's restrictions on military and surveillance use of its AI models inappropriately limit legitimate national security applications and demonstrate the company's ideological bias
  • The company's selective distribution of Mythos to primarily U.S. allies represents appropriate protection of American technological advantages in global competition
  • Private companies should not dictate how the U.S. government uses AI technology for national defense, especially when taxpayer funding supports AI development

Common Take

High Consensus
  • Mythos represents an unprecedented cybersecurity capability that found over 2,000 unknown software vulnerabilities in seven weeks of testing
  • The unauthorized access to Mythos by a small group through insider knowledge and contractor access poses legitimate security concerns
  • Critical infrastructure including banks, power grids, and government systems face significant vulnerability to AI-powered cyberattacks
  • Both government agencies and private companies need access to advanced AI security tools to defend against emerging cyber threats

The Arguments

Left argues

The unauthorized breach of Mythos exposes critical failures in Anthropic's cybersecurity practices, validating the urgent need for comprehensive federal oversight and regulation of AI companies handling dangerous dual-use technologies.

Right counters

The breach actually demonstrates why the Pentagon's supply chain risk designation was justified—if Anthropic cannot secure its most dangerous models from unauthorized access, it cannot be trusted with national security applications.

Right argues

Anthropic's ideological restrictions on military and surveillance use inappropriately constrain legitimate national defense applications, especially when taxpayer funding through government partnerships supports AI development.

Left counters

These ethical guardrails prevent the misuse of powerful AI for mass surveillance and autonomous weapons, representing responsible corporate governance rather than ideological bias—the government can develop its own AI for such purposes.

Right argues

Anthropic's selective distribution of Mythos exclusively to U.S. allies appropriately protects American technological advantages in global competition and prevents adversaries from accessing critical cybersecurity capabilities.

Left counters

This exclusionary approach undermines essential international cooperation on AI safety and creates dangerous geopolitical tensions when global cybersecurity threats require coordinated multinational responses.

Left argues

The Trump administration's systematic weakening of CISA—cutting funding and staff while denying access to Mythos—dangerously compromises America's primary cybersecurity defense agency when AI-powered attacks pose unprecedented threats.

Right counters

CISA's exclusion reflects appropriate prioritization of resources toward agencies with direct operational responsibilities, while the NSA and other defense agencies with Mythos access can better leverage its capabilities for national security.

Left argues

The Pentagon's supply chain risk designation appears politically motivated and contradictory, given that other federal agencies continue using Anthropic's technology while the administration simultaneously seeks broader government deployment of Mythos.

Right counters

The designation reflects legitimate concerns about a private company dictating terms to the military, and civilian agency use doesn't negate the need for unrestricted defense access to AI capabilities in national security contexts.

Challenge Questions

These questions target genuine internal contradictions — meant to provoke honest reflection.

Right asks Left

If Anthropic's cybersecurity practices are so inadequate that unauthorized users can access their most dangerous model, how can you simultaneously argue for weaker oversight and regulation of AI companies handling such powerful technologies?

Left asks Right

If you believe the government should have unrestricted access to AI models for national security purposes, how do you reconcile supporting a company whose security failures allowed unauthorized access to the very capabilities you want protected from adversaries?

Outlier Report

Left Fringe

Progressive tech critics like Cathy O'Neil and some Democratic Socialists of America members who view any Pentagon involvement in AI as inherently militaristic, representing roughly 15% of the left coalition.

Right Fringe

Libertarian-leaning Republicans like Thomas Massie and some Cato Institute fellows who oppose any government regulation of private AI companies, representing approximately 20% of the right coalition.

Noise Assessment

Moderate noise level: most discourse reflects genuine policy concerns rather than performative positioning, though some partisan amplification exists around Trump administration cybersecurity policies.

Sources (9)

AllSides

The AI model that Anthropic billed as too dangerous to release has reportedly been accessed by an unauthorized third party, and the incident raises concerns about the future of cybersecurity. The Mythos model was reportedly accessed by a handful of users in a private Discord chat on the day it was announced publicly, Bloomberg reported. Earlier this month, the group gained access in part because one of its members is a third-party contractor for Anthropic, according to Bloomberg. Using this access, the group was able to guess where the model was located, drawing on knowledge of Anthropic's past practices that another group had previously leaked after hackers obtained it from AI training startup Mercor. Although the group that accessed the model has not been using it for cyberattacks, it has been using the program continuously since its release and still has access, the outlet reported.

AllSides

When Anthropic told the world this month that it had built an artificial intelligence model so powerful that it was too dangerous to release widely, the company named 11 organizations as partners to help mount a defense. All were from the United States. Within two weeks, the model, called Mythos, had set off a global scramble unlike anything yet seen in the A.I. era. Mythos, which Anthropic has said is uncannily capable of finding and exploiting hidden flaws in the software that runs the world's banks, power grids and governments, had become a geopolitical chip — and a U.S. company held it.

AllSides

There is a new AI model called Mythos. Anthropic built it for defensive cybersecurity research. It is so effective at finding software vulnerabilities that Anthropic decided the general public cannot have it. Instead, it is letting a small circle of trusted partners like Microsoft and Google experiment with it first under controlled conditions, while researchers figure out what guardrails need to exist.

Axios

Anthropic is hitting turbulence at a critical moment, with a cascade of challenges converging ahead of a potential IPO that could value the company near $800 billion.

Why it matters: The AI darling, whose revenue has tripled to $30 billion this year on the back of its wildly popular coding tools, has never been more valuable or more vulnerable.

  • Chief rival OpenAI senses opportunity in the Claude maker's recent stumbles, courting frustrated developers and pitching itself as the steadier alternative ahead of dueling IPOs.

Zoom in: Anthropic's problems over the past two months span nearly every part of its business — product quality, pricing, security and capacity — and are starting to compound.

1. Model backlash: Perceived declines in Opus 4.6 performance triggered an initial wave of suspicion, with some developers accusing Anthropic of quietly downgrading its flagship model.

  • Its newest model, Opus 4.7, delivered major benchmark gains but drew a mixed public reception, as some users complained of higher token costs, bugs and uneven performance.

2. Capacity crunch: Surging demand is straining Anthropic's compute, with users running into tighter limits and periodic outages — a red flag for companies that have grown reliant on Claude.

3. Security scares: A software update accidentally exposed internal Claude Code files, handing outsiders a window into Anthropic's most valuable product and raising questions about its internal safeguards.

  • Anthropic is now investigating reports that a small group of unauthorized users accessed Mythos — its most powerful model, withheld over safety concerns about its offensive cyber capabilities.

4. Product confusion: Some users discovered on Tuesday that Claude Code was no longer available on the $20/month Pro plan — a potentially major shift affecting Anthropic's most popular product and its most accessible tier.

  • Facing massive backlash, the company said the move was part of a limited test affecting a small share of users, though that explanation did little to ease fears of broader pricing changes.

Reality check: Anthropic's business is still booming.

  • Even as the company pushes enterprise clients toward usage-based pricing, demand hasn't slowed — nor has revenue.
  • Anthropic's standoff with the Pentagon endeared it to AI safety advocates and Trump critics, helping drive a spike in usage that briefly sent Claude to the top of the U.S. App Store.
  • "We've seen extraordinary demand for Claude over the past several months, and our team is doing everything we can to scale quickly and responsibly," an Anthropic spokesperson told Axios. "We know it hasn't always been smooth, and we're grateful to our community for the patience and feedback as we work through it."

The big picture: The stakes go far beyond a few product missteps. Anthropic and OpenAI are locked in a race to define the enterprise AI market and to convince investors they deserve massive IPO valuations.

  • Anthropic's rapid rise has been fueled by cutting-edge products, developer trust and a reputation for discipline. Now, even small stumbles risk chipping away at that foundation.

What we're watching: OpenAI, no stranger to "code red" moments in the breakneck AI race, is seizing on its competitor's growing pains.

  • A leaked memo from OpenAI chief revenue officer Denise Dresser blasted Anthropic as elitist and alleged that the company had overstated its revenue run rate by billions.
  • CEO Sam Altman accused Anthropic of "fear-based marketing" in a podcast appearance this week, taking aim at its tightly controlled Mythos rollout.
  • And as Anthropic grappled with user backlash from its Claude Code pricing confusion, OpenAI engineers — egged on by Altman — openly mocked their rival on social media.

The bottom line: Both Altman and Anthropic CEO Dario Amodei used to signal that the AI race should have multiple winners. Their tone and tactics now suggest otherwise.

Axios

Anthropic says it has no way to control or shut down its AI models once they're deployed by the Pentagon, according to a new court filing.

Why it matters: The Pentagon designated Anthropic a supply chain risk, contending the AI firm is inappropriately getting involved in how its technology can be used in sensitive military operations.

What's inside: Anthropic argues in the filing to a federal appeals court in D.C. that it has no visibility, technical ability or any kind of "kill switch" for its technology once it's deployed.

  • The company also says the Pentagon has the opportunity to test models before deployment.

Catch up quick: The company's usage policies include no Claude for autonomous weapons or mass surveillance, red lines that the Pentagon dismissed as red herrings and that led to the dispute.

  • The D.C. court previously rejected Anthropic's request for a pause on the supply chain risk designation. A judge in California granted Anthropic's request in an ongoing parallel case.
  • The split decision means Anthropic can't participate in new Pentagon contracts, but can continue working with other government agencies while the litigation plays out.

Friction point: The Pentagon is arguing in court that Anthropic is a supply chain risk even as the Trump administration moves to deploy its new Mythos model across the federal government.

  • Now, agency heads are scrambling to figure out how they can protect their systems from cyber attacks using Mythos, potentially complicating the administration's argument that the company poses a national security risk.

What's next: A hearing is scheduled for May 19.

Axios

The Cybersecurity and Infrastructure Security Agency doesn't have access to Anthropic's powerful new Mythos Preview model, even though some other government agencies are using it, two sources tell Axios.

Why it matters: The country's top cyber defense agency, tasked with helping to secure everything from banks to power plants, is on the outside looking in at a time when the industries it works with are deeply concerned about AI-powered cyberattacks overwhelming their defenses.

  • Anthropic decided against a public release of Mythos due to its unprecedented ability to quickly discover and exploit security vulnerabilities.
  • Instead, Anthropic provided it to more than 40 companies and organizations who are now testing it and working to shore up their systems.
  • CISA is not on that list, the sources say.

State of play: Earlier this month, an Anthropic official told Axios the company briefed CISA and the Commerce Department on Mythos' capabilities.

  • The Commerce Department's Center for AI Standards and Innovation has reportedly been testing Mythos.
  • The NSA is also among the organizations using Mythos, despite the Department of Defense, which oversees the agency, having declared Anthropic a "supply chain risk."
  • It's unclear if the ongoing turmoil within the agency during the second Trump administration played any role in CISA not moving more swiftly to secure access.
  • Spokespeople for CISA and Anthropic declined to comment.

The big picture: The Trump administration has spent the last year reducing capacity at CISA, instead opting to give more policy influence to the White House's national cyber director and pushing some programs to the state and local level.

  • CISA's acting director Nick Andersen told lawmakers last week that the agency's resources are "more limited than I would like."
  • Trump proposed cutting as much as $707 million from the agency's budget in the upcoming fiscal year.
  • CISA has already lost more than a third of its workforce and millions in funding.

Between the lines: National cyber director Sean Cairncross is among the Trump officials negotiating broader civilian agency access to Mythos.

  • Treasury has also been negotiating access.
  • Other organizations with access to Mythos have predominantly been using it to find exploitable security vulnerabilities in their own networks, sources tell Axios.

What to watch: Security teams at critical infrastructure organizations have often looked to CISA to share threat intelligence across their sectors and determine how to prioritize their security strategies.

Axios

OpenAI is mobilizing consulting partners and touting its compute edge to claw back enterprise customers from Anthropic, as both labs barrel toward potential IPOs.

Why it matters: The outcome of this fight could determine which company hits the public markets with momentum, and which has to explain to investors why it's losing ground.

Driving the news: OpenAI is working to take market share from Anthropic's enterprise business.

  • The AI lab is engaging several consulting partners to help enterprises deploy and scale Codex, OpenAI's coding tool.
  • Partners will get early access to AI tools in hopes that they can help enterprises "rethink their business processes in the age of AI in a different way," chief revenue officer Denise Dresser told Axios.

Zoom out: It's part of OpenAI's broader shift toward enterprise revenue, after Claude Code's mass adoption led to businesses spending more money with Anthropic than OpenAI, per Ramp.

  • Dresser told OpenAI employees in a memo last week that "the market is as competitive as I have ever seen it."

Between the lines: OpenAI and Anthropic are competing on everything from compute to enterprise adoption to model quality as both labs race toward IPOs that could come as soon as this fall.

  • "Everyone's operating in winner-take-all mode" and that's happening at "every layer of the tech stack," says Anuj Kapur, CEO of AI software delivery platform CloudBees.
  • Investor demand appears stronger for Anthropic than OpenAI in secondary markets, according to multiple reports.

If OpenAI doesn't catch up on revenue soon, Anthropic could "take a lead here that, let's say, over the next one or two years, could be insurmountable," David Sacks, tech investor and White House adviser, said on the "All-In" podcast.

  • Growth compounds in the AI world: Models could get smarter over time due to more volume usage, for example.
  • More paying customers could mean more revenue, which could be used to buy more compute.

Yes, but: OpenAI says it's ahead of Anthropic on compute capacity, which it highlighted in a recent investor letter reviewed by Axios.

  • Anthropic CEO Dario Amodei has cautioned that aggressively scaling compute is risky given its high costs and uncertain demand, even as the company announced on Monday an expanded Amazon partnership to secure up to 5 gigawatts of additional compute.
  • An OpenAI investor tells Axios that greater compute capacity is OpenAI's key advantage, as it can fuel more experimentation and keep Sam Altman's firm ahead on model performance.

The bottom line: OpenAI is spending to grow users, Anthropic is spending to protect margins, and both are battling toward the biggest IPOs in history.

Axios

Anthropic is expanding its partnership with Amazon, committing more than $100 billion over the next decade to secure massive new computing capacity.

Why it matters: Compute capacity is the currency of the AI race and is most likely to define who wins.

Driving the news: Anthropic agreed to spend $100 billion to secure up to 5 gigawatts of compute from Amazon to train and run its Claude models.

  • Amazon will invest $5 billion now, with the option for up to $20 billion more — deepening its stake in Anthropic.

Between the lines: Anthropic is signaling it's ready to spend heavily on the same infrastructure edge that its biggest competitor, OpenAI, has been touting.

  • OpenAI sent a letter to investors last week pitching its compute capacity as its competitive advantage over Anthropic.
  • Anthropic, for its part, has pointed to a wave of partnerships aimed at expanding access, now including this latest Amazon announcement.

Catch up quick: Compute is finite, and needs to be used to both service customers and train upcoming models, so it both determines how well your current model runs and shapes upcoming model performance.

  • When demand spikes, as it has for Anthropic's Claude Code, that capacity is strained.
  • Anthropic changed its enterprise pricing in response, charging more for super users. Some consumers have reported worse experiences using Claude, which they attribute to compute constraints.

What we're watching: How long the AI labs rely on partners that are also competitors for access to compute.

  • Amazon has its own AI race to win.
  • That may become a problem for Anthropic over time, potentially forcing it to spend even more money on its own compute.

The bottom line: To monitor the AI race, watch compute capacity.

Forbes

The Pentagon previously labeled Anthropic a supply-chain risk.

This summary was generated by artificial intelligence and may contain errors or mischaracterizations. Always refer to the original sources for authoritative reporting.