NSA reportedly testing Anthropic’s Mythos Preview model

NSA Mythos – Reports say the NSA has access to Anthropic’s Mythos Preview as US agencies weigh AI capabilities against security safeguards—and a legal fight over “supply chain risk” continues.
The NSA is reportedly using Anthropic’s new Mythos Preview model, a development that adds another twist to an already tense AI policy showdown.
Mythos Preview enters the intelligence pipeline
According to Misryoum, the National Security Agency has been given access to Anthropic’s Mythos Preview, with reporting citing multiple sources familiar with the matter. Anthropic unveiled Mythos Preview in early April as a general-purpose language model, positioning it as especially capable at computer security tasks, an area where intelligence agencies and defense teams tend to look for practical automation.
A policy feud, then a backchannel shift
The move is striking given the months-long friction between Anthropic and the Pentagon. Misryoum notes that earlier, in February, President Trump directed federal agencies to stop using Anthropic’s services after the company reportedly refused, during contract negotiations, to adjust certain safeguards intended for military use. That makes the current NSA-related access feel less like a routine procurement update and more like a recalibration, whether technical, legal, or both.
Part of the context, Misryoum adds, is that Anthropic’s CEO Dario Amodei met with White House chief of staff Susie Wiles and other officials. The White House described the meeting as “productive and constructive,” while President Trump said he did not know the details when asked by reporters. For readers, the key takeaway is that AI governance in Washington often moves through a mixture of formal directives and informal negotiations, especially when national-security stakes are on the line.
Why “computer security” capability matters now
The fact that Mythos Preview is described as “strikingly capable” at security tasks helps explain why an intelligence agency might keep pushing toward adoption even amid controversy. In practical terms, language models can support workflows like analyzing code, summarizing threat intelligence, drafting detection rules, or assisting with incident-response triage. Misryoum doesn’t need to overstate it: these tools are not magic shields, but they can compress the time between a security signal and a human decision.
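To make that triage claim concrete, here is a minimal sketch of what LLM-assisted alert triage can look like, with the model drafting a severity call for a human analyst to review. The `call_model` helper, the alert fields, and the severity scale are all illustrative assumptions, not details from the reporting or from any specific vendor API.

```python
# A minimal sketch of LLM-assisted alert triage. call_model() is a
# placeholder for whatever model API or internal gateway a team uses.
import json

def call_model(prompt: str) -> str:
    """Placeholder for a real model call (vendor SDK, internal gateway, etc.)."""
    return "severity: high\nrationale: repeated failed logins followed by success"

def triage_alert(alert: dict) -> str:
    # Ask for a bounded, structured answer rather than open-ended analysis.
    prompt = (
        "You are assisting a SOC analyst. Given this alert, propose a severity "
        "(low/medium/high) and a one-line rationale. Do not invent fields.\n\n"
        + json.dumps(alert, indent=2)
    )
    return call_model(prompt)

if __name__ == "__main__":
    alert = {
        "source": "auth-logs",
        "event": "10 failed SSH logins then 1 success",
        "host": "build-server-03",
        "window_minutes": 5,
    }
    # The model output is a draft for a human reviewer, not an automated action.
    print(triage_alert(alert))
```

The pattern matters more than the plumbing: the model compresses the signal into a draft judgment, and the analyst remains the decision point.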
From an operational perspective, this is exactly the kind of capability that can appeal to agencies managing fast-moving digital threats. And once a model is tested, institutional momentum can be hard to stop—especially if teams find ways to align it with internal security requirements.
Legal battles and the “supply chain risk” label
Misryoum also notes that Anthropic is still involved in litigation with the US government. Anthropic filed lawsuits against the Department of Defense in March after the Trump administration labeled the company a “supply chain risk.” A preliminary injunction from one court temporarily blocked that designation, but a separate federal judge later denied a motion to lift the label.
That legal uncertainty matters because it shapes what “use” can realistically mean inside government. Even if a model is accessed, agencies typically care about auditability, data handling, and the ability to demonstrate controls, requirements that are often stricter in intelligence and defense environments. It also means future access could tighten or loosen depending on how courts interpret risk and safeguards.
The human stakes: security teams and trust gaps
For security professionals, the appeal of an advanced AI model is obvious: workloads are heavy, threat landscapes are relentless, and manual analysis can become the bottleneck. But Misryoum readers should also consider the trust gap that follows a public dispute between a model provider and the government. When adoption is surrounded by legal conflict, teams may face extra verification steps, slower deployments, and heightened scrutiny over what gets sent to the model and how outputs are validated.
In the background, there’s also a broader question that doesn’t fit neatly into any single headline: if agencies rely more on AI for cyber tasks, they need confidence that the system won’t introduce new vulnerabilities, whether through unsafe recommendations, data leakage paths, or subtle reliability failures. That’s where “safeguards” stop being a policy word and start becoming engineering constraints.
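One way those constraints show up in practice is as code that limits what can leave a controlled environment. Below is a minimal sketch of prompt redaction, stripping obvious secrets and identifiers from text before it is sent to any model. The regex patterns are illustrative assumptions; a real deployment would rely on a vetted secret scanner and formal data-handling rules, not three hand-written expressions.

```python
# A minimal sketch of one "engineering constraint": redacting likely secrets
# and identifiers from text before it is ever sent to a model.
# The patterns below are illustrative assumptions, not a complete scanner.
import re

REDACTION_PATTERNS = [
    # AWS access key IDs (AKIA + 16 uppercase alphanumerics)
    (re.compile(r"AKIA[0-9A-Z]{16}"), "[REDACTED_AWS_KEY]"),
    # PEM-style private key blocks
    (re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]*?"
                r"-----END [A-Z ]*PRIVATE KEY-----"), "[REDACTED_PRIVATE_KEY]"),
    # IPv4 addresses
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "[REDACTED_IP]"),
]

def redact(text: str) -> str:
    """Replace likely secrets/identifiers before the text leaves the enclave."""
    for pattern, replacement in REDACTION_PATTERNS:
        text = pattern.sub(replacement, text)
    return text

log_line = "ssh from 10.2.3.4 used key AKIAABCDEFGHIJKLMNOP"
print(redact(log_line))  # -> "ssh from [REDACTED_IP] used key [REDACTED_AWS_KEY]"
```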
What to watch next
This story isn’t just about one model or one agency. Misryoum’s angle here is that it reflects a recurring pattern in US AI policy: when capability is valuable, government demand can eventually find a route—even if procurement and safeguards stay contested.
The immediate next question is whether access to Mythos Preview becomes routine inside the NSA or remains limited to specific pilots. A second question is how the courts and the ongoing legal dispute influence what other agencies can do. And if the Pentagon’s earlier posture continues to conflict with real-world operational needs, expect more negotiations, possibly focused less on whether AI can be used and more on how it can be controlled, measured, and trusted.