AI coding tools hacked via credentials, not models

New incidents show attacks are targeting OAuth tokens and agent privileges in AI coding workflows, raising urgent governance questions.

AI coding tools are being breached in a way that feels counterintuitive: attackers are going after the credentials that power the agent, not the underlying model.

In recent disclosures covered by Misryoum, incidents involving Codex, Claude Code, and Copilot followed a recurring theme. Rather than trying to “break the intelligence,” the attackers targeted OAuth tokens and service identities that allow AI agents to take actions inside real development and cloud environments.

This credential-first pattern matters because it turns everyday workflow automation into a high-value pathway for theft and takeover. If an agent can authenticate to production systems on the user’s behalf, then compromise of that authentication can scale far faster than a traditional bug.

Beyond the specific cases, Misryoum highlights that the vulnerabilities shared structural similarities across different products and environments. In one example, a crafted branch name was used to siphon a GitHub OAuth token during cloning, with the behavior tied to how parameters flowed into setup steps. In another, misconfigurations and edge cases around sandbox controls let command handling escape or bypass protections, including scenarios where deny rules were not enforced as expected.
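
To make the first pattern concrete, here is a minimal sketch of that injection class in Python (the function names are hypothetical, and this is not the actual vulnerable code in any of the products named above): when an untrusted branch name is spliced into a shell string, it can smuggle extra commands or git flags into a setup step that runs with the OAuth token in scope.

```python
import re
import subprocess

def clone_branch_unsafe(repo_url: str, branch: str) -> None:
    # VULNERABLE: the branch name is interpolated into a shell command, so a
    # crafted name can append extra commands that run in an environment
    # where the agent's OAuth token is readable.
    subprocess.run(f"git clone --branch {branch} {repo_url}", shell=True, check=True)

def clone_branch_safe(repo_url: str, branch: str) -> None:
    # Safer: reject option-like names (a leading "-" can smuggle flags such
    # as --upload-pack), restrict the ref to conservative characters, and
    # pass argv as a list so no shell ever parses the untrusted value.
    if branch.startswith("-") or not re.fullmatch(r"[\w./-]+", branch):
        raise ValueError(f"suspicious branch name: {branch!r}")
    subprocess.run(["git", "clone", "--branch", branch, repo_url], check=True)
```

The design point is that the fix is structural rather than clever: keep untrusted metadata out of shell parsing entirely, instead of trying to blocklist dangerous characters.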

Meanwhile, the same underlying governance problem showed up in workflows that connect AI agents to repositories and cloud services. In Misryoum’s coverage, the emphasis shifted from “does the agent understand what it’s doing?” to “how safely is it permitted to do it?” In practice, this includes whether agent instructions can be influenced through repository content, whether trust and permission gates actually hold, and whether the agent’s permissions match the principle of least privilege.
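
One way to read that shift is as a trust-gate question, sketched below under stated assumptions (the action names and the Instruction type are illustrative, not any vendor’s implementation): every instruction carries its origin, and text that came from repository content cannot trigger privileged actions without explicit human approval.

```python
from dataclasses import dataclass

# Actions that touch credentials or external systems (illustrative set).
PRIVILEGED_ACTIONS = {"git_push", "deploy", "read_secret"}

@dataclass
class Instruction:
    action: str
    source: str  # "operator" (typed by a human) or "repo" (file content)

def permitted(instr: Instruction, approved_by_human: bool = False) -> bool:
    """Trust gate: repo-sourced text never triggers a privileged action on
    its own, since files like READMEs and configs can carry attacker-written
    instructions."""
    if instr.action in PRIVILEGED_ACTIONS and instr.source == "repo":
        return approved_by_human
    return True
```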

The takeaway for businesses is straightforward: AI agents can inherit powerful access in ways security teams may not routinely inventory or monitor. That means even small weaknesses in input handling, sandbox logic, or permission checks can become catastrophic when they intersect with broad tokens and service accounts.

Misryoum also notes that defenders have been shipping patches, but the broader risk remains: many security reviews focus on the code the agent produces, while the real attack surface is the agent’s execution environment and authentication pathways. If credential handling is not treated as a first-class security boundary, monitoring and scanning may miss the moment sensitive tokens are exposed or misused.
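
One hedged illustration of treating credentials as that boundary: scrub token-bearing environment variables before the agent spawns subprocesses, so even an attacker-influenced command finds no tokens to exfiltrate (the variable names below are an assumed, illustrative list).

```python
import os
import subprocess

# Credential-bearing variables this policy strips (illustrative, not exhaustive).
TOKEN_VARS = {"GITHUB_TOKEN", "GH_TOKEN", "OPENAI_API_KEY", "AWS_SECRET_ACCESS_KEY"}

def run_agent_command(cmd: list[str]) -> subprocess.CompletedProcess:
    # Hand the subprocess a scrubbed copy of the environment: a command
    # injection in the agent's workflow then cannot read the tokens the
    # agent itself uses to authenticate.
    scrubbed = {k: v for k, v in os.environ.items() if k not in TOKEN_VARS}
    return subprocess.run(cmd, env=scrubbed, capture_output=True, text=True)
```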

At the organizational level, Misryoum recommends tightening governance around AI agent identities: inventory the agents and their scopes, review patch coverage, and apply least-privilege permissions wherever possible. Treat repository inputs, such as configuration changes and text fields that guide agent behavior, as untrusted, and validate identity and permissions before an agent is allowed to authenticate to external services.
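
For the inventory step, one concrete check is possible with classic GitHub OAuth tokens, which report their granted scopes in the documented X-OAuth-Scopes response header (the helper names and the policy around them are illustrative):

```python
import urllib.request

def github_token_scopes(token: str) -> set[str]:
    # Any authenticated call to the GitHub API echoes a classic token's
    # granted scopes back in the X-OAuth-Scopes response header.
    req = urllib.request.Request(
        "https://api.github.com/user",
        headers={"Authorization": f"token {token}", "User-Agent": "scope-audit"},
    )
    with urllib.request.urlopen(req) as resp:
        header = resp.headers.get("X-OAuth-Scopes") or ""
    return {s.strip() for s in header.split(",") if s.strip()}

def excess_scopes(token: str, needed: set[str]) -> set[str]:
    """Scopes granted beyond what this agent's job actually requires."""
    return github_token_scopes(token) - needed
```

Running excess_scopes over each agent token, against the scopes its job genuinely needs, surfaces exactly the over-broad grants the least-privilege recommendation targets.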

In the end, the risk is not that AI models are suddenly “unreliable.” The risk, as Misryoum frames it, is that companies may be granting agents the kind of access that normally requires strong human identity controls, without matching lifecycle management, auditing, and separation of duties.