LiteLLM SQLi CVE-2026-42208: Hackers read secrets via pre-auth attack

LiteLLM SQLi – Hackers are exploiting a pre-auth SQL injection in LiteLLM to access proxy databases and steal API keys, credentials, and configs. Fix: upgrade to 1.83.7+ and rotate secrets.
A critical flaw in LiteLLM is being exploited to pull sensitive data out of AI gateways—often without authentication.
LiteLLM flaw lets attackers target secrets at the proxy layer
Hackers are actively exploiting a critical SQL injection vulnerability in the LiteLLM open-source gateway middleware, tracked as CVE-2026-42208. The problem shows up during LiteLLM’s proxy API key verification step, which means the attack path lands right where integrations check and manage credentials.
What makes the issue especially dangerous is that the vulnerability can be triggered without authentication. Attackers send a specially crafted Authorization header to LLM API routes, including endpoints such as /chat/completions. From there, the SQL injection allows malicious requests to read data from LiteLLM’s underlying proxy database, and also to modify it.
Misryoum understands that the impact goes beyond “just” leaking data. LiteLLM is designed to store and manage multiple layers of secrets (API keys, virtual keys, master keys, and environment or configuration secrets), so database access effectively becomes access to the credentials that let AI applications call external model providers.
Why CVE-2026-42208 is a bigger deal than a typical LLM bug
LiteLLM sits in the middle of many production AI stacks, acting as a unified proxy/SDK layer that lets developers route requests to different model providers through one interface. When that middle layer is compromised, the blast radius can be wide: systems that depend on LiteLLM for authentication, provider credential handling, or secret-based routing can be affected even if their application code hasn’t changed.
The maintainers say the attack could enable “unauthorised access to the proxy and the credentials it manages.” That wording matters because it suggests more than data exposure. If an attacker can also alter stored values, they may be able to redirect traffic, weaken access controls, or set up conditions for follow-on compromise.
Misryoum also notes that LiteLLM’s popularity increases the incentive for opportunistic attackers to test it at scale. The project has roughly 45k stars and 7.6k forks on public code hosting, a size large enough that vulnerabilities often become “high-value targets” for threat actors once they are public.
Patch timeline: upgrade to 1.83.7 to close the SQL injection
A fix was delivered in LiteLLM version 1.83.7. The change replaces string concatenation in the vulnerable logic with parameterized queries, the standard mitigation for SQL injection flaws.
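To illustrate the difference the patch makes, here is a minimal, self-contained sketch of the two query styles. The table and column names are hypothetical, chosen for illustration only; this is not LiteLLM’s actual schema or code.

```python
import sqlite3

# Illustrative in-memory database; "api_keys" is a hypothetical table name,
# not LiteLLM's real schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE api_keys (token TEXT, owner TEXT)")
conn.execute("INSERT INTO api_keys VALUES ('sk-secret-123', 'alice')")

def lookup_vulnerable(token: str):
    # String concatenation: attacker-controlled input becomes part of the SQL text.
    query = "SELECT owner FROM api_keys WHERE token = '" + token + "'"
    return conn.execute(query).fetchall()

def lookup_safe(token: str):
    # Parameterized query: the driver passes the value separately from the SQL
    # text, so metacharacters in the input are treated as data, not syntax.
    return conn.execute(
        "SELECT owner FROM api_keys WHERE token = ?", (token,)
    ).fetchall()

# A crafted value of the kind that could arrive in an Authorization header:
payload = "' OR '1'='1"
print(lookup_vulnerable(payload))  # returns every row: the injection succeeds
print(lookup_safe(payload))        # returns []: the payload is just a literal
```

The vulnerable variant turns the bearer-token string into SQL syntax; the parameterized variant is the standard mitigation the maintainers applied.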
Researchers have observed exploitation beginning around 36 hours after the bug was disclosed publicly on April 24. While that isn’t the fastest seen in the broader vulnerability ecosystem, the activity described is notable for its precision. Attack attempts were targeted toward tables that hold API keys, provider credentials (including for OpenAI, Anthropic, and Bedrock), and environment/configuration data.
Misryoum interpretation: the attackers don’t appear to be fumbling. Instead of probing indiscriminately, they moved toward the “secrets live here” parts of the database schema—an indicator that either prior internal knowledge existed or the attackers quickly derived the correct targets.
Active exploitation patterns: crafted headers and table-aware payloads
The observed requests used crafted Authorization: Bearer headers and were sent to the /chat/completions route. Researchers noted there were no attempts against benign tables, which suggests the operator focused directly on high-value data rather than mapping the system.
In a second phase, attackers reportedly switched IP addresses (often a sign of evasion and operational caution) and then reran the SQL injection attempts. The payloads were also described as fewer and more precise, consistent with someone refining their approach after learning the relevant table names and structures.
For defenders, this is a reminder that “unusual API traffic” may not look like generic scanning. If the requests go straight to the interesting endpoints with table-aware behavior, the normal indicators of early probing may not appear.
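One low-cost detection approach is to scan gateway access logs for bearer tokens that contain SQL metacharacters, since a legitimate API key never should. A rough sketch follows; the log field names (`authorization`, `path`) are assumptions about a generic access-log format, not a LiteLLM-specific schema, and the pattern list is deliberately narrow.

```python
import re

# Heuristic patterns for SQL syntax inside a bearer token. Tune for your
# environment; this list is illustrative, not exhaustive.
SQLI_HINTS = re.compile(r"('|--|;|\bUNION\b|\bSELECT\b)", re.IGNORECASE)

def suspicious(entry: dict) -> bool:
    """Flag a log entry whose Authorization bearer token looks like SQL."""
    auth = entry.get("authorization", "")
    token = auth.removeprefix("Bearer ").strip()
    return bool(SQLI_HINTS.search(token))

logs = [
    {"path": "/chat/completions", "authorization": "Bearer sk-normal-key-abc"},
    {"path": "/chat/completions",
     "authorization": "Bearer ' UNION SELECT token FROM keys--"},
]
flagged = [e for e in logs if suspicious(e)]
print(len(flagged))  # 1
```

A filter like this will not catch every payload, but it makes the "straight to the interesting endpoint" behavior described above visible in otherwise quiet logs.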
If you can’t upgrade: treat exposed instances as compromised and rotate keys
The most practical takeaway is blunt: any LiteLLM instance still running a vulnerable version should be treated as potentially compromised. Misryoum recommends assuming that exposed virtual API keys, master keys, and provider credentials could be read—and then used elsewhere.
Rotating secrets is not optional in this scenario. Since LiteLLM can store provider credentials and configuration details, a stolen set can enable attackers to impersonate your proxy behavior, make requests under your credentials, or use the leaked data to accelerate further compromise.
For teams that cannot upgrade immediately, maintainers suggest a workaround: setting disable_error_logs: true under general_settings. The idea is to block the path through which malicious inputs can reach the vulnerable query logic.
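Per the maintainers' description, the workaround is a one-line addition to the proxy's configuration file. A minimal sketch of where the setting would sit:

```yaml
# Mitigation only: reduces exploitability, but does not replace
# upgrading to 1.83.7+ and rotating secrets.
general_settings:
  disable_error_logs: true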
Misryoum’s angle here is risk management: workarounds can reduce exploitability, but they rarely restore trust once exploitation is suspected. The safest path still runs through upgrading to 1.83.7 or later, then rotating everything that could plausibly be in the database.
The broader supply-chain lesson: secret theft is now “in the workflow”
This LiteLLM issue arrives after earlier supply-chain activity targeting the project. TeamPCP-related activity reportedly involved malicious PyPI packages that deployed an infostealer to harvest credentials, tokens, and secrets from infected systems.
Put together, the pattern is clear: attackers increasingly aim at the operational “glue” between AI applications and the outside world—middleware, gateways, and dependency supply chains—because those components concentrate authentication material.
Looking ahead, Misryoum expects more organizations to adopt tighter exposure controls for AI gateways: limiting internet reachability, requiring stronger network segmentation, logging and alerting around unusual Authorization header usage, and instituting rapid secret rotation playbooks for any suspected proxy compromise.
Even when the underlying models remain safe, the infrastructure around them can still be the real target. CVE-2026-42208 is another reminder that in modern AI stacks, the credentials are the crown jewels—and SQL injection can open the vault.