On March 24, 2026, two versions of the LiteLLM Python package on PyPI were silently replaced with malware. The attack stole API keys, SSH keys, cloud credentials, and cryptocurrency wallets — then installed a persistent backdoor that phoned home every 50 minutes. Because LiteLLM is downloaded ~3.4 million times per day and sits between your application and every AI provider you use, the blast radius was enormous. This is what happened, who did it, and exactly what you need to do.
LiteLLM is a Python proxy library that provides a unified interface to over 100 AI model providers — OpenAI, Anthropic, Google, Azure, AWS Bedrock, and more. It's used in AI agents, chatbots, developer tooling, and MCP servers. Because it handles all your API keys in one place, a compromise of LiteLLM means a compromise of every AI credential on the affected machine.
The threat actor, tracked as TeamPCP, did not attack LiteLLM directly. They first compromised Trivy — a widely-used open-source security scanner that runs inside many CI/CD pipelines, including LiteLLM's own. Stolen credentials from Trivy gave TeamPCP access to LiteLLM's PyPI publishing pipeline. Two poisoned versions — 1.82.7 and 1.82.8 — were uploaded directly to PyPI. No corresponding GitHub release tags existed for either version, which was the key forensic indicator that something was wrong.
Red flag to watch for: if a PyPI release has no matching GitHub tag, treat it as suspicious. Legitimate maintainers almost always cut a GitHub release before publishing to PyPI.
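This cross-check is easy to script. A minimal sketch of the comparison — the version lists here are illustrative; in practice you would fetch them from the PyPI JSON API (`https://pypi.org/pypi/<package>/json`) and the GitHub tags API:

```python
# Flag PyPI releases that have no matching GitHub tag.
# Inputs are illustrative; fetch real lists from the PyPI and GitHub APIs.
def untagged_releases(pypi_versions, github_tags):
    tags = {t.lstrip("v") for t in github_tags}  # tags are often "v1.2.3"
    return sorted(v for v in pypi_versions if v not in tags)

print(untagged_releases(["1.82.6", "1.82.7", "1.82.8"], ["v1.82.6"]))
# -> ['1.82.7', '1.82.8']
```

Run against the real LiteLLM metadata at the time, this check would have flagged exactly the two poisoned versions.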
The malicious package installed a file named litellm_init.pth into Python's site-packages directory. The interpreter processes every .pth file in site-packages at startup, and any line beginning with `import` is executed as code: no import of litellm required, no explicit trigger needed. The moment any Python process started on the infected system, the payload ran. This included IDE terminals, background services, cron jobs, and MCP servers auto-loaded by tools like Cursor.
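The mechanism is easy to demonstrate safely in a scratch directory. A minimal sketch — `demo_init.pth` and the `PTH_DEMO` variable are harmless stand-ins, not part of the malware:

```python
import os
import site
import tempfile

# Demonstrate the .pth mechanism: when site processes a directory,
# any .pth line that begins with "import" is executed as code.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "demo_init.pth"), "w") as f:
    # A real attacker hides an arbitrary payload behind a one-liner like this.
    f.write("import os; os.environ['PTH_DEMO'] = 'executed'\n")

site.addsitedir(tmp)  # the same processing site-packages gets at startup
print(os.environ.get("PTH_DEMO"))  # -> executed
```

Nothing in the demo ever imports the `.pth` file by name — processing the directory is enough, which is exactly why the real launcher fired in every Python process on the machine.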
Stage 2 performed a full system enumeration and exfiltrated everything: environment variables, SSH private keys and configs, .env files, AWS/GCP/Azure credentials and metadata endpoint tokens, Kubernetes service account tokens and secrets, Terraform and Helm artifacts, CI/CD secrets, GitHub tokens, shell history, .gitconfig, and cryptocurrency wallet files. Data was encrypted with AES-256-CBC under a random session key, that session key was wrapped with a 4096-bit RSA public key, and the package was sent to attacker-controlled domains that spoofed LiteLLM's own branding.
# What Stage 2 collected from infected systems:
Environment variables (API keys, tokens, passwords)
~/.ssh/id_rsa, id_ed25519, known_hosts, config
~/.aws/credentials, ~/.aws/config
~/.config/gcloud/credentials.db
~/.kube/config + Kubernetes service account tokens
Terraform state files (.tfstate)
CI/CD secrets (GitHub Actions, GitLab CI, CircleCI)
Shell history (.bash_history, .zsh_history)
.env files (all directories, recursive)
Crypto wallet files (Bitcoin, Ethereum, Solana)
.gitconfig (may contain tokens)
# Exfiltrated to attacker-controlled C2:
models.litellm.cloud (spoofing LiteLLM's domain)
checkmarx.zone/raw (IP: 83.142.209.11)

Stage 3 installed a persistent backdoor as ~/.config/sysmon/sysmon.py and registered it as a systemd user service. Every 50 minutes it polled a remote C2 server for new payloads and commands. The malware also attempted to create privileged pods in the kube-system namespace of any discovered Kubernetes cluster, turning a single compromised developer machine into a foothold in production infrastructure.
According to Sonatype's 2024 State of the Software Supply Chain report, 512,847 malicious open-source packages were discovered in a single year — a 156% year-over-year increase. Malicious packages on PyPI, npm, and other registries grew 1,300% between 2020 and 2023 (ReversingLabs). The LiteLLM attack is not an outlier; it is part of an accelerating pattern targeting AI infrastructure specifically.
You are potentially affected if you installed litellm between approximately 10:39 and 20:15 UTC on March 24, 2026 without a pinned version, or if your environment auto-updated to the latest release. Check which version you have:
# Check installed litellm version
pip show litellm
# If you see 1.82.7 or 1.82.8 — you are affected
# Safe version: 1.82.6 or earlier
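The same version check is scriptable if you have many machines to sweep. A minimal sketch using only the standard library — the compromised set reflects the two poisoned releases named above:

```python
from importlib import metadata

COMPROMISED = {"1.82.7", "1.82.8"}  # the two poisoned releases

def classify(version):
    """Classify a litellm version string against the known-bad set."""
    return "COMPROMISED" if version in COMPROMISED else "ok"

def litellm_status():
    """Report the status of the locally installed litellm, if any."""
    try:
        return classify(metadata.version("litellm"))
    except metadata.PackageNotFoundError:
        return "not installed"

print(litellm_status())
```

Exit-code wrappers or fleet tooling can call `litellm_status()` and alert on anything other than "ok" or "not installed".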
# Check for the malicious .pth launcher
find /usr -name "litellm_init.pth" 2>/dev/null
find ~/.local -name "litellm_init.pth" 2>/dev/null
python -c "import site; print(site.getsitepackages())"
# Then check those directories for litellm_init.pth
# Check for the persistence backdoor
ls -la ~/.config/sysmon/
systemctl --user status sysmon 2>/dev/null
cat ~/.config/systemd/user/*.service 2>/dev/null | grep sysmon

If you installed either compromised version, treat the entire system as fully compromised. Do not attempt to patch around the issue — the attacker had complete credential access and may already have used those credentials before you detected the infection.
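The filesystem checks above can be rolled into a single sweep. An illustrative sketch covering the two known on-disk indicators — the stage-1 .pth launcher and the stage-3 backdoor path:

```python
import os
import site

BACKDOOR = os.path.expanduser("~/.config/sysmon/sysmon.py")  # stage 3

def find_iocs():
    """Return paths of known on-disk indicators present on this machine."""
    hits = [BACKDOOR] if os.path.exists(BACKDOOR) else []
    dirs = site.getsitepackages() + [site.getusersitepackages()]
    for d in dirs:
        launcher = os.path.join(d, "litellm_init.pth")  # stage 1
        if os.path.exists(launcher):
            hits.append(launcher)
    return hits

print(find_iocs() or "no known indicators found")
```

An empty result does not prove the machine is clean — it only means these two specific artifacts are absent — so the version check above remains the primary test.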
# 1. Remove malicious .pth file from all site-packages (system and user)
python -c "import site; print('\n'.join(site.getsitepackages() + [site.getusersitepackages()]))" | while read -r dir; do
  rm -f "$dir/litellm_init.pth"
done
# 2. Remove persistence backdoor
rm -rf ~/.config/sysmon/
systemctl --user stop sysmon 2>/dev/null
systemctl --user disable sysmon 2>/dev/null
# 3. Downgrade to safe version
pip install litellm==1.82.6
# 4. Verify clean install
pip show litellm | grep Version
python -c "import litellm; print('litellm loaded cleanly')"

# Pin litellm with a hash to prevent silent upgrades
# Generate hashes with: pip download litellm==1.82.6 -d /tmp/pkg && pip hash /tmp/pkg/litellm-*.whl
litellm==1.82.6 \
--hash=sha256:<insert-hash-here>
# Or use pip-compile to lock all transitive dependencies:
# pip install pip-tools
# pip-compile requirements.in --generate-hashes

The LiteLLM Docker proxy image (ghcr.io/berriai/litellm) was NOT affected by this attack because it pins its own dependencies internally. If you are running LiteLLM in production, the Docker image is now the recommended deployment method.
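Behind `--hash` pins, the verification itself is a simple digest comparison: pip refuses any artifact whose SHA-256 does not match the pinned value. A minimal sketch of that check — not pip's actual implementation:

```python
import hashlib

def sha256_of(path):
    """Stream a file through SHA-256, as pip does for downloaded artifacts."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_pin(path, pin):
    """pin is the requirements-file form 'sha256:<hex digest>'."""
    algo, _, digest = pin.partition(":")
    return algo == "sha256" and sha256_of(path) == digest
```

Because the digest is computed over the exact bytes pip downloads, a silently replaced wheel — even one published under the same version number — fails the pin and the install aborts.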
LiteLLM is commonly used as a local proxy to route between different AI providers — including local models served by Ollama. If you have been using LiteLLM to manage multiple providers alongside locally-running models, this attack could have exposed every API key in your environment. Tools like runyard.dev help you identify which models can run entirely on your own hardware without ever sending data to an external API — reducing the attack surface by eliminating the need for multiple cloud provider keys in the first place. The fewer secrets on disk, the less there is to steal.
OWASP ranked Supply Chain Vulnerability at #3 in its LLM Top 10 for 2025 — and this attack is a textbook example of why. As AI tooling matures, the libraries that sit between your application and AI providers will become increasingly attractive targets. Treat every AI dependency with the same scrutiny you would apply to a payment or auth library: pin it, audit it, and isolate its access.