security · Runyard Team (@runyard_dev) · 6 min read

Tags: #security #litellm #supply-chain #ai-security #checker

Runyard.dev — Find AI Models That Run on Your Hardware

Have You Been Pawned by LiteLLM?

On March 24, 2026, two versions of LiteLLM on PyPI were replaced with malware that silently stole every API key, SSH key, cloud credential, and crypto wallet on the machine. The malicious versions were live for roughly nine hours on a package that sees millions of downloads a day. Paste your pip freeze output below to check if you were affected.

Check Your Environment Now

LiteLLM Pawned? Checker (Supply Chain Scanner)

Paste the output of pip freeze or pip list below, then click Check.
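If the interactive checker is unavailable, the same logic fits in a few lines of Python. A minimal offline sketch (the compromised versions are the two listed in the next section; the function name `check_freeze` is ours):

```python
# Offline sketch of the checker: scan captured `pip freeze` output for the
# two compromised litellm releases named in this post.
COMPROMISED = {"1.82.7", "1.82.8"}

def check_freeze(freeze_output: str) -> list[str]:
    """Return a warning line for any compromised litellm pin found."""
    hits = []
    for line in freeze_output.splitlines():
        name, sep, version = line.strip().partition("==")
        if sep and name.lower() == "litellm" and version in COMPROMISED:
            hits.append(f"litellm {version} is COMPROMISED -- follow the response plan below")
    return hits

# Example: feed it the text of `pip freeze`
print(check_freeze("litellm==1.82.7\nrequests==2.31.0"))  # flags litellm 1.82.7
```

Capture your environment with `pip freeze > freeze.txt` and pass the file contents to `check_freeze`.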

What Were the Compromised Versions?

  • litellm 1.82.7 — published March 24, 2026 ~10:39 UTC. Payload injected into proxy_server.py; triggers on litellm.proxy import.
  • litellm 1.82.8 — published March 24, 2026 ~10:52 UTC. Installed litellm_init.pth in site-packages; executes on every Python startup — no import needed.
  • Both versions were removed from PyPI by ~20:15 UTC the same day.
  • Safe versions: 1.82.6 or any earlier release.

How to Confirm Manually

terminal (bash)
# 1. Check your installed version
pip show litellm | grep Version

# 2. Look for the malicious .pth launcher
python -c "import site; print('\n'.join(site.getsitepackages()))"
# Check each path printed above for litellm_init.pth

# 3. Look for the persistence backdoor
ls ~/.config/sysmon/sysmon.py 2>/dev/null && echo "BACKDOOR FOUND"
systemctl --user status sysmon 2>/dev/null

# 4. Check network logs for C2 traffic
grep -r "models.litellm.cloud\|83.142.209.11" /var/log/ 2>/dev/null
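The file checks above can also be automated from Python. A sketch, assuming the indicator names reported in this post (litellm_init.pth and the ~/.config/sysmon backdoor); function names are ours:

```python
import os
import site
from pathlib import Path

# Indicators of compromise described in this post
PTH_NAME = "litellm_init.pth"
BACKDOOR = Path.home() / ".config" / "sysmon" / "sysmon.py"

def scan_site_dirs(dirs: list[str]) -> list[str]:
    """Return the path of the malicious .pth launcher in each dir that has one."""
    return [os.path.join(d, PTH_NAME) for d in dirs
            if os.path.isfile(os.path.join(d, PTH_NAME))]

def scan() -> list[str]:
    """Scan system and per-user site-packages, plus the persistence backdoor."""
    dirs = site.getsitepackages() + [site.getusersitepackages()]
    hits = scan_site_dirs(dirs)
    if BACKDOOR.is_file():
        hits.append(str(BACKDOOR))
    return hits

if scan():
    print("INDICATORS FOUND:", scan())
```

Run it once per Python interpreter on the machine, since each has its own site-packages directories.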

If You Were Affected: Full Response Plan

A compromise of this severity cannot be patched in place. The attacker had full read access to your environment variables, SSH keys, cloud credentials, and shell history the moment Python started. Every credential must be treated as stolen.

  1. Isolate the machine — disconnect it from all networks while you work through this list
  2. Remove litellm_init.pth from every Python site-packages directory on the system
  3. Remove the backdoor: rm -rf ~/.config/sysmon/ && systemctl --user disable sysmon
  4. Check Kubernetes for rogue pods: kubectl get pods -n kube-system
  5. Rotate all credentials: SSH keys, GitHub personal access tokens, AWS/GCP/Azure IAM keys, Kubernetes service account tokens, database passwords, Slack/Discord webhooks, all AI provider API keys (OpenAI, Anthropic, etc.), and any .env file secrets
  6. Audit cloud accounts for unauthorized IAM users, API keys, or resource creation during the window of March 24, 10:39–20:15 UTC
  7. Consider a full OS reinstall for machines that held production secrets — the backdoor can drop additional payloads
  8. Pin litellm to 1.82.6 and add dependency hash verification going forward
cleanup.sh (bash)
#!/bin/bash
# Step 1 — Remove malicious .pth file from system and per-user site-packages
for dir in $(python -c "import site; print(' '.join(site.getsitepackages() + [site.getusersitepackages()]))"); do
  rm -f "$dir/litellm_init.pth"
  echo "Cleaned: $dir"
done

# Step 2 — Remove persistence backdoor
rm -rf ~/.config/sysmon/
systemctl --user disable --now sysmon 2>/dev/null

# Step 3 — Downgrade to safe version
pip install litellm==1.82.6

# Step 4 — Verify
pip show litellm | grep Version
python -c "import litellm; print('litellm OK')"

How This Attack Worked

The attacker (TeamPCP) did not compromise LiteLLM directly. They first breached Trivy — a widely used CI/CD security scanner — and stole LiteLLM's PyPI publishing credentials from its pipeline. The two poisoned versions were uploaded without any corresponding GitHub release tags, which was the key forensic signal. The 1.82.8 variant was particularly dangerous: it installed a .pth file, which Python auto-executes on every interpreter start, so the credential harvester ran in every Python process on the machine whether or not litellm was ever imported.

Red flag for future attacks: if a PyPI package version has no matching GitHub release tag, do not install it. Legitimate maintainers tag before publishing.
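That tag check can be scripted. A sketch under two assumptions — the public PyPI JSON API (https://pypi.org/pypi/&lt;name&gt;/json) and the GitHub tags API (https://api.github.com/repos/&lt;owner&gt;/&lt;repo&gt;/tags) — with the comparison itself kept as a pure function:

```python
import json
import urllib.request

def untagged_releases(pypi_versions, github_tags):
    """Return PyPI versions with no matching GitHub tag.

    Tags like 'v1.82.6' are normalized by stripping the leading 'v'.
    """
    normalized = {t.lstrip("v") for t in github_tags}
    return sorted(v for v in pypi_versions if v not in normalized)

def fetch_json(url):
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

def audit(package: str, repo: str):
    """repo is 'owner/name'. Note: the GitHub tags API paginates, so older
    releases may be falsely flagged unless you fetch every page."""
    pypi = fetch_json(f"https://pypi.org/pypi/{package}/json")
    tags = fetch_json(f"https://api.github.com/repos/{repo}/tags")
    return untagged_releases(pypi["releases"].keys(), [t["name"] for t in tags])
```

Run `audit` before upgrading; any version it returns deserves scrutiny before you install it.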

Prevention: Eliminate the Attack Surface

LiteLLM is popular because it provides a single proxy to dozens of AI cloud providers — but that means it also aggregates dozens of API keys in one place, making it a high-value target. If you run local models on your own hardware, you reduce how many cloud API keys need to exist in your environment at all. Tools like runyard.dev help you match models to your exact GPU and RAM so you can move workloads fully on-device. Fewer secrets on disk means a smaller blast radius when supply chain attacks like this one land.

  • Pin every AI dependency with exact versions and hashes in your requirements file
  • Never auto-pull latest in environments that have access to cloud credentials or SSH keys
  • Use isolated containers or VMs for LLM tooling — separate from credentials
  • Audit your CI/CD pipeline tools: security scanners run in privileged contexts and are targeted specifically
  • Enable cloud provider alerts for new IAM keys, unusual regions, or new resource creation
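Hash pinning from the first bullet looks like this in practice. A sketch only — the digest below is a placeholder, not litellm's real hash; generate real ones with `pip hash <wheel>` or pip-tools' `pip-compile --generate-hashes`:

```
# requirements.txt -- install with:
#   pip install --require-hashes -r requirements.txt
# (placeholder digest below; generate real ones with
#  pip-compile --generate-hashes or pip hash <wheel>)
litellm==1.82.6 \
    --hash=sha256:<replace-with-verified-digest>
```

With --require-hashes, pip refuses any artifact whose digest does not match, so a silently replaced upload like 1.82.7 fails to install instead of executing.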
