In March 2026, two versions of the LiteLLM package on PyPI — versions 1.82.7 and 1.82.8 — were compromised. The packages contained a three-stage payload: credential harvesting, Kubernetes lateral movement, and a persistent backdoor. Over 40,000 downloads happened in roughly 40 minutes before PyPI quarantined the packages.

LiteLLM is a popular Python library for calling multiple LLM providers through a unified API. It's used in production AI pipelines across thousands of companies. If you use pip install litellm in your CI/CD pipeline or Dockerfile without pinning versions, you were in the blast radius.

The attack is worth understanding in detail because of how it started: not through a compromised maintainer account, not through a typosquatting package, but through a poisoned GitHub Action in LiteLLM's own CI/CD pipeline.

How the attack worked

Stage 1: The CI/CD entry point

The attackers, a group tracked as TeamPCP, compromised a third-party GitHub Action used in LiteLLM's CI pipeline: a Trivy security scanning action that LiteLLM referenced by tag rather than by commit SHA. The attackers gained access to the action's repository, modified the code, and pushed a new version under the existing tag.

When LiteLLM's CI pipeline ran, it pulled the poisoned action. The action injected malicious code into the build artifacts. The compromised build produced legitimate-looking LiteLLM packages that included a hidden payload. Those packages were published to PyPI through the normal release process, signed with the project's normal credentials.

This is the critical detail: the attack didn't compromise LiteLLM's source code repository. The source code on GitHub was clean. The compromise happened in the build pipeline, between source code and published package.

Stage 2: The payload

The compromised packages installed a three-stage payload:

  1. Credential harvesting. The package scanned the environment for API keys, AWS credentials, database connection strings, and Kubernetes service account tokens. It targeted ~/.aws/credentials, environment variables matching common patterns (*_API_KEY, *_SECRET, DATABASE_URL), and mounted Kubernetes secrets.
  2. Lateral movement. If running in a Kubernetes cluster, the payload used harvested service account tokens to enumerate pods, services, and secrets across namespaces. It attempted to escalate privileges through known RBAC misconfigurations.
  3. Persistent backdoor. The payload installed a reverse shell that persisted across container restarts by writing to mounted volumes and modifying startup scripts.
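The credential-harvesting stage is easy to reason about defensively: anything a malicious package can read, any code running in your environment can read. Here is a minimal Python sketch (the helper name and pattern list are illustrative, based on the patterns described above) that audits which environment variables a compromised dependency could have harvested:

```python
import fnmatch
import os

# Environment-variable patterns the payload reportedly targeted.
# Extend this list for your own environment's naming conventions.
SENSITIVE_PATTERNS = ["*_API_KEY", "*_SECRET", "DATABASE_URL"]

def find_exposed_credentials(env: dict) -> list:
    """Return names of environment variables matching known-sensitive patterns."""
    return sorted(
        name
        for name in env
        if any(fnmatch.fnmatchcase(name, pat) for pat in SENSITIVE_PATTERNS)
    )

if __name__ == "__main__":
    # Print names only, never values: this is an audit, not an exfiltration.
    for name in find_exposed_credentials(dict(os.environ)):
        print(name)
```

Running this in a CI job or production pod is a quick way to see your own blast radius; the same reasoning applies to files like ~/.aws/credentials and mounted Kubernetes secrets.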

The harvested credentials were exfiltrated to attacker-controlled infrastructure. The same group used similar techniques against Checkmarx and Telnyx in the same period, suggesting a coordinated campaign targeting developer tooling supply chains.

Stage 3: The timeline

The attack window was remarkably short:

40 minutes. That's all the window an attacker needs to compromise tens of thousands of environments. And 40 minutes is actually fast by detection standards: many supply chain attacks go unnoticed for days or weeks.

Why CI/CD pipelines are the new attack surface

The LiteLLM attack exploits a trust chain that most teams don't think about: your CI/CD pipeline trusts GitHub Actions, which trust their upstream repositories, which trust their maintainers. A single compromised link anywhere in that chain can inject code into your build artifacts.

Most teams treat their CI/CD pipeline as trusted infrastructure. They pin their application dependencies carefully but reference GitHub Actions by tag (uses: some-action/trivy@v1). Tags are mutable. A compromised tag points to compromised code. Your pipeline runs it with whatever permissions you've granted the workflow.

The pattern is:

  1. Attacker compromises a popular GitHub Action (or any dependency in the build pipeline)
  2. Attacker modifies the action code and pushes under the existing tag
  3. Every CI pipeline that references that action by tag pulls the compromised version on next run
  4. Compromised code runs with the pipeline's permissions — often including PyPI/npm publish tokens, AWS credentials, and Docker registry access

This isn't theoretical. It happened to LiteLLM. It happened to Checkmarx. It will happen again.

What to do right now

Check if you were affected

If you installed LiteLLM in March 2026, check which version you got:

```
pip show litellm | grep Version
# If it shows 1.82.7 or 1.82.8, you were affected

# Check pip install history, then roll back
pip install litellm==1.82.6  # Known safe version
```

If you were affected: rotate all credentials in the affected environment. AWS keys, API tokens, database passwords, Kubernetes service account tokens. Assume everything in that environment was exfiltrated.

Pin GitHub Actions to SHAs

This is the single most impactful change. Stop referencing actions by tag. Pin them to a specific commit SHA:

```yaml
# Bad: mutable tag, can be compromised
- uses: aquasecurity/trivy-action@master

# Better: version tag, but still mutable
- uses: aquasecurity/trivy-action@v0.28.0

# Best: immutable commit SHA
- uses: aquasecurity/trivy-action@a259c2542c1eab15786e3907a1e5160040013043
```

A commit SHA is immutable: it's a cryptographic hash of the commit's contents, so even an attacker with full control of the repository can't make an existing SHA resolve to different code. Tools like Dependabot and Renovate can automate SHA pinning and updates.
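To find tag-pinned actions across a repository, a short script can flag any uses: reference whose ref isn't a full 40-character commit SHA. This is a rough sketch (the regex doesn't handle every YAML quoting style), not a replacement for Dependabot or Renovate:

```python
import re
from pathlib import Path

# Matches "uses: owner/action@ref" lines in workflow YAML.
USES_RE = re.compile(r"uses:\s*([^\s#]+)@([^\s#]+)")
SHA_RE = re.compile(r"^[0-9a-f]{40}$")

def unpinned_actions(workflow_text: str) -> list:
    """Return 'action@ref' strings whose ref is not a full commit SHA."""
    return [
        f"{action}@{ref}"
        for action, ref in USES_RE.findall(workflow_text)
        if not SHA_RE.match(ref)
    ]

if __name__ == "__main__":
    for wf in Path(".github/workflows").glob("*.y*ml"):
        for ref in unpinned_actions(wf.read_text()):
            print(f"{wf}: {ref}")
```

Anything this prints is a mutable reference: a tag or branch that a compromised upstream repository could silently repoint at malicious code.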

Pin package versions in CI

Never run pip install litellm without a version pin in a CI pipeline or Dockerfile. Use a lockfile:

```
# requirements.txt with hashes (pip-compile can generate this)
litellm==1.82.6 \
    --hash=sha256:abc123...

# Or use a lockfile tool
pip-compile --generate-hashes requirements.in
```

Hash verification ensures the package you install matches the package you audited, even if the version number is reused (which PyPI prevents, but other registries may not).
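Under the hood, hash verification is just comparing a sha256 digest of the downloaded artifact against the recorded one. A minimal Python sketch of the check pip performs per file in hash-checking mode (function names are illustrative):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the sha256 digest of a downloaded wheel or sdist."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large wheels don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, expected_sha256: str) -> bool:
    """True only if the artifact on disk matches the recorded digest."""
    return sha256_of(path) == expected_sha256
```

If the digest doesn't match what your lockfile recorded when you audited the package, the install should fail, regardless of what the registry claims the version is.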

Audit your CI/CD permissions

Most GitHub Actions workflows run with more permissions than they need. Review your workflow files:

```yaml
# Restrict permissions at the workflow level
permissions:
  contents: read
  packages: none

# Only grant publish permissions to the release job
jobs:
  release:
    permissions:
      contents: read
      id-token: write  # For trusted publishing
```

Use PyPI Trusted Publishers

PyPI supports Trusted Publishers — OIDC-based authentication that eliminates long-lived API tokens. Instead of storing a PyPI token in your GitHub secrets, you configure PyPI to trust your specific GitHub repository and workflow. This means a compromised action can't use stolen credentials to publish to a different project.
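A release job using trusted publishing looks roughly like this (a sketch, assuming you've registered this repository and workflow as a Trusted Publisher in your PyPI project settings; pypa/gh-action-pypi-publish picks up the OIDC token automatically):

```yaml
jobs:
  release:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write  # Lets PyPI verify this workflow's OIDC identity
    steps:
      - uses: actions/checkout@v4                      # pin to a SHA in real use
      - run: python -m pip install build && python -m build
      - uses: pypa/gh-action-pypi-publish@release/v1   # pin to a SHA in real use
```

Because the OIDC token is short-lived and scoped to this repository and workflow, there is no long-lived PyPI token for a poisoned action to steal.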

Monitor for anomalous packages

Tools like Socket and Snyk can detect anomalous changes in package behavior between versions — new network calls, file system access patterns, or obfuscated code that wasn't present before. Adding these to your CI pipeline creates an early warning system.

The bigger picture

The LiteLLM attack is a preview of what's coming. AI developer tooling is growing explosively — new packages, new frameworks, new CLI tools appearing weekly. Every one of them is a potential supply chain target. AI packages are particularly attractive to attackers because they typically run with access to API keys, cloud credentials, and customer data.

The OWASP Top 10 for Agentic AI, published in December 2025, lists supply chain attacks as a top-10 risk specifically because AI agents often have broad permissions and access to sensitive data. A compromised AI package doesn't just steal code — it steals API keys that provide access to LLM providers, cloud infrastructure, and customer data.

The fix isn't to stop using open-source packages. It's to stop treating your build pipeline as a trusted environment. Pin your actions. Pin your dependencies. Restrict your permissions. Monitor for anomalies. And assume that any unpinned dependency is a vector that an attacker will eventually exploit.

This is also why the tools you run locally matter. Your terminal sees every API key and credential on your machine. If your terminal phones home, that's another link in the trust chain an attacker can exploit. We built yaw with zero telemetry and BYOK AI specifically because developer tools shouldn't be a supply chain risk.

Because 40 minutes is all it takes.

Published by Yaw Labs.
