A poisoned GitHub Action compromised a popular AI package. 40,000 downloads in 40 minutes. Your CI/CD pipeline is an attack surface.
In March 2026, two versions of the LiteLLM package on PyPI — versions 1.82.7 and 1.82.8 — were compromised. The packages contained a three-stage payload: credential harvesting, Kubernetes lateral movement, and a persistent backdoor. Over 40,000 downloads happened in roughly 40 minutes before PyPI quarantined the packages.
LiteLLM is a popular Python library for calling multiple LLM providers through a unified API. It's used in production AI pipelines across thousands of companies. If you use pip install litellm in your CI/CD pipeline or Dockerfile without pinning versions, you were in the blast radius.
The attack is worth understanding in detail because of how it started: not through a compromised maintainer account, not through a typosquatting package, but through a poisoned GitHub Action in LiteLLM's own CI/CD pipeline.
The attackers — tracked as TeamPCP — compromised a third-party GitHub Action used in LiteLLM's CI pipeline. Specifically, a Trivy security scanning action that LiteLLM referenced by tag rather than by SHA. The attackers gained access to the action's repository, modified the code, and pushed a new version under the existing tag.
When LiteLLM's CI pipeline ran, it pulled the poisoned action. The action injected malicious code into the build artifacts. The compromised build produced legitimate-looking LiteLLM packages that included a hidden payload. Those packages were published to PyPI through the normal release process, signed with the project's normal credentials.
This is the critical detail: the attack didn't compromise LiteLLM's source code repository. The source code on GitHub was clean. The compromise happened in the build pipeline, between source code and published package.
The compromised packages installed a three-stage payload:

1. Credential harvesting: ~/.aws/credentials, environment variables matching common patterns (*_API_KEY, *_SECRET, DATABASE_URL), and mounted Kubernetes secrets.
2. Kubernetes lateral movement.
3. A persistent backdoor.

The harvested credentials were exfiltrated to attacker-controlled infrastructure. The same group used similar techniques against Checkmarx and Telnyx in the same period, suggesting a coordinated campaign targeting developer tooling supply chains.
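To see what the first stage would have grabbed from your own shell, you can sweep the environment the same way. This is an illustrative sketch, not the actual payload: the pattern list is taken from the targets above, and the helper name is my own.

```python
import os
import re

# Credential-name patterns reported as harvest targets
# (*_API_KEY, *_SECRET, DATABASE_URL); extend as needed.
CREDENTIAL_PATTERNS = [r".*_API_KEY$", r".*_SECRET$", r"^DATABASE_URL$"]

def find_exposed_credentials(environ):
    """Return env var names matching common credential patterns."""
    combined = re.compile("|".join(CREDENTIAL_PATTERNS))
    return sorted(name for name in environ if combined.match(name))

if __name__ == "__main__":
    # Audit your own environment the way a harvester would
    for name in find_exposed_credentials(os.environ):
        print(name)
```

Anything this prints in your CI environment is something a poisoned build step could have exfiltrated.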
The attack window was remarkably short:
40 minutes. That's all the window an attacker needs to compromise tens of thousands of environments. And 40 minutes is actually fast for detection: many supply chain attacks go unnoticed for days or weeks.
The LiteLLM attack exploits a trust chain that most teams don't think about: your CI/CD pipeline trusts GitHub Actions, which trust their upstream repositories, which trust their maintainers. A single compromised link anywhere in that chain can inject code into your build artifacts.
Most teams treat their CI/CD pipeline as trusted infrastructure. They pin their application dependencies carefully but reference GitHub Actions by tag (uses: some-action/trivy@v1). Tags are mutable. A compromised tag points to compromised code. Your pipeline runs it with whatever permissions you've granted the workflow.
The pattern is:

1. Your workflow references a third-party action by a mutable tag.
2. An attacker compromises the action's repository and pushes malicious code under that existing tag.
3. On the next build, your pipeline pulls and runs the poisoned action with whatever permissions the workflow has.
This isn't theoretical. It happened to LiteLLM. It happened to Checkmarx. It will happen again.
If you installed LiteLLM in March 2026, check which version you got:
pip show litellm | grep Version
# If it shows 1.82.7 or 1.82.8, you were affected
# Also check pip's install logs or CI logs in case the package was since upgraded

# Roll back to a known safe version
pip install litellm==1.82.6
If you were affected: rotate all credentials in the affected environment. AWS keys, API tokens, database passwords, Kubernetes service account tokens. Assume everything in that environment was exfiltrated.
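For auditing many environments at once, the same check can be done programmatically. A small sketch, assuming the standard importlib.metadata API; the affected-version set comes from the advisory above and the function name is mine:

```python
from importlib.metadata import PackageNotFoundError, version

AFFECTED = {"1.82.7", "1.82.8"}  # compromised releases on PyPI

def litellm_status(installed=None):
    """Classify an installed litellm version against the advisory."""
    if installed is None:
        try:
            installed = version("litellm")
        except PackageNotFoundError:
            return "not installed"
    return "AFFECTED" if installed in AFFECTED else "not affected"

if __name__ == "__main__":
    print(litellm_status())
```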
This is the single most impactful change. Stop referencing actions by tag. Pin them to a specific commit SHA:
# Bad: mutable tag, can be compromised
- uses: aquasecurity/trivy-action@master
# Better: version tag, but still mutable
- uses: aquasecurity/trivy-action@v0.28.0
# Best: immutable commit SHA
- uses: aquasecurity/trivy-action@a259c2542c1eab15786e3907a1e5160040013043
A commit SHA is immutable. Even if the attacker compromises the repository, they can't change what a specific SHA points to (without a force-push that would be detectable). Tools like Dependabot and Renovate can automate SHA pinning and updates.
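You can also audit existing workflow files for tag-pinned actions. A minimal sketch that flags any uses: reference not pinned to a full 40-character commit SHA; the regexes and function name are my own simplification (they won't catch every YAML layout):

```python
import re

USES_RE = re.compile(r"uses:\s*([\w./-]+)@([\w.-]+)")
SHA_RE = re.compile(r"^[0-9a-f]{40}$")

def unpinned_actions(workflow_text):
    """Return (action, ref) pairs not pinned to a full commit SHA."""
    return [
        (action, ref)
        for action, ref in USES_RE.findall(workflow_text)
        if not SHA_RE.match(ref)
    ]
```

Running this over .github/workflows/*.yml gives you a quick list of mutable references to replace.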
Never run pip install litellm without a version pin in a CI pipeline or Dockerfile. Use a lockfile:
# requirements.txt with hashes (pip-compile can generate this)
litellm==1.82.6 \
    --hash=sha256:abc123...
# Or use a lockfile tool
pip-compile --generate-hashes requirements.in
Hash verification ensures the package you install matches the package you audited, even if the version number is reused (which PyPI prevents, but other registries may not).
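The verification itself is just a digest comparison. A minimal sketch of the idea behind pip's hash-checking mode (the function names are mine, not pip's API):

```python
import hashlib

def sha256_of(path, chunk_size=65536):
    """Stream a file and return its sha256 hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_lockfile(path, expected_hex):
    """Compare a downloaded artifact against the pinned hash."""
    return sha256_of(path) == expected_hex
```

If the digest of the downloaded wheel doesn't match the lockfile entry, the artifact was tampered with somewhere between audit and install.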
Most GitHub Actions workflows run with more permissions than they need. Review your workflow files: do they use permissions: blocks to restrict token scope?

# Restrict permissions at the workflow level
permissions:
  contents: read
  packages: none

# Only grant publish permissions to the release job
jobs:
  release:
    permissions:
      contents: read
      id-token: write  # For trusted publishing
PyPI supports Trusted Publishers — OIDC-based authentication that eliminates long-lived API tokens. Instead of storing a PyPI token in your GitHub secrets, you configure PyPI to trust your specific GitHub repository and workflow. This means a compromised action can't use stolen credentials to publish to a different project.
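A trusted-publishing release job looks roughly like this. pypa/gh-action-pypi-publish is the official publish action; the job layout here is illustrative, and in practice you'd pin both actions to commit SHAs as argued above. The id-token: write permission is what lets GitHub mint the short-lived OIDC token that PyPI verifies:

```yaml
jobs:
  release:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # mint the OIDC token PyPI verifies
      contents: read
    steps:
      - uses: actions/checkout@v4   # pin to a SHA in practice
      - name: Build distribution
        run: python -m build
      - name: Publish to PyPI  # no stored API token needed
        uses: pypa/gh-action-pypi-publish@release/v1
```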
Tools like Socket and Snyk can detect anomalous changes in package behavior between versions — new network calls, file system access patterns, or obfuscated code that wasn't present before. Adding these to your CI pipeline creates an early warning system.
The LiteLLM attack is a preview of what's coming. AI developer tooling is growing explosively — new packages, new frameworks, new CLI tools appearing weekly. Every one of them is a potential supply chain target. AI packages are particularly attractive to attackers because they typically run with access to API keys, cloud credentials, and customer data.
The OWASP Top 10 for Agentic AI, published in December 2025, lists supply chain attacks as a top-10 risk specifically because AI agents often have broad permissions and access to sensitive data. A compromised AI package doesn't just steal code — it steals API keys that provide access to LLM providers, cloud infrastructure, and customer data.
The fix isn't to stop using open-source packages. It's to stop treating your build pipeline as a trusted environment. Pin your actions. Pin your dependencies. Restrict your permissions. Monitor for anomalies. And assume that any unpinned dependency is a vector that an attacker will eventually exploit.
This is also why the tools you run locally matter. Your terminal sees every API key and credential on your machine. If your terminal phones home, that's another link in the trust chain an attacker can exploit. We built yaw with zero telemetry and BYOK AI specifically because developer tools shouldn't be a supply chain risk.
Because 40 minutes is all it takes.
Published by Yaw Labs.
Interested in AI tools and developer workflows? Token Limit News is our weekly newsletter.