A Security Scanner Got Hacked and Now Your AI Stack Might Be Compromised


A vulnerability scanner called Trivy got compromised on March 19th. Five days later, a threat actor used that foothold to poison LiteLLM, a Python package with 95 million monthly downloads that sits between your code and every major LLM API. Versions 1.82.7 and 1.82.8 on PyPI contained a three-stage credential stealer, a Kubernetes lateral movement toolkit, and a persistent backdoor.

The attacker didn't need to trick anyone into running malicious code. Just installing the package was enough.

How It Works

The attack comes in two flavors depending on which version you installed.

Version 1.82.7 embeds malicious code directly in litellm/proxy/proxy_server.py. It executes at import time. Standard supply chain play.

Version 1.82.8 escalates. It drops a file called litellm_init.pth at the wheel root. Python's .pth mechanism runs these files automatically on every interpreter startup. You don't need to import litellm. You don't need to call it. If it's in your environment, the payload fires on any python command.
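You can watch that mechanism work with nothing but the standard library. `site.addsitedir()` runs the same `.pth` processing that site-packages gets at interpreter startup, so a throwaway directory is enough to prove the point. The hook line here is harmless; malware puts its loader in that slot:

```python
import os
import site
import sys
import tempfile

# Create a throwaway directory holding a .pth file. Any line in a .pth
# file that begins with "import" is exec()'d during site processing --
# no explicit `import litellm` ever required.
d = tempfile.mkdtemp()
with open(os.path.join(d, "demo.pth"), "w") as f:
    # Harmless stand-in for a malicious loader line.
    f.write("import sys; sys._pth_hook_ran = True\n")

# Replay the same .pth processing that site-packages receives at startup.
site.addsitedir(d)

print(getattr(sys, "_pth_hook_ran", False))  # True -- the hook fired
```

That's the whole trick: code execution as a side effect of the package merely existing on disk.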

Once active, the malware does three things:

Stage 1: Harvest everything. SSH private keys, .env files, AWS, GCP, and Azure credentials, Kubernetes configs, crypto wallets, database passwords, SSL certs, shell history, git configs. It also hits cloud metadata endpoints (IMDS, container credential services). All of it gets encrypted with AES-256-CBC under a hardcoded 4096-bit RSA public key, bundled into a tar, and POSTed to models.litellm.cloud. That domain has nothing to do with legitimate LiteLLM infrastructure.
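To get a feel for what "harvest everything" means, here's the target list recast as a defensive scan you can run against your own home directory. The glob patterns are illustrative, derived from the categories above, not an exact IOC list:

```python
from pathlib import Path

# Illustrative patterns for the file categories the stage-1 harvester
# reportedly targets (not a verbatim IOC list from the malware).
SENSITIVE_GLOBS = [
    ".ssh/id_*",             # SSH private keys
    ".aws/credentials",      # AWS
    ".config/gcloud/*",      # GCP
    ".azure/*",              # Azure
    ".kube/config",          # Kubernetes
    "*.env",                 # dotenv files
    ".bash_history",         # shell history
    ".gitconfig",            # git config
]

def find_sensitive_files(home: Path) -> list:
    """Return files under `home` matching the harvester-style patterns."""
    hits = []
    for pattern in SENSITIVE_GLOBS:
        hits.extend(p for p in home.glob(pattern) if p.is_file())
    return sorted(hits)

print(find_sensitive_files(Path.home()))
```

Everything that scan would find on your machine is what you should assume left the building if the bad versions were ever installed.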

Stage 2: Move laterally through Kubernetes. If the malware finds a Kubernetes service account token, it reads every secret across every namespace in the cluster. Then it creates a privileged alpine:latest pod in kube-system on every node, each mounting the host filesystem.

Stage 3: Persist. Those pods install a backdoor at /root/.config/sysmon/sysmon.py with a systemd service called sysmon.service that polls for additional payloads. Restarting the container won't help. The backdoor lives on the host.
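A quick host-side check for the reported artifacts might look like the sketch below. The sysmon.py path comes straight from the write-up; the systemd unit path is an assumption (units commonly live in /etc/systemd/system), so adjust for your distro:

```python
from pathlib import Path

# Persistence indicators from the write-up above. The unit-file location
# is an assumption, not a confirmed IOC -- check your distro's unit dirs.
INDICATORS = [
    Path("/root/.config/sysmon/sysmon.py"),
    Path("/etc/systemd/system/sysmon.service"),
]

def check_persistence(paths=INDICATORS) -> list:
    """Return whichever indicator paths exist on this host."""
    return [p for p in paths if p.exists()]

hits = check_persistence()
print("no indicators found" if not hits else f"indicators found: {hits}")
```

Run it on the nodes themselves, not inside a container, since the backdoor lives on the host.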

The Chain That Made This Possible

The threat actor goes by TeamPCP. Their attack path:

  • March 19: Compromise Aqua Security's Trivy, a widely-used vulnerability scanner
  • March 23: Compromise Checkmarx's KICS GitHub Action using the same access chain
  • March 24: Use credentials stolen from LiteLLM's CI/CD pipeline (which ran Trivy) to publish poisoned packages to PyPI

A security scanner was the entry point for a supply chain attack on an AI library. The tool meant to catch vulnerabilities became the vulnerability.

What To Do Right Now

If you use LiteLLM in any capacity:

Check your installed version. If you're on 1.82.7 or 1.82.8, assume compromise. There's a detection tool on GitHub that can scan your environment.
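A minimal self-check, assuming the advisory's known-bad list is complete:

```python
from __future__ import annotations

from importlib import metadata

# Known-bad releases named in the advisory above.
COMPROMISED = {"1.82.7", "1.82.8"}

def assess(version: str | None) -> str:
    """Classify an installed litellm version string."""
    if version in COMPROMISED:
        return "COMPROMISED: assume secrets are stolen, rotate everything"
    if version is None:
        return "not installed"
    return "not a known-bad version (still verify your lockfiles)"

try:
    installed = metadata.version("litellm")
except metadata.PackageNotFoundError:
    installed = None

print(f"litellm {installed}: {assess(installed)}")
```

Remember that "not installed right now" isn't the same as "never installed": check your lockfile history and CI caches too.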

Rotate everything. SSH keys, cloud credentials, API tokens, database passwords. All of them. The exfiltration happens at install time, so if the package was ever present in your environment, your secrets left the building.

Audit your Kubernetes clusters. Look for unexpected pods in kube-system, especially anything running alpine:latest with host filesystem mounts. Check for sysmon.service in systemd on cluster nodes.
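If you'd rather script that audit, here's a sketch that flags the reported pattern in kubectl's JSON output. The heuristics mirror the indicators above and will miss anything the attacker varies:

```python
import json  # for loading `kubectl get pods -n kube-system -o json` output

def suspicious_pods(pods: dict) -> list:
    """Flag pods matching the reported stage-2 pattern: an alpine:latest
    container, or a hostPath volume mounting the node's root filesystem.
    `pods` is the parsed output of: kubectl get pods -n kube-system -o json
    """
    flagged = []
    for pod in pods.get("items", []):
        spec = pod.get("spec", {})
        images = [c.get("image", "") for c in spec.get("containers", [])]
        host_root = any(
            v.get("hostPath", {}).get("path") == "/"
            for v in spec.get("volumes", [])
        )
        if "alpine:latest" in images or host_root:
            flagged.append(pod["metadata"]["name"])
    return flagged

# Usage:
#   kubectl get pods -n kube-system -o json > pods.json
#   print(suspicious_pods(json.load(open("pods.json"))))
```

Treat a hit as a starting point, not a verdict: a legitimate DaemonSet can mount hostPath too, but it shouldn't be a surprise to you.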

Pin your dependencies. litellm==1.82.6 is the last clean version. The entire litellm package has been pulled from PyPI as of today, so you can't install it fresh anyway.
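Version pins alone don't verify what you actually downloaded; hash pins do, and pip refuses to install anything that doesn't match. A sketch of a hash-pinned requirements file (the hash value is a placeholder; generate real ones with `pip hash` or `pip-compile --generate-hashes`):

```
# requirements.txt -- pin both the version and the artifact hash.
# <artifact-sha256> is a placeholder, not a real digest.
litellm==1.82.6 \
    --hash=sha256:<artifact-sha256>
```

With any `--hash` present, pip switches to hash-checking mode for the whole file, so every dependency has to be pinned and hashed too. That's the point.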

Audit your CI/CD for transitive trust. If your pipeline runs third-party security scanners, those scanners have access to your secrets. Trivy's compromise cascaded into LiteLLM because LiteLLM trusted Trivy in CI. Every tool in your pipeline is an attack surface.
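One concrete mitigation: reference third-party actions by an immutable commit SHA instead of a mutable tag, so a compromised upstream can't silently swap the code under you. A GitHub Actions sketch (the SHA is a placeholder; the action name and `scan-type` input come from Trivy's public action):

```yaml
# .github/workflows/scan.yml (sketch)
# Pin to a full commit SHA, not @master or a version tag an attacker
# can move. <full-commit-sha> is a placeholder.
- uses: aquasecurity/trivy-action@<full-commit-sha>
  with:
    scan-type: fs
```

Pair that with least-privilege secrets: a scanner that only reads the filesystem has no business holding your PyPI publishing token.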

The Bigger Problem

This isn't a one-off. It's a pattern. The AI ecosystem runs on Python packages with deep system access, broad network permissions, and implicit trust from package managers. LiteLLM specifically sits at the gateway between applications and LLM providers. It handles API keys for OpenAI, Anthropic, Azure, and dozens of others by design.

A package that's supposed to hold your most sensitive API credentials got backdoored through a compromised security tool in its build pipeline. That's not a bug. That's an architectural problem the entire ecosystem needs to reckon with.

Pin your dependencies. Verify your supply chain. And maybe reconsider how much trust you're placing in packages you've never audited.


davydany

Software Engineer & Tech Leader passionate about building innovative solutions and sharing knowledge through writing.