Comment by detente18

3 days ago

LiteLLM maintainer here, this is still an evolving situation, but here's what we know so far:

1. Looks like this originated from the trivy used in our CI/CD - https://github.com/search?q=repo%3ABerriAI%2Flitellm%20trivy... https://ramimac.me/trivy-teampcp/#phase-09

2. If you're on the proxy Docker image, you were not impacted. We pin our versions in requirements.txt.

3. The package is in quarantine on PyPI - this blocks all downloads.

We are investigating the issue, and seeing how we can harden things. I'm sorry for this.

- Krrish

Update:

- Impacted versions (v1.82.7, v1.82.8) have been deleted from PyPI
- All maintainer account credentials have been changed
- All keys for GitHub, Docker, CircleCI, and pip have been deleted

We are still scanning our project to see if there are any more gaps.

If you're a security expert and want to help, email me - krrish@berri.ai

  • Update 2 (03/25/2026):

    - We will be holding a townhall on Friday to review the incident and share next steps (https://lnkd.in/gsbTdCe7)

    - We can confirm a malicious version of the Trivy security scanner ran in our CI/CD pipeline, which is what enabled the supply chain attack

    - We have paused new releases until we've completed securing our codebase and release pipeline to ensure safe releases for users

    - We've added additional GitHub/GitLab CI scripts for checking whether you're impacted: https://lnkd.in/gGicMkby

    We hope to share a full RCA in the coming days. Until then, if there's anything we can do to help your team - please let me know. You can email me (krrish@berri.ai), or join the discussion on github (https://lnkd.in/g9TuuQ2H).

I just want to share an update:

Following my suggestion, the developer has made a new GitHub account and linked it from their Hacker News profile, with their Hacker News "about" pointing back at the GitHub account, to verify that the new account is legitimate.

Worth following this thread as they mention that: "I will be updating this thread, as we have more to share." https://github.com/BerriAI/litellm/issues/24518

the chain here is wild. trivy gets compromised, that gives access to your ci, ci has the pypi publish token, now 97 million monthly downloads are poisoned. was the pypi token scoped to publishing only or did it have broader access? because the github account takeover suggests something wider leaked than just the publish credential

  • If the payload is a credential stealer then they can use that to escalate into basically anything right?

    • Yes and the scary part is you might never know the full extent. A credential stealer grabs whatever is in memory or env during the build, ships it out, and the attacker uses those creds weeks later from a completely different IP. The compromised package gets caught and reverted, everyone thinks the incident is over, meanwhile the stolen tokens are still valid. I wonder how many teams who installed 1.82.7 actually rotated all their CI secrets after this, not just uninstalled the bad version.
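To make the "whatever is in env during the build" point concrete: everything a build-time stealer grabs is just sitting in `os.environ`. A quick self-audit sketch (the name pattern is a guess, tune it for your CI; anything it lists was visible to every process in the build, so rotate it):

```python
import os
import re

# Names that commonly hold secrets in CI environments (heuristic, not exhaustive).
SECRET_PATTERN = re.compile(r"(TOKEN|SECRET|KEY|PASSWORD|CREDENTIAL)", re.I)

def sensitive_env_names(environ: dict) -> list:
    """Return env var names that look like secrets, sorted for stable output."""
    return sorted(name for name in environ if SECRET_PATTERN.search(name))

if __name__ == "__main__":
    for name in sensitive_env_names(dict(os.environ)):
        print(name)  # names only - never print the values
```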

  • I wonder if there are a few things here....

    It would be great if Linux could do simple chroot jails and run tests inside them before releasing software. In this case, it looks like the whole build process would need to be done in the jail. Tools like lxroot might do enough of what jails on BSD do.

    It seems like software tests need to have a class of test that checks whether any of the components of an application have been compromised in some way. This in itself may be somewhat complex...

    We are in a world where we can't assume secure operation of components anymore. This is kinda sad, but here we are....

    • The sad part is you're right that we can't assume secure operation of components anymore, but the tooling hasn't caught up to that reality. Chroot jails help with runtime isolation but the attack here happened at build time, the malicious code was already in the package before any test could run. And the supply chain is deep. Trivy gets compromised, which gives CI access, which gives PyPI access. Even if you jail your own builds you're trusting that every tool in your pipeline wasn't the entry point. 97 million monthly downloads means a lot of people's "secure" pipelines just ran attacker code with full access.

This must be super stressful for you, but I do want to note your "I'm sorry for this." It's really human.

It is so much better than, you know... "We regret any inconvenience and remain committed to recognising the importance of maintaining trust with our valued community and following the duration of the ongoing transient issue we will continue to drive alignment on a comprehensive remediation framework going forward."

Kudos to you. Stressful times, but I hope it helps to know that people are reading this appreciating the response.

  • I think we really need to use sandboxes. Guix provides sandboxed environments by just flipping a switch. NixOS is in an ideal position to do the same, but for some reason they are regarded as "inconvenient".

    Personally, I am a heavy user of Firejail and bwrap. We need defense in depth. If someone in the supply chain gets compromised, damage should be limited. It's easy to patch the security model of Linux with userspaces, and even easier with eBPF, but the community is somehow stuck.

    • What would be really helpful is if software sandboxed itself. It's very painful to sandbox software from the outside and it's radically less effective because your sandbox is always maximally permissive.

      But, sadly, there's no cross-platform way to do this, and sandboxing APIs are still incredibly bad and often require privileges.

      > It's easy to patch the security model of Linux with userspaces, and even easier with eBPF, but the community is somehow stuck.

      Neither of these is easy, tbh. Entering a Linux namespace typically requires privileges (unless unprivileged user namespaces are enabled, which varies by distro), so if you want your users to be safe you may have to ask them to run your service as root. eBPF is a very hard boundary to maintain, requiring you to know every system call your program can make: updates to libc, or upgrades to any library, can break this.

      Sandboxing tooling is really bad.


Kudos for this update.

Write a detailed postmortem, share it publicly, continue taking responsibility, and you will come out of this having earned an immense amount of respect.

> 1. Looks like this originated from the trivy used in our CI/CD

Were you not aware of this in the short time frame that it happened in? How come credentials were not rotated to mitigate the trivy compromise?

I put together a little script to search for and list installed litellm versions on my systems here: https://github.com/kinchahoy/uvpowered-tools/blob/main/inven...

It's very much not production grade. It might miss sneaky ways to install litellm, but it does a decent job of scanning all my conda, .venv, uv, and system environments without invoking a Python interpreter or touching anything scary. Let me know if it misses something that matters.

Obviously read it before running it etc.
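For reference, the core of such a scan can be done purely on the filesystem, without importing anything from the packages being inspected (a minimal sketch of the same idea, not the linked script; it just looks for `litellm-<version>.dist-info` directories):

```python
from pathlib import Path

def find_litellm_installs(root: Path) -> list:
    """Find litellm installations under `root` by locating
    litellm-<version>.dist-info directories. Nothing is imported or
    executed from the packages themselves."""
    hits = []
    for info in root.rglob("litellm-*.dist-info"):
        # Directory name has the form 'litellm-<version>.dist-info'.
        version = info.name[len("litellm-"):-len(".dist-info")]
        hits.append((info.parent, version))
    return hits
```

e.g. `find_litellm_installs(Path.home())` to sweep every environment under your home directory.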

Similar to delve, this guy has almost no work experience. You have to wonder if YC and the cult of extremely young founders are causing instability in society at large?

  • It's interesting to see how the landscape changes when the folks upstream won't let you offload responsibility. Litellm's client list includes people who know better.

  • Welcome to the new era, where programming is neither a skill nor a trade, but a task to be automated away by anyone with a paid subscription.

      A lot of software isn't that important, so it's fine, but some of it actually is important, especially with a brand name slapped on it that people will trust.


  • It’s a flex now. But there are still many people doing it for the love of the game.

You're making great software and I'm sorry this happened to you. Don't get discouraged, keep bringing the open source disruption!

> - Krrish

Was your account completely compromised? (Judging from the commit made by TeamPCP on your accounts)

Are you in contact with all the downstream projects that use litellm, and do you know whether they are safe? (I am assuming not.)

I am also unable to understand how the trivy exploit in your CI/CD compromised your account itself.

  • It was the PYPI_PUBLISH token, which was in our GitHub project as an env var, that got sent to trivy.

    We have deleted all our pypi publishing tokens.

    Our accounts had 2FA, so it was the leaked token here, not an account takeover.

    We're reviewing our accounts to see how we can make them more secure (trusted publishing via JWT/OIDC tokens, moving to a different PyPI account, etc.).
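For anyone wondering how a scanner in CI ends up holding a publish token in the first place: environment variables are inherited by every child process in the job, so any tool the job runs sees them. A minimal demonstration (fake token, stand-in child process):

```python
import os
import subprocess
import sys

# Simulate a CI job: the publish token sits in the job's environment...
env = dict(os.environ)
env["PYPI_PUBLISH_TOKEN"] = "pypi-FAKE-token-for-demo"

# ...so every tool the job invokes (here, a stand-in child process)
# inherits it automatically, no exploit required.
out = subprocess.run(
    [sys.executable, "-c",
     "import os; print('PYPI_PUBLISH_TOKEN' in os.environ)"],
    env=env, capture_output=True, text=True,
)
print(out.stdout.strip())  # the child sees the token: prints True
```

This is why trusted publishing (short-lived OIDC tokens minted per release) is a meaningful upgrade over a long-lived token in an env var.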

  • > I am unable to understand how it compromised your account itself from the exploit at trivy being used in CI/CD as well.

    The token in CI could've been scoped way too broadly.

This is just one of many projects that fell victim to the Trivy hack. There are millions of such projects, and this issue will be exploited over the coming months, if not years.

Good work! Sorry to hear you're in this situation, good luck and godspeed!

we're using litellm via helm charts with tags main-v1.81.12-stable.2 and main-v1.80.8-stable.1 - assuming they're safe?

also how are we sure that docker images aren't affected?

  • Docker deployments are safer even if affected, because there is a lower chance (but not zero) that you mounted all your credentials into the container. It would have access to LLM keys of course, but that's not really what the attacker is after. They're after private SSH keys.

    That being said, this hack was a direct upload to PyPI in the last few days, so it's very unlikely those images are affected.

The decision to block all downloads is pretty disruptive, especially for people on pinned known-good versions. It's breaking a bunch of my systems that are all launched with `uv run`.

  • > It's breaking a bunch of my systems that are all launched with `uv run`

    From a security standpoint, you would rather pull in a library that is compromised and run a credential stealer? It seems like this is the exact intended and best behavior.

  • You should be using build artifacts, not relying on `uv run` to install packages on the fly. Besides the massive security risk, it also means that you're dependent on a bunch of external infrastructure every time you launch. PyPI going down should not bring down your systems.

    • This is the right answer. Unfortunately, this is very rarely practiced.

      More strangely (to me), this is often addressed by adding loads of fallible/partial caching (in e.g. CICD or deployment infrastructure) for package managers rather than building and publishing temporary/per-user/per-feature ephemeral packages for dev/testing to an internal registry. Since the latter's usually less complex and more reliable, it's odd that it's so rarely practiced.

    • There are so many advantages to deployable artifacts, including auditability and fast rollback. Also, you can block so many risky endpoints from your compute's outbound networks, which means even if you are compromised, it doesn't do the attacker any good if their C&C is not allowlisted.
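A toy version of the artifact-pinning idea, in the spirit of pip's `--require-hashes` mode: pin the digest of the exact file, not just a version number. (The wheel filename and digest below are illustrative; the digest is just the sha256 of an empty file.)

```python
import hashlib
from pathlib import Path

# Hypothetical lockfile: each artifact name maps to its expected sha256.
PINNED = {
    "litellm-1.81.12-py3-none-any.whl":
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def artifact_ok(path: Path) -> bool:
    """True iff the file's sha256 matches its pinned digest - the same idea
    pip enforces under --require-hashes. An unknown filename fails closed."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return PINNED.get(path.name) == digest
```

With a check like this in the deploy path, a freshly poisoned upload of a pinned version would fail the digest comparison instead of being installed.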

  • Take this as an argument to rethink your engineering decision to base your workflows entirely on the availability of an external dependency.

  • That's a good thing (disruptive "firebreak" to shut down any potential sources of breach while info's still being gathered). The solve for this is artifacts/container images/whatnot, as other commenters pointed out.

    That said, I'm sorry this is being downvoted: it's unhappily observing facts, not arguing for a different security response. I know that's toeing the rules line, but I think it's important to observe.

There are hundreds of PRs fixing valid issues in your GitHub repo, seemingly in limbo for weeks. What is the maintainer situation over there?

  • Increasing the (social) pressure on maintainers to get PRs merged seems like the last thing you should be doing if the goal is to prevent malicious code from ending up in dependencies like this.

    I'd much rather see a million open PRs than a single malicious PR sneak through due to lack of thorough review.

  • Not really the time for that. There are also PRs being merged every hour of the day.