
fix: pin liteLLM <=1.82.6 to mitigate TeamPCP supply chain attack #1822

Closed
gn00295120 wants to merge 1 commit into NVIDIA:develop from gn00295120:fix/pin-litellm-supply-chain

Conversation

@gn00295120 gn00295120 commented Mar 25, 2026

Summary

liteLLM versions 1.82.7 and 1.82.8 were compromised by the TeamPCP group via a supply chain attack through Trivy. The current dependency constraint allows these malicious versions to be installed.

Impact

The compromised versions steal sensitive credentials including SSH keys, AWS/GCP/K8s credentials, CI/CD tokens, and environment variables. Version 1.82.8 installs a .pth persistence mechanism that executes on every Python startup — even after liteLLM is uninstalled.
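For context, the persistence trick described above abuses a documented Python feature: at interpreter startup, the `site` module processes every `.pth` file in the site-packages directories and executes any line that begins with `import `. The following is a benign, self-contained sketch of that mechanism only (it is not the malware's payload); it uses `site.addsitedir()` to trigger `.pth` processing in-process:

```python
import os
import site
import tempfile

# Write a benign .pth file into a temporary directory. Any line in a .pth
# file that starts with "import " is executed by site.py when the directory
# is processed -- this is the hook the malicious release reportedly used.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "demo.pth"), "w") as f:
        f.write("import os; os.environ['PTH_DEMO'] = 'ran'\n")
    site.addsitedir(d)  # processes demo.pth, executing the import line

print(os.environ.get("PTH_DEMO"))  # prints: ran
```

Because `site` runs before any user code, a `.pth` file dropped into site-packages survives `pip uninstall litellm`, which is why removing the package alone is not sufficient remediation.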

Fix

This PR pins the upper bound of the liteLLM dependency to <=1.82.6, which is the last known safe version before the compromise. Once BerriAI publishes a verified clean release, this upper bound can be raised.

Files changed (3 sub-packages):

  • packages/nvidia_nat_adk/pyproject.toml: litellm~=1.74 → litellm>=1.74.0, <=1.82.6
  • packages/nvidia_nat_crewai/pyproject.toml: litellm~=1.74 → litellm>=1.74.0, <=1.82.6
  • packages/nvidia_nat_agno/pyproject.toml: litellm~=1.74 → litellm>=1.74.0, <=1.82.6
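Concretely, the constraint change in each pyproject.toml looks like the following (illustrative fragment; the surrounding dependency entries of each sub-package are omitted):

```toml
[project]
dependencies = [
    # Before: compatible-release spec, equivalent to >=1.74, ==1.*,
    # which still admits the compromised 1.82.7 / 1.82.8 releases
    # "litellm~=1.74",

    # After: explicit bounded range stopping at the last known safe release
    "litellm>=1.74.0, <=1.82.6",
]
```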

References

  • BerriAI/litellm#24512
  • https://osv.dev/vulnerability/MAL-2026-2144
Summary by CodeRabbit

  • Chores
    • Updated litellm dependency constraints across packages to version range 1.74.0 to 1.82.6.

liteLLM versions 1.82.7 and 1.82.8 were compromised by the TeamPCP group
via a supply chain attack. This pins the upper bound to the last known
safe version across all 3 sub-packages (adk, crewai, agno).

References:
- BerriAI/litellm#24512
- https://osv.dev/vulnerability/MAL-2026-2144
@gn00295120 gn00295120 requested a review from a team as a code owner March 25, 2026 09:25
Copilot AI review requested due to automatic review settings March 25, 2026 09:26
@copy-pr-bot

copy-pr-bot bot commented Mar 25, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@coderabbitai

coderabbitai bot commented Mar 25, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 437039ef-d51f-4312-a5e5-9bef723020cf

📥 Commits

Reviewing files that changed from the base of the PR and between 2a9bf12 and 95263eb.

📒 Files selected for processing (3)
  • packages/nvidia_nat_adk/pyproject.toml
  • packages/nvidia_nat_agno/pyproject.toml
  • packages/nvidia_nat_crewai/pyproject.toml

Walkthrough

Updates the litellm dependency constraint across three package pyproject.toml files from a compatible-release specifier (litellm~=1.74) to an explicit bounded range (litellm>=1.74.0, <=1.82.6).

Changes

Cohort / File(s) Summary
Dependency Version Constraint Updates
packages/nvidia_nat_adk/pyproject.toml, packages/nvidia_nat_agno/pyproject.toml, packages/nvidia_nat_crewai/pyproject.toml
Updated litellm dependency from tilde-based spec (~=1.74) to explicit bounded range (>=1.74.0, <=1.82.6).
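The practical difference between the two specifiers can be checked with the `packaging` library (the same version-specifier implementation pip vendors); this sketch assumes `packaging` is installed:

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# Old spec: compatible release, equivalent to >=1.74, ==1.* -- it admits
# every 1.x release from 1.74 onward, including the compromised versions.
old = SpecifierSet("~=1.74")
# New spec: explicit bounded range that stops at the last safe release.
new = SpecifierSet(">=1.74.0,<=1.82.6")

for v in ("1.82.6", "1.82.7", "1.82.8"):
    print(v, "old:", Version(v) in old, "new:", Version(v) in new)
# 1.82.6 old: True new: True
# 1.82.7 old: True new: False
# 1.82.8 old: True new: False
```

Note that the new range is not a hard pin: once BerriAI ships a verified clean release, only the upper bound needs to be raised.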

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~3 minutes

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description check ✅ Passed: Check skipped; CodeRabbit's high-level summary is enabled.
  • Title check ✅ Passed: The title clearly describes the main change: pinning liteLLM to a safe version to mitigate a supply chain attack, which matches the core objective of the PR.
  • Docstring coverage ✅ Passed: No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.


Contributor

Copilot AI left a comment


Pull request overview

Pins litellm to a known-safe upper bound across NeMo Agent Toolkit integration subpackages to avoid installing compromised litellm releases (1.82.7/1.82.8) implicated in a supply-chain attack.

Changes:

  • Tighten litellm version constraint from ~=1.74 to >=1.74.0, <=1.82.6 in three subpackages.
  • Ensure impacted integrations (ADK, Agno, CrewAI) will not resolve to the compromised versions.

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.

  • packages/nvidia_nat_adk/pyproject.toml: Pins litellm to <=1.82.6 for the ADK integration package.
  • packages/nvidia_nat_agno/pyproject.toml: Pins litellm to <=1.82.6 for the Agno integration package.
  • packages/nvidia_nat_crewai/pyproject.toml: Pins litellm to <=1.82.6 for the CrewAI integration package.


@willkill07
Member

  1. The compromised packages were already removed from PyPI, so the PR name is misleading.
  2. Several issues prevent this PR from being merged, and the extra back-and-forth required (waiting for you to update the PR and triggering CI) is not worth the time; most of the team is OOO this week.
    • No updates to uv.lock files.
    • DCO check failed (commits were not signed).
  3. We were already working on addressing security guidance and will prefer chore: pin LiteLLM to known good version #1823 over this.

@willkill07 willkill07 closed this Mar 25, 2026