fix: [Security Fix] remove litellm #1240
Conversation
Remove the `litellm` package from both Dockerfile.langflow and Dockerfile.langflow.dev to avoid conflicts/compatibility issues. In Dockerfile.langflow the `pip uninstall -y litellm` was added to the RUN that installs `uv` and prepares /app/langflow-data; in Dockerfile.langflow.dev a `RUN uv pip uninstall litellm` line was added after the dependency sync. This ensures built images do not include `litellm`.
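For illustration, a minimal sketch of how the Dockerfile.langflow layer might look after this change; the `uv` install and data-directory commands are assumptions standing in for the real RUN contents, and only the `pip uninstall -y litellm` step comes from this PR:

```dockerfile
# Sketch only: the real RUN layer installs uv and prepares /app/langflow-data;
# those exact commands are assumed here. The uninstall step is the one this PR adds.
RUN pip install uv \
    && mkdir -p /app/langflow-data \
    && pip uninstall -y litellm
```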
Remove usage of the `agentd` library and its OpenAI patching/tool decorator. Instantiate `AsyncOpenAI` directly (HTTP/2, with an HTTP/1.1 fallback) and remove imports of `agentd.patch` and `agentd.tool_decorator`. Add runtime dependencies for `openai`, `pyyaml`, and `tiktoken` in `pyproject.toml` to support direct OpenAI client usage.
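As a hedged sketch of that client construction (the `use_http2` flag and timeout values mirror the diff shown later in this review; the function name is hypothetical, and the SDK reads `OPENAI_API_KEY` from the environment):

```python
import httpx
from openai import AsyncOpenAI

def make_async_client(use_http2: bool) -> AsyncOpenAI:
    """Build an AsyncOpenAI client directly, without agentd patching."""
    if use_http2:
        # Default SDK transport; HTTP/2 support requires the httpx[http2] extra.
        return AsyncOpenAI()
    # Explicit HTTP/1.1 fallback with the timeouts used in the diff below.
    http_client = httpx.AsyncClient(
        http2=False,
        timeout=httpx.Timeout(60.0, connect=10.0),
    )
    return AsyncOpenAI(http_client=http_client)
```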
Pull request overview
This PR removes the agentd integration and litellm from the dependency set / Docker images, and updates the runtime dependency list accordingly (notably adding openai, pyyaml, and tiktoken).
Changes:
- Remove `agentd` imports/decorators and MCP patching integration.
- Ensure `litellm` is uninstalled in Langflow Docker images.
- Update dependency manifests (`pyproject.toml`, `uv.lock`) to reflect the new dependency set.
Reviewed changes
Copilot reviewed 5 out of 6 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| `Dockerfile.langflow` | Uninstalls `litellm` from the base Langflow image layer. |
| `Dockerfile.langflow.dev` | Adds a `uv pip uninstall litellm` step after syncing deps. |
| `pyproject.toml` | Drops `agentd` and adds direct deps (`openai`, `pyyaml`, `tiktoken`). |
| `src/config/settings.py` | Removes `patch_openai_with_mcp` usage and instantiates `AsyncOpenAI` directly. |
| `src/services/search_service.py` | Removes the `agentd` tool decorator/import. |
| `uv.lock` | Lockfile updated to remove `agentd`/`litellm` (and related transitive deps) and include the new direct deps. |
Dockerfile.langflow.dev:

```diff
 # Return to app directory and install the project
 WORKDIR /app
 RUN uv sync --frozen --no-dev --no-editable --extra postgresql
+RUN uv pip uninstall litellm
```
`uv pip uninstall litellm` will run interactively and prompt for confirmation during a Docker build, which can hang or fail the build. Use the non-interactive flag (e.g., `-y`/`--yes`), and consider making the step tolerant when the package is already absent (e.g., avoid failing the layer if `litellm` is not installed).
Suggested change:

```diff
-RUN uv pip uninstall litellm
+RUN uv pip uninstall -y litellm || true
```
src/config/settings.py:

```diff
 if use_http2:
-    self._patched_async_client = patch_openai_with_mcp(AsyncOpenAI())
+    self._patched_async_client = AsyncOpenAI()
     logger.info("OpenAI client initialized with HTTP/2")
 else:
     http_client = httpx.AsyncClient(
         http2=False,
         timeout=httpx.Timeout(60.0, connect=10.0)
     )
-    self._patched_async_client = patch_openai_with_mcp(
-        AsyncOpenAI(http_client=http_client)
-    )
+    self._patched_async_client = AsyncOpenAI(http_client=http_client)
```
After removing `patch_openai_with_mcp(...)`, `clients.patched_*_client` now returns a vanilla `AsyncOpenAI`. Other parts of the codebase still format model names with provider prefixes (e.g., `ollama/...`, `watsonx/...`) expecting LiteLLM-style routing; those requests will fail against the OpenAI API. Either reintroduce a routing layer (or provider-specific clients) or update the embedding/search code to send only OpenAI-compatible model names here.
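One possible shape for such a routing layer, as a hedged sketch rather than code from this PR (the Ollama base URL, API-key placeholder, and helper name are all assumptions):

```python
from openai import AsyncOpenAI

def client_for_model(model: str) -> tuple[AsyncOpenAI, str]:
    """Map a possibly provider-prefixed model name to a client plus bare model id."""
    if model.startswith("ollama/"):
        # Ollama exposes an OpenAI-compatible API under /v1; this URL is an assumption.
        client = AsyncOpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
        return client, model.split("/", 1)[1]
    # Anything without a known prefix is treated as an OpenAI-native model name.
    return AsyncOpenAI(), model
```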
The surrounding docstring/comments still describe this client as "patched with LiteLLM support" and mention LiteLLM routing, but the implementation now constructs a plain `AsyncOpenAI`. Please update the docstring/comments (and potentially the `patched_*` naming) so they match the new behavior and don't mislead future changes.
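For example, a hypothetical accessor whose name and docstring match the new behavior (the property name is illustrative, not from this PR):

```python
@property
def async_client(self) -> AsyncOpenAI:
    """Plain AsyncOpenAI client; no LiteLLM routing or MCP patching is applied."""
    return self._patched_async_client  # underlying field name kept from the current code
```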
| "zxcvbn>=4.5.0", | ||
| "openai>=1.0.0", | ||
| "pyyaml>=6.0", | ||
| "tiktoken>=0.7.0", |
The PR description focuses on removing `litellm` from Docker images, but this PR also removes `agentd` usage and adds new runtime dependencies (`openai`, `pyyaml`, `tiktoken`). Please update the PR description to reflect these additional changes so reviewers understand the full scope and impact.
> Remove the `litellm` package from both Dockerfile.langflow and Dockerfile.langflow.dev to avoid conflicts/compatibility issues. In Dockerfile.langflow the `pip uninstall -y litellm` was added to the RUN that installs `uv` and prepares /app/langflow-data; in Dockerfile.langflow.dev a `RUN uv pip uninstall litellm` line was added after the dependency sync. This ensures built images do not include `litellm`.

ref: Approved PR https://github.com/langflow-ai/openrag/pull/1239/changes