feat: add litellm component #11805
Conversation
Important: Review skipped. Auto incremental reviews are disabled on this repository; check the settings in the CodeRabbit UI to re-enable them.
Walkthrough

This pull request introduces a new LiteLLMProxyComponent that routes requests to multiple LLM providers via LiteLLM, along with lazy-loading infrastructure and comprehensive unit tests. The component accepts configuration inputs for API endpoints, authentication, model selection, and parameters like temperature and max tokens.
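For orientation, here is a condensed reconstruction of the component under review, pieced together from the review comments below; the exact field and class layout in `src/lfx/src/lfx/components/litellm/litellm_proxy.py` may differ:

```python
# Reconstructed sketch, not the actual file: build_model points ChatOpenAI
# at a LiteLLM proxy endpoint, using the input names cited in the review.
from langchain_openai import ChatOpenAI

class LiteLLMProxyComponent:
    def build_model(self):
        return ChatOpenAI(
            base_url=self.api_base,        # LiteLLM proxy URL
            api_key=self.api_key,
            model=self.model_name,
            temperature=self.temperature,
            max_tokens=self.max_tokens,
            timeout=self.timeout,
            max_retries=self.max_retries,
            streaming=self.stream,
        )
```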
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 6 passed | ❌ 1 failed
❌ Failed checks (1 warning)
✅ Passed checks (6 passed)
Codecov Report

✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

```diff
@@ Coverage Diff @@
## main #11805 +/- ##
==========================================
+ Coverage 35.73% 35.75% +0.01%
==========================================
Files 1528 1528
Lines 73928 73928
Branches 11134 11134
==========================================
+ Hits 26420 26434 +14
+ Misses 46069 46055 -14
Partials 1439 1439
```

Flags with carried forward coverage won't be shown.
Actionable comments posted: 1
🧹 Nitpick comments (3)
src/lfx/src/lfx/components/litellm/litellm_proxy.py (2)
1-86: Advisory: LangChain explicitly recommends against `ChatOpenAI` for LiteLLM proxy use cases.

`ChatOpenAI` targets the official OpenAI API specification only; non-standard response fields from third-party providers are not extracted or preserved, and the docs explicitly state that if you are using LiteLLM (or OpenRouter, vLLM, DeepSeek), you should use a provider-specific package instead. Practically, this means reasoning tokens, tool-use metadata, and provider-specific fields from the real underlying model (e.g., Claude via LiteLLM) will be silently dropped. For the basic chat use case this works, but advanced features from routed providers won't surface in the response. Consider whether a dedicated `langchain-litellm` integration or `ChatLiteLLM` (if available) would be more appropriate, as sketched below.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lfx/src/lfx/components/litellm/litellm_proxy.py` around lines 1 - 86, The code uses ChatOpenAI inside LiteLLMProxyComponent.build_model which targets only the official OpenAI API and will drop provider-specific fields returned by LiteLLM; replace ChatOpenAI with a LiteLLM-aware client (e.g., a ChatLiteLLM or the langchain-litellm provider class) or implement a lightweight provider wrapper that preserves nonstandard response fields and metadata, keeping the same input mappings (api_base, api_key, model_name, temperature, max_tokens, timeout, max_retries, streaming/stream) used in build_model so calls continue to work but now propagate provider-specific tokens and metadata.
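A minimal sketch of the suggested swap, assuming the `langchain-litellm` package is installed and exposes a `ChatLiteLLM` chat model; the exact parameter names are assumptions and should be verified against that package's docs:

```python
# Hypothetical sketch only: assumes langchain-litellm provides ChatLiteLLM
# and that it accepts these parameter names; verify before adopting.
from langchain_litellm import ChatLiteLLM

def build_model(self):
    # Same input mappings as the current build_model, but routed through a
    # LiteLLM-aware client so provider-specific response fields survive.
    return ChatLiteLLM(
        model=self.model_name,
        api_base=self.api_base,       # LiteLLM proxy endpoint
        temperature=self.temperature,
        max_tokens=self.max_tokens,
        max_retries=self.max_retries,
        streaming=self.stream,
    )
```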
73-75: Remove ineffective `isinstance` check for pydantic.v1 `SecretStr`.

The `isinstance(api_key, SecretStr)` guard will always be `False` because `SecretStrInput.value` resolves to a plain `str` (or `Message`/`Data` in edge cases), never a pydantic `SecretStr` object. The `get_secret_value()` call is unreachable. Since `ChatOpenAI` (pydantic v2) accepts a plain `str` for `api_key`, simplify lines 73-75 to just `api_key = self.api_key` and pass it directly to `ChatOpenAI` (see the sketch after the agent prompt below).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lfx/src/lfx/components/litellm/litellm_proxy.py` around lines 73 - 75, The isinstance check against SecretStr and the get_secret_value() call in litellm_proxy.py are ineffective; remove the conditional block (the SecretStr guard and get_secret_value invocation) so you simply use api_key = self.api_key and pass that plain string to ChatOpenAI (referencing the api_key variable and ChatOpenAI usage in the file) instead of attempting to unwrap a pydantic.v1 SecretStr.
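A before/after sketch of the simplification (the "before" block is reconstructed from the review description, not copied verbatim from the component):

```python
# Before (ineffective: SecretStrInput.value is already a plain str,
# so the guard never fires and get_secret_value() is unreachable):
api_key = self.api_key
if isinstance(api_key, SecretStr):
    api_key = api_key.get_secret_value()

# After (suggested simplification):
api_key = self.api_key
```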
src/backend/tests/unit/components/languagemodels/test_litellm_proxy.py (1)

53-72: Add a test for the `SecretStr` unwrapping branch in `build_model`.

The `build_model` method has an explicit branch for when `api_key` is a `pydantic.v1.SecretStr` (lines 74-75 of the component). This branch is not exercised by any existing test. Without coverage, a regression (e.g., `get_secret_value()` not called, or the wrong `SecretStr` type being checked) would go undetected.

✅ Suggested additional test
```python
def test_build_model_with_secret_str_api_key(self, component_class, default_kwargs, mocker):
    """Verify that a pydantic.v1 SecretStr api_key is unwrapped before passing to ChatOpenAI."""
    from pydantic.v1 import SecretStr

    default_kwargs["api_key"] = SecretStr("sk-secret")
    component = component_class(**default_kwargs)
    mock_chat_openai = mocker.patch(
        "lfx.components.litellm.litellm_proxy.ChatOpenAI",
        return_value=MagicMock(),
    )

    component.build_model()

    _args, kwargs = mock_chat_openai.call_args
    assert kwargs["api_key"] == "sk-secret"
    assert not isinstance(kwargs["api_key"], SecretStr)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend/tests/unit/components/languagemodels/test_litellm_proxy.py` around lines 53 - 72, Add a unit test exercising the SecretStr branch in build_model so the pydantic.v1.SecretStr api_key is unwrapped before calling ChatOpenAI: construct the component with default_kwargs["api_key"] set to a SecretStr instance, patch lfx.components.litellm.litellm_proxy.ChatOpenAI to return a MagicMock, call component.build_model(), then inspect mock_chat_openai.call_args to assert the passed kwargs["api_key"] equals the raw string and is not a SecretStr; reference the build_model method, the api_key field, SecretStr type, and ChatOpenAI mock when locating code to change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/backend/tests/unit/components/languagemodels/test_litellm_proxy.py`:
- Around line 132-137: The test test_get_exception_message_no_openai_import is
over-mocking: remove the redundant patch("builtins.__import__",
side_effect=ImportError) and rely only on patch.dict("sys.modules", {"openai":
None}) to simulate the missing openai import; update the with block around
component._get_exception_message(Exception("test")) to use only patch.dict so
the test stays targeted and avoids intercepting all imports.
---
Nitpick comments:
In `@src/backend/tests/unit/components/languagemodels/test_litellm_proxy.py`:
- Around line 53-72: Add a unit test exercising the SecretStr branch in
build_model so the pydantic.v1.SecretStr api_key is unwrapped before calling
ChatOpenAI: construct the component with default_kwargs["api_key"] set to a
SecretStr instance, patch lfx.components.litellm.litellm_proxy.ChatOpenAI to
return a MagicMock, call component.build_model(), then inspect
mock_chat_openai.call_args to assert the passed kwargs["api_key"] equals the raw
string and is not a SecretStr; reference the build_model method, the api_key
field, SecretStr type, and ChatOpenAI mock when locating code to change.
In `@src/lfx/src/lfx/components/litellm/litellm_proxy.py`:
- Around line 1-86: The code uses ChatOpenAI inside
LiteLLMProxyComponent.build_model which targets only the official OpenAI API and
will drop provider-specific fields returned by LiteLLM; replace ChatOpenAI with
a LiteLLM-aware client (e.g., a ChatLiteLLM or the langchain-litellm provider
class) or implement a lightweight provider wrapper that preserves nonstandard
response fields and metadata, keeping the same input mappings (api_base,
api_key, model_name, temperature, max_tokens, timeout, max_retries,
streaming/stream) used in build_model so calls continue to work but now
propagate provider-specific tokens and metadata.
- Around line 73-75: The isinstance check against SecretStr and the
get_secret_value() call in litellm_proxy.py are ineffective; remove the
conditional block (the SecretStr guard and get_secret_value invocation) so you
simply use api_key = self.api_key and pass that plain string to ChatOpenAI
(referencing the api_key variable and ChatOpenAI usage in the file) instead of
attempting to unwrap a pydantic.v1 SecretStr.
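Following up on the inline comment above about `test_get_exception_message_no_openai_import`, here is a sketch of the targeted version it describes. The test name and the `_get_exception_message` call are taken from the review text; the final assertion is an assumption about the component's contract when `openai` is unavailable:

```python
from unittest.mock import patch

def test_get_exception_message_no_openai_import(self, component_class, default_kwargs):
    """Simulate a missing openai package without intercepting all imports."""
    component = component_class(**default_kwargs)
    # Mapping the module to None makes `import openai` raise ImportError,
    # which is all this test needs; patching builtins.__import__ is redundant
    # and would intercept every import in the process.
    with patch.dict("sys.modules", {"openai": None}):
        result = component._get_exception_message(Exception("test"))
    # Assumed contract: no provider-specific message can be extracted
    # when openai is unavailable.
    assert result is None
```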
…dentials incorrect

Adds a LiteLLM proxy component
Summary by CodeRabbit
New Features
- Added a LiteLLM proxy component that routes requests to multiple LLM providers via LiteLLM.

Tests
- Added unit tests covering the new component.