feat: add litellm component #11805

Merged
jordanrfrazier merged 11 commits into main from litellm-component on Feb 26, 2026

Conversation


@jordanrfrazier jordanrfrazier commented Feb 18, 2026

Adds a LiteLLM proxy component.

Summary by CodeRabbit

  • New Features

    • Added LiteLLMProxyComponent that routes requests to multiple LLM providers with support for configurable base URL, API key, model name, temperature, max tokens, timeout, and retry settings.
    • Component is now available as a public export in the package.
  • Tests

    • Added comprehensive unit tests for LiteLLMProxyComponent covering initialization, input validation, model construction, and error handling scenarios.

@github-actions github-actions Bot added the enhancement New feature or request label Feb 18, 2026

coderabbitai Bot commented Feb 18, 2026

Important

Review skipped

Auto incremental reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


Walkthrough

This pull request introduces a new LiteLLMProxyComponent that routes requests to multiple LLM providers via LiteLLM, along with lazy-loading infrastructure and comprehensive unit tests. The component accepts configuration inputs for API endpoints, authentication, model selection, and parameters like temperature and max tokens.
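For orientation, here is a minimal sketch of what such a build step can look like, assuming langchain_openai's ChatOpenAI and the input names discussed in the review comments below; it is illustrative, not the merged file:

from langchain_openai import ChatOpenAI

def build_proxy_model(
    api_base: str,
    api_key: str,
    model_name: str,
    temperature: float = 0.1,
    max_tokens: int = 256,
    timeout: float = 60.0,
    max_retries: int = 2,
) -> ChatOpenAI:
    # Point a standard ChatOpenAI client at the LiteLLM proxy endpoint;
    # the proxy translates OpenAI-style requests for the routed provider.
    return ChatOpenAI(
        base_url=api_base,
        api_key=api_key,
        model=model_name,
        temperature=temperature,
        max_tokens=max_tokens,
        timeout=timeout,
        max_retries=max_retries,
    )

# Example: route through a locally running LiteLLM proxy (URL illustrative).
llm = build_proxy_model("http://localhost:4000", "sk-anything", "gpt-4o")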

Changes

• Component Implementation (src/lfx/src/lfx/components/litellm/litellm_proxy.py, src/lfx/src/lfx/components/litellm/__init__.py): Introduces LiteLLMProxyComponent with support for base URL, API key, model name, temperature, max tokens, timeout, and retry settings. Constructs ChatOpenAI clients and extracts user-friendly error messages from OpenAI exceptions. Implements lazy loading via __getattr__ and __dir__ hooks for dynamic imports (see the sketch after this list).
• Package Integration (src/lfx/src/lfx/components/__init__.py): Exposes litellm as a public component through TYPE_CHECKING imports, dynamic import mapping, and __all__ exports.
• Unit Tests (src/backend/tests/unit/components/languagemodels/test_litellm_proxy.py): Comprehensive test coverage for LiteLLMProxyComponent, including initialization validation, input schema verification, model construction with ChatOpenAI mocking, exception message formatting for various OpenAI errors, and edge cases when the openai import is unavailable.
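As a rough sketch of the lazy-loading pattern named above, PEP 562 module-level __getattr__ and __dir__ hooks look roughly like this (names are illustrative, not the actual module contents):

from importlib import import_module
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from .litellm_proxy import LiteLLMProxyComponent

# Map exported names to the submodules that define them.
_dynamic_imports = {"LiteLLMProxyComponent": ".litellm_proxy"}

__all__ = ["LiteLLMProxyComponent"]

def __getattr__(name: str):
    # Import the submodule only on first attribute access, then cache the
    # result in the module globals so this hook is not hit again.
    if name in _dynamic_imports:
        module = import_module(_dynamic_imports[name], package=__name__)
        value = getattr(module, name)
        globals()[name] = value
        return value
    msg = f"module {__name__!r} has no attribute {name!r}"
    raise AttributeError(msg)

def __dir__():
    # Expose lazily loaded attributes to dir() and tab completion.
    return sorted(set(list(globals()) + __all__))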

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks | ✅ 6 | ❌ 1

❌ Failed checks (1 warning)

• Docstring Coverage: ⚠️ Warning. Docstring coverage is 18.75%, below the required 80.00% threshold. Resolution: write docstrings for the functions that are missing them.
✅ Passed checks (6 passed)
• Description Check: ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.
• Title Check: ✅ Passed. The title 'feat: add litellm component' directly and clearly describes the main change, adding a new litellm component, as evidenced by the new LiteLLMProxyComponent class and related module files.
• Test Coverage For New Implementations: ✅ Passed. The PR includes comprehensive unit tests for LiteLLMProxyComponent covering initialization, input schema, model construction with mocking, and exception handling for all OpenAI error types plus edge cases.
• Test Quality And Coverage: ✅ Passed. The test suite provides comprehensive coverage of LiteLLMProxyComponent's main functionality, with 9 well-structured tests that validate actual behavior beyond smoke tests.
• Test File Naming And Structure: ✅ Passed. The test file follows the test_*.py naming convention, lives under tests/unit, uses descriptively named test_-prefixed functions, uses pytest and mock imports, covers positive, negative, and edge-case scenarios, isolates tests with @patch decorators and MagicMock, and validates expected behavior and error handling with multiple assertions.
• Excessive Mock Usage Warning: ✅ Passed. The test file demonstrates disciplined mock usage: 4 of 9 tests contain no mocks, third-party mocking is limited to ChatOpenAI, and the mock density of 0.066 is comparable to similar tests.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@jordanrfrazier jordanrfrazier requested review from HimavarshaVS and removed request for ogabrielluiz February 18, 2026 15:57

codecov Bot commented Feb 18, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 35.75%. Comparing base (91e32bf) to head (3561215).
⚠️ Report is 2 commits behind head on main.

Additional details and impacted files


@@            Coverage Diff             @@
##             main   #11805      +/-   ##
==========================================
+ Coverage   35.73%   35.75%   +0.01%     
==========================================
  Files        1528     1528              
  Lines       73928    73928              
  Branches    11134    11134              
==========================================
+ Hits        26420    26434      +14     
+ Misses      46069    46055      -14     
  Partials     1439     1439              
Flag coverage:
• backend: 56.44% <ø> (+0.07%) ⬆️
• frontend: 17.18% <ø> (ø)
• lfx: 42.40% <ø> (ø)

Flags with carried forward coverage won't be shown.
see 12 files with indirect coverage changes


@github-actions github-actions Bot added enhancement New feature or request and removed enhancement New feature or request labels Feb 18, 2026

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (3)
src/lfx/src/lfx/components/litellm/litellm_proxy.py (2)

1-86: Advisory: LangChain explicitly recommends against ChatOpenAI for LiteLLM proxy use cases.

ChatOpenAI targets official OpenAI API specifications only; non-standard response fields from third-party providers are not extracted or preserved, and the docs explicitly state that if you are using LiteLLM (or OpenRouter, vLLM, DeepSeek), you should use a provider-specific package instead.

Practically, this means reasoning tokens, tool-use metadata, and provider-specific fields from the real underlying model (e.g., Claude via LiteLLM) will be silently dropped. For the basic chat use case this works, but advanced features from routed providers won't surface in the response. Consider whether a dedicated langchain-litellm integration or ChatLiteLLM (if available) would be more appropriate.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lfx/src/lfx/components/litellm/litellm_proxy.py` around lines 1 - 86, The
code uses ChatOpenAI inside LiteLLMProxyComponent.build_model which targets only
the official OpenAI API and will drop provider-specific fields returned by
LiteLLM; replace ChatOpenAI with a LiteLLM-aware client (e.g., a ChatLiteLLM or
the langchain-litellm provider class) or implement a lightweight provider
wrapper that preserves nonstandard response fields and metadata, keeping the
same input mappings (api_base, api_key, model_name, temperature, max_tokens,
timeout, max_retries, streaming/stream) used in build_model so calls continue to
work but now propagate provider-specific tokens and metadata.
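For comparison, a minimal sketch of the suggested alternative, assuming langchain_community's ChatLiteLLM client; exact parameter names should be verified against the installed version:

from langchain_community.chat_models import ChatLiteLLM

# ChatLiteLLM speaks LiteLLM's own interface, so provider-specific response
# metadata is less likely to be dropped than with a plain ChatOpenAI client.
llm = ChatLiteLLM(
    model="gpt-4o",                    # model name as configured in the proxy
    api_base="http://localhost:4000",  # LiteLLM proxy endpoint (illustrative)
    temperature=0.1,
    max_tokens=256,
    max_retries=2,
)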

73-75: Remove ineffective isinstance check for pydantic.v1 SecretStr.

The isinstance(api_key, SecretStr) guard will always be False because SecretStrInput.value resolves to a plain str (or Message/Data in edge cases), never a pydantic SecretStr object. The get_secret_value() call is unreachable. Since ChatOpenAI (pydantic v2) accepts plain str for api_key, simplify lines 73–75 to just api_key = self.api_key and pass it directly to ChatOpenAI.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lfx/src/lfx/components/litellm/litellm_proxy.py` around lines 73 - 75,
The isinstance check against SecretStr and the get_secret_value() call in
litellm_proxy.py are ineffective; remove the conditional block (the SecretStr
guard and get_secret_value invocation) so you simply use api_key = self.api_key
and pass that plain string to ChatOpenAI (referencing the api_key variable and
ChatOpenAI usage in the file) instead of attempting to unwrap a pydantic.v1
SecretStr.
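To make the dead branch concrete, a small runnable sketch of the guard in isolation, assuming pydantic v2's pydantic.v1 compatibility module (the helper name is hypothetical):

from pydantic.v1 import SecretStr

def unwrap_api_key(api_key):
    # The guard under discussion: only unwraps when given a SecretStr.
    if isinstance(api_key, SecretStr):
        return api_key.get_secret_value()
    return api_key

# SecretStrInput.value resolves to a plain str, so in the component the
# guard never fires and the call is equivalent to using self.api_key as-is.
assert unwrap_api_key("sk-plain") == "sk-plain"
assert unwrap_api_key(SecretStr("sk-wrapped")) == "sk-wrapped"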
src/backend/tests/unit/components/languagemodels/test_litellm_proxy.py (1)

53-72: Add a test for the SecretStr unwrapping branch in build_model.

The build_model method has an explicit branch for when api_key is a pydantic.v1.SecretStr (lines 74–75 of the component). This branch is not exercised by any existing test. Without coverage, a regression (e.g., get_secret_value() not called, or the wrong SecretStr type being checked) would go undetected.

✅ Suggested additional test
def test_build_model_with_secret_str_api_key(self, component_class, default_kwargs, mocker):
    """Verify that a pydantic.v1 SecretStr api_key is unwrapped before passing to ChatOpenAI."""
    from pydantic.v1 import SecretStr

    default_kwargs["api_key"] = SecretStr("sk-secret")
    component = component_class(**default_kwargs)

    mock_chat_openai = mocker.patch(
        "lfx.components.litellm.litellm_proxy.ChatOpenAI",
        return_value=MagicMock(),
    )
    component.build_model()

    _args, kwargs = mock_chat_openai.call_args
    assert kwargs["api_key"] == "sk-secret"
    assert not isinstance(kwargs["api_key"], SecretStr)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/backend/tests/unit/components/languagemodels/test_litellm_proxy.py`
around lines 53 - 72, Add a unit test exercising the SecretStr branch in
build_model so the pydantic.v1.SecretStr api_key is unwrapped before calling
ChatOpenAI: construct the component with default_kwargs["api_key"] set to a
SecretStr instance, patch lfx.components.litellm.litellm_proxy.ChatOpenAI to
return a MagicMock, call component.build_model(), then inspect
mock_chat_openai.call_args to assert the passed kwargs["api_key"] equals the raw
string and is not a SecretStr; reference the build_model method, the api_key
field, SecretStr type, and ChatOpenAI mock when locating code to change.
🤖 Prompt for the actionable review comment (AI agents)
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/backend/tests/unit/components/languagemodels/test_litellm_proxy.py`:
- Around line 132-137: The test test_get_exception_message_no_openai_import is
over-mocking: remove the redundant patch("builtins.__import__",
side_effect=ImportError) and rely only on patch.dict("sys.modules", {"openai":
None}) to simulate the missing openai import; update the with block around
component._get_exception_message(Exception("test")) to use only patch.dict so
the test stays targeted and avoids intercepting all imports.
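A minimal sketch of the narrowed test, assuming a component fixture like the one in the existing module (the expected return value is assumed for illustration):

from unittest.mock import patch

def test_get_exception_message_no_openai_import(component):
    # Setting the sys.modules entry to None makes "import openai" raise
    # ImportError inside _get_exception_message, without intercepting every
    # other import the way patching builtins.__import__ would.
    with patch.dict("sys.modules", {"openai": None}):
        message = component._get_exception_message(Exception("test"))
    assert message is None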


@github-actions github-actions Bot added the lgtm This PR has been approved by a maintainer label Feb 18, 2026
@github-actions github-actions Bot added enhancement New feature or request and removed enhancement New feature or request labels Feb 18, 2026

github-actions Bot commented Feb 18, 2026

Frontend Unit Test Coverage Report

Coverage Summary

• Lines: 19%
• Statements: 19.03% (6177/32459)
• Branches: 12.53% (3175/25323)
• Functions: 12.75% (888/6962)

Unit Test Results

Tests: 2345 | Skipped: 0 💤 | Failures: 0 ❌ | Errors: 0 🔥 | Time: 33.213s ⏱️

@github-actions github-actions Bot added enhancement New feature or request and removed enhancement New feature or request labels Feb 18, 2026
@jordanrfrazier jordanrfrazier added this pull request to the merge queue Feb 18, 2026
@github-merge-queue github-merge-queue Bot removed this pull request from the merge queue due to failed status checks Feb 18, 2026

niedrem commented Feb 26, 2026

Would it make sense to place the litellm proxy component under the Models & Agents group?


@github-actions github-actions Bot added enhancement New feature or request and removed enhancement New feature or request labels Feb 26, 2026
@jordanrfrazier jordanrfrazier added this pull request to the merge queue Feb 26, 2026
Merged via the queue into main with commit 6bddfc2 Feb 26, 2026
94 checks passed
@jordanrfrazier jordanrfrazier deleted the litellm-component branch February 26, 2026 22:19

Labels

enhancement New feature or request lgtm This PR has been approved by a maintainer
