fix: Strip /v1 suffix from base URL and improve error handling in OllamaComponent #10112
Conversation
…onfiguration and building models with ChatOllamaComponent ♻️ (ollama.py): Refactor ChatOllamaComponent to support OpenAI-compatible mode with '/v1' suffix and improve error handling in methods like build_model and is_valid_ollama_url
Important: Review skipped. Auto incremental reviews are disabled on this repository. Please check the settings in the CodeRabbit UI.

Walkthrough

Adds OpenAI-compatible path support in the Ollama component, selecting ChatOpenAI when base_url contains /v1. Normalizes URLs by stripping /v1 in validators and model listing. Adjusts error handling to avoid raising when Ollama is unreachable during config updates. Updates unit tests to reflect new behaviors and URL handling.
Sequence Diagram(s)

```mermaid
sequenceDiagram
autonumber
participant Caller
participant OllamaComponent as Ollama Component
participant ChatOpenAI
participant ChatOllama
rect rgba(230,245,255,0.6)
note over OllamaComponent: build_model(base_url, model, params)
Caller->>OllamaComponent: build_model(...)
alt base_url contains "/v1"
OllamaComponent->>ChatOpenAI: construct(base_url, api_key, model, temperature, ...)
ChatOpenAI-->>OllamaComponent: instance
else standard Ollama API
OllamaComponent->>ChatOllama: construct(**llm_params)
ChatOllama-->>OllamaComponent: instance
end
OllamaComponent-->>Caller: LLM instance
end
```
```mermaid
sequenceDiagram
autonumber
participant Caller
participant OllamaComponent as Ollama Component
participant OllamaAPI as Ollama REST
rect rgba(240,255,240,0.6)
note over OllamaComponent: URL validation and model listing
Caller->>OllamaComponent: is_valid_ollama_url(url)
OllamaComponent->>OllamaComponent: normalize url (strip trailing "/v1")
OllamaComponent->>OllamaAPI: GET /api/tags
OllamaAPI-->>OllamaComponent: 200/err
OllamaComponent-->>Caller: True/False
end
rect rgba(255,248,230,0.6)
Caller->>OllamaComponent: get_models(base_url)
OllamaComponent->>OllamaComponent: strip "/v1", ensure trailing "/"
OllamaComponent->>OllamaAPI: GET /api/tags
OllamaAPI-->>OllamaComponent: models list
OllamaComponent-->>Caller: model names
end
```
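For readers skimming the diagrams, here is a condensed Python sketch of the same dispatch. It is an illustration assembled from the walkthrough and the diff excerpts below, not the component's exact source; the function name and signature are invented for clarity.

```python
from langchain_ollama import ChatOllama
from langchain_openai import ChatOpenAI


def build_model_sketch(base_url: str, model_name: str, temperature: float | None = None):
    """Illustrative dispatch: pick the OpenAI-compatible client when '/v1' is in the URL."""
    if base_url and "/v1" in base_url:  # the review below suggests tightening this to an endswith check
        # OpenAI-compatible mode: Ollama accepts any placeholder API key.
        return ChatOpenAI(
            base_url=base_url,
            api_key="ollama",
            model=model_name,
            temperature=temperature or 0.1,
        )
    # Native Ollama API mode.
    return ChatOllama(base_url=base_url, model=model_name, temperature=temperature)
```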
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (5 passed)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 2
🧹 Nitpick comments (2)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py (2)
346-367: Consider asserting ChatOllama is not called.

The test validates that ChatOpenAI is invoked with the correct parameters when /v1 is present. However, it doesn't verify that ChatOllama is NOT called on this path. Consider adding a patch for ChatOllama to verify it's not invoked:

```diff
 @patch("lfx.components.ollama.ollama.ChatOpenAI")
-def test_build_model_uses_chatopenai_with_v1_suffix(self, mock_chat_openai):
+@patch("lfx.components.ollama.ollama.ChatOllama")
+def test_build_model_uses_chatopenai_with_v1_suffix(self, mock_chat_ollama, mock_chat_openai):
     """Test that /v1 suffix triggers ChatOpenAI instead of ChatOllama."""
     mock_model = MagicMock()
     mock_chat_openai.return_value = mock_model

     component = ChatOllamaComponent()
     component.base_url = "http://localhost:11434/v1"
     component.model_name = "llama3.1"
     component.temperature = 0.7
     component.mirostat = "Disabled"

     model = component.build_model()

     # Verify ChatOpenAI was called with /v1 URL
     mock_chat_openai.assert_called_once()
+    mock_chat_ollama.assert_not_called()
     call_args = mock_chat_openai.call_args[1]
     assert call_args["base_url"] == "http://localhost:11434/v1"
     assert call_args["api_key"] == "ollama"  # pragma: allowlist secret
     assert call_args["model"] == "llama3.1"
     assert call_args["temperature"] == 0.7
     assert model == mock_model
```
369-386: Make the base_url assertion more specific.

The test verifies that /v1/ triggers ChatOpenAI, but the assertion only checks that /v1 is present in the URL. Consider asserting the exact expected URL for clarity:

```diff
     # Verify ChatOpenAI was called
     mock_chat_openai.assert_called_once()
     call_args = mock_chat_openai.call_args[1]
-    assert "/v1" in call_args["base_url"]
+    assert call_args["base_url"] == "http://localhost:11434/v1/"
     assert model == mock_model
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- src/backend/tests/unit/components/languagemodels/test_chatollama_component.py (2 hunks)
- src/lfx/src/lfx/components/ollama/ollama.py (4 hunks)
🧰 Additional context used
📓 Path-based instructions (8)
src/backend/tests/unit/components/**/*.py
📄 CodeRabbit inference engine (.cursor/rules/backend_development.mdc)
src/backend/tests/unit/components/**/*.py: Mirror the component directory structure for unit tests in src/backend/tests/unit/components/
Use ComponentTestBaseWithClient or ComponentTestBaseWithoutClient as base classes for component unit tests
Provide file_names_mapping for backward compatibility in component tests
Create comprehensive unit tests for all new components
Files:
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py
{src/backend/**/*.py,tests/**/*.py,Makefile}
📄 CodeRabbit inference engine (.cursor/rules/backend_development.mdc)
{src/backend/**/*.py,tests/**/*.py,Makefile}: Run make format_backend to format Python code before linting or committing changes
Run make lint to perform linting checks on backend Python code
Files:
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py
src/backend/tests/unit/**/*.py
📄 CodeRabbit inference engine (.cursor/rules/backend_development.mdc)
Test component integration within flows using create_flow, build_flow, and get_build_events utilities
Files:
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py
src/backend/tests/**/*.py
📄 CodeRabbit inference engine (.cursor/rules/testing.mdc)
src/backend/tests/**/*.py: Unit tests for backend code must be located in the 'src/backend/tests/' directory, with component tests organized by component subdirectory under 'src/backend/tests/unit/components/'.
Test files should use the same filename as the component under test, with an appropriate test prefix or suffix (e.g., 'my_component.py' → 'test_my_component.py').
Use the 'client' fixture (an async httpx.AsyncClient) for API tests in backend Python tests, as defined in 'src/backend/tests/conftest.py'.
When writing component tests, inherit from the appropriate base class in 'src/backend/tests/base.py' (ComponentTestBase, ComponentTestBaseWithClient, or ComponentTestBaseWithoutClient) and provide the required fixtures: 'component_class', 'default_kwargs', and 'file_names_mapping'.
Each test in backend Python test files should have a clear docstring explaining its purpose, and complex setups or mocks should be well-commented.
Test both sync and async code paths in backend Python tests, using '@pytest.mark.asyncio' for async tests.
Mock external dependencies appropriately in backend Python tests to isolate unit tests from external services.
Test error handling and edge cases in backend Python tests, including using 'pytest.raises' and asserting error messages.
Validate input/output behavior and test component initialization and configuration in backend Python tests.
Use the 'no_blockbuster' pytest marker to skip the blockbuster plugin in tests when necessary.
Be aware of ContextVar propagation in async tests; test both direct event loop execution and 'asyncio.to_thread' scenarios to ensure proper context isolation.
Test error handling by mocking internal functions using monkeypatch in backend Python tests.
Test resource cleanup in backend Python tests by using fixtures that ensure proper initialization and cleanup of resources.
Test timeout and performance constraints in backend Python tests using 'asyncio.wait_for' and timing assertions.
Test Langflow's Messag...
Files:
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py
src/backend/**/*component*.py
📄 CodeRabbit inference engine (.cursor/rules/icons.mdc)
In your Python component class, set the icon attribute to a string matching the frontend icon mapping exactly (case-sensitive).
Files:
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py
src/backend/**/components/**/*.py
📄 CodeRabbit inference engine (.cursor/rules/icons.mdc)
In your Python component class, set the icon attribute to a string matching the frontend icon mapping exactly (case-sensitive).
Files:
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py
{**/test_*.py,**/*.test.{ts,tsx}}
📄 CodeRabbit inference engine (coderabbit-custom-pre-merge-checks-unique-id-file-non-traceable-F7F2B60C-1728-4C9A-8889-4F2235E186CA.txt)
{**/test_*.py,**/*.test.{ts,tsx}}: Warn when mocks replace testing real behavior/interactions in test files
Suggest using real objects or simpler test doubles when mocks become excessive
Ensure mocks are reserved for external dependencies, not core application logic, in tests
Test files should have descriptive test names that explain what is being validated
Organize tests logically with proper setup and teardown
Include edge cases and error conditions for comprehensive test coverage
Cover both positive and negative scenarios where appropriate
Tests must cover the main functionality being implemented
Avoid smoke-only tests; assert meaningful behavior and outcomes
Follow project testing frameworks (pytest for backend, Playwright for frontend)
For API endpoints, verify both success and error responses in tests
Files:
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py
**/test_*.py
📄 CodeRabbit inference engine (coderabbit-custom-pre-merge-checks-unique-id-file-non-traceable-F7F2B60C-1728-4C9A-8889-4F2235E186CA.txt)
**/test_*.py: Backend test files must follow naming convention test_*.py
Backend tests should use proper pytest structure
For async Python code, use proper pytest async testing patterns (e.g., pytest-asyncio)
Files:
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py
🧬 Code graph analysis (2)
src/lfx/src/lfx/components/ollama/ollama.py (2)
src/lfx/src/lfx/components/ollama/ollama_embeddings.py (1)
- is_valid_ollama_url (97-103)

src/lfx/src/lfx/utils/util.py (1)
- transform_localhost_url (118-160)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py (2)
src/lfx/src/lfx/components/ollama/ollama.py (4)
- update_build_config (233-270)
- build_model (145-217)
- is_valid_ollama_url (219-231)
- get_models (272-334)

src/lfx/src/lfx/components/ollama/ollama_embeddings.py (2)
- update_build_config (58-68)
- is_valid_ollama_url (97-103)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
- GitHub Check: Lint Backend / Run Mypy (3.11)
- GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 4
- GitHub Check: Run Backend Tests / Integration Tests - Python 3.10
- GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 5
- GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 2
- GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 3
- GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 1
- GitHub Check: Test Starter Templates
- GitHub Check: Update Starter Projects
🔇 Additional comments (8)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py (3)
172-174: LGTM!

The test correctly validates the revised error handling where update_build_config returns empty model options instead of raising when Ollama is unreachable.
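For reference, a minimal sketch of what such a test can look like — the import path follows the code graph above, but the test name, mock target, signature, and config shape are assumptions, not a copy of the suite:

```python
from unittest.mock import patch

import httpx
import pytest

from lfx.components.ollama.ollama import ChatOllamaComponent  # path per the code graph above


@pytest.mark.asyncio
async def test_update_build_config_survives_ollama_outage():
    """Connection failure should yield empty model options, not an exception."""
    component = ChatOllamaComponent()
    build_config = {"model_name": {"options": ["stale-model"]}}
    # Simulate Ollama being down: every HTTP GET raises a connection error.
    with patch("httpx.AsyncClient.get", side_effect=httpx.ConnectError("connection refused")):
        updated = await component.update_build_config(
            build_config, field_value="http://localhost:11434", field_name="base_url"
        )
    assert updated["model_name"]["options"] == []
```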
388-403: LGTM!

The test properly validates that is_valid_ollama_url strips the /v1 suffix before making the validation request, ensuring the correct Ollama API endpoint is used.
405-454: LGTM!

Both tests effectively validate the implementation changes:

- test_get_models_with_v1_suffix (lines 405-432): Confirms that get_models strips the /v1 suffix and calls the correct Ollama API endpoint.
- test_update_build_config_no_error_when_ollama_not_running (lines 434-454): Validates the revised error handling where connection failures result in empty model options rather than raising an exception.

src/lfx/src/lfx/components/ollama/ollama.py (5)
7-7: LGTM!

The import of ChatOpenAI is correctly placed and necessary to support the OpenAI-compatible API path when /v1 is present in the base URL.
36-36: LGTM!

The updated field info clearly documents the /v1 suffix requirement for OpenAI-compatible mode and provides a helpful example URL.
164-171: Verify parameter handling in OpenAI-compatible mode.

The OpenAI-compatible path passes only a subset of parameters to ChatOpenAI (temperature, max_tokens, verbose). Many Ollama-specific parameters like top_p, top_k, repeat_penalty, mirostat, etc. are not passed and will be silently ignored.

Consider:

- Documenting in the field info text which parameters are supported in OpenAI-compatible mode
- Logging a warning when OpenAI mode is used with unsupported parameters (a sketch follows this list), or
- Verifying that the OpenAI API endpoint supports these parameters and passing them if so

This will help users understand parameter limitations when using the /v1 endpoint.
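As a sketch of the second option, a hypothetical helper (its name, the parameter list, and the "Disabled" sentinel are illustrative, not taken from the component) could flag the dropped settings:

```python
import logging

logger = logging.getLogger(__name__)

# Ollama-only knobs that the OpenAI-compatible path currently does not forward.
OLLAMA_ONLY_PARAMS = ("top_k", "repeat_penalty", "mirostat", "mirostat_eta", "mirostat_tau")


def warn_on_ignored_params(component) -> None:
    """Hypothetical helper: log which Ollama-specific settings the /v1 path will drop."""
    ignored = [
        name
        for name in OLLAMA_ONLY_PARAMS
        if getattr(component, name, None) not in (None, "", "Disabled")
    ]
    if ignored:
        logger.warning(
            "OpenAI-compatible mode (/v1) ignores these Ollama parameters: %s",
            ", ".join(ignored),
        )
```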
252-259: LGTM!

The simplified error handling in update_build_config provides graceful degradation by setting empty model options when Ollama is unreachable, rather than raising an error. This improves the user experience by allowing the component to be configured even when Ollama is temporarily unavailable.
289-293: LGTM!

The URL normalization correctly strips the /v1 suffix before querying the Ollama API, ensuring that model listing works consistently whether or not the user includes /v1 in their base URL.
```python
if transformed_base_url and "/v1" in transformed_base_url:
    try:
        output = ChatOpenAI(
            base_url=transformed_base_url,
            api_key="ollama",  # pragma: allowlist secret
            model=self.model_name,
            temperature=self.temperature or 0.1,
            max_tokens=self.num_ctx or None,
            verbose=self.verbose,
        )
    except Exception as e:
        msg = (
            "Unable to connect to Ollama's OpenAI-compatible API. ",
            "Please verify the base URL includes '/v1', ensure Ollama is running, and try again.",
        )
        raise ValueError(msg) from e
```
The /v1 check may match unintended URLs.
The condition "/v1" in transformed_base_url could match /v1 anywhere in the URL (e.g., in a hostname or path segment like http://example.com/api/v1/ollama), not just as the intended suffix.
Consider using a more precise check that matches /v1 only at the end of the path:
```diff
-if transformed_base_url and "/v1" in transformed_base_url:
+if transformed_base_url and (transformed_base_url.rstrip("/").endswith("/v1")):
```

This aligns with the normalization logic used in is_valid_ollama_url and get_models, where /v1 is explicitly checked as a suffix.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-if transformed_base_url and "/v1" in transformed_base_url:
+if transformed_base_url and (transformed_base_url.rstrip("/").endswith("/v1")):
     try:
         output = ChatOpenAI(
             base_url=transformed_base_url,
             api_key="ollama",  # pragma: allowlist secret
             model=self.model_name,
             temperature=self.temperature or 0.1,
             max_tokens=self.num_ctx or None,
             verbose=self.verbose,
         )
     except Exception as e:
         msg = (
             "Unable to connect to Ollama's OpenAI-compatible API. ",
             "Please verify the base URL includes '/v1', ensure Ollama is running, and try again.",
         )
         raise ValueError(msg) from e
```
```diff
 async def is_valid_ollama_url(self, url: str) -> bool:
     try:
         async with httpx.AsyncClient() as client:
             url = transform_localhost_url(url)
-            return (await client.get(urljoin(url, "api/tags"))).status_code == HTTP_STATUS_OK
+            if not url:
+                return False
+            # Strip /v1 suffix if present, as Ollama API endpoints are at root level
+            base_url = url.rstrip("/").removesuffix("/v1")
+            if not base_url.endswith("/"):
+                base_url = base_url + "/"
+            return (await client.get(urljoin(base_url, "api/tags"))).status_code == HTTP_STATUS_OK
     except httpx.RequestError:
         return False
```
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
Verify consistency with ollama_embeddings.py.
The implementation correctly strips the /v1 suffix before validating the Ollama URL. However, ollama_embeddings.py (lines 96-102) has a similar is_valid_ollama_url method that does NOT strip /v1.
Run this script to check if ollama_embeddings.py should also strip /v1 for consistency:
If embeddings also support OpenAI-compatible mode, the URL normalization should be consistent across both components.
🏁 Script executed:
```shell
#!/bin/bash
# Check if ollama_embeddings should also handle /v1 suffix

# Display the is_valid_ollama_url implementation in ollama_embeddings.py
echo "=== ollama_embeddings.py is_valid_ollama_url ==="
rg -A 10 "async def is_valid_ollama_url" src/lfx/src/lfx/components/ollama/ollama_embeddings.py

# Check if there are any references to /v1 in ollama_embeddings
echo ""
echo "=== References to /v1 in ollama_embeddings.py ==="
rg "/v1" src/lfx/src/lfx/components/ollama/ollama_embeddings.py || echo "No references to /v1 found"
```

Length of output: 454
Normalize /v1 suffix in ollama_embeddings.py
In src/lfx/src/lfx/components/ollama/ollama_embeddings.py, update is_valid_ollama_url to strip "/v1" and enforce a trailing slash—mirroring the logic in ollama.py—so both modules validate the same endpoint.
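A sketch of what the mirrored validator could look like, assuming transform_localhost_url, HTTP_STATUS_OK, httpx, and urljoin are available in ollama_embeddings.py just as they are in ollama.py:

```python
async def is_valid_ollama_url(self, url: str) -> bool:
    try:
        async with httpx.AsyncClient() as client:
            url = transform_localhost_url(url)
            if not url:
                return False
            # Mirror ollama.py: /v1 is the OpenAI-compatible prefix, not the native API root.
            base_url = url.rstrip("/").removesuffix("/v1")
            if not base_url.endswith("/"):
                base_url += "/"
            return (await client.get(urljoin(base_url, "api/tags"))).status_code == HTTP_STATUS_OK
    except httpx.RequestError:
        return False
```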
Codecov Report

✅ All modified and coverable lines are covered by tests.
❌ Your project status has failed because the head coverage (47.08%) is below the target coverage (55.00%). You can increase the head coverage or adjust the target coverage.

Additional details and impacted files

```
@@           Coverage Diff            @@
##             main   #10112    +/-   ##
========================================
  Coverage   24.16%   24.17%
========================================
  Files        1086     1086
  Lines       40067    40043    -24
  Branches     5546     5541     -5
========================================
- Hits         9682     9679    -3
+ Misses      30214    30193    -21
  Partials      171      171
```

Flags with carried forward coverage won't be shown. Click here to find out more.
…utomatically stripped in build_model method 📝 (ollama.py): Update base_url info to clarify default value and improve readability of the code
| "Detected '/v1' suffix in base URL. The Ollama component uses the native Ollama API, " | ||
| "not the OpenAI-compatible API. The '/v1' suffix has been automatically removed. " | ||
| "If you want to use the OpenAI-compatible API, please use the OpenAI component instead. " | ||
| "Learn more at https://docs.ollama.com/openai#openai-compatibility" |
We should point to our docs
@Cristhianzl In this case I'd use Ollama's docs, because it pertains to Ollama's OpenAI compatible API, and not ours
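For context, the logic that emits the quoted message presumably has roughly this shape (a sketch only; the logging call and variable names are assumptions, not the exact diff):

```python
if base_url.rstrip("/").endswith("/v1"):
    logger.warning(
        "Detected '/v1' suffix in base URL. The Ollama component uses the native Ollama API, "
        "not the OpenAI-compatible API. The '/v1' suffix has been automatically removed. "
        "If you want to use the OpenAI-compatible API, please use the OpenAI component instead. "
        "Learn more at https://docs.ollama.com/openai#openai-compatibility"
    )
    # Fall back to the native API root.
    base_url = base_url.rstrip("/").removesuffix("/v1")
```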
This pull request improves the handling of the Ollama API base URL in the ChatOllamaComponent, ensuring that any /v1 suffix (used for OpenAI compatibility) is automatically stripped and a warning is logged. This helps prevent user misconfiguration and clarifies that the component uses the native Ollama API. The tests have also been updated to cover these scenarios, and error handling has been made more robust when Ollama is not running.

Base URL handling and API compatibility:

- The build_model, is_valid_ollama_url, and get_models methods now automatically strip any /v1 or /v1/ suffix from the base URL, ensuring correct usage of the native Ollama API endpoints. A warning is logged if such a suffix is detected, guiding users to use the OpenAI component if needed. (src/lfx/src/lfx/components/ollama/ollama.py) [1] [2] [3]
- The default value of the base_url input field is now set to http://localhost:11434, making setup easier for most users. (src/lfx/src/lfx/components/ollama/ollama.py)

Error handling improvements:

- update_build_config no longer raises when Ollama is unreachable; the model options are simply left empty. (src/lfx/src/lfx/components/ollama/ollama.py)

Test coverage for new behavior:

- Added and updated unit tests to verify that /v1 suffixes are stripped, warnings are logged, and API calls use the correct base URL. Tests also confirm that no error is raised when Ollama is unavailable. (src/backend/tests/unit/components/languagemodels/test_chatollama_component.py) [1] [2]

Minor fixes:

- Improved readability in build_model for clarity. (src/lfx/src/lfx/components/ollama/ollama.py)

#2885