
fix: Strip /v1 suffix from base URL and improve error handling in OllamaComponent#10112

Merged
Cristhianzl merged 4 commits into main from cz/ollamafix-v1
Oct 7, 2025

Conversation

@Cristhianzl
Member

@Cristhianzl Cristhianzl commented Oct 3, 2025

This pull request improves the handling of the Ollama API base URL in the ChatOllamaComponent, ensuring that any /v1 suffix (used for OpenAI compatibility) is automatically stripped and a warning is logged. This helps prevent user misconfiguration and clarifies that the component uses the native Ollama API. The tests have also been updated to cover these scenarios, and error handling has been made more robust when Ollama is not running.

Base URL handling and API compatibility:

  • The build_model, is_valid_ollama_url, and get_models methods now automatically strip any /v1 or /v1/ suffix from the base URL, ensuring correct usage of the native Ollama API endpoints. A warning is logged if such a suffix is detected, guiding users to use the OpenAI component if needed. (src/lfx/src/lfx/components/ollama/ollama.py) [1] [2] [3]
  • The default value for the base_url input field is now set to http://localhost:11434, making setup easier for most users. (src/lfx/src/lfx/components/ollama/ollama.py)
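For illustration, the stripping-and-warning behavior described above can be sketched roughly like this (`normalize_ollama_base_url` is a hypothetical name for this sketch, not the actual method in ollama.py):

```python
import logging

logger = logging.getLogger(__name__)


def normalize_ollama_base_url(base_url: str) -> str:
    """Strip a trailing /v1 or /v1/ suffix so native Ollama endpoints are used."""
    stripped = base_url.rstrip("/")
    if stripped.endswith("/v1"):
        # Warn the user that the native Ollama API is used, not the OpenAI-compatible one.
        logger.warning(
            "Detected '/v1' suffix in base URL. The Ollama component uses the "
            "native Ollama API; the suffix has been removed. Use the OpenAI "
            "component for the OpenAI-compatible API."
        )
        stripped = stripped.removesuffix("/v1")
    return stripped
```

With this shape, `http://localhost:11434/v1/` and `http://localhost:11434` both normalize to the same base, so `api/tags` and friends always resolve against the native API root.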

Error handling improvements:

  • The component no longer raises an error if Ollama is not running when updating the build config; instead, it sets empty model options, improving robustness and user experience. (src/lfx/src/lfx/components/ollama/ollama.py)
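A minimal sketch of that graceful-degradation idea, using only the standard library rather than the component's actual httpx client (`get_model_options` is a hypothetical helper, not the PR's code):

```python
import json
import urllib.error
import urllib.request


def get_model_options(base_url: str) -> list[str]:
    """Return available Ollama model names, or [] when the server is unreachable."""
    url = base_url.rstrip("/").removesuffix("/v1") + "/api/tags"
    try:
        with urllib.request.urlopen(url, timeout=3) as resp:
            payload = json.load(resp)
        return [m["name"] for m in payload.get("models", [])]
    except (urllib.error.URLError, OSError, ValueError):
        # Ollama not running: degrade to an empty options list instead of raising,
        # so the build config can still be updated.
        return []
```

The key point is the except branch: a connection failure yields empty model options instead of surfacing an error to the user mid-configuration.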

Test coverage for new behavior:

  • New and updated tests verify that /v1 suffixes are stripped, warnings are logged, and API calls use the correct base URL. Tests also confirm that no error is raised when Ollama is unavailable. (src/backend/tests/unit/components/languagemodels/test_chatollama_component.py) [1] [2]

Minor fixes:

  • Improved error message formatting in build_model for clarity. (src/lfx/src/lfx/components/ollama/ollama.py)

#2885

…onfiguration and building models with ChatOllamaComponent

♻️ (ollama.py): Refactor ChatOllamaComponent to support OpenAI-compatible mode with '/v1' suffix and improve error handling in methods like build_model and is_valid_ollama_url
@coderabbitai
Contributor

coderabbitai Bot commented Oct 3, 2025

Important

Review skipped

Auto incremental reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

Walkthrough

Adds OpenAI-compatible path support in the Ollama component, selecting ChatOpenAI when base_url contains /v1. Normalizes URLs by stripping /v1 in validators and model listing. Adjusts error handling to avoid raising when Ollama is unreachable during config updates. Updates unit tests to reflect new behaviors and URL handling.

Changes

Cohort / File(s) and summary:

  • Ollama component implementation (src/lfx/src/lfx/components/ollama/ollama.py): Adds a ChatOpenAI path when base_url includes /v1; retains ChatOllama for standard mode. Normalizes URLs by stripping trailing /v1 in is_valid_ollama_url and get_models. Refines error messages. Removes the pre-flight URL validity check in update_build_config and alters control flow to handle unreachable Ollama without raising. Imports ChatOpenAI.
  • Unit tests for language models (src/backend/tests/unit/components/languagemodels/test_chatollama_component.py): Updates expectations: unreachable Ollama no longer raises and model_name options become empty. Adds tests for ChatOpenAI selection with a /v1 base_url, URL normalization in validators and get_models, and coverage for containerized URL transformations.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant Caller
  participant OllamaComponent as Ollama Component
  participant ChatOpenAI
  participant ChatOllama

  rect rgba(230,245,255,0.6)
  note over OllamaComponent: build_model(base_url, model, params)
  Caller->>OllamaComponent: build_model(...)
  alt base_url contains "/v1"
    OllamaComponent->>ChatOpenAI: construct(base_url, api_key, model, temperature, ...)
    ChatOpenAI-->>OllamaComponent: instance
  else standard Ollama API
    OllamaComponent->>ChatOllama: construct(**llm_params)
    ChatOllama-->>OllamaComponent: instance
  end
  OllamaComponent-->>Caller: LLM instance
  end
sequenceDiagram
  autonumber
  participant Caller
  participant OllamaComponent as Ollama Component
  participant OllamaAPI as Ollama REST

  rect rgba(240,255,240,0.6)
  note over OllamaComponent: URL validation and model listing
  Caller->>OllamaComponent: is_valid_ollama_url(url)
  OllamaComponent->>OllamaComponent: normalize url (strip trailing "/v1")
  OllamaComponent->>OllamaAPI: GET /api/tags
  OllamaAPI-->>OllamaComponent: 200/err
  OllamaComponent-->>Caller: True/False
  end

  rect rgba(255,248,230,0.6)
  Caller->>OllamaComponent: get_models(base_url)
  OllamaComponent->>OllamaComponent: strip "/v1", ensure trailing "/"
  OllamaComponent->>OllamaAPI: GET /api/tags
  OllamaAPI-->>OllamaComponent: models list
  OllamaComponent-->>Caller: model names
  end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 58.33%, below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
  • Excessive Mock Usage Warning (❓ Inconclusive): Unable to review the test suite for mock usage because the repository contents, including test_chatollama_component.py, could not be accessed; without seeing the tests it is not possible to determine whether mocks are excessive or appropriate. Resolution: provide access to the test files or rerun the checks; once the tests are viewable, the excessive mock usage assessment can be completed.
✅ Passed checks (5 passed)
  • Test Coverage For New Implementations (✅ Passed): The PR introduces significant new behavior for Ollama's /v1 OpenAI-compatible mode, URL normalization, and error handling, and it ships with targeted unit tests in src/backend/tests/unit/components/languagemodels/test_chatollama_component.py that exercise the new ChatOpenAI path, the /v1 stripping logic, and the no-error handling when Ollama is unreachable, all using meaningful assertions that match the implementation. No new frontend code or integration-level changes were introduced, so no additional test layers are expected. Test naming follows the project conventions, and there are no placeholders, so coverage for the new functionality appears sufficient.
  • Test Quality And Coverage (✅ Passed): Tests thoroughly exercise the new /v1 OpenAI-compatible path, URL normalization, model fetching, and graceful handling when Ollama is offline, and they assert concrete behaviors rather than mere smoke checks. All touched code remains synchronous, so async testing patterns are irrelevant. No API endpoint behavior outside these unit scopes was introduced, so the test suite remains aligned with project testing conventions.
  • Test File Naming And Structure (✅ Passed): Reviewed the backend test file src/backend/tests/unit/components/languagemodels/test_chatollama_component.py; it follows the test_*.py naming convention, uses pytest-style functions with descriptive names that outline scenarios (e.g., OpenAI-compatible base URL handling, missing Ollama service), and logically groups positive and negative cases, including error conditions. No frontend or integration tests were modified, and existing coverage addresses edge cases around /v1 normalization and service unavailability. Overall, the tests observe the structure, organization, and scenario coverage required by the custom check.
  • Description Check (✅ Passed): Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): The title succinctly captures the two main changes in this PR by highlighting both the stripping of the "/v1" suffix from the base URL and the enhanced error handling in the OllamaComponent, making it clear and specific without unnecessary detail.

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@github-actions github-actions Bot added enhancement New feature or request and removed enhancement New feature or request labels Oct 3, 2025
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (2)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py (2)

346-367: Consider asserting ChatOllama is not called.

The test validates ChatOpenAI is invoked with the correct parameters when /v1 is present. However, it doesn't verify that ChatOllama is NOT called in this path.

Consider adding a patch for ChatOllama to verify it's not invoked:

 @patch("lfx.components.ollama.ollama.ChatOpenAI")
-def test_build_model_uses_chatopenai_with_v1_suffix(self, mock_chat_openai):
+@patch("lfx.components.ollama.ollama.ChatOllama")
+def test_build_model_uses_chatopenai_with_v1_suffix(self, mock_chat_ollama, mock_chat_openai):
     """Test that /v1 suffix triggers ChatOpenAI instead of ChatOllama."""
     mock_model = MagicMock()
     mock_chat_openai.return_value = mock_model
 
     component = ChatOllamaComponent()
     component.base_url = "http://localhost:11434/v1"
     component.model_name = "llama3.1"
     component.temperature = 0.7
     component.mirostat = "Disabled"
 
     model = component.build_model()
 
     # Verify ChatOpenAI was called with /v1 URL
     mock_chat_openai.assert_called_once()
+    mock_chat_ollama.assert_not_called()
     call_args = mock_chat_openai.call_args[1]
     assert call_args["base_url"] == "http://localhost:11434/v1"
     assert call_args["api_key"] == "ollama"  # pragma: allowlist secret
     assert call_args["model"] == "llama3.1"
     assert call_args["temperature"] == 0.7
     assert model == mock_model

369-386: Make the base_url assertion more specific.

The test verifies /v1/ triggers ChatOpenAI, but the assertion only checks that /v1 is present in the URL. Consider asserting the exact expected URL for clarity.

     # Verify ChatOpenAI was called
     mock_chat_openai.assert_called_once()
     call_args = mock_chat_openai.call_args[1]
-    assert "/v1" in call_args["base_url"]
+    assert call_args["base_url"] == "http://localhost:11434/v1/"
     assert model == mock_model
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between aa96d74 and 5f50f38.

📒 Files selected for processing (2)
  • src/backend/tests/unit/components/languagemodels/test_chatollama_component.py (2 hunks)
  • src/lfx/src/lfx/components/ollama/ollama.py (4 hunks)
🧰 Additional context used
📓 Path-based instructions (8)
src/backend/tests/unit/components/**/*.py

📄 CodeRabbit inference engine (.cursor/rules/backend_development.mdc)

src/backend/tests/unit/components/**/*.py: Mirror the component directory structure for unit tests in src/backend/tests/unit/components/
Use ComponentTestBaseWithClient or ComponentTestBaseWithoutClient as base classes for component unit tests
Provide file_names_mapping for backward compatibility in component tests
Create comprehensive unit tests for all new components

Files:

  • src/backend/tests/unit/components/languagemodels/test_chatollama_component.py
{src/backend/**/*.py,tests/**/*.py,Makefile}

📄 CodeRabbit inference engine (.cursor/rules/backend_development.mdc)

{src/backend/**/*.py,tests/**/*.py,Makefile}: Run make format_backend to format Python code before linting or committing changes
Run make lint to perform linting checks on backend Python code

Files:

  • src/backend/tests/unit/components/languagemodels/test_chatollama_component.py
src/backend/tests/unit/**/*.py

📄 CodeRabbit inference engine (.cursor/rules/backend_development.mdc)

Test component integration within flows using create_flow, build_flow, and get_build_events utilities

Files:

  • src/backend/tests/unit/components/languagemodels/test_chatollama_component.py
src/backend/tests/**/*.py

📄 CodeRabbit inference engine (.cursor/rules/testing.mdc)

src/backend/tests/**/*.py: Unit tests for backend code must be located in the 'src/backend/tests/' directory, with component tests organized by component subdirectory under 'src/backend/tests/unit/components/'.
Test files should use the same filename as the component under test, with an appropriate test prefix or suffix (e.g., 'my_component.py' → 'test_my_component.py').
Use the 'client' fixture (an async httpx.AsyncClient) for API tests in backend Python tests, as defined in 'src/backend/tests/conftest.py'.
When writing component tests, inherit from the appropriate base class in 'src/backend/tests/base.py' (ComponentTestBase, ComponentTestBaseWithClient, or ComponentTestBaseWithoutClient) and provide the required fixtures: 'component_class', 'default_kwargs', and 'file_names_mapping'.
Each test in backend Python test files should have a clear docstring explaining its purpose, and complex setups or mocks should be well-commented.
Test both sync and async code paths in backend Python tests, using '@pytest.mark.asyncio' for async tests.
Mock external dependencies appropriately in backend Python tests to isolate unit tests from external services.
Test error handling and edge cases in backend Python tests, including using 'pytest.raises' and asserting error messages.
Validate input/output behavior and test component initialization and configuration in backend Python tests.
Use the 'no_blockbuster' pytest marker to skip the blockbuster plugin in tests when necessary.
Be aware of ContextVar propagation in async tests; test both direct event loop execution and 'asyncio.to_thread' scenarios to ensure proper context isolation.
Test error handling by mocking internal functions using monkeypatch in backend Python tests.
Test resource cleanup in backend Python tests by using fixtures that ensure proper initialization and cleanup of resources.
Test timeout and performance constraints in backend Python tests using 'asyncio.wait_for' and timing assertions.
Test Langflow's Messag...

Files:

  • src/backend/tests/unit/components/languagemodels/test_chatollama_component.py
src/backend/**/*component*.py

📄 CodeRabbit inference engine (.cursor/rules/icons.mdc)

In your Python component class, set the icon attribute to a string matching the frontend icon mapping exactly (case-sensitive).

Files:

  • src/backend/tests/unit/components/languagemodels/test_chatollama_component.py
src/backend/**/components/**/*.py

📄 CodeRabbit inference engine (.cursor/rules/icons.mdc)

In your Python component class, set the icon attribute to a string matching the frontend icon mapping exactly (case-sensitive).

Files:

  • src/backend/tests/unit/components/languagemodels/test_chatollama_component.py
{**/test_*.py,**/*.test.{ts,tsx}}

📄 CodeRabbit inference engine (coderabbit-custom-pre-merge-checks-unique-id-file-non-traceable-F7F2B60C-1728-4C9A-8889-4F2235E186CA.txt)

{**/test_*.py,**/*.test.{ts,tsx}}: Warn when mocks replace testing real behavior/interactions in test files
Suggest using real objects or simpler test doubles when mocks become excessive
Ensure mocks are reserved for external dependencies, not core application logic, in tests
Test files should have descriptive test names that explain what is being validated
Organize tests logically with proper setup and teardown
Include edge cases and error conditions for comprehensive test coverage
Cover both positive and negative scenarios where appropriate
Tests must cover the main functionality being implemented
Avoid smoke-only tests; assert meaningful behavior and outcomes
Follow project testing frameworks (pytest for backend, Playwright for frontend)
For API endpoints, verify both success and error responses in tests

Files:

  • src/backend/tests/unit/components/languagemodels/test_chatollama_component.py
**/test_*.py

📄 CodeRabbit inference engine (coderabbit-custom-pre-merge-checks-unique-id-file-non-traceable-F7F2B60C-1728-4C9A-8889-4F2235E186CA.txt)

**/test_*.py: Backend test files must follow naming convention test_*.py
Backend tests should use proper pytest structure
For async Python code, use proper pytest async testing patterns (e.g., pytest-asyncio)

Files:

  • src/backend/tests/unit/components/languagemodels/test_chatollama_component.py
🧬 Code graph analysis (2)
src/lfx/src/lfx/components/ollama/ollama.py (2)
src/lfx/src/lfx/components/ollama/ollama_embeddings.py (1)
  • is_valid_ollama_url (97-103)
src/lfx/src/lfx/utils/util.py (1)
  • transform_localhost_url (118-160)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py (2)
src/lfx/src/lfx/components/ollama/ollama.py (4)
  • update_build_config (233-270)
  • build_model (145-217)
  • is_valid_ollama_url (219-231)
  • get_models (272-334)
src/lfx/src/lfx/components/ollama/ollama_embeddings.py (2)
  • update_build_config (58-68)
  • is_valid_ollama_url (97-103)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
  • GitHub Check: Lint Backend / Run Mypy (3.11)
  • GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 4
  • GitHub Check: Run Backend Tests / Integration Tests - Python 3.10
  • GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 5
  • GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 2
  • GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 3
  • GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 1
  • GitHub Check: Test Starter Templates
  • GitHub Check: Update Starter Projects
🔇 Additional comments (8)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py (3)

172-174: LGTM!

The test correctly validates the revised error handling where update_build_config returns empty model options instead of raising when Ollama is unreachable.


388-403: LGTM!

The test properly validates that is_valid_ollama_url strips the /v1 suffix before making the validation request, ensuring the correct Ollama API endpoint is used.


405-454: LGTM!

Both tests effectively validate the implementation changes:

  1. test_get_models_with_v1_suffix (lines 405-432): Confirms that get_models strips the /v1 suffix and calls the correct Ollama API endpoint.

  2. test_update_build_config_no_error_when_ollama_not_running (lines 434-454): Validates the revised error handling where connection failures result in empty model options rather than raising an exception.

src/lfx/src/lfx/components/ollama/ollama.py (5)

7-7: LGTM!

The import of ChatOpenAI is correctly placed and necessary to support the OpenAI-compatible API path when /v1 is present in the base URL.


36-36: LGTM!

The updated field info clearly documents the /v1 suffix requirement for OpenAI-compatible mode and provides a helpful example URL.


164-171: Verify parameter handling in OpenAI-compatible mode.

The OpenAI-compatible path passes only a subset of parameters to ChatOpenAI (temperature, max_tokens, verbose). Many Ollama-specific parameters like top_p, top_k, repeat_penalty, mirostat, etc. are not passed and will be silently ignored.

Consider:

  1. Documenting in the field info text which parameters are supported in OpenAI-compatible mode
  2. Logging a warning when OpenAI mode is used with unsupported parameters, or
  3. Verifying that the OpenAI API endpoint supports these parameters and passing them if so

This will help users understand parameter limitations when using the /v1 endpoint.


252-259: LGTM!

The simplified error handling in update_build_config provides graceful degradation by setting empty model options when Ollama is unreachable, rather than raising an error. This improves the user experience by allowing the component to be configured even when Ollama is temporarily unavailable.


289-293: LGTM!

The URL normalization correctly strips the /v1 suffix before querying the Ollama API, ensuring that model listing works consistently whether or not the user includes /v1 in their base URL.

Comment on lines +162 to +177
if transformed_base_url and "/v1" in transformed_base_url:
    try:
        output = ChatOpenAI(
            base_url=transformed_base_url,
            api_key="ollama",  # pragma: allowlist secret
            model=self.model_name,
            temperature=self.temperature or 0.1,
            max_tokens=self.num_ctx or None,
            verbose=self.verbose,
        )
    except Exception as e:
        msg = (
            "Unable to connect to Ollama's OpenAI-compatible API. "
            "Please verify the base URL includes '/v1', ensure Ollama is running, and try again."
        )
        raise ValueError(msg) from e
Contributor


⚠️ Potential issue | 🟠 Major

The /v1 check may match unintended URLs.

The condition "/v1" in transformed_base_url could match /v1 anywhere in the URL (e.g., in a hostname or path segment like http://example.com/api/v1/ollama), not just as the intended suffix.

Consider using a more precise check that matches /v1 only at the end of the path:

-if transformed_base_url and "/v1" in transformed_base_url:
+if transformed_base_url and (transformed_base_url.rstrip("/").endswith("/v1")):

This aligns with the normalization logic used in is_valid_ollama_url and get_models where /v1 is explicitly checked as a suffix.
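The difference between the substring check and the suggested suffix check can be illustrated with a small hypothetical helper (not part of the PR):

```python
def is_openai_compatible_url(base_url: str) -> bool:
    """True only when /v1 is the final path segment, not anywhere in the URL."""
    return base_url.rstrip("/").endswith("/v1")


# A plain substring check ("/v1" in url) would wrongly match
# "http://example.com/api/v1/ollama"; the suffix check accepts only
# URLs that actually end in /v1 (or /v1/).
```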


Comment on lines 219 to 231
async def is_valid_ollama_url(self, url: str) -> bool:
    try:
        async with httpx.AsyncClient() as client:
            url = transform_localhost_url(url)
            if not url:
                return False
            # Strip /v1 suffix if present, as Ollama API endpoints are at root level
            base_url = url.rstrip("/").removesuffix("/v1")
            if not base_url.endswith("/"):
                base_url = base_url + "/"
            return (await client.get(urljoin(base_url, "api/tags"))).status_code == HTTP_STATUS_OK
    except httpx.RequestError:
        return False
Contributor


🛠️ Refactor suggestion | 🟠 Major

🧩 Analysis chain

Verify consistency with ollama_embeddings.py.

The implementation correctly strips the /v1 suffix before validating the Ollama URL. However, ollama_embeddings.py (lines 96-102) has a similar is_valid_ollama_url method that does NOT strip /v1.

Run this script to check if ollama_embeddings.py should also strip /v1 for consistency:

If embeddings also support OpenAI-compatible mode, the URL normalization should be consistent across both components.


🏁 Script executed:

#!/bin/bash
# Check if ollama_embeddings should also handle /v1 suffix

# Display the is_valid_ollama_url implementation in ollama_embeddings.py
echo "=== ollama_embeddings.py is_valid_ollama_url ==="
rg -A 10 "async def is_valid_ollama_url" src/lfx/src/lfx/components/ollama/ollama_embeddings.py

# Check if there are any references to /v1 in ollama_embeddings
echo ""
echo "=== References to /v1 in ollama_embeddings.py ==="
rg "/v1" src/lfx/src/lfx/components/ollama/ollama_embeddings.py || echo "No references to /v1 found"

Length of output: 454


Normalize /v1 suffix in ollama_embeddings.py

In src/lfx/src/lfx/components/ollama/ollama_embeddings.py, update is_valid_ollama_url to strip "/v1" and enforce a trailing slash—mirroring the logic in ollama.py—so both modules validate the same endpoint.
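One way to keep both components consistent would be a shared helper along these lines (a sketch only; the name and placement are assumptions, not the PR's actual code):

```python
def normalize_base_url(url: str) -> str:
    """Strip a trailing /v1 segment and ensure a single trailing slash.

    Intended to be shared by ollama.py and ollama_embeddings.py so both
    validate the same native Ollama endpoint.
    """
    base = url.rstrip("/").removesuffix("/v1")
    return base if base.endswith("/") else base + "/"
```

Both components could then build request URLs with `urljoin(normalize_base_url(url), "api/tags")` and stay in lockstep on /v1 handling.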

@codecov

codecov Bot commented Oct 3, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 24.17%. Comparing base (eb216e0) to head (90a9520).
⚠️ Report is 4 commits behind head on main.

❌ Your project status has failed because the head coverage (47.08%) is below the target coverage (55.00%). You can increase the head coverage or adjust the target coverage.

Additional details and impacted files

Impacted file tree graph

@@           Coverage Diff           @@
##             main   #10112   +/-   ##
=======================================
  Coverage   24.16%   24.17%           
=======================================
  Files        1086     1086           
  Lines       40067    40043   -24     
  Branches     5546     5541    -5     
=======================================
- Hits         9682     9679    -3     
+ Misses      30214    30193   -21     
  Partials      171      171           
Flag: backend | Coverage: 47.08% <ø> (ø)

Flags with carried forward coverage won't be shown.
see 8 files with indirect coverage changes

🚀 New features to boost your workflow:
  • ❄️ Test Analytics: Detect flaky tests, report on failures, and find test suite problems.
  • 📦 JS Bundle Analysis: Save yourself from yourself by tracking and limiting bundle sizes in JS merges.

…utomatically stripped in build_model method

📝 (ollama.py): Update base_url info to clarify default value and improve readability of the code
@github-actions github-actions Bot added enhancement New feature or request and removed enhancement New feature or request labels Oct 6, 2025
"Detected '/v1' suffix in base URL. The Ollama component uses the native Ollama API, "
"not the OpenAI-compatible API. The '/v1' suffix has been automatically removed. "
"If you want to use the OpenAI-compatible API, please use the OpenAI component instead. "
"Learn more at https://docs.ollama.com/openai#openai-compatibility"
Contributor


We should point to our docs

Collaborator


@Cristhianzl In this case I'd use Ollama's docs, because it pertains to Ollama's OpenAI compatible API, and not ours

@github-actions github-actions Bot added the lgtm This PR has been approved by a maintainer label Oct 6, 2025
@Cristhianzl Cristhianzl changed the title feat: Add OpenAI-compatible API support with /v1 suffix handling fix: Strip /v1 suffix from base URL and improve error handling Oct 6, 2025
@Cristhianzl Cristhianzl enabled auto-merge October 6, 2025 21:03
@github-actions github-actions Bot added enhancement New feature or request bug Something isn't working and removed enhancement New feature or request bug Something isn't working labels Oct 6, 2025
@ogabrielluiz ogabrielluiz changed the title fix: Strip /v1 suffix from base URL and improve error handling fix: Strip /v1 suffix from base URL and improve error handling in OllamaComponent Oct 6, 2025
@github-actions github-actions Bot added bug Something isn't working and removed bug Something isn't working labels Oct 6, 2025
@github-actions github-actions Bot removed the bug Something isn't working label Oct 6, 2025
@github-actions github-actions Bot added the bug Something isn't working label Oct 6, 2025
@sonarqubecloud

sonarqubecloud Bot commented Oct 6, 2025


Labels

bug Something isn't working lgtm This PR has been approved by a maintainer
