
Feat: add structured output schema to ollama#10271

Merged
erichare merged 9 commits into langflow-ai:main from HzaRashid:feat/ollama-output-schema on Oct 30, 2025

Conversation

@HzaRashid (Collaborator) commented Oct 14, 2025

Addresses #7122 and #9472.

  • The format field of the ChatOllama component is now a TableInput in which a schema can be defined. Data and DataFrame outputs are added to provide the same support as the Structured Output component (a sketch of the resulting usage follows this list).
  • Bumps langchain-ollama version to enable support for structured output schemas.
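
For illustration, here is a minimal sketch of what the schema-based format field enables, assuming a local Ollama server; the model name and schema are placeholders, not taken from the PR:

```python
# Hedged sketch: pass a JSON schema dict as ChatOllama's `format` so the model
# is constrained to emit JSON matching the schema (supported in langchain-ollama 0.3.x).
from langchain_ollama import ChatOllama

schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
    },
    "required": ["name", "age"],
}

llm = ChatOllama(model="llama3.1", format=schema)  # model name is illustrative
message = llm.invoke("Describe a person named Alice who is 30 years old.")
print(message.content)  # expected: a JSON string conforming to the schema
```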

@coderabbitai (Bot, Contributor) commented Oct 14, 2025

Important

Review skipped

Auto incremental reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

Walkthrough

Updates the langchain-ollama dependency, adds unit tests for ChatOllamaComponent format handling, and extends ChatOllamaComponent to parse JSON schema formats, expose multiple outputs, and add helpers for JSON parsing and Data/DataFrame construction.

Changes

Dependency bump (pyproject.toml)
  Update langchain-ollama from 0.2.1 to 0.3.10.

Unit tests for format handling (src/backend/tests/unit/components/languagemodels/test_chatollama_component.py)
  Add tests verifying that build_model forwards format options: the string "json", JSON schema dicts, complex schemas, and Pydantic-generated schemas; patching is used to assert that ChatOllama's initializer receives the parsed format.

ChatOllamaComponent enhancements (src/lfx/src/lfx/components/ollama/ollama.py)
  Replace the simple format input with NestedDictInput; add outputs text_output, model_output, data_output, and dataframe_output; implement _parse_format_field (sketched below), _parse_json_response, build_data_output, and build_dataframe_output; route build_model through format parsing; and add error handling for connectivity and JSON parsing.
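
For orientation, here is the format-normalization helper, reconstructed as a standalone function from the diffs quoted in the review comments below (in the PR it is a method on ChatOllamaComponent):

```python
import json
from contextlib import suppress
from typing import Any


def parse_format_field(format_value: Any) -> Any:
    """Normalize the component's format input before handing it to ChatOllama."""
    # None and empty dict pass through; langchain handles them (and empty strings).
    if format_value is None or (isinstance(format_value, dict) and not format_value):
        return format_value
    formatted_value = format_value
    if isinstance(format_value, str):
        # A serialized schema becomes a dict; plain strings like "json" stay strings.
        with suppress(json.JSONDecodeError):
            formatted_value = json.loads(format_value)
    return formatted_value
```

This keeps backward compatibility: format="json" still works, while both dict schemas and JSON-string schemas reach ChatOllama as parsed dicts.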

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor User
  participant Component as ChatOllamaComponent
  participant Ollama as ChatOllama
  participant Parser as JSON Parser
  participant DataOut as Data/DataFrame

  User->>Component: run(inputs, format)
  rect rgba(200,230,255,0.3)
    note right of Component: Build model
    Component->>Component: _parse_format_field(format)
    Component->>Ollama: new ChatOllama(model, format=parsed)
  end

  User->>Component: request text/model/data/dataframe output
  alt text/model
    Component->>Ollama: invoke(...)
    Ollama-->>Component: response (text / raw)
    Component-->>User: text_output / model_output
  else data/dataframe
    Component->>Ollama: invoke(...)
    Ollama-->>Component: response (expects JSON)
    Component->>Parser: _parse_json_response()
    Parser-->>Component: parsed JSON or error
    opt data
      Component->>DataOut: build_data_output(parsed)
      DataOut-->>Component: Data
      Component-->>User: data_output
    end
    opt dataframe
      Component->>DataOut: build_dataframe_output(parsed)
      DataOut-->>Component: DataFrame
      Component-->>User: dataframe_output
    end
  end

  rect rgba(255,220,220,0.35)
    note over Component,Ollama: Error paths
    Component-->>User: Invalid JSON / connectivity error
  end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Suggested reviewers

  • edwinjosechittilappilly
  • ogabrielluiz

Pre-merge checks and finishing touches

❌ Failed checks (1 error, 3 warnings)

Test Coverage For New Implementations: ❌ Error
  The pull request adds comprehensive unit tests for schema parsing in ChatOllamaComponent's build_model method and follows the project's naming conventions, but it does not include tests for the newly added _parse_json_response, build_data_output, or build_dataframe_output methods, nor any integration tests to validate end-to-end structured output behavior.
  Resolution: Add unit tests covering the _parse_json_response, build_data_output, and build_dataframe_output methods, and include at least one integration test that demonstrates the full structured output flow using a simulated Ollama response to verify Data and DataFrame outputs.

Test Quality And Coverage: ⚠️ Warning
  The existing tests exercise only the format-field forwarding in build_model; they do not invoke or validate the newly added JSON parsing and output construction methods (_parse_json_response, build_data_output, build_dataframe_output) or their error handling paths. Consequently, the main functionality of structured output parsing and registry wiring remains untested, leaving coverage gaps despite correct pytest patterns.
  Resolution: Add unit tests that directly invoke _parse_json_response with both valid and invalid JSON payloads to verify error messaging, plus tests for build_data_output and build_dataframe_output covering dicts, lists of dicts, primitives, and failure scenarios, and ensure the outputs registry produces the correct Data and DataFrame objects.

Test File Naming And Structure: ⚠️ Warning
  The backend test file is correctly named and uses pytest fixtures with descriptive test function names covering success, failure, and edge cases, but it includes an unmarked "integration" test in the unit test directory instead of placing it in an integration tests directory or marking it appropriately.
  Resolution: Move the integration-style test into a dedicated integration tests directory, or add a proper pytest marker (e.g., @pytest.mark.integration) and update the project configuration so integration tests are separated from unit tests.

Excessive Mock Usage Warning: ⚠️ Warning
  The unit tests in test_chatollama_component.py rely heavily on mocking internal classes and external dependencies (31 patch decorators and 17 MagicMock instances), which obscures actual behavior and reduces confidence in the logic's real-world operation.
  Resolution: Reduce the extensive use of patches and MagicMock by using real or lightweight test doubles for ChatOllama and HTTP clients, move external interaction verification to integration tests, and keep unit tests focused on the component's logic. Add at least one integration test that exercises serialization, JSON parsing, and network behavior end to end, so core behaviors are validated against actual components rather than mocks.
✅ Passed checks (3 passed)

Docstring Coverage: ✅ Passed
  Docstring coverage is 90.91%, which meets the required threshold of 80.00%.

Title Check: ✅ Passed
  The title succinctly describes the primary enhancement of adding a structured output schema to the Ollama component and aligns directly with the pull request's objectives of supporting JSON schema-based outputs. It is concise, specific, and uses clear language; scanning the title alone gives a teammate an accurate understanding of the change's focus.

Description Check: ✅ Passed
  Check skipped: CodeRabbit's high-level summary is enabled.


Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai (Bot) left a comment

Actionable comments posted: 1

🧹 Nitpick comments (5)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py (1)

566-607: Add tests for new outputs (data/dataframe)

Now that the component exposes data_output and dataframe_output, add unit tests to cover:

  • JSON dict/list -> Data
  • list[dict]/dict/primitive -> DataFrame shapes and error paths

I can draft these tests mirroring your existing mocking style. Want me to add them?
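
For concreteness, one such test might look like the sketch below; the method names follow the walkthrough above, while the import path, async signatures, and Data attribute access are assumptions:

```python
from unittest.mock import patch

import pytest

from lfx.components.ollama.ollama import ChatOllamaComponent  # import path assumed


@pytest.mark.asyncio
async def test_build_data_output_from_dict_json():
    """A dict JSON response from the model should surface as a single Data object."""
    component = ChatOllamaComponent()
    # Patch the JSON helper so no model or network call is made; since
    # _parse_json_response is async, patch substitutes an AsyncMock automatically.
    with patch.object(
        ChatOllamaComponent, "_parse_json_response", return_value={"name": "Alice"}
    ):
        data = await component.build_data_output()
    assert data.data == {"name": "Alice"}
```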

src/lfx/src/lfx/components/ollama/ollama.py (4)

161-166: New outputs are useful; ensure async methods are handled and covered by tests

text_response/build_model are sync/async-compatible in Output; please add tests for data_output and dataframe_output.


236-247: Add HTTP timeouts to avoid hanging validations

AsyncClient without timeouts can hang on network issues. Pass an explicit timeout.

Apply:

-            async with httpx.AsyncClient() as client:
+            async with httpx.AsyncClient(timeout=self.timeout or 10.0) as client:

Based on learnings


318-341: Add HTTP timeouts for model discovery calls

Use explicit timeouts for get/show calls.

Apply:

-            async with httpx.AsyncClient() as client:
+            async with httpx.AsyncClient(timeout=self.timeout or 10.0) as client:

Based on learnings


389-401: Optional: make JSON parsing more robust against code fences

Models sometimes wrap JSON in Markdown code fences (```json ... ```). Strip the fences before json.loads to reduce false negatives.

Apply:

-        text = message.text if hasattr(message, "text") else str(message)
+        text = message.text if hasattr(message, "text") else str(message)
+        # Strip Markdown code fences if present
+        if text.strip().startswith("```"):
+            fenced = text.strip().strip("`")
+            # Remove optional 'json' language tag
+            if fenced.lower().startswith("json"):
+                fenced = fenced[4:].lstrip()
+            text = fenced
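
As a self-contained alternative to the inline suggestion above, a regex-based fence stripper might look like this (a sketch, not the committed code):

```python
import json
import re


def strip_code_fences(text: str) -> str:
    """Remove a Markdown code fence (optionally tagged ```json) wrapping a payload."""
    pattern = r"^```(?:json)?\s*(.*?)\s*```$"
    match = re.match(pattern, text.strip(), flags=re.DOTALL | re.IGNORECASE)
    return match.group(1) if match else text


assert json.loads(strip_code_fences('```json\n{"a": 1}\n```')) == {"a": 1}
assert json.loads(strip_code_fences('{"a": 1}')) == {"a": 1}  # unfenced input passes through
```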
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 354a6ce and 54a3777.

⛔ Files ignored due to path filters (1)
  • uv.lock is excluded by !**/*.lock
📒 Files selected for processing (3)
  • pyproject.toml (1 hunks)
  • src/backend/tests/unit/components/languagemodels/test_chatollama_component.py (1 hunks)
  • src/lfx/src/lfx/components/ollama/ollama.py (6 hunks)
🧰 Additional context used
📓 Path-based instructions (8)
src/backend/tests/unit/components/**/*.py

📄 CodeRabbit inference engine (.cursor/rules/backend_development.mdc)

src/backend/tests/unit/components/**/*.py: Mirror the component directory structure for unit tests in src/backend/tests/unit/components/
Use ComponentTestBaseWithClient or ComponentTestBaseWithoutClient as base classes for component unit tests
Provide file_names_mapping for backward compatibility in component tests
Create comprehensive unit tests for all new components

Files:

  • src/backend/tests/unit/components/languagemodels/test_chatollama_component.py
{src/backend/**/*.py,tests/**/*.py,Makefile}

📄 CodeRabbit inference engine (.cursor/rules/backend_development.mdc)

{src/backend/**/*.py,tests/**/*.py,Makefile}: Run make format_backend to format Python code before linting or committing changes
Run make lint to perform linting checks on backend Python code

Files:

  • src/backend/tests/unit/components/languagemodels/test_chatollama_component.py
src/backend/tests/unit/**/*.py

📄 CodeRabbit inference engine (.cursor/rules/backend_development.mdc)

Test component integration within flows using create_flow, build_flow, and get_build_events utilities

Files:

  • src/backend/tests/unit/components/languagemodels/test_chatollama_component.py
src/backend/tests/**/*.py

📄 CodeRabbit inference engine (.cursor/rules/testing.mdc)

src/backend/tests/**/*.py: Unit tests for backend code must be located in the 'src/backend/tests/' directory, with component tests organized by component subdirectory under 'src/backend/tests/unit/components/'.
Test files should use the same filename as the component under test, with an appropriate test prefix or suffix (e.g., 'my_component.py' → 'test_my_component.py').
Use the 'client' fixture (an async httpx.AsyncClient) for API tests in backend Python tests, as defined in 'src/backend/tests/conftest.py'.
When writing component tests, inherit from the appropriate base class in 'src/backend/tests/base.py' (ComponentTestBase, ComponentTestBaseWithClient, or ComponentTestBaseWithoutClient) and provide the required fixtures: 'component_class', 'default_kwargs', and 'file_names_mapping'.
Each test in backend Python test files should have a clear docstring explaining its purpose, and complex setups or mocks should be well-commented.
Test both sync and async code paths in backend Python tests, using '@pytest.mark.asyncio' for async tests.
Mock external dependencies appropriately in backend Python tests to isolate unit tests from external services.
Test error handling and edge cases in backend Python tests, including using 'pytest.raises' and asserting error messages.
Validate input/output behavior and test component initialization and configuration in backend Python tests.
Use the 'no_blockbuster' pytest marker to skip the blockbuster plugin in tests when necessary.
Be aware of ContextVar propagation in async tests; test both direct event loop execution and 'asyncio.to_thread' scenarios to ensure proper context isolation.
Test error handling by mocking internal functions using monkeypatch in backend Python tests.
Test resource cleanup in backend Python tests by using fixtures that ensure proper initialization and cleanup of resources.
Test timeout and performance constraints in backend Python tests using 'asyncio.wait_for' and timing assertions.
Test Langflow's Messag...

Files:

  • src/backend/tests/unit/components/languagemodels/test_chatollama_component.py
src/backend/**/*component*.py

📄 CodeRabbit inference engine (.cursor/rules/icons.mdc)

In your Python component class, set the icon attribute to a string matching the frontend icon mapping exactly (case-sensitive).

Files:

  • src/backend/tests/unit/components/languagemodels/test_chatollama_component.py
src/backend/**/components/**/*.py

📄 CodeRabbit inference engine (.cursor/rules/icons.mdc)

In your Python component class, set the icon attribute to a string matching the frontend icon mapping exactly (case-sensitive).

Files:

  • src/backend/tests/unit/components/languagemodels/test_chatollama_component.py
**/@(test_*.py|*.test.@(ts|tsx))

📄 CodeRabbit inference engine (coderabbit-custom-pre-merge-checks-unique-id-file-non-traceable-F7F2B60C-1728-4C9A-8889-4F2235E186CA.txt)

**/@(test_*.py|*.test.@(ts|tsx)): Check if tests have too many mock objects that obscure what's actually being tested
Warn when mocks are used instead of testing real behavior and interactions
Suggest using real objects or test doubles when mocks become excessive
Ensure mocks are used appropriately for external dependencies, not core logic
Recommend integration tests when unit tests become overly mocked
Test files should have descriptive test function names explaining what is tested
Tests should be organized logically with proper setup and teardown
Include edge cases and error conditions for comprehensive coverage
Verify tests cover both positive and negative scenarios where appropriate
Tests should cover the main functionality being implemented
Ensure tests are not just smoke tests but actually validate behavior
For API endpoints, verify both success and error response testing

Files:

  • src/backend/tests/unit/components/languagemodels/test_chatollama_component.py
**/test_*.py

📄 CodeRabbit inference engine (coderabbit-custom-pre-merge-checks-unique-id-file-non-traceable-F7F2B60C-1728-4C9A-8889-4F2235E186CA.txt)

**/test_*.py: Check that backend test files follow the naming convention test_*.py
Backend tests should be named test_*.py and follow proper pytest structure
For async Python code, ensure proper async testing patterns (pytest) are used
Backend tests should follow pytest conventions; frontend tests should use Playwright

Files:

  • src/backend/tests/unit/components/languagemodels/test_chatollama_component.py
🧬 Code graph analysis (2)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py (2)
src/backend/tests/base.py (2)
  • component_class (37-40)
  • default_kwargs (43-45)
src/lfx/src/lfx/components/ollama/ollama.py (1)
  • build_model (168-234)
src/lfx/src/lfx/components/ollama/ollama.py (3)
src/lfx/src/lfx/inputs/inputs.py (4)
  • DictInput (450-462)
  • MessageTextInput (206-257)
  • NestedDictInput (429-447)
  • SliderInput (642-643)
src/lfx/src/lfx/schema/data.py (1)
  • Data (26-288)
src/lfx/src/lfx/base/models/model.py (1)
  • text_response (86-92)
🪛 GitHub Actions: Ruff Style Check
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py

[error] 466-466: Ruff check failed: W293 Blank line contains whitespace. Command: 'uv run --only-dev ruff check --output-format=github .'

🪛 GitHub Check: Ruff Style Check (3.13)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py

[failure] 556-556: Ruff (W293)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py:556:1: W293 Blank line contains whitespace


[failure] 553-553: Ruff (W293)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py:553:1: W293 Blank line contains whitespace


[failure] 549-549: Ruff (W293)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py:549:1: W293 Blank line contains whitespace


[failure] 513-513: Ruff (W293)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py:513:1: W293 Blank line contains whitespace


[failure] 498-498: Ruff (W293)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py:498:1: W293 Blank line contains whitespace


[failure] 495-495: Ruff (W293)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py:495:1: W293 Blank line contains whitespace


[failure] 491-491: Ruff (W293)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py:491:1: W293 Blank line contains whitespace


[failure] 481-481: Ruff (W293)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py:481:1: W293 Blank line contains whitespace


[failure] 470-470: Ruff (W293)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py:470:1: W293 Blank line contains whitespace


[failure] 466-466: Ruff (W293)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py:466:1: W293 Blank line contains whitespace

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (11)
  • GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 5
  • GitHub Check: Lint Backend / Run Mypy (3.11)
  • GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 1
  • GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 4
  • GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 3
  • GitHub Check: Lint Backend / Run Mypy (3.12)
  • GitHub Check: Lint Backend / Run Mypy (3.13)
  • GitHub Check: Run Backend Tests / Integration Tests - Python 3.10
  • GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 2
  • GitHub Check: Test Starter Templates
  • GitHub Check: Update Starter Projects
🔇 Additional comments (5)
pyproject.toml (1)

96-96: Bump to langchain-ollama==0.3.10 looks good

Enables schema-based structured output per PR goal. Please update the lockfile and run CI with the new pin.

Run:

  • uv lock --upgrade
  • uv sync
  • make format_backend && make lint
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py (1)

461-607: Good coverage for format forwarding

Tests validate format="json", schema dicts, complex schema, and Pydantic model_json_schema; mocking targets are correct.

src/lfx/src/lfx/components/ollama/ollama.py (3)

68-73: Switch to NestedDictInput for format is appropriate

Enables JSON schema entry in UI while preserving string/back-compat via parser.


201-201: Using _parse_format_field in build_model is correct

Handles "json" string, JSON strings, and dict schemas uniformly.


434-451: Incorrect — positional DataFrame calls are valid here

lfx.schema.dataframe.DataFrame defines __init__(self, data=..., ...) and populates via pd.DataFrame(data, **kwargs), so calls like DataFrame(), DataFrame(parsed), and DataFrame([parsed]) are supported (see __init__ in src/lfx/src/lfx/schema/dataframe.py).

Likely an incorrect or invalid review comment.
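
To make the shapes concrete, plain pandas shows the same behavior the reviewer describes, since lfx's DataFrame subclasses pd.DataFrame:

```python
import pandas as pd

# list[dict] -> one row per dict
print(pd.DataFrame([{"name": "Alice", "age": 30}, {"name": "Bob", "age": 25}]))
# single dict -> wrap it in a list for a one-row frame, mirroring DataFrame([parsed])
print(pd.DataFrame([{"name": "Alice", "age": 30}]))
# no data -> empty frame, mirroring DataFrame()
print(pd.DataFrame())
```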

Comment thread on src/backend/tests/unit/components/languagemodels/test_chatollama_component.py (Outdated)
@coderabbitai (Bot) left a comment

Actionable comments posted: 1

🧹 Nitpick comments (8)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py (2)

461-592: Add tests for new Data and DataFrame outputs.

Given the new outputs and JSON parsing helpers, add unit tests for:

  • data_output: dict, list[dict], list[scalar], and scalar JSON responses.
  • dataframe_output: list[dict] → rows; dict → single-row; empty list → empty DF; invalid list items → raises ValueError.

You can patch ChatOllamaComponent._parse_json_response to return canned JSON and avoid network/model calls.


477-498: Cover JSON schema provided as a JSON string.

Add a test where kwargs["format"] = json.dumps(json_schema) and assert ChatOllama receives the parsed dict. This validates _parse_format_field behavior for NestedDictInput string inputs.
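
A sketch of the suggested test; the patch target and fixture names mirror the style described above and are assumptions:

```python
import json
from unittest.mock import MagicMock, patch


def test_build_model_parses_json_string_schema(component_class, default_kwargs):
    """A schema passed as a JSON string should reach ChatOllama as a parsed dict."""
    json_schema = {"type": "object", "properties": {"answer": {"type": "string"}}}
    component = component_class(**{**default_kwargs, "format": json.dumps(json_schema)})

    # Patch where ChatOllama is looked up (target path assumed).
    with patch("lfx.components.ollama.ollama.ChatOllama") as mock_chat:
        mock_chat.return_value = MagicMock()
        component.build_model()
        assert mock_chat.call_args.kwargs["format"] == json_schema
```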

src/lfx/src/lfx/components/ollama/ollama.py (6)

68-73: Default format should not be an empty dict; prefer None.

NestedDictInput defaults to {}, which results in sending format={} to ChatOllama when the user didn’t set it. Prefer None to avoid ambiguous/invalid schema.

Apply this diff:

-        NestedDictInput(
+        NestedDictInput(
             name="format",
             display_name="Format",
             info="Specify the format of the output (options: JSON schema).",
             advanced=True,
+            value=None,
         ),

161-166: Nice addition of multiple outputs. Consider avoiding duplicate LLM calls.

If a flow requests both Text and Data/DataFrame outputs, _parse_json_response() re-invokes text_response(), triggering another model call. Cache/reuse the last message (self.status) to prevent duplicate calls in the same run.

Apply this diff in _parse_json_response (see separate comment) to reuse cached status.


353-376: Harden format normalization: treat empty dict/blank strings as None.

Avoid sending format={} or a blank string; defer to ChatOllama defaults by using None.

Apply this diff:

     def _parse_format_field(self, format_value: Any) -> Any:
@@
-        if format_value is None or (isinstance(format_value, dict) and not format_value):
-            return format_value  # langchain handles None and empty dict (and empty string)
+        # Normalize empty/blank to None
+        if format_value is None:
+            return None
+        if isinstance(format_value, dict) and not format_value:
+            return None
+        if isinstance(format_value, str) and not format_value.strip():
+            return None
         formatted_value = format_value
         if isinstance(format_value, str):  # parse as json if string
             with suppress(json.JSONDecodeError):
                 formatted_value = json.loads(format_value)
 
         return formatted_value

377-401: Make JSON parsing more robust and avoid duplicate model calls.

  • Reuse cached self.status if available to avoid double calls in the same run.
  • Strip Markdown code fences around JSON blocks; many models return output wrapped in ```json ... ``` fences.

Optionally, consider json_repair fallback for near‑JSON responses (dependency already present).

Apply this diff:

-    async def _parse_json_response(self) -> Any:
+    async def _parse_json_response(self) -> Any:
         """Parse the JSON response from the model.
@@
-        message = await self.text_response()
-        text = message.text if hasattr(message, "text") else str(message)
+        # Reuse existing status if present to avoid duplicate calls
+        message = getattr(self, "status", None) or await self.text_response()
+        text = message.text if hasattr(message, "text") else str(message)
 
-        if not text:
+        if not text:
             msg = "No response from model"
             raise ValueError(msg)
 
         try:
-            return json.loads(text)
+            raw = text.strip()
+            # Strip markdown code fences if present
+            if raw.startswith("```"):
+                # remove leading ```[json]? and trailing ```
+                raw = raw.lstrip("`").lstrip("`").lstrip("`")
+                # if language tag was present (e.g., json), drop the first line
+                raw = "\n".join(raw.splitlines()[1:]) if raw.lower().startswith("json") else raw
+                raw = raw.rsplit("```", 1)[0].strip()
+            return json.loads(raw)
         except json.JSONDecodeError as e:
             msg = f"Invalid JSON response. Ensure model supports JSON output. Error: {e}"
             raise ValueError(msg) from e

236-249: Add a short timeout to validation requests.

To avoid hanging on unreachable hosts, set a small timeout on AsyncClient (e.g., 5s).

Apply this diff:

-        try:
-            async with httpx.AsyncClient() as client:
+        try:
+            async with httpx.AsyncClient(timeout=5.0) as client:

289-351: Minor: set timeout and simplify non-coroutine JSON handling.

httpx.Response.json() is synchronous; the coroutine checks can be removed. Also set a timeout to avoid hanging on slow endpoints.

Apply this diff:

-        try:
+        try:
             # Strip /v1 suffix if present, as Ollama API endpoints are at root level
@@
-            async with httpx.AsyncClient() as client:
+            async with httpx.AsyncClient(timeout=10.0) as client:
                 # Fetch available models
                 tags_response = await client.get(tags_url)
                 tags_response.raise_for_status()
-                models = tags_response.json()
-                if asyncio.iscoroutine(models):
-                    models = await models
+                models = tags_response.json()
                 await logger.adebug(f"Available models: {models}")
@@
-                    json_data = show_response.json()
-                    if asyncio.iscoroutine(json_data):
-                        json_data = await json_data
+                    json_data = show_response.json()
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 354a6ce and cceddd2.

⛔ Files ignored due to path filters (1)
  • uv.lock is excluded by !**/*.lock
📒 Files selected for processing (3)
  • pyproject.toml (1 hunks)
  • src/backend/tests/unit/components/languagemodels/test_chatollama_component.py (1 hunks)
  • src/lfx/src/lfx/components/ollama/ollama.py (6 hunks)
🧬 Code graph analysis (2)
src/lfx/src/lfx/components/ollama/ollama.py (3)
src/lfx/src/lfx/inputs/inputs.py (3)
  • DictInput (450-462)
  • MessageTextInput (206-257)
  • NestedDictInput (429-447)
src/lfx/src/lfx/schema/data.py (1)
  • Data (26-288)
src/lfx/src/lfx/base/models/model.py (1)
  • text_response (86-92)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py (2)
src/backend/tests/base.py (2)
  • component_class (37-40)
  • default_kwargs (43-45)
src/lfx/src/lfx/components/ollama/ollama.py (1)
  • build_model (168-234)
🪛 GitHub Actions: Ruff Style Check
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py

[error] 564-564: Ruff (D415): First line should end with a period, question mark, or exclamation point.

🪛 GitHub Check: Ruff Style Check (3.13)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py

[failure] 564-564: Ruff (D415)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py:564:13: D415 First line should end with a period, question mark, or exclamation point

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (12)
  • GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 5
  • GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 4
  • GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 3
  • GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 1
  • GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 2
  • GitHub Check: Run Backend Tests / Integration Tests - Python 3.10
  • GitHub Check: Lint Backend / Run Mypy (3.12)
  • GitHub Check: Lint Backend / Run Mypy (3.10)
  • GitHub Check: Lint Backend / Run Mypy (3.11)
  • GitHub Check: Lint Backend / Run Mypy (3.13)
  • GitHub Check: Test Starter Templates
  • GitHub Check: update-index
🔇 Additional comments (3)
src/lfx/src/lfx/components/ollama/ollama.py (2)

201-201: Good: normalize format via _parse_format_field.

This ensures “json”, JSON strings, and dict schemas are passed correctly to ChatOllama.


423-451: No change needed for DataFrame instantiation
The DataFrame class is a pandas subclass whose constructor signature begins with data; using a positional argument (DataFrame(parsed)) correctly assigns to data and matches its documented examples. The suggested switch to data=… is purely stylistic and not required.

Likely an incorrect or invalid review comment.

pyproject.toml (1)

96-96: Approve bump: JSON Schema supported; update lockfile

langchain-ollama v0.3.10 supports passing a JSON Schema dict to ChatOllama's format/with_structured_output APIs (see docs). Update the uv lockfile after merging.



@HzaRashid force-pushed the feat/ollama-output-schema branch 4 times, most recently from decfedb to ddb7548 on October 14, 2025 20:56
@HzaRashid marked this pull request as draft on October 14, 2025 23:23
@HzaRashid force-pushed the feat/ollama-output-schema branch 3 times, most recently from a3efa26 to 14d960c on October 15, 2025 00:02

@HzaRashid changed the title from "Feat: ollama output schema" to "Feat: add structured output schema to ollama" on Oct 15, 2025
@HzaRashid force-pushed the feat/ollama-output-schema branch 2 times, most recently from f0b68c7 to 068391e on October 24, 2025 21:59
@HzaRashid marked this pull request as ready for review on October 24, 2025 22:08
@HzaRashid force-pushed the feat/ollama-output-schema branch from 036e79e to 7ad81fa on October 24, 2025 22:13
@HimavarshaVS force-pushed the feat/ollama-output-schema branch from 70e6f56 to a9ec6ca on October 30, 2025 10:12
@HzaRashid force-pushed the feat/ollama-output-schema branch from a9ec6ca to 2cdedc2 on October 30, 2025 17:47
HzaRashid and others added 2 commits October 30, 2025 17:50
feat: nested json input for output format, and added data and dataframe outputs for chatollama

add unit and integration tests

chore: update component index

update component index

patch parse logic

ruff style checks

update component index

draft modfication to s.o.c; fallback to langchain directly if trustcall fails

feat: add input table for output format in ChatOllama component

chore: update component index

[autofix.ci] apply automated fixes

[autofix.ci] apply automated fixes (attempt 2/3)

[autofix.ci] apply automated fixes (attempt 3/3)

chore: update component index
@HzaRashid force-pushed the feat/ollama-output-schema branch from 2cdedc2 to 4e298a7 on October 30, 2025 17:52
@erichare added this pull request to the merge queue on Oct 30, 2025
@github-merge-queue (Bot) removed this pull request from the merge queue due to a conflict with the base branch on Oct 30, 2025
@carlosrcoelho added the "enhancement" (New feature or request) label and removed the "draft" label on Oct 30, 2025
@erichare enabled auto-merge on October 30, 2025 19:49
@erichare added this pull request to the merge queue on Oct 30, 2025
Merged via the queue into langflow-ai:main with commit d8bd70d Oct 30, 2025
46 of 47 checks passed
korenLazar pushed a commit to kiran-kate/langflow that referenced this pull request Nov 13, 2025
* feat: add output schema for ChatOllama component

feat: nested json input for output format, and added data and dataframe outputs for chatollama

add unit and integration tests

chore: update component index

update component index

patch parse logic

ruff style checks

update component index

draft modfication to s.o.c; fallback to langchain directly if trustcall fails

feat: add input table for output format in ChatOllama component

chore: update component index

[autofix.ci] apply automated fixes

[autofix.ci] apply automated fixes (attempt 2/3)

[autofix.ci] apply automated fixes (attempt 3/3)

chore: update component index

* chore: update component index

* ruff (ollama.py imports)

* [autofix.ci] apply automated fixes

* [autofix.ci] apply automated fixes (attempt 2/3)

* [autofix.ci] apply automated fixes

* [autofix.ci] apply automated fixes (attempt 2/3)

* [autofix.ci] apply automated fixes (attempt 3/3)

---------

Co-authored-by: Hamza Rashid <hzarashid@users.noreply.github.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Eric Hare <ericrhare@gmail.com>

Labels: enhancement (New feature or request)

4 participants