Feat: add structured output schema to ollama #10271
Conversation

Important: Review skipped. Auto incremental reviews are disabled on this repository.

Walkthrough: Updates the langchain-ollama dependency, adds unit tests for ChatOllamaComponent format handling, and extends ChatOllamaComponent to parse JSON schema formats, expose multiple outputs, and add helpers for JSON parsing and Data/DataFrame construction.
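At its core, the change relies on passing a JSON Schema dict straight through to ChatOllama's `format` parameter. A minimal sketch of that underlying mechanism (the model name and schema here are illustrative, not taken from this PR):

```python
from langchain_ollama import ChatOllama

# Illustrative JSON Schema; the component's new format field forwards
# a dict like this to the model.
schema = {
    "type": "object",
    "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
    "required": ["name", "age"],
}

llm = ChatOllama(model="llama3.1", format=schema)
response = llm.invoke("Describe a fictional person as JSON.")
# response.content should now be JSON text conforming to the schema
```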
Sequence Diagram(s)

```mermaid
sequenceDiagram
autonumber
actor User
participant Component as ChatOllamaComponent
participant Ollama as ChatOllama
participant Parser as JSON Parser
participant DataOut as Data/DataFrame
User->>Component: run(inputs, format)
rect rgba(200,230,255,0.3)
note right of Component: Build model
Component->>Component: _parse_format_field(format)
Component->>Ollama: new ChatOllama(model, format=parsed)
end
User->>Component: request text/model/data/dataframe output
alt text/model
Component->>Ollama: invoke(...)
Ollama-->>Component: response (text / raw)
Component-->>User: text_output / model_output
else data/dataframe
Component->>Ollama: invoke(...)
Ollama-->>Component: response (expects JSON)
Component->>Parser: _parse_json_response()
Parser-->>Component: parsed JSON or error
opt data
Component->>DataOut: build_data_output(parsed)
DataOut-->>Component: Data
Component-->>User: data_output
end
opt dataframe
Component->>DataOut: build_dataframe_output(parsed)
DataOut-->>Component: DataFrame
Component-->>User: dataframe_output
end
end
rect rgba(255,220,220,0.35)
note over Component,Ollama: Error paths
Component-->>User: Invalid JSON / connectivity error
end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Pre-merge checks and finishing touches: ❌ Failed checks (1 error, 3 warnings), ✅ Passed checks (3 passed)
Actionable comments posted: 1
🧹 Nitpick comments (5)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py (1)
566-607: Add tests for new outputs (data/dataframe). Now that the component exposes data_output and dataframe_output, add unit tests to cover:
- JSON dict/list -> Data
- list[dict]/dict/primitive -> DataFrame shapes and error paths
I can draft these tests mirroring your existing mocking style. Want me to add them?
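As a rough illustration, a data_output test could look like the following. This is a hedged sketch: the `component_class`/`default_kwargs` fixtures follow the ComponentTestBase pattern cited in the guidelines below, and it assumes `data_output` is async and wraps the parsed JSON in `Data(data=...)`; patching `_parse_json_response` avoids any model or network call.

```python
import pytest
from unittest.mock import AsyncMock, patch


@pytest.mark.asyncio
async def test_data_output_returns_data_from_json_dict(component_class, default_kwargs):
    """data_output should wrap a JSON dict response in a Data object."""
    component = component_class(**default_kwargs)
    # Patch the JSON helper so no model/network call is made.
    with patch.object(component, "_parse_json_response", new=AsyncMock(return_value={"name": "Alice"})):
        result = await component.data_output()
    # Assumes the component builds Data(data=parsed)
    assert result.data == {"name": "Alice"}
```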
src/lfx/src/lfx/components/ollama/ollama.py (4)
161-166: New outputs are useful; ensure async methods are handled and covered by tests. text_response/build_model are sync/async-compatible in Output; please add tests for data_output and dataframe_output.
236-247: Add HTTP timeouts to avoid hanging validations. AsyncClient without timeouts can hang on network issues. Pass an explicit timeout.

Apply:

```diff
- async with httpx.AsyncClient() as client:
+ async with httpx.AsyncClient(timeout=self.timeout or 10.0) as client:
```

Based on learnings.
318-341: Add HTTP timeouts for model discovery calls. Use explicit timeouts for get/show calls.

Apply:

```diff
- async with httpx.AsyncClient() as client:
+ async with httpx.AsyncClient(timeout=self.timeout or 10.0) as client:
```

Based on learnings.
389-401: Optional: make JSON parsing more robust against code fences. Models sometimes wrap JSON in Markdown code fences (e.g., a json-tagged fence). Strip fences before json.loads to reduce false negatives.

Apply:

````diff
- text = message.text if hasattr(message, "text") else str(message)
+ text = message.text if hasattr(message, "text") else str(message)
+ # Strip Markdown code fences if present
+ if text.strip().startswith("```"):
+     fenced = text.strip().strip("`")
+     # Remove optional 'json' language tag
+     if fenced.lower().startswith("json"):
+         fenced = fenced[4:].lstrip()
+     text = fenced
````
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (3)
- `pyproject.toml` (1 hunks)
- `src/backend/tests/unit/components/languagemodels/test_chatollama_component.py` (1 hunks)
- `src/lfx/src/lfx/components/ollama/ollama.py` (6 hunks)
🧰 Additional context used
📓 Path-based instructions (8)
src/backend/tests/unit/components/**/*.py
📄 CodeRabbit inference engine (.cursor/rules/backend_development.mdc)
src/backend/tests/unit/components/**/*.py: Mirror the component directory structure for unit tests in src/backend/tests/unit/components/
Use ComponentTestBaseWithClient or ComponentTestBaseWithoutClient as base classes for component unit tests
Provide file_names_mapping for backward compatibility in component tests
Create comprehensive unit tests for all new components
Files:
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py
{src/backend/**/*.py,tests/**/*.py,Makefile}
📄 CodeRabbit inference engine (.cursor/rules/backend_development.mdc)
{src/backend/**/*.py,tests/**/*.py,Makefile}: Run make format_backend to format Python code before linting or committing changes
Run make lint to perform linting checks on backend Python code
Files:
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py
src/backend/tests/unit/**/*.py
📄 CodeRabbit inference engine (.cursor/rules/backend_development.mdc)
Test component integration within flows using create_flow, build_flow, and get_build_events utilities
Files:
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py
src/backend/tests/**/*.py
📄 CodeRabbit inference engine (.cursor/rules/testing.mdc)
src/backend/tests/**/*.py: Unit tests for backend code must be located in the 'src/backend/tests/' directory, with component tests organized by component subdirectory under 'src/backend/tests/unit/components/'.
Test files should use the same filename as the component under test, with an appropriate test prefix or suffix (e.g., 'my_component.py' → 'test_my_component.py').
Use the 'client' fixture (an async httpx.AsyncClient) for API tests in backend Python tests, as defined in 'src/backend/tests/conftest.py'.
When writing component tests, inherit from the appropriate base class in 'src/backend/tests/base.py' (ComponentTestBase, ComponentTestBaseWithClient, or ComponentTestBaseWithoutClient) and provide the required fixtures: 'component_class', 'default_kwargs', and 'file_names_mapping'.
Each test in backend Python test files should have a clear docstring explaining its purpose, and complex setups or mocks should be well-commented.
Test both sync and async code paths in backend Python tests, using '@pytest.mark.asyncio' for async tests.
Mock external dependencies appropriately in backend Python tests to isolate unit tests from external services.
Test error handling and edge cases in backend Python tests, including using 'pytest.raises' and asserting error messages.
Validate input/output behavior and test component initialization and configuration in backend Python tests.
Use the 'no_blockbuster' pytest marker to skip the blockbuster plugin in tests when necessary.
Be aware of ContextVar propagation in async tests; test both direct event loop execution and 'asyncio.to_thread' scenarios to ensure proper context isolation.
Test error handling by mocking internal functions using monkeypatch in backend Python tests.
Test resource cleanup in backend Python tests by using fixtures that ensure proper initialization and cleanup of resources.
Test timeout and performance constraints in backend Python tests using 'asyncio.wait_for' and timing assertions.
Test Langflow's Messag...
Files:
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py
src/backend/**/*component*.py
📄 CodeRabbit inference engine (.cursor/rules/icons.mdc)
In your Python component class, set the `icon` attribute to a string matching the frontend icon mapping exactly (case-sensitive).
Files:
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py
src/backend/**/components/**/*.py
📄 CodeRabbit inference engine (.cursor/rules/icons.mdc)
In your Python component class, set the `icon` attribute to a string matching the frontend icon mapping exactly (case-sensitive).
Files:
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py
**/@(test_*.py|*.test.@(ts|tsx))
📄 CodeRabbit inference engine (coderabbit-custom-pre-merge-checks-unique-id-file-non-traceable-F7F2B60C-1728-4C9A-8889-4F2235E186CA.txt)
**/@(test_*.py|*.test.@(ts|tsx)): Check if tests have too many mock objects that obscure what's actually being tested
Warn when mocks are used instead of testing real behavior and interactions
Suggest using real objects or test doubles when mocks become excessive
Ensure mocks are used appropriately for external dependencies, not core logic
Recommend integration tests when unit tests become overly mocked
Test files should have descriptive test function names explaining what is tested
Tests should be organized logically with proper setup and teardown
Include edge cases and error conditions for comprehensive coverage
Verify tests cover both positive and negative scenarios where appropriate
Tests should cover the main functionality being implemented
Ensure tests are not just smoke tests but actually validate behavior
For API endpoints, verify both success and error response testing
Files:
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py
**/test_*.py
📄 CodeRabbit inference engine (coderabbit-custom-pre-merge-checks-unique-id-file-non-traceable-F7F2B60C-1728-4C9A-8889-4F2235E186CA.txt)
**/test_*.py: Check that backend test files follow naming convention: test_*.py
Backend tests should be named test_*.py and follow proper pytest structure
For async Python code, ensure proper async testing patterns (pytest) are used
Backend tests should follow pytest conventions; frontend tests should use Playwright
Files:
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py
🧬 Code graph analysis (2)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py (2)
src/backend/tests/base.py (2)
- `component_class` (37-40)
- `default_kwargs` (43-45)

src/lfx/src/lfx/components/ollama/ollama.py (1)

- `build_model` (168-234)
src/lfx/src/lfx/components/ollama/ollama.py (3)
src/lfx/src/lfx/inputs/inputs.py (4)
- `DictInput` (450-462)
- `MessageTextInput` (206-257)
- `NestedDictInput` (429-447)
- `SliderInput` (642-643)

src/lfx/src/lfx/schema/data.py (1)

- `Data` (26-288)

src/lfx/src/lfx/base/models/model.py (1)

- `text_response` (86-92)
🪛 GitHub Actions: Ruff Style Check
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py
[error] 466-466: Ruff check failed: W293 Blank line contains whitespace. Command: 'uv run --only-dev ruff check --output-format=github .'
🪛 GitHub Check: Ruff Style Check (3.13)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py
[failure] 556-556: Ruff (W293)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py:556:1: W293 Blank line contains whitespace
[failure] 553-553: Ruff (W293)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py:553:1: W293 Blank line contains whitespace
[failure] 549-549: Ruff (W293)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py:549:1: W293 Blank line contains whitespace
[failure] 513-513: Ruff (W293)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py:513:1: W293 Blank line contains whitespace
[failure] 498-498: Ruff (W293)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py:498:1: W293 Blank line contains whitespace
[failure] 495-495: Ruff (W293)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py:495:1: W293 Blank line contains whitespace
[failure] 491-491: Ruff (W293)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py:491:1: W293 Blank line contains whitespace
[failure] 481-481: Ruff (W293)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py:481:1: W293 Blank line contains whitespace
[failure] 470-470: Ruff (W293)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py:470:1: W293 Blank line contains whitespace
[failure] 466-466: Ruff (W293)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py:466:1: W293 Blank line contains whitespace
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (11)
- GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 5
- GitHub Check: Lint Backend / Run Mypy (3.11)
- GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 1
- GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 4
- GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 3
- GitHub Check: Lint Backend / Run Mypy (3.12)
- GitHub Check: Lint Backend / Run Mypy (3.13)
- GitHub Check: Run Backend Tests / Integration Tests - Python 3.10
- GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 2
- GitHub Check: Test Starter Templates
- GitHub Check: Update Starter Projects
🔇 Additional comments (5)
pyproject.toml (1)
96-96: Bump to langchain-ollama==0.3.10 looks good. Enables schema-based structured output per PR goal. Please update the lockfile and run CI with the new pin.
Run:
- uv lock --upgrade
- uv sync
- make format_backend && make lint
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py (1)
461-607: Good coverage for format forwarding. Tests validate format="json", schema dicts, complex schema, and Pydantic model_json_schema; mocking targets are correct.
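For context, the Pydantic-derived case boils down to something like this (a hedged sketch; `Person` is a hypothetical example model, while `model_json_schema()` is the standard Pydantic v2 method):

```python
from pydantic import BaseModel


class Person(BaseModel):  # hypothetical example model
    name: str
    age: int


# model_json_schema() yields a plain JSON Schema dict that the
# component's format field can forward to ChatOllama.
schema = Person.model_json_schema()
```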
src/lfx/src/lfx/components/ollama/ollama.py (3)
68-73: Switch to NestedDictInput for format is appropriate. Enables JSON schema entry in UI while preserving string/back-compat via parser.
201-201: Using _parse_format_field in build_model is correct. Handles "json" string, JSON strings, and dict schemas uniformly.
434-451: Incorrect — positional DataFrame calls are valid here. lfx.schema.dataframe.DataFrame defines `__init__(self, data=..., ...)` and populates via `pd.DataFrame(data, **kwargs)`, so calls like `DataFrame()`, `DataFrame(parsed)` and `DataFrame([parsed])` are supported (see src/lfx/src/lfx/schema/dataframe.py `__init__`).
Likely an incorrect or invalid review comment.
Actionable comments posted: 1
🧹 Nitpick comments (8)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py (2)
461-592: Add tests for new Data and DataFrame outputs. Given the new outputs and JSON parsing helpers, add unit tests for:
- data_output: dict, list[dict], list[scalar], and scalar JSON responses.
- dataframe_output: list[dict] → rows; dict → single-row; empty list → empty DF; invalid list items → raises ValueError.
You can patch `ChatOllamaComponent._parse_json_response` to return canned JSON and avoid network/model calls.
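A dataframe_output test could then follow the same pattern. This is a hedged sketch: it assumes `dataframe_output` is async and that the returned object behaves like a pandas frame (the review above confirms the DataFrame class is a pandas subclass).

```python
import pytest
from unittest.mock import AsyncMock, patch


@pytest.mark.asyncio
async def test_dataframe_output_builds_rows_from_list_of_dicts(component_class, default_kwargs):
    """A JSON list of dicts should become one DataFrame row per dict."""
    component = component_class(**default_kwargs)
    rows = [{"name": "Alice"}, {"name": "Bob"}]
    # Patch the JSON helper so no model/network call is made.
    with patch.object(component, "_parse_json_response", new=AsyncMock(return_value=rows)):
        df = await component.dataframe_output()
    assert len(df) == 2
    assert list(df["name"]) == ["Alice", "Bob"]
```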
477-498: Cover JSON schema provided as a JSON string. Add a test where `kwargs["format"] = json.dumps(json_schema)` and assert ChatOllama receives the parsed dict. This validates `_parse_format_field` behavior for NestedDictInput string inputs.
src/lfx/src/lfx/components/ollama/ollama.py (6)

68-73: Default `format` should not be an empty dict; prefer None. NestedDictInput defaults to `{}`, which results in sending `format={}` to ChatOllama when the user didn't set it. Prefer `None` to avoid an ambiguous/invalid schema.

Apply this diff:

```diff
- NestedDictInput(
+ NestedDictInput(
      name="format",
      display_name="Format",
      info="Specify the format of the output (options: JSON schema).",
      advanced=True,
+     value=None,
  ),
```
161-166: Nice addition of multiple outputs. Consider avoiding duplicate LLM calls. If a flow requests both Text and Data/DataFrame outputs, `_parse_json_response()` re-invokes `text_response()`, triggering another model call. Cache/reuse the last message (`self.status`) to prevent duplicate calls in the same run.

Apply this diff in `_parse_json_response` (see separate comment) to reuse cached status.
353-376: Harden `format` normalization: treat empty dict/blank strings as None. Avoid sending `format={}` or a blank string; defer to ChatOllama defaults by using `None`.

Apply this diff:

```diff
 def _parse_format_field(self, format_value: Any) -> Any:
 @@
-    if format_value is None or (isinstance(format_value, dict) and not format_value):
-        return format_value  # langchain handles None and empty dict (and empty string)
+    # Normalize empty/blank to None
+    if format_value is None:
+        return None
+    if isinstance(format_value, dict) and not format_value:
+        return None
+    if isinstance(format_value, str) and not format_value.strip():
+        return None
     formatted_value = format_value
     if isinstance(format_value, str):
         # parse as json if string
         with suppress(json.JSONDecodeError):
             formatted_value = json.loads(format_value)
     return formatted_value
```
377-401: Make JSON parsing more robust and avoid duplicate model calls.

- Reuse cached `self.status` if available to avoid double calls in the same run.
- Strip markdown code fences around JSON blocks; many models return JSON wrapped in a json-tagged fence.
- Optionally, consider a `json_repair` fallback for near-JSON responses (dependency already present).

Apply this diff:

````diff
-    async def _parse_json_response(self) -> Any:
+    async def _parse_json_response(self) -> Any:
         """Parse the JSON response from the model.
 @@
-        message = await self.text_response()
-        text = message.text if hasattr(message, "text") else str(message)
+        # Reuse existing status if present to avoid duplicate calls
+        message = getattr(self, "status", None) or await self.text_response()
+        text = message.text if hasattr(message, "text") else str(message)
         if not text:
             msg = "No response from model"
             raise ValueError(msg)
         try:
-            return json.loads(text)
+            raw = text.strip()
+            # Strip markdown code fences if present
+            if raw.startswith("```"):
+                # remove leading ```[json]? and trailing ```
+                raw = raw.lstrip("`")
+                # if language tag was present (e.g., json), drop the first line
+                raw = "\n".join(raw.splitlines()[1:]) if raw.lower().startswith("json") else raw
+                raw = raw.rsplit("```", 1)[0].strip()
+            return json.loads(raw)
         except json.JSONDecodeError as e:
             msg = f"Invalid JSON response. Ensure model supports JSON output. Error: {e}"
             raise ValueError(msg) from e
````
236-249: Add a short timeout to validation requests. To avoid hanging on unreachable hosts, set a small timeout on AsyncClient (e.g., 5s).

Apply this diff:

```diff
-    try:
-        async with httpx.AsyncClient() as client:
+    try:
+        async with httpx.AsyncClient(timeout=5.0) as client:
```
289-351: Minor: set timeout and simplify non-coroutine JSON handling. `httpx.Response.json()` is synchronous; the coroutine checks can be removed. Also set a timeout to avoid hanging on slow endpoints.

Apply this diff:

```diff
-    try:
+    try:
         # Strip /v1 suffix if present, as Ollama API endpoints are at root level
 @@
-        async with httpx.AsyncClient() as client:
+        async with httpx.AsyncClient(timeout=10.0) as client:
             # Fetch available models
             tags_response = await client.get(tags_url)
             tags_response.raise_for_status()
-            models = tags_response.json()
-            if asyncio.iscoroutine(models):
-                models = await models
+            models = tags_response.json()
             await logger.adebug(f"Available models: {models}")
 @@
-            json_data = show_response.json()
-            if asyncio.iscoroutine(json_data):
-                json_data = await json_data
+            json_data = show_response.json()
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (3)
- `pyproject.toml` (1 hunks)
- `src/backend/tests/unit/components/languagemodels/test_chatollama_component.py` (1 hunks)
- `src/lfx/src/lfx/components/ollama/ollama.py` (6 hunks)
🧰 Additional context used
📓 Path-based instructions (8): same as in the previous review above.
🧬 Code graph analysis (2)
src/lfx/src/lfx/components/ollama/ollama.py (3)
src/lfx/src/lfx/inputs/inputs.py (3)
- `DictInput` (450-462)
- `MessageTextInput` (206-257)
- `NestedDictInput` (429-447)

src/lfx/src/lfx/schema/data.py (1)

- `Data` (26-288)

src/lfx/src/lfx/base/models/model.py (1)

- `text_response` (86-92)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py (2)
src/backend/tests/base.py (2)
- `component_class` (37-40)
- `default_kwargs` (43-45)

src/lfx/src/lfx/components/ollama/ollama.py (1)

- `build_model` (168-234)
🪛 GitHub Actions: Ruff Style Check
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py
[error] 564-564: Ruff (D415): First line should end with a period, question mark, or exclamation point.
🪛 GitHub Check: Ruff Style Check (3.13)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py
[failure] 564-564: Ruff (D415)
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py:564:13: D415 First line should end with a period, question mark, or exclamation point
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (12)
- GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 5
- GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 4
- GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 3
- GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 1
- GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 2
- GitHub Check: Run Backend Tests / Integration Tests - Python 3.10
- GitHub Check: Lint Backend / Run Mypy (3.12)
- GitHub Check: Lint Backend / Run Mypy (3.10)
- GitHub Check: Lint Backend / Run Mypy (3.11)
- GitHub Check: Lint Backend / Run Mypy (3.13)
- GitHub Check: Test Starter Templates
- GitHub Check: update-index
🔇 Additional comments (3)
src/lfx/src/lfx/components/ollama/ollama.py (2)
201-201: Good: normalize `format` via `_parse_format_field`. This ensures "json", JSON strings, and dict schemas are passed correctly to ChatOllama.
423-451: No change needed for DataFrame instantiation. The `DataFrame` class is a pandas subclass whose constructor signature begins with `data`; using a positional argument (`DataFrame(parsed)`) correctly assigns to `data` and matches its documented examples. The suggested switch to `data=…` is purely stylistic and not required.

Likely an incorrect or invalid review comment.
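As a quick illustration of why positional calls are fine (a hedged sketch; the import path is inferred from the file paths cited above and may differ):

```python
# Assumed import path, based on src/lfx/src/lfx/schema/dataframe.py
from lfx.schema.dataframe import DataFrame

# Positional data argument, exactly as with pd.DataFrame(data)
df_rows = DataFrame([{"a": 1}, {"a": 2}])   # two rows
df_single = DataFrame([{"a": 1}])           # single-row frame
df_empty = DataFrame()                      # empty frame
```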
pyproject.toml (1)
96-96: Approve bump: JSON Schema supported; update lockfile.

langchain-ollama v0.3.10 supports passing a JSON Schema dict to ChatOllama's `format` / `with_structured_output` APIs (see docs). Update the uv lockfile after merging.
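For reference, the second API the comment points at can be exercised roughly like this (a hedged sketch against langchain-ollama, not code from this PR; the model name is illustrative):

```python
from langchain_ollama import ChatOllama

schema = {
    "type": "object",
    "properties": {"city": {"type": "string"}},
    "required": ["city"],
}

# with_structured_output parses the model reply into a dict matching the schema
structured = ChatOllama(model="llama3.1").with_structured_output(schema)
result = structured.invoke("Name one city in France.")  # e.g. {"city": "Paris"}
```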
Force-pushed decfedb to ddb7548
Force-pushed a3efa26 to 14d960c
Force-pushed f0b68c7 to 068391e
Force-pushed 036e79e to 7ad81fa
Force-pushed 70e6f56 to a9ec6ca
Force-pushed a9ec6ca to 2cdedc2
- feat: nested json input for output format, and added data and dataframe outputs for chatollama
- add unit and integration tests
- chore: update component index
- update component index
- patch parse logic
- ruff style checks
- update component index
- draft modification to s.o.c; fallback to langchain directly if trustcall fails
- feat: add input table for output format in ChatOllama component
- chore: update component index
- [autofix.ci] apply automated fixes
- [autofix.ci] apply automated fixes (attempt 2/3)
- [autofix.ci] apply automated fixes (attempt 3/3)
- chore: update component index
Force-pushed 2cdedc2 to 4e298a7
* feat: add output schema for ChatOllama component
  - feat: nested json input for output format, and added data and dataframe outputs for chatollama
  - add unit and integration tests
  - chore: update component index
  - update component index
  - patch parse logic
  - ruff style checks
  - update component index
  - draft modification to s.o.c; fallback to langchain directly if trustcall fails
  - feat: add input table for output format in ChatOllama component
  - chore: update component index
  - [autofix.ci] apply automated fixes
  - [autofix.ci] apply automated fixes (attempt 2/3)
  - [autofix.ci] apply automated fixes (attempt 3/3)
  - chore: update component index
* chore: update component index
* ruff (ollama.py imports)
* [autofix.ci] apply automated fixes
* [autofix.ci] apply automated fixes (attempt 2/3)
* [autofix.ci] apply automated fixes
* [autofix.ci] apply automated fixes (attempt 2/3)
* [autofix.ci] apply automated fixes (attempt 3/3)

---------

Co-authored-by: Hamza Rashid <hzarashid@users.noreply.github.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Eric Hare <ericrhare@gmail.com>



Addresses #7122 and #9472.