
fix: refactor Ollama model fetching to use async and filter capabilities #10550

Merged
lucaseduoli merged 17 commits into fix/ollama_embeddings from fix-ollama-models-loading
Nov 11, 2025

Conversation

@edwinjosechittilappilly
Collaborator

Replaces synchronous requests for Ollama model fetching with asynchronous httpx calls and adds filtering to only include models with 'completion' capability. Updates the LanguageModelComponent to support async validation and fetching of Ollama models, improving reliability and accuracy of available model options.

Introduces several Notion-related components for Langflow, including AddContentToPage, NotionDatabaseProperties, NotionListPages, NotionPageContent, NotionPageCreator, NotionPageUpdate, and NotionSearch. Updates the component index to register these new tools, enabling Notion API interactions such as page creation, content retrieval, database property listing, and more.
@coderabbitai
Contributor

coderabbitai Bot commented Nov 10, 2025

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


@github-actions
Contributor

github-actions Bot commented Nov 10, 2025

Frontend Unit Test Coverage Report

Coverage Summary

| Lines | Statements | Branches | Functions |
| --- | --- | --- | --- |
| 15% | 14.67% (3955/26947) | 7.45% (1533/20552) | 9% (532/5906) |

Unit Test Results

| Tests | Skipped | Failures | Errors | Time |
| --- | --- | --- | --- | --- |
| 1588 | 0 💤 | 0 ❌ | 0 🔥 | 18.309s ⏱️ |

@codecov

codecov Bot commented Nov 10, 2025

Codecov Report

❌ Patch coverage is 14.03509% with 49 lines in your changes missing coverage. Please review.
⚠️ Please upload report for BASE (fix/ollama_embeddings@324efcc). Learn more about missing BASE report.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| src/lfx/src/lfx/base/models/model_utils.py | 14.28% | 48 Missing ⚠️ |
| src/lfx/src/lfx/base/models/model.py | 0.00% | 1 Missing ⚠️ |
Additional details and impacted files

Impacted file tree graph

```diff
@@                   Coverage Diff                    @@
##             fix/ollama_embeddings   #10550   +/-   ##
========================================================
  Coverage                         ?   39.35%
========================================================
  Files                            ?     1476
  Lines                            ?    82224
  Branches                         ?     8991
========================================================
  Hits                             ?    32359
  Misses                           ?    48959
  Partials                         ?      906
```
| Flag | Coverage Δ |
| --- | --- |
| frontend | 13.57% <ø> (?) |
| lfx | 39.28% <14.03%> (?) |

Flags with carried forward coverage won't be shown. Click here to find out more.

| Files with missing lines | Coverage Δ |
| --- | --- |
| ...c/backend/base/langflow/api/v1/openai_responses.py | 10.52% <ø> (ø) |
| ...c/lfx/src/lfx/custom/custom_component/component.py | 58.12% <ø> (ø) |
| src/lfx/src/lfx/base/models/model.py | 20.71% <0.00%> (ø) |
| src/lfx/src/lfx/base/models/model_utils.py | 15.00% <14.28%> (ø) |

Comment thread src/lfx/src/lfx/components/models/language_model.py Outdated
Comment on lines +112 to +115
```python
async with httpx.AsyncClient() as client:
    # Fetch available models
    tags_response = await client.get(url=tags_url)
    tags_response.raise_for_status()
```
Contributor


it seems the AsyncClient is only used on the first line

Collaborator Author


This is tricky when I moved it out of context

Contributor


Get the .json() call inside the context and it should work.

Refactored the LanguageModelComponent code in Basic Prompt Chaining.json to improve formatting and readability. No functional changes were made; only code style and structure were updated.
@edwinjosechittilappilly
Copy link
Copy Markdown
Collaborator Author

@ogabrielluiz I have used a base function in both the embedding and language model components

Moves Ollama model fetching and URL validation logic from the LanguageModelComponent class to shared utility functions in model_utils. Updates references in starter project JSONs to use the new utility functions for improved code reuse and maintainability.
Comment thread src/lfx/src/lfx/base/models/model_utils.py
Adds logic to skip processing text content when message state is 'complete' in OpenAI response streaming, ensuring only content_blocks are processed for tool calls. Updates LCModelComponent to set state to 'partial' during streaming and only update the database with 'complete' state, without sending a new message event, as the frontend already has all streamed content.
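The partial/complete state handling described in that commit can be reduced to a small sketch. All names here are illustrative, not Langflow's actual API: chunks stream with state "partial" and are forwarded as events; the final "complete" update only persists to the database, with no new message event, since the frontend already holds the fully streamed text.

```python
def handle_stream_update(message: dict, sent_events: list, database: dict) -> None:
    """Route a streamed message update according to its state."""
    if message["state"] == "partial":
        # During streaming, forward each partial chunk to the client
        sent_events.append({"type": "message", "text": message["text"]})
    elif message["state"] == "complete":
        # On completion, persist the final text but emit no new event:
        # the frontend already has all of the streamed content
        database[message["id"]] = message["text"]
```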
@lucaseduoli force-pushed the fix-ollama-models-loading branch from 882e26c to 0c2c01e on November 11, 2025 at 13:17
@lucaseduoli force-pushed the fix-ollama-models-loading branch from 868fb78 to 895284f on November 11, 2025 at 13:20
@lucaseduoli force-pushed the fix-ollama-models-loading branch from 2660ccd to dd07a95 on November 11, 2025 at 13:23
Collaborator

@lucaseduoli left a comment


LGTM

@github-actions Bot added the lgtm (This PR has been approved by a maintainer) label Nov 11, 2025
@lucaseduoli changed the title from "ref:Refactor Ollama model fetching to use async and filter capabilities" to "fix: refactor Ollama model fetching to use async and filter capabilities" Nov 11, 2025
@lucaseduoli merged commit e68c61e into fix/ollama_embeddings Nov 11, 2025
30 of 36 checks passed
@lucaseduoli deleted the fix-ollama-models-loading branch November 11, 2025 13:25
@github-actions Bot added the bug (Something isn't working) label Nov 11, 2025
kerinin pushed a commit that referenced this pull request Nov 11, 2025
…API come with type of last complete message (#10558)

(I'm going to force-push to get this in while the nightly is broken - this is needed for QA ASAP)

* Changed embedding model to show api base when switching embedding models

* fix: refactor Ollama model fetching to use async and filter capabilities (#10550)

* Refactor Ollama model fetching to use async and filter capabilities

Replaces synchronous requests for Ollama model fetching with asynchronous httpx calls and adds filtering to only include models with 'completion' capability. Updates the LanguageModelComponent to support async validation and fetching of Ollama models, improving reliability and accuracy of available model options.

* Update language_model.py

* Add Notion integration components

Introduces several Notion-related components for Langflow, including AddContentToPage, NotionDatabaseProperties, NotionListPages, NotionPageContent, NotionPageCreator, NotionPageUpdate, and NotionSearch. Updates the component index to register these new tools, enabling Notion API interactions such as page creation, content retrieval, database property listing, and more.

* [autofix.ci] apply automated fixes

* [autofix.ci] apply automated fixes (attempt 2/3)

* [autofix.ci] apply automated fixes (attempt 3/3)

* Update LanguageModelComponent code in starter projects

Refactored the LanguageModelComponent code in Basic Prompt Chaining.json to improve formatting and readability. No functional changes were made; only code style and structure were updated.

* Refactor Ollama model fetching logic

Moves Ollama model fetching and URL validation logic from the LanguageModelComponent class to shared utility functions in model_utils. Updates references in starter project JSONs to use the new utility functions for improved code reuse and maintainability.

* Update component_index.json

* Update Nvidia Remix.json

* Handle message state for streaming responses

Adds logic to skip processing text content when message state is 'complete' in OpenAI response streaming, ensuring only content_blocks are processed for tool calls. Updates LCModelComponent to set state to 'partial' during streaming and only update the database with 'complete' state, without sending a new message event, as the frontend already has all streamed content.

* fixed test

* Changed embedding model component

* removed default values

* fixed ruff

* fixed starter projects

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Lucas Oliveira <lucas.edu.oli@hotmail.com>

* fixed backend test

* fixed llm test

* fixed update build config ollama test

* fixed test embedding model component

---------

Co-authored-by: Edwin Jose <edwin.jose@datastax.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>

Labels

bug Something isn't working lgtm This PR has been approved by a maintainer

Projects

None yet

Development

Successfully merging this pull request may close these issues.

3 participants