fix: make ollama embeddings work with external URLs, make completion API come with type of last complete message (#10558)
Conversation
…ies (#10550)

* Refactor Ollama model fetching to use async and filter capabilities — Replaces synchronous requests for Ollama model fetching with asynchronous httpx calls and adds filtering to only include models with 'completion' capability. Updates the LanguageModelComponent to support async validation and fetching of Ollama models, improving reliability and accuracy of available model options.
* Update language_model.py
* Add Notion integration components — Introduces several Notion-related components for Langflow, including AddContentToPage, NotionDatabaseProperties, NotionListPages, NotionPageContent, NotionPageCreator, NotionPageUpdate, and NotionSearch. Updates the component index to register these new tools, enabling Notion API interactions such as page creation, content retrieval, database property listing, and more.
* [autofix.ci] apply automated fixes
* [autofix.ci] apply automated fixes (attempt 2/3)
* [autofix.ci] apply automated fixes (attempt 3/3)
* Update LanguageModelComponent code in starter projects — Refactored the LanguageModelComponent code in Basic Prompt Chaining.json to improve formatting and readability. No functional changes were made; only code style and structure were updated.
* Refactor Ollama model fetching logic — Moves Ollama model fetching and URL validation logic from the LanguageModelComponent class to shared utility functions in model_utils. Updates references in starter project JSONs to use the new utility functions for improved code reuse and maintainability.
* Update component_index.json
* Update Nvidia Remix.json
* Handle message state for streaming responses — Adds logic to skip processing text content when message state is 'complete' in OpenAI response streaming, ensuring only content_blocks are processed for tool calls. Updates LCModelComponent to set state to 'partial' during streaming and only update the database with 'complete' state, without sending a new message event, as the frontend already has all streamed content.
* fixed test
* Changed embedding model component
* removed default values
* fixed ruff
* fixed starter projects

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Lucas Oliveira <lucas.edu.oli@hotmail.com>
Walkthrough

This PR introduces async Ollama model discovery and dynamic provider switching across language and embedding model components. It adds utilities to validate Ollama URLs and fetch available models, and converts `update_build_config` to async across the affected components.
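A hedged usage sketch of the two new utilities (signatures and argument names taken from the `model_utils` diffs reviewed below; the wrapper function is illustrative, not part of the PR):

```python
from lfx.base.models.model_utils import get_ollama_models, is_valid_ollama_url

async def discover_completion_models(base_url: str) -> list[str]:
    # Probe GET {base_url}/api/tags first, mirroring the connectivity
    # check in the sequence diagram below.
    if not await is_valid_ollama_url(base_url):
        return []
    # List models, then keep only those reporting the desired capability.
    return await get_ollama_models(
        base_url_value=base_url,
        desired_capability="completion",
        json_models_key="models",
        json_name_key="name",
        json_capabilities_key="capabilities",
    )
```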
Sequence Diagram(s)

```mermaid
sequenceDiagram
participant User
participant UI as LanguageModelComponent UI
participant UpdateBuild as update_build_config (async)
participant URLValidator as is_valid_ollama_url
participant ModelFetcher as get_ollama_models
participant OllamaAPI as Ollama API
User->>UI: Change provider to Ollama
UI->>UpdateBuild: Trigger update with field_name="provider"
UpdateBuild->>URLValidator: Validate ollama_base_url
URLValidator->>OllamaAPI: GET /api/tags (connectivity check)
OllamaAPI-->>URLValidator: Connection status
alt URL Valid
UpdateBuild->>ModelFetcher: Fetch models with desired_capability
ModelFetcher->>OllamaAPI: GET /api/tags (list models)
OllamaAPI-->>ModelFetcher: Model list
ModelFetcher->>OllamaAPI: GET /api/show (for each model)
OllamaAPI-->>ModelFetcher: Model capabilities
ModelFetcher-->>UpdateBuild: Filtered model list
UpdateBuild->>UpdateBuild: Update model_name options
UpdateBuild-->>UI: Return updated config
UI->>User: Display available models
else URL Invalid
UpdateBuild->>UpdateBuild: Clear model_name options
UpdateBuild->>UpdateBuild: Log warning
UpdateBuild-->>UI: Return updated config
UI->>User: Show error/warning
end
```
```mermaid
sequenceDiagram
participant Client
participant OpenAIAPI as OpenAI API
participant Streaming as openai_responses.py
participant EventHandler
Client->>OpenAIAPI: Send message with streaming
OpenAIAPI->>Streaming: Stream delta events
loop For each delta
Streaming->>Streaming: Read data.properties.state
alt state == "complete"
Streaming->>Streaming: Clear text content
Streaming->>Streaming: Still process tool_calls
Streaming->>EventHandler: Emit event without text
else state != "complete"
Streaming->>EventHandler: Emit event with text
end
end
EventHandler-->>Client: Send event to frontend
```
Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~35 minutes

Areas requiring extra attention:
Pre-merge checks and finishing touches

Important: Pre-merge checks failed. Please resolve all errors before merging. Addressing warnings is optional.

❌ Failed checks (1 error, 2 warnings, 2 inconclusive)
✅ Passed checks (2 passed)
Codecov Report

❌ Patch coverage is 12.90%. Your patch status has failed because the patch coverage (12.90%) is below the target coverage (40.00%). You can increase the patch coverage or adjust the target coverage.
Additional details and impacted files

```
@@            Coverage Diff             @@
##             main   #10558       +/-   ##
==========================================
- Coverage   38.90%   31.45%    -7.46%
==========================================
  Files        1477     1328      -149
  Lines       85270    60152   -25118
  Branches    10240     8994     -1246
==========================================
- Hits        33175    18921   -14254
+ Misses      51048    40325   -10723
+ Partials     1047      906      -141
```
Flags with carried forward coverage won't be shown.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (29)
src/lfx/src/lfx/custom/custom_component/component.py (1)
1586-1602: Frontend UI depends on message completion event to exit loading state.

The backend change creates a critical UI bug. The frontend explicitly uses `properties.state` to manage loading indicators:
- During streaming: state="partial" → displays loading spinner with "Steps" header
- After streaming: state should transition away from "partial" → displays "Finished" with checkmark
Since the change skips sending the message event after streaming completes, the frontend will never receive the state update and the loading spinner remains visible indefinitely, even though the message is complete in the database.
Evidence from frontend code:
- `src/frontend/src/stores/messagesStore.ts:22` checks `if (message.properties?.state === "partial")`
- `src/frontend/src/modals/IOModal/components/chatView/chatMessage/chat-message.tsx:304` uses this to set `isLoading={chat.properties?.state === "partial" && isBuilding && lastMessage}`
- `src/frontend/src/components/core/chatComponents/ContentBlockDisplay.tsx:46-51` renders "Steps" when `state === "partial"`, "Finished" otherwise

The database will be updated to state="complete" (lines 1595-1596), but without the frontend event, the UI won't reflect this change. Either:
- Send a lightweight completion event with just the state change (a sketch follows this list), or
- Ensure polling queries refetch the message state immediately
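A minimal sketch of the first option, assuming hypothetical `update_message_in_db` and `send_message_event` helpers; Langflow's actual event plumbing may differ:

```python
async def finalize_streamed_message(message) -> None:
    """Persist the final state and nudge the frontend out of its loading UI."""
    message.properties.state = "complete"
    await update_message_in_db(message)  # hypothetical persistence helper

    # Emit only the state transition, not the already-streamed text,
    # so the "Steps" spinner can flip to "Finished".
    await send_message_event(  # hypothetical event helper
        message_id=message.id,
        payload={"properties": {"state": "complete"}},
    )
```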
src/lfx/src/lfx/components/ollama/ollama.py (1)
352-360: Use the just‑updated URL and lower log noise; current code can fetch against a stale endpoint.
- Validation/fetch uses self.base_url instead of the updated URL, so options may not refresh after base_url edits.
- Logging prints the entire field dict at WARNING level.
Apply:
```diff
@@
-        if field_name in {"model_name", "base_url", "tool_model_enabled"}:
-            logger.warning(f"Fetching Ollama models from updated URL: {build_config['base_url']}")
-
-            if await self.is_valid_ollama_url(self.base_url):
+        if field_name in {"model_name", "base_url", "tool_model_enabled"}:
+            url = (
+                field_value
+                if field_name == "base_url"
+                else build_config.get("base_url", {}).get("value", self.base_url)
+            )
+            logger.info(f"Fetching Ollama models from updated URL: {url}")
+
+            if await self.is_valid_ollama_url(url):
                 tool_model_enabled = build_config["tool_model_enabled"].get("value", False) or self.tool_model_enabled
-                build_config["model_name"]["options"] = await self.get_models(
-                    self.base_url, tool_model_enabled=tool_model_enabled
-                )
+                build_config["model_name"]["options"] = await self.get_models(
+                    url, tool_model_enabled=tool_model_enabled
+                )
             else:
                 build_config["model_name"]["options"] = []
```

src/backend/base/langflow/initial_setup/starter_projects/SEO Keyword Generator.json (1)
1035-1069: Expose IBM watsonx.ai and Ollama in the Provider dropdown to match the code.The code supports IBM/Ollama, but the UI options list only shows OpenAI/Anthropic/Google. Users can’t pick the new providers.
- "options": [ - "OpenAI", - "Anthropic", - "Google" - ], - "options_metadata": [ - { - "icon": "OpenAI" - }, - { - "icon": "Anthropic" - }, - { - "icon": "GoogleGenerativeAI" - } - ], + "options": [ + "OpenAI", + "Anthropic", + "Google", + "IBM watsonx.ai", + "Ollama" + ], + "options_metadata": [ + { "icon": "OpenAI" }, + { "icon": "Anthropic" }, + { "icon": "GoogleGenerativeAI" }, + { "icon": "WatsonxAI" }, + { "icon": "Ollama" } + ],src/backend/base/langflow/initial_setup/starter_projects/Image Sentiment Analysis.json (2)
1301-1336: Add IBM/Ollama to Provider options (LanguageModelComponent-yEikN).

```diff
-          "options": [
-            "OpenAI",
-            "Anthropic",
-            "Google"
-          ],
-          "options_metadata": [
-            {
-              "icon": "OpenAI"
-            },
-            {
-              "icon": "Anthropic"
-            },
-            {
-              "icon": "Google"
-            }
-          ],
+          "options": [
+            "OpenAI",
+            "Anthropic",
+            "Google",
+            "IBM watsonx.ai",
+            "Ollama"
+          ],
+          "options_metadata": [
+            { "icon": "OpenAI" },
+            { "icon": "Anthropic" },
+            { "icon": "Google" },
+            { "icon": "WatsonxAI" },
+            { "icon": "Ollama" }
+          ],
```
1626-1661: Add IBM/Ollama to Provider options (LanguageModelComponent-TSuC2).

```diff
-          "options": [
-            "OpenAI",
-            "Anthropic",
-            "Google"
-          ],
-          "options_metadata": [
-            {
-              "icon": "OpenAI"
-            },
-            {
-              "icon": "Anthropic"
-            },
-            {
-              "icon": "Google"
-            }
-          ],
+          "options": [
+            "OpenAI",
+            "Anthropic",
+            "Google",
+            "IBM watsonx.ai",
+            "Ollama"
+          ],
+          "options_metadata": [
+            { "icon": "OpenAI" },
+            { "icon": "Anthropic" },
+            { "icon": "Google" },
+            { "icon": "WatsonxAI" },
+            { "icon": "Ollama" }
+          ],
```

src/backend/base/langflow/initial_setup/starter_projects/Research Agent.json (3)
2089-2124: Provider dropdown missing IBM/Ollama options; UI can’t select new providers.

The LanguageModelComponent code supports "IBM watsonx.ai" and "Ollama", but this provider field still lists only OpenAI/Anthropic/Google. Add both providers (and icons) to keep UI consistent with the code.
Apply this JSON diff to the provider input in this node’s template:
- "options": [ - "OpenAI", - "Anthropic", - "Google" - ], + "options": [ + "OpenAI", + "Anthropic", + "Google", + "IBM watsonx.ai", + "Ollama" + ], "options_metadata": [ { "icon": "OpenAI" }, { "icon": "Anthropic" }, { - "icon": "GoogleGenerativeAI" - } + "icon": "GoogleGenerativeAI" + }, + { + "icon": "WatsonxAI" + }, + { + "icon": "Ollama" + } ],
2023-2345: Blocking I/O inside async update_build_config; switch to async HTTP for IBM model fetch.

fetch_ibm_models uses requests (blocking) and is called from an async path, which can freeze the event loop. Use httpx.AsyncClient (or run in a thread).
Minimal change using httpx:
```diff
-from typing import Any
-
-import requests
+from typing import Any
+import httpx
@@
-    @staticmethod
-    def fetch_ibm_models(base_url: str) -> list[str]:
+    @staticmethod
+    async def fetch_ibm_models(base_url: str) -> list[str]:
@@
-            response = requests.get(endpoint, params=params, timeout=10)
-            response.raise_for_status()
-            data = response.json()
+            async with httpx.AsyncClient(timeout=10) as client:
+                response = await client.get(endpoint, params=params)
+                response.raise_for_status()
+                data = response.json()
@@
-                models = self.fetch_ibm_models(base_url=field_value)
+                models = await self.fetch_ibm_models(base_url=field_value)
```
2411-2445: Repeat: provider dropdown for the second LanguageModelComponent also missing IBM/Ollama.

Mirror the same options/options_metadata update here so users can select the new providers.
Use the same diff pattern as above to append "IBM watsonx.ai" and "Ollama" and their icons.
src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json (2)
1434-1468: Provider dropdown missing IBM/Ollama options; UI and code diverge.

Add "IBM watsonx.ai" and "Ollama" (and icons) to match the LanguageModelComponent code support.
- "options": [ - "OpenAI", - "Anthropic", - "Google" - ], + "options": [ + "OpenAI", + "Anthropic", + "Google", + "IBM watsonx.ai", + "Ollama" + ], "options_metadata": [ { "icon": "OpenAI" }, { "icon": "Anthropic" }, - { "icon": "GoogleGenerativeAI" } + { "icon": "GoogleGenerativeAI" }, + { "icon": "WatsonxAI" }, + { "icon": "Ollama" } ],
1367-1540: Same blocking I/O concern; adopt async IBM model fetch.

Replace requests with httpx.AsyncClient and await both the fetch and the call in update_build_config, mirroring the earlier diff.
```diff
-import requests
+import httpx
@@
-    @staticmethod
-    def fetch_ibm_models(base_url: str) -> list[str]:
+    @staticmethod
+    async def fetch_ibm_models(base_url: str) -> list[str]:
@@
-            response = requests.get(endpoint, params=params, timeout=10)
+            async with httpx.AsyncClient(timeout=10) as client:
+                response = await client.get(endpoint, params=params)
@@
-                models = self.fetch_ibm_models(base_url=field_value)
+                models = await self.fetch_ibm_models(base_url=field_value)
```

src/lfx/src/lfx/components/models/embedding_model.py (1)
222-279: Handle env‑var Ollama URLs before validating.

When the saved `ollama_base_url` is an env-var token (e.g., `OLLAMA_BASE_URL`), `is_valid_ollama_url` receives that literal string, the HTTP probe fails, and we wipe out the model list. The language-model component already guards against this by treating all-caps-with-underscores values as variable references and falling back to the default URL. We need the same logic here to avoid breaking existing flows that rely on environment variables.

Recommend normalizing to a local `ollama_url` first, short-circuiting visibly invalid tokens, and then using that same value for the fetch.

Apply something like:
```diff
-        if await is_valid_ollama_url(url=self.ollama_base_url):
+        ollama_url = self.ollama_base_url
+        if ollama_url and isinstance(ollama_url, str) and ollama_url.isupper() and "_" in ollama_url:
+            await logger.adebug(
+                f"Config value appears to be a variable reference: {ollama_url}, using default URL instead."
+            )
+            ollama_url = DEFAULT_OLLAMA_URL
+
+        if await is_valid_ollama_url(url=ollama_url):
             try:
                 models = await get_ollama_models(
-                    base_url_value=self.ollama_base_url,
+                    base_url_value=ollama_url,
                     desired_capability=DESIRED_CAPABILITY,
                     json_models_key=JSON_MODELS_KEY,
                     json_name_key=JSON_NAME_KEY,
                     json_capabilities_key=JSON_CAPABILITIES_KEY,
                 )
```

and mirror that adjustment in the `ollama_base_url`/`model` field branches below so they never pass raw env-var tokens into the validator/fetcher.

src/backend/base/langflow/initial_setup/starter_projects/Document Q&A.json (1)
1055-1078: Expose all supported providers in the template.

The starter flow’s stored `provider.options` still lists only "OpenAI", "Anthropic", and "Google", so users can’t pick the newly supported "Ollama" or "IBM watsonx.ai" providers from this template. The Python component now exposes five providers; the starter JSON should mirror that to keep the UI consistent.

Please extend both the `options` and `options_metadata` arrays to include the Ollama and IBM entries so the template reflects the full provider list.

A minimal patch:
- "options": [ - "OpenAI", - "Anthropic", - "Google" - ], - "options_metadata": [ - { - "icon": "OpenAI" - }, - { - "icon": "Anthropic" - }, - { - "icon": "GoogleGenerativeAI" - } - ], + "options": [ + "OpenAI", + "Anthropic", + "Google", + "IBM watsonx.ai", + "Ollama" + ], + "options_metadata": [ + { + "icon": "OpenAI" + }, + { + "icon": "Anthropic" + }, + { + "icon": "GoogleGenerativeAI" + }, + { + "icon": "WatsonxAI" + }, + { + "icon": "Ollama" + } + ],src/backend/base/langflow/initial_setup/starter_projects/Portfolio Website Code Generator.json (1)
1502-1676: Provider options mismatch blocks IBM/Ollama in the UI.

Your LanguageModelComponent code enables "IBM watsonx.ai" and "Ollama", but the template’s provider dropdown still exposes only ["OpenAI","Anthropic","Google"]. Users can’t select IBM/Ollama, so the new async model-discovery paths won’t be reachable.
Please add the missing options in both provider fields.
- "options": [ - "OpenAI", - "Anthropic", - "Google" - ], + "options": [ + "OpenAI", + "Anthropic", + "Google", + "IBM watsonx.ai", + "Ollama" + ],Apply to:
- LanguageModelComponent-2ZwfQ provider field (Lines 1569-1603).
- LanguageModelComponent-DJzkZ provider field (Lines 1896-1930).
Also applies to: 1829-2002, 1569-1603, 1896-1930
src/backend/base/langflow/initial_setup/starter_projects/Twitter Thread Generator.json (2)
1969-2144: Add IBM/Ollama to provider dropdown to expose new functionality.

Code supports IBM/Ollama, but the template field still lists only ["OpenAI","Anthropic","Google"]. Update the provider field options so users can select the new providers.
- "options": [ - "OpenAI", - "Anthropic", - "Google" - ], + "options": [ + "OpenAI", + "Anthropic", + "Google", + "IBM watsonx.ai", + "Ollama" + ],Apply to LanguageModelComponent-URKKz provider field (Lines 2035-2070).
Also applies to: 1995-2034
2000-2031: Update ChatOpenAI parameters to canonical forms for langchain_openai 0.3.23.

The embedded component code in this starter project uses deprecated parameters. Migrate `model_name` → `model` and `openai_api_key` → `api_key`; langchain_openai 0.3.23 treats `model` and `api_key` as the canonical parameter names.
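For illustration, a minimal sketch with the canonical names (the model id and key are placeholders, not prescribed by this PR):

```python
from langchain_openai import ChatOpenAI

# Canonical parameter names in langchain_openai 0.3.x:
# `model` (formerly model_name) and `api_key` (formerly openai_api_key).
llm = ChatOpenAI(
    model="gpt-4o-mini",  # example model id
    api_key="sk-...",     # placeholder key
    temperature=0.1,
    streaming=False,
)
```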
2628-2791: Fix ChatOpenAI args, avoid blocking IBM fetch, and simplify Ollama debug log
- Replace ChatOpenAI model_name/openai_api_key with model/api_key; run IBM model fetch in an executor from the async method to avoid blocking the event loop; collapse the multi-line debug string to one line.
- Apply across embedded Python in starter project templates under src/backend/base/langflow/initial_setup/starter_projects/*.json.
```diff
@@
-            return ChatOpenAI(
-                model_name=model_name,
-                temperature=temperature,
-                streaming=stream,
-                openai_api_key=self.api_key,
-            )
+            return ChatOpenAI(
+                model=model_name,
+                temperature=temperature,
+                streaming=stream,
+                api_key=self.api_key,
+            )
@@
-            try:
-                models = self.fetch_ibm_models(base_url=field_value)
+            import asyncio
+            try:
+                loop = asyncio.get_running_loop()
+                models = await loop.run_in_executor(None, self.fetch_ibm_models, field_value)
                 build_config["model_name"]["options"] = models
                 build_config["model_name"]["value"] = models[0] if models else IBM_WATSONX_DEFAULT_MODELS[0]
                 info_message = f"Updated model options: {len(models)} models found in {field_value}"
                 logger.info(info_message)
@@
-            logger.debug(
-                f"Fetching Ollama models from updated URL: {build_config['ollama_base_url']} \
-                    and value {self.ollama_base_url}",
-            )
+            logger.debug(f"Fetching Ollama models from updated URL: {build_config['ollama_base_url']} and value {self.ollama_base_url}")
```

src/backend/base/langflow/initial_setup/starter_projects/Research Translation Loop.json (1)
1404-1579: Fix ChatOpenAI args, make IBM fetch non-blocking, and simplify logger.debug in Research Translation Loop starter

Replace ChatOpenAI param names to match repo usage (model, api_key), call the blocking fetch_ibm_models via run_in_executor from the async context, and collapse the multi-line logger.debug f-string to a single line.
```diff
@@
-            return ChatOpenAI(
-                model_name=model_name,
-                temperature=temperature,
-                streaming=stream,
-                openai_api_key=self.api_key,
-            )
+            return ChatOpenAI(
+                model=model_name,
+                temperature=temperature,
+                streaming=stream,
+                api_key=self.api_key,
+            )
@@
-            try:
-                models = self.fetch_ibm_models(base_url=field_value)
+            import asyncio
+            try:
+                loop = asyncio.get_running_loop()
+                models = await loop.run_in_executor(None, self.fetch_ibm_models, field_value)
                 build_config["model_name"]["options"] = models
                 build_config["model_name"]["value"] = models[0] if models else IBM_WATSONX_DEFAULT_MODELS[0]
                 info_message = f"Updated model options: {len(models)} models found in {field_value}"
                 logger.info(info_message)
@@
-            logger.debug(
-                f"Fetching Ollama models from updated URL: {build_config['ollama_base_url']} \
-                    and value {self.ollama_base_url}",
-            )
+            logger.debug(f"Fetching Ollama models from updated URL: {build_config['ollama_base_url']} and value {self.ollama_base_url}")
```

src/backend/base/langflow/initial_setup/starter_projects/Vector Store RAG.json (4)
3244-3650: Fix Ollama URL handling on change: use field_value, normalize '/v1', and avoid stale/self reads.

Using self.ollama_base_url inside the "ollama_base_url" branch can validate/fetch with the previous URL. Also, '/v1' normalization is applied in build_model but not during model discovery, causing failures when users paste OpenAI‑compat URLs. Apply normalization and update build_config value; prefer async logger methods consistently.
Apply this diff inside the LanguageModelComponent.update_build_config "elif field_name == 'ollama_base_url'" branch:
- elif field_name == "ollama_base_url": - # Fetch Ollama models when ollama_base_url changes - # Use the field_value directly since this is triggered when the field changes - logger.debug( - f"Fetching Ollama models from updated URL: {build_config['ollama_base_url']} \ - and value {self.ollama_base_url}", - ) - await logger.adebug(f"Fetching Ollama models from updated URL: {self.ollama_base_url}") - if await is_valid_ollama_url(url=self.ollama_base_url): + elif field_name == "ollama_base_url": + # Normalize and use the new value directly + new_url = transform_localhost_url(field_value or "") + if new_url and new_url.rstrip("/").endswith("/v1"): + new_url = new_url.rstrip("/").removesuffix("/v1") + await logger.awarning( + "Detected '/v1' suffix in base URL. The Ollama component uses the native Ollama API, " + "not the OpenAI-compatible API. The '/v1' suffix has been automatically removed. " + "If you want to use the OpenAI-compatible API, please use the OpenAI component instead. " + "Learn more at https://docs.ollama.com/openai#openai-compatibility" + ) + # Reflect normalized value back to the UI + if "ollama_base_url" in build_config: + build_config["ollama_base_url"]["value"] = new_url + + await logger.adebug(f"Fetching Ollama models from updated URL: {new_url}") + if await is_valid_ollama_url(url=new_url): try: models = await get_ollama_models( - base_url_value=self.ollama_base_url, + base_url_value=new_url, desired_capability=DESIRED_CAPABILITY, json_models_key=JSON_MODELS_KEY, json_name_key=JSON_NAME_KEY, json_capabilities_key=JSON_CAPABILITIES_KEY, ) build_config["model_name"]["options"] = models build_config["model_name"]["value"] = models[0] if models else "" info_message = f"Updated model options: {len(models)} models found in {self.ollama_base_url}" await logger.ainfo(info_message) except ValueError: await logger.awarning("Error updating Ollama model options.") build_config["model_name"]["options"] = [] build_config["model_name"]["value"] = "" else: - await logger.awarning(f"Invalid Ollama URL: {self.ollama_base_url}") + await logger.awarning(f"Invalid Ollama URL: {new_url}") build_config["model_name"]["options"] = [] build_config["model_name"]["value"] = ""
3311-3345: Update Provider dropdown template to include IBM watsonx.ai and Ollama.

The code supports IBM/Ollama, but the template options only list OpenAI/Anthropic/Google. Users won't see IBM/Ollama in the UI initially.
Apply this JSON diff in the provider template block:
"options": [ - "OpenAI", - "Anthropic", - "Google" + "OpenAI", + "Anthropic", + "Google", + "IBM watsonx.ai", + "Ollama" ], "options_metadata": [ { "icon": "OpenAI" }, { "icon": "Anthropic" }, { - "icon": "GoogleGenerativeAI" + "icon": "GoogleGenerativeAI" + }, + { + "icon": "WatsonxAI" + }, + { + "icon": "Ollama" } ],
3244-3288: Harden IBM model fetch against arbitrary URLs (allowlist).

fetch_ibm_models accepts any base_url. Since flows can be imported/edited, enforce an allowlist to reduce SSRF risk.
Minimal patch:
```diff
     @staticmethod
     def fetch_ibm_models(base_url: str) -> list[str]:
         """Fetch available models from the watsonx.ai API."""
         try:
+            # Restrict to known IBM endpoints to prevent SSRF
+            if base_url not in IBM_WATSONX_URLS:
+                logger.warning("Unsupported watsonx base URL; falling back to default model list.")
+                return IBM_WATSONX_DEFAULT_MODELS
             endpoint = f"{base_url}/ml/v1/foundation_model_specs"
             params = {"version": "2024-09-16", "filters": "function_text_chat,!lifecycle_withdrawn"}
             response = requests.get(endpoint, params=params, timeout=10)
```
3244-3335: Fix ChatOpenAI and ChatAnthropic constructor parameter names.

The code uses incorrect parameter names that will cause runtime TypeErrors:

- ChatOpenAI: change `model_name=model_name` to `model=model_name`
- ChatAnthropic: change `anthropic_api_key=self.api_key` to `api_key=self.api_key`

ChatOpenAI 0.3.23 expects the parameter `model`, not `model_name`. ChatAnthropic 0.3.14 expects the parameter `api_key`, not `anthropic_api_key`. Both `streaming` parameters are valid and correctly used.

src/backend/base/langflow/initial_setup/starter_projects/Instagram Copywriter.json (2)
2861-2896: Provider dropdown options don’t include IBM/Ollama (UI mismatch).

The node’s provider.options only list OpenAI/Anthropic/Google, but the code supports IBM watsonx.ai and Ollama. Users won’t see the new providers.
- Update provider.options to include "IBM watsonx.ai" and "Ollama", with corresponding icons.
- Re-export/regenerate this starter so the template matches the code.
Also applies to: 3183-3217
2795-3118: Fix ChatOpenAI constructor parameter: change `model_name` to `model` for langchain-openai 0.3.23 compatibility.

The ChatOpenAI constructor in langchain-openai 0.3.23 expects the `model` parameter, not `model_name`. The code currently passes `model_name=model_name`, which will cause a TypeError at runtime.

Apply this diff to the `build_model()` method's OpenAI branch:

```diff
         return ChatOpenAI(
-            model_name=model_name,
+            model=model_name,
             temperature=temperature,
             streaming=stream,
             openai_api_key=self.api_key,
         )
```

src/backend/base/langflow/initial_setup/starter_projects/Financial Report Parser.json (1)
949-975: Provider dropdown options miss IBM/Ollama (UI mismatch).

The node template only lists OpenAI/Anthropic/Google. Update options to surface IBM watsonx.ai and Ollama, matching the code.
src/backend/base/langflow/initial_setup/starter_projects/Meeting Summary.json (4)
3148-3176: Provider options omit IBM and Ollama — UI cannot surface new providers

The template lists only OpenAI/Anthropic/Google while the embedded code supports IBM watsonx.ai and Ollama. Add both to keep UI/state consistent.
"provider": { "_input_type": "DropdownInput", ... - "options": [ - "OpenAI", - "Anthropic", - "Google" - ], + "options": [ + "OpenAI", + "Anthropic", + "Google", + "IBM watsonx.ai", + "Ollama" + ], ... },
3474-3504: Mirror provider options fix in the second LanguageModelComponent

Apply the same provider options expansion here to avoid inconsistent behavior between the two nodes.
"provider": { "_input_type": "DropdownInput", ... - "options": [ - "OpenAI", - "Anthropic", - "Google" - ], + "options": [ + "OpenAI", + "Anthropic", + "Google", + "IBM watsonx.ai", + "Ollama" + ], ... },
3298-3327: Add missing deps in the second LanguageModelComponent as well

Duplicate the dependency additions here for consistency.
"dependencies": [ { "name": "langchain_anthropic", "version": "0.3.14" }, { "name": "langchain_google_genai", "version": "2.0.6" }, { "name": "langchain_openai", "version": "0.3.23" }, + { + "name": "langchain_ollama", + "version": null + }, + { + "name": "langchain_ibm", + "version": null + }, { "name": "lfx", "version": null } ],
2973-2992: Add missing runtime dependencies to metadata.dependencies

The embedded code imports `langchain_ibm` (line 3076: `from langchain_ibm import ChatWatsonx`) and `langchain_ollama` (line 3076: `from langchain_ollama import ChatOllama`), but both are missing from the dependencies list at lines 2973-2992. This will cause execution to fail in environments that rely on this metadata for dependency resolution.

```diff
     "dependencies": [
       { "name": "langchain_anthropic", "version": "0.3.14" },
       { "name": "langchain_google_genai", "version": "2.0.6" },
       { "name": "langchain_openai", "version": "0.3.23" },
+      {
+        "name": "langchain_ollama",
+        "version": null
+      },
+      {
+        "name": "langchain_ibm",
+        "version": null
+      },
       { "name": "lfx", "version": null }
     ],
```

src/lfx/src/lfx/components/models/language_model.py (1)
334-343: Avoid blocking the event loop when fetching IBM models

fetch_ibm_models uses requests (sync). Calling it inside async update_build_config blocks the loop. Offload to a worker thread or switch to httpx.AsyncClient.
```diff
+import asyncio
@@
-            try:
-                models = self.fetch_ibm_models(base_url=field_value)
+            try:
+                # Offload sync HTTP call to a worker thread to avoid blocking the event loop
+                models = await asyncio.to_thread(self.fetch_ibm_models, field_value)
                 build_config["model_name"]["options"] = models
```
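As a self-contained illustration of the `asyncio.to_thread` approach (the endpoint and params come from the IBM snippets in this review; the wrapper names are placeholders, not the component's actual API):

```python
import asyncio
import requests

def fetch_models_sync(base_url: str) -> list[str]:
    # Blocking call; safe only off the event loop.
    response = requests.get(
        f"{base_url}/ml/v1/foundation_model_specs",
        params={"version": "2024-09-16"},
        timeout=10,
    )
    response.raise_for_status()
    return sorted(m["model_id"] for m in response.json().get("resources", []))

async def update_options(base_url: str) -> list[str]:
    # asyncio.to_thread runs the sync function in a worker thread,
    # keeping the event loop responsive while the request runs.
    return await asyncio.to_thread(fetch_models_sync, base_url)

if __name__ == "__main__":
    print(asyncio.run(update_options("https://us-south.ml.cloud.ibm.com")))
```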
🧹 Nitpick comments (27)
src/backend/base/langflow/api/v1/openai_responses.py (1)
189-197: Consider skipping text processing block for completed messages.

The logic correctly clears text content for completed messages to avoid duplicate streaming. However, the text processing block (lines 317-339) will still execute even with empty text, performing unnecessary delta computations that won't result in any output.
Consider wrapping the text processing block with a condition to skip it entirely when the message is complete:
```diff
             # Skip processing text content if state is "complete"
             # All content has already been streamed via token events
             if message_state == "complete":
                 await logger.adebug(
                     "[OpenAIResponses][stream] skipping add_message with state=complete"
                 )
                 # Still process content_blocks for tool calls, but skip text content
                 text = ""

             # Look for Agent Steps in content_blocks
             for block in content_blocks:
                 # ... tool call processing ...

             # Extract text content for streaming (only AI responses)
-            if (
+            if message_state != "complete" and (
                 sender in ["Machine", "AI", "Agent"]
                 and text != request.input
                 and sender_name in ["Agent", "AI"]
             ):
```

This avoids unnecessary delta computation when text is already empty.
src/lfx/src/lfx/base/models/model.py (1)
281-303: Ensure streaming status is a string; avoid propagating a generator.

lf_message.text can be an async stream; assigning it to result leaks a generator into status. Guard for str.
```diff
-        result = lf_message.text or ""
+        text_val = getattr(lf_message, "text", None)
+        result = text_val if isinstance(text_val, str) else ""
```

src/lfx/src/lfx/base/models/model_utils.py (2)
22-37: Add timeout and optional headers for resilient URL validation.

Avoid indefinite waits and support proxied/auth setups without breaking current callers.
```diff
-async def is_valid_ollama_url(url: str) -> bool:
+async def is_valid_ollama_url(url: str, headers: dict[str, str] | None = None) -> bool:
@@
-    async with httpx.AsyncClient() as client:
-        return (await client.get(url=urljoin(url, "api/tags"))).status_code == HTTP_STATUS_OK
+    timeout = httpx.Timeout(5.0)
+    async with httpx.AsyncClient(timeout=timeout) as client:
+        resp = await client.get(url=urljoin(url, "api/tags"), headers=headers)
+        return resp.status_code == HTTP_STATUS_OK
```
39-107: Model listing: add timeout and optional headers; minor hygiene.

Improves robustness and flexibility; keeps behavior the same for existing callers.
```diff
-async def get_ollama_models(
-    base_url_value: str, desired_capability: str, json_models_key: str, json_name_key: str, json_capabilities_key: str
-) -> list[str]:
+async def get_ollama_models(
+    base_url_value: str,
+    desired_capability: str,
+    json_models_key: str,
+    json_name_key: str,
+    json_capabilities_key: str,
+    headers: dict[str, str] | None = None,
+) -> list[str]:
@@
-    tags_response = None
-
-    async with httpx.AsyncClient() as client:
+    timeout = httpx.Timeout(10.0)
+    async with httpx.AsyncClient(timeout=timeout) as client:
         # Fetch available models
-        tags_response = await client.get(url=tags_url)
+        tags_response = await client.get(url=tags_url, headers=headers)
@@
-        models = tags_response.json()
-        if asyncio.iscoroutine(models):
-            models = await models
+        models = tags_response.json()
         await logger.adebug(f"Available models: {models}")
@@
-            show_response = await client.post(url=show_url, json=payload)
+            show_response = await client.post(url=show_url, json=payload, headers=headers)
@@
-    return sorted(model_ids)
+    return sorted(set(model_ids))
```

src/backend/base/langflow/initial_setup/starter_projects/SEO Keyword Generator.json (1)
968-1142: Use the updated Ollama URL in config updates; avoid stale fetches.

In the ollama_base_url branch you validate/fetch using self.ollama_base_url. During in-place edits, this may still hold the old value.
- elif field_name == "ollama_base_url": + elif field_name == "ollama_base_url": @@ - logger.debug( - f"Fetching Ollama models from updated URL: {build_config['ollama_base_url']} \ - and value {self.ollama_base_url}", - ) - await logger.adebug(f"Fetching Ollama models from updated URL: {self.ollama_base_url}") - if await is_valid_ollama_url(url=self.ollama_base_url): + current_url = field_value or getattr(self, "ollama_base_url", DEFAULT_OLLAMA_URL) + logger.debug(f"Fetching Ollama models from updated URL: {current_url}") + await logger.adebug(f"Fetching Ollama models from updated URL: {current_url}") + if await is_valid_ollama_url(url=current_url): try: - models = await get_ollama_models( - base_url_value=self.ollama_base_url, + models = await get_ollama_models( + base_url_value=current_url, desired_capability=DESIRED_CAPABILITY, json_models_key=JSON_MODELS_KEY, json_name_key=JSON_NAME_KEY, json_capabilities_key=JSON_CAPABILITIES_KEY, ) build_config["model_name"]["options"] = models build_config["model_name"]["value"] = models[0] if models else "" - info_message = f"Updated model options: {len(models)} models found in {self.ollama_base_url}" + info_message = f"Updated model options: {len(models)} models found in {current_url}" await logger.ainfo(info_message) except ValueError: await logger.awarning("Error updating Ollama model options.") build_config["model_name"]["options"] = [] build_config["model_name"]["value"] = "" else: - await logger.awarning(f"Invalid Ollama URL: {self.ollama_base_url}") + await logger.awarning(f"Invalid Ollama URL: {current_url}") build_config["model_name"]["options"] = [] build_config["model_name"]["value"] = ""src/backend/base/langflow/initial_setup/starter_projects/Image Sentiment Analysis.json (1)
1235-1406: Fix stale URL usage on Ollama base URL change.

Mirror the SEO fix: use the incoming field_value for validation/fetch.
- elif field_name == "ollama_base_url": + elif field_name == "ollama_base_url": @@ - logger.debug( - f"Fetching Ollama models from updated URL: {build_config['ollama_base_url']} \ - and value {self.ollama_base_url}", - ) - await logger.adebug(f"Fetching Ollama models from updated URL: {self.ollama_base_url}") - if await is_valid_ollama_url(url=self.ollama_base_url): + current_url = field_value or getattr(self, "ollama_base_url", DEFAULT_OLLAMA_URL) + logger.debug(f"Fetching Ollama models from updated URL: {current_url}") + await logger.adebug(f"Fetching Ollama models from updated URL: {current_url}") + if await is_valid_ollama_url(url=current_url): try: - models = await get_ollama_models( - base_url_value=self.ollama_base_url, + models = await get_ollama_models( + base_url_value=current_url, desired_capability=DESIRED_CAPABILITY, json_models_key=JSON_MODELS_KEY, json_name_key=JSON_NAME_KEY, json_capabilities_key=JSON_CAPABILITIES_KEY, ) build_config["model_name"]["options"] = models build_config["model_name"]["value"] = models[0] if models else "" - info_message = f"Updated model options: {len(models)} models found in {self.ollama_base_url}" + info_message = f"Updated model options: {len(models)} models found in {current_url}" await logger.ainfo(info_message) except ValueError: await logger.awarning("Error updating Ollama model options.") build_config["model_name"]["options"] = [] build_config["model_name"]["value"] = "" else: - await logger.awarning(f"Invalid Ollama URL: {self.ollama_base_url}") + await logger.awarning(f"Invalid Ollama URL: {current_url}") build_config["model_name"]["options"] = [] build_config["model_name"]["value"] = ""src/backend/base/langflow/initial_setup/starter_projects/Research Agent.json (1)
2206-2234: Duplicate debug logging in ollama_base_url path.

Both logger.debug and await logger.adebug log the same message. Keep one (prefer async).
```diff
-            logger.debug(
-                f"Fetching Ollama models from updated URL: {build_config['ollama_base_url']} and value {self.ollama_base_url}",
-            )
             await logger.adebug(f"Fetching Ollama models from updated URL: {self.ollama_base_url}")
```

src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json (1)
1496-1511: Duplicate debug logging in ollama_base_url branch (repeat).

Remove the sync logger.debug and keep the async log.
```diff
-            logger.debug(
-                f"Fetching Ollama models from updated URL: {build_config['ollama_base_url']} and value {self.ollama_base_url}",
-            )
             await logger.adebug(f"Fetching Ollama models from updated URL: {self.ollama_base_url}")
```

src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json (1)
1463-1463: Consider adding timeout and resilience safeguards to HTTP requests during build config updates.

The `update_build_config` method now makes HTTP requests to external Ollama and IBM watsonx APIs when a user changes provider or URL settings (e.g., `is_valid_ollama_url()`, `get_ollama_models()`, `fetch_ibm_models()`). Without explicit timeouts and retry logic, slow or unresponsive services could block the build config update indefinitely, degrading UI responsiveness.

Recommendations (a minimal sketch follows this list):
- Set a maximum timeout (e.g., 5–10 seconds) on all external HTTP calls.
- Implement exponential backoff or circuit-breaker logic for failing requests.
- Log warnings but do not fail the entire build config update if external model discovery fails.
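A sketch of those safeguards combined, assuming httpx and a plain retry loop (the helper name and backoff constants are illustrative):

```python
import asyncio
import logging

import httpx

logger = logging.getLogger(__name__)

async def fetch_models_with_backoff(url: str, retries: int = 3) -> list[str]:
    """Fetch model names, retrying with exponential backoff; never raise."""
    delay = 1.0
    for attempt in range(retries):
        try:
            async with httpx.AsyncClient(timeout=httpx.Timeout(5.0)) as client:
                response = await client.get(url)
                response.raise_for_status()
                return [m["name"] for m in response.json().get("models", [])]
        except (httpx.HTTPError, ValueError, KeyError):
            logger.warning("Model discovery failed (attempt %d/%d)", attempt + 1, retries)
            await asyncio.sleep(delay)
            delay *= 2  # exponential backoff
    # Degrade gracefully: leave options empty instead of failing the update.
    return []
```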
src/backend/base/langflow/initial_setup/starter_projects/Portfolio Website Code Generator.json (2)
1733-1737: Tidy log line; remove stray backslash and duplicate debug.

The f-string includes a literal backslash and large whitespace, and you immediately log the same info again with adebug(). Keep one line and avoid the backslash.
```diff
-            logger.debug(
-                f"Fetching Ollama models from updated URL: {build_config['ollama_base_url']} \
-                    and value {self.ollama_base_url}",
-            )
-            await logger.adebug(f"Fetching Ollama models from updated URL: {self.ollama_base_url}")
+            await logger.adebug(
+                f"Fetching Ollama models from updated URL (template value={build_config.get('ollama_base_url', {}).get('value')}), "
+                f"resolved value={self.ollama_base_url}"
+            )
```

Also applies to: 2066-2070
1537-1556: Gate models that lack tool calling when used with StructuredOutput.

Model list includes "gpt-5-chat-latest", which doesn’t support tool calling. This template wires a LanguageModel into StructuredOutput; tool-calling gaps will break extraction.
Suggestions:
- When provider == OpenAI and downstream requires tool calls, filter out "gpt-5-chat-latest" from options or show a warning.
- Or set tool_calling=False paths accordingly and skip Trustcall, forcing fallback.
Based on learnings.
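A minimal sketch of the first suggestion (the capability set and helper are illustrative, not part of the component today; "gpt-5-chat-latest" comes from the review note above):

```python
# Models known to lack tool calling; extend as needed.
MODELS_WITHOUT_TOOL_CALLING = {"gpt-5-chat-latest"}

def filter_tool_calling_models(options: list[str]) -> list[str]:
    """Drop models that cannot satisfy a downstream StructuredOutput node."""
    return [name for name in options if name not in MODELS_WITHOUT_TOOL_CALLING]

# e.g. build_config["model_name"]["options"] = filter_tool_calling_models(models)
```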
src/backend/base/langflow/initial_setup/starter_projects/Twitter Thread Generator.json (1)
2097-2103: Clean up duplicated/awkward debug logging.

Remove the first logger.debug with the backslash and keep a single adebug with clear values.
```diff
-            logger.debug(
-                f"Fetching Ollama models from updated URL: {build_config['ollama_base_url']} \
-                    and value {self.ollama_base_url}",
-            )
-            await logger.adebug(f"Fetching Ollama models from updated URL: {self.ollama_base_url}")
+            await logger.adebug(
+                f"Fetching Ollama models from updated URL (template value={build_config.get('ollama_base_url', {}).get('value')}), "
+                f"resolved value={self.ollama_base_url}"
+            )
```

src/backend/base/langflow/initial_setup/starter_projects/Custom Component Generator.json (1)
2682-2717: Sync provider options with code (add IBM/Ollama).

UI template lists only OpenAI/Anthropic/Google, but code supports IBM watsonx.ai and Ollama. Expose them in the starter’s node template to avoid hidden options.
- "options": [ - "OpenAI", - "Anthropic", - "Google" - ], + "options": [ + "OpenAI", + "Anthropic", + "Google", + "IBM watsonx.ai", + "Ollama" + ],Please open this template in the Playground and confirm the dropdown shows 5 providers.
src/backend/base/langflow/initial_setup/starter_projects/Research Translation Loop.json (1)
1471-1505: Expose IBM/Ollama in provider dropdown.

Template should reflect all supported providers.
- "options": [ - "OpenAI", - "Anthropic", - "Google" - ], + "options": [ + "OpenAI", + "Anthropic", + "Google", + "IBM watsonx.ai", + "Ollama" + ],src/backend/base/langflow/initial_setup/starter_projects/Vector Store RAG.json (1)
3244-3650: Minor: streamline logging and types.
- Prefer await logger.adebug/awarning uniformly in async method; drop synchronous logger.debug in that block.
- Log build_config["ollama_base_url"]["value"] rather than the entire dict for clarity.
src/backend/base/langflow/initial_setup/starter_projects/Instagram Copywriter.json (3)
2795-3118: Async path calls sync requests; prefer non-blocking IBM model fetch.fetch_ibm_models uses requests.get inside an async flow. Use httpx.AsyncClient or run in a thread to avoid blocking.
Example:
- def fetch_ibm_models(base_url: str) -> list[str]: - ... - response = requests.get(endpoint, params=params, timeout=10) + async def fetch_ibm_models(base_url: str) -> list[str]: + ... + async with httpx.AsyncClient(timeout=10) as client: + response = await client.get(endpoint, params=params)And await where it’s called.
2710-2750: Duplicate debug logging; remove one.Both logger.debug and await logger.adebug log the same message in the ollama_base_url branch. Keep one (prefer async).
- logger.debug( - f"Fetching Ollama models from updated URL: {build_config['ollama_base_url']} \ - and value {self.ollama_base_url}", - ) await logger.adebug(f"Fetching Ollama models from updated URL: {self.ollama_base_url}")Also applies to: 3015-3055
2795-3118: Reasoning models: hide system_message for all o-series, not only o1.*

Currently checks only `field_value.startswith("o1")`. Extend to all reasoning models (e.g., o3, o4*).
```diff
-        if field_value and field_value.startswith("o1") and hasattr(self, "provider") and self.provider == "OpenAI":
+        if (
+            field_value
+            and field_value.startswith("o")
+            and hasattr(self, "provider")
+            and self.provider == "OpenAI"
+        ):
```

src/backend/base/langflow/initial_setup/starter_projects/Financial Report Parser.json (5)
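Note that a bare `startswith("o")` would also match unrelated names; a stricter sketch with explicit prefixes (the prefix tuple is an assumption):

```python
# Explicit o-series prefixes avoid false positives on other model names
# that merely begin with "o". Adjust the tuple as new series ship.
REASONING_PREFIXES = ("o1", "o3", "o4")

def is_reasoning_model(model_name: str) -> bool:
    return model_name.startswith(REASONING_PREFIXES)
```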
874-1049: Async path calls sync requests; prefer non-blocking IBM model fetch.

Same as other file: replace requests.get with httpx.AsyncClient and await, or offload to a thread executor.
1007-1016: Duplicate debug logging; remove one.

Both logger.debug and await logger.adebug are emitted in the ollama_base_url update branch. Keep only the async variant.
874-1049: Reasoning models: hide system_message for all o-series, not only o1.*

Adjust the condition to cover o3/o4* as well.
1066-1352: Guard model selection for tool-calling flow (gpt-5-chat-latest).

This flow uses StructuredOutput (tool/extractor). Per learnings, gpt-5-chat-latest doesn’t support tool calling; selection may fail. Add a warning or exclude it from options when this starter is loaded.
Based on learnings
874-1049: Switch ChatOpenAI constructor to use `model` parameter for langchain-openai 0.3.23 compatibility.

ChatOpenAI in langchain-openai 0.3.23 expects `model` as the canonical parameter; `model_name` is accepted as a backwards-compatible alias but `model` is preferred.

```diff
-        return ChatOpenAI(
-            model_name=model_name,
+        return ChatOpenAI(
+            model=model_name,
             temperature=temperature,
             streaming=stream,
             openai_api_key=self.api_key,
         )
```

src/backend/base/langflow/initial_setup/starter_projects/Meeting Summary.json (1)
346-351: Duplicate debug lines; prefer a single async logger call

Both logger.debug and await logger.adebug log the same fact. Keep one to reduce noise.
```diff
-            logger.debug(
-                f"Fetching Ollama models from updated URL: {build_config['ollama_base_url']} \
-                    and value {self.ollama_base_url}",
-            )
-            await logger.adebug(f"Fetching Ollama models from updated URL: {self.ollama_base_url}")
+            await logger.adebug(
+                f"Fetching Ollama models from updated URL: {self.ollama_base_url}"
+            )
```

src/lfx/src/lfx/components/models/language_model.py (1)
35-41: Remove unused HTTP_STATUS_OK constant

The constant is declared but never used in this module.
```diff
-# Ollama API constants
-HTTP_STATUS_OK = 200
 JSON_MODELS_KEY = "models"
 JSON_NAME_KEY = "name"
 JSON_CAPABILITIES_KEY = "capabilities"
 DESIRED_CAPABILITY = "completion"
 DEFAULT_OLLAMA_URL = "http://localhost:11434"
```
2680-2720: Improve variable reference detection logic.The code detects variable references by checking if a value is all uppercase and contains underscores. This heuristic could produce false positives (e.g.,
API_KEY_VALUEcould legitimately be a URL component). Consider a more robust approach, such as checking against known global variable names or using a dedicated marker/prefix convention.# Current approach (fragile): is_variable_ref = ( config_value and isinstance(config_value, str) and config_value.isupper() and "_" in config_value ) # Alternative: check against known variable patterns or use a marker
2600-2620: Replace broad exception handling with specific exception types.Several
except Exception:blocks catch all exceptions. This masks errors and makes debugging harder. Use specific exception types (e.g.,requests.ConnectionError,requests.Timeout,ValueError,KeyError) to handle only expected failures.# Current (line ~2600): except Exception: # noqa: BLE001 logger.exception("Error fetching IBM watsonx models. Using default models.") # Better: except (requests.ConnectionError, requests.Timeout, requests.HTTPError, ValueError) as e: logger.exception(f"Error fetching IBM watsonx models: {e}")
| "title_case": false, | ||
| "type": "code", | ||
| "value": "from typing import Any\nfrom urllib.parse import urljoin\n\nimport requests\nfrom langchain_anthropic import ChatAnthropic\nfrom langchain_ibm import ChatWatsonx\nfrom langchain_ollama import ChatOllama\nfrom langchain_openai import ChatOpenAI\nfrom pydantic.v1 import SecretStr\n\nfrom lfx.base.models.anthropic_constants import ANTHROPIC_MODELS\nfrom lfx.base.models.google_generative_ai_constants import GOOGLE_GENERATIVE_AI_MODELS\nfrom lfx.base.models.google_generative_ai_model import ChatGoogleGenerativeAIFixed\nfrom lfx.base.models.model import LCModelComponent\nfrom lfx.base.models.openai_constants import OPENAI_CHAT_MODEL_NAMES, OPENAI_REASONING_MODEL_NAMES\nfrom lfx.field_typing import LanguageModel\nfrom lfx.field_typing.range_spec import RangeSpec\nfrom lfx.inputs.inputs import BoolInput, MessageTextInput, StrInput\nfrom lfx.io import DropdownInput, MessageInput, MultilineInput, SecretStrInput, SliderInput\nfrom lfx.log.logger import logger\nfrom lfx.schema.dotdict import dotdict\nfrom lfx.utils.util import transform_localhost_url\n\n# IBM watsonx.ai constants\nIBM_WATSONX_DEFAULT_MODELS = [\"ibm/granite-3-2b-instruct\", \"ibm/granite-3-8b-instruct\", \"ibm/granite-13b-instruct-v2\"]\nIBM_WATSONX_URLS = [\n \"https://us-south.ml.cloud.ibm.com\",\n \"https://eu-de.ml.cloud.ibm.com\",\n \"https://eu-gb.ml.cloud.ibm.com\",\n \"https://au-syd.ml.cloud.ibm.com\",\n \"https://jp-tok.ml.cloud.ibm.com\",\n \"https://ca-tor.ml.cloud.ibm.com\",\n]\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n @staticmethod\n def fetch_ibm_models(base_url: str) -> list[str]:\n \"\"\"Fetch available models from the watsonx.ai API.\"\"\"\n try:\n endpoint = f\"{base_url}/ml/v1/foundation_model_specs\"\n params = {\"version\": \"2024-09-16\", \"filters\": \"function_text_chat,!lifecycle_withdrawn\"}\n response = requests.get(endpoint, params=params, timeout=10)\n response.raise_for_status()\n data = response.json()\n models = [model[\"model_id\"] for model in data.get(\"resources\", [])]\n return sorted(models)\n except Exception: # noqa: BLE001\n logger.exception(\"Error fetching IBM watsonx models. Using default models.\")\n return IBM_WATSONX_DEFAULT_MODELS\n\n @staticmethod\n def fetch_ollama_models(base_url: str) -> list[str]:\n \"\"\"Fetch available models from the Ollama API.\"\"\"\n try:\n # Strip /v1 suffix if present, as Ollama API endpoints are at root level\n base_url = base_url.rstrip(\"/\").removesuffix(\"/v1\")\n if not base_url.endswith(\"/\"):\n base_url = base_url + \"/\"\n base_url = transform_localhost_url(base_url)\n\n # Ollama REST API to return models\n tags_url = urljoin(base_url, \"api/tags\")\n\n response = requests.get(tags_url, timeout=10)\n response.raise_for_status()\n data = response.json()\n models = [model[\"name\"] for model in data.get(\"models\", [])]\n return sorted(models)\n except Exception: # noqa: BLE001\n logger.exception(\"Error fetching Ollama models. 
Returning empty list.\")\n return []\n\n inputs = [\n DropdownInput(\n name=\"provider\",\n display_name=\"Model Provider\",\n options=[\"OpenAI\", \"Anthropic\", \"Google\", \"IBM watsonx.ai\", \"Ollama\"],\n value=\"OpenAI\",\n info=\"Select the model provider\",\n real_time_refresh=True,\n options_metadata=[\n {\"icon\": \"OpenAI\"},\n {\"icon\": \"Anthropic\"},\n {\"icon\": \"GoogleGenerativeAI\"},\n {\"icon\": \"WatsonxAI\"},\n {\"icon\": \"Ollama\"},\n ],\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n options=OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES,\n value=OPENAI_CHAT_MODEL_NAMES[0],\n info=\"Select the model to use\",\n real_time_refresh=True,\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n ),\n DropdownInput(\n name=\"base_url_ibm_watsonx\",\n display_name=\"watsonx API Endpoint\",\n info=\"The base URL of the API (IBM watsonx.ai only)\",\n options=IBM_WATSONX_URLS,\n value=IBM_WATSONX_URLS[0],\n show=False,\n real_time_refresh=True,\n ),\n StrInput(\n name=\"project_id\",\n display_name=\"watsonx Project ID\",\n info=\"The project ID associated with the foundation model (IBM watsonx.ai only)\",\n show=False,\n required=False,\n ),\n MessageTextInput(\n name=\"ollama_base_url\",\n display_name=\"Ollama API URL\",\n info=\"Endpoint of the Ollama API (Ollama only). Defaults to http://localhost:11434\",\n value=\"http://localhost:11434\",\n show=False,\n real_time_refresh=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=False,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n provider = self.provider\n model_name = self.model_name\n temperature = self.temperature\n stream = self.stream\n\n if provider == \"OpenAI\":\n if not self.api_key:\n msg = \"OpenAI API key is required when using OpenAI provider\"\n raise ValueError(msg)\n\n if model_name in OPENAI_REASONING_MODEL_NAMES:\n # reasoning models do not support temperature (yet)\n temperature = None\n\n return ChatOpenAI(\n model_name=model_name,\n temperature=temperature,\n streaming=stream,\n openai_api_key=self.api_key,\n )\n if provider == \"Anthropic\":\n if not self.api_key:\n msg = \"Anthropic API key is required when using Anthropic provider\"\n raise ValueError(msg)\n return ChatAnthropic(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n anthropic_api_key=self.api_key,\n )\n if provider == \"Google\":\n if not self.api_key:\n msg = \"Google API key is required when using Google provider\"\n raise ValueError(msg)\n return ChatGoogleGenerativeAIFixed(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n google_api_key=self.api_key,\n )\n if provider == \"IBM watsonx.ai\":\n if not self.api_key:\n msg = \"IBM API key is required when using IBM watsonx.ai provider\"\n raise ValueError(msg)\n if not self.base_url_ibm_watsonx:\n msg = \"IBM watsonx 
API Endpoint is required when using IBM watsonx.ai provider\"\n raise ValueError(msg)\n if not self.project_id:\n msg = \"IBM watsonx Project ID is required when using IBM watsonx.ai provider\"\n raise ValueError(msg)\n return ChatWatsonx(\n apikey=SecretStr(self.api_key).get_secret_value(),\n url=self.base_url_ibm_watsonx,\n project_id=self.project_id,\n model_id=model_name,\n params={\n \"temperature\": temperature,\n },\n streaming=stream,\n )\n if provider == \"Ollama\":\n if not self.ollama_base_url:\n msg = \"Ollama API URL is required when using Ollama provider\"\n raise ValueError(msg)\n if not model_name:\n msg = \"Model name is required when using Ollama provider\"\n raise ValueError(msg)\n\n transformed_base_url = transform_localhost_url(self.ollama_base_url)\n\n # Check if URL contains /v1 suffix (OpenAI-compatible mode)\n if transformed_base_url and transformed_base_url.rstrip(\"/\").endswith(\"/v1\"):\n # Strip /v1 suffix and log warning\n transformed_base_url = transformed_base_url.rstrip(\"/\").removesuffix(\"/v1\")\n logger.warning(\n \"Detected '/v1' suffix in base URL. The Ollama component uses the native Ollama API, \"\n \"not the OpenAI-compatible API. The '/v1' suffix has been automatically removed. \"\n \"If you want to use the OpenAI-compatible API, please use the OpenAI component instead. \"\n \"Learn more at https://docs.ollama.com/openai#openai-compatibility\"\n )\n\n return ChatOllama(\n base_url=transformed_base_url,\n model=model_name,\n temperature=temperature,\n )\n msg = f\"Unknown provider: {provider}\"\n raise ValueError(msg)\n\n def update_build_config(self, build_config: dotdict, field_value: Any, field_name: str | None = None) -> dotdict:\n if field_name == \"provider\":\n if field_value == \"OpenAI\":\n build_config[\"model_name\"][\"options\"] = OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES\n build_config[\"model_name\"][\"value\"] = OPENAI_CHAT_MODEL_NAMES[0]\n build_config[\"api_key\"][\"display_name\"] = \"OpenAI API Key\"\n build_config[\"api_key\"][\"show\"] = True\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = False\n build_config[\"project_id\"][\"show\"] = False\n build_config[\"ollama_base_url\"][\"show\"] = False\n elif field_value == \"Anthropic\":\n build_config[\"model_name\"][\"options\"] = ANTHROPIC_MODELS\n build_config[\"model_name\"][\"value\"] = ANTHROPIC_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Anthropic API Key\"\n build_config[\"api_key\"][\"show\"] = True\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = False\n build_config[\"project_id\"][\"show\"] = False\n build_config[\"ollama_base_url\"][\"show\"] = False\n elif field_value == \"Google\":\n build_config[\"model_name\"][\"options\"] = GOOGLE_GENERATIVE_AI_MODELS\n build_config[\"model_name\"][\"value\"] = GOOGLE_GENERATIVE_AI_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Google API Key\"\n build_config[\"api_key\"][\"show\"] = True\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = False\n build_config[\"project_id\"][\"show\"] = False\n build_config[\"ollama_base_url\"][\"show\"] = False\n elif field_value == \"IBM watsonx.ai\":\n build_config[\"model_name\"][\"options\"] = IBM_WATSONX_DEFAULT_MODELS\n build_config[\"model_name\"][\"value\"] = IBM_WATSONX_DEFAULT_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"IBM API Key\"\n build_config[\"api_key\"][\"show\"] = True\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = True\n build_config[\"project_id\"][\"show\"] = True\n 
build_config[\"ollama_base_url\"][\"show\"] = False\n elif field_value == \"Ollama\":\n # Fetch Ollama models from the API\n ollama_url = build_config[\"ollama_base_url\"].get(\"value\", \"http://localhost:11434\")\n models = self.fetch_ollama_models(base_url=ollama_url)\n build_config[\"model_name\"][\"options\"] = models\n build_config[\"model_name\"][\"value\"] = models[0] if models else \"\"\n build_config[\"api_key\"][\"show\"] = False\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = False\n build_config[\"project_id\"][\"show\"] = False\n build_config[\"ollama_base_url\"][\"show\"] = True\n elif (\n field_name == \"base_url_ibm_watsonx\"\n and field_value\n and hasattr(self, \"provider\")\n and self.provider == \"IBM watsonx.ai\"\n ):\n # Fetch IBM models when base_url changes\n try:\n models = self.fetch_ibm_models(base_url=field_value)\n build_config[\"model_name\"][\"options\"] = models\n build_config[\"model_name\"][\"value\"] = models[0] if models else IBM_WATSONX_DEFAULT_MODELS[0]\n info_message = f\"Updated model options: {len(models)} models found in {field_value}\"\n logger.info(info_message)\n except Exception: # noqa: BLE001\n logger.exception(\"Error updating IBM model options.\")\n elif (\n field_name == \"ollama_base_url\" and field_value and hasattr(self, \"provider\") and self.provider == \"Ollama\"\n ):\n # Fetch Ollama models when ollama_base_url changes\n try:\n models = self.fetch_ollama_models(base_url=field_value)\n build_config[\"model_name\"][\"options\"] = models\n build_config[\"model_name\"][\"value\"] = models[0] if models else \"\"\n info_message = f\"Updated model options: {len(models)} models found in {field_value}\"\n logger.info(info_message)\n except Exception: # noqa: BLE001\n logger.exception(\"Error updating Ollama model options.\")\n elif field_name == \"model_name\" and field_value.startswith(\"o1\") and self.provider == \"OpenAI\":\n # Hide system_message for o1 models - currently unsupported\n if \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = False\n elif field_name == \"model_name\" and not field_value.startswith(\"o1\") and \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = True\n return build_config\n" | ||
| "value": "from typing import Any\n\nimport requests\nfrom langchain_anthropic import ChatAnthropic\nfrom langchain_ibm import ChatWatsonx\nfrom langchain_ollama import ChatOllama\nfrom langchain_openai import ChatOpenAI\nfrom pydantic.v1 import SecretStr\n\nfrom lfx.base.models.anthropic_constants import ANTHROPIC_MODELS\nfrom lfx.base.models.google_generative_ai_constants import GOOGLE_GENERATIVE_AI_MODELS\nfrom lfx.base.models.google_generative_ai_model import ChatGoogleGenerativeAIFixed\nfrom lfx.base.models.model import LCModelComponent\nfrom lfx.base.models.model_utils import get_ollama_models, is_valid_ollama_url\nfrom lfx.base.models.openai_constants import OPENAI_CHAT_MODEL_NAMES, OPENAI_REASONING_MODEL_NAMES\nfrom lfx.field_typing import LanguageModel\nfrom lfx.field_typing.range_spec import RangeSpec\nfrom lfx.inputs.inputs import BoolInput, MessageTextInput, StrInput\nfrom lfx.io import DropdownInput, MessageInput, MultilineInput, SecretStrInput, SliderInput\nfrom lfx.log.logger import logger\nfrom lfx.schema.dotdict import dotdict\nfrom lfx.utils.util import transform_localhost_url\n\n# IBM watsonx.ai constants\nIBM_WATSONX_DEFAULT_MODELS = [\"ibm/granite-3-2b-instruct\", \"ibm/granite-3-8b-instruct\", \"ibm/granite-13b-instruct-v2\"]\nIBM_WATSONX_URLS = [\n \"https://us-south.ml.cloud.ibm.com\",\n \"https://eu-de.ml.cloud.ibm.com\",\n \"https://eu-gb.ml.cloud.ibm.com\",\n \"https://au-syd.ml.cloud.ibm.com\",\n \"https://jp-tok.ml.cloud.ibm.com\",\n \"https://ca-tor.ml.cloud.ibm.com\",\n]\n\n# Ollama API constants\nHTTP_STATUS_OK = 200\nJSON_MODELS_KEY = \"models\"\nJSON_NAME_KEY = \"name\"\nJSON_CAPABILITIES_KEY = \"capabilities\"\nDESIRED_CAPABILITY = \"completion\"\nDEFAULT_OLLAMA_URL = \"http://localhost:11434\"\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n @staticmethod\n def fetch_ibm_models(base_url: str) -> list[str]:\n \"\"\"Fetch available models from the watsonx.ai API.\"\"\"\n try:\n endpoint = f\"{base_url}/ml/v1/foundation_model_specs\"\n params = {\"version\": \"2024-09-16\", \"filters\": \"function_text_chat,!lifecycle_withdrawn\"}\n response = requests.get(endpoint, params=params, timeout=10)\n response.raise_for_status()\n data = response.json()\n models = [model[\"model_id\"] for model in data.get(\"resources\", [])]\n return sorted(models)\n except Exception: # noqa: BLE001\n logger.exception(\"Error fetching IBM watsonx models. 
Using default models.\")\n return IBM_WATSONX_DEFAULT_MODELS\n\n inputs = [\n DropdownInput(\n name=\"provider\",\n display_name=\"Model Provider\",\n options=[\"OpenAI\", \"Anthropic\", \"Google\", \"IBM watsonx.ai\", \"Ollama\"],\n value=\"OpenAI\",\n info=\"Select the model provider\",\n real_time_refresh=True,\n options_metadata=[\n {\"icon\": \"OpenAI\"},\n {\"icon\": \"Anthropic\"},\n {\"icon\": \"GoogleGenerativeAI\"},\n {\"icon\": \"WatsonxAI\"},\n {\"icon\": \"Ollama\"},\n ],\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n options=OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES,\n value=OPENAI_CHAT_MODEL_NAMES[0],\n info=\"Select the model to use\",\n real_time_refresh=True,\n refresh_button=True,\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n ),\n DropdownInput(\n name=\"base_url_ibm_watsonx\",\n display_name=\"watsonx API Endpoint\",\n info=\"The base URL of the API (IBM watsonx.ai only)\",\n options=IBM_WATSONX_URLS,\n value=IBM_WATSONX_URLS[0],\n show=False,\n real_time_refresh=True,\n ),\n StrInput(\n name=\"project_id\",\n display_name=\"watsonx Project ID\",\n info=\"The project ID associated with the foundation model (IBM watsonx.ai only)\",\n show=False,\n required=False,\n ),\n MessageTextInput(\n name=\"ollama_base_url\",\n display_name=\"Ollama API URL\",\n info=f\"Endpoint of the Ollama API (Ollama only). Defaults to {DEFAULT_OLLAMA_URL}\",\n value=DEFAULT_OLLAMA_URL,\n show=False,\n real_time_refresh=True,\n load_from_db=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=False,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n provider = self.provider\n model_name = self.model_name\n temperature = self.temperature\n stream = self.stream\n\n if provider == \"OpenAI\":\n if not self.api_key:\n msg = \"OpenAI API key is required when using OpenAI provider\"\n raise ValueError(msg)\n\n if model_name in OPENAI_REASONING_MODEL_NAMES:\n # reasoning models do not support temperature (yet)\n temperature = None\n\n return ChatOpenAI(\n model_name=model_name,\n temperature=temperature,\n streaming=stream,\n openai_api_key=self.api_key,\n )\n if provider == \"Anthropic\":\n if not self.api_key:\n msg = \"Anthropic API key is required when using Anthropic provider\"\n raise ValueError(msg)\n return ChatAnthropic(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n anthropic_api_key=self.api_key,\n )\n if provider == \"Google\":\n if not self.api_key:\n msg = \"Google API key is required when using Google provider\"\n raise ValueError(msg)\n return ChatGoogleGenerativeAIFixed(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n google_api_key=self.api_key,\n )\n if provider == \"IBM watsonx.ai\":\n if not self.api_key:\n msg = \"IBM API key is required when using IBM watsonx.ai provider\"\n raise 
ValueError(msg)\n if not self.base_url_ibm_watsonx:\n msg = \"IBM watsonx API Endpoint is required when using IBM watsonx.ai provider\"\n raise ValueError(msg)\n if not self.project_id:\n msg = \"IBM watsonx Project ID is required when using IBM watsonx.ai provider\"\n raise ValueError(msg)\n return ChatWatsonx(\n apikey=SecretStr(self.api_key).get_secret_value(),\n url=self.base_url_ibm_watsonx,\n project_id=self.project_id,\n model_id=model_name,\n params={\n \"temperature\": temperature,\n },\n streaming=stream,\n )\n if provider == \"Ollama\":\n if not self.ollama_base_url:\n msg = \"Ollama API URL is required when using Ollama provider\"\n raise ValueError(msg)\n if not model_name:\n msg = \"Model name is required when using Ollama provider\"\n raise ValueError(msg)\n\n transformed_base_url = transform_localhost_url(self.ollama_base_url)\n\n # Check if URL contains /v1 suffix (OpenAI-compatible mode)\n if transformed_base_url and transformed_base_url.rstrip(\"/\").endswith(\"/v1\"):\n # Strip /v1 suffix and log warning\n transformed_base_url = transformed_base_url.rstrip(\"/\").removesuffix(\"/v1\")\n logger.warning(\n \"Detected '/v1' suffix in base URL. The Ollama component uses the native Ollama API, \"\n \"not the OpenAI-compatible API. The '/v1' suffix has been automatically removed. \"\n \"If you want to use the OpenAI-compatible API, please use the OpenAI component instead. \"\n \"Learn more at https://docs.ollama.com/openai#openai-compatibility\"\n )\n\n return ChatOllama(\n base_url=transformed_base_url,\n model=model_name,\n temperature=temperature,\n )\n msg = f\"Unknown provider: {provider}\"\n raise ValueError(msg)\n\n async def update_build_config(\n self, build_config: dotdict, field_value: Any, field_name: str | None = None\n ) -> dotdict:\n if field_name == \"provider\":\n if field_value == \"OpenAI\":\n build_config[\"model_name\"][\"options\"] = OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES\n build_config[\"model_name\"][\"value\"] = OPENAI_CHAT_MODEL_NAMES[0]\n build_config[\"api_key\"][\"display_name\"] = \"OpenAI API Key\"\n build_config[\"api_key\"][\"show\"] = True\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = False\n build_config[\"project_id\"][\"show\"] = False\n build_config[\"ollama_base_url\"][\"show\"] = False\n elif field_value == \"Anthropic\":\n build_config[\"model_name\"][\"options\"] = ANTHROPIC_MODELS\n build_config[\"model_name\"][\"value\"] = ANTHROPIC_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Anthropic API Key\"\n build_config[\"api_key\"][\"show\"] = True\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = False\n build_config[\"project_id\"][\"show\"] = False\n build_config[\"ollama_base_url\"][\"show\"] = False\n elif field_value == \"Google\":\n build_config[\"model_name\"][\"options\"] = GOOGLE_GENERATIVE_AI_MODELS\n build_config[\"model_name\"][\"value\"] = GOOGLE_GENERATIVE_AI_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Google API Key\"\n build_config[\"api_key\"][\"show\"] = True\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = False\n build_config[\"project_id\"][\"show\"] = False\n build_config[\"ollama_base_url\"][\"show\"] = False\n elif field_value == \"IBM watsonx.ai\":\n build_config[\"model_name\"][\"options\"] = IBM_WATSONX_DEFAULT_MODELS\n build_config[\"model_name\"][\"value\"] = IBM_WATSONX_DEFAULT_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"IBM API Key\"\n build_config[\"api_key\"][\"show\"] = True\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = 
True\n build_config[\"project_id\"][\"show\"] = True\n build_config[\"ollama_base_url\"][\"show\"] = False\n elif field_value == \"Ollama\":\n # Fetch Ollama models from the API\n build_config[\"api_key\"][\"show\"] = False\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = False\n build_config[\"project_id\"][\"show\"] = False\n build_config[\"ollama_base_url\"][\"show\"] = True\n\n # Try multiple sources to get the URL (in order of preference):\n # 1. Instance attribute (already resolved from global/db)\n # 2. Build config value (may be a global variable reference)\n # 3. Default value\n ollama_url = getattr(self, \"ollama_base_url\", None)\n if not ollama_url:\n config_value = build_config[\"ollama_base_url\"].get(\"value\", DEFAULT_OLLAMA_URL)\n # If config_value looks like a variable name (all caps with underscores), use default\n is_variable_ref = (\n config_value\n and isinstance(config_value, str)\n and config_value.isupper()\n and \"_\" in config_value\n )\n if is_variable_ref:\n await logger.adebug(\n f\"Config value appears to be a variable reference: {config_value}, using default\"\n )\n ollama_url = DEFAULT_OLLAMA_URL\n else:\n ollama_url = config_value\n\n await logger.adebug(f\"Fetching Ollama models for provider switch. URL: {ollama_url}\")\n if await is_valid_ollama_url(url=ollama_url):\n try:\n models = await get_ollama_models(\n base_url_value=ollama_url,\n desired_capability=DESIRED_CAPABILITY,\n json_models_key=JSON_MODELS_KEY,\n json_name_key=JSON_NAME_KEY,\n json_capabilities_key=JSON_CAPABILITIES_KEY,\n )\n build_config[\"model_name\"][\"options\"] = models\n build_config[\"model_name\"][\"value\"] = models[0] if models else \"\"\n except ValueError:\n await logger.awarning(\"Failed to fetch Ollama models. Setting empty options.\")\n build_config[\"model_name\"][\"options\"] = []\n build_config[\"model_name\"][\"value\"] = \"\"\n else:\n await logger.awarning(f\"Invalid Ollama URL: {ollama_url}\")\n build_config[\"model_name\"][\"options\"] = []\n build_config[\"model_name\"][\"value\"] = \"\"\n elif (\n field_name == \"base_url_ibm_watsonx\"\n and field_value\n and hasattr(self, \"provider\")\n and self.provider == \"IBM watsonx.ai\"\n ):\n # Fetch IBM models when base_url changes\n try:\n models = self.fetch_ibm_models(base_url=field_value)\n build_config[\"model_name\"][\"options\"] = models\n build_config[\"model_name\"][\"value\"] = models[0] if models else IBM_WATSONX_DEFAULT_MODELS[0]\n info_message = f\"Updated model options: {len(models)} models found in {field_value}\"\n logger.info(info_message)\n except Exception: # noqa: BLE001\n logger.exception(\"Error updating IBM model options.\")\n elif field_name == \"ollama_base_url\":\n # Fetch Ollama models when ollama_base_url changes\n # Use the field_value directly since this is triggered when the field changes\n logger.debug(\n f\"Fetching Ollama models from updated URL: {build_config['ollama_base_url']} \\\n and value {self.ollama_base_url}\",\n )\n await logger.adebug(f\"Fetching Ollama models from updated URL: {self.ollama_base_url}\")\n if await is_valid_ollama_url(url=self.ollama_base_url):\n try:\n models = await get_ollama_models(\n base_url_value=self.ollama_base_url,\n desired_capability=DESIRED_CAPABILITY,\n json_models_key=JSON_MODELS_KEY,\n json_name_key=JSON_NAME_KEY,\n json_capabilities_key=JSON_CAPABILITIES_KEY,\n )\n build_config[\"model_name\"][\"options\"] = models\n build_config[\"model_name\"][\"value\"] = models[0] if models else \"\"\n info_message = f\"Updated model options: 
{len(models)} models found in {self.ollama_base_url}\"\n await logger.ainfo(info_message)\n except ValueError:\n await logger.awarning(\"Error updating Ollama model options.\")\n build_config[\"model_name\"][\"options\"] = []\n build_config[\"model_name\"][\"value\"] = \"\"\n else:\n await logger.awarning(f\"Invalid Ollama URL: {self.ollama_base_url}\")\n build_config[\"model_name\"][\"options\"] = []\n build_config[\"model_name\"][\"value\"] = \"\"\n elif field_name == \"model_name\":\n # Refresh Ollama models when model_name field is accessed\n if hasattr(self, \"provider\") and self.provider == \"Ollama\":\n ollama_url = getattr(self, \"ollama_base_url\", DEFAULT_OLLAMA_URL)\n if await is_valid_ollama_url(url=ollama_url):\n try:\n models = await get_ollama_models(\n base_url_value=ollama_url,\n desired_capability=DESIRED_CAPABILITY,\n json_models_key=JSON_MODELS_KEY,\n json_name_key=JSON_NAME_KEY,\n json_capabilities_key=JSON_CAPABILITIES_KEY,\n )\n build_config[\"model_name\"][\"options\"] = models\n except ValueError:\n await logger.awarning(\"Failed to refresh Ollama models.\")\n build_config[\"model_name\"][\"options\"] = []\n else:\n build_config[\"model_name\"][\"options\"] = []\n\n # Hide system_message for o1 models - currently unsupported\n if field_value and field_value.startswith(\"o1\") and hasattr(self, \"provider\") and self.provider == \"OpenAI\":\n if \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = False\n elif \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = True\n return build_config\n" |
🧩 Analysis chain
Mix of synchronous and asynchronous logging in update_build_config may indicate incomplete refactoring.
In the elif field_name == "ollama_base_url": branch (approximately line 1308 in the embedded code), there is a call to logger.debug(...) (synchronous) immediately followed by await logger.adebug(...) (asynchronous). This pattern suggests either:
- Incomplete migration to async logging during the refactor.
- Unintended duplication of the same log message at different severity levels.
Review this section and remove the redundant sync call, or clarify the logging intent.
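For illustration, a minimal sketch of the duplicated pattern and its deduplicated form (the names mirror the embedded component code; this is not the committed fix):

# Before: the same message is logged twice, once synchronously and once asynchronously.
logger.debug(f"Fetching Ollama models from updated URL: {self.ollama_base_url}")
await logger.adebug(f"Fetching Ollama models from updated URL: {self.ollama_base_url}")

# After: keep only the async call, consistent with the rest of the async method.
await logger.adebug(f"Fetching Ollama models from updated URL: {self.ollama_base_url}")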
Search for mixed sync/async logger calls in the LanguageModelComponent code:
🏁 Script executed:
#!/bin/bash
# Find all logger.debug and logger.adebug calls near line 1308
rg -n "logger\.(debug|adebug|info|ainfo|warning|awarning)" --type=py src/backend/base/langflow/base/models/language_model.py | head -30Length of output: 95
I need to search for the correct file location. The code is embedded in a JSON file, not a standalone Python file.
---
🏁 Script executed:
#!/bin/bash
# Search for the Blog Writer.json file to find the correct path
fd -name "Blog Writer.json" --type f
Length of output: 230
---
🏁 Script executed:
#!/bin/bash
# Search for the Blog Writer.json file with correct fd syntax
fd "Blog Writer" --type f
Length of output: 189
---
🏁 Script executed:
#!/bin/bash
# Search for the mixed logging pattern (logger.debug followed by logger.adebug) in the JSON file
# Look for the ollama_base_url section
rg -A5 -B5 "ollama_base_url" "src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json" | grep -A10 -B10 "logger\."
Length of output: 19946
Remove redundant synchronous logger.debug() call in the ollama_base_url branch.
In the elif field_name == "ollama_base_url": section, there are two consecutive logging calls—logger.debug(...) (synchronous) followed by await logger.adebug(...) (asynchronous)—that log similar information. Remove the sync call to complete the async migration:
elif field_name == "ollama_base_url":
    # Fetch Ollama models when ollama_base_url changes
    await logger.adebug(f"Fetching Ollama models from updated URL: {self.ollama_base_url}")

🤖 Prompt for AI Agents
In src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json
around line 1463, remove the redundant synchronous logger.debug(...) call inside
the elif branch for field_name == "ollama_base_url" and keep only the
asynchronous await logger.adebug(...) call that logs the updated Ollama URL;
ensure you delete the multiline logger.debug(...) invocation and its f-string so
the code uses solely await logger.adebug(f"Fetching Ollama models from updated
URL: {self.ollama_base_url}") before proceeding with is_valid_ollama_url and
model fetching.
🧩 Analysis chain
Verify that ollama_base_url instance attribute is reliably initialized before use in async provider switches.
In the elif field_value == "Ollama": branch of update_build_config, the code attempts to retrieve ollama_base_url using getattr(self, "ollama_base_url", None). If the attribute has not been set by the component framework at the time this method is called, the code falls back to build_config["ollama_base_url"].get("value", DEFAULT_OLLAMA_URL). However, in the later elif field_name == "ollama_base_url": branch, the code directly accesses self.ollama_base_url without a fallback.
Ensure consistency and add defensive checks:
elif field_name == "ollama_base_url":
# Fetch Ollama models when ollama_base_url changes
- ollama_url = self.ollama_base_url
+ ollama_url = getattr(self, "ollama_base_url", DEFAULT_OLLAMA_URL)

I'll verify this review comment by examining the actual codebase structure and the component implementation.
🏁 Script executed:
#!/bin/bash
# Find the actual LanguageModelComponent source file (not the JSON starter project)
fd -e py "LanguageModel" --type f | head -20
Length of output: 0
🏁 Script executed:
#!/bin/bash
# Search for the LanguageModelComponent class definition
rg -n "class LanguageModelComponent" --type py -A 5 | head -40Length of output: 605
🏁 Script executed:
#!/bin/bash
# Locate the update_build_config method to verify the exact code pattern
rg -n "async def update_build_config" --type py -A 200 | grep -A 200 "LanguageModelComponent" | head -250Length of output: 0
🏁 Script executed:
#!/bin/bash
# Check if ollama_base_url is declared as an input field and how it's defined
rg -n "ollama_base_url" --type py -B 2 -A 2 | head -60Length of output: 6585
🏁 Script executed:
#!/bin/bash
# Examine how inputs are typically processed/assigned to instance attributes
rg -n "MessageTextInput" --type py -B 2 -A 5 | head -80Length of output: 6917
🏁 Script executed:
#!/bin/bash
# Check the inputs list definition for MessageTextInput fields
rg -n "class.*Component" --type py -A 100 | grep -A 100 "inputs = \[" | head -150Length of output: 14871
🏁 Script executed:
#!/bin/bash
# Check if there's a base class that might initialize these attributes
rg -n "class LCModelComponent" --type py -A 20Length of output: 1508
🏁 Script executed:
#!/bin/bash
# Search for how MessageTextInput fields become instance attributes
rg -n "def __init__|self\\.ollama_base_url\s*=" --type py -B 3 -A 3 | head -100Length of output: 7215
🏁 Script executed:
#!/bin/bash
# Check the LangFlow framework to see if there's automatic attribute mapping
find . -name "*.py" -type f | xargs grep -l "MessageTextInput" | head -5Length of output: 276
🏁 Script executed:
#!/bin/bash
# Get more context about the actual implementation file location
find . -path "*/langflow/*" -name "*model*.py" -type f | grep -i component | head -10Length of output: 0
🏁 Script executed:
#!/bin/bash
# Look for the component implementation that would have this code
find . -path "*/langflow/*" -name "*.py" -type f -exec grep -l "update_build_config" {} \; | head -10Length of output: 0
🏁 Script executed:
#!/bin/bash
# Search for any MessageTextInput usage patterns in langflow components
rg -n "MessageTextInput\(" --type py -A 3 | head -80Length of output: 6969
🏁 Script executed:
#!/bin/bash
# Check the actual structure - look for ollama in component definitions
rg -n "ollama_base_url" -B 5 -A 5 --type py | head -120Length of output: 15671
🏁 Script executed:
#!/bin/bash
# Verify if there's attribute initialization through framework mechanisms
rg -n "setattr|__setattr__|getattr|load_from_db" --type py --context=2 | grep -i component | head -60Length of output: 5549
🏁 Script executed:
#!/bin/bash
# Try to find where input fields become instance attributes
find . -path "*/langflow/*" -name "*.py" -exec grep -l "class.*Component" {} \; | head -5Length of output: 410
🏁 Script executed:
#!/bin/bash
# Check the specific Blog Writer.json file to understand the structure
cat "src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json" | head -100Length of output: 3847
🏁 Script executed:
#!/bin/bash
# Search for how component frameworks typically map input fields to attributes
rg -n "def build_model|async def update_build_config" --type py -B 5 | head -150Length of output: 11729
🏁 Script executed:
#!/bin/bash
# Look for the specific code in the JSON file mentioned in the review
jq '.nodes[] | select(.data.value != null) | .data.value' "src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json" | grep -A 300 "LanguageModelComponent" | head -350
Length of output: 126
🏁 Script executed:
#!/bin/bash
# Check if ollama_base_url access patterns differ between the two branches in the code
rg -n "elif field_name == \"ollama_base_url\":|elif field_value == \"Ollama\":" --type py -A 30 | head -200Length of output: 13877
🏁 Script executed:
#!/bin/bash
# Verify the specific line numbers if the JSON contains the Python code
wc -l "src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json"Length of output: 78
🏁 Script executed:
#!/bin/bash
# Extract and examine line 1463 from the JSON file
sed -n '1463p' "src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json" | head -c 500
Length of output: 500
🏁 Script executed:
#!/bin/bash
# Check how Python components in langflow typically initialize instance attributes from input fields
find . -path "*/langflow/*" -name "*.py" -type f | head -3 | xargs grep -l "self\\..*=" | xargs grep -B 10 "def __init__" | head -100Length of output: 1397
🏁 Script executed:
#!/bin/bash
# Look for the framework that handles the automatic mapping of input fields to instance attributes
rg -n "class.*Component.*:" --type py | head -10Length of output: 870
🏁 Script executed:
#!/bin/bash
# Check if inputs are automatically mapped to attributes through a metaclass or descriptor
rg -n "@property|def __getattr__|inputs.*=" --type py | head -100Length of output: 7999
🏁 Script executed:
#!/bin/bash
# Search for how the async update_build_config is invoked and when ollama_base_url might be available
rg -n "update_build_config\(" --type py -B 5 -A 5 | head -100Length of output: 8846
🏁 Script executed:
#!/bin/bash
# Identify the framework's component base class to understand attribute initialization
rg -n "class.*Component" --type py | grep -E "(Base|LC|Model)" | head -10Length of output: 972
🏁 Script executed:
#!/bin/bash
# Check the actual MessageTextInput definition and if load_from_db affects attribute availability
rg -n "class MessageTextInput|load_from_db" --type py -B 2 -A 5 | head -100Length of output: 9297
🏁 Script executed:
#!/bin/bash
# Final verification: Look for direct attribute access without getattr in the context
rg -n "self\\.ollama_base_url" --type py -B 3 -A 3 | head -200Length of output: 7802
🏁 Script executed:
#!/bin/bash
# Check if the issue is about uninitialized attributes in general within update_build_config
rg -n "def update_build_config" --type py -A 200 | grep -E "(self\\.|getattr)" | head -150Length of output: 16621
🏁 Script executed:
#!/bin/bash
# Verify if ollama_base_url as instance attribute is guaranteed to exist
rg -n "self\\.ollama_base_url\\s*=" --type pyLength of output: 0
🏁 Script executed:
#!/bin/bash
# Look for any initialization patterns in the base class
rg -n "class LCModelComponent" --type py -A 50Length of output: 3832
🏁 Script executed:
#!/bin/bash
# Check if inputs dict is converted to attributes automatically
rg -n "def __init__|inputs.*=.*\\[" --type py -A 50 | head -150Length of output: 15260
🏁 Script executed:
#!/bin/bash
# Look at how MessageTextInput with load_from_db is handled
rg -n "load_from_db.*True" --type py -B 5 -A 5 | head -100Length of output: 8775
🏁 Script executed:
#!/bin/bash
# Search for the actual component implementation that contains the problematic code
find . -name "*.py" -path "*/langflow/*" -exec grep -l "ollama_base_url" {} \; Length of output: 0
🏁 Script executed:
#!/bin/bash
# Check what happens when an attribute is accessed but not yet initialized
rg -n "AttributeError|getattr.*DEFAULT" --type py | head -50Length of output: 5004
🏁 Script executed:
#!/bin/bash
# Directly search for the exact code pattern in the branch conditions
rg -n "field_name == \"ollama_base_url\"" --type py -A 20 | head -150Length of output: 4786
Apply defensive fallback to ollama_base_url access in elif field_name == "ollama_base_url": branch.
The inconsistency is confirmed. The first branch (elif field_value == "Ollama") uses getattr(self, "ollama_base_url", None) with fallback logic, but the second branch (elif field_name == "ollama_base_url") directly accesses self.ollama_base_url without any defensive check. This can cause an AttributeError if the attribute is not yet initialized when that branch is triggered.
Apply the suggested fix to both locations:
- src/lfx/src/lfx/components/models/language_model.py (line 343+)
- src/lfx/src/lfx/components/models/embedding_model.py (line 260+)
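A minimal sketch of a shared resolution helper both branches could use (the helper name is illustrative; DEFAULT_OLLAMA_URL is the module constant already defined in these components):

def _resolve_ollama_url(self, field_value: str | None = None) -> str:
    # Prefer the freshly edited field value, then the instance attribute,
    # then the module default, so no branch can raise AttributeError.
    return field_value or getattr(self, "ollama_base_url", None) or DEFAULT_OLLAMA_URL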
🤖 Prompt for AI Agents
In src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json
around line 1463 (and mirror changes in
src/lfx/src/lfx/components/models/language_model.py line ~343+ and
src/lfx/src/lfx/components/models/embedding_model.py line ~260+), replace direct
accesses to self.ollama_base_url with a defensive getattr fallback (e.g.,
ollama_url = getattr(self, "ollama_base_url", None) or use the provided
field_value/default constant when available), use that local ollama_url variable
for validation/logging/get_ollama_models calls, and ensure logging and
assignment use the resolved value so AttributeError is avoided when the
attribute is not initialized.
| "value": "from typing import Any\n\nimport requests\nfrom langchain_anthropic import ChatAnthropic\nfrom langchain_ibm import ChatWatsonx\nfrom langchain_ollama import ChatOllama\nfrom langchain_openai import ChatOpenAI\nfrom pydantic.v1 import SecretStr\n\nfrom lfx.base.models.anthropic_constants import ANTHROPIC_MODELS\nfrom lfx.base.models.google_generative_ai_constants import GOOGLE_GENERATIVE_AI_MODELS\nfrom lfx.base.models.google_generative_ai_model import ChatGoogleGenerativeAIFixed\nfrom lfx.base.models.model import LCModelComponent\nfrom lfx.base.models.model_utils import get_ollama_models, is_valid_ollama_url\nfrom lfx.base.models.openai_constants import OPENAI_CHAT_MODEL_NAMES, OPENAI_REASONING_MODEL_NAMES\nfrom lfx.field_typing import LanguageModel\nfrom lfx.field_typing.range_spec import RangeSpec\nfrom lfx.inputs.inputs import BoolInput, MessageTextInput, StrInput\nfrom lfx.io import DropdownInput, MessageInput, MultilineInput, SecretStrInput, SliderInput\nfrom lfx.log.logger import logger\nfrom lfx.schema.dotdict import dotdict\nfrom lfx.utils.util import transform_localhost_url\n\n# IBM watsonx.ai constants\nIBM_WATSONX_DEFAULT_MODELS = [\"ibm/granite-3-2b-instruct\", \"ibm/granite-3-8b-instruct\", \"ibm/granite-13b-instruct-v2\"]\nIBM_WATSONX_URLS = [\n \"https://us-south.ml.cloud.ibm.com\",\n \"https://eu-de.ml.cloud.ibm.com\",\n \"https://eu-gb.ml.cloud.ibm.com\",\n \"https://au-syd.ml.cloud.ibm.com\",\n \"https://jp-tok.ml.cloud.ibm.com\",\n \"https://ca-tor.ml.cloud.ibm.com\",\n]\n\n# Ollama API constants\nHTTP_STATUS_OK = 200\nJSON_MODELS_KEY = \"models\"\nJSON_NAME_KEY = \"name\"\nJSON_CAPABILITIES_KEY = \"capabilities\"\nDESIRED_CAPABILITY = \"completion\"\nDEFAULT_OLLAMA_URL = \"http://localhost:11434\"\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n @staticmethod\n def fetch_ibm_models(base_url: str) -> list[str]:\n \"\"\"Fetch available models from the watsonx.ai API.\"\"\"\n try:\n endpoint = f\"{base_url}/ml/v1/foundation_model_specs\"\n params = {\"version\": \"2024-09-16\", \"filters\": \"function_text_chat,!lifecycle_withdrawn\"}\n response = requests.get(endpoint, params=params, timeout=10)\n response.raise_for_status()\n data = response.json()\n models = [model[\"model_id\"] for model in data.get(\"resources\", [])]\n return sorted(models)\n except Exception: # noqa: BLE001\n logger.exception(\"Error fetching IBM watsonx models. 
Using default models.\")\n return IBM_WATSONX_DEFAULT_MODELS\n\n inputs = [\n DropdownInput(\n name=\"provider\",\n display_name=\"Model Provider\",\n options=[\"OpenAI\", \"Anthropic\", \"Google\", \"IBM watsonx.ai\", \"Ollama\"],\n value=\"OpenAI\",\n info=\"Select the model provider\",\n real_time_refresh=True,\n options_metadata=[\n {\"icon\": \"OpenAI\"},\n {\"icon\": \"Anthropic\"},\n {\"icon\": \"GoogleGenerativeAI\"},\n {\"icon\": \"WatsonxAI\"},\n {\"icon\": \"Ollama\"},\n ],\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n options=OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES,\n value=OPENAI_CHAT_MODEL_NAMES[0],\n info=\"Select the model to use\",\n real_time_refresh=True,\n refresh_button=True,\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n ),\n DropdownInput(\n name=\"base_url_ibm_watsonx\",\n display_name=\"watsonx API Endpoint\",\n info=\"The base URL of the API (IBM watsonx.ai only)\",\n options=IBM_WATSONX_URLS,\n value=IBM_WATSONX_URLS[0],\n show=False,\n real_time_refresh=True,\n ),\n StrInput(\n name=\"project_id\",\n display_name=\"watsonx Project ID\",\n info=\"The project ID associated with the foundation model (IBM watsonx.ai only)\",\n show=False,\n required=False,\n ),\n MessageTextInput(\n name=\"ollama_base_url\",\n display_name=\"Ollama API URL\",\n info=f\"Endpoint of the Ollama API (Ollama only). Defaults to {DEFAULT_OLLAMA_URL}\",\n value=DEFAULT_OLLAMA_URL,\n show=False,\n real_time_refresh=True,\n load_from_db=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=False,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n provider = self.provider\n model_name = self.model_name\n temperature = self.temperature\n stream = self.stream\n\n if provider == \"OpenAI\":\n if not self.api_key:\n msg = \"OpenAI API key is required when using OpenAI provider\"\n raise ValueError(msg)\n\n if model_name in OPENAI_REASONING_MODEL_NAMES:\n # reasoning models do not support temperature (yet)\n temperature = None\n\n return ChatOpenAI(\n model_name=model_name,\n temperature=temperature,\n streaming=stream,\n openai_api_key=self.api_key,\n )\n if provider == \"Anthropic\":\n if not self.api_key:\n msg = \"Anthropic API key is required when using Anthropic provider\"\n raise ValueError(msg)\n return ChatAnthropic(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n anthropic_api_key=self.api_key,\n )\n if provider == \"Google\":\n if not self.api_key:\n msg = \"Google API key is required when using Google provider\"\n raise ValueError(msg)\n return ChatGoogleGenerativeAIFixed(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n google_api_key=self.api_key,\n )\n if provider == \"IBM watsonx.ai\":\n if not self.api_key:\n msg = \"IBM API key is required when using IBM watsonx.ai provider\"\n raise 
ValueError(msg)\n if not self.base_url_ibm_watsonx:\n msg = \"IBM watsonx API Endpoint is required when using IBM watsonx.ai provider\"\n raise ValueError(msg)\n if not self.project_id:\n msg = \"IBM watsonx Project ID is required when using IBM watsonx.ai provider\"\n raise ValueError(msg)\n return ChatWatsonx(\n apikey=SecretStr(self.api_key).get_secret_value(),\n url=self.base_url_ibm_watsonx,\n project_id=self.project_id,\n model_id=model_name,\n params={\n \"temperature\": temperature,\n },\n streaming=stream,\n )\n if provider == \"Ollama\":\n if not self.ollama_base_url:\n msg = \"Ollama API URL is required when using Ollama provider\"\n raise ValueError(msg)\n if not model_name:\n msg = \"Model name is required when using Ollama provider\"\n raise ValueError(msg)\n\n transformed_base_url = transform_localhost_url(self.ollama_base_url)\n\n # Check if URL contains /v1 suffix (OpenAI-compatible mode)\n if transformed_base_url and transformed_base_url.rstrip(\"/\").endswith(\"/v1\"):\n # Strip /v1 suffix and log warning\n transformed_base_url = transformed_base_url.rstrip(\"/\").removesuffix(\"/v1\")\n logger.warning(\n \"Detected '/v1' suffix in base URL. The Ollama component uses the native Ollama API, \"\n \"not the OpenAI-compatible API. The '/v1' suffix has been automatically removed. \"\n \"If you want to use the OpenAI-compatible API, please use the OpenAI component instead. \"\n \"Learn more at https://docs.ollama.com/openai#openai-compatibility\"\n )\n\n return ChatOllama(\n base_url=transformed_base_url,\n model=model_name,\n temperature=temperature,\n )\n msg = f\"Unknown provider: {provider}\"\n raise ValueError(msg)\n\n async def update_build_config(\n self, build_config: dotdict, field_value: Any, field_name: str | None = None\n ) -> dotdict:\n if field_name == \"provider\":\n if field_value == \"OpenAI\":\n build_config[\"model_name\"][\"options\"] = OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES\n build_config[\"model_name\"][\"value\"] = OPENAI_CHAT_MODEL_NAMES[0]\n build_config[\"api_key\"][\"display_name\"] = \"OpenAI API Key\"\n build_config[\"api_key\"][\"show\"] = True\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = False\n build_config[\"project_id\"][\"show\"] = False\n build_config[\"ollama_base_url\"][\"show\"] = False\n elif field_value == \"Anthropic\":\n build_config[\"model_name\"][\"options\"] = ANTHROPIC_MODELS\n build_config[\"model_name\"][\"value\"] = ANTHROPIC_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Anthropic API Key\"\n build_config[\"api_key\"][\"show\"] = True\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = False\n build_config[\"project_id\"][\"show\"] = False\n build_config[\"ollama_base_url\"][\"show\"] = False\n elif field_value == \"Google\":\n build_config[\"model_name\"][\"options\"] = GOOGLE_GENERATIVE_AI_MODELS\n build_config[\"model_name\"][\"value\"] = GOOGLE_GENERATIVE_AI_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Google API Key\"\n build_config[\"api_key\"][\"show\"] = True\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = False\n build_config[\"project_id\"][\"show\"] = False\n build_config[\"ollama_base_url\"][\"show\"] = False\n elif field_value == \"IBM watsonx.ai\":\n build_config[\"model_name\"][\"options\"] = IBM_WATSONX_DEFAULT_MODELS\n build_config[\"model_name\"][\"value\"] = IBM_WATSONX_DEFAULT_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"IBM API Key\"\n build_config[\"api_key\"][\"show\"] = True\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = 
True\n build_config[\"project_id\"][\"show\"] = True\n build_config[\"ollama_base_url\"][\"show\"] = False\n elif field_value == \"Ollama\":\n # Fetch Ollama models from the API\n build_config[\"api_key\"][\"show\"] = False\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = False\n build_config[\"project_id\"][\"show\"] = False\n build_config[\"ollama_base_url\"][\"show\"] = True\n\n # Try multiple sources to get the URL (in order of preference):\n # 1. Instance attribute (already resolved from global/db)\n # 2. Build config value (may be a global variable reference)\n # 3. Default value\n ollama_url = getattr(self, \"ollama_base_url\", None)\n if not ollama_url:\n config_value = build_config[\"ollama_base_url\"].get(\"value\", DEFAULT_OLLAMA_URL)\n # If config_value looks like a variable name (all caps with underscores), use default\n is_variable_ref = (\n config_value\n and isinstance(config_value, str)\n and config_value.isupper()\n and \"_\" in config_value\n )\n if is_variable_ref:\n await logger.adebug(\n f\"Config value appears to be a variable reference: {config_value}, using default\"\n )\n ollama_url = DEFAULT_OLLAMA_URL\n else:\n ollama_url = config_value\n\n await logger.adebug(f\"Fetching Ollama models for provider switch. URL: {ollama_url}\")\n if await is_valid_ollama_url(url=ollama_url):\n try:\n models = await get_ollama_models(\n base_url_value=ollama_url,\n desired_capability=DESIRED_CAPABILITY,\n json_models_key=JSON_MODELS_KEY,\n json_name_key=JSON_NAME_KEY,\n json_capabilities_key=JSON_CAPABILITIES_KEY,\n )\n build_config[\"model_name\"][\"options\"] = models\n build_config[\"model_name\"][\"value\"] = models[0] if models else \"\"\n except ValueError:\n await logger.awarning(\"Failed to fetch Ollama models. Setting empty options.\")\n build_config[\"model_name\"][\"options\"] = []\n build_config[\"model_name\"][\"value\"] = \"\"\n else:\n await logger.awarning(f\"Invalid Ollama URL: {ollama_url}\")\n build_config[\"model_name\"][\"options\"] = []\n build_config[\"model_name\"][\"value\"] = \"\"\n elif (\n field_name == \"base_url_ibm_watsonx\"\n and field_value\n and hasattr(self, \"provider\")\n and self.provider == \"IBM watsonx.ai\"\n ):\n # Fetch IBM models when base_url changes\n try:\n models = self.fetch_ibm_models(base_url=field_value)\n build_config[\"model_name\"][\"options\"] = models\n build_config[\"model_name\"][\"value\"] = models[0] if models else IBM_WATSONX_DEFAULT_MODELS[0]\n info_message = f\"Updated model options: {len(models)} models found in {field_value}\"\n logger.info(info_message)\n except Exception: # noqa: BLE001\n logger.exception(\"Error updating IBM model options.\")\n elif field_name == \"ollama_base_url\":\n # Fetch Ollama models when ollama_base_url changes\n # Use the field_value directly since this is triggered when the field changes\n logger.debug(\n f\"Fetching Ollama models from updated URL: {build_config['ollama_base_url']} \\\n and value {self.ollama_base_url}\",\n )\n await logger.adebug(f\"Fetching Ollama models from updated URL: {self.ollama_base_url}\")\n if await is_valid_ollama_url(url=self.ollama_base_url):\n try:\n models = await get_ollama_models(\n base_url_value=self.ollama_base_url,\n desired_capability=DESIRED_CAPABILITY,\n json_models_key=JSON_MODELS_KEY,\n json_name_key=JSON_NAME_KEY,\n json_capabilities_key=JSON_CAPABILITIES_KEY,\n )\n build_config[\"model_name\"][\"options\"] = models\n build_config[\"model_name\"][\"value\"] = models[0] if models else \"\"\n info_message = f\"Updated model options: 
{len(models)} models found in {self.ollama_base_url}\"\n await logger.ainfo(info_message)\n except ValueError:\n await logger.awarning(\"Error updating Ollama model options.\")\n build_config[\"model_name\"][\"options\"] = []\n build_config[\"model_name\"][\"value\"] = \"\"\n else:\n await logger.awarning(f\"Invalid Ollama URL: {self.ollama_base_url}\")\n build_config[\"model_name\"][\"options\"] = []\n build_config[\"model_name\"][\"value\"] = \"\"\n elif field_name == \"model_name\":\n # Refresh Ollama models when model_name field is accessed\n if hasattr(self, \"provider\") and self.provider == \"Ollama\":\n ollama_url = getattr(self, \"ollama_base_url\", DEFAULT_OLLAMA_URL)\n if await is_valid_ollama_url(url=ollama_url):\n try:\n models = await get_ollama_models(\n base_url_value=ollama_url,\n desired_capability=DESIRED_CAPABILITY,\n json_models_key=JSON_MODELS_KEY,\n json_name_key=JSON_NAME_KEY,\n json_capabilities_key=JSON_CAPABILITIES_KEY,\n )\n build_config[\"model_name\"][\"options\"] = models\n except ValueError:\n await logger.awarning(\"Failed to refresh Ollama models.\")\n build_config[\"model_name\"][\"options\"] = []\n else:\n build_config[\"model_name\"][\"options\"] = []\n\n # Hide system_message for o1 models - currently unsupported\n if field_value and field_value.startswith(\"o1\") and hasattr(self, \"provider\") and self.provider == \"OpenAI\":\n if \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = False\n elif \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = True\n return build_config\n" | ||
| }, |
There was a problem hiding this comment.
🧩 Analysis chain
Verify async update_build_config signature change and ensure all callers await the result.
The signature changed from def update_build_config(...) to async def update_build_config(...). This is a breaking API change that affects all code invoking this method. Confirm that the component's caller (likely the framework's build system) correctly awaits this method; otherwise the returned coroutine is never awaited and config updates will be silently ignored.
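For callers that cannot yet assume the new signature, a hedged compatibility sketch using only the standard library (illustrative, not the framework's actual dispatch code):

import inspect

async def apply_build_config_update(component, build_config, field_value, field_name):
    # Handles both the old sync and the new async update_build_config.
    result = component.update_build_config(build_config, field_value, field_name)
    if inspect.isawaitable(result):
        result = await result
    return result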
Search the codebase for all calls to update_build_config on LanguageModelComponent to confirm they now use await:
🏁 Script executed:
#!/bin/bash
# Find all callers of update_build_config to verify they use await
rg -n "update_build_config" --type=py | grep -v "def update_build_config" | head -20Length of output: 2677
🏁 Script executed:
#!/bin/bash
# Check slack_composio.py line 582 and surrounding context
echo "=== slack_composio.py line 582 ==="
sed -n '575,590p' src/lfx/src/lfx/components/composio/slack_composio.py
# Check test_helper_components.py line 45 and surrounding context
echo -e "\n=== test_helper_components.py line 45 ==="
sed -n '38,55p' src/backend/tests/unit/test_helper_components.py
# Check if LanguageModelComponent is used anywhere in the codebase
echo -e "\n=== Search for LanguageModelComponent invocations ==="
rg -n "LanguageModelComponent" --type=py -A2 -B2 | head -30Length of output: 4506
🏁 Script executed:
#!/bin/bash
# Find LanguageModelComponent definition and parent class
echo "=== LanguageModelComponent class definition ==="
rg -n "class LanguageModelComponent" --type=py -A3
# Find SlackComposio class definition and parent
echo -e "\n=== SlackComposio class definition ==="
rg -n "class.*SlackComposio" --type=py -A3
# Check test_language_model_component.py for actual test calls to update_build_config
echo -e "\n=== Test calls in test_language_model_component.py ==="
rg -n "await.*update_build_config\|\.update_build_config" src/backend/tests/unit/components/models/test_language_model_component.py -A1 -B1Length of output: 588
🏁 Script executed:
#!/bin/bash
# Find LCModelComponent and check its update_build_config method
echo "=== LCModelComponent class definition ==="
rg -n "class LCModelComponent" --type=py -A10
# Check if LCModelComponent has update_build_config method
echo -e "\n=== LCModelComponent.update_build_config ==="
rg -n "def update_build_config|async def update_build_config" src/lfx/src/lfx/base/models/model.py -B2 -A5 2>/dev/null || rg -n "def update_build_config|async def update_build_config" --type=py -B2 -A5 | grep -A5 -B2 "LCModelComponent\|models"
# View actual test_language_model_component.py content (whole file if small)
echo -e "\n=== test_language_model_component.py content ==="
wc -l src/backend/tests/unit/components/models/test_language_model_component.py
head -150 src/backend/tests/unit/components/models/test_language_model_component.py
Length of output: 20441
Add missing await keywords in test calls to update_build_config.
The update_build_config method is now async, but the tests call it without await. All five test methods in test_language_model_component.py (lines 75, 87, 99, 111, 126) must use await when calling the method:
# Before (incorrect):
updated_config = component.update_build_config(build_config, "OpenAI", "provider")
# After (correct):
updated_config = await component.update_build_config(build_config, "OpenAI", "provider")

Without await, the tests will receive coroutine objects instead of resolved results, causing assertion failures at runtime.
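A minimal sketch of the corrected test shape, assuming pytest-asyncio is available (the fixture names and assertion are illustrative):

import pytest

@pytest.mark.asyncio
async def test_provider_switch_to_openai(component, build_config):
    # Awaiting resolves the coroutine so the assertion sees the updated dict.
    updated_config = await component.update_build_config(build_config, "OpenAI", "provider")
    assert updated_config["api_key"]["display_name"] == "OpenAI API Key"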
🤖 Prompt for AI Agents
src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json around
lines 1463-1464: the review notes that update_build_config is now async but
tests call it without awaiting (lines 75, 87, 99, 111, 126 in
test_language_model_component.py), so fix each test by adding await before
component.update_build_config(...) (and ensure those test functions are declared
async or use an async test runner like pytest-asyncio) so the coroutine is
executed and the tests receive the resolved result.
| "value": "from typing import Any\n\nfrom langchain_openai import OpenAIEmbeddings\n\nfrom lfx.base.embeddings.model import LCEmbeddingsModel\nfrom lfx.base.models.model_utils import get_ollama_models, is_valid_ollama_url\nfrom lfx.base.models.openai_constants import OPENAI_EMBEDDING_MODEL_NAMES\nfrom lfx.base.models.watsonx_constants import IBM_WATSONX_URLS, WATSONX_EMBEDDING_MODEL_NAMES\nfrom lfx.field_typing import Embeddings\nfrom lfx.io import (\n BoolInput,\n DictInput,\n DropdownInput,\n FloatInput,\n IntInput,\n MessageTextInput,\n SecretStrInput,\n)\nfrom lfx.log.logger import logger\nfrom lfx.schema.dotdict import dotdict\nfrom lfx.utils.util import transform_localhost_url\n\n# Ollama API constants\nHTTP_STATUS_OK = 200\nJSON_MODELS_KEY = \"models\"\nJSON_NAME_KEY = \"name\"\nJSON_CAPABILITIES_KEY = \"capabilities\"\nDESIRED_CAPABILITY = \"embedding\"\nDEFAULT_OLLAMA_URL = \"http://localhost:11434\"\n\n\nclass EmbeddingModelComponent(LCEmbeddingsModel):\n display_name = \"Embedding Model\"\n description = \"Generate embeddings using a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-embedding-models\"\n icon = \"binary\"\n name = \"EmbeddingModel\"\n category = \"models\"\n\n inputs = [\n DropdownInput(\n name=\"provider\",\n display_name=\"Model Provider\",\n options=[\"OpenAI\", \"Ollama\", \"IBM watsonx.ai\"],\n value=\"OpenAI\",\n info=\"Select the embedding model provider\",\n real_time_refresh=True,\n options_metadata=[{\"icon\": \"OpenAI\"}, {\"icon\": \"Ollama\"}, {\"icon\": \"WatsonxAI\"}],\n ),\n MessageTextInput(\n name=\"api_base\",\n display_name=\"API Base URL\",\n info=\"Base URL for the API. Leave empty for default.\",\n advanced=True,\n ),\n MessageTextInput(\n name=\"ollama_base_url\",\n display_name=\"Ollama API URL\",\n info=f\"Endpoint of the Ollama API (Ollama only). Defaults to {DEFAULT_OLLAMA_URL}\",\n value=DEFAULT_OLLAMA_URL,\n show=False,\n real_time_refresh=True,\n load_from_db=True,\n ),\n DropdownInput(\n name=\"base_url_ibm_watsonx\",\n display_name=\"watsonx API Endpoint\",\n info=\"The base URL of the API (IBM watsonx.ai only)\",\n options=IBM_WATSONX_URLS,\n value=IBM_WATSONX_URLS[0],\n show=False,\n real_time_refresh=True,\n ),\n DropdownInput(\n name=\"model\",\n display_name=\"Model Name\",\n options=OPENAI_EMBEDDING_MODEL_NAMES,\n value=OPENAI_EMBEDDING_MODEL_NAMES[0],\n info=\"Select the embedding model to use\",\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"Model Provider API key\",\n required=True,\n show=True,\n real_time_refresh=True,\n ),\n # Watson-specific inputs\n MessageTextInput(\n name=\"project_id\",\n display_name=\"Project ID\",\n info=\"IBM watsonx.ai Project ID (required for IBM watsonx.ai)\",\n show=False,\n ),\n IntInput(\n name=\"dimensions\",\n display_name=\"Dimensions\",\n info=\"The number of dimensions the resulting output embeddings should have. 
\"\n \"Only supported by certain models.\",\n advanced=True,\n ),\n IntInput(name=\"chunk_size\", display_name=\"Chunk Size\", advanced=True, value=1000),\n FloatInput(name=\"request_timeout\", display_name=\"Request Timeout\", advanced=True),\n IntInput(name=\"max_retries\", display_name=\"Max Retries\", advanced=True, value=3),\n BoolInput(name=\"show_progress_bar\", display_name=\"Show Progress Bar\", advanced=True),\n DictInput(\n name=\"model_kwargs\",\n display_name=\"Model Kwargs\",\n advanced=True,\n info=\"Additional keyword arguments to pass to the model.\",\n ),\n ]\n\n def build_embeddings(self) -> Embeddings:\n provider = self.provider\n model = self.model\n api_key = self.api_key\n api_base = self.api_base\n base_url_ibm_watsonx = self.base_url_ibm_watsonx\n ollama_base_url = self.ollama_base_url\n dimensions = self.dimensions\n chunk_size = self.chunk_size\n request_timeout = self.request_timeout\n max_retries = self.max_retries\n show_progress_bar = self.show_progress_bar\n model_kwargs = self.model_kwargs or {}\n\n if provider == \"OpenAI\":\n if not api_key:\n msg = \"OpenAI API key is required when using OpenAI provider\"\n raise ValueError(msg)\n return OpenAIEmbeddings(\n model=model,\n dimensions=dimensions or None,\n base_url=api_base or None,\n api_key=api_key,\n chunk_size=chunk_size,\n max_retries=max_retries,\n timeout=request_timeout or None,\n show_progress_bar=show_progress_bar,\n model_kwargs=model_kwargs,\n )\n\n if provider == \"Ollama\":\n try:\n from langchain_ollama import OllamaEmbeddings\n except ImportError:\n try:\n from langchain_community.embeddings import OllamaEmbeddings\n except ImportError:\n msg = \"Please install langchain-ollama: pip install langchain-ollama\"\n raise ImportError(msg) from None\n\n transformed_base_url = transform_localhost_url(ollama_base_url)\n\n # Check if URL contains /v1 suffix (OpenAI-compatible mode)\n if transformed_base_url and transformed_base_url.rstrip(\"/\").endswith(\"/v1\"):\n # Strip /v1 suffix and log warning\n transformed_base_url = transformed_base_url.rstrip(\"/\").removesuffix(\"/v1\")\n logger.warning(\n \"Detected '/v1' suffix in base URL. The Ollama component uses the native Ollama API, \"\n \"not the OpenAI-compatible API. The '/v1' suffix has been automatically removed. \"\n \"If you want to use the OpenAI-compatible API, please use the OpenAI component instead. 
\"\n \"Learn more at https://docs.ollama.com/openai#openai-compatibility\"\n )\n\n return OllamaEmbeddings(\n model=model,\n base_url=transformed_base_url or \"http://localhost:11434\",\n **model_kwargs,\n )\n\n if provider == \"IBM watsonx.ai\":\n try:\n from langchain_ibm import WatsonxEmbeddings\n except ImportError:\n msg = \"Please install langchain-ibm: pip install langchain-ibm\"\n raise ImportError(msg) from None\n\n if not api_key:\n msg = \"IBM watsonx.ai API key is required when using IBM watsonx.ai provider\"\n raise ValueError(msg)\n\n project_id = self.project_id\n\n if not project_id:\n msg = \"Project ID is required for IBM watsonx.ai provider\"\n raise ValueError(msg)\n\n params = {\n \"model_id\": model,\n \"url\": base_url_ibm_watsonx or \"https://us-south.ml.cloud.ibm.com\",\n \"apikey\": api_key,\n }\n\n params[\"project_id\"] = project_id\n\n return WatsonxEmbeddings(**params)\n\n msg = f\"Unknown provider: {provider}\"\n raise ValueError(msg)\n\n async def update_build_config(\n self, build_config: dotdict, field_value: Any, field_name: str | None = None\n ) -> dotdict:\n if field_name == \"provider\":\n if field_value == \"OpenAI\":\n build_config[\"model\"][\"options\"] = OPENAI_EMBEDDING_MODEL_NAMES\n build_config[\"model\"][\"value\"] = OPENAI_EMBEDDING_MODEL_NAMES[0]\n build_config[\"api_key\"][\"display_name\"] = \"OpenAI API Key\"\n build_config[\"api_key\"][\"required\"] = True\n build_config[\"api_key\"][\"show\"] = True\n build_config[\"api_base\"][\"display_name\"] = \"OpenAI API Base URL\"\n build_config[\"api_base\"][\"advanced\"] = True\n build_config[\"api_base\"][\"show\"] = True\n build_config[\"ollama_base_url\"][\"show\"] = False\n build_config[\"project_id\"][\"show\"] = False\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = False\n\n elif field_value == \"Ollama\":\n build_config[\"ollama_base_url\"][\"show\"] = True\n\n if await is_valid_ollama_url(url=self.ollama_base_url):\n try:\n models = await get_ollama_models(\n base_url_value=self.ollama_base_url,\n desired_capability=DESIRED_CAPABILITY,\n json_models_key=JSON_MODELS_KEY,\n json_name_key=JSON_NAME_KEY,\n json_capabilities_key=JSON_CAPABILITIES_KEY,\n )\n build_config[\"model\"][\"options\"] = models\n build_config[\"model\"][\"value\"] = models[0] if models else \"\"\n except ValueError:\n build_config[\"model\"][\"options\"] = []\n build_config[\"model\"][\"value\"] = \"\"\n else:\n build_config[\"model\"][\"options\"] = []\n build_config[\"model\"][\"value\"] = \"\"\n\n build_config[\"api_key\"][\"display_name\"] = \"API Key (Optional)\"\n build_config[\"api_key\"][\"required\"] = False\n build_config[\"api_key\"][\"show\"] = False\n build_config[\"api_base\"][\"show\"] = False\n build_config[\"project_id\"][\"show\"] = False\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = False\n\n elif field_value == \"IBM watsonx.ai\":\n build_config[\"model\"][\"options\"] = WATSONX_EMBEDDING_MODEL_NAMES\n build_config[\"model\"][\"value\"] = WATSONX_EMBEDDING_MODEL_NAMES[0]\n build_config[\"api_key\"][\"display_name\"] = \"IBM watsonx.ai API Key\"\n build_config[\"api_key\"][\"required\"] = True\n build_config[\"api_key\"][\"show\"] = True\n build_config[\"api_base\"][\"show\"] = False\n build_config[\"ollama_base_url\"][\"show\"] = False\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = True\n build_config[\"project_id\"][\"show\"] = True\n\n elif field_name == \"ollama_base_url\":\n # # Refresh Ollama models when base URL changes\n # if hasattr(self, \"provider\") and self.provider 
== \"Ollama\":\n # Use field_value if provided, otherwise fall back to instance attribute\n ollama_url = self.ollama_base_url\n if await is_valid_ollama_url(url=ollama_url):\n try:\n models = await get_ollama_models(\n base_url_value=ollama_url,\n desired_capability=DESIRED_CAPABILITY,\n json_models_key=JSON_MODELS_KEY,\n json_name_key=JSON_NAME_KEY,\n json_capabilities_key=JSON_CAPABILITIES_KEY,\n )\n build_config[\"model\"][\"options\"] = models\n build_config[\"model\"][\"value\"] = models[0] if models else \"\"\n except ValueError:\n await logger.awarning(\"Failed to fetch Ollama embedding models.\")\n build_config[\"model\"][\"options\"] = []\n build_config[\"model\"][\"value\"] = \"\"\n\n elif field_name == \"model\" and self.provider == \"Ollama\":\n ollama_url = self.ollama_base_url\n if await is_valid_ollama_url(url=ollama_url):\n try:\n models = await get_ollama_models(\n base_url_value=ollama_url,\n desired_capability=DESIRED_CAPABILITY,\n json_models_key=JSON_MODELS_KEY,\n json_name_key=JSON_NAME_KEY,\n json_capabilities_key=JSON_CAPABILITIES_KEY,\n )\n build_config[\"model\"][\"options\"] = models\n except ValueError:\n await logger.awarning(\"Failed to refresh Ollama embedding models.\")\n build_config[\"model\"][\"options\"] = []\n\n return build_config\n" | ||
},
Use the new Ollama URL during dynamic refresh; normalize before model discovery.
- `update_build_config("ollama_base_url")` uses `self.ollama_base_url`, which can be stale while the user is still typing, so models won't refresh for the just-entered URL.
- Normalize the URL and strip the "/v1" suffix here too, for consistency with `build_embeddings`.
- Avoid hardcoding the localhost fallback; use `DEFAULT_OLLAMA_URL`.
Apply this patch inside the EmbeddingModelComponent code string:
@@
- elif field_name == "ollama_base_url":
- # # Refresh Ollama models when base URL changes
- # if hasattr(self, "provider") and self.provider == "Ollama":
- # Use field_value if provided, otherwise fall back to instance attribute
- ollama_url = self.ollama_base_url
- if await is_valid_ollama_url(url=ollama_url):
+ elif field_name == "ollama_base_url":
+ # Refresh Ollama models when base URL changes; prefer the new value
+ ollama_url = (field_value or self.ollama_base_url or DEFAULT_OLLAMA_URL)
+ # Normalize for container/remote envs and strip OpenAI-compatible suffix
+ ollama_url = transform_localhost_url(ollama_url).rstrip("/")
+ if ollama_url.endswith("/v1"):
+ ollama_url = ollama_url[:-3]
+ if await is_valid_ollama_url(url=ollama_url):
try:
models = await get_ollama_models(
base_url_value=ollama_url,
desired_capability=DESIRED_CAPABILITY,
json_models_key=JSON_MODELS_KEY,
json_name_key=JSON_NAME_KEY,
json_capabilities_key=JSON_CAPABILITIES_KEY,
)
build_config["model"]["options"] = models
build_config["model"]["value"] = models[0] if models else ""
except ValueError:
await logger.awarning("Failed to fetch Ollama embedding models.")
build_config["model"]["options"] = []
build_config["model"]["value"] = ""
@@
- return OllamaEmbeddings(
- model=model,
- base_url=transformed_base_url or "http://localhost:11434",
- **model_kwargs,
- )
+ return OllamaEmbeddings(
+ model=model,
+ base_url=transformed_base_url or DEFAULT_OLLAMA_URL,
+ **model_kwargs,
+ )

Optional cleanup:

- Remove unused `HTTP_STATUS_OK`, or use it in the model discovery helpers.
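If `HTTP_STATUS_OK` is kept, one plausible home for it is a connectivity check. A hedged sketch, reusing httpx and the /api/tags endpoint the component already relies on; the `ping_ollama` helper itself is a hypothetical name, not part of this PR:

```python
import httpx

HTTP_STATUS_OK = 200


async def ping_ollama(base_url: str, timeout: float = 5.0) -> bool:
    """Return True when the Ollama /api/tags endpoint answers with HTTP 200."""
    url = f"{base_url.rstrip('/')}/api/tags"
    try:
        async with httpx.AsyncClient(timeout=timeout) as client:
            response = await client.get(url)
    except httpx.HTTPError:
        return False
    return response.status_code == HTTP_STATUS_OK
```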
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| "value": "from typing import Any\n\nfrom langchain_openai import OpenAIEmbeddings\n\nfrom lfx.base.embeddings.model import LCEmbeddingsModel\nfrom lfx.base.models.model_utils import get_ollama_models, is_valid_ollama_url\nfrom lfx.base.models.openai_constants import OPENAI_EMBEDDING_MODEL_NAMES\nfrom lfx.base.models.watsonx_constants import IBM_WATSONX_URLS, WATSONX_EMBEDDING_MODEL_NAMES\nfrom lfx.field_typing import Embeddings\nfrom lfx.io import (\n BoolInput,\n DictInput,\n DropdownInput,\n FloatInput,\n IntInput,\n MessageTextInput,\n SecretStrInput,\n)\nfrom lfx.log.logger import logger\nfrom lfx.schema.dotdict import dotdict\nfrom lfx.utils.util import transform_localhost_url\n\n# Ollama API constants\nHTTP_STATUS_OK = 200\nJSON_MODELS_KEY = \"models\"\nJSON_NAME_KEY = \"name\"\nJSON_CAPABILITIES_KEY = \"capabilities\"\nDESIRED_CAPABILITY = \"embedding\"\nDEFAULT_OLLAMA_URL = \"http://localhost:11434\"\n\n\nclass EmbeddingModelComponent(LCEmbeddingsModel):\n display_name = \"Embedding Model\"\n description = \"Generate embeddings using a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-embedding-models\"\n icon = \"binary\"\n name = \"EmbeddingModel\"\n category = \"models\"\n\n inputs = [\n DropdownInput(\n name=\"provider\",\n display_name=\"Model Provider\",\n options=[\"OpenAI\", \"Ollama\", \"IBM watsonx.ai\"],\n value=\"OpenAI\",\n info=\"Select the embedding model provider\",\n real_time_refresh=True,\n options_metadata=[{\"icon\": \"OpenAI\"}, {\"icon\": \"Ollama\"}, {\"icon\": \"WatsonxAI\"}],\n ),\n MessageTextInput(\n name=\"api_base\",\n display_name=\"API Base URL\",\n info=\"Base URL for the API. Leave empty for default.\",\n advanced=True,\n ),\n MessageTextInput(\n name=\"ollama_base_url\",\n display_name=\"Ollama API URL\",\n info=f\"Endpoint of the Ollama API (Ollama only). Defaults to {DEFAULT_OLLAMA_URL}\",\n value=DEFAULT_OLLAMA_URL,\n show=False,\n real_time_refresh=True,\n load_from_db=True,\n ),\n DropdownInput(\n name=\"base_url_ibm_watsonx\",\n display_name=\"watsonx API Endpoint\",\n info=\"The base URL of the API (IBM watsonx.ai only)\",\n options=IBM_WATSONX_URLS,\n value=IBM_WATSONX_URLS[0],\n show=False,\n real_time_refresh=True,\n ),\n DropdownInput(\n name=\"model\",\n display_name=\"Model Name\",\n options=OPENAI_EMBEDDING_MODEL_NAMES,\n value=OPENAI_EMBEDDING_MODEL_NAMES[0],\n info=\"Select the embedding model to use\",\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"Model Provider API key\",\n required=True,\n show=True,\n real_time_refresh=True,\n ),\n # Watson-specific inputs\n MessageTextInput(\n name=\"project_id\",\n display_name=\"Project ID\",\n info=\"IBM watsonx.ai Project ID (required for IBM watsonx.ai)\",\n show=False,\n ),\n IntInput(\n name=\"dimensions\",\n display_name=\"Dimensions\",\n info=\"The number of dimensions the resulting output embeddings should have. 
\"\n \"Only supported by certain models.\",\n advanced=True,\n ),\n IntInput(name=\"chunk_size\", display_name=\"Chunk Size\", advanced=True, value=1000),\n FloatInput(name=\"request_timeout\", display_name=\"Request Timeout\", advanced=True),\n IntInput(name=\"max_retries\", display_name=\"Max Retries\", advanced=True, value=3),\n BoolInput(name=\"show_progress_bar\", display_name=\"Show Progress Bar\", advanced=True),\n DictInput(\n name=\"model_kwargs\",\n display_name=\"Model Kwargs\",\n advanced=True,\n info=\"Additional keyword arguments to pass to the model.\",\n ),\n ]\n\n def build_embeddings(self) -> Embeddings:\n provider = self.provider\n model = self.model\n api_key = self.api_key\n api_base = self.api_base\n base_url_ibm_watsonx = self.base_url_ibm_watsonx\n ollama_base_url = self.ollama_base_url\n dimensions = self.dimensions\n chunk_size = self.chunk_size\n request_timeout = self.request_timeout\n max_retries = self.max_retries\n show_progress_bar = self.show_progress_bar\n model_kwargs = self.model_kwargs or {}\n\n if provider == \"OpenAI\":\n if not api_key:\n msg = \"OpenAI API key is required when using OpenAI provider\"\n raise ValueError(msg)\n return OpenAIEmbeddings(\n model=model,\n dimensions=dimensions or None,\n base_url=api_base or None,\n api_key=api_key,\n chunk_size=chunk_size,\n max_retries=max_retries,\n timeout=request_timeout or None,\n show_progress_bar=show_progress_bar,\n model_kwargs=model_kwargs,\n )\n\n if provider == \"Ollama\":\n try:\n from langchain_ollama import OllamaEmbeddings\n except ImportError:\n try:\n from langchain_community.embeddings import OllamaEmbeddings\n except ImportError:\n msg = \"Please install langchain-ollama: pip install langchain-ollama\"\n raise ImportError(msg) from None\n\n transformed_base_url = transform_localhost_url(ollama_base_url)\n\n # Check if URL contains /v1 suffix (OpenAI-compatible mode)\n if transformed_base_url and transformed_base_url.rstrip(\"/\").endswith(\"/v1\"):\n # Strip /v1 suffix and log warning\n transformed_base_url = transformed_base_url.rstrip(\"/\").removesuffix(\"/v1\")\n logger.warning(\n \"Detected '/v1' suffix in base URL. The Ollama component uses the native Ollama API, \"\n \"not the OpenAI-compatible API. The '/v1' suffix has been automatically removed. \"\n \"If you want to use the OpenAI-compatible API, please use the OpenAI component instead. 
\"\n \"Learn more at https://docs.ollama.com/openai#openai-compatibility\"\n )\n\n return OllamaEmbeddings(\n model=model,\n base_url=transformed_base_url or \"http://localhost:11434\",\n **model_kwargs,\n )\n\n if provider == \"IBM watsonx.ai\":\n try:\n from langchain_ibm import WatsonxEmbeddings\n except ImportError:\n msg = \"Please install langchain-ibm: pip install langchain-ibm\"\n raise ImportError(msg) from None\n\n if not api_key:\n msg = \"IBM watsonx.ai API key is required when using IBM watsonx.ai provider\"\n raise ValueError(msg)\n\n project_id = self.project_id\n\n if not project_id:\n msg = \"Project ID is required for IBM watsonx.ai provider\"\n raise ValueError(msg)\n\n params = {\n \"model_id\": model,\n \"url\": base_url_ibm_watsonx or \"https://us-south.ml.cloud.ibm.com\",\n \"apikey\": api_key,\n }\n\n params[\"project_id\"] = project_id\n\n return WatsonxEmbeddings(**params)\n\n msg = f\"Unknown provider: {provider}\"\n raise ValueError(msg)\n\n async def update_build_config(\n self, build_config: dotdict, field_value: Any, field_name: str | None = None\n ) -> dotdict:\n if field_name == \"provider\":\n if field_value == \"OpenAI\":\n build_config[\"model\"][\"options\"] = OPENAI_EMBEDDING_MODEL_NAMES\n build_config[\"model\"][\"value\"] = OPENAI_EMBEDDING_MODEL_NAMES[0]\n build_config[\"api_key\"][\"display_name\"] = \"OpenAI API Key\"\n build_config[\"api_key\"][\"required\"] = True\n build_config[\"api_key\"][\"show\"] = True\n build_config[\"api_base\"][\"display_name\"] = \"OpenAI API Base URL\"\n build_config[\"api_base\"][\"advanced\"] = True\n build_config[\"api_base\"][\"show\"] = True\n build_config[\"ollama_base_url\"][\"show\"] = False\n build_config[\"project_id\"][\"show\"] = False\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = False\n\n elif field_value == \"Ollama\":\n build_config[\"ollama_base_url\"][\"show\"] = True\n\n if await is_valid_ollama_url(url=self.ollama_base_url):\n try:\n models = await get_ollama_models(\n base_url_value=self.ollama_base_url,\n desired_capability=DESIRED_CAPABILITY,\n json_models_key=JSON_MODELS_KEY,\n json_name_key=JSON_NAME_KEY,\n json_capabilities_key=JSON_CAPABILITIES_KEY,\n )\n build_config[\"model\"][\"options\"] = models\n build_config[\"model\"][\"value\"] = models[0] if models else \"\"\n except ValueError:\n build_config[\"model\"][\"options\"] = []\n build_config[\"model\"][\"value\"] = \"\"\n else:\n build_config[\"model\"][\"options\"] = []\n build_config[\"model\"][\"value\"] = \"\"\n\n build_config[\"api_key\"][\"display_name\"] = \"API Key (Optional)\"\n build_config[\"api_key\"][\"required\"] = False\n build_config[\"api_key\"][\"show\"] = False\n build_config[\"api_base\"][\"show\"] = False\n build_config[\"project_id\"][\"show\"] = False\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = False\n\n elif field_value == \"IBM watsonx.ai\":\n build_config[\"model\"][\"options\"] = WATSONX_EMBEDDING_MODEL_NAMES\n build_config[\"model\"][\"value\"] = WATSONX_EMBEDDING_MODEL_NAMES[0]\n build_config[\"api_key\"][\"display_name\"] = \"IBM watsonx.ai API Key\"\n build_config[\"api_key\"][\"required\"] = True\n build_config[\"api_key\"][\"show\"] = True\n build_config[\"api_base\"][\"show\"] = False\n build_config[\"ollama_base_url\"][\"show\"] = False\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = True\n build_config[\"project_id\"][\"show\"] = True\n\n elif field_name == \"ollama_base_url\":\n # # Refresh Ollama models when base URL changes\n # if hasattr(self, \"provider\") and self.provider 
== \"Ollama\":\n # Use field_value if provided, otherwise fall back to instance attribute\n ollama_url = self.ollama_base_url\n if await is_valid_ollama_url(url=ollama_url):\n try:\n models = await get_ollama_models(\n base_url_value=ollama_url,\n desired_capability=DESIRED_CAPABILITY,\n json_models_key=JSON_MODELS_KEY,\n json_name_key=JSON_NAME_KEY,\n json_capabilities_key=JSON_CAPABILITIES_KEY,\n )\n build_config[\"model\"][\"options\"] = models\n build_config[\"model\"][\"value\"] = models[0] if models else \"\"\n except ValueError:\n await logger.awarning(\"Failed to fetch Ollama embedding models.\")\n build_config[\"model\"][\"options\"] = []\n build_config[\"model\"][\"value\"] = \"\"\n\n elif field_name == \"model\" and self.provider == \"Ollama\":\n ollama_url = self.ollama_base_url\n if await is_valid_ollama_url(url=ollama_url):\n try:\n models = await get_ollama_models(\n base_url_value=ollama_url,\n desired_capability=DESIRED_CAPABILITY,\n json_models_key=JSON_MODELS_KEY,\n json_name_key=JSON_NAME_KEY,\n json_capabilities_key=JSON_CAPABILITIES_KEY,\n )\n build_config[\"model\"][\"options\"] = models\n except ValueError:\n await logger.awarning(\"Failed to refresh Ollama embedding models.\")\n build_config[\"model\"][\"options\"] = []\n\n return build_config\n" | |
| }, | |
| "value": "from typing import Any\n\nfrom langchain_openai import OpenAIEmbeddings\n\nfrom lfx.base.embeddings.model import LCEmbeddingsModel\nfrom lfx.base.models.model_utils import get_ollama_models, is_valid_ollama_url\nfrom lfx.base.models.openai_constants import OPENAI_EMBEDDING_MODEL_NAMES\nfrom lfx.base.models.watsonx_constants import IBM_WATSONX_URLS, WATSONX_EMBEDDING_MODEL_NAMES\nfrom lfx.field_typing import Embeddings\nfrom lfx.io import (\n BoolInput,\n DictInput,\n DropdownInput,\n FloatInput,\n IntInput,\n MessageTextInput,\n SecretStrInput,\n)\nfrom lfx.log.logger import logger\nfrom lfx.schema.dotdict import dotdict\nfrom lfx.utils.util import transform_localhost_url\n\n# Ollama API constants\nHTTP_STATUS_OK = 200\nJSON_MODELS_KEY = \"models\"\nJSON_NAME_KEY = \"name\"\nJSON_CAPABILITIES_KEY = \"capabilities\"\nDESIRED_CAPABILITY = \"embedding\"\nDEFAULT_OLLAMA_URL = \"http://localhost:11434\"\n\n\nclass EmbeddingModelComponent(LCEmbeddingsModel):\n display_name = \"Embedding Model\"\n description = \"Generate embeddings using a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-embedding-models\"\n icon = \"binary\"\n name = \"EmbeddingModel\"\n category = \"models\"\n\n inputs = [\n DropdownInput(\n name=\"provider\",\n display_name=\"Model Provider\",\n options=[\"OpenAI\", \"Ollama\", \"IBM watsonx.ai\"],\n value=\"OpenAI\",\n info=\"Select the embedding model provider\",\n real_time_refresh=True,\n options_metadata=[{\"icon\": \"OpenAI\"}, {\"icon\": \"Ollama\"}, {\"icon\": \"WatsonxAI\"}],\n ),\n MessageTextInput(\n name=\"api_base\",\n display_name=\"API Base URL\",\n info=\"Base URL for the API. Leave empty for default.\",\n advanced=True,\n ),\n MessageTextInput(\n name=\"ollama_base_url\",\n display_name=\"Ollama API URL\",\n info=f\"Endpoint of the Ollama API (Ollama only). Defaults to {DEFAULT_OLLAMA_URL}\",\n value=DEFAULT_OLLAMA_URL,\n show=False,\n real_time_refresh=True,\n load_from_db=True,\n ),\n DropdownInput(\n name=\"base_url_ibm_watsonx\",\n display_name=\"watsonx API Endpoint\",\n info=\"The base URL of the API (IBM watsonx.ai only)\",\n options=IBM_WATSONX_URLS,\n value=IBM_WATSONX_URLS[0],\n show=False,\n real_time_refresh=True,\n ),\n DropdownInput(\n name=\"model\",\n display_name=\"Model Name\",\n options=OPENAI_EMBEDDING_MODEL_NAMES,\n value=OPENAI_EMBEDDING_MODEL_NAMES[0],\n info=\"Select the embedding model to use\",\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"Model Provider API key\",\n required=True,\n show=True,\n real_time_refresh=True,\n ),\n # Watson-specific inputs\n MessageTextInput(\n name=\"project_id\",\n display_name=\"Project ID\",\n info=\"IBM watsonx.ai Project ID (required for IBM watsonx.ai)\",\n show=False,\n ),\n IntInput(\n name=\"dimensions\",\n display_name=\"Dimensions\",\n info=\"The number of dimensions the resulting output embeddings should have. 
\"\n \"Only supported by certain models.\",\n advanced=True,\n ),\n IntInput(name=\"chunk_size\", display_name=\"Chunk Size\", advanced=True, value=1000),\n FloatInput(name=\"request_timeout\", display_name=\"Request Timeout\", advanced=True),\n IntInput(name=\"max_retries\", display_name=\"Max Retries\", advanced=True, value=3),\n BoolInput(name=\"show_progress_bar\", display_name=\"Show Progress Bar\", advanced=True),\n DictInput(\n name=\"model_kwargs\",\n display_name=\"Model Kwargs\",\n advanced=True,\n info=\"Additional keyword arguments to pass to the model.\",\n ),\n ]\n\n def build_embeddings(self) -> Embeddings:\n provider = self.provider\n model = self.model\n api_key = self.api_key\n api_base = self.api_base\n base_url_ibm_watsonx = self.base_url_ibm_watsonx\n ollama_base_url = self.ollama_base_url\n dimensions = self.dimensions\n chunk_size = self.chunk_size\n request_timeout = self.request_timeout\n max_retries = self.max_retries\n show_progress_bar = self.show_progress_bar\n model_kwargs = self.model_kwargs or {}\n\n if provider == \"OpenAI\":\n if not api_key:\n msg = \"OpenAI API key is required when using OpenAI provider\"\n raise ValueError(msg)\n return OpenAIEmbeddings(\n model=model,\n dimensions=dimensions or None,\n base_url=api_base or None,\n api_key=api_key,\n chunk_size=chunk_size,\n max_retries=max_retries,\n timeout=request_timeout or None,\n show_progress_bar=show_progress_bar,\n model_kwargs=model_kwargs,\n )\n\n if provider == \"Ollama\":\n try:\n from langchain_ollama import OllamaEmbeddings\n except ImportError:\n try:\n from langchain_community.embeddings import OllamaEmbeddings\n except ImportError:\n msg = \"Please install langchain-ollama: pip install langchain-ollama\"\n raise ImportError(msg) from None\n\n transformed_base_url = transform_localhost_url(ollama_base_url)\n\n # Check if URL contains /v1 suffix (OpenAI-compatible mode)\n if transformed_base_url and transformed_base_url.rstrip(\"/\").endswith(\"/v1\"):\n # Strip /v1 suffix and log warning\n transformed_base_url = transformed_base_url.rstrip(\"/\").removesuffix(\"/v1\")\n logger.warning(\n \"Detected '/v1' suffix in base URL. The Ollama component uses the native Ollama API, \"\n \"not the OpenAI-compatible API. The '/v1' suffix has been automatically removed. \"\n \"If you want to use the OpenAI-compatible API, please use the OpenAI component instead. 
\"\n \"Learn more at https://docs.ollama.com/openai#openai-compatibility\"\n )\n\n return OllamaEmbeddings(\n model=model,\n base_url=transformed_base_url or DEFAULT_OLLAMA_URL,\n **model_kwargs,\n )\n\n if provider == \"IBM watsonx.ai\":\n try:\n from langchain_ibm import WatsonxEmbeddings\n except ImportError:\n msg = \"Please install langchain-ibm: pip install langchain-ibm\"\n raise ImportError(msg) from None\n\n if not api_key:\n msg = \"IBM watsonx.ai API key is required when using IBM watsonx.ai provider\"\n raise ValueError(msg)\n\n project_id = self.project_id\n\n if not project_id:\n msg = \"Project ID is required for IBM watsonx.ai provider\"\n raise ValueError(msg)\n\n params = {\n \"model_id\": model,\n \"url\": base_url_ibm_watsonx or \"https://us-south.ml.cloud.ibm.com\",\n \"apikey\": api_key,\n }\n\n params[\"project_id\"] = project_id\n\n return WatsonxEmbeddings(**params)\n\n msg = f\"Unknown provider: {provider}\"\n raise ValueError(msg)\n\n async def update_build_config(\n self, build_config: dotdict, field_value: Any, field_name: str | None = None\n ) -> dotdict:\n if field_name == \"provider\":\n if field_value == \"OpenAI\":\n build_config[\"model\"][\"options\"] = OPENAI_EMBEDDING_MODEL_NAMES\n build_config[\"model\"][\"value\"] = OPENAI_EMBEDDING_MODEL_NAMES[0]\n build_config[\"api_key\"][\"display_name\"] = \"OpenAI API Key\"\n build_config[\"api_key\"][\"required\"] = True\n build_config[\"api_key\"][\"show\"] = True\n build_config[\"api_base\"][\"display_name\"] = \"OpenAI API Base URL\"\n build_config[\"api_base\"][\"advanced\"] = True\n build_config[\"api_base\"][\"show\"] = True\n build_config[\"ollama_base_url\"][\"show\"] = False\n build_config[\"project_id\"][\"show\"] = False\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = False\n\n elif field_value == \"Ollama\":\n build_config[\"ollama_base_url\"][\"show\"] = True\n\n if await is_valid_ollama_url(url=self.ollama_base_url):\n try:\n models = await get_ollama_models(\n base_url_value=self.ollama_base_url,\n desired_capability=DESIRED_CAPABILITY,\n json_models_key=JSON_MODELS_KEY,\n json_name_key=JSON_NAME_KEY,\n json_capabilities_key=JSON_CAPABILITIES_KEY,\n )\n build_config[\"model\"][\"options\"] = models\n build_config[\"model\"][\"value\"] = models[0] if models else \"\"\n except ValueError:\n build_config[\"model\"][\"options\"] = []\n build_config[\"model\"][\"value\"] = \"\"\n else:\n build_config[\"model\"][\"options\"] = []\n build_config[\"model\"][\"value\"] = \"\"\n\n build_config[\"api_key\"][\"display_name\"] = \"API Key (Optional)\"\n build_config[\"api_key\"][\"required\"] = False\n build_config[\"api_key\"][\"show\"] = False\n build_config[\"api_base\"][\"show\"] = False\n build_config[\"project_id\"][\"show\"] = False\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = False\n\n elif field_value == \"IBM watsonx.ai\":\n build_config[\"model\"][\"options\"] = WATSONX_EMBEDDING_MODEL_NAMES\n build_config[\"model\"][\"value\"] = WATSONX_EMBEDDING_MODEL_NAMES[0]\n build_config[\"api_key\"][\"display_name\"] = \"IBM watsonx.ai API Key\"\n build_config[\"api_key\"][\"required\"] = True\n build_config[\"api_key\"][\"show\"] = True\n build_config[\"api_base\"][\"show\"] = False\n build_config[\"ollama_base_url\"][\"show\"] = False\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = True\n build_config[\"project_id\"][\"show\"] = True\n\n elif field_name == \"ollama_base_url\":\n # Refresh Ollama models when base URL changes; prefer the new value\n ollama_url = (field_value or 
self.ollama_base_url or DEFAULT_OLLAMA_URL)\n # Normalize for container/remote envs and strip OpenAI-compatible suffix\n ollama_url = transform_localhost_url(ollama_url).rstrip(\"/\")\n if ollama_url.endswith(\"/v1\"):\n ollama_url = ollama_url[:-3]\n if await is_valid_ollama_url(url=ollama_url):\n try:\n models = await get_ollama_models(\n base_url_value=ollama_url,\n desired_capability=DESIRED_CAPABILITY,\n json_models_key=JSON_MODELS_KEY,\n json_name_key=JSON_NAME_KEY,\n json_capabilities_key=JSON_CAPABILITIES_KEY,\n )\n build_config[\"model\"][\"options\"] = models\n build_config[\"model\"][\"value\"] = models[0] if models else \"\"\n except ValueError:\n await logger.awarning(\"Failed to fetch Ollama embedding models.\")\n build_config[\"model\"][\"options\"] = []\n build_config[\"model\"][\"value\"] = \"\"\n\n elif field_name == \"model\" and self.provider == \"Ollama\":\n ollama_url = self.ollama_base_url\n if await is_valid_ollama_url(url=ollama_url):\n try:\n models = await get_ollama_models(\n base_url_value=ollama_url,\n desired_capability=DESIRED_CAPABILITY,\n json_models_key=JSON_MODELS_KEY,\n json_name_key=JSON_NAME_KEY,\n json_capabilities_key=JSON_CAPABILITIES_KEY,\n )\n build_config[\"model\"][\"options\"] = models\n except ValueError:\n await logger.awarning(\"Failed to refresh Ollama embedding models.\")\n build_config[\"model\"][\"options\"] = []\n\n return build_config\n" |
🤖 Prompt for AI Agents
In src/backend/base/langflow/initial_setup/starter_projects/Nvidia Remix.json
around lines 2006-2007, update the update_build_config handler for field_name ==
"ollama_base_url" (and the "model" refresh block if relevant) to use the
incoming field_value first (e.g., ollama_url = field_value or
self.ollama_base_url), then normalize it with transform_localhost_url and strip
a trailing "/v1" like in build_embeddings before calling
is_valid_ollama_url/get_ollama_models, and when a fallback is needed use
DEFAULT_OLLAMA_URL rather than a hardcoded literal; make the same normalization
change in the "model" refresh branch and leave optional cleanup (remove unused
HTTP_STATUS_OK) to your discretion.
Thank you @kerinin |
This pull request enhances the logging and message handling logic in the `openai_stream_generator` function to better track and manage message states during streaming. The main improvements include adding support for a `state` property in message data and conditionally skipping text processing for completed messages.

Logging and message handling improvements:

- Extracts the `state` property from the `properties` dictionary in message data to determine the message's current state.
- Logs each message's `state` value, providing more context for each message processed.
- Skips text processing when `state` is `"complete"`, ensuring that already-streamed content is not redundantly processed (see the sketch after this list).
- Logs when text processing is skipped due to `state=complete`, for better traceability.
- Ensures `content_blocks` are still processed (e.g., for tool calls).
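A minimal sketch of that state check, assuming an event payload shaped like the one described above (`data.properties.state`, `data.text`, `data.content_blocks`); the handler and the two sink functions are hypothetical names, not the PR's actual code:

```python
import logging

logger = logging.getLogger(__name__)


def emit_text_delta(text: str) -> None:
    """Hypothetical sink for streamed text chunks."""
    print("text:", text)


def emit_content_block(block: dict) -> None:
    """Hypothetical sink for content blocks such as tool calls."""
    print("block:", block)


def handle_message_event(event: dict) -> None:
    """Sketch of the state-aware message handling the PR describes."""
    data = event.get("data", {})
    state = data.get("properties", {}).get("state")
    logger.debug("message %s state=%s", data.get("id"), state)

    if state == "complete":
        # The frontend already holds all streamed text; re-emitting it would
        # duplicate content, so only log that text processing was skipped.
        logger.debug("skipping text for message %s (state=complete)", data.get("id"))
    else:
        emit_text_delta(data.get("text", ""))

    # content_blocks are processed regardless of state (e.g., tool calls).
    for block in data.get("content_blocks", []):
        emit_content_block(block)
```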
Summary by CodeRabbit

New Features
Bug Fixes
Tests