
feat: generate requirements.txt from dependencies (#11810) #12087

Merged: jordanrfrazier merged 1 commit into main from re-add-requirements-work on Mar 6, 2026

Conversation


@jordanrfrazier jordanrfrazier commented Mar 6, 2026

Re-add of #11810, which was unceremoniously removed in #11490.

Summary by CodeRabbit

Release Notes

  • New Features

    • Added requirements CLI command to generate requirements.txt files for Langflow flows
    • Supports customization options for LFX package inclusion and dependency version pinning
    • Automatic detection and resolution of flow dependencies, including language models and embedding providers
  • Tests

    • Added comprehensive test suite for requirements generation functionality

* Base script to generate requirements

Dynamically picks the dependency for the LanguageModel component.
Requires separate change to remove eager loading.

* Lazy load imports for language model component

Ensures that only the necessary dependencies are required.
For example, if OpenAI provider is used, it will now only
import langchain_openai, rather than requiring langchain_anthropic,
langchain_ibm, etc.

* Add backwards-compat functions

* [autofix.ci] apply automated fixes

* [autofix.ci] apply automated fixes (attempt 2/3)

* Add exception handling

* Add CLI command to create reqs

* correctly exclude langchain imports

* Add versions to reqs

* dynamically resolve provider imports for language model comp

* Lazy load imports for reqs, some ruff fixes

* Add dynamic resolves for embedding model comp

* Add install hints

* Add missing provider tests; add warnings in reqs script

* Add a few warnings and fix install hint

* update comments add logging

* Package hints, warnings, comments, tests

* [autofix.ci] apply automated fixes

* [autofix.ci] apply automated fixes (attempt 2/3)

* [autofix.ci] apply automated fixes (attempt 3/3)

* Add alias for watsonx

* Fix anthropic for basic prompt, azure mapping

* [autofix.ci] apply automated fixes

* [autofix.ci] apply automated fixes (attempt 2/3)

* ruff

* [autofix.ci] apply automated fixes

* test formatting

* ruff

* [autofix.ci] apply automated fixes

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>

coderabbitai Bot commented Mar 6, 2026

Walkthrough

The changes introduce a new CLI command and supporting infrastructure to generate requirements.txt files for Langflow flows. This includes a requirements generation module with AST-based import extraction and dynamic provider resolution, updates to model/embedding mappings, lazy-loading utilities, and comprehensive test coverage.

Changes

• CLI Command (src/lfx/src/lfx/__main__.py): Adds a requirements_command_wrapper CLI command that loads a Langflow flow JSON, generates requirements.txt with options to include/exclude the LFX package and pin/unpin versions, and outputs to a file or stdout with error handling.
• Requirements Generation Module (src/lfx/src/lfx/utils/flow_requirements.py): New module providing AST-based import extraction, PyPI package resolution via importlib.metadata, dynamic provider/embedding model detection, transitive dependency resolution, and version pinning. Public API: generate_requirements_from_flow(), generate_requirements_txt(), generate_requirements_from_file().
• Model Infrastructure Updates (src/lfx/src/lfx/base/models/unified_models.py): Extends _MODEL_CLASS_IMPORTS and _EMBEDDING_CLASS_IMPORTS to include optional install hints for better error messaging. Adds the EMBEDDING_PROVIDER_CLASS_MAPPING canonical mapping and updates import resolution logic in get_model_class() and get_embedding_class() with improved AttributeError handling.
• Package Exports & Lazy Loading (src/lfx/src/lfx/utils/__init__.py): Implements forward annotations, TYPE_CHECKING imports, __all__ exports, and a __getattr__() hook to enable lazy loading of flow_requirements functions without runtime dependency imports.
• Test Suite (src/lfx/tests/unit/test_flow_requirements.py): Comprehensive unit and integration tests covering import extraction, package mapping, provider detection, transitive dependency handling, version pinning, CLI behavior, starter project flows, and error scenarios.

Sequence Diagram

sequenceDiagram
    participant User
    participant CLI as CLI Command
    participant FlowFile as Flow File
    participant StaticAnalysis as Static Analysis<br/>(AST)
    participant DynamicAnalysis as Dynamic Analysis<br/>(Provider Registry)
    participant MetadataResolver as Metadata Resolver<br/>(importlib.metadata)
    participant DepResolver as Dependency Resolver<br/>(Transitive)
    participant Output as Output Handler

    User->>CLI: requirements --flow-path file.json --output reqs.txt
    CLI->>FlowFile: Load & Parse JSON
    FlowFile-->>CLI: Flow structure
    
    CLI->>StaticAnalysis: Extract imports from component source
    StaticAnalysis-->>CLI: List of import names
    
    CLI->>DynamicAnalysis: Resolve providers (LLM, Embedding)
    DynamicAnalysis->>DynamicAnalysis: Query provider registries
    DynamicAnalysis-->>CLI: Provider package names
    
    CLI->>MetadataResolver: Map imports to PyPI packages
    MetadataResolver->>MetadataResolver: Resolve distribution metadata
    MetadataResolver-->>CLI: Distribution names & versions
    
    CLI->>DepResolver: Filter transitive LFX deps
    DepResolver->>DepResolver: Compute lfx dependency tree
    DepResolver-->>CLI: Final external package list
    
    CLI->>Output: Generate requirements text
    Output-->>Output: Pin versions & format
    Output-->>CLI: Formatted requirements
    
    CLI->>FlowFile: Write to output file (optional)
    FlowFile-->>User: requirements.txt or stdout

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes


Important

Pre-merge checks failed

Please resolve all errors before merging. Addressing warnings is optional.

❌ Failed checks (1 error, 2 warnings)

• Test Coverage For New Implementations: ❌ Error. Tests do not adequately address three critical issues: missing TYPE_CHECKING import exclusion tests, missing non-dict JSON input validation tests, and missing unloaded-provider fallback tests. Resolution: implement an _ImportCollector AST visitor for TYPE_CHECKING exclusion, add input validation for non-dict JSON, and enhance provider resolution with fallback hints for unloaded providers.
• Docstring Coverage: ⚠️ Warning. Docstring coverage is 40.30%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
• Test Quality And Coverage: ⚠️ Warning. The test suite lacks coverage for non-dict inputs, TYPE_CHECKING branch exclusion, and unloaded optional provider packages. Resolution: add tests for non-dict inputs (list, string, int), TYPE_CHECKING import exclusion, and mock scenarios for unloaded provider packages.
✅ Passed checks (4 passed)
• Description Check: ✅ Passed. Check skipped - CodeRabbit's high-level summary is enabled.
• Title check: ✅ Passed. The title clearly summarizes the primary change: adding the ability to generate requirements.txt from a project's dependencies, which is the main focus of all the file changes (CLI command, utils module, tests, unified models updates).
• Test File Naming And Structure: ✅ Passed. Test file test_flow_requirements.py follows pytest conventions with 14 class-based test groups, descriptive names, 5 fixtures, and comprehensive coverage of positive/negative scenarios and edge cases.
• Excessive Mock Usage Warning: ✅ Passed. The test file demonstrates excellent design with zero mock usage, using real object construction and pytest fixtures instead to verify actual behavior without obscuring tests through mocks.

@github-actions github-actions Bot added the enhancement New feature or request label Mar 6, 2026

github-actions Bot commented Mar 6, 2026

Frontend Unit Test Coverage Report

Coverage Summary

• Coverage (lines): 23%
• Statements: 23.24% (8235/35429)
• Branches: 16.02% (4458/27826)
• Functions: 15.91% (1185/7445)

Unit Test Results

• Tests: 2631
• Skipped: 0 💤
• Failures: 0 ❌
• Errors: 0 🔥
• Time: 44.996s ⏱️

@github-actions github-actions Bot added enhancement New feature or request and removed enhancement New feature or request labels Mar 6, 2026

codecov Bot commented Mar 6, 2026

Codecov Report

❌ Patch coverage is 77.90262% with 59 lines in your changes missing coverage. Please review.
✅ Project coverage is 37.72%. Comparing base (d43bf3f) to head (138772d).
⚠️ Report is 1 commit behind head on main.

Files with missing lines (patch %):
• src/lfx/src/lfx/utils/flow_requirements.py: 81.85% (29 missing, 12 partials) ⚠️
• src/lfx/src/lfx/base/models/unified_models.py: 16.66% (15 missing) ⚠️
• src/lfx/src/lfx/__main__.py: 86.95% (3 missing) ⚠️

❌ Your project status has failed because the head coverage (43.22%) is below the target coverage (60.00%). You can increase the head coverage or adjust the target coverage.

Additional details and impacted files

Impacted file tree graph

@@            Coverage Diff             @@
##             main   #12087      +/-   ##
==========================================
+ Coverage   37.58%   37.72%   +0.13%     
==========================================
  Files        1623     1624       +1     
  Lines       79603    79862     +259     
  Branches    11971    12020      +49     
==========================================
+ Hits        29917    30124     +207     
- Misses      48027    48067      +40     
- Partials     1659     1671      +12     
Flag coverage Δ:
• backend: 57.04% <ø> (+<0.01%) ⬆️
• frontend: 20.82% <ø> (ø)
• lfx: 43.22% <77.90%> (+0.39%) ⬆️

Flags with carried forward coverage won't be shown. Click here to find out more.

Files with missing lines (coverage Δ):
• src/lfx/src/lfx/__main__.py: 85.36% <86.95%> (+2.03%) ⬆️
• src/lfx/src/lfx/base/models/unified_models.py: 23.56% <16.66%> (-0.19%) ⬇️
• src/lfx/src/lfx/utils/flow_requirements.py: 81.85% <81.85%> (ø)

... and 6 files with indirect coverage changes



@HzaRashid HzaRashid left a comment


deja vu

@github-actions github-actions Bot added the lgtm This PR has been approved by a maintainer label Mar 6, 2026
@jordanrfrazier jordanrfrazier enabled auto-merge March 6, 2026 16:35

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (1)
src/lfx/tests/unit/test_flow_requirements.py (1)

955-998: Cover the -o write-failure path too.

The CLI now has a dedicated OSError branch for unwritable output paths, but the suite only exercises parse/file-not-found failures. One case against a missing directory would keep that exit path from regressing.

As per coding guidelines, "Verify tests cover both positive and negative scenarios where appropriate".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lfx/tests/unit/test_flow_requirements.py` around lines 955 - 998, Add a
new negative test (e.g., test_output_write_failure) in
src/lfx/tests/unit/test_flow_requirements.py that invokes the CLI via
runner.invoke(app, ["requirements", str(flow_file), "-o", str(unwritable_path)])
where unwritable_path points to a path that cannot be written (for example a
file inside a non-existent directory or a path in a read-only dir created via
tmp_path), then assert result.exit_code == 1 and that result.output contains an
error message (similar to test_file_not_found and test_invalid_json). This will
exercise the CLI's OSError branch for unwritable output paths and prevent
regressions of the -o write-failure path.
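A hedged sketch of the mechanism this suggested test relies on. The real test would invoke the CLI via typer's CliRunner (omitted here); this only demonstrates that a path inside a non-existent directory reliably provokes the OSError branch:

```python
# Illustrative only: check_unwritable_output is a hypothetical helper, a
# stand-in for the CLI's output-write step, not part of lfx.
import tempfile
from pathlib import Path


def check_unwritable_output(out_path: Path) -> bool:
    """Return True when writing to out_path fails with OSError."""
    try:
        out_path.write_text("lfx\n")  # stand-in for the CLI writing requirements.txt
    except OSError:
        # FileNotFoundError (missing parent dir) and PermissionError are
        # both OSError subclasses, so one except clause covers both setups.
        return True
    return False


with tempfile.TemporaryDirectory() as tmp:
    unwritable = Path(tmp) / "no_such_dir" / "requirements.txt"
    print(check_unwritable_output(unwritable))  # True: parent dir does not exist
```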
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/lfx/src/lfx/utils/flow_requirements.py`:
- Around line 283-294: The current branch returns an empty set when
provider_info is missing from MODEL_PROVIDERS_DICT, which causes valid
but-unimportable providers to yield no requirements; change the logic in the
function that queries MODEL_PROVIDERS_DICT (the block referencing provider_info
and _PROVIDER_PACKAGE_FALLBACKS) to mirror the approach used in
_resolve_embedding_provider_packages(): when provider_info is None, consult
_PROVIDER_PACKAGE_FALLBACKS (or the same fallback source used by
_resolve_embedding_provider_packages) and return that package set instead of an
empty set; ensure the code returns set(fallback) when a fallback exists and only
warns/returns empty when no fallback is available.
- Around line 486-513: The function generate_requirements_from_flow should
validate that its flow argument is a mapping before accessing flow.get: add an
early check at the top of generate_requirements_from_flow that verifies
isinstance(flow, dict) (or collections.abc.Mapping) and raise a clear
TypeError/ValueError with a helpful message if not; callers like
generate_requirements_from_file and the CLI will then get a controlled error
instead of an AttributeError. Ensure the check happens before any use of
flow.get or attribute access so malformed JSON values (e.g., list or string) are
rejected early with an explicit error message.
- Around line 222-250: The _extract_imports function currently walks the entire
AST and picks up imports inside typing-only blocks; change it to skip any
imports inside "if TYPE_CHECKING" branches by making the AST traversal aware of
those conditional blocks: implement a small ast.NodeVisitor (or modify the walk)
that when visiting ast.If checks whether the test is a Name('TYPE_CHECKING') or
an Attribute ending with 'TYPE_CHECKING' (e.g., typing.TYPE_CHECKING) and, if
so, does not traverse its body/orelse, while still visiting other nodes to
collect ast.Import and ast.ImportFrom. Ensure the detection covers both Name and
Attribute forms and keep the returned set behavior in _extract_imports
unchanged.

---

Nitpick comments:
In `@src/lfx/tests/unit/test_flow_requirements.py`:
- Around line 955-998: Add a new negative test (e.g., test_output_write_failure)
in src/lfx/tests/unit/test_flow_requirements.py that invokes the CLI via
runner.invoke(app, ["requirements", str(flow_file), "-o", str(unwritable_path)])
where unwritable_path points to a path that cannot be written (for example a
file inside a non-existent directory or a path in a read-only dir created via
tmp_path), then assert result.exit_code == 1 and that result.output contains an
error message (similar to test_file_not_found and test_invalid_json). This will
exercise the CLI's OSError branch for unwritable output paths and prevent
regressions of the -o write-failure path.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: b24d050b-c6c6-4bdd-969a-3c6228045e25

📥 Commits

Reviewing files that changed from the base of the PR and between d43bf3f and 138772d.

📒 Files selected for processing (5)
  • src/lfx/src/lfx/__main__.py
  • src/lfx/src/lfx/base/models/unified_models.py
  • src/lfx/src/lfx/utils/__init__.py
  • src/lfx/src/lfx/utils/flow_requirements.py
  • src/lfx/tests/unit/test_flow_requirements.py

Comment on lines +222 to +250
def _extract_imports(source: str) -> set[str]:
    """Extract top-level package names from all imports in Python source via AST.

    Walks the entire AST (including function bodies and try/except blocks) so
    that lazy imports inside ``build_model()`` etc. are captured. Returns only
    the first segment of each dotted import (e.g. ``foo`` from ``import foo.bar``).
    """
    try:
        tree = ast.parse(source)
    except SyntaxError as exc:
        warnings.warn(
            f"Could not parse component source (SyntaxError: {exc}). "
            "Imports from this component will not be included in requirements.",
            stacklevel=2,
        )
        return set()

    imports: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                imports.add(alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom):
            if node.level > 0:
                # Relative import - skip (internal to the component)
                continue
            if node.module:
                imports.add(node.module.split(".")[0])
    return imports


⚠️ Potential issue | 🟠 Major

Skip TYPE_CHECKING branches during import extraction.

Walking the full AST with ast.walk() makes typing-only imports look like runtime requirements. A component that does if TYPE_CHECKING: import pandas will currently emit pandas into requirements.txt, which breaks the “minimal requirements” goal.

🩹 Suggested direction
+class _ImportCollector(ast.NodeVisitor):
+    def __init__(self) -> None:
+        self.imports: set[str] = set()
+
+    def visit_If(self, node: ast.If) -> None:
+        test = node.test
+        is_type_checking = (
+            isinstance(test, ast.Name)
+            and test.id == "TYPE_CHECKING"
+        ) or (
+            isinstance(test, ast.Attribute)
+            and isinstance(test.value, ast.Name)
+            and test.value.id == "typing"
+            and test.attr == "TYPE_CHECKING"
+        )
+        if not is_type_checking:
+            self.generic_visit(node)
+
+    def visit_Import(self, node: ast.Import) -> None:
+        for alias in node.names:
+            self.imports.add(alias.name.split(".")[0])
+
+    def visit_ImportFrom(self, node: ast.ImportFrom) -> None:
+        if node.level == 0 and node.module:
+            self.imports.add(node.module.split(".")[0])
+
 def _extract_imports(source: str) -> set[str]:
     ...
-    imports: set[str] = set()
-    for node in ast.walk(tree):
-        ...
-    return imports
+    collector = _ImportCollector()
+    collector.visit(tree)
+    return collector.imports
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lfx/src/lfx/utils/flow_requirements.py` around lines 222 - 250, The
_extract_imports function currently walks the entire AST and picks up imports
inside typing-only blocks; change it to skip any imports inside "if
TYPE_CHECKING" branches by making the AST traversal aware of those conditional
blocks: implement a small ast.NodeVisitor (or modify the walk) that when
visiting ast.If checks whether the test is a Name('TYPE_CHECKING') or an
Attribute ending with 'TYPE_CHECKING' (e.g., typing.TYPE_CHECKING) and, if so,
does not traverse its body/orelse, while still visiting other nodes to collect
ast.Import and ast.ImportFrom. Ensure the detection covers both Name and
Attribute forms and keep the returned set behavior in _extract_imports
unchanged.
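A runnable demonstration of the visitor the review suggests. This is a sketch of the direction, not the code that was ultimately merged:

```python
import ast


class ImportCollector(ast.NodeVisitor):
    """Collect top-level import names, skipping `if TYPE_CHECKING:` blocks."""

    def __init__(self) -> None:
        self.imports: set[str] = set()

    def visit_If(self, node: ast.If) -> None:
        test = node.test
        # Covers both `if TYPE_CHECKING:` and `if typing.TYPE_CHECKING:`.
        is_type_checking = (
            isinstance(test, ast.Name) and test.id == "TYPE_CHECKING"
        ) or (isinstance(test, ast.Attribute) and test.attr == "TYPE_CHECKING")
        if not is_type_checking:
            self.generic_visit(node)  # only descend into non-typing branches

    def visit_Import(self, node: ast.Import) -> None:
        for alias in node.names:
            self.imports.add(alias.name.split(".")[0])

    def visit_ImportFrom(self, node: ast.ImportFrom) -> None:
        if node.level == 0 and node.module:
            self.imports.add(node.module.split(".")[0])


sample = """\
from typing import TYPE_CHECKING
import requests

if TYPE_CHECKING:
    import pandas
"""
collector = ImportCollector()
collector.visit(ast.parse(sample))
print(sorted(collector.imports))  # 'pandas' stays out; 'requests' and 'typing' remain
```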

Comment on lines +283 to +294
provider_info = MODEL_PROVIDERS_DICT.get(provider_name)
if not provider_info:
fallback = _PROVIDER_PACKAGE_FALLBACKS.get(provider_name)
if fallback:
return set(fallback)
warnings.warn(
f"Provider '{provider_name}' was detected in the flow but is not "
"registered in MODEL_PROVIDERS_DICT (its package may not be installed). "
"Its dependencies will not be included in requirements.",
stacklevel=2,
)
return set()


⚠️ Potential issue | 🟠 Major

Don’t return an empty package set for unloaded providers.

This resolver depends on MODEL_PROVIDERS_DICT, but the new test helper in src/lfx/tests/unit/test_flow_requirements.py:767-778 already treats that registry as incomplete when optional provider packages are not installed. In that state, a flow can select a supported provider and still get no provider requirement at all. This path needs a provider→package source that does not depend on the provider package already being importable, similar to _resolve_embedding_provider_packages().

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lfx/src/lfx/utils/flow_requirements.py` around lines 283 - 294, The
current branch returns an empty set when provider_info is missing from
MODEL_PROVIDERS_DICT, which causes valid but-unimportable providers to yield no
requirements; change the logic in the function that queries MODEL_PROVIDERS_DICT
(the block referencing provider_info and _PROVIDER_PACKAGE_FALLBACKS) to mirror
the approach used in _resolve_embedding_provider_packages(): when provider_info
is None, consult _PROVIDER_PACKAGE_FALLBACKS (or the same fallback source used
by _resolve_embedding_provider_packages) and return that package set instead of
an empty set; ensure the code returns set(fallback) when a fallback exists and
only warns/returns empty when no fallback is available.

Comment on lines +486 to +513
def generate_requirements_from_flow(
flow: dict,
*,
lfx_package: str = "lfx",
include_lfx: bool = True,
pin_versions: bool = True,
) -> list[str]:
"""Generate a requirements list from a Langflow flow JSON.

Args:
flow: Parsed Langflow flow JSON (dict).
lfx_package: Name of the LFX package to include (e.g. ``"lfx"`` or
``"lfx-nightly"``).
include_lfx: Whether to include the LFX package itself.
pin_versions: If True, pin each package to the version currently
installed in this environment (``pkg==X.Y.Z``). Falls back to
an unpinned name when the package is not installed.

Returns:
Sorted list of PyPI package specifiers needed to run this flow.
"""
all_packages: set[str] = set()
all_providers: set[str] = set()

data = flow.get("data", {})
nodes = data.get("nodes", [])

for node in nodes:


⚠️ Potential issue | 🟠 Major

Validate the top-level flow shape before walking it.

generate_requirements_from_file() and the new CLI can hand this function any syntactically valid JSON value. If the input is [] or "foo", flow.get(...) raises AttributeError and callers get an unhandled traceback instead of a controlled failure.

🩹 Proposed fix
 def generate_requirements_from_flow(
     flow: dict,
     *,
     lfx_package: str = "lfx",
     include_lfx: bool = True,
     pin_versions: bool = True,
 ) -> list[str]:
@@
+    if not isinstance(flow, dict):
+        msg = "Flow JSON must be an object"
+        raise ValueError(msg)
+
     all_packages: set[str] = set()
     all_providers: set[str] = set()

     data = flow.get("data", {})
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lfx/src/lfx/utils/flow_requirements.py` around lines 486 - 513, The
function generate_requirements_from_flow should validate that its flow argument
is a mapping before accessing flow.get: add an early check at the top of
generate_requirements_from_flow that verifies isinstance(flow, dict) (or
collections.abc.Mapping) and raise a clear TypeError/ValueError with a helpful
message if not; callers like generate_requirements_from_file and the CLI will
then get a controlled error instead of an AttributeError. Ensure the check
happens before any use of flow.get or attribute access so malformed JSON values
(e.g., list or string) are rejected early with an explicit error message.

@jordanrfrazier jordanrfrazier added this pull request to the merge queue Mar 6, 2026
Merged via the queue into main with commit 6652d3f Mar 6, 2026
178 of 181 checks passed
@jordanrfrazier jordanrfrazier deleted the re-add-requirements-work branch March 6, 2026 17:18
HimavarshaVS pushed a commit that referenced this pull request Mar 10, 2026

Labels

• enhancement (New feature or request)
• lgtm (This PR has been approved by a maintainer)
