Conversation

@bobbai00 bobbai00 commented Nov 4, 2025

Please see this wiki page to learn how to enable this feature.

What changes were proposed in this PR?

This PR introduces an LLM agent management & chat panel in the workflow workspace to help users with their workflows.

Demo

  1. Manage agents using the panel (demo GIF)

  2. Ask the agent questions about available Texera operators (demo GIF)

  3. Ask the agent about the user's current workflow (demo GIF)

Architecture Diagram

See #4034

Major Changes

  1. Frontend: introduce the agent management & chat panel

  2. Backend:

  • A new microservice, litellm, is introduced: an open-source service that manages communication between the app and LLM APIs
  • AccessControlService is modified to add the logic for routing litellm-related requests
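Since litellm exposes an OpenAI-compatible API, the chat panel can talk to it with a plain chat-completions request routed through AccessControlService. A minimal sketch of what that call might look like — the `/api/litellm` route, the model alias, and the system prompt are illustrative assumptions, not the PR's actual values:

```typescript
// Shape of one message in an OpenAI-style conversation.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Build the JSON body for an OpenAI-compatible chat-completions call.
function buildChatCompletionRequest(model: string, messages: ChatMessage[]) {
  return {
    model,         // e.g. a model alias configured in litellm-config.yaml
    messages,      // full conversation history, oldest first
    stream: false, // the real panel may stream tokens instead
  };
}

// Hypothetical usage: POST through AccessControlService, which routes the
// request on to litellm. The route below is an assumption for illustration.
async function askAgent(question: string): Promise<string> {
  const body = buildChatCompletionRequest("gpt-4o", [
    { role: "system", content: "You answer questions about Texera operators." },
    { role: "user", content: question },
  ]);
  const resp = await (globalThis as any).fetch("/api/litellm/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  const data = await resp.json();
  return data.choices[0].message.content;
}
```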

Any related issues, documentation, discussions?

Related to #4034

Current PR limitation and future PR plans

In the current PR, the agent can only act in a "read-only" way: it can answer questions about operators but cannot modify the user's workflow.

In future PRs,

  • The agent will be able to edit the user's workflow
  • The agent feature will be added to the k8s deployment architecture.
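One way to picture the "read-only" restriction is as a gate over which tools the agent may invoke: inspection tools are allowed, mutating ones are not. The tool names below are made up for illustration and do not come from the PR:

```typescript
// Hypothetical agent modes: the current PR effectively runs in "read-only".
type AgentMode = "read-only" | "read-write";

// Tools that only inspect state; names are illustrative, not Texera's.
const READ_ONLY_TOOLS = new Set([
  "listOperators",      // enumerate available Texera operators
  "describeOperator",   // explain one operator's properties
  "getCurrentWorkflow", // inspect the user's workflow graph
]);

// In read-only mode, only inspection tools pass; a future read-write mode
// could additionally allow mutating tools such as "addOperator".
function isToolAllowed(tool: string, mode: AgentMode): boolean {
  if (mode === "read-write") return true;
  return READ_ONLY_TOOLS.has(tool);
}
```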

How was this PR tested?

Frontend unit test cases are added.

To test the PR e2e:

  1. Launch litellm by following the instructions in bin/litellm-config.yaml
  2. Launch AccessControlService
  3. All set! You can now test the agent in workflow workspace.

Was this PR authored or co-authored using generative AI tooling?

The code was co-authored with Claude Code; the PR itself was not generated by generative AI.

@github-actions github-actions bot added feature dependencies Pull requests that update a dependency file frontend Changes related to the frontend GUI build common and removed common labels Nov 4, 2025
@bobbai00 bobbai00 self-assigned this Nov 4, 2025
@bobbai00 bobbai00 requested a review from aglinxinyuan November 4, 2025 17:15
@Xiao-zhen-Liu Xiao-zhen-Liu left a comment

Left a few comments. Before I continue reviewing all the changes, I wonder if it is possible to split this into smaller PRs, e.g., you could introduce the action plan in a separate PR.

chenlica commented Nov 5, 2025

This PR is too big! Let's discuss whether we can decompose it and how.

chenlica commented Nov 7, 2025

@bobbai00 Please summarize the plan per our discussion. In the new PRs, include an architecture diagram and related screenshots.

bobbai00 commented Nov 8, 2025

> @bobbai00 Please summarize the plan per our discussion. In the new PRs, include an architecture diagram and related screenshots.

Done. They are all reflected in the PR description and Issue #4034.

@Xiao-zhen-Liu Xiao-zhen-Liu left a comment

Tested and works well. Left some comments mainly for improving clarity.

I also think we should have a configuration flag for this feature. It would be even better if admins can turn it on / off dynamically.

@Xiao-zhen-Liu Xiao-zhen-Liu left a comment

LGTM in general. Please fix the remaining minor issues, and add a flag to turn this feature on/off.

@aglinxinyuan aglinxinyuan left a comment

Left minor comments. There is quite a bit of room to improve code quality, but it doesn't affect functionality.

aglinxinyuan and others added 11 commits November 17, 2025 21:25
- Remove unused isStopping() method from agent-chat.component.ts
- Revert accidental formatting changes in context-menu.component.html

Reverts the global user-select style from 'text' back to 'none' to restore proper drag-and-drop functionality for operators in the operator menu. The 'user-select: text' change was making operators selectable, which interfered with the drag-and-drop interaction.

Adds a new 'copilotEnabled' configuration flag that allows administrators to enable/disable the AI copilot feature. The feature is disabled by default and can be controlled via the GUI_WORKFLOW_WORKSPACE_COPILOT_ENABLED environment variable.

Changes:
- Add copilot-enabled flag to gui.conf (default: false)
- Add parsing logic in GuiConfig.scala
- Add flag to ConfigResource API response
- Add copilotEnabled to frontend GuiConfig type
- Add default value in mock config service
- Conditionally render agent-panel based on flag in workspace

This addresses the PR feedback requesting a configuration flag that
admins can use to turn the feature on/off dynamically.
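The intended behavior of the flag — off unless explicitly enabled via GUI_WORKFLOW_WORKSPACE_COPILOT_ENABLED or the copilot-enabled entry in gui.conf — can be sketched as follows. The real parsing lives in GuiConfig.scala; this is a TypeScript mirror of the logic, and the function name is an assumption:

```typescript
// Parse a boolean flag from an env-style string, defaulting to false so the
// copilot stays disabled unless an admin explicitly turns it on.
function parseCopilotEnabled(raw: string | undefined): boolean {
  if (raw === undefined) return false;        // flag absent: feature off
  return raw.trim().toLowerCase() === "true"; // only an explicit "true" enables it
}

// The workspace then renders the agent panel only when the flag is on,
// e.g. with an Angular-style guard (sketch):
//   <texera-agent-panel *ngIf="config.env.copilotEnabled">
```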
@bobbai00 bobbai00 changed the title feat: LLM agents as the workflow copilot feat: LLM agents as the Texera workflow copilot Nov 26, 2025
@bobbai00 bobbai00 changed the title feat: LLM agents as the Texera workflow copilot feat: introduce LLM-based workflow copilot Nov 26, 2025
@bobbai00 bobbai00 changed the title feat: introduce LLM-based workflow copilot feat: introduce the LLM-based workflow copilot Nov 26, 2025
@bobbai00 bobbai00 merged commit 748b997 into apache:main Nov 26, 2025
12 checks passed
