Conversation
Walkthrough

The architecture page is rewritten from a short note into a layered component catalog with three platform layers (Container Platform, AI Platform, GenAI Platform), expanding component entries, descriptions, types, and licensing information.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (inconclusive)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/en/overview/architecture.mdx`:
- Around line 41-60: Normalize the casing of the MLFlow/MLflow mentions to a
single form across the document; specifically change "Experiment Tracking
(Alauda support for MLFlow)" to match "Agent Tracing (Alauda support for
MLflow)" (use "MLflow" everywhere) so both the "Experiment Tracking" and "Agent
Tracing" lines use the same "MLflow" casing.
- Around line 52-61: The table rows for components like "Fine-tuning", "Model
Quantization", "Evaluation", "Llama Stack", "Langchain", "Dify (Alauda support
for Dify)", "MCP Servers", and "Agent Evaluation" currently list Type="Open
source" but License="-" which creates ambiguity; update each row (e.g., the
"Fine-tuning" row, "Training (Alauda support for Kubeflow Trainer v2)" row,
"Model Quantization", "Evaluation", "Llama Stack", "Langchain", "Dify (Alauda
support for Dify)", "MCP Servers", and "Agent Evaluation") to use explicit
SPDX-style license identifiers where known, or replace "-" with a clear value
such as "Multiple/Varies (see component docs)" and add a short parenthetical or
link note pointing to the component docs for license details to remove
ambiguity.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`docs/en/overview/assets/architecture.png` is excluded by `!**/*.png`
📒 Files selected for processing (1)
docs/en/overview/architecture.mdx
| Experiment Tracking (Alauda support for MLFlow) | Tracking and comparing machine learning experiments | Open source | Apache Version 2.0 |

### Components in GenAI Platform Layer

| Component | Description | Type | License |
| --- | --- | --- | --- |
| Kserve (Alauda AI Model Serving/Alauda Generative AI) | Kubernetes-native model serving framework | Open source | Apache Version 2.0 |
| vLLM (Alauda AI Model Serving/Alauda Generative AI) | High-performance model inference engine for large language models | Open source | Apache Version 2.0 |
| llm-d (Alauda Generative AI) | Distributed inference engine for large language models | Open source | Apache Version 2.0 |
| Model as a Service (Alauda build of Envoy AI Gateway) | API gateway for serving AI models as a service | Open source | Apache Version 2.0 |
| Fine-tuning | Tools integrated with the workbench for fine-tuning large language models, e.g. transformers, accelerate, llama-factory etc. | Open source | - |
| Training (Alauda support for Kubeflow Trainer v2) | Kubernetes-native training job management | Open source | Apache Version 2.0 |
| Model Quantization | Tools integrated with the workbench for model quantization, e.g. llm-compressor etc. | Open source | - |
| Evaluation | Tools integrated with the workbench for evaluating model performance, e.g. lm-evaluation-harness etc. | Open source | - |
| Llama Stack (Alauda build of Llama Stack) | Framework for building applications with large language models | Open source | - |
| Langchain | Tools integrated with the workbench for building LLM applications using Langchain | Open source | - |
| Dify (Alauda support for Dify) | Platform for building AI assistants and chatbots | Open source | - |
| MCP Servers | Can integrate with various MCP servers | - | - |
| Agent Tracing (Alauda support for MLflow) | Tracing and monitoring for AI agents | Open source | Apache Version 2.0 |
Use consistent MLflow casing across the document.
Line 41 uses MLFlow while Line 60 uses MLflow. Please normalize to one form for consistency.
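The normalization this comment asks for can be applied mechanically. A minimal Python sketch (a hypothetical helper, not part of the docs tooling) that rewrites any casing variant of the bare word to the canonical `MLflow`:

```python
import re

def normalize_mlflow(text: str) -> str:
    """Rewrite any casing variant of the standalone word "mlflow" to "MLflow".

    The match is word-bounded and case-insensitive, so it also hits code
    identifiers such as `mlflow.log_metric`; run it over prose/doc files only.
    """
    return re.sub(r"\bmlflow\b", "MLflow", text, flags=re.IGNORECASE)

row = "| Experiment Tracking (Alauda support for MLFlow) | ... |"
print(normalize_mlflow(row))
# → | Experiment Tracking (Alauda support for MLflow) | ... |
```

Running it over `docs/en/overview/architecture.mdx` would make the "Experiment Tracking" and "Agent Tracing" rows agree without hand-editing each occurrence.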
| Fine-tuning | Tools integrated with the workbench for fine-tuning large language models, e.g. transformers, accelerate, llama-factory etc. | Open source | - |
| Training (Alauda support for Kubeflow Trainer v2) | Kubernetes-native training job management | Open source | Apache Version 2.0 |
| Model Quantization | Tools integrated with the workbench for model quantization, e.g. llm-compressor etc. | Open source | - |
| Evaluation | Tools integrated with the workbench for evaluating model performance, e.g. lm-evaluation-harness etc. | Open source | - |
| Llama Stack (Alauda build of Llama Stack) | Framework for building applications with large language models | Open source | - |
| Langchain | Tools integrated with the workbench for building LLM applications using Langchain | Open source | - |
| Dify (Alauda support for Dify) | Platform for building AI assistants and chatbots | Open source | - |
| MCP Servers | Can integrate with various MCP servers | - | - |
| Agent Tracing (Alauda support for MLflow) | Tracing and monitoring for AI agents | Open source | Apache Version 2.0 |
| Agent Evaluation | Tools integrated with the workbench for evaluating AI agents, e.g. RAGAS etc. | Open source | - |
Replace open-source license placeholders with explicit values.
On Lines 52-58 and Line 61, Type is Open source but License is -. In an architecture/licensing table this creates compliance ambiguity. Use explicit SPDX-style license values, or clearly mark Multiple/Varies (see component docs) with references.
Suggested table fix pattern

```diff
-| Fine-tuning | Tools integrated with the workbench for fine-tuning large language models, e.g. transformers, accelerate, llama-factory etc. | Open source | - |
+| Fine-tuning | Tools integrated with the workbench for fine-tuning large language models, e.g. transformers, accelerate, llama-factory etc. | Open source | Multiple/Varies (see component docs) |
-| Model Quantization | Tools integrated with the workbench for model quantization, e.g. llm-compressor etc. | Open source | - |
+| Model Quantization | Tools integrated with the workbench for model quantization, e.g. llm-compressor etc. | Open source | Multiple/Varies (see component docs) |
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| Fine-tuning | Tools integrated with the workbench for fine-tuning large language models, e.g. transformers, accelerate, llama-factory etc. | Open source | Multiple/Varies (see component docs) |
| Training (Alauda support for Kubeflow Trainer v2) | Kubernetes-native training job management | Open source | Apache Version 2.0 |
| Model Quantization | Tools integrated with the workbench for model quantization, e.g. llm-compressor etc. | Open source | Multiple/Varies (see component docs) |
| Evaluation | Tools integrated with the workbench for evaluating model performance, e.g. lm-evaluation-harness etc. | Open source | Multiple/Varies (see component docs) |
| Llama Stack (Alauda build of Llama Stack) | Framework for building applications with large language models | Open source | Multiple/Varies (see component docs) |
| Langchain | Tools integrated with the workbench for building LLM applications using Langchain | Open source | Multiple/Varies (see component docs) |
| Dify (Alauda support for Dify) | Platform for building AI assistants and chatbots | Open source | Multiple/Varies (see component docs) |
| MCP Servers | Can integrate with various MCP servers | - | - |
| Agent Tracing (Alauda support for MLflow) | Tracing and monitoring for AI agents | Open source | Apache Version 2.0 |
| Agent Evaluation | Tools integrated with the workbench for evaluating AI agents, e.g. RAGAS etc. | Open source | Multiple/Varies (see component docs) |
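The placeholder audit this suggestion addresses can also be checked mechanically. A small Python sketch (a hypothetical helper; it assumes four-column `Component | Description | Type | License` rows) that flags open-source rows whose License cell is still `-`:

```python
def flag_missing_licenses(markdown: str) -> list[str]:
    """Return component names whose Type is "Open source" but License is "-"."""
    flagged = []
    for line in markdown.splitlines():
        line = line.strip()
        if not line.startswith("|"):
            continue  # skip prose and blank lines
        cells = [c.strip() for c in line.strip("|").split("|")]
        if len(cells) == 4 and cells[2] == "Open source" and cells[3] == "-":
            flagged.append(cells[0])
    return flagged

table = """\
| Fine-tuning | Workbench fine-tuning tools | Open source | - |
| Training (Alauda support for Kubeflow Trainer v2) | Training job management | Open source | Apache Version 2.0 |
| MCP Servers | Can integrate with various MCP servers | - | - |
"""
print(flag_missing_licenses(table))
# → ['Fine-tuning']
```

Note the sketch only catches the "Open source + `-`" case the reviewer calls out here; rows like "MCP Servers" with `-` in both Type and License need the separate fix discussed below.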
Deploying alauda-ai with Cloudflare Pages

| Latest commit: | 833a360 |
| --- | --- |
| Status: | ✅ Deploy successful! |
| Preview URL: | https://e672b63e.alauda-ai.pages.dev |
| Branch Preview URL: | https://update-arch-overview.alauda-ai.pages.dev |
| DRA (Alauda build of NVIDIA DRA Driver for GPUs) | Dynamic Resource Allocation for GPU sharing | Open source | Apache Version 2.0 |
| Volcano (Alauda support for Volcano) | Batch job scheduling for AI workloads | Open source | Apache Version 2.0 |
| Kueue (Alauda Build of Kueue) | Job scheduling for AI workloads | Open source | Apache Version 2.0 |
| Milvus (Alauda Build of Milvus) | Vector database for embedding storage and retrieval | Open source | Apache Version 2.0 |
Should this list every component exhaustively, or only the main ones? A few are still missing, e.g. PG Vector (shown in the diagram), LWS (not in the diagram), etc.

Some components have no License listed. Also, some rows in the table feel like features rather than components.

If it's convenient to change, the area below the workbench in the architecture diagram would ideally read Jupyter/Codeserver.
Actionable comments posted: 2
♻️ Duplicate comments (1)
docs/en/overview/architecture.mdx (1)
60-60: ⚠️ Potential issue | 🟡 Minor

MCP Servers row is incomplete. Both `Type` and `License` are `-`. If these are external integrations with varying licenses, consider using "Multiple/Varies" or providing a brief explanation. If Type is unknown, clarify what "MCP Servers" refers to.

Suggested fix

```diff
-| MCP Servers | Can integrate with various MCP servers | - | - |
+| MCP Servers | Can integrate with various MCP servers | External | Multiple/Varies |
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/en/overview/architecture.mdx` at line 60, The table row for "MCP Servers" in docs/en/overview/architecture.mdx is incomplete with Type and License set to "-", so update that row: replace the Type and License cells with "Multiple / Varies" (or a short phrase like "External integrations; varies by vendor") and add a brief parenthetical clarifying what "MCP Servers" refers to (e.g., "MCP Servers (external message/control platform integrations)"). Edit the table row containing the "MCP Servers" label to change the "-" placeholders to the chosen descriptive values and ensure the wording is concise and consistent with surrounding rows.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/en/overview/architecture.mdx`:
- Line 40: The table row labeled "Guardrails" is ambiguous—update the
architecture table entry (the row containing the text "Guardrails | AI safety
and governance framework | Open source | Apache Version 2.0") to explicitly
state which guardrails project or implementation is used (e.g., "NVIDIA NeMo
Guardrails", "Guardrails AI", or "Proprietary/Custom Guardrails") and ensure the
license column matches that project; if multiple implementations are optional,
indicate that explicitly (e.g., "Pluggable: NVIDIA NeMo Guardrails (Apache-2.0)
or Guardrails AI (LICENSE)") so readers know the exact attribution and license.
- Line 54: The two table entries "Training (Alauda support for Kubeflow Trainer
v2)" and "Training Jobs (Alauda support for Kubeflow Trainer v2)" are
duplicates; either remove one of these rows or edit their descriptions so they
clearly serve different purposes—update the text for "Training" and/or "Training
Jobs" to reflect distinct scopes (e.g., one for conceptual training features and
the other for concrete training job management) and ensure only one row remains
with the accurate description.
---
Duplicate comments:
In `@docs/en/overview/architecture.mdx`:
- Line 60: The table row for "MCP Servers" in docs/en/overview/architecture.mdx
is incomplete with Type and License set to "-", so update that row: replace the
Type and License cells with "Multiple / Varies" (or a short phrase like
"External integrations; varies by vendor") and add a brief parenthetical
clarifying what "MCP Servers" refers to (e.g., "MCP Servers (external
message/control platform integrations)"). Edit the table row containing the "MCP
Servers" label to change the "-" placeholders to the chosen descriptive values
and ensure the wording is concise and consistent with surrounding rows.
`docs/en/overview/architecture.mdx` (Outdated)
| Workbench (Alauda AI Workbench) | Web-based interface for managing AI projects, including model training and inference | Proprietary | Commercial |
| Training Jobs (Alauda support for Kubeflow Trainer v2) | Kubernetes-native training job management | Open source | Apache Version 2.0 |
| Kubeflow Pipelines (Alauda support for Kubeflow Base & Alauda support for Kubeflow Pipeline) | Workflow orchestration for AI pipelines | Open source | Apache Version 2.0 |
| Guardrails | AI safety and governance framework | Open source | Apache Version 2.0 |
Clarify which "Guardrails" project this refers to.
Unlike other components in this table, "Guardrails" lacks an attribution (e.g., "Alauda support for X"). Several open-source guardrails projects exist (NVIDIA NeMo Guardrails, Guardrails AI, etc.) with different licenses. Please specify which implementation is used or if this is a proprietary component.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/en/overview/architecture.mdx` at line 40, The table row labeled
"Guardrails" is ambiguous—update the architecture table entry (the row
containing the text "Guardrails | AI safety and governance framework | Open
source | Apache Version 2.0") to explicitly state which guardrails project or
implementation is used (e.g., "NVIDIA NeMo Guardrails", "Guardrails AI", or
"Proprietary/Custom Guardrails") and ensure the license column matches that
project; if multiple implementations are optional, indicate that explicitly
(e.g., "Pluggable: NVIDIA NeMo Guardrails (Apache-2.0) or Guardrails AI
(LICENSE)") so readers know the exact attribution and license.
| llm-d (Alauda Generative AI) | Distributed inference engine for large language models | Open source | Apache Version 2.0 |
| Model as a Service (Alauda build of Envoy AI Gateway) | API gateway for serving AI models as a service | Open source | Apache Version 2.0 |
| Fine-tuning | Tools integrated with the workbench for fine-tuning large language models, e.g. transformers, accelerate, llama-factory etc. | Open source | - |
| Training (Alauda support for Kubeflow Trainer v2) | Kubernetes-native training job management | Open source | Apache Version 2.0 |
Potential duplicate of "Training Jobs" entry.
Line 54 "Training (Alauda support for Kubeflow Trainer v2)" appears to duplicate Line 38 "Training Jobs (Alauda support for Kubeflow Trainer v2)" with an identical description. Consider removing this duplicate or clarifying how they differ.
Suggested fix

```diff
-| Training (Alauda support for Kubeflow Trainer v2) | Kubernetes-native training job management | Open source | Apache Version 2.0 |
```

Or if they serve different purposes, update the descriptions to differentiate them.
📝 Committable suggestion
| Training (Alauda support for Kubeflow Trainer v2) | Kubernetes-native training job management | Open source | Apache Version 2.0 |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/en/overview/architecture.mdx` at line 54, The two table entries
"Training (Alauda support for Kubeflow Trainer v2)" and "Training Jobs (Alauda
support for Kubeflow Trainer v2)" are duplicates; either remove one of these
rows or edit their descriptions so they clearly serve different purposes—update
the text for "Training" and/or "Training Jobs" to reflect distinct scopes (e.g.,
one for conceptual training features and the other for concrete training job
management) and ensure only one row remains with the accurate description.
Actionable comments posted: 1
♻️ Duplicate comments (4)
docs/en/overview/architecture.mdx (4)
40-40: ⚠️ Potential issue | 🟡 Minor

`Guardrails (Coming soon)` is ambiguous for an "Open source / Apache 2.0" claim. If the implementation is not finalized, set Type/License to `TBD`; otherwise name the exact project to match the declared license.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/en/overview/architecture.mdx` at line 40, The table row containing the label "Guardrails (Coming soon)" is ambiguous given the "Open source / Apache Version 2.0" license claim; update that table cell so the Type/License is accurate: if the implementation and license are not finalized change the text to "TBD" (or "Guardrails — TBD") and set License to "TBD", otherwise replace "Guardrails (Coming soon)" with the exact project name and ensure the License column matches the project's actual license (e.g., "Apache 2.0"); locate and edit the markdown table row that currently reads "| Guardrails (Coming soon) | AI safety and governance framework | Open source | Apache Version 2.0 |".
38-38: ⚠️ Potential issue | 🟡 Minor

`Training Jobs` and `Training` currently read as duplicates. Both rows describe Kubeflow Trainer v2 job management with near-identical scope. Either merge them or clarify distinct responsibilities per layer.
Also applies to: 54-54
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/en/overview/architecture.mdx` at line 38, The two table rows "Training Jobs (Alauda support for Kubeflow Trainer v2)" and "Training" are duplicated in scope; remove redundancy by either merging them into a single row that consolidates Kubernetes-native job management and Kubeflow Trainer v2 support, or split responsibilities clearly (e.g., make "Training Jobs" cover cluster-level job orchestration/Kubernetes details and make "Training" cover framework-level training lifecycle/experiment semantics). Update the table entries so the row titles "Training Jobs" and "Training" have distinct descriptions (or keep only the merged row), and ensure the wording for Kubeflow Trainer v2 and Alauda support appears once and consistently across the table.
53-56: ⚠️ Potential issue | 🟠 Major

Replace `-` placeholders in Type/License with explicit values. Several rows still use `-` for compliance-critical metadata (`Fine-tuning`, `Model Quantization`, `Evaluation`, `MCP Servers`, `Agent Evaluation`). Please use explicit values like `Multiple/Varies (see component docs)` and keep Type non-empty.

Suggested table adjustment

```diff
-| Fine-tuning | Tools integrated with the workbench for fine-tuning large language models, e.g. transformers, accelerate, llama-factory etc. | Open source | - |
+| Fine-tuning | Tools integrated with the workbench for fine-tuning large language models, e.g. transformers, accelerate, llama-factory etc. | Open source | Multiple/Varies (see component docs) |
-| Model Quantization | Tools integrated with the workbench for model quantization, e.g. llm-compressor etc. | Open source | - |
+| Model Quantization | Tools integrated with the workbench for model quantization, e.g. llm-compressor etc. | Open source | Multiple/Varies (see component docs) |
-| Evaluation | Tools integrated with the workbench for evaluating model performance, e.g. lm-evaluation-harness etc. | Open source | - |
+| Evaluation | Tools integrated with the workbench for evaluating model performance, e.g. lm-evaluation-harness etc. | Open source | Multiple/Varies (see component docs) |
-| MCP Servers | Can integrate with various MCP servers | - | - |
+| MCP Servers | Can integrate with various MCP servers | Multiple/Varies | Multiple/Varies (see component docs) |
-| Agent Evaluation | Tools integrated with the workbench for evaluating AI agents, e.g. RAGAS etc. | Open source | - |
+| Agent Evaluation | Tools integrated with the workbench for evaluating AI agents, e.g. RAGAS etc. | Open source | Multiple/Varies (see component docs) |
```

Also applies to: 60-62
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/en/overview/architecture.mdx` around lines 53 - 56, Update the table rows for the components "Fine-tuning", "Model Quantization", "Evaluation", "MCP Servers", and "Agent Evaluation" to remove '-' placeholders in the Type and License columns and replace them with explicit values (e.g., set License to "Multiple/Varies (see component docs)" and ensure Type is descriptive/non-empty such as "Open source / Varies" or "Tooling / Varies"); edit the rows shown in the diff (the lines containing "Fine-tuning", "Model Quantization", "Evaluation" and the other mentioned components) so every Type and License cell contains explicit text rather than '-' and keep phrasing consistent with the surrounding rows.
42-42: ⚠️ Potential issue | 🟡 Minor

Use consistent `MLflow` casing. Line 42 uses `MLFlow`, while Line 61 uses `MLflow`. Normalize to one form (`MLflow`) for consistency.

Also applies to: 61-61
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/en/overview/architecture.mdx` at line 42, Replace inconsistent casing of "MLFlow" with the canonical "MLflow" across the document; specifically update the table cell string "Experiment Tracking (Alauda support for MLFlow)" (and the other occurrence around the same table row) to use "MLflow" so both instances match the preferred casing.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/en/overview/architecture.mdx`:
- Line 59: Update the table row for "Dify (Alauda support for Dify)" in
docs/en/overview/architecture.mdx to replace the ambiguous "Apache Version 2.0
(modified)" with the correct license name "Dify Open Source License" and either
add a parenthetical link to the project's official LICENSE or append a short
clarification listing the two key restrictions: "Commercial multi-tenant (SaaS)
deployments require explicit written authorization" and "Dify frontend
components must retain the Dify logo and copyright notice (cannot be
removed/modified)"; ensure the updated cell text is concise and clearly
references the LICENSE for full terms.
---
Duplicate comments:
In `@docs/en/overview/architecture.mdx`:
- Line 40: The table row containing the label "Guardrails (Coming soon)" is
ambiguous given the "Open source / Apache Version 2.0" license claim; update
that table cell so the Type/License is accurate: if the implementation and
license are not finalized change the text to "TBD" (or "Guardrails — TBD") and
set License to "TBD", otherwise replace "Guardrails (Coming soon)" with the
exact project name and ensure the License column matches the project's actual
license (e.g., "Apache 2.0"); locate and edit the markdown table row that
currently reads "| Guardrails (Coming soon) | AI safety and governance framework
| Open source | Apache Version 2.0 |".
- Line 38: The two table rows "Training Jobs (Alauda support for Kubeflow
Trainer v2)" and "Training" are duplicated in scope; remove redundancy by either
merging them into a single row that consolidates Kubernetes-native job
management and Kubeflow Trainer v2 support, or split responsibilities clearly
(e.g., make "Training Jobs" cover cluster-level job orchestration/Kubernetes
details and make "Training" cover framework-level training lifecycle/experiment
semantics). Update the table entries so the row titles "Training Jobs" and
"Training" have distinct descriptions (or keep only the merged row), and ensure
the wording for Kubeflow Trainer v2 and Alauda support appears once and
consistently across the table.
- Around line 53-56: Update the table rows for the components "Fine-tuning",
"Model Quantization", "Evaluation", "MCP Servers", and "Agent Evaluation" to
remove '-' placeholders in the Type and License columns and replace them with
explicit values (e.g., set License to "Multiple/Varies (see component docs)" and
ensure Type is descriptive/non-empty such as "Open source / Varies" or "Tooling
/ Varies"); edit the rows shown in the diff (the lines containing "Fine-tuning",
"Model Quantization", "Evaluation" and the other mentioned components) so every
Type and License cell contains explicit text rather than '-' and keep phrasing
consistent with the surrounding rows.
- Line 42: Replace inconsistent casing of "MLFlow" with the canonical "MLflow"
across the document; specifically update the table cell string "Experiment
Tracking (Alauda support for MLFlow)" (and the other occurrence around the same
table row) to use "MLflow" so both instances match the preferred casing.
| Evaluation | Tools integrated with the workbench for evaluating model performance, e.g. lm-evaluation-harness etc. | Open source | - |
| Llama Stack (Alauda build of Llama Stack) | Framework for building applications with large language models | Open source | MIT |
| Langchain | Tools integrated with the workbench for building LLM applications using Langchain | Open source | MIT |
| Dify (Alauda support for Dify) | Platform for building AI assistants and chatbots | Open source | Apache Version 2.0 (modified) |
🧩 Analysis chain
🌐 Web query:
langgenius/dify official license Apache-2.0 modified terms
💡 Result:
langgenius/dify is not plain Apache-2.0; it uses the “Dify Open Source License”, described as a modified Apache License 2.0 with additional conditions. [1][2]
Apache-2.0 "modified terms" / additional conditions (from the official LICENSE):

- **Multi-tenant (SaaS) restriction**: You may not use the Dify source code to operate a multi-tenant environment unless explicitly authorized in writing (tenant ≈ one workspace). [1]
- **Logo / copyright removal restriction (frontend)**: If you use Dify's frontend, you may not remove or modify the Dify LOGO or copyright information in the console/applications. This does not apply if your use does not involve Dify's frontend (frontend defined as `web/` or the Docker "web" image). [1]
- **Contributor-specific terms**: Contributors agree that:
  - the producer can change the open-source agreement to be more strict or relaxed, and
  - contributed code may be used for commercial purposes (including cloud business operations). [1]

The license then states that other rights/restrictions follow Apache License 2.0. [1]
Sources: official Dify LICENSE text (raw) [1] and the repo license summary/label on GitHub [2].
[1] https://raw.githubusercontent.com/langgenius/dify/main/LICENSE
[2] https://github.com/langgenius/dify
🏁 Script executed:

```shell
# Check if there's a LICENSE or COPYING file in the repository
find . -type f -name "LICENSE*" -o -name "COPYING*" | head -20
```

Repository: alauda/aml-docs
Length of output: 41
🏁 Script executed:

```shell
# Search for license information in the repository root and common locations
if [ -f "LICENSE" ]; then
  echo "=== LICENSE file ==="
  head -50 LICENSE
fi
if [ -f "LICENSE.md" ]; then
  echo "=== LICENSE.md file ==="
  head -50 LICENSE.md
fi
if [ -f "LICENSE.txt" ]; then
  echo "=== LICENSE.txt file ==="
  head -50 LICENSE.txt
fi
```

Repository: alauda/aml-docs
Length of output: 41
Update license expression to match Dify's official license name and include key restrictions.
The documentation currently shows Apache Version 2.0 (modified), which is ambiguous and incomplete. Dify uses the "Dify Open Source License" — an Apache-2.0-based license with specific additional restrictions that should be documented:
- Multi-tenant (SaaS) restriction: Commercial multi-tenant deployments require explicit written authorization
- Logo/copyright restriction (frontend): The Dify logo and copyright notice in the UI cannot be removed or modified when using Dify's frontend components
Clarify the license row to reference the actual license name and either link to the official LICENSE file or document these key terms to avoid misleading users about permitted use cases.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/en/overview/architecture.mdx` at line 59, Update the table row for "Dify
(Alauda support for Dify)" in docs/en/overview/architecture.mdx to replace the
ambiguous "Apache Version 2.0 (modified)" with the correct license name "Dify
Open Source License" and either add a parenthetical link to the project's
official LICENSE or append a short clarification listing the two key
restrictions: "Commercial multi-tenant (SaaS) deployments require explicit
written authorization" and "Dify frontend components must retain the Dify logo
and copyright notice (cannot be removed/modified)"; ensure the updated cell text
is concise and clearly references the LICENSE for full terms.
`docs/en/overview/architecture.mdx` (Outdated)
| Component | Description | Type | License |
| --- | --- | --- | --- |
| Model Catalog (Alauda AI/Alauda AI Essentials) | Centralized repository for managing AI models and their metadata | Proprietary | Commercial |
| Model Registry (Alauda support for Kubeflow Model Registry) | Git-based storage for model files and versioning | Open source | Apache Version 2.0 |
The description isn't quite right: if the component is "Kubeflow Model Registry", it shouldn't be described as "Git-based storage".
docs/en/overview/architecture.mdx
Outdated
| Workbench (Alauda AI Workbench) | Web-based interface for managing AI projects, including model training and inference | Proprietary | Commercial |
| Training Jobs (Alauda support for Kubeflow Trainer v2) | Kubernetes-native training job management | Open source | Apache Version 2.0 |
| Kubeflow Pipelines (Alauda support for Kubeflow Base & Alauda support for Kubeflow Pipeline) | Workflow orchestration for AI pipelines | Open source | Apache Version 2.0 |
| Guardrails | AI safety and governance framework | Open source | Apache Version 2.0 |
"Guardrails" is not yet available in 2.0; should entries like this be marked "coming soon", or removed?
- update arch overview
- update
- update