declare required ACP version for knative operator #129
Conversation
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (1)
🚧 Files skipped from review as they are similar to previous changes (1)
Walkthrough
Added Changes
Sequence Diagram(s)
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Possibly related PRs
Suggested reviewers
Poem
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 inconclusive)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
✨ Finishing Touches
🧪 Generate unit tests (beta)
Actionable comments posted: 2
🧹 Nitpick comments (1)
docs/en/installation/ai-generative.mdx (1)
30-30: Expand GIE on first mention. The prerequisites table introduces GIE before the acronym is defined. Expanding it here would make the dependency list self-contained.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/en/installation/ai-generative.mdx` at line 30, Expand the acronym "GIE" on first mention in the prerequisites table so readers immediately understand it; update the table entry that currently shows "| GIE | Built-in | Integrated GIE (gateway-api-inference-extension)..." to explicitly spell out "GIE (Gateway API Inference Extension)" or similar on first occurrence and keep the shortened "GIE" thereafter, ensuring the symbol "GIE" and the phrase "gateway-api-inference-extension" (or the proper title) are present in the same table cell for clarity.
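As a concrete sketch of the cell update this prompt asks for, the substitution can be done with one `sed` pass. The row text below is copied from the review comment itself; the exact cell contents in the real file may differ, so the match pattern is an assumption:

```shell
# Row text as quoted in the review comment (may differ from the real file).
row='| GIE | Built-in | Integrated GIE (gateway-api-inference-extension) |'

# Spell out the acronym in the first cell; later mentions keep the short form.
fixed=$(printf '%s\n' "$row" \
  | sed 's/| GIE |/| GIE (Gateway API Inference Extension) |/')

printf '%s\n' "$fixed"
```

Applied to the repository, this would be a `sed -i` over the prerequisites table in `docs/en/installation/ai-generative.mdx`; anchoring on the full cell `| GIE |` keeps the later short-form mentions untouched.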
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/en/installation/ai-generative.mdx`:
- Around line 103-104: Rewrite the two steps to use direct, grammatical
imperative sentences: change "Upload the new version for package of **Alauda
Build of KServe** plugin to ACP." to something like "Upload the new version of
the Alauda Build of KServe plugin to ACP." and change "you will see the `Alauda
Build of KServe` can be upgraded." to something like "The Alauda Build of KServe
will be listed with an available Upgrade option; click Upgrade." Replace the
original lines (the sentence containing "Upload the new version for package of"
and the sentence containing "you will see the `Alauda Build of KServe` can be
upgraded") with these direct, grammatical imperative forms.
- Around line 5-7: The docs use two different product names causing confusion;
locate occurrences of the phrase "Alauda Generative AI" in the architecture page
(the sections that describe the serving components) and rename them to "Alauda
Build of KServe" so the installation and architecture docs match; update every
matching instance (including the three component descriptions and any summary
mentions) and run a quick search across the docs to ensure no remaining "Alauda
Generative AI" tokens remain.
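The rename-and-verify pass described in this comment can be sketched as below. The sample line is illustrative only; in the repository the substitution would target the architecture page and the final check would be a docs-wide `rg -n 'Alauda Generative AI' docs/`:

```shell
# Illustrative sample line; the real pass targets docs/en/overview/architecture.mdx.
line='llm-d (Alauda Generative AI) distributed inference engine'

# Rename every occurrence so the architecture page matches the installation docs.
renamed=$(printf '%s\n' "$line" | sed 's/Alauda Generative AI/Alauda Build of KServe/g')

# Count stale tokens; grep -c prints 0 (and exits non-zero) when none remain.
stale=$(printf '%s\n' "$renamed" | grep -c 'Alauda Generative AI' || true)

printf 'stale tokens: %s\n%s\n' "$stale" "$renamed"
```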
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: f5ed262e-e556-4597-ae26-43c44086e5f7
📒 Files selected for processing (2)
docs/en/installation/ai-cluster.mdx
docs/en/installation/ai-generative.mdx
1. Upload the new version for package of **Alauda Build of KServe** plugin to ACP.
2. Go to the `Administrator` -> `Clusters` -> `Target Cluster` -> `Functional Components` page, then click the `Upgrade` button, and you will see the `Alauda Build of KServe` can be upgraded.
Tighten the upgrade wording.
"Upload the new version for package of ..." and "... you will see the ... can be upgraded" read awkwardly in user-facing instructions. Please rewrite these two steps as direct, grammatical actions before publishing.
Deploying alauda-ai with Cloudflare Pages

| Latest commit: | 21aa499 |
| Status: | ✅ Deploy successful! |
| Preview URL: | https://2d78de5d.alauda-ai.pages.dev |
| Branch Preview URL: | https://fix-kantive-acp-versions.alauda-ai.pages.dev |

/test-pass
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/en/overview/architecture.mdx`:
- Line 50: Update the vLLM table row label so it clearly identifies vLLM as its
own inference runtime instead of associating it with KServe: replace the
substring "(Alauda AI/Alauda Build of KServe)" in the vLLM row (the table line
containing "vLLM (Alauda AI/Alauda Build of KServe)") with a consistent branding
choice such as "(Alauda Build of vLLM)" or "(Alauda AI vLLM)"; ensure the change
mirrors the pattern used for other entries like "KServe" and "llm-d" so the
table consistently shows separate runtimes.
- Line 49: The table row string "| Kserve (Alauda Build of KServe) |
Kubernetes-native model serving framework | Open source | Apache Version 2.0 |"
uses incorrect capitalization; change the leading "Kserve" to "KServe" so it
matches the official name and the parenthetical "(Alauda Build of KServe)".
Update that table cell text to "KServe" and ensure the rest of the row remains
unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 6b6963d4-f2bf-422e-87cc-26a4ee7453d3
📒 Files selected for processing (1)
docs/en/overview/architecture.mdx
docs/en/overview/architecture.mdx (Outdated)
| Kserve (Alauda AI Model Serving/Alauda Generative AI) | Kubernetes-native model serving framework | Open source | Apache Version 2.0 |
| vLLM (Alauda AI Model Serving/Alauda Generative AI) | High-performance model inference engine for large language models | Open source | Apache Version 2.0 |
| llm-d (Alauda Generative AI) | Distributed inference engine for large language models | Open source | Apache Version 2.0 |
| Kserve (Alauda Build of KServe) | Kubernetes-native model serving framework | Open source | Apache Version 2.0 |
🧩 Analysis chain
🏁 Script executed:
# First, verify the exact content of the file at the specified line
cat -n docs/en/overview/architecture.mdx | sed -n '45,55p'
Repository: alauda/aml-docs
Length of output: 1199
🌐 Web query:
KServe official naming kubernetes model serving
💡 Result:
The official name of the Kubernetes model-serving project is KServe. (kubeflow.org)
It was previously named KFServing, and the project was renamed/rebranded to KServe in September 2021 when the kubeflow/kfserving repo moved to the independent KServe GitHub organization. (kubeflow.org)
In short: KServe (formerly KFServing) is the correct/official naming when referring to Kubernetes-based model serving in the Kubeflow ecosystem. (kubeflow.org)
Citations:
- 1: https://www.kubeflow.org/docs/components/kserve/introduction/?utm_source=openai
- 2: https://www.kubeflow.org/docs/components/kserve/introduction/?utm_source=openai
- 3: https://www.kubeflow.org/docs/components/kserve/introduction/?utm_source=openai
🏁 Script executed:
# Search for all instances of Kserve vs KServe in the repo
rg -i "kserve|vllm" docs/ -A 1 -B 1
Repository: alauda/aml-docs
Length of output: 30251
Fix KServe capitalization on line 49.
Line 49 uses Kserve, which contradicts both the official KServe naming convention and the parenthetical reference on the same line that correctly shows KServe.
Suggested fix
-| Kserve (Alauda Build of KServe) | Kubernetes-native model serving framework | Open source | Apache Version 2.0 |
+| KServe (Alauda Build of KServe) | Kubernetes-native model serving framework | Open source | Apache Version 2.0 |

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| Kserve (Alauda Build of KServe) | Kubernetes-native model serving framework | Open source | Apache Version 2.0 |
| KServe (Alauda Build of KServe) | Kubernetes-native model serving framework | Open source | Apache Version 2.0 |
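A minimal sketch of this one-character fix, demonstrated on the row text itself rather than the real file (an in-repo fix would point `sed -i` at docs/en/overview/architecture.mdx):

```shell
# Row as it currently appears on line 49 of the architecture page.
row='| Kserve (Alauda Build of KServe) | Kubernetes-native model serving framework | Open source | Apache Version 2.0 |'

# Anchor on the leading cell so only the runtime name is recapitalized;
# the parenthetical "(Alauda Build of KServe)" is already correct.
fixed=$(printf '%s\n' "$row" | sed 's/^| Kserve (/| KServe (/')

printf '%s\n' "$fixed"
```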
| vLLM (Alauda AI Model Serving/Alauda Generative AI) | High-performance model inference engine for large language models | Open source | Apache Version 2.0 |
| llm-d (Alauda Generative AI) | Distributed inference engine for large language models | Open source | Apache Version 2.0 |
| Kserve (Alauda Build of KServe) | Kubernetes-native model serving framework | Open source | Apache Version 2.0 |
| vLLM (Alauda AI/Alauda Build of KServe) | High-performance model inference engine for large language models | Open source | Apache Version 2.0 |
🧩 Analysis chain
🏁 Script executed:
# First, let's check the architecture.mdx file around line 50
cat -n docs/en/overview/architecture.mdx | sed -n '45,55p'
Repository: alauda/aml-docs
Length of output: 1199
🏁 Script executed:
# Check the inference_service.mdx file around the mentioned lines
cat -n docs/en/model_inference/inference_service/functions/inference_service.mdx | sed -n '38,52p'
Repository: alauda/aml-docs
Length of output: 903
🏁 Script executed:
# Let's also search for other references to vLLM in the documentation
rg -n "vLLM" docs/ --type markdown
Repository: alauda/aml-docs
Length of output: 41
Clarify the vLLM product label to reflect it as a separate inference runtime.
The label (Alauda AI/Alauda Build of KServe) is inconsistent and misleading. It incorrectly associates vLLM with KServe when vLLM is a distinct inference framework, as documented in docs/en/model_inference/inference_service/functions/inference_service.mdx:43–47, which lists vLLM as a separate mainstream inference framework alongside MLServer. Additionally, the label breaks the pattern used elsewhere in the table (e.g., KServe and llm-d both use "Alauda Build of X"). Update the label to clarify vLLM as its own runtime—either (Alauda Build of vLLM), (Alauda AI vLLM), or a single consistent branding choice.
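Taking one of the reviewer's suggested labels, `(Alauda Build of vLLM)`, the relabeling can be sketched like this. The row text is copied from the diff above, and the chosen branding is a reviewer suggestion rather than a settled product name:

```shell
# vLLM row as shown in the diff; "(Alauda Build of vLLM)" is one of the
# reviewer's suggested options, not a confirmed product name.
row='| vLLM (Alauda AI/Alauda Build of KServe) | High-performance model inference engine for large language models | Open source | Apache Version 2.0 |'

# '|' serves as the sed delimiter because the pattern itself contains '/'.
fixed=$(printf '%s\n' "$row" \
  | sed 's|(Alauda AI/Alauda Build of KServe)|(Alauda Build of vLLM)|')

printf '%s\n' "$fixed"
```

Whichever branding is picked, applying it the same way to every runtime row keeps the table consistent with the `KServe` and `llm-d` entries.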
Force-pushed from 0168a39 to 21aa499 (Compare)