**Walkthrough**

Adds repo-wide editor/lint/cspell configuration and updates the Doom toolchain; adjusts documentation files for formatting and minor markup corrections. No functional code or APIs are changed.
**Estimated code review effort**

🎯 2 (Simple) | ⏱️ ~8 minutes
**Actionable comments posted: 3**
🧹 Nitpick comments (8)
.cspell/compound.txt (1)
1-5: **Verify external cspell config for case sensitivity**

We see that `cspell.config.mjs` simply re-exports the default from `@alauda/doom/cspell` and contains no explicit `caseSensitive`, `ignoreCase`, or `allowCompoundWords` settings. Without inspecting that shared config, we can’t confirm whether words are matched case-insensitively.

Please manually check the `@alauda/doom/cspell` configuration:

- If it treats words case-sensitively, add capitalized variants for:
  - `Knative`
  - `KServe`
  - `XInference`
- If it’s already case-insensitive, no changes are needed in `.cspell/compound.txt`.

docs/en/model_inference/inference_service/functions/inference_service.mdx (3)
67-69: **Admonition fix looks good; minor grammar polish suggested.**

Consider tightening the sentence for readability.

Apply this diff:

```diff
-:::tip
-Custom publishing inference service requires manual setting of parameters. You can also create a "template" by combining input parameters for quick publishing of inference services.
+:::tip
+Custom publishing an inference service requires manually setting parameters. You can also create a template by combining input parameters for quick publishing of inference services.
 :::
```
187-187: **Tighten wording for clarity.**

Small style improvement; reduces redundancy.

```diff
-In the **Inference Experience** interface, common parameters and default values are pre-made, and any custom parameters can also be added.
+In the **Inference Experience** interface, common parameters and default values are pre-filled; you can also add custom parameters.
```
219-222: **Avoid inline MDX comments inside prose; prefer code formatting (and consider removing duplicate row).**

- The inline MDX comment `{/* lint ignore unit-case */}` may be unnecessary if scientific notation is wrapped in code spans, and it slightly distracts in source.
- The `repetition_penalty` row appears twice (once under Preset Parameters at line 199 and again under Other Parameters at line 221). Consider removing the duplicate to avoid confusion.

Proposed edits:

```diff
-| `epsilon_cutoff` | float | If set to a floating-point number strictly between 0 and 1, only tokens with conditional probabilities greater than `epsilon_cutoff` will be sampled. Suggested values range from {/* lint ignore unit-case */} 3e-4 to 9e-4, depending on the model size. |
-| `eta_cutoff` | float | Eta sampling is a hybrid of local typical sampling and epsilon sampling. If set to a floating-point number strictly between 0 and 1, a token will only be considered if it is greater than `eta_cutoff` or sqrt(`eta_cutoff`) * exp(-entropy(softmax(next_token_logits))). Suggested values range from {/* lint ignore unit-case */} 3e-4 to 2e-3, depending on the model size. |
+| `epsilon_cutoff` | float | If set to a floating-point number strictly between 0 and 1, only tokens with conditional probabilities greater than `epsilon_cutoff` will be sampled. Suggested values range from `3e-4` to `9e-4`, depending on the model size. |
+| `eta_cutoff` | float | Eta sampling is a hybrid of local typical sampling and epsilon sampling. If set to a floating-point number strictly between 0 and 1, a token will only be considered if it is greater than `eta_cutoff` or sqrt(`eta_cutoff`) * exp(-entropy(softmax(next_token_logits))). Suggested values range from `3e-4` to `2e-3`, depending on the model size. |
```

If you apply the change, please re-run `doom lint` to confirm the linter no longer needs the inline ignore comment. If it still complains, we can add a targeted lint-disable directive just above the table instead.
docs/en/model_inference/model_management/functions/model_repository.mdx (4)

26-31: **Specify an approximate upload size threshold (optional).**

“limited to small/medium sizes” is vague. Adding a rough threshold (e.g., <100 MB) or a link to limits improves user expectations and reduces support churn.

Apply this diff if acceptable:

```diff
-  - Drag-and-drop files/folders (limited to small/medium sizes).
+  - Drag-and-drop files/folders (best for small/medium sizes; e.g., <100 MB).
```
52-60: **Notebook example is helpful; add a terminal variant (optional).**

For non-notebook users, a terminal-friendly command avoids confusion about the leading `!`.

You could add a terminal example below the notebook snippet:

```bash
git clone https://aml-public/resnet50.git
```

62-67: **Technical notes are solid; consider adding common LFS patterns (optional).**

Many Transformer models use additional formats. You can include a short example to preempt large-file push issues.

Suggested addition after the existing example:

```gitattributes
*.safetensors filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
```

118-119: **Track additional common ML file types with Git LFS (optional).**

Safetensors/ONNX/ckpt are frequently used and often exceed hosting limits.

```diff
- git lfs track "*.h5" "*.bin" "*.pt"
+ git lfs track "*.h5" "*.bin" "*.pt" "*.safetensors" "*.onnx" "*.ckpt"
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`yarn.lock` is excluded by `!**/yarn.lock`, `!**/*.lock`
📒 Files selected for processing (8)

- `.cspell/compound.txt` (1 hunks)
- `.editorconfig` (1 hunks)
- `cspell.config.mjs` (1 hunks)
- `docs/en/model_inference/inference_service/functions/inference_service.mdx` (4 hunks)
- `docs/en/model_inference/inference_service/index.mdx` (1 hunks)
- `docs/en/model_inference/model_management/functions/model_repository.mdx` (2 hunks)
- `eslint.config.mjs` (1 hunks)
- `package.json` (1 hunks)
🔇 Additional comments (9)
package.json (1)
3-3: **Upgrade to @alauda/doom ^1.10.9 looks good.**

Aligns with the new eslint/cspell configs introduced in this PR.
docs/en/model_inference/inference_service/index.mdx (1)
1-3: **Whitespace and newline cleanup LGTM.**

Blank line after H1 and ensured trailing newline improve MDX/remark-lint compatibility.
.editorconfig (1)
1-10: **Baseline EditorConfig is solid.**

Standardizes indentation, newlines, and trailing whitespace across the repo.
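For readers who don’t have the diff open, a baseline like the one praised here typically looks as follows. This is a hypothetical sketch, not the PR’s actual file — the concrete values (indent size, charset) are assumptions; only the categories of settings come from the comment above.

```ini
# Hypothetical baseline .editorconfig — settings shown are common defaults
# matching the behaviors called out in the review (indentation, newlines,
# trailing whitespace); the real file in the PR may differ.
root = true

[*]
charset = utf-8
indent_style = space
indent_size = 2
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true
```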
eslint.config.mjs (1)
1-3: **ESLint flat config via Doom is correctly set up.**

ESM import + passing docs URL is consistent with Doom’s API.
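The shape described (“ESM import + passing docs URL”) could look roughly like the sketch below. The factory name `createConfig` and the single-URL call signature are assumptions inferred from this comment, not verified `@alauda/doom` API — check the package’s docs for the real export.

```javascript
// Sketch only — `createConfig` and its signature are hypothetical; the
// review only confirms "ESM import + passing docs URL" as the pattern.
import createConfig from '@alauda/doom/eslint' // subpath assumed

// Hypothetical: the docs site URL is forwarded to Doom's lint rules.
export default createConfig('https://docs.example.com')
```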
cspell.config.mjs (1)
1-1: **LGTM: Re-exporting DOOM's CSpell config is correct.**

The re-export pattern is valid ESM and will surface a default export as expected. Assuming the subpath `@alauda/doom/cspell` is exported by the package (it typically is), this should integrate cleanly with `doom lint`.
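If the case-sensitivity concern raised for `.cspell/compound.txt` ever materializes, the re-export could be extended in place. A sketch, assuming the shared default is a plain config object that can be spread — `caseSensitive` and `words` are standard cspell configuration fields, but the shape of Doom’s export is an assumption:

```javascript
// Sketch only — assumes @alauda/doom/cspell exports a spreadable config
// object; `caseSensitive` and `words` are standard cspell settings.
import base from '@alauda/doom/cspell'

export default {
  ...base,
  // Force case-insensitive matching so `knative` and `Knative` both pass,
  // making capitalized variants in .cspell/compound.txt unnecessary.
  caseSensitive: false,
  // Alternatively, list the capitalized variants explicitly:
  words: [...(base.words ?? []), 'Knative', 'KServe', 'XInference'],
}
```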
docs/en/model_inference/inference_service/functions/inference_service.mdx (1)
268-268: **LGTM: Link formatting and trailing newline fixed.**

Thanks for aligning the link presentation; this also tends to reduce markdown/MDX lint noise.
docs/en/model_inference/model_management/functions/model_repository.mdx (3)
6-6: **LGTM: Clear, concise introduction.**

The rewritten overview is accurate and more scannable.
22-25: **LGTM: Section headings and bullets align with a lint-friendly style.**

The conversions improve structure and readability.
103-138: **LGTM: Step-by-step Git/LFS workflow is clear and cautious.**

Good inclusion of the migration warning and rationale; placement after commit matches the note about first public push vs. coordination.
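The ordering praised here — get the tracking rules committed before any large files — can be sketched with plain git. Since `git lfs track` only appends rules to `.gitattributes`, the sketch below writes them directly (so it runs without git-lfs installed) and uses `git check-attr` to confirm the LFS filter would apply. Repo and file names are hypothetical.

```shell
# Sketch of the workflow the docs describe; repo/file names are made up.
# `git lfs track` just appends lines to .gitattributes, so we write them
# directly to keep this runnable without git-lfs installed.
git init -q demo-model-repo
cd demo-model-repo

# Tracking rules — same patterns as the suggested `git lfs track` call.
cat > .gitattributes <<'EOF'
*.h5 filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
EOF

# Stage the attributes file BEFORE adding any weights, so the LFS filter
# is in effect from the first commit that touches large files.
git add .gitattributes

# Confirm a weights file would be routed through the LFS filter:
git check-attr filter -- weights.safetensors
# → weights.safetensors: filter: lfs
```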
**Summary by CodeRabbit**

- Documentation
- Chores
- Style