
feat: enable and fix doom lint issues #12

Merged

JounQin merged 1 commit into master from feat/lint on Aug 13, 2025

Conversation

@JounQin (Member) commented Aug 13, 2025

Summary by CodeRabbit

  • Documentation

    • Improved formatting and readability across Inference Service and Model Repository docs, including clearer tips, consistent lists, corrected headings, and link/bullet fixes.
  • Chores

    • Standardized coding style via EditorConfig.
    • Introduced ESLint configuration and lint script to streamline quality checks.
    • Expanded spell-check dictionary to recognize common ML and platform terms.
    • Updated development tooling dependency to the latest version.
  • Style

    • Whitespace, indentation, and trailing newline adjustments for consistency in docs.

@coderabbitai bot commented Aug 13, 2025

Walkthrough

Adds repo-wide editor/lint/cspell configuration and updates the Doom toolchain; adjusts documentation files for formatting and minor markup corrections. No functional code or APIs are changed.

Changes

| Cohort / File(s) | Summary of changes |
| --- | --- |
| **Documentation formatting**<br/>`docs/en/model_inference/inference_service/functions/inference_service.mdx`, `docs/en/model_inference/inference_service/index.mdx`, `docs/en/model_inference/model_management/functions/model_repository.mdx` | Normalize headings, admonitions, whitespace, and lists; restructure to numbered lists and tidy code block formatting; no content or API changes. |
| **Spellcheck configuration**<br/>`.cspell/compound.txt`, `cspell.config.mjs` | Add custom CSpell compounds (amlclusters, amlnamespaces, knative, kserve, xinference) and re-export CSpell config from `@alauda/doom`. |
| **Editor and lint tooling**<br/>`.editorconfig`, `eslint.config.mjs`, `package.json` | Add EditorConfig; add ESLint config using the Doom helper; bump `@alauda/doom` to `^1.10.9` and add a `lint` script. |
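The tooling change can be illustrated with a minimal `package.json` fragment. Only the bumped dependency version and the existence of a `lint` script come from this PR; placing the package under `devDependencies` and having the script invoke `doom lint` are assumptions (the latter based on the `doom lint` invocation mentioned elsewhere in this review):

```json
{
  "devDependencies": {
    "@alauda/doom": "^1.10.9"
  },
  "scripts": {
    "lint": "doom lint"
  }
}
```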

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Suggested reviewers

  • typhoonzero
  • zhaomingkun1030

Poem

In burrows of docs I hop and preen,
Fixing lists neat, with spaces clean.
Doom tools sharpen, lint takes flight,
Spelling warren set just right.
Thump-thump—release, concise and bright! 🐇✨


@coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (8)
.cspell/compound.txt (1)

1-5: Verify external cspell config for case sensitivity

We see that cspell.config.mjs simply re-exports the default from @alauda/doom/cspell and contains no explicit caseSensitive, ignoreCase, or allowCompoundWords settings. Without inspecting that shared config, we can’t confirm whether words are matched case-insensitively.

Please manually check the @alauda/doom/cspell configuration:

  • If it treats words case-sensitively, add capitalized variants for:
    Knative
    KServe
    XInference
  • If it’s already case-insensitive, no changes are needed in .cspell/compound.txt.
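If capitalized variants do turn out to be necessary, they would simply be appended to the word list. This fragment combines the entries added in this PR with the hypothetical capitalized additions suggested above:

```txt
amlclusters
amlnamespaces
knative
kserve
xinference
Knative
KServe
XInference
```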
docs/en/model_inference/inference_service/functions/inference_service.mdx (3)

67-69: Admonition fix looks good; minor grammar polish suggested.

Consider tightening the sentence for readability.

Apply this diff:

```diff
-:::tip
-Custom publishing inference service requires manual setting of parameters. You can also create a "template" by combining input parameters for quick publishing of inference services.
+:::tip
+Custom publishing an inference service requires manually setting parameters. You can also create a template by combining input parameters for quick publishing of inference services.
 :::
```

187-187: Tighten wording for clarity.

Small style improvement; reduces redundancy.

```diff
-In the **Inference Experience** interface, common parameters and default values are pre-made, and any custom parameters can also be added.
+In the **Inference Experience** interface, common parameters and default values are pre-filled; you can also add custom parameters.
```

219-222: Avoid inline MDX comments inside prose; prefer code formatting (and consider removing duplicate row).

  • The inline MDX comment {/* lint ignore unit-case */} may be unnecessary if scientific notation is wrapped in code spans, and it slightly distracts in source.
  • The repetition_penalty row appears twice (once under Preset Parameters at Line 199 and again under Other Parameters at Line 221). Consider removing the duplicate to avoid confusion.

Proposed edits:

```diff
-| `epsilon_cutoff` | float | If set to a floating-point number strictly between 0 and 1, only tokens with conditional probabilities greater than `epsilon_cutoff` will be sampled. Suggested values range from {/* lint ignore unit-case */} 3e-4 to 9e-4, depending on the model size. |
-| `eta_cutoff` | float | Eta sampling is a hybrid of local typical sampling and epsilon sampling. If set to a floating-point number strictly between 0 and 1, a token will only be considered if it is greater than `eta_cutoff` or sqrt(`eta_cutoff`) * exp(-entropy(softmax(next_token_logits))). Suggested values range from {/* lint ignore unit-case */} 3e-4 to 2e-3, depending on the model size. |
+| `epsilon_cutoff` | float | If set to a floating-point number strictly between 0 and 1, only tokens with conditional probabilities greater than `epsilon_cutoff` will be sampled. Suggested values range from `3e-4` to `9e-4`, depending on the model size. |
+| `eta_cutoff` | float | Eta sampling is a hybrid of local typical sampling and epsilon sampling. If set to a floating-point number strictly between 0 and 1, a token will only be considered if it is greater than `eta_cutoff` or sqrt(`eta_cutoff`) * exp(-entropy(softmax(next_token_logits))). Suggested values range from `3e-4` to `2e-3`, depending on the model size. |
```

If you apply the change, please re-run "doom lint" to confirm the linter no longer needs the inline ignore comment. If it still complains, we can add a targeted lint-disable directive just above the table instead.

docs/en/model_inference/model_management/functions/model_repository.mdx (4)

26-31: Specify an approximate upload size threshold (optional).

“limited to small/medium sizes” is vague. Adding a rough threshold (e.g., <100 MB) or a link to limits improves user expectations and reduces support churn.

Apply this diff if acceptable:

```diff
-    - Drag-and-drop files/folders (limited to small/medium sizes).
+    - Drag-and-drop files/folders (best for small/medium sizes; e.g., <100 MB).
```

52-60: Notebook example is helpful; add a terminal variant (optional).

For non-notebook users, a terminal-friendly command avoids confusion about the leading '!'.

You could add a terminal example below the notebook snippet:

Terminal:

```bash
git clone https://aml-public/resnet50.git
```

---

`62-67`: **Technical notes are solid; consider adding common LFS patterns (optional).**

Many Transformer models use additional formats. You can include a short example to preempt large-file push issues.

Suggested addition after the existing example:

Common patterns:

```gitattributes
*.safetensors filter=lfs diff=lfs merge=lfs -text
*.onnx        filter=lfs diff=lfs merge=lfs -text
*.ckpt        filter=lfs diff=lfs merge=lfs -text
```
---

`118-119`: **Track additional common ML file types with Git LFS (optional).**

Safetensors/ONNX/ckpt are frequently used and often exceed hosting limits.

```diff
-            git lfs track "*.h5" "*.bin" "*.pt"
+            git lfs track "*.h5" "*.bin" "*.pt" "*.safetensors" "*.onnx" "*.ckpt"
```

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 15230fc and ca863bc.

⛔ Files ignored due to path filters (1)
  • yarn.lock is excluded by !**/yarn.lock, !**/*.lock
📒 Files selected for processing (8)
  • .cspell/compound.txt (1 hunks)
  • .editorconfig (1 hunks)
  • cspell.config.mjs (1 hunks)
  • docs/en/model_inference/inference_service/functions/inference_service.mdx (4 hunks)
  • docs/en/model_inference/inference_service/index.mdx (1 hunks)
  • docs/en/model_inference/model_management/functions/model_repository.mdx (2 hunks)
  • eslint.config.mjs (1 hunks)
  • package.json (1 hunks)
🔇 Additional comments (9)
package.json (1)

3-3: Upgrade to @alauda/doom ^1.10.9 looks good.

Aligns with the new eslint/cspell configs introduced in this PR.

docs/en/model_inference/inference_service/index.mdx (1)

1-3: Whitespace and newline cleanup LGTM.

Blank line after H1 and ensured trailing newline improve MDX/remark-lint compatibility.

.editorconfig (1)

1-10: Baseline EditorConfig is solid.

Standardizes indentation, newlines, and trailing whitespace across the repo.
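The `.editorconfig` itself is not reproduced in this review; a baseline of the kind described here typically reads as follows (these exact settings are an assumption for illustration, not the file's verified contents):

```ini
root = true

[*]
charset = utf-8
end_of_line = lf
indent_style = space
indent_size = 2
insert_final_newline = true
trim_trailing_whitespace = true
```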

eslint.config.mjs (1)

1-3: ESLint flat config via Doom is correctly set up.

ESM import + passing docs URL is consistent with Doom’s API.

cspell.config.mjs (1)

1-1: LGTM: Re-exporting Doom's CSpell config is correct.

The re-export pattern is valid ESM and will surface a default export as expected. Assuming the subpath "@alauda/doom/cspell" is exported by the package (it typically is), this should integrate cleanly with doom lint.
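As the review notes, this file simply re-exports the shared config, so the whole of `cspell.config.mjs` amounts to a one-liner:

```javascript
// cspell.config.mjs — re-export the shared CSpell config shipped with @alauda/doom
export { default } from '@alauda/doom/cspell'
```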

docs/en/model_inference/inference_service/functions/inference_service.mdx (1)

268-268: LGTM: Link formatting and trailing newline fixed.

Thanks for aligning the link presentation; this also tends to reduce markdown/MDX lint noise.

docs/en/model_inference/model_management/functions/model_repository.mdx (3)

6-6: LGTM: Clear, concise introduction.

The rewritten overview is accurate and more scannable.


22-25: LGTM: Section headings and bullets align with a lint-friendly style.

The conversions improve structure and readability.


103-138: LGTM: Step-by-step Git/LFS workflow is clear and cautious.

Good inclusion of the migration warning and rationale; placement after commit matches the note about first public push vs. coordination.

@JounQin JounQin merged commit 1bce989 into master Aug 13, 2025
2 checks passed
@JounQin JounQin deleted the feat/lint branch August 13, 2025 03:36