
Splitagain #429

Merged
shajason merged 8 commits into master from splitagain
Nov 3, 2025

Conversation

@shajason (Collaborator) commented Oct 31, 2025

Please don't review yet!!!! I'm sure CodeRabbit will have a lot to say, and I'll probably not be finished until Monday.

Summary by CodeRabbit

  • Documentation
    • Added full LLM Rubric Autograde docs and clarified The LLM Rubric Grader as a separate instructor‑approval workflow; streamlined rubric creation, grading steps, UI guidance, images, examples, and rubric requirement behavior. Reworked grading and assessment authoring content for clearer navigation, simplified workflows, and updated sample visuals and captions.
  • Changelog
    • Added August 2025 entry: Join a Course token is now four words.

@shajason requested a review from @LolaValente on October 31, 2025 at 18:56
@coderabbitai bot commented Oct 31, 2025

Walkthrough

Splits instructor assessment docs: adds an LLM Autograde guide, refactors the manual LLM rubric doc and the assessments toctree, updates grading workflow text, and inserts an August 2025 changelog entry. (50 words)

Changes

  • Add-assessment & Toctree: source/instructors/authoring/assessments/add-assessment.rst
    Normalized headings and the toctree caption; added llm-based-auto-rubric to the Auto-Graded Assessments list; split the previous single LLM link into "LLM Rubric Grader" and "LLM Rubric Autograde"; adjusted the intro and image placement.
  • New Auto-Rubric Guide: source/instructors/authoring/assessments/llm-based-auto-rubric.rst
    New page describing the LLM Rubric Autograde workflow: rubric generation (Rubric Creation Agent), LLM grading using the generated rubric, UI inputs (General, Grading, Rubric Requirements), generation steps, behavior when requirements are omitted, and sample rubric/grading content.
  • LLM Rubric (manual) updates: source/instructors/authoring/assessments/llm-based-rubric.rst
    Retitled and restructured as "The LLM Rubric Grader"; clarifies that instructor approval is required before students see feedback; reorganized into three explicit steps (rubric creation, LLM grading, manual approval); simplified field instructions and updated images and captions.
  • Grading workflow edits: source/instructors/teaching/grading/grading.rst
    Reworded and simplified the grading flow: clarified the Completed column check mark, unified grade assignment via "Open student's project" → Grading UI, simplified the status-filter description, updated the Actions/Grading > Queue/Grade Adjustments text, and removed the detailed rubric-override narrative.
  • Changelog: source/instructors/getstarted/support/changelog.rst
    Added an August 2025 entry ("The Join a Course token is now four words instead of two") between the September 2025 and July 2025 entries.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

  • Verify consistency and sequencing between llm-based-auto-rubric.rst and llm-based-rubric.rst (terminology, step numbering, approval flow).
  • Check toctree/cross-reference targets and image paths in add-assessment.rst.
  • Confirm UI label names referenced in grading.rst (Actions column, Grading button, Grading>Queue) match product UI.

Suggested reviewers

  • LolaValente
  • jairovelasquez

Pre-merge checks

❌ Failed checks (1 inconclusive)

  • Title check (❓ Inconclusive): The title "Splitagain" is vague and provides no meaningful information about the changes in this pull request. Resolution: replace it with a descriptive title that summarizes the main change, such as "Split LLM Rubric documentation into Grader and Autograde topics" or "Reorganize assessment documentation and update grading workflow."

✅ Passed checks (1 passed)

  • Description check (✅ Passed): Check skipped because CodeRabbit's high-level summary is enabled.

📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7da3403 and 35a4274.

📒 Files selected for processing (1)
  • source/instructors/authoring/assessments/add-assessment.rst (2 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rst

⚙️ CodeRabbit configuration file

**/*.rst: Review files for:

  • Consistent formatting (e.g., headings, lists, links).
  • Clear and concise language.
  • Correct grammar and spelling.
  • Proper use of rst syntax (e.g., avoid broken links or invalid code blocks).
  • Adherence to style guides (e.g., consistent tone, terminology).
    Suggest improvements to enhance readability and accessibility.
  • Make sure to find all improvements on the first pass.
    Go through the material multiple times before submitting comments.

Files:

  • source/instructors/authoring/assessments/add-assessment.rst
🔇 Additional comments (1)
source/instructors/authoring/assessments/add-assessment.rst (1)

26-27: Confirm toctree entry and reference link consistency.

The toctree split and reference updates are syntactically correct and properly formatted. Ensure that both the new toctree entry (llm-based-auto-rubric at line 27) and the corresponding reference links (lines 73–74) point to the correct files and that display text precisely matches the actual document titles.

Also applies to: 73-74
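A check like the one requested here can be scripted. The sketch below is illustrative only: the titles are taken from the review thread, and file reading is stubbed with sample strings rather than opening the real .rst files.

```python
import re

# Expected document titles per reference label (from the review discussion).
titles = {
    "llm-based-rubric": "LLM Rubric Grader",
    "llm-based-auto-rubric": "LLM Rubric Autograde",
}

# Sample :ref: usages as they would appear in add-assessment.rst.
refs = """
- :ref:`LLM Rubric Grader <llm-based-rubric>`
- :ref:`LLM Rubric Autograde <llm-based-auto-rubric>`
"""

# Each :ref:`Display Text <target>` should display exactly the target's title.
for text, target in re.findall(r":ref:`([^<`]+?)\s*<([^>]+)>`", refs):
    status = "ok" if titles.get(target) == text else "MISMATCH"
    print(f"{target}: {status}")
```

Running this against the real source tree would require replacing the `refs` string with the file contents and extracting titles from each document's first heading.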



@coderabbitai bot left a comment

Actionable comments posted: 6

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 23af792 and d47b0bc.

⛔ Files ignored due to path filters (8)
  • source/img/No-overrindengrade.png is excluded by !**/*.png
  • source/img/Overridengarde.png is excluded by !**/*.png
  • source/img/class_administration/grading/assignment-comments.png is excluded by !**/*.png
  • source/img/filter.png is excluded by !**/*.png
  • source/img/grading-assign.png is excluded by !**/*.png
  • source/img/grading-unit.png is excluded by !**/*.png
  • source/img/gradingqueue.png is excluded by !**/*.png
  • source/img/latepenalty.png is excluded by !**/*.png
📒 Files selected for processing (5)
  • source/instructors/authoring/assessments/add-assessment.rst (2 hunks)
  • source/instructors/authoring/assessments/llm-based-auto-rubric.rst (1 hunks)
  • source/instructors/authoring/assessments/llm-based-rubric.rst (3 hunks)
  • source/instructors/getstarted/support/changelog.rst (1 hunks)
  • source/instructors/teaching/grading/grading.rst (3 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rst


Files:

  • source/instructors/authoring/assessments/add-assessment.rst
  • source/instructors/authoring/assessments/llm-based-auto-rubric.rst
  • source/instructors/teaching/grading/grading.rst
  • source/instructors/getstarted/support/changelog.rst
  • source/instructors/authoring/assessments/llm-based-rubric.rst
🔇 Additional comments (13)
source/instructors/getstarted/support/changelog.rst (1)

30-33: Changelog entry is well-structured and consistent.

The August 2025 section follows the established format and chronological ordering. Wording is clear, and the reference label [I] and date notation are consistent with surrounding entries.
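The chronological-ordering claim is easy to sanity-check mechanically. A minimal sketch, assuming the month headings match the "%B %Y" format used in the changelog (headings copied from the review note):

```python
from datetime import datetime

# Month headings in the order they appear in the changelog after this PR.
headings = ["September 2025", "August 2025", "July 2025"]
parsed = [datetime.strptime(h, "%B %Y") for h in headings]

# Newest-first ordering holds if the parsed dates are already sorted descending.
assert parsed == sorted(parsed, reverse=True)
print("reverse-chronological order confirmed")
```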

source/instructors/authoring/assessments/add-assessment.rst (1)

26-27: RST toctree entries and references are correctly formatted.

The split into separate llm-based-rubric and llm-based-auto-rubric entries follows proper RST syntax. The ordering and indentation are consistent with the surrounding documentation.

source/instructors/authoring/assessments/llm-based-auto-rubric.rst (2)

1-87: Overall structure and content are clear and well-organized.

The file follows a logical two-step workflow with proper RST formatting, clear headings, descriptive image captions, and appropriate use of Note blocks. The step-by-step guidance is easy to follow.


7-8: Title underline length does not match content.

The heading "LLM-Rubric AutoGrade" is 18 characters, but the underline appears to be approximately 20 characters. In reStructuredText, the underline must match the title length exactly.

Count the title "LLM-Rubric AutoGrade" (18 chars) and adjust the underline to match:

 LLM-Rubric AutoGrade
-====================
+==================

Likely an incorrect or invalid review comment.
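The disputed character counts can be checked directly. A quick sketch in plain Python; `underline_for` is a hypothetical helper, and the title string is copied from the comment above:

```python
def underline_for(title: str, char: str = "=") -> str:
    """Build an RST section underline exactly as long as its title."""
    return char * len(title)

title = "LLM-Rubric AutoGrade"
print(len(title))            # 20 characters, not 18 as the comment claims
print(underline_for(title))  # a 20-character row of '='
```

The title is in fact 20 characters, so an underline of roughly 20 characters already matches, which supports the "likely incorrect" verdict on this comment.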

source/instructors/authoring/assessments/llm-based-rubric.rst (3)

15-21: Clear restructuring to three-step process with required Step 3.

The addition of the explicit three-step workflow, together with the required note on line 21, significantly improves clarity about the manual-approval requirement: it makes clear to students that their feedback is held pending instructor review.


83-96: Manual Approval section (Step 3) is well-explained.

The addition of this section clearly articulates what instructors must do (open assignment, select passing test cases, edit comments, apply grade) and includes appropriate warnings about finality of released feedback. The sample images help users understand the workflow.


109-111: Rubric requirements example format simplified appropriately.

Changing from a code block to a bullet list makes the requirements easier to read and edit. The wording is clear and practical.

source/instructors/teaching/grading/grading.rst (6)

31-31: Clarified "Completed column" language improves accessibility.

The addition of "check mark to the left of their name in the Completed column" makes the UI reference more specific and helps users identify exactly what they're looking for.


36-36: UI references and workflow steps are more actionable.

The changes to lines 36 and 38 guide users more directly: mentioning the status drop-down with filter options (Started, Completed, Needs Grading) and the "three vertical dots" in the Actions column makes the workflow clearer for users unfamiliar with the interface.

Also applies to: 38-38


48-48: Navigation guidance is clearer and more specific.

The rewording to "From inside a student's project" and the Next/Previous/Next Ungraded/Previous Ungraded navigation is easier to follow. The fix of "assingment" to "assignment" corrects a typo.

Also applies to: 50-50


60-60: Grading Queue section benefits from streamlined description.

Changing the reference to "Grading>Queue" and clarifying the display/sorting options (by module/assignment by default, with "Do Not Group" to sort by submission time) makes the section more concise and actionable.

Also applies to: 62-62


77-78: RST comments are properly formatted.

The inline comments using the .. prefix are correctly formatted and will not appear in the rendered output. They appropriately document removal of rubric override content.
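The distinction the comment relies on can be illustrated with a rough heuristic. This is not the full docutils grammar (footnote, citation, and hyperlink-target forms are ignored for brevity), and the sample strings are hypothetical:

```python
import re

# A line starting with '..' is an RST comment unless the '..' introduces a
# directive of the form '.. name:: ...' (simplified, illustrative rule).
comment = re.compile(r"^\.\.\s+(?![\w-]+::)")

print(bool(comment.match(".. removed rubric override narrative here")))  # True: comment
print(bool(comment.match(".. image:: grading-assign.png")))              # False: directive
```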


83-83: Menu path terminology and Anonymous Grading instructions updated.

The section on "Removing Penalties" now clearly references the "Grade Adjustments" section and the "Do not apply penalty" option. The anonymous grading description and steps (referencing "Grading>Basic Settings") are more concise and direct. Menu path formatting is consistent with ">" separators.

Also applies to: 91-91, 95-95

@coderabbitai bot left a comment

Actionable comments posted: 4

♻️ Duplicate comments (1)
source/instructors/authoring/assessments/add-assessment.rst (1)

72-73: Align reference display text with actual document titles.

The reference text at lines 72–73 does not match the actual document titles:

  • Line 72: :ref:`LLM Rubric <llm-based-rubric>` references a document titled "LLM Rubric Grader" (see llm-based-rubric.rst line 7). The display text should be "LLM Rubric Grader".
  • Line 73: :ref:`LLM Rubric Autograde <llm-based-auto-rubric>` references a document titled "LLM-Rubric Autograde" (with hyphen; see llm-based-auto-rubric.rst line 7). The display text should match.

Update the reference text to match the actual titles:

-- :ref:`LLM Rubric <llm-based-rubric>`
-- :ref:`LLM Rubric Autograde <llm-based-auto-rubric>`
+- :ref:`LLM Rubric Grader <llm-based-rubric>`
+- :ref:`LLM-Rubric Autograde <llm-based-auto-rubric>`

Alternatively, if the titles should be different, update the titles in the respective .rst files to match the reference text.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d47b0bc and d1538ea.

📒 Files selected for processing (3)
  • source/instructors/authoring/assessments/add-assessment.rst (5 hunks)
  • source/instructors/authoring/assessments/llm-based-auto-rubric.rst (1 hunks)
  • source/instructors/authoring/assessments/llm-based-rubric.rst (3 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rst


Files:

  • source/instructors/authoring/assessments/llm-based-rubric.rst
  • source/instructors/authoring/assessments/llm-based-auto-rubric.rst
  • source/instructors/authoring/assessments/add-assessment.rst

@coderabbitai bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
source/instructors/authoring/assessments/llm-based-rubric.rst (2)

75-82: Inconsistent sample label in manual approval file.

Line 77 labels the image section "Sample feedback for the Auto-Graded Version:" in a file (llm-based-rubric.rst) that describes the manual approval workflow. This label should refer to the manual approval workflow, not autograde.

Update to:

-Sample feedback for the Auto-Graded Version:
+Sample feedback displayed to the student (before instructor approval):

This label appears in the context of Step 2 (LLM Grading), before Step 3 (Manual Approval), so it describes preliminary feedback—not autograde-specific.


53-53: Punctuation error: semicolon should be replaced.

Line 53 uses a semicolon where a comma or period is more appropriate:

Current: "If you do not add rubric requirements; the process will use general code grading norms to supply rubric items."

Update to:

-.. Note:: If you do not add rubric requirements; the process will use general code grading norms to supply rubric items.
+.. Note:: If you do not add rubric requirements, the process will use general code grading norms to supply rubric items.

This matches the corrected punctuation in the autograde variant (verified in llm-based-auto-rubric.rst).

♻️ Duplicate comments (1)
source/instructors/authoring/assessments/llm-based-rubric.rst (1)

30-30: Misleading reference to Metadata and Files.

Line 30 mentions "For more information on General, Metadata and Files" but the subsequent instructions (lines 32–64) only detail General and Grading sections. This may mislead users into expecting local coverage of Metadata and Files. Consider clarifying:

-Complete the sections below to set up your rubric grader. For more information on **General**, **Metadata** and **Files** see :ref:`Assessments <assessments>`.
+Complete the sections below to set up your rubric grader. For additional assessment concepts, see :ref:`Assessments <assessments>`.

This keeps the cross-reference without creating false expectations about local coverage.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d1538ea and e5b6cf2.

📒 Files selected for processing (3)
  • source/instructors/authoring/assessments/add-assessment.rst (5 hunks)
  • source/instructors/authoring/assessments/llm-based-auto-rubric.rst (1 hunks)
  • source/instructors/authoring/assessments/llm-based-rubric.rst (3 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rst


Files:

  • source/instructors/authoring/assessments/llm-based-auto-rubric.rst
  • source/instructors/authoring/assessments/llm-based-rubric.rst
  • source/instructors/authoring/assessments/add-assessment.rst
🔇 Additional comments (10)
source/instructors/authoring/assessments/llm-based-auto-rubric.rst (3)

10-12: Opening paragraphs are now appropriately distinct.

Line 10 clearly differentiates the autograde variant from the manual approval flow, and line 12 provides helpful context about UI location and cross-reference. Past concerns about duplicate openings appear addressed. ✓


1-8: Title consistency verified across all three assessment documents.

The verification confirms that titles and references are properly aligned:

  • "LLM Rubric Autograde" in llm-based-auto-rubric.rst matches the reference in add-assessment.rst
  • "LLM Rubric Grader" in llm-based-rubric.rst matches its reference
  • All titles use consistent non-hyphenated formatting

No hyphenation issues remain. The documentation is consistent and ready.


75-88: Rubric requirements examples lack consistency between auto-graded and manual rubric files.

The rubric requirements example in this file (lines 84–88) uses generic, implementation-focused criteria: task correctness, descriptive naming, and comments. However, the equivalent example in llm-based-rubric.rst (lines 110–113) specifies domain-focused criteria: program correctness and dictionary data structure usage.

Both sections open with identical preamble language ("Use only the following criteria for evaluating the student code:"), suggesting they should follow a consistent pattern. Clarify whether this difference is intentional—if so, add brief guidance explaining why auto-graded rubrics use generic criteria while manual rubrics use domain-specific criteria. If unintentional, align the examples to provide consistent guidance for instructors designing either type of rubric.

source/instructors/authoring/assessments/add-assessment.rst (3)

6-10: Capitalization change: verify style guide consistency.

Line 6 changes header from "Auto-Graded Assessments" to "Auto-graded assessments" (lowercase 'g' in 'graded'). Line 10 caption is updated accordingly to "Auto-graded assessments". Verify this lowercase convention is consistent with other section headers in the documentation style guide. If this is the standard, consider checking other similar headers for consistency.


26-27: Reference labels appear correctly aligned with document titles.

Toctree entry at line 27 (llm-based-auto-rubric) and reference text at line 73 (:ref:`LLM Rubric Autograde <llm-based-auto-rubric>`) correctly match the document title "LLM Rubric Autograde" in the target file. Similarly, line 72 references "LLM Rubric Grader", which aligns with the corrected title in llm-based-rubric.rst. Past concerns about reference/title mismatches appear resolved.

Also applies to: 72-73


55-58: New section structure and formatting are clear.

The "Assessment Auto-Generation" section (lines 55–58) is well-formatted with appropriate RST sub-subsection marker (+++). The assessment type list is concise and cross-references are properly formatted. Content is accessible.

source/instructors/authoring/assessments/llm-based-rubric.rst (4)

2-2: Meta description clearly differentiates manual approval workflow.

Line 2 updates the meta description to specify manual approval: "Use an LLM to generate a rubric and grade based on that rubric. Teacher manually approves the results." This is distinct from the autograde variant and provides clear context for search/documentation systems.


7-12: Title, opening, and cross-references appropriately distinguish manual workflow.

Line 7 corrects the title to "LLM Rubric Grader" (matching the reference in add-assessment.rst line 72). Lines 10–12 clearly establish that instructor approval is required, contrast with the autograde variant, and provide the cross-reference. This is a marked improvement in clarity and alignment. ✓


16-22: Three-step process is clearly documented with mandatory-step emphasis.

Line 16–20 now correctly describes three steps (adding manual approval), and line 22's note effectively emphasizes that Step 3 is required before feedback reaches students. The process flow is logical and the formatting is correct.


110-113: Rubric example phrasing is consistent; criteria differences are pedagogically intentional.

The introduction phrasing "Use only the following criteria for evaluating the student code:" is now consistently applied across both files. The differing criteria reflect the distinct pedagogical purposes: the auto-rubric example demonstrates general, reusable criteria for code quality (correctness, naming, comments), while the manual LLM-based rubric example illustrates assignment-specific criteria (correctness, dictionary usage). This differentiation appropriately shows instructors how to tailor rubrics for their specific contexts. No changes needed.

@coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (1)
source/instructors/authoring/assessments/llm-based-rubric.rst (1)

30-30: Clarify the reference to General, Metadata, and Files sections.

Similar to the auto-rubric file, this line references "General, Metadata and Files," but the instructions that follow only explicitly cover General and Grading. Consider revising to reduce ambiguity about which sections are locally addressed.

For clarity and consistency, consider updating:

-Complete the sections below to set up your rubric grader. For more information on **General**, **Metadata** and **Files** see :ref:`Assessments <assessments>`.
+Complete the sections below to set up your rubric grader. For more information on assessment setup, see :ref:`Assessments <assessments>`.
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e5b6cf2 and 843f4d5.

📒 Files selected for processing (2)
  • source/instructors/authoring/assessments/llm-based-auto-rubric.rst (1 hunks)
  • source/instructors/authoring/assessments/llm-based-rubric.rst (4 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rst


Files:

  • source/instructors/authoring/assessments/llm-based-auto-rubric.rst
  • source/instructors/authoring/assessments/llm-based-rubric.rst

@coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
source/instructors/authoring/assessments/llm-based-rubric.rst (1)

7-8: Resolve terminology inconsistency: "LLM Rubric Grader" (title) vs. "LLM Rubric" (body).

The title uses "LLM Rubric Grader," but throughout the document you refer to the assessment as "LLM Rubric" (lines 10, 12, 16, 25). Choose one consistent name and apply it everywhere. Since the body consistently uses "LLM Rubric," align the title to match.

Apply this diff to the title:

-LLM Rubric Grader
-=================
+LLM Rubric
+==========

Also applies to: 10-10, 12-12, 16-16, 25-25

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 843f4d5 and e97cfd9.

📒 Files selected for processing (1)
  • source/instructors/authoring/assessments/llm-based-rubric.rst (4 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rst


Files:

  • source/instructors/authoring/assessments/llm-based-rubric.rst
🔇 Additional comments (1)
source/instructors/authoring/assessments/llm-based-rubric.rst (1)

30-30: Clarify referenced sections.

Line 30 mentions "General, Metadata and Files" but the instructions that follow only cover General and Grading sections. This may mislead users into thinking they need to configure Metadata and Files locally for this assessment type. A past review flagged this; if the narrower scope is intentional, ensure the reference matches what is actually covered below.

Consider whether this should be revised to focus only on what is documented locally:

-Complete the sections below to set up your rubric grader. For more information on **General**, **Metadata** and **Files** see :ref:`Assessments <assessments>`.
+Complete the sections below to set up your rubric grader. For general assessment concepts, see :ref:`Assessments <assessments>`.

@coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
source/instructors/authoring/assessments/llm-based-rubric.rst (1)

27-28: Add hyphen to "LLM-based" when used as compound adjective.

The section heading should read "LLM-Based Rubric Creation" with a hyphen, as "LLM-based" is a compound adjective modifying "Rubric Creation." This correction was flagged in a previous review and marked as addressed, but the hyphen is still missing in the current version.

-LLM Based Rubric Creation (Step 1)
-
+LLM-Based Rubric Creation (Step 1)
+
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e97cfd9 and 7da3403.

📒 Files selected for processing (2)
  • source/instructors/authoring/assessments/llm-based-auto-rubric.rst (1 hunks)
  • source/instructors/authoring/assessments/llm-based-rubric.rst (4 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rst


Files:

  • source/instructors/authoring/assessments/llm-based-auto-rubric.rst
  • source/instructors/authoring/assessments/llm-based-rubric.rst
🔇 Additional comments (2)
source/instructors/authoring/assessments/llm-based-auto-rubric.rst (1)

7-8: Standardize terminology: "Auto-grader" in title vs. "Autograde" in content.

The title uses "The LLM Rubric Auto-grader" (with hyphen, two words), but the rest of the document consistently refers to it as "Autograde" (single word, no hyphen)—see lines 10, 12, 16, and 22. Align the title with the terminology used throughout the file and in the companion llm-based-rubric.rst document for consistency.

-The LLM Rubric Auto-grader
-==========================
+The LLM Rubric Autograde
+========================
source/instructors/authoring/assessments/llm-based-rubric.rst (1)

100-107: Verify intended consistency between rubric examples before unifying.

The inconsistency is confirmed: llm-based-rubric.rst (lines 100–107) shows "Program correctness" and "Proper and efficient usage of a dictionary data structure," while llm-based-auto-rubric.rst (lines 85–89) shows "The code correctly implements the requested task," "Variable and function names are descriptive," and "The code includes at least two meaningful comments."

However, the documentation structure is ambiguous. Both sections reference each other with "You can view another example on..." language, suggesting these may intentionally be different illustrations. Clarify whether these examples should be:

  1. Unified — both sections show the same example for consistency
  2. Intentionally different — make this explicit in the text (e.g., change "another example" to "a different approach" or add explanatory context)

Without understanding the pedagogical intent, unifying them may obscure intentional variety.

@shajason shajason merged commit 9e77d12 into master Nov 3, 2025
1 check passed
@shajason shajason deleted the splitagain branch November 3, 2025 20:19
