Conversation
Walkthrough

Splits instructor assessment docs: adds an LLM Autograde guide, refactors the manual LLM rubric doc and the assessments toctree, updates grading workflow text, and inserts an August 2025 changelog entry.

Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Pre-merge checks
❌ Failed checks (1 inconclusive)
✅ Passed checks (1 passed)
Actionable comments posted: 6
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (8)
- `source/img/No-overrindengrade.png` is excluded by `!**/*.png`
- `source/img/Overridengarde.png` is excluded by `!**/*.png`
- `source/img/class_administration/grading/assignment-comments.png` is excluded by `!**/*.png`
- `source/img/filter.png` is excluded by `!**/*.png`
- `source/img/grading-assign.png` is excluded by `!**/*.png`
- `source/img/grading-unit.png` is excluded by `!**/*.png`
- `source/img/gradingqueue.png` is excluded by `!**/*.png`
- `source/img/latepenalty.png` is excluded by `!**/*.png`
📒 Files selected for processing (5)
- `source/instructors/authoring/assessments/add-assessment.rst` (2 hunks)
- `source/instructors/authoring/assessments/llm-based-auto-rubric.rst` (1 hunk)
- `source/instructors/authoring/assessments/llm-based-rubric.rst` (3 hunks)
- `source/instructors/getstarted/support/changelog.rst` (1 hunk)
- `source/instructors/teaching/grading/grading.rst` (3 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rst
⚙️ CodeRabbit configuration file
**/*.rst: Review files for:
- Consistent formatting (e.g., headings, lists, links).
- Clear and concise language.
- Correct grammar and spelling.
- Proper use of rst syntax (e.g., avoid broken links or invalid code blocks).
- Adherence to style guides (e.g., consistent tone, terminology).
Suggest improvements to enhance readability and accessibility.
- Make sure to find all improvements on the first pass.
- Go through the material multiple times before submitting comments.
Files:
- `source/instructors/authoring/assessments/add-assessment.rst`
- `source/instructors/authoring/assessments/llm-based-auto-rubric.rst`
- `source/instructors/teaching/grading/grading.rst`
- `source/instructors/getstarted/support/changelog.rst`
- `source/instructors/authoring/assessments/llm-based-rubric.rst`
🔇 Additional comments (13)
source/instructors/getstarted/support/changelog.rst (1)
30-33: Changelog entry is well-structured and consistent.

The August 2025 section follows the established format and chronological ordering. Wording is clear, and the reference label `[I]` and date notation are consistent with surrounding entries.

source/instructors/authoring/assessments/add-assessment.rst (1)

26-27: RST toctree entries and references are correctly formatted.

The split into separate `llm-based-rubric` and `llm-based-auto-rubric` entries follows proper RST syntax. The ordering and indentation are consistent with the surrounding documentation.

source/instructors/authoring/assessments/llm-based-auto-rubric.rst (2)
1-87: Overall structure and content are clear and well-organized.

The file follows a logical two-step workflow with proper RST formatting, clear headings, descriptive image captions, and appropriate use of Note blocks. The step-by-step guidance is easy to follow.
7-8: Title underline length does not match content.

The heading "LLM-Rubric AutoGrade" is 18 characters, but the underline appears to be approximately 20 characters. In reStructuredText, the underline must match the title length exactly.
Count the title "LLM-Rubric AutoGrade" (18 chars) and adjust the underline to match:

```diff
 LLM-Rubric AutoGrade
-====================
+==================
```

Likely an incorrect or invalid review comment.
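Underline-length claims like the one above are easy to miscount by hand, and this one is indeed suspect. A minimal Python sketch for checking such claims mechanically (the `check_title` helper is illustrative, not part of any project tooling; note that docutils only requires the underline to be *at least* as long as the title, warning when it is shorter):

```python
def check_title(title: str, underline: str) -> bool:
    """Return True if an RST underline is valid for the given title.

    Sphinx/docutils only require the underline to be at least as long
    as the title and made of a single repeated punctuation character;
    longer-than-needed underlines render fine.
    """
    return len(underline) >= len(title) and len(set(underline)) == 1

# "LLM-Rubric AutoGrade" is 20 characters, not 18 as the comment claims.
print(len("LLM-Rubric AutoGrade"))                       # 20
print(check_title("LLM-Rubric AutoGrade", "=" * 20))     # True
print(check_title("LLM-Rubric AutoGrade", "=" * 18))     # False
```

Running a check like this before commenting would have caught the miscount.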
source/instructors/authoring/assessments/llm-based-rubric.rst (3)
15-21: Clear restructuring to three-step process with required Step 3.

The addition of the explicit three-step workflow with the required note on line 21 significantly improves clarity about the manual approval requirement. Students clearly understand that their feedback is held pending instructor review.
83-96: Manual Approval section (Step 3) is well-explained.

The addition of this section clearly articulates what instructors must do (open assignment, select passing test cases, edit comments, apply grade) and includes appropriate warnings about the finality of released feedback. The sample images help users understand the workflow.
109-111: Rubric requirements example format simplified appropriately.

Changing from a code block to a bullet list makes the requirements easier to read and edit. The wording is clear and practical.
source/instructors/teaching/grading/grading.rst (6)
31-31: Clarified "Completed column" language improves accessibility.

The addition of "check mark to the left of their name in the Completed column" makes the UI reference more specific and helps users identify exactly what they're looking for.
36-36: UI references and workflow steps are more actionable.

The changes to lines 36 and 38 guide users more directly: mentioning the status drop-down with filter options (Started, Completed, Needs Grading) and the "three vertical dots" in the Actions column makes the workflow clearer for users unfamiliar with the interface.
Also applies to: 38-38
48-48: Navigation guidance is clearer and more specific.

The rewording to "From inside a student's project" and the Next/Previous/Next Ungraded/Previous Ungraded navigation is easier to follow. The fix of "assingment" to "assignment" corrects a typo.
Also applies to: 50-50
60-60: Grading Queue section benefits from streamlined description.

Changing the reference to "Grading>Queue" and clarifying the display/sorting options (by module/assignment by default, with "Do Not Group" to sort by submission time) makes the section more concise and actionable.
Also applies to: 62-62
77-78: RST comments are properly formatted.

The inline comments using the `..` prefix are correctly formatted and will not appear in the rendered output. They appropriately document the removal of rubric override content.
83-83: Menu path terminology and Anonymous Grading instructions updated.

The section on "Removing Penalties" now clearly references the "Grade Adjustments" section and the "Do not apply penalty" option. The anonymous grading description and steps (referencing "Grading>Basic Settings") are more concise and direct. Menu path formatting is consistent with ">" separators.
Also applies to: 91-91, 95-95
Actionable comments posted: 4
♻️ Duplicate comments (1)
source/instructors/authoring/assessments/add-assessment.rst (1)
72-73: Align reference display text with actual document titles.

The reference text at lines 72–73 does not match the actual document titles:

- Line 72: `` :ref:`LLM Rubric <llm-based-rubric>` `` references a document titled "LLM Rubric Grader" (see `llm-based-rubric.rst` line 7). The display text should be "LLM Rubric Grader".
- Line 73: `` :ref:`LLM Rubric Autograde <llm-based-auto-rubric>` `` references a document titled "LLM-Rubric Autograde" (with hyphen; see `llm-based-auto-rubric.rst` line 7). The display text should match.

Update the reference text to match the actual titles:

```diff
-- :ref:`LLM Rubric <llm-based-rubric>`
-- :ref:`LLM Rubric Autograde <llm-based-auto-rubric>`
+- :ref:`LLM Rubric Grader <llm-based-rubric>`
+- :ref:`LLM-Rubric Autograde <llm-based-auto-rubric>`
```

Alternatively, if the titles should be different, update the titles in the respective `.rst` files to match the reference text.
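Title/reference mismatches of this kind can be caught mechanically. A rough Python sketch under simplifying assumptions (one title per file, the common title-plus-underline form; the helper names and sample content are made up for illustration):

```python
import re

# Punctuation characters commonly used for RST section underlines.
PUNCT = set("=-~^\"'`#*+.:_")

def rst_title(text):
    """Return the first section title in an RST document, or None.

    Naive: looks for a text line followed by a line of one repeated
    punctuation character at least as long as the title.
    """
    lines = text.splitlines()
    for title, underline in zip(lines, lines[1:]):
        title, underline = title.strip(), underline.strip()
        if (title and underline and set(underline) <= PUNCT
                and len(set(underline)) == 1
                and len(underline) >= len(title)):
            return title
    return None

def ref_display_texts(text):
    """Extract (display text, target label) pairs from :ref:`text <label>` roles."""
    return re.findall(r":ref:`([^`<]+?)\s*<([^>]+)>`", text)

doc = "LLM Rubric Grader\n" + "=" * 17 + "\n\nSee :ref:`LLM Rubric <llm-based-rubric>`.\n"
print(rst_title(doc))           # LLM Rubric Grader
print(ref_display_texts(doc))   # [('LLM Rubric', 'llm-based-rubric')]
```

Comparing each file's `rst_title` against the display texts referencing it would flag the mismatch described above.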
📒 Files selected for processing (3)
- `source/instructors/authoring/assessments/add-assessment.rst` (5 hunks)
- `source/instructors/authoring/assessments/llm-based-auto-rubric.rst` (1 hunk)
- `source/instructors/authoring/assessments/llm-based-rubric.rst` (3 hunks)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
source/instructors/authoring/assessments/llm-based-rubric.rst (2)
75-82: Inconsistent sample label in manual approval file.

Line 77 labels the image section "Sample feedback for the Auto-Graded Version:" in a file (`llm-based-rubric.rst`) that describes the manual approval workflow. The label should refer to the manual approval workflow, not autograde. It appears in the context of Step 2 (LLM Grading), before Step 3 (Manual Approval), so it describes preliminary feedback rather than autograde-specific output.

Update to:

```diff
-Sample feedback for the Auto-Graded Version:
+Sample feedback displayed to the student (before instructor approval):
```
53-53: Punctuation error: semicolon should be replaced.

Line 53 uses a semicolon where a comma or period is more appropriate:

Current: "If you do not add rubric requirements; the process will use general code grading norms to supply rubric items."

Update to:

```diff
-.. Note:: If you do not add rubric requirements; the process will use general code grading norms to supply rubric items.
+.. Note:: If you do not add rubric requirements, the process will use general code grading norms to supply rubric items.
```

This matches the corrected punctuation in the autograde variant (verified in `llm-based-auto-rubric.rst`).
♻️ Duplicate comments (1)
source/instructors/authoring/assessments/llm-based-rubric.rst (1)
30-30: Misleading reference to Metadata and Files.

Line 30 mentions "For more information on General, Metadata and Files" but the subsequent instructions (lines 32–64) only detail the General and Grading sections. This may mislead users into expecting local coverage of Metadata and Files. Consider clarifying:

```diff
-Complete the sections below to set up your rubric grader. For more information on **General**, **Metadata** and **Files** see :ref:`Assessments <assessments>`.
+Complete the sections below to set up your rubric grader. For additional assessment concepts, see :ref:`Assessments <assessments>`.
```

This keeps the cross-reference without creating false expectations about local coverage.
📒 Files selected for processing (3)
- `source/instructors/authoring/assessments/add-assessment.rst` (5 hunks)
- `source/instructors/authoring/assessments/llm-based-auto-rubric.rst` (1 hunk)
- `source/instructors/authoring/assessments/llm-based-rubric.rst` (3 hunks)
🔇 Additional comments (10)
source/instructors/authoring/assessments/llm-based-auto-rubric.rst (3)
10-12: Opening paragraphs are now appropriately distinct.

Line 10 clearly differentiates the autograde variant from the manual approval flow, and line 12 provides helpful context about UI location and cross-reference. Past concerns about duplicate openings appear addressed. ✓
1-8: Title consistency verified across all three assessment documents.

The verification confirms that titles and references are properly aligned:

- "LLM Rubric Autograde" in `llm-based-auto-rubric.rst` matches the reference in `add-assessment.rst`
- "LLM Rubric Grader" in `llm-based-rubric.rst` matches its reference
- All titles use consistent non-hyphenated formatting
No hyphenation issues remain. The documentation is consistent and ready.
75-88: Rubric requirements examples lack consistency between auto-graded and manual rubric files.

The rubric requirements example in this file (lines 84–88) uses generic, implementation-focused criteria: task correctness, descriptive naming, and comments. However, the equivalent example in `llm-based-rubric.rst` (lines 110–113) specifies domain-focused criteria: program correctness and dictionary data structure usage.

Both sections open with identical preamble language ("Use only the following criteria for evaluating the student code:"), suggesting they should follow a consistent pattern. Clarify whether this difference is intentional; if so, add brief guidance explaining why auto-graded rubrics use generic criteria while manual rubrics use domain-specific criteria. If unintentional, align the examples to provide consistent guidance for instructors designing either type of rubric.
source/instructors/authoring/assessments/add-assessment.rst (3)
6-10: Capitalization change: verify style guide consistency.

Line 6 changes the header from "Auto-Graded Assessments" to "Auto-graded assessments" (lowercase 'g' in 'graded'). The line 10 caption is updated accordingly to "Auto-graded assessments". Verify this lowercase convention is consistent with other section headers in the documentation style guide. If this is the standard, consider checking other similar headers for consistency.
26-27: Reference labels appear correctly aligned with document titles.

Toctree entry at line 27 (`llm-based-auto-rubric`) and reference text at line 73 (`` :ref:`LLM Rubric Autograde <llm-based-auto-rubric>` ``) correctly match the document title "LLM Rubric Autograde" in the target file. Similarly, line 72 references "LLM Rubric Grader", which aligns with the corrected title in `llm-based-rubric.rst`. Past concerns about reference/title mismatches appear resolved.

Also applies to: 72-73
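Toctree/label alignment of this kind can likewise be spot-checked with a script. A deliberately naive Python sketch (it ignores nested directives, and the sample content is illustrative rather than taken from the repository):

```python
def toctree_entries(rst_text):
    """Collect document names listed under `.. toctree::` directives.

    Naive parser: after a toctree directive, indented lines that are
    not options (lines starting with ':') are treated as entries; a
    dedented non-blank line ends the block.
    """
    entries, in_tree = [], False
    for line in rst_text.splitlines():
        stripped = line.strip()
        if stripped.startswith(".. toctree::"):
            in_tree = True
        elif in_tree and stripped and not line.startswith("   "):
            in_tree = False                 # dedented line ends the block
        elif in_tree and stripped and not stripped.startswith(":"):
            entries.append(stripped)        # a document entry
    return entries

sample = """.. toctree::
   :maxdepth: 1

   llm-based-rubric
   llm-based-auto-rubric
"""
print(toctree_entries(sample))  # ['llm-based-rubric', 'llm-based-auto-rubric']
```

Cross-checking each entry against the files on disk (and their titles) would automate the alignment verified manually above.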
55-58: New section structure and formatting are clear.

The "Assessment Auto-Generation" section (lines 55–58) is well-formatted with an appropriate RST sub-subsection marker (`+++`). The assessment type list is concise and cross-references are properly formatted. Content is accessible.
source/instructors/authoring/assessments/llm-based-rubric.rst (4)
2-2: Meta description clearly differentiates manual approval workflow.

Line 2 updates the meta description to specify manual approval: "Use an LLM to generate a rubric and grade based on that rubric. Teacher manually approves the results." This is distinct from the autograde variant and provides clear context for search/documentation systems.
7-12: Title, opening, and cross-references appropriately distinguish manual workflow.

Line 7 corrects the title to "LLM Rubric Grader" (matching the reference in `add-assessment.rst` line 72). Lines 10–12 clearly establish that instructor approval is required, contrast with the autograde variant, and provide the cross-reference. This is a marked improvement in clarity and alignment. ✓
16-22: Three-step process is clearly documented with mandatory-step emphasis.

Lines 16–20 now correctly describe three steps (adding manual approval), and line 22's note effectively emphasizes that Step 3 is required before feedback reaches students. The process flow is logical and the formatting is correct.
110-113: Rubric example phrasing is consistent; criteria differences are pedagogically intentional.

The introduction phrasing "Use only the following criteria for evaluating the student code:" is now consistently applied across both files. The differing criteria reflect the distinct pedagogical purposes: the auto-rubric example demonstrates general, reusable criteria for code quality (correctness, naming, comments), while the manual LLM-based rubric example illustrates assignment-specific criteria (correctness, dictionary usage). This differentiation appropriately shows instructors how to tailor rubrics for their specific contexts. No changes needed.
Actionable comments posted: 2
♻️ Duplicate comments (1)
source/instructors/authoring/assessments/llm-based-rubric.rst (1)
30-30: Clarify the reference to General, Metadata, and Files sections.

Similar to the auto-rubric file, this line references "General, Metadata and Files," but the instructions that follow only explicitly cover General and Grading. Consider revising to reduce ambiguity about which sections are locally addressed.
For clarity and consistency, consider updating:
```diff
-Complete the sections below to set up your rubric grader. For more information on **General**, **Metadata** and **Files** see :ref:`Assessments <assessments>`.
+Complete the sections below to set up your rubric grader. For more information on assessment setup, see :ref:`Assessments <assessments>`.
```
📒 Files selected for processing (2)
- `source/instructors/authoring/assessments/llm-based-auto-rubric.rst` (1 hunk)
- `source/instructors/authoring/assessments/llm-based-rubric.rst` (4 hunks)
Actionable comments posted: 1
♻️ Duplicate comments (1)
source/instructors/authoring/assessments/llm-based-rubric.rst (1)
7-8: Resolve terminology inconsistency: "LLM Rubric Grader" (title) vs. "LLM Rubric" (body).

The title uses "LLM Rubric Grader," but throughout the document the assessment is referred to as "LLM Rubric" (lines 10, 12, 16, 25). Choose one consistent name and apply it everywhere. Since the body consistently uses "LLM Rubric," align the title to match.
Apply this diff to the title:
```diff
-LLM Rubric Grader
-=================
+LLM Rubric
+==========
```

Also applies to: 10-10, 12-12, 16-16, 25-25
📒 Files selected for processing (1)
- `source/instructors/authoring/assessments/llm-based-rubric.rst` (4 hunks)
🔇 Additional comments (1)
source/instructors/authoring/assessments/llm-based-rubric.rst (1)
30-30: Clarify referenced sections.

Line 30 mentions "General, Metadata and Files" but the instructions that follow only cover the General and Grading sections. This may mislead users into thinking they need to configure Metadata and Files locally for this assessment type. A past review flagged this; if the narrower scope is intentional, ensure the reference matches what is actually covered below.
Consider whether this should be revised to focus only on what is documented locally:
```diff
-Complete the sections below to set up your rubric grader. For more information on **General**, **Metadata** and **Files** see :ref:`Assessments <assessments>`.
+Complete the sections below to set up your rubric grader. For general assessment concepts, see :ref:`Assessments <assessments>`.
```
Actionable comments posted: 1
♻️ Duplicate comments (1)
source/instructors/authoring/assessments/llm-based-rubric.rst (1)
27-28: Add hyphen to "LLM-based" when used as compound adjective.

The section heading should read "LLM-Based Rubric Creation" with a hyphen, as "LLM-based" is a compound adjective modifying "Rubric Creation." This correction was flagged in a previous review and marked as addressed, but the hyphen is still missing in the current version.
```diff
-LLM Based Rubric Creation (Step 1)
+LLM-Based Rubric Creation (Step 1)
```
📒 Files selected for processing (2)
- `source/instructors/authoring/assessments/llm-based-auto-rubric.rst` (1 hunk)
- `source/instructors/authoring/assessments/llm-based-rubric.rst` (4 hunks)
🔇 Additional comments (2)
source/instructors/authoring/assessments/llm-based-auto-rubric.rst (1)
7-8: Standardize terminology: "Auto-grader" in title vs. "Autograde" in content.

The title uses "The LLM Rubric Auto-grader" (with hyphen, two words), but the rest of the document consistently refers to it as "Autograde" (single word, no hyphen); see lines 10, 12, 16, and 22. Align the title with the terminology used throughout the file and in the companion `llm-based-rubric.rst` document for consistency.

```diff
-The LLM Rubric Auto-grader
-==========================
+The LLM Rubric Autograde
+========================
```

source/instructors/authoring/assessments/llm-based-rubric.rst (1)
100-107: Verify intended consistency between rubric examples before unifying.

The inconsistency is confirmed: `llm-based-rubric.rst` (lines 100–107) shows "Program correctness" and "Proper and efficient usage of a dictionary data structure," while `llm-based-auto-rubric.rst` (lines 85–89) shows "The code correctly implements the requested task," "Variable and function names are descriptive," and "The code includes at least two meaningful comments."

However, the documentation structure is ambiguous. Both sections reference each other with "You can view another example on..." language, suggesting these may intentionally be different illustrations. Clarify whether these examples should be:
- Unified — both sections show the same example for consistency
- Intentionally different — make this explicit in the text (e.g., change "another example" to "a different approach" or add explanatory context)
Without understanding the pedagogical intent, unifying them may obscure intentional variety.
Please don't review yet!!!! I'm sure CodeRabbit will have a lot to say, and I'll probably not be finished until Monday.