
assessment update #435

Merged
LolaValente merged 5 commits into master from AssessmentsUpdate4 on Nov 17, 2025

Conversation

LolaValente (Collaborator) commented Nov 17, 2025

Summary by CodeRabbit

  • Documentation
    • Added Regenerate path in AI assessment generation (step 4) with guidance and example prompt.
    • Added Starter Pack mention and install steps for assessment examples.
    • Expanded Random Assessments: layout requirements, duplication prevention, sync/publish workflows, filter categories/table, and updated images.
    • Reorganized SPLICE, Standard Code Test, and Student Submission guides (structure, tabs/examples, submit/mark-as-complete flows, wording/formatting).
    • Clarified ungraded-assessments guidance and minor file cleanup.

@LolaValente LolaValente requested a review from shajason November 17, 2025 16:09
coderabbitai bot commented Nov 17, 2025

Walkthrough

Updates multiple instructor assessment docs: adds a Regenerate path to AI assessment generation, expands Random assessment rules and sync/publish flows, restructures SPLICE and Standard Code Test docs into clearer sections and tabbed layouts, clarifies Student Submission Options and ungraded-assessments wording, and trims trailing blank lines in the assessments library doc.

Changes

Cohort / File(s) Summary
AI Assessment Guidance
source/instructors/authoring/assessments/ai-assessment-generation.rst
Adds guidance in step 4 to click "Regenerate" when a generated assessment is unsatisfactory and to provide additional guidance in the Generation Prompt (example included). Note added in two locations.
Random Assessment — rules & publishing
source/instructors/authoring/assessments/random.rst
Replaces descriptive intro with pool-based definition; adds "Layout Requirements" (Simple vs Complex), "Duplication Prevention" rules; reorganizes Creating/Updating/Publishing into clearer sync/publish flows; adds Sync Options, filter/category guidance (including Bloom’s mapping), updates images and formatting.
SPLICE assessment docs
source/instructors/authoring/assessments/splice.rst
Introduces "How to Use a SPLICE Assessment in Codio" section, removes metadata/files steps and images, retains General/Execution/Grading steps, and renumbers creation steps.
Standard Code Test — restructure
source/instructors/authoring/assessments/standard-code-test.rst
Converts image-centric layout into tab/table-driven presentation: simplifies notes, removes Starter Pack paragraph, replaces Assessment Auto-Generation with AI reference, adds tabbed language-specific commands, reorganizes grading/input/check/test sections, standardizes headings and images.
Student submission UX and options
source/instructors/authoring/assessments/student-submission.rst
Normalizes capitalization and headings; adds Submit button customization note and image; clarifies attempts visibility; broadens suppress-submit applicability; expands Mark as Complete with Advantages/Drawbacks and alternative auto-complete methods; adds viewing-completed guidance.
Ungraded assessments wording
source/instructors/authoring/assessments/ungraded-assessments.rst
Rephrases wording to position ungraded assessments as consequence-free checks while retaining the method (set correct and incorrect points to zero).
Assessments overview & minor cleanup
source/instructors/authoring/assessments/assessments.rst, source/instructors/setupcourses/library/assessmentslibrary.rst
assessments.rst: tweaks assessment overview wording and adds Starter Pack intro/install sentence. assessmentslibrary.rst: removes three trailing blank lines and minor whitespace cleanup.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Extra attention:
    • source/instructors/authoring/assessments/random.rst — verify sync vs reset-and-publish semantics and duplication-prevention wording.
    • source/instructors/authoring/assessments/standard-code-test.rst — confirm tabbed language commands, inputs/tests tabs, and that tables preserve examples and tolerances.
    • source/instructors/authoring/assessments/student-submission.rst — check Submit button customization text, suppress-submit applicability list, and Mark as Complete alternative flows.
  • Spot-check image alt text, widths, directive formatting, and consistent heading capitalization across updated files.

Possibly related PRs

Suggested reviewers

  • shajason

Pre-merge checks

❌ Failed checks (1 inconclusive)
Title check — ❓ Inconclusive. The title 'assessment update' is vague and overly generic. While it relates to the changeset (which updates assessment documentation), it fails to convey the scope, nature, or primary focus of the changes. Resolution: revise the title to be more specific, such as 'Update assessment documentation with layout requirements, synchronization guidance, and formatting improvements' or similar, to clearly communicate the main changes.
✅ Passed checks (2 passed)
Description Check — ✅ Passed. Check skipped: CodeRabbit's high-level summary is enabled.
Docstring Coverage — ✅ Passed. No functions found in the changed files to evaluate docstring coverage; skipping docstring coverage check.

📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a09396c and 1c0312d.

📒 Files selected for processing (3)
  • source/instructors/authoring/assessments/ai-assessment-generation.rst (1 hunks)
  • source/instructors/authoring/assessments/assessments.rst (1 hunks)
  • source/instructors/authoring/assessments/random.rst (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rst

⚙️ CodeRabbit configuration file

**/*.rst: Review files for:

  • Consistent formatting (e.g., headings, lists, links).
  • Anywhere there are tables, they should use list-table.
  • Clear and concise language.
  • Correct grammar and spelling.
  • Proper use of rst syntax (e.g., avoid broken links or invalid code blocks).
  • Adherence to style guides (e.g., consistent tone, terminology).
    Suggest improvements to enhance readability and accessibility.
  • Make sure to find all improvements on the first pass.
    Go through the material multiple times before submitting comments.

Files:

  • source/instructors/authoring/assessments/ai-assessment-generation.rst
  • source/instructors/authoring/assessments/assessments.rst
  • source/instructors/authoring/assessments/random.rst
🧠 Learnings (1)
📚 Learning: 2025-10-27T17:55:37.142Z
Learnt from: LolaValente
Repo: codio/knowledge PR: 426
File: source/instructors/authoring/assessments/llm-based-rubric.rst:61-61
Timestamp: 2025-10-27T17:55:37.142Z
Learning: In rst files for the Codio knowledge repository, image directive attributes (`:height:`, `:alt:`, etc.) should be indented with 3 spaces, not 4. Content following an image directive should not be indented unless it's intentionally part of that directive.

Applied to files:

  • source/instructors/authoring/assessments/random.rst
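
As a concrete illustration of the indentation rule above, a minimal sketch (the image path, alt text, and width here are hypothetical, not taken from the repo):

```rst
.. image:: /img/guides/example-screenshot.png
   :alt: Example screenshot of the assessment editor
   :width: 450px

This paragraph is intentionally not indented, so it is not treated as part of the image directive.
```

The attribute lines sit exactly 3 spaces in from the ``..`` that opens the directive; indenting the following paragraph would pull it into the directive body.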
🔇 Additional comments (7)
source/instructors/authoring/assessments/ai-assessment-generation.rst (1)

45-45: Effectively addresses prior feedback on Regenerate path and example formatting.

The addition of the Regenerate workflow in step 4 is clearly positioned and well-integrated. The example prompt is now properly formatted as inline code (using double backticks), making it visually distinct and copy/paste-friendly for users—successfully addressing the prior review suggestion. The conditional guidance ("if you are not satisfied...") provides helpful context for regenerating assessments with additional prompt guidance.

source/instructors/authoring/assessments/assessments.rst (2)

10-10: Grammar improvement successfully resolves redundancy.

Line 10 restructures the sentence to eliminate the repeated "can be" construction while maintaining clarity. The revision ("Codio offers assessments that determine...and provide automatic or manual grading options") is more concise and reads naturally. This directly addresses the prior feedback about parallel structure.


12-13: Starter Pack introduction adds valuable context with clear step-by-step navigation.

The new Starter Pack section provides helpful onboarding guidance and uses imperative, step-by-step wording ("Go to Starter Packs, search for Demo Guides and Assessments, click Use Pack, then click Create") which aligns with prior feedback and improves consistency across the documentation. The navigation path is clear and actionable for instructors.

source/instructors/authoring/assessments/random.rst (4)

9-31: New Layout Requirements and Duplication Prevention sections provide essential clarity.

Lines 9–31 introduce two well-structured new sections that significantly improve documentation clarity. The Layout Requirements section uses nested bullets effectively to distinguish Simple Layout (1-panel) from Complex Layout (multi-panel) with clear constraints and warnings. The Duplication Prevention section articulates library uniqueness requirements and includes a helpful note explaining the consequence of insufficient unique assessments. Both sections address prior feedback about layout terminology clarity and provide essential context for users.


40-40: Redundancy resolved with clearer action-focused language.

Line 40 revises the phrasing from "Use...Use" to "Enables selection of," eliminating the redundancy flagged in prior review. The new wording is more concise and better describes the toggle's function without repetition.


44-87: Filter Categories table effectively organizes complex filter options with proper structure.

The new list-table directive (lines 44–87) successfully implements the Filter Categories and Inputs with correct list-table syntax (:widths:, :header-rows:). Nested tab-set and tab-item directives cleanly organize Bloom's taxonomy levels and assessment types, allowing instructors to quickly find relevant filter options. Content indentation is correct (3 spaces per guidelines), and all table rows are properly structured. The table significantly improves usability by presenting a comprehensive reference for filter options in an organized, scannable format.
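
A minimal sketch of the pattern being praised here — a ``list-table`` with a nested ``tab-set`` in one cell (filter names and descriptions are illustrative, not the actual doc content):

```rst
.. list-table:: Filter Categories
   :widths: 30 70
   :header-rows: 1

   * - Filter
     - Options
   * - Bloom's Taxonomy
     - .. tab-set::

          .. tab-item:: Remember

             Recall facts and basic concepts.

          .. tab-item:: Apply

             Use information in new situations.
```

``tab-set``/``tab-item`` come from sphinx-design; nesting a directive inside a list-table cell works as long as every continuation line aligns with the first character of the cell's content.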


95-164: Comprehensive workflow reorganization improves task clarity and user guidance.

Lines 95–164 successfully restructure the updating and synchronization workflow into clear sections: "Modifying Assessment Parameters," "Publishing Changes," and "Synchronizing Changes from the Course" with a "Sync Options" subsection. The reorganization effectively distinguishes between random-assessment-only changes and broader assignment changes, with explicit guidance for each scenario. Image directives are properly formatted with consistent alt-text (American spelling "Synchronize"), specified widths (450px, 500px, 300px), and correct 3-space indentation of attributes. The warning block and Sync Options subsection with "no students" vs. "students started" decision paths provide actionable guidance. RST heading hierarchy is correct (= → ~ → -), and all content reads clearly with proper structure.



@coderabbitai bot left a comment

Actionable comments posted: 8

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
source/instructors/authoring/assessments/ungraded-assessments.rst (1)

1-11: RST syntax and formatting look correct; minor tone observation.

The file follows proper RST structure with correct heading levels and meta tags. The wording updates make the instructions clearer. However, I notice the phrasing could be slightly tightened for consistency with other assessment documentation.

On Line 9, consider: "You can create ungraded assessments to provide..." (emphasizing the key term) for better visual scanning. This aligns with other assessment doc patterns where key concepts are highlighted.

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between e6a827b and b5e63c6.

⛔ Files ignored due to path filters (12)
  • source/img/guides/Grading-new-feature1.png is excluded by !**/*.png, !**/*.png
  • source/img/guides/assessment_sct_check.png is excluded by !**/*.png, !**/*.png
  • source/img/guides/assessment_sct_execution.png is excluded by !**/*.png, !**/*.png
  • source/img/guides/customizeSubmitbutton.png is excluded by !**/*.png, !**/*.png
  • source/img/guides/random-sync.png is excluded by !**/*.png, !**/*.png
  • source/img/guides/random-update.png is excluded by !**/*.png, !**/*.png
  • source/img/guides/sql-helpers.png is excluded by !**/*.png, !**/*.png
  • source/img/guides/std-assessment-args.png is excluded by !**/*.png, !**/*.png
  • source/img/guides/std-assessment-error.png is excluded by !**/*.png, !**/*.png
  • source/img/guides/std-assessment-stdin-ignore.png is excluded by !**/*.png, !**/*.png
  • source/img/guides/std-assessment-stdin.png is excluded by !**/*.png, !**/*.png
  • source/img/sql-helpers.png is excluded by !**/*.png, !**/*.png
📒 Files selected for processing (8)
  • source/instructors/authoring/assessments/ai-assessment-generation.rst (1 hunks)
  • source/instructors/authoring/assessments/assessments.rst (1 hunks)
  • source/instructors/authoring/assessments/random.rst (2 hunks)
  • source/instructors/authoring/assessments/splice.rst (2 hunks)
  • source/instructors/authoring/assessments/standard-code-test.rst (1 hunks)
  • source/instructors/authoring/assessments/student-submission.rst (2 hunks)
  • source/instructors/authoring/assessments/ungraded-assessments.rst (1 hunks)
  • source/instructors/setupcourses/library/assessmentslibrary.rst (0 hunks)
💤 Files with no reviewable changes (1)
  • source/instructors/setupcourses/library/assessmentslibrary.rst
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rst


Files:

  • source/instructors/authoring/assessments/ungraded-assessments.rst
  • source/instructors/authoring/assessments/ai-assessment-generation.rst
  • source/instructors/authoring/assessments/assessments.rst
  • source/instructors/authoring/assessments/student-submission.rst
  • source/instructors/authoring/assessments/random.rst
  • source/instructors/authoring/assessments/standard-code-test.rst
  • source/instructors/authoring/assessments/splice.rst
🔇 Additional comments (5)
source/instructors/authoring/assessments/ai-assessment-generation.rst (1)

51-51: Content placement and potential duplication to verify.

According to the AI summary, the usage note about regeneration appears "twice in the document: once after the initial important notes and once after the step-by-step generation instructions." This duplication warrants a check—please verify:

  1. Are there two separate instances of this regeneration guidance?
  2. If so, is this intentional or should one be removed?
  3. If intentional, add a note explaining why both placements are necessary for user guidance.
source/instructors/authoring/assessments/splice.rst (1)

13-16: Heading and cross-reference formatting are correct.

The restructured "How to Use a SPLICE Assessment in Codio" section follows proper RST conventions with appropriate heading underlines and cross-reference syntax. The reference to Assessments documentation is cleanly integrated.

source/instructors/authoring/assessments/standard-code-test.rst (1)

112-140: List-table usage is correct and well-structured.

The Grading Configuration table properly uses the list-table directive with correct widths (25/75 split), header row designation, and clear setting/description organization. This follows RST best practices and improves readability.

source/instructors/authoring/assessments/student-submission.rst (1)

45-75: Section expansion is well-structured; terminology consistency check needed.

The expanded Mark as Complete section with new subsections ("Viewing Completed Assignments," "Disabling Student Mark as Complete," "Alternative Methods for Marking Complete") provides better organization and user guidance. The cross-references to related features (:ref:`grading`, :ref:`assignment-duration`) are helpful.

However, verify that:

  1. "Mark as Complete" vs "mark as complete" capitalization is consistent (mixed usage at lines 45, 66, 68)
  2. The cross-references at lines 73-74 point to valid sections
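
For the verification in point 2, a valid ``:ref:`` target looks like this minimal sketch (the label and heading are illustrative; the actual targets live elsewhere in the docs):

```rst
.. _grading:

Grading
=======

Link to this section from any file with :ref:`grading`,
or with custom link text via :ref:`grading options <grading>`.
```

If the label line is missing above the target heading, Sphinx reports the reference as broken at build time.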
source/instructors/authoring/assessments/random.rst (1)

113-162: Publishing Changes and Synchronization sections are comprehensive; verify image asset references.

These new sections provide essential workflow guidance for instructors managing random assessments across student cohorts. The distinction between "only random assessment changes" vs "other assignment changes" publishing strategies is valuable. The Sync Options section clearly distinguishes no-students vs students-started scenarios.

Verify the following image references exist and are accessible:

  • Line 108: /img/guides/random-update.png
  • Line 135: /img/guides/random-sync.png
  • Line 151: /img/guides/random-sync-nostudents.png
  • Line 161: /img/guides/random-sync-studentsstarted.png

@coderabbitai bot left a comment

Actionable comments posted: 9

♻️ Duplicate comments (3)
source/instructors/authoring/assessments/standard-code-test.rst (1)

188-189: Address unresolved feedback: "Show Error Feedback" label terminology.

A prior review requested updating this label to "Show Error Feedback to Students" for consistency and clarity. While the description mentions "to students," the label itself should be updated to reflect the full audience context.

Apply this change:

-   * - **Show Error Feedback**
-     - Toggle to enable feedback to students about errors related to the specific test case.
+   * - **Show Error Feedback to Students**
+     - Toggle to enable feedback to students about errors related to the specific test case.
source/instructors/authoring/assessments/random.rst (2)

31-34: Add transitional sentence to improve section flow (previously flagged).

A bridge sentence between the Duplication Prevention section and "Creating a Random Assessment" would improve readability. The past review suggested adding clarity about referencing the layout guidance before proceeding.

Consider adding after line 31:

"See the Simple Layout and Complex Layout options above for guidance on using Random assessments on the same page."

This reinforces the layout context before moving to the creation steps.


42-42: Clarify recipient in random assignment phrase (previously flagged).

Line 42 uses "define the range of assessments to randomly assign" which is unclear about who receives the assignment. For consistency with line 9 ("each student receiving a randomly selected assessment"), update to:

"define the range of assessments to randomly assign to each student"

This explicit clarification aligns with the introductory context.

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between b5e63c6 and aa6a173.

📒 Files selected for processing (4)
  • source/instructors/authoring/assessments/assessments.rst (1 hunks)
  • source/instructors/authoring/assessments/random.rst (2 hunks)
  • source/instructors/authoring/assessments/standard-code-test.rst (1 hunks)
  • source/instructors/authoring/assessments/student-submission.rst (2 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rst


Files:

  • source/instructors/authoring/assessments/assessments.rst
  • source/instructors/authoring/assessments/random.rst
  • source/instructors/authoring/assessments/student-submission.rst
  • source/instructors/authoring/assessments/standard-code-test.rst
🔇 Additional comments (4)
source/instructors/authoring/assessments/student-submission.rst (2)

8-8: Prior grammar issue resolved.

The sentence now correctly reads "There are two important settings to control:" with the auxiliary verb properly included. Good fix!


25-26: Prior spelling issue resolved.

"Suppressing the Submit Button" is now correctly spelled with double-'p'. Excellent correction!

source/instructors/authoring/assessments/random.rst (2)

44-87: Filter Categories table is well-structured and properly formatted.

The list-table with nested tab-set/tab-item directives presents complex category information effectively. RST syntax is correct, tab-sets render properly, and the hierarchical organization (Bloom's levels, assessment types, language, categories) is clear and accessible.

Minor note: All six filter categories (Bloom's, Type, Language, Category, Content, Learning Objective) are well-documented and provide good guidance for users.


113-127: Publishing guidance is clear and logically structured.

The two-scenario approach (ONLY random assessment changes vs other assignment changes) effectively guides users on the proper publishing workflow. The distinction between students who have/haven't started is explicit and actionable. No changes needed.

@coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (1)
source/instructors/authoring/assessments/standard-code-test.rst (1)

188-189: Minor: Consider consistency in Show Error Feedback label wording.

Line 188 labels this as "Show Error Feedback" with description "feedback to students." For consistency with the pattern used at lines 195-196 ("show the students"), consider updating the label to "Show Error Feedback to Students" to be more explicit about the audience.

This is a minor refinement for terminology consistency. The current wording is clear but could align better with other similar settings in the table.

- * - **Show Error Feedback**
-   - Toggle to enable feedback to students about errors related to the specific test case.
+ * - **Show Error Feedback to Students**
+   - Toggle to enable feedback to students about errors related to the specific test case.
📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between aa6a173 and 7a0c954.

📒 Files selected for processing (3)
  • source/instructors/authoring/assessments/random.rst (2 hunks)
  • source/instructors/authoring/assessments/standard-code-test.rst (1 hunks)
  • source/instructors/authoring/assessments/student-submission.rst (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rst


Files:

  • source/instructors/authoring/assessments/student-submission.rst
  • source/instructors/authoring/assessments/random.rst
  • source/instructors/authoring/assessments/standard-code-test.rst
🔇 Additional comments (1)
source/instructors/authoring/assessments/student-submission.rst (1)

1-80: Well-structured and comprehensive with past issues resolved.

This file demonstrates thorough attention to detail. All grammar, capitalization, and clarity issues flagged in prior reviews have been corrected—from the subject-verb agreement ("student work"), to terminology consistency ("Penalty Deadlines"), to proper hyphenation ("assignment-level"). The new Advantages and Drawbacks subsections under "Mark as Complete" add valuable context for instructors. The overall structure is clear and accessible.

@shajason (Collaborator) left a comment

Sorry - quite a few edits....

@coderabbitai bot left a comment

Actionable comments posted: 3

♻️ Duplicate comments (1)
source/instructors/authoring/assessments/standard-code-test.rst (1)

52-54: Consider refining UI instruction terminology.

Line 54 uses "Amend the timeout setting" which, while grammatically correct, is less conventional in UI documentation contexts. Adjust or Set are more standard in instructional materials:

-   - **Timeout** - Amend the timeout setting for code execution (up to 300 seconds via arrows, or manually enter a longer period).
+   - **Timeout** - Adjust the timeout setting for code execution (up to 300 seconds via arrows, or manually enter a longer period).
📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 7a0c954 and a09396c.

📒 Files selected for processing (5)
  • source/instructors/authoring/assessments/ai-assessment-generation.rst (1 hunks)
  • source/instructors/authoring/assessments/assessments.rst (1 hunks)
  • source/instructors/authoring/assessments/random.rst (1 hunks)
  • source/instructors/authoring/assessments/standard-code-test.rst (1 hunks)
  • source/instructors/authoring/assessments/student-submission.rst (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rst


Files:

  • source/instructors/authoring/assessments/random.rst
  • source/instructors/authoring/assessments/standard-code-test.rst
  • source/instructors/authoring/assessments/ai-assessment-generation.rst
  • source/instructors/authoring/assessments/student-submission.rst
  • source/instructors/authoring/assessments/assessments.rst
🔇 Additional comments (15)
source/instructors/authoring/assessments/assessments.rst (1)

12-12: Excellent revision to Starter Pack guidance.

The imperative, step-by-step formatting (using commas between actions) is clearer and more consistent with instructional documentation standards than arrow notation. The bolded UI element names are properly formatted.

source/instructors/authoring/assessments/standard-code-test.rst (5)

10-11: Character limit note is properly formatted.

The note block correctly warns about the 20,000-character output limit and provides a clear reference link to Advanced Code Tests for finer control.


21-31: New Assessment Auto-Generation section improves documentation flow.

The dedicated section clearly explains auto-generation scope, lists what is created, and references the AI assessment generation documentation for deeper guidance. Structure and content are well-organized.


58-106: Language-Specific Commands tabbed structure is well-executed.

Using tabs to organize language-specific commands is clear and maintainable. The Python, Java, C/C++, Ruby, and Bash commands are concise and correct. The SQL tab appropriately includes additional helper script guidance and installation instructions. Image for SQL helpers is properly referenced with alt-text and width. No issues identified.


114-203: Grading and Assessment Settings tables are well-structured.

Both list-table directives use correct formatting (:widths:, :header-rows: 1, proper markup). The descriptions are clear and comprehensive. Images are included with proper alt-text and width specifications. The Settings table entries (e.g., Run Test, Check Test, Show Error Feedback, Show Expected Answer) are well-explained with supporting visuals.


209-218: Output redirection guidance is clear and concise.

The section explaining how to capture file output using the Command field (with cat example) is brief, practical, and correctly formatted with a code block. No issues identified.
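
The redirection pattern described can be sketched as follows (script and file names are hypothetical; the doc's actual example may differ):

```rst
.. code-block:: bash

   python3 student_solution.py && cat results.txt
```

Because the Standard Code Test compares the command's stdout against the expected output, running ``cat`` on the file the program writes makes that file's contents part of the checked output.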

source/instructors/authoring/assessments/random.rst (3)

9-31: Layout Requirements and Duplication Prevention sections provide essential clarity.

The new structured guidance clearly distinguishes Simple Layout (1-panel: multiple assessments allowed) from Complex Layout (multi-panel: restrictions apply). The Duplication Prevention section explains the auto-deduplication mechanism and its requirements. The note properly warns users about insufficient unique assessments. Content is well-organized with bullets and appropriate emphasis.


44-87: Filter Categories table with nested tabs is comprehensive and well-formatted.

The list-table directive is correctly formatted with :widths: 30 70 and :header-rows: 1. Nested tab-set directives organize Bloom's Taxonomy levels and Assessment Types efficiently. All six filter categories (Bloom's, Assessment Type, Programming Language, Category, Content, Learning Objective) are explained clearly with relevant examples. The SWBAT example is helpful and realistic.


95-164: Publishing and Synchronizing workflows are clearly restructured.

The distinction between scenarios ("ONLY random assessment changes" vs. "other assignment changes") is explicit and helpful. Synchronizing Changes section provides actionable guidance with supporting images (width: 500px / 300px, consistent alt-text). Sync Options subsection (no-students vs. students-started) clearly explains the branching logic and warnings. Structure supports diverse instructor workflows well.

source/instructors/authoring/assessments/student-submission.rst (6)

6-11: Excellent restructuring of Student Submission Options intro.

The revised title and introductory text now clearly explain what settings are being discussed. The two bullet points concisely state the controllable elements (student submission method and assignment completion notification). This replaces the previously awkward phrasing and makes the page's purpose immediately clear.


15-17: Submit button default behavior and customization are clearly documented.

Line 15 briefly explains the default Submit button behavior and autograde timing. The note provides actionable guidance for customizing the button label with image support. The template reference "|assessment" is clear for users familiar with guide markdown syntax.


25-32: Suppressing the Submit Button section is well-structured.

The heading correctly spells "Suppressing" and the content lists all applicable assessment types. The explanation of use cases ("students shouldn't need to worry about pressing a submit button") is practical. Step-by-step instructions are clear: navigate to guide Settings, disable "Use submit buttons." Global Settings image is properly referenced.


42-51: Mark as Complete section now includes helpful context.

The addition of an introductory sentence explaining autograde script execution and teacher dashboard visibility is practical. The new Advantages and Drawbacks subsections provide balanced perspective: advantages (immediate grading opportunity) and drawbacks (no edits allowed after marking complete). This structure helps instructors make informed decisions about whether to use this feature.


53-75: Viewing and Managing Mark as Complete are clearly separated.

The Viewing Completed Assignments section explains the read-only access model and how students access feedback. Disabling Student Mark as Complete explains when and why instructors might disable this feature. Alternative Methods for Marking Complete provides clear options (manual marking, deadline-triggered marking) with proper references to related documentation. Section hierarchy is logical and accessible.


76-78: Penalty Deadlines reference maintains consistency.

Capitalization of "Penalty Deadlines" is consistent with the section heading. The brief description correctly notes that this is a related but separate feature for grade deductions applied before the final deadline. The reference link directs users to the detailed documentation. No issues identified.

@LolaValente LolaValente merged commit 65b8983 into master Nov 17, 2025
1 check passed
@LolaValente LolaValente deleted the AssessmentsUpdate4 branch November 17, 2025 22:36
@coderabbitai coderabbitai bot mentioned this pull request Nov 20, 2025
@coderabbitai coderabbitai bot mentioned this pull request Jan 14, 2026