[fix(ytdlp)] Stabilize progress streaming and temp file handling #13
Conversation
…lity Co-authored-by: Copilot <copilot@github.com>
📝 Walkthrough

Replace polling with a broadcast-driven SSE progress stream for yt-dlp jobs; add regex dependency and OnceLock-backed regex parsing; stage downloads in per-job temp dirs and atomically move files on success; add subscribe API on YtdlpManager; extend config/state/models; rename middleware header constants and update CORS/captcha/rate-limit wiring; large README/docs edits.

Changes: Single Feature DAG
Sequence Diagram

sequenceDiagram
participant Client as HTTP Client
participant Controller as YtdlpController
participant Manager as YtdlpManager
participant Broadcast as Broadcast Channel
participant Worker as Background Worker
Client->>Controller: GET /stream_download_progress/{job_id}
Controller->>Manager: subscribe()
Manager-->>Broadcast: create receiver
Controller->>Manager: request current job snapshot
Manager-->>Controller: return snapshot
Controller-->>Client: SSE "progress" (initial)
Worker->>Manager: update_job(job_id, state)
Manager->>Broadcast: send(job_id)
Broadcast-->>Controller: notify
Controller->>Manager: fetch updated snapshot
Controller->>Controller: compare snapshots
alt changed
Controller-->>Client: SSE "progress" (updated)
end
Worker->>Manager: update_job(job_id, terminal)
Manager->>Broadcast: send(job_id)
Broadcast-->>Controller: notify
Controller-->>Client: SSE "done"
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
🚥 Pre-merge checks | ✅ 5✅ Passed checks (5 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
@coderabbitai full review
✅ Actions performed: Full review triggered.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/controllers/api/v1/ytdlp_controller.rs (1)
143-188: ⚠️ Potential issue | 🟠 Major

Handle `RecvError::Lagged` without closing the SSE stream.

`broadcast::Receiver::recv()` can return `Lagged` when progress updates outpace this client. The current `while let Ok(...)` treats both `Lagged` and `Closed` the same way, causing the stream to terminate prematurely even though the download is still running. Under bursty output, a lagged subscriber will silently stop receiving updates.

According to Tokio's broadcast semantics, `Lagged` is not fatal: the receiver automatically advances to the oldest retained message. The stream should continue and resume consuming from that point, not close.

Suggested fix
- while let Ok(updated_id) = broadcast_rx.recv().await {
-     if updated_id != stream_job_id {
-         continue;
-     }
+ loop {
+     match broadcast_rx.recv().await {
+         Ok(updated_id) if updated_id != stream_job_id => continue,
+         Ok(_) | Err(tokio::sync::broadcast::error::RecvError::Lagged(_)) => {}
+         Err(tokio::sync::broadcast::error::RecvError::Closed) => break,
+     }
  if let Some(job) = manager.get_job(&stream_job_id) {

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/controllers/api/v1/ytdlp_controller.rs` around lines 143 - 188, The SSE loop currently uses while let Ok(updated_id) = broadcast_rx.recv().await which exits on any RecvError; change it to explicitly match broadcast_rx.recv().await so you handle RecvError::Lagged by skipping/continuing (optionally log/debug) and only treat RecvError::Closed as the termination condition. Concretely, replace the while-let with loop { match broadcast_rx.recv().await { Ok(updated_id) => { ...existing body... }, Err(tokio::sync::broadcast::error::RecvError::Lagged(_)) => continue, Err(tokio::sync::broadcast::error::RecvError::Closed) => break } } ensuring you reference broadcast_rx.recv() and keep the existing job handling, sends, and breaks unchanged.
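The branching the comment asks for can be sketched in isolation. Below is a minimal, std-only sketch: the `RecvError` and `Action` types are stand-ins invented here so the decision table runs without Tokio, not the real `tokio::sync::broadcast::error::RecvError` or the actual SSE loop body.

```rust
// Stand-in for tokio::sync::broadcast::error::RecvError (illustrative only).
#[derive(Debug, PartialEq)]
enum RecvError {
    Lagged(u64),
    Closed,
}

// What the SSE loop should do for a given recv() outcome.
#[derive(Debug, PartialEq)]
enum Action {
    Process, // fetch the snapshot and maybe emit a "progress" event
    Skip,    // update was for a different job
    Stop,    // channel closed; end the stream
}

fn classify(result: Result<&str, RecvError>, wanted: &str) -> Action {
    match result {
        Ok(id) if id != wanted => Action::Skip,
        Ok(_) => Action::Process,
        // Lagged: receiver auto-advances to the oldest retained message; keep going.
        Err(RecvError::Lagged(_)) => Action::Process,
        // Closed: no more senders; terminate the stream.
        Err(RecvError::Closed) => Action::Stop,
    }
}
```

The key point the sketch encodes is that only `Closed` maps to `Stop`; a lagged receiver resyncs and continues.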
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/services/ytdlp/mod.rs`:
- Around line 338-355: The current code in the block that creates output_dir and
moves files (the loop using fs::rename) swallows rename errors and
unconditionally calls self.mark_job_finished(&id, moved_files), which can finish
a job with zero promoted artifacts; change the logic in the function handling
promotion (around the fs::create_dir_all branch and the fs::rename loop) to
collect successful moves as before but check if moved_files.is_empty() and if so
call self.mark_job_failed(&id, "no artifacts promoted to output directory") (or
a formatted error), otherwise call self.mark_job_finished(&id, moved_files);
preserve the existing cleanup behavior (the fs::remove_dir_all(&temp_dir) call)
so temp_dir is always removed.
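The promotion check described above reduces to one decision over the collected list. A minimal sketch, with `JobOutcome` as a hypothetical stand-in for the manager's real `mark_job_finished` / `mark_job_failed` calls:

```rust
#[derive(Debug, PartialEq)]
enum JobOutcome {
    Finished(Vec<String>), // at least one artifact was promoted
    Failed(String),        // nothing made it to the output directory
}

// Decide the terminal state from the files that survived the rename loop.
fn finalize(moved_files: Vec<String>) -> JobOutcome {
    if moved_files.is_empty() {
        JobOutcome::Failed("no artifacts promoted to output directory".to_string())
    } else {
        JobOutcome::Finished(moved_files)
    }
}
```

Temp-dir cleanup stays unconditional either way, as the comment notes.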
---
Outside diff comments:
In `@src/controllers/api/v1/ytdlp_controller.rs`:
- Around line 143-188: The SSE loop currently uses while let Ok(updated_id) =
broadcast_rx.recv().await which exits on any RecvError; change it to explicitly
match broadcast_rx.recv().await so you handle RecvError::Lagged by
skipping/continuing (optionally log/debug) and only treat RecvError::Closed as
the termination condition. Concretely, replace the while-let with loop { match
broadcast_rx.recv().await { Ok(updated_id) => { ...existing body... },
Err(tokio::sync::broadcast::error::RecvError::Lagged(_)) => continue,
Err(tokio::sync::broadcast::error::RecvError::Closed) => break } } ensuring you
reference broadcast_rx.recv() and keep the existing job handling, sends, and
breaks unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro Plus
Run ID: 6a217ed5-f8e8-4e53-b160-37e6d9aa23d0
⛔ Files ignored due to path filters (2)
`Cargo.lock` is excluded by `!**/*.lock`
`docs/images/Themed-Architecture-Diagram.svg` is excluded by `!**/*.svg`
📒 Files selected for processing (5)
`Cargo.toml`, `README.md`, `docs/images/Themed-Architecture-Diagram-code.md`, `src/controllers/api/v1/ytdlp_controller.rs`, `src/services/ytdlp/mod.rs`
💤 Files with no reviewable changes (1)
- docs/images/Themed-Architecture-Diagram-code.md
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/controllers/api/v1/ytdlp_controller.rs (1)
143-188: ⚠️ Potential issue | 🟠 Major

Handle lagged broadcasts without closing the SSE stream.

Line 143 exits the loop on any `recv()` error. The broadcast channel in `src/services/ytdlp/mod.rs:59` has a 1024-slot capacity, and every job update is sent through it at `src/services/ytdlp/mod.rs:385`. Under load, a slow receiver can fall behind, triggering `RecvError::Lagged`, which incorrectly terminates the client's progress stream mid-download. Per Tokio documentation, `Lagged` is recoverable: the next `recv()` resumes from the oldest buffered message, making it ideal for SSE streams that can accept dropped messages and resync.

Distinguish `RecvError::Lagged` (recoverable) from `RecvError::Closed` (terminal):

Suggested fix
- while let Ok(updated_id) = broadcast_rx.recv().await {
-     if updated_id != stream_job_id {
-         continue;
-     }
+ loop {
+     match broadcast_rx.recv().await {
+         Ok(updated_id) if updated_id != stream_job_id => continue,
+         Ok(_) | Err(tokio::sync::broadcast::error::RecvError::Lagged(_)) => {}
+         Err(tokio::sync::broadcast::error::RecvError::Closed) => break,
+     }
  if let Some(job) = manager.get_job(&stream_job_id) {
      let job_resp = YtdlpJobResponse::from(job);
      let snapshot = serde_json::to_string(&job_resp).unwrap_or_default();

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/controllers/api/v1/ytdlp_controller.rs` around lines 143 - 188, The loop currently breaks on any error from broadcast_rx.recv().await, which treats Recoverable RecvError::Lagged as terminal; change the recv handling so you match the Result/RecvError from broadcast_rx.recv().await and: on Ok(updated_id) keep existing logic; on Err(RecvError::Lagged) log/debug the lag and continue the loop (do NOT break) so the SSE resyncs to the next available message; on Err(RecvError::Closed) treat as terminal and break (send final error/done via tx as appropriate). Update references around broadcast_rx.recv(), stream_job_id, and tx.send(...) to implement the different branches.
♻️ Duplicate comments (1)
src/services/ytdlp/mod.rs (1)
338-355: ⚠️ Potential issue | 🟠 Major

Don't finish the job when no file was promoted to `output_dir`.

If every `fs::rename` fails, `moved_files` stays empty but Line 352 still marks the job as `Finished`. The download endpoint then has nothing to serve from the final output path.
Verify each finding against the current code and only fix it if needed. In `@src/services/ytdlp/mod.rs` around lines 338 - 355, The current code calls self.mark_job_finished(&id, moved_files) even when no files were successfully promoted, leaving the output_dir empty; change the logic after the fs::rename loop to check if moved_files is empty and, if so, call self.mark_job_failed(&id, "no files promoted to output_dir") (or a similar descriptive error) instead of mark_job_finished, ensuring you still remove temp_dir afterwards; reference the moved_files Vec, the fs::rename loop, and the mark_job_finished / mark_job_failed methods to implement this conditional behavior.
🧹 Nitpick comments (2)
README.md (1)
445-445: 💤 Low value

Add newline at end of file.
The file should end with a newline character, following common convention for text files.
📝 Proposed fix
Add a single newline after line 445.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` at line 445, Add a trailing newline to the README by appending a single newline character after the last line ("Terraform: The Cloudflare provider only supports R2 buckets; use the AWS Terraform provider for object uploads to R2.") so the file ends with a newline following POSIX text file conventions.

src/services/ytdlp/mod.rs (1)
511-608: ⚡ Quick win

Document the regex precedence and cover it with parser tests.

This parser now depends on four coordinated patterns plus the `extract_peak_percent(...).or_else(extract_first_percent(...))` fallback rule. A short inline example of the expected aria2 line shape, plus focused unit tests for representative lines, would make later changes much safer.

As per coding guidelines, "Complex logic (like the Midnight Snap caching strategy) must be documented inline with explanatory comments".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/ytdlp/mod.rs` around lines 511 - 608, Add an inline comment above parse_aria2_progress explaining the regex precedence and fallback rule (PROGRESS_RE captures total/speed/eta, extract_peak_percent (PEAK_RE) is preferred over extract_first_percent (FIRST_PCT_RE), and queued fragments are parsed by QUEUED_RE) and list example aria2 line shapes that map to each branch; then add unit tests for parse_aria2_progress and format_aria2_progress_message that cover representative lines (only total/speed/eta, lines with peak percents in parentheses, lines with plain percents, queued fragments via extract_queued_fragments, and combinations) to assert parsed fields and formatted output, referencing the functions parse_aria2_progress, format_aria2_progress_message, extract_peak_percent, extract_first_percent, and extract_queued_fragments so future changes to PROGRESS_RE, PEAK_RE, FIRST_PCT_RE, and QUEUED_RE are validated.
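The peak-vs-first precedence rule discussed above can be illustrated in isolation. This std-only sketch uses plain string scanning in place of the real `PEAK_RE`/`FIRST_PCT_RE` regexes; the helper names are illustrative, not the project's actual functions.

```rust
// Precedence rule: a percent wrapped in parentheses ("(42%)") wins;
// otherwise fall back to the first bare "NN%" on the line.
fn extract_percent(line: &str) -> Option<u32> {
    extract_paren_percent(line).or_else(|| extract_first_percent(line))
}

fn extract_paren_percent(line: &str) -> Option<u32> {
    let start = line.find('(')? + 1;
    let rest = &line[start..];
    let end = rest.find("%)")?;
    rest[..end].parse().ok()
}

fn extract_first_percent(line: &str) -> Option<u32> {
    let pct = line.find('%')?;
    // Walk backwards from '%' collecting the digits that precede it.
    let digits: String = line[..pct]
        .chars()
        .rev()
        .take_while(|c| c.is_ascii_digit())
        .collect();
    digits.chars().rev().collect::<String>().parse().ok()
}
```

Unit tests over representative lines, as the comment suggests, would pin this precedence down against future regex edits.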
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@README.md`:
- Line 295: The bullet "* **Cloudflare Provider:** R2 bucket used for Terraform
remote state and DNS record management for full HTTPS support." is missing an
article; update the sentence to read "* **Cloudflare Provider:** An R2 bucket
used for Terraform remote state and DNS record management for full HTTPS
support." to correct grammar in the README entry for the Cloudflare Provider.
---
Outside diff comments:
In `@src/controllers/api/v1/ytdlp_controller.rs`:
- Around line 143-188: The loop currently breaks on any error from
broadcast_rx.recv().await, which treats Recoverable RecvError::Lagged as
terminal; change the recv handling so you match the Result/RecvError from
broadcast_rx.recv().await and: on Ok(updated_id) keep existing logic; on
Err(RecvError::Lagged) log/debug the lag and continue the loop (do NOT break) so
the SSE resyncs to the next available message; on Err(RecvError::Closed) treat
as terminal and break (send final error/done via tx as appropriate). Update
references around broadcast_rx.recv(), stream_job_id, and tx.send(...) to
implement the different branches.
---
Duplicate comments:
In `@src/services/ytdlp/mod.rs`:
- Around line 338-355: The current code calls self.mark_job_finished(&id,
moved_files) even when no files were successfully promoted, leaving the
output_dir empty; change the logic after the fs::rename loop to check if
moved_files is empty and, if so, call self.mark_job_failed(&id, "no files
promoted to output_dir") (or a similar descriptive error) instead of
mark_job_finished, ensuring you still remove temp_dir afterwards; reference the
moved_files Vec, the fs::rename loop, and the mark_job_finished /
mark_job_failed methods to implement this conditional behavior.
---
Nitpick comments:
In `@README.md`:
- Line 445: Add a trailing newline to the README by appending a single newline
character after the last line ("Terraform: The Cloudflare provider only supports
R2 buckets; use the AWS Terraform provider for object uploads to R2.") so the
file ends with a newline following POSIX text file conventions.
In `@src/services/ytdlp/mod.rs`:
- Around line 511-608: Add an inline comment above parse_aria2_progress
explaining the regex precedence and fallback rule (PROGRESS_RE captures
total/speed/eta, extract_peak_percent (PEAK_RE) is preferred over
extract_first_percent (FIRST_PCT_RE), and queued fragments are parsed by
QUEUED_RE) and list example aria2 line shapes that map to each branch; then add
unit tests for parse_aria2_progress and format_aria2_progress_message that cover
representative lines (only total/speed/eta, lines with peak percents in
parentheses, lines with plain percents, queued fragments via
extract_queued_fragments, and combinations) to assert parsed fields and
formatted output, referencing the functions parse_aria2_progress,
format_aria2_progress_message, extract_peak_percent, extract_first_percent, and
extract_queued_fragments so future changes to PROGRESS_RE, PEAK_RE,
FIRST_PCT_RE, and QUEUED_RE are validated.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro Plus
Run ID: 91fb6489-6680-4d35-be49-72b3d405db30
⛔ Files ignored due to path filters (2)
`Cargo.lock` is excluded by `!**/*.lock`
`docs/images/Themed-Architecture-Diagram.svg` is excluded by `!**/*.svg`
📒 Files selected for processing (5)
`Cargo.toml`, `README.md`, `docs/images/Themed-Architecture-Diagram-code.md`, `src/controllers/api/v1/ytdlp_controller.rs`, `src/services/ytdlp/mod.rs`
💤 Files with no reviewable changes (1)
- docs/images/Themed-Architecture-Diagram-code.md
@coderabbitai full review
✅ Actions performed: Full review triggered.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/config.rs (1)
163-163: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Validate `GITHUB_GRAPHQL_URL` during config load (fail fast).

At Line 163, any non-empty malformed value is accepted and only fails later when requests are made. This should be rejected in `from_env()` as `ConfigError::InvalidValue` to surface misconfiguration at startup.

Proposed patch
@@ pub fn from_env() -> Result<Self, ConfigError> {
     let master_api_key = env_opt("MASTER_API_KEY")
         .ok_or_else(|| ConfigError::MissingVar("MASTER_API_KEY".to_string()))?;
+    let github_graphql_url = env::var("GITHUB_GRAPHQL_URL")
+        .unwrap_or_else(|_| "https://api.github.com/graphql".to_string())
+        .trim()
+        .to_string();
+
+    if github_graphql_url.is_empty()
+        || !(github_graphql_url.starts_with("http://")
+            || github_graphql_url.starts_with("https://"))
+    {
+        return Err(ConfigError::InvalidValue {
+            key: "GITHUB_GRAPHQL_URL".to_string(),
+            details: format!("'{}' must be a valid http(s) URL", github_graphql_url),
+        });
+    }
@@
     env_opt("GITHUB_PAT"),
     env_opt("GITHUB_USERNAME"),
-    env_or("GITHUB_GRAPHQL_URL", "https://api.github.com/graphql"),
+    github_graphql_url,
 ))

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/config.rs` at line 163, from_env() currently accepts any non-empty GITHUB_GRAPHQL_URL via env_or("GITHUB_GRAPHQL_URL", ...) and defers validation; update from_env() to parse/validate that value and return ConfigError::InvalidValue on malformed URLs. Locate the env_or call for GITHUB_GRAPHQL_URL in src/config.rs and replace the blind accept with logic that reads the env var (or default), attempts to parse it as a URL (or use a helper like Url::parse), and if parsing fails return Err(ConfigError::InvalidValue(format!("GITHUB_GRAPHQL_URL", err))) otherwise store the validated URL in the Config; ensure error variant used is ConfigError::InvalidValue and the change is implemented inside from_env().
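The fail-fast check can also be isolated into a small pure helper and tested on its own. A sketch under the same assumptions as the patch above; the function name and error shape are invented here, and a scheme-prefix check stands in for full URL parsing.

```rust
// Hedged sketch of the startup validation: reject empty or non-http(s)
// values before any request is ever made with them.
fn validate_graphql_url(raw: &str) -> Result<String, String> {
    let url = raw.trim().to_string();
    if url.is_empty() || !(url.starts_with("http://") || url.starts_with("https://")) {
        return Err(format!("'{url}' must be a valid http(s) URL"));
    }
    Ok(url)
}
```

In the real config this would feed `ConfigError::InvalidValue` instead of a bare `String`.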
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@src/config.rs`:
- Line 163: from_env() currently accepts any non-empty GITHUB_GRAPHQL_URL via
env_or("GITHUB_GRAPHQL_URL", ...) and defers validation; update from_env() to
parse/validate that value and return ConfigError::InvalidValue on malformed
URLs. Locate the env_or call for GITHUB_GRAPHQL_URL in src/config.rs and replace
the blind accept with logic that reads the env var (or default), attempts to
parse it as a URL (or use a helper like Url::parse), and if parsing fails return
Err(ConfigError::InvalidValue(format!("GITHUB_GRAPHQL_URL", err))) otherwise
store the validated URL in the Config; ensure error variant used is
ConfigError::InvalidValue and the change is implemented inside from_env().
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro Plus
Run ID: b28001c8-cb17-4e18-b066-e44dcc4f84f5
📒 Files selected for processing (9)
`GEMINI.md`, `README.md`, `src/config.rs`, `src/error.rs`, `src/lib.rs`, `src/models/health.rs`, `src/models/validation.rs`, `src/models/ytdlp.rs`, `src/state.rs`
✅ Files skipped from review due to trivial changes (6)
- GEMINI.md
- src/lib.rs
- src/models/ytdlp.rs
- src/models/validation.rs
- src/error.rs
- README.md
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/services/ytdlp/mod.rs (1)
528-565: ⚠️ Potential issue | 🔴 Critical

Fix regex pattern to match actual aria2c output format.

The pattern in `parse_aria2_progress` expects `[DL:...]` but real aria2c output is `[#GID total(%) CN:X DL:speed ETA:eta]`. The current pattern will not match real aria2 output, causing `speed` and `eta` to always be empty in progress updates. Update the regex to extract: total as size pairs like `400.0KiB/33.2MiB`, speed after `DL:` as `[^\s]+`, and eta after `ETA:` as `[^\s\]]+`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/ytdlp/mod.rs` around lines 528 - 565, The current parse_aria2_progress regex (initialized in PROGRESS_RE inside parse_aria2_progress) expects a `[DL:...]` token and doesn't match real aria2c lines like `[`#GID` total(%) CN:X DL:speed ETA:eta]`, so speed and eta are never captured; update the regex used in PROGRESS_RE to match aria2c format by capturing the total field as size pairs (e.g. `400.0KiB/33.2MiB`) into the "total" group, capturing speed after `DL:` into the "speed" group using a pattern like `[^\s]+`, and capturing ETA after `ETA:` into the "eta" group using a pattern like `[^\s\]]+`; keep the optionality for CN and other tokens, retain existing percent extraction via extract_peak_percent/extract_first_percent, and return ParsedProgress with the newly captured groups from PROGRESS_RE.
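For intuition, the extraction this comment asks for can be prototyped with plain string scanning, no `regex` crate needed. The line shape and field choices below follow the format quoted in the comment; the function name and return shape are assumptions, not the project's actual parser.

```rust
// Parse a line like "[#a1b2 400.0KiB/33.2MiB(1%) CN:16 DL:2.1MiB ETA:14s]"
// into (total, speed, eta). Returns None for lines that don't look like
// an aria2c console readout.
fn parse_aria2_line(line: &str) -> Option<(String, String, String)> {
    let inner = line.trim().strip_prefix('[')?.strip_suffix(']')?;
    let mut total = String::new();
    let mut speed = String::new();
    let mut eta = String::new();
    for tok in inner.split_whitespace() {
        if tok.contains('/') && tok.contains("iB") {
            // "400.0KiB/33.2MiB(1%)" -> keep the size pair, drop "(1%)"
            total = tok.split('(').next().unwrap_or(tok).to_string();
        } else if let Some(v) = tok.strip_prefix("DL:") {
            speed = v.to_string();
        } else if let Some(v) = tok.strip_prefix("ETA:") {
            eta = v.to_string();
        }
    }
    if total.is_empty() { None } else { Some((total, speed, eta)) }
}
```

The same token boundaries map directly onto the capture groups the comment proposes for `PROGRESS_RE`.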
🧹 Nitpick comments (1)
src/controllers/api/v1/ytdlp_controller.rs (1)
79-200: 🏗️ Heavy lift

Move the SSE state machine behind the service boundary.
This handler now owns subscription, snapshot diffing, lag semantics, and terminal-state rules. That makes the controller harder to test and drifts from the repo rule that controllers only extract/map while services own business logic.
Based on learnings: Services must handle business logic, caching, and external communication; controllers must only handle request extraction, calling services, and mapping results to HTTP responses.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/controllers/api/v1/ytdlp_controller.rs` around lines 79 - 200, The SSE state machine inside stream_download_progress currently owns subscription, snapshot diffing, lag handling, and terminal rules; move this logic into the service (ytdlp_manager) by implementing a new method (e.g., YtdlpManager::subscribe_progress or YtdlpService::stream_progress) that encapsulates subscribe(), get_job(), last_snapshot/diffing, lag semantics, and terminal-state/event emission and returns a ready-to-consume mpsc::Receiver<Result<Event, Infallible>> or an async Stream for the controller to forward; then simplify stream_download_progress to only extract request_path/id/client_ip, call state.ytdlp_manager.subscribe_progress(&id, request_path.clone(), client_ip.clone()) (or similar), log open/close, and return the returned receiver/stream as the SSE response, removing direct use of manager.subscribe(), manager.get_job(), broadcast_rx, tx/rx and the tokio::spawn state machine from the controller.
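One piece worth extracting first is the snapshot diffing the handler currently owns. A minimal sketch of just that step, with names assumed for illustration rather than taken from the real controller:

```rust
// Suppress duplicate SSE "progress" events: emit only when the serialized
// job snapshot actually changed since the last send.
fn next_event(last_snapshot: &mut String, new_snapshot: &str) -> Option<String> {
    if new_snapshot == last_snapshot.as_str() {
        None // unchanged: skip the event
    } else {
        *last_snapshot = new_snapshot.to_string();
        Some(new_snapshot.to_string())
    }
}
```

Because this is a pure function, it becomes trivially unit-testable once it lives in the service instead of inside the spawned SSE task.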
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/controllers/api/v1/ytdlp_controller.rs`:
- Around line 143-190: Handle the broadcast RecvError::Lagged(_) by re-fetching
the job from the manager (use manager.get_job(&stream_job_id)), construct a
YtdlpJobResponse via YtdlpJobResponse::from(job), serialize and compare with
last_snapshot and send a "progress" event if changed (using tx.send(...)).
Crucially, check job_resp.status for YtdlpJobStatus::Finished or
YtdlpJobStatus::Failed and, if terminal, send the "done" event over tx, log via
info! (use stream_path, stream_job_id, stream_client_ip, job_resp.status), and
break the loop so the SSE stream doesn't hang; if job not found, send the same
"error" event and break. Ensure all tx.send awaits/errors are handled similarly
to the existing Ok(updated_id) branch.
In `@src/models/validation.rs`:
- Around line 7-13: The length validation on the `name` field currently enforces
max = 20 but the error message says "3 and 50 characters"; update the validate
attribute on the `name` field (the #[validate(length(...))] for `pub name:
String`) so the message matches the constraint, e.g. change to "Name must be
between 3 and 20 characters".
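The constraint-vs-message mismatch is easy to pin down with a plain function. This std-only sketch mirrors what a `#[validate(length(min = 3, max = 20))]` attribute enforces; the helper name is hypothetical and stands in for the derive-based validation.

```rust
// The message must describe the same bounds the check enforces (3..=20).
fn validate_name(name: &str) -> Result<(), String> {
    let len = name.chars().count();
    if (3..=20).contains(&len) {
        Ok(())
    } else {
        Err("Name must be between 3 and 20 characters".to_string())
    }
}
```

Keeping the bound and the message adjacent like this is what the fix restores in the attribute form.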
---
Outside diff comments:
In `@src/services/ytdlp/mod.rs`:
- Around line 528-565: The current parse_aria2_progress regex (initialized in
PROGRESS_RE inside parse_aria2_progress) expects a `[DL:...]` token and doesn't
match real aria2c lines like `[`#GID` total(%) CN:X DL:speed ETA:eta]`, so speed
and eta are never captured; update the regex used in PROGRESS_RE to match aria2c
format by capturing the total field as size pairs (e.g. `400.0KiB/33.2MiB`) into
the "total" group, capturing speed after `DL:` into the "speed" group using a
pattern like `[^\s]+`, and capturing ETA after `ETA:` into the "eta" group using
a pattern like `[^\s\]]+`; keep the optionality for CN and other tokens, retain
existing percent extraction via extract_peak_percent/extract_first_percent, and
return ParsedProgress with the newly captured groups from PROGRESS_RE.
---
Nitpick comments:
In `@src/controllers/api/v1/ytdlp_controller.rs`:
- Around line 79-200: The SSE state machine inside stream_download_progress
currently owns subscription, snapshot diffing, lag handling, and terminal rules;
move this logic into the service (ytdlp_manager) by implementing a new method
(e.g., YtdlpManager::subscribe_progress or YtdlpService::stream_progress) that
encapsulates subscribe(), get_job(), last_snapshot/diffing, lag semantics, and
terminal-state/event emission and returns a ready-to-consume
mpsc::Receiver<Result<Event, Infallible>> or an async Stream for the controller
to forward; then simplify stream_download_progress to only extract
request_path/id/client_ip, call state.ytdlp_manager.subscribe_progress(&id,
request_path.clone(), client_ip.clone()) (or similar), log open/close, and
return the returned receiver/stream as the SSE response, removing direct use of
manager.subscribe(), manager.get_job(), broadcast_rx, tx/rx and the tokio::spawn
state machine from the controller.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro Plus
Run ID: 11e977f6-7042-442f-bb8a-775a7f1e4f23
⛔ Files ignored due to path filters (2)
`Cargo.lock` is excluded by `!**/*.lock`
`docs/images/Themed-Architecture-Diagram.svg` is excluded by `!**/*.svg`
📒 Files selected for processing (13)
`Cargo.toml`, `GEMINI.md`, `README.md`, `docs/images/Themed-Architecture-Diagram-code.md`, `src/config.rs`, `src/controllers/api/v1/ytdlp_controller.rs`, `src/error.rs`, `src/lib.rs`, `src/models/health.rs`, `src/models/validation.rs`, `src/models/ytdlp.rs`, `src/services/ytdlp/mod.rs`, `src/state.rs`
💤 Files with no reviewable changes (1)
- docs/images/Themed-Architecture-Diagram-code.md
…hance variable documentation in Terraform files
…nhance documentation and validation
@coderabbitai full review
✅ Actions performed: Full review triggered.
Actionable comments posted: 1
♻️ Duplicate comments (3)
infra/digitalocean/accounts/naduns-team/variables.tf (1)
202-206: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Mark `CADDY_CUSTOM_BROWSE_FILE_URL` as sensitive.

Lines 202-206 still define a presigned URL variable as a plain string; Terraform may expose tokenized query params in plan/apply output. Please add `sensitive = true`.

Suggested fix

 variable "CADDY_CUSTOM_BROWSE_FILE_URL" { //VAR: Declare on terraform.tfvars
   description = "Presigned URL to download custom browse.html"
   type        = string
+  sensitive   = true
 }

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@infra/digitalocean/accounts/naduns-team/variables.tf` around lines 202 - 206, The variable declaration for CADDY_CUSTOM_BROWSE_FILE_URL currently exposes a presigned URL as a plain string; mark the variable as sensitive by adding sensitive = true to the variable "CADDY_CUSTOM_BROWSE_FILE_URL" block so Terraform won’t show the tokenized query params in plan/apply output, and leave the description and type as-is to preserve intent.

GEMINI.md (2)
3-3: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Fix project name typo for consistency (Line 3).
“naduns codebase” should be corrected to “Nadzu's codebase” to match the project naming used elsewhere.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@GEMINI.md` at line 3, Replace the incorrect project name string "naduns codebase" with the correct "Nadzu's codebase" in the GEMINI.md sentence so the document reads consistently; locate the exact sentence in GEMINI.md (the line containing "This document serves as the foundational mandate for all engineering work on naduns codebase.") and update it to use "Nadzu's codebase".
67-67: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Rewrite the Rustdoc guideline sentence for clarity (Line 67).
The sentence is grammatically broken and currently implies guessing about unseen code. Split and clarify the directive.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@GEMINI.md` at line 67, Rewrite the broken Rustdoc guideline into two clear sentences: state that all public-facing methods and services must have /// (Rustdoc) comments that explain their intent and observable behavior, and add a second sentence instructing contributors not to over-document or make assumptions about unseen internal implementation details; update the existing line containing "All public-facing methods and services must have `///` (Rustdoc) comments explaining intent and behavior.do not over document, make guesses about the unseen code." to this clearer phrasing.
🧹 Nitpick comments (10)
GEMINI.md (1)
64-64: ⚡ Quick win

Make the parallel-build command explicit (Line 64).
The current wording is ambiguous as a shell command. Prefer explicit usage like `make c -j "$(nproc)"` to avoid misuse.

Based on learnings: Use the Makefile with `make c` for full validation and `-j "$(nproc)"` for parallel builds/tests.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@GEMINI.md` at line 64, The doc line is ambiguous about parallel builds; update the GEMINI.md wording at the referenced line to explicitly show the Makefile invocation using the c target and nproc parallelism (e.g., use make c -j "$(nproc)") so readers know to run the Makefile target "c" with -j set to the system nproc for parallel builds/tests.

src/services/ytdlp/mod.rs (2)
392-394: ⚡ Quick win

Remove unused parameters from `cleanup_failed_files`.

The `_id` and `_is_base_dir` parameters are no longer used after the refactor. Consider removing them to simplify the function signature and its call sites.

♻️ Suggested fix
- async fn cleanup_failed_files(temp_dir: &str, _id: &str, _is_base_dir: bool) {
+ async fn cleanup_temp_dir(temp_dir: &str) {
      let _ = fs::remove_dir_all(temp_dir).await;
  }

And update call sites:
- Self::cleanup_failed_files(&temp_dir.to_string_lossy(), &id, false).await;
+ Self::cleanup_temp_dir(&temp_dir.to_string_lossy()).await;

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/services/ytdlp/mod.rs` around lines 392 - 394, The function cleanup_failed_files still declares unused parameters _id and _is_base_dir; remove these parameters from its signature and from all call sites to simplify the API (update async fn cleanup_failed_files(temp_dir: &str) { ... } and adjust callers that pass id and is_base_dir to only pass temp_dir). Ensure you update any imports/uses that reference the old signature so compilation succeeds and run tests to verify no remaining references to _id or _is_base_dir.
176-179: ⚡ Quick win

Add documentation for the `subscribe` method.

The public `subscribe` method lacks documentation explaining what the broadcast channel emits (job IDs on progress updates) and how consumers should use it. As per coding guidelines, all public-facing methods must have Rustdoc comments.

📝 Suggested fix

- /// Subscribes to the job progress broadcast channel.
+ /// Subscribes to the job progress broadcast channel.
+ ///
+ /// Returns a receiver that emits job IDs whenever a job's state or progress is updated.
+ /// Consumers should filter by the job ID they're interested in and fetch the full
+ /// job state via [`get_job`].
  pub fn subscribe(&self) -> broadcast::Receiver<String> {
      self.progress_tx.subscribe()
  }

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/services/ytdlp/mod.rs` around lines 176 - 179, Add a Rustdoc comment to the public method subscribe() explaining that it returns a broadcast::Receiver<String> from progress_tx which emits job IDs (as Strings) whenever a ytdlp job reports progress, and include guidance for consumers to handle missed messages (e.g., using recv or try_recv and that they may miss earlier broadcasts) and that the receiver should be cloned for multiple listeners; update the doc on the subscribe function to reference progress_tx and the type broadcast::Receiver<String> so callers know what to expect and how to use the channel.

`src/middleware/rate_limit.rs` (1)
58-61: 💤 Low value: Doc comment mentions panics that can no longer occur.
The doc comment states "Panics if rate limit constants above are invalid", but the compile-time assertions now guarantee validity, so this method cannot actually panic at runtime.
📝 Suggested fix
```diff
-    /// Creates a new instance of `RateLimiters`
-    ///
-    /// Panics if rate limit constants above are invalid.
+    /// Creates a new instance of `RateLimiters`.
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/middleware/rate_limit.rs` around lines 58 - 61, Update the doc comment for the RateLimiters constructor (the impl/constructor for RateLimiters) to remove the line "Panics if rate limit constants above are invalid" and instead note that compile-time assertions validate constants so the method does not panic at runtime; keep the rest of the summary ("Creates a new instance of `RateLimiters`") and any must_use attribute intact.

`src/config.rs` (1)
167-168: 💤 Low value: Improve doc comment grammar.
The doc comment has grammatical issues and doesn't follow Rustdoc conventions.
📝 Suggested fix
```diff
-    /// check API KEY match with provided + constant time eq
-    /// uses: api key validation middleware
+    /// Checks if the provided key matches the master API key using constant-time comparison.
+    ///
+    /// Used by the API key validation middleware.
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/config.rs` around lines 167 - 168, Fix the grammar and Rustdoc style of the existing comment describing API key validation: replace the two short lines with a single, properly formatted Rustdoc sentence that clearly states the purpose (e.g., that this function/middleware validates an API key using a constant-time equality check) and follows Rustdoc conventions (start with a capitalized sentence, include punctuation, and mention usage such as "used by the API key validation middleware"). Apply this change to the doc comment immediately above the API key validation implementation so the comment reads as a complete, grammatically correct Rustdoc sentence.
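For readers unfamiliar with the technique the suggested doc describes, here is a minimal std-only sketch of a constant-time comparison. This is illustrative only; the actual middleware may well use a vetted crate rather than a hand-rolled version like this.

```rust
/// Compares two byte strings without short-circuiting on the first
/// mismatch, so the comparison time does not leak how long the matching
/// prefix is. Differences are accumulated with XOR/OR instead of an
/// early return.
fn constant_time_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    let mut diff: u8 = 0;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y; // non-zero once any byte pair differs
    }
    diff == 0
}

fn main() {
    assert!(constant_time_eq(b"secret-key", b"secret-key"));
    assert!(!constant_time_eq(b"secret-key", b"secret-kex"));
    assert!(!constant_time_eq(b"short", b"longer-key"));
}
```

Note that the length check still returns early; in practice the master key length is not usually considered secret, which is why most implementations accept that shortcut.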
`src/extractors/json_validator.rs` (1)

12-14: ⚡ Quick win: Improve doc comment quality.
The doc comment `///wrapped,automatic: JSON validator ;]` is unclear and doesn't follow Rustdoc conventions. It should explain what the struct does.
📝 Suggested fix

```diff
-///wrapped,automatic: JSON validator ;]
+/// An Axum extractor that deserializes JSON and validates it using the `validator` crate.
+///
+/// Returns validation errors as a structured JSON response if validation fails.
 #[derive(Debug)]
 pub struct ValidatedJson<T>(pub T);
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/extractors/json_validator.rs` around lines 12 - 14, The doc comment for the struct ValidatedJson is unclear and non-idiomatic; replace the single-line comment `///wrapped,automatic: JSON validator ;]` with a proper Rustdoc on ValidatedJson that briefly describes its purpose and usage (e.g., "A generic wrapper type representing JSON data that has been validated against expected schema or rules" and note any invariants or intended consumers). Update the doc to mention the generic parameter T and what guarantees ValidatedJson<T> provides so readers and rustdoc-generated docs understand its role.

`src/middleware/mod.rs` (1)
11-11: 💤 Low value: Remove stray "Checkpoint" comment.
This appears to be a leftover development marker that should be removed before merge.
```diff
-// Checkpoint
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/middleware/mod.rs` at line 11, Remove the stray development marker by deleting the line that contains the comment "// Checkpoint" from src/middleware/mod.rs (the standalone comment string "Checkpoint") so the module has no leftover dev comment; simply remove that comment line and keep the rest of the file untouched.

`src/middleware/captcha.rs` (1)
17-17: 💤 Low value: Remove stray "Checkpoint" comment.
Same development artifact as in `mod.rs` that should be cleaned up.

```diff
-// Checkpoint
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/middleware/captcha.rs` at line 17, Remove the stray inline comment containing the exact text "Checkpoint" from the captcha module source (the comment in src/middleware/captcha.rs) — simply delete that standalone comment line; also remove the matching stray "Checkpoint" comment in mod.rs if present to keep modules clean and consistent.

`src/middleware/cors.rs` (2)
29-29: ⚡ Quick win: Add Rustdoc comment for public function.
Per coding guidelines, all public-facing methods should have `///` comments explaining intent and behavior.

```diff
+/// Builds a CORS layer based on the application configuration.
+///
+/// Supports exact origins, wildcard patterns with prefix/suffix matching,
+/// and disabling CORS entirely via `"none"` or empty config.
 pub fn build_cors(config: &AppConfig) -> CorsLayer {
```

As per coding guidelines: "All public-facing methods and services must have `///` (Rustdoc) comments explaining intent and behavior."
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/middleware/cors.rs` at line 29, Add a Rustdoc comment for the public function build_cors describing its purpose and behavior: what CORS policy it constructs, any configurable behavior driven by the AppConfig parameter, and any important defaults/side-effects (e.g., allowed origins, methods, headers, credentials). Place the triple-slash comment immediately above the pub fn build_cors(config: &AppConfig) -> CorsLayer declaration so tooling and docs pick it up.
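The suggested doc mentions wildcard patterns with prefix/suffix matching. As a hedged illustration of what such matching could look like, here is a std-only sketch; the function name and the single-`*` rule are assumptions for the example, not the actual `build_cors` logic.

```rust
/// Matches an origin against a pattern containing at most one `*`,
/// e.g. `https://*.example.com` or `https://app-*`. Without a `*`,
/// the pattern must match the origin exactly.
fn origin_matches(pattern: &str, origin: &str) -> bool {
    match pattern.split_once('*') {
        None => pattern == origin, // exact origin, no wildcard
        Some((prefix, suffix)) => {
            // Require the origin to be long enough that prefix and
            // suffix do not overlap, then check both ends.
            origin.len() >= prefix.len() + suffix.len()
                && origin.starts_with(prefix)
                && origin.ends_with(suffix)
        }
    }
}

fn main() {
    assert!(origin_matches("https://*.example.com", "https://app.example.com"));
    assert!(!origin_matches("https://*.example.com", "https://example.org"));
    assert!(origin_matches("https://app.example.com", "https://app.example.com"));
}
```

The length guard matters: without it, `https://*s` would match the origin `https://s` even though the prefix and suffix would have to overlap.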
21-27: 💤 Low value: Remove stray "Checkpoint" comment; compile-time assertion is overly trivial.
The `// Checkpoint` comment should be removed. Additionally, the assertion `DEFAULT_METHODS.len() == 2` is trivially evident from the declaration two lines above. Consider removing it or replacing it with a more meaningful invariant (e.g., ensuring specific methods are included).

```diff
-// Checkpoint
-const _: () = {
-    assert!(
-        DEFAULT_METHODS.len() == 2,
-        "CORS should only allow 2 methods"
-    );
-};
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/middleware/cors.rs` around lines 21 - 27, Remove the stray "// Checkpoint" comment and simplify the compile-time check around DEFAULT_METHODS: either delete the trivial assertion `assert!(DEFAULT_METHODS.len() == 2, ...)` or replace it with a meaningful invariant that checks for specific methods (for example ensure DEFAULT_METHODS contains "GET" and "POST"); update the const block that references DEFAULT_METHODS so it no longer contains the trivial length assertion and instead enforces the concrete method membership if you need a compile-time guarantee.
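If a compile-time membership check is wanted instead of the length assertion, something along these lines works on stable Rust. This is a sketch: `DEFAULT_METHODS` here is a local stand-in for the real constant, and `==` on `&str` is not const-evaluable, so the comparison loops over bytes.

```rust
const DEFAULT_METHODS: &[&str] = &["GET", "POST"];

// `const fn` string equality: compare the underlying bytes in a loop.
const fn str_eq(a: &str, b: &str) -> bool {
    let a = a.as_bytes();
    let b = b.as_bytes();
    if a.len() != b.len() {
        return false;
    }
    let mut i = 0;
    while i < a.len() {
        if a[i] != b[i] {
            return false;
        }
        i += 1;
    }
    true
}

const fn contains(list: &[&str], needle: &str) -> bool {
    let mut i = 0;
    while i < list.len() {
        if str_eq(list[i], needle) {
            return true;
        }
        i += 1;
    }
    false
}

// Fails the build if the allowed-method list ever drops GET or POST.
const _: () = {
    assert!(contains(DEFAULT_METHODS, "GET"), "CORS must allow GET");
    assert!(contains(DEFAULT_METHODS, "POST"), "CORS must allow POST");
};

fn main() {
    assert!(contains(DEFAULT_METHODS, "GET"));
    assert!(!contains(DEFAULT_METHODS, "DELETE"));
}
```

Unlike the length check, this invariant survives reordering or extending the list while still guarding the methods that matter.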
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@src/db/mod.rs`:
- Around line 1-4: Replace the placeholder Rustdoc comments for the public
modules by adding concise, intent-focused doc comments for the exported modules
`postgres` and `redis`: describe each module's responsibility (e.g., `postgres`
provides connection pooling, query helpers, and migrations for the app's
relational DB; `redis` provides cache/TTL helpers and pub/sub utilities),
mention important guarantees or usage notes (thread-safety, expected
configuration keys, panic/error behavior), and include brief examples or
pointers to key public types/functions if useful so generated docs are
actionable and follow the project's doc guideline.
---
Duplicate comments:
In `@GEMINI.md`:
- Line 3: Replace the incorrect project name string "naduns codebase" with the
correct "Nadzu's codebase" in the GEMINI.md sentence so the document reads
consistently; locate the exact sentence in GEMINI.md (the line containing "This
document serves as the foundational mandate for all engineering work on naduns
codebase.") and update it to use "Nadzu's codebase".
- Line 67: Rewrite the broken Rustdoc guideline into two clear sentences: state
that all public-facing methods and services must have /// (Rustdoc) comments
that explain their intent and observable behavior, and add a second sentence
instructing contributors not to over-document or make assumptions about unseen
internal implementation details; update the existing line containing "All
public-facing methods and services must have `///` (Rustdoc) comments explaining
intent and behavior.do not over document, make guesses about the unseen code."
to this clearer phrasing.
In `@infra/digitalocean/accounts/naduns-team/variables.tf`:
- Around line 202-206: The variable declaration for CADDY_CUSTOM_BROWSE_FILE_URL
currently exposes a presigned URL as a plain string; mark the variable as
sensitive by adding sensitive = true to the variable
"CADDY_CUSTOM_BROWSE_FILE_URL" block so Terraform won’t show the tokenized query
params in plan/apply output, and leave the description and type as-is to
preserve intent.
---
Nitpick comments:
In `@GEMINI.md`:
- Line 64: The doc line is ambiguous about parallel builds; update the GEMINI.md
wording at the referenced line to explicitly show the Makefile invocation using
the c target and nproc parallelism (e.g., use make c -j "$(nproc)") so readers
know to run the Makefile target "c" with -j set to the system nproc for parallel
builds/tests.
In `@src/config.rs`:
- Around line 167-168: Fix the grammar and Rustdoc style of the existing comment
describing API key validation: replace the two short lines with a single,
properly formatted Rustdoc sentence that clearly states the purpose (e.g., that
this function/middleware validates an API key using a constant-time equality
check) and follows Rustdoc conventions (start with a capitalized sentence,
include punctuation, and mention usage such as "used by the API key validation
middleware"). Apply this change to the doc comment immediately above the API key
validation implementation so the comment reads as a complete, grammatically
correct Rustdoc sentence.
In `@src/extractors/json_validator.rs`:
- Around line 12-14: The doc comment for the struct ValidatedJson is unclear and
non-idiomatic; replace the single-line comment `///wrapped,automatic: JSON
validator ;]` with a proper Rustdoc on ValidatedJson that briefly describes its
purpose and usage (e.g., "A generic wrapper type representing JSON data that has
been validated against expected schema or rules" and note any invariants or
intended consumers). Update the doc to mention the generic parameter T and what
guarantees ValidatedJson<T> provides so readers and rustdoc-generated docs
understand its role.
In `@src/middleware/captcha.rs`:
- Line 17: Remove the stray inline comment containing the exact text
"Checkpoint" from the captcha module source (the comment in
src/middleware/captcha.rs) — simply delete that standalone comment line; also
remove the matching stray "Checkpoint" comment in mod.rs if present to keep
modules clean and consistent.
In `@src/middleware/cors.rs`:
- Line 29: Add a Rustdoc comment for the public function build_cors describing
its purpose and behavior: what CORS policy it constructs, any configurable
behavior driven by the AppConfig parameter, and any important
defaults/side-effects (e.g., allowed origins, methods, headers, credentials).
Place the triple-slash comment immediately above the pub fn build_cors(config:
&AppConfig) -> CorsLayer declaration so tooling and docs pick it up.
- Around line 21-27: Remove the stray "// Checkpoint" comment and simplify the
compile-time check around DEFAULT_METHODS: either delete the trivial assertion
`assert!(DEFAULT_METHODS.len() == 2, ...)` or replace it with a meaningful
invariant that checks for specific methods (for example ensure DEFAULT_METHODS
contains "GET" and "POST"); update the const block that references
DEFAULT_METHODS so it no longer contains the trivial length assertion and
instead enforces the concrete method membership if you need a compile-time
guarantee.
In `@src/middleware/mod.rs`:
- Line 11: Remove the stray development marker by deleting the line that
contains the comment "// Checkpoint" from src/middleware/mod.rs (the standalone
comment string "Checkpoint") so the module has no leftover dev comment; simply
remove that comment line and keep the rest of the file untouched.
In `@src/middleware/rate_limit.rs`:
- Around line 58-61: Update the doc comment for the RateLimiters constructor
(the impl/constructor for RateLimiters) to remove the line "Panics if rate limit
constants above are invalid" and instead note that compile-time assertions
validate constants so the method does not panic at runtime; keep the rest of the
summary ("Creates a new instance of `RateLimiters`") and any must_use attribute
intact.
In `@src/services/ytdlp/mod.rs`:
- Around line 392-394: The function cleanup_failed_files still declares unused
parameters _id and _is_base_dir; remove these parameters from its signature and
from all call sites to simplify the API (update async fn
cleanup_failed_files(temp_dir: &str) { ... } and adjust callers that pass id and
is_base_dir to only pass temp_dir). Ensure you update any imports/uses that
reference the old signature so compilation succeeds and run tests to verify no
remaining references to _id or _is_base_dir.
- Around line 176-179: Add a Rustdoc comment to the public method subscribe()
explaining that it returns a broadcast::Receiver<String> from progress_tx which
emits job IDs (as Strings) whenever a ytdlp job reports progress, and include
guidance for consumers to handle missed messages (e.g., using recv or try_recv
and that they may miss earlier broadcasts) and that the receiver should be
cloned for multiple listeners; update the doc on the subscribe function to
reference progress_tx and the type broadcast::Receiver<String> so callers know
what to expect and how to use the channel.
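As a side note on the `subscribe` guidance above, the filter-and-refetch consumption pattern can be sketched with std alone. This sketch substitutes `std::sync::mpsc` for the real `tokio::sync::broadcast` channel (so lagging/missed-message semantics are not modeled), and `fetch_job_snapshot` is a hypothetical stand-in for `get_job`.

```rust
use std::sync::mpsc;

// Hypothetical stand-in for fetching the full job state by ID;
// in the real service this would be a call like `YtdlpManager::get_job`.
fn fetch_job_snapshot(job_id: &str) -> String {
    format!("snapshot-for-{job_id}")
}

// Drain job-ID notifications, ignoring jobs we are not watching,
// and refetch the full snapshot for each relevant notification.
fn watch_job(rx: mpsc::Receiver<String>, wanted: &str) -> Vec<String> {
    let mut snapshots = Vec::new();
    for id in rx {
        if id == wanted {
            snapshots.push(fetch_job_snapshot(&id));
        }
    }
    snapshots
}

fn main() {
    let (tx, rx) = mpsc::channel();
    for id in ["job-a", "job-b", "job-a"] {
        tx.send(id.to_string()).unwrap();
    }
    drop(tx); // close the channel so the receive loop ends
    let seen = watch_job(rx, "job-a");
    assert_eq!(seen.len(), 2);
}
```

The key point the suggested Rustdoc makes is that the channel carries only job IDs; the notification is a trigger to refetch state, not the state itself.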
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro Plus
Run ID: 59c7b344-bf5f-4ff9-98b9-89173d531ab0
⛔ Files ignored due to path filters (2)
- `Cargo.lock` is excluded by `!**/*.lock`
- `docs/images/Themed-Architecture-Diagram.svg` is excluded by `!**/*.svg`
📒 Files selected for processing (38)
- .github/PULL_REQUEST_TEMPLATE.md
- .gitignore
- Cargo.toml
- GEMINI.md
- README.md
- docs/images/Themed-Architecture-Diagram-code.md
- infra/digitalocean/accounts/naduns-team/variables.tf
- infra/digitalocean/components/locals.tf
- src/config.rs
- src/controllers/api/v1/contributions_controller.rs
- src/controllers/api/v1/ytdlp_controller.rs
- src/controllers/health_controller.rs
- src/controllers/root_controller.rs
- src/controllers/validation_controller.rs
- src/db/mod.rs
- src/error.rs
- src/extractors/json_validator.rs
- src/extractors/mod.rs
- src/lib.rs
- src/middleware/api_key.rs
- src/middleware/auth.rs
- src/middleware/captcha.rs
- src/middleware/cors.rs
- src/middleware/mod.rs
- src/middleware/rate_limit.rs
- src/models/health.rs
- src/models/validation.rs
- src/models/ytdlp.rs
- src/routes/api/v1/ytdlp_routes.rs
- src/services/ytdlp/mod.rs
- src/state.rs
- tests/api/auth_tests.rs
- tests/api/captcha_tests.rs
- tests/api/common.rs
- tests/api/cors_tests.rs
- tests/api/rate_limit_tests.rs
- tests/api/ytdlp_tests.rs
- tests/layer_unit_tests.rs
💤 Files with no reviewable changes (2)
- src/middleware/auth.rs
- docs/images/Themed-Architecture-Diagram-code.md
```rust
/// unused
pub mod postgres;
/// unused
pub mod redis;
```
Replace placeholder Rustdoc with meaningful module descriptions
`/// unused` is not actionable documentation for exported modules. Use short intent-based docs for each module so generated docs remain useful.
Suggested diff

```diff
-/// unused
+/// PostgreSQL persistence layer and database integration utilities.
 pub mod postgres;
-/// unused
+/// Redis caching and ephemeral state integration utilities.
 pub mod redis;
```

As per coding guidelines, "All public-facing methods and services must have `///` (Rustdoc) comments explaining intent and behavior."
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
/// PostgreSQL persistence layer and database integration utilities.
pub mod postgres;
/// Redis caching and ephemeral state integration utilities.
pub mod redis;
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/db/mod.rs` around lines 1 - 4, Replace the placeholder Rustdoc comments
for the public modules by adding concise, intent-focused doc comments for the
exported modules `postgres` and `redis`: describe each module's responsibility
(e.g., `postgres` provides connection pooling, query helpers, and migrations for
the app's relational DB; `redis` provides cache/TTL helpers and pub/sub
utilities), mention important guarantees or usage notes (thread-safety, expected
configuration keys, panic/error behavior), and include brief examples or
pointers to key public types/functions if useful so generated docs are
actionable and follow the project's doc guideline.
Description
This updates the yt-dlp download flow to send an immediate SSE snapshot, stage downloads in a per-job temp directory, and move completed files into the final output path. It also simplifies aria2 progress parsing and refreshes the README and architecture documentation to match the current system design.
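The stage-then-move flow described above can be sketched with std alone. The paths, file names, and helper below are illustrative, not the service's actual layout; note that `fs::rename` is atomic only when source and destination live on the same filesystem, which is why staging next to the final output directory matters.

```rust
use std::fs;
use std::io::Write;
use std::path::{Path, PathBuf};

/// Writes `data` into a per-job temp directory, then moves the finished
/// file into a final downloads directory so readers never observe a
/// partially written file.
fn stage_and_publish(base: &Path, job_id: &str, data: &[u8]) -> std::io::Result<PathBuf> {
    let temp_dir = base.join(format!("tmp-{job_id}"));
    fs::create_dir_all(&temp_dir)?;
    let staged = temp_dir.join("output.bin");
    fs::File::create(&staged)?.write_all(data)?;

    let final_dir = base.join("downloads");
    fs::create_dir_all(&final_dir)?;
    let final_path = final_dir.join(format!("{job_id}.bin"));
    fs::rename(&staged, &final_path)?; // atomic on the same filesystem
    fs::remove_dir_all(&temp_dir)?;    // clean up the per-job staging dir
    Ok(final_path)
}

fn main() -> std::io::Result<()> {
    let base = std::env::temp_dir().join("ytdlp-demo");
    let out = stage_and_publish(&base, "job-42", b"hello")?;
    assert_eq!(fs::read(&out)?, b"hello");
    fs::remove_dir_all(&base)?;
    Ok(())
}
```

On failure the per-job temp directory can simply be removed wholesale, which is the cleanup simplification the review comments above discuss.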
Key additions:
Types of changes
Checklist
Summary by CodeRabbit
Documentation
Improvements
Bug Fixes
Chores