[feat(backend)] Add GitHub contributions API endpoint #11

Merged
nxdun merged 12 commits into main from nadun/b/github-contribution-graph on Apr 25, 2026

Conversation

nxdun (Owner) commented Apr 24, 2026

Description

Adds a new API endpoint, GET /api/v1/contributions, supporting default and specific-user queries. It fetches GitHub contribution data via GraphQL with caching (a query sketch follows the list below), transforms it to a standardized response format, and includes Postman tests. Also configures GHCR_PAT passthrough as GITHUB_PAT for backend authentication.

Key additions:

  • New ContributionsService with GitHub GraphQL integration and 24h in-memory caching
  • API controller and routes for /api/v1/contributions?username=
  • Full ContributionsResponse model matching GitHub calendar schema
  • Env config support for GITHUB_PAT and GITHUB_USERNAME
  • Cloud-init template updated to pass GHCR_PAT as GITHUB_PAT
  • Postman collection tests for default/specific user endpoints
  • Test state and config updates for contributions service
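
As context for the GraphQL integration above, here is a sketch of the kind of query document such a service sends. The contributionsCollection and contributionCalendar fields are real names from GitHub's public GraphQL schema, but the constant name and exact query text are illustrative; the actual query in src/services/contributions.rs is not shown on this page and may differ.

```rust
// Hypothetical sketch; field names come from GitHub's public GraphQL schema,
// but the constant name CONTRIBUTIONS_QUERY is not taken from the PR.
const CONTRIBUTIONS_QUERY: &str = r#"
query($username: String!) {
  user(login: $username) {
    contributionsCollection {
      contributionCalendar {
        totalContributions
        weeks {
          contributionDays {
            date
            contributionCount
            color
            weekday
          }
        }
      }
    }
  }
}
"#;
```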

Types of changes

  • Feature

Checklist

  • Updated ChangeLog
  • Tests pass locally
  • API documentation updated
  • Env vars documented

Summary by CodeRabbit

  • New Features

    • Added a contributions API (GET /api/v1/contributions) returning GitHub-style calendar data (cells, months, legend, summary, metadata); only the default username is allowed.
    • App now reads GitHub token, username, and GraphQL endpoint from environment and exposes contributions via app state.
  • Tests

    • Added integration tests and Postman requests covering cached responses, live/mocked fetches, and username validation.
  • Chores

    • Infra: cloud-init and variable comment updates; dependency/lint manifest tweaks.

nxdun self-assigned this Apr 24, 2026
coderabbitai Bot commented Apr 24, 2026

📝 Walkthrough

Walkthrough

Adds a GitHub contributions feature: new ContributionsService with GraphQL fetching and in-memory caching, contributions models, controller and routes at GET /api/v1/contributions, app wiring and config env vars, Postman requests, and tests.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Infrastructure: infra/common/cloud-init.template, infra/digitalocean/accounts/naduns-team/variables.tf | Inject GITHUB_PAT into /opt/app/.env and add an inline comment for GHCR_PAT. |
| Configuration & Manifest: src/config.rs, Cargo.toml | Add github_pat, github_username, github_graphql_url env wiring; adjust dependencies/dev-deps and lints in Cargo.toml. |
| Application Core / State: src/app.rs, src/state.rs | Instantiate ContributionsService at startup and store Arc<ContributionsService> on AppState. |
| Service Layer: src/services/contributions.rs, src/services/mod.rs | New ContributionsService: bearer-auth GraphQL fetch, transform calendar to response models, in-memory TTL cache with stale-fallback, background cleanup, and seeding helper. |
| Models: src/models/contributions_model.rs, src/models/mod.rs | New serde structs for contributions payload (response, range, summary, legend, months, cells, meta). |
| Controller: src/controllers/api/v1/contributions_controller.rs, src/controllers/api/v1/mod.rs | New ContributionsQuery and get_contributions handler; exported in v1 module. |
| Routing: src/routes/api/v1/contributions_routes.rs, src/routes/api/v1/mod.rs | New router factory mounted at /api/v1/contributions; v1 router composition updated. |
| Postman Tests: postman/collections/Nadzu API/Contributions (Default User).request.yaml, postman/collections/Nadzu API/Contributions (Specific User).request.yaml | Add two requests validating contributions endpoint responses for default and specific-user queries. |
| Rust Tests / Test Helpers: tests/api/common.rs, tests/layer_unit_tests.rs, tests/api/contributions_tests.rs, tests/api_tests.rs | Test helpers updated for GitHub config and ContributionsService; add async tests covering cached responses, wiremock-backed fetch, and validation for non-default username. |
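
For orientation, here is a hedged sketch of the serde shapes the Models cohort describes. ContributionMonth, ContributionCell, and the weekIndex rename are taken from review excerpts later in this thread; the remaining fields and types are assumptions inferred from this summary, and the real definitions in src/models/contributions_model.rs may differ.

```rust
// Illustrative only: shapes inferred from the review, not copied from the PR.
use serde::Serialize;

#[derive(Serialize)]
pub struct ContributionsResponse {
    pub username: String,
    pub range: ContributionRange,
    pub summary: ContributionSummary,
    pub legend: Vec<String>, // assumed: color-scale entries
    pub months: Vec<ContributionMonth>,
    pub cells: Vec<ContributionCell>,
    pub meta: ResponseMeta,
}

#[derive(Serialize)]
pub struct ContributionRange {
    pub from: String, // "YYYY-MM-DD"
    pub to: String,
}

#[derive(Serialize)]
pub struct ContributionSummary {
    pub total: u32, // assumed: total contributions in range
}

#[derive(Serialize)]
pub struct ContributionMonth {
    pub label: String,
    #[serde(rename = "weekIndex")]
    pub week_index: u16,
}

#[derive(Serialize)]
pub struct ContributionCell {
    pub date: String,
    #[serde(rename = "weekIndex")]
    pub week_index: u16,
    pub count: u32, // assumed field name
}

#[derive(Serialize)]
pub struct ResponseMeta {
    pub cached: bool,
}
```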

Sequence Diagram

sequenceDiagram
    participant Client as Client
    participant API as API Handler
    participant Service as ContributionsService
    participant Cache as In-Memory Cache
    participant GitHub as GitHub GraphQL API

    Client->>API: GET /api/v1/contributions?username=...
    API->>Service: get_contributions(username_opt)
    Service->>Cache: lookup(username)
    alt cache hit and fresh
        Cache-->>Service: cached ContributionsResponse (meta.cached = true)
    else cache miss or expired
        Service->>GitHub: POST GraphQL (Bearer token)
        GitHub-->>Service: contribution calendar data
        Service->>Service: transform -> ContributionsResponse
        Service->>Cache: store(username, response, ttl)
    end
    Service-->>API: ContributionsResponse
    API-->>Client: HTTP 200 JSON

Estimated Code Review Effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

🐇 I nibbled GraphQL carrots in a row,
cached their crunch so responses flow,
weeks and days in tidy tune,
I hum and hop beneath the moon,
"Contributions bloom!" sings the coding bunny. 🌿

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 34.78%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (4 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |
| Title Check | ✅ Passed | The title clearly and concisely summarizes the main change: adding a GitHub contributions API endpoint, the primary feature introduced across all modified files. |
| Linked Issues Check | ✅ Passed | Check skipped because no linked issues were found for this pull request. |
| Out of Scope Changes Check | ✅ Passed | Check skipped because no linked issues were found for this pull request. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

nxdun (Owner, Author) commented Apr 24, 2026

@coderabbitai help

coderabbitai Bot commented Apr 24, 2026

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.

CodeRabbit commands

These commands are invoked using PR/Issue comments.

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai evaluate custom pre-merge check --instructions <custom-checks-instructions> --name <custom-checks-title> [--mode <error|warning>] to validate the custom pre-merge checks instructions. Defaults to error when --mode is omitted.
  • @coderabbitai ignore pre-merge checks to override pre-merge checks and get an approval on PR.
  • @coderabbitai run pre-merge checks to run pre-merge checks on the pull request.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai generate unit tests to generate unit tests for this PR.
  • @coderabbitai resolve merge conflict to automatically resolve merge conflicts.
  • @coderabbitai autofix to automatically fix issues identified in unresolved review comments.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai generate configuration to create a PR that adds the current resolved configuration as .coderabbit.yaml (or show it if already present).
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore or @coderabbit ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit configuration file (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • You can also validate your configuration using the online YAML validator.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Status, support, documentation and community

  • Visit our status page to check the current availability of CodeRabbit.
  • Create a ticket on our support page for assistance with any issues or questions.
  • Visit our documentation site for detailed information on how to use CodeRabbit.
  • Join our Discord community to connect with other users and get help from the community.
  • Follow us on X/Twitter for updates and announcements.

coderabbitai (Bot) left a comment

Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
infra/common/cloud-init.template (1)

126-137: ⚠️ Potential issue | 🔴 Critical

Fix GitHub env wiring in .env (invalid formatting, missing variable, and wrong secret source).

Line 137 has invalid .env formatting (space before =), GITHUB_USERNAME is missing from the env block, and GITHUB_PAT is sourced from GHCR_PAT instead of GITHUB_PAT, creating a least-privilege violation.

Proposed fix
       MASTER_API_KEY=${MASTER_API_KEY}
-      GITHUB_PAT =${GHCR_PAT}
+      GITHUB_PAT=${GITHUB_PAT}
+      GITHUB_USERNAME=${GITHUB_USERNAME}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@infra/common/cloud-init.template` around lines 126 - 137, Fix the .env
wiring: correct the invalid assignment syntax by removing the space before the
"=" in the GITHUB_PAT line, add the missing GITHUB_USERNAME environment variable
to the env block, and source GITHUB_PAT from the GITHUB_PAT variable (not
GHCR_PAT) so the secret uses the proper, least-privilege credential; update the
block containing CAPTCHA_SECRET_KEY, APP_HOST, HOST_PORT, ..., ALLOWED_ORIGINS,
WARP_LICENSE_KEY, MASTER_API_KEY, GITHUB_USERNAME, and GITHUB_PAT to reflect
these changes.
🧹 Nitpick comments (6)
postman/collections/Nadzu API/Contributions (Specific User).request.yaml (1)

3-3: Parameterize the specific username instead of hardcoding nxdun.

This request becomes reusable across environments if the username comes from a Postman variable.

♻️ Suggested update
-url: "{{base_url}}/api/v1/contributions?username=nxdun"
+url: "{{base_url}}/api/v1/contributions?username={{github_username}}"
@@
-          pm.expect(jsonData.username).to.eql('nxdun');
+          pm.expect(jsonData.username).to.eql(pm.variables.get('github_username'));

Also applies to: 14-14

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@postman/collections/Nadzu` API/Contributions (Specific User).request.yaml at
line 3, The request URL currently hardcodes the username "nxdun"; replace it
with a Postman variable (e.g., change the query to ?username={{username}}) so
the request is reusable across environments, and add/update the corresponding
Postman variable "username" in the collection/environment; apply the same change
to the other occurrence referenced (lines 14-14) to keep both requests
consistent.
infra/digitalocean/accounts/naduns-team/variables.tf (1)

76-76: Clarify the GHCR_PAT operator note.

This comment is hard to parse and has typos; it may confuse secret handling during ops/rotation.

✏️ Suggested wording
-  //update: I use this to process contribtion grapth in backend server. to avoid confusion it passed to backend as GITHUB_PAT. im using the same 
+  // Reused for backend GitHub contributions API as GITHUB_PAT.
+  // Keep naming explicit in runtime env to avoid operator confusion.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@infra/digitalocean/accounts/naduns-team/variables.tf` at line 76, The inline
comment is unclear and contains typos; replace it with a concise, unambiguous
note clarifying secret usage: state that the variable is used by the backend to
process the GitHub contribution graph and is injected into the backend as
GITHUB_PAT, and explicitly distinguish it from GHCR_PAT (the separate GitHub
Container Registry token) so operators understand these are different secrets
and should be rotated/managed separately; update the comment text near the
existing note in variables.tf accordingly.
src/services/contributions.rs (4)

74-89: Good security practice with Debug impl, but cache can grow unbounded.

The custom Debug implementation correctly omits the PAT from logs. However, the DashMap cache has no eviction mechanism: the TTL check only prevents stale data from being served, while expired entries stay in memory indefinitely because nothing ever removes them.

For a low-traffic service this is acceptable, but consider periodic cleanup or a bounded cache (e.g., moka or quick_cache) if the service will handle many distinct usernames over time.
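
A minimal sketch of the periodic-cleanup option, assuming the cache stays a plain DashMap; the entry type, function name, and 10-minute sweep interval are illustrative (a later Clippy note in this thread suggests the merged code settled on a similar interval):

```rust
// Sweep expired entries out of the shared DashMap on a fixed interval.
use std::sync::Arc;
use std::time::{Duration, SystemTime};

use dashmap::DashMap;

// Placeholder value type: (cached payload, absolute expiry time).
type CacheEntry = (String, SystemTime);

fn spawn_cache_cleanup(cache: Arc<DashMap<String, CacheEntry>>) {
    tokio::spawn(async move {
        let mut interval = tokio::time::interval(Duration::from_secs(10 * 60));
        loop {
            interval.tick().await;
            let now = SystemTime::now();
            // Drop every entry whose TTL has already elapsed.
            cache.retain(|_key, (_payload, expires_at)| *expires_at > now);
        }
    });
}
```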

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/contributions.rs` around lines 74 - 89, ContributionsService
currently stores entries in the DashMap cache without eviction, so expired
entries never get removed; add an eviction strategy such as periodic cleanup or
replacing DashMap with a bounded/TTL cache crate. Specifically, either implement
a background task (spawned where ContributionsService is constructed) that
iterates over self.cache and removes entries whose SystemTime expiration has
passed, or swap the cache field for a TTL/bounded cache (e.g., moka or
quick_cache) and update usages in methods that read/write the cache (references:
ContributionsService struct, the cache field, and any methods that check TTL
when returning ContributionsResponse). Ensure the chosen approach still avoids
logging the pat and preserves Debug impl behavior.

239-241: Silent fallback on malformed dates may hide upstream issues.

Using unwrap_or((1970, 1, 1)) silently accepts malformed dates from the GitHub API. While unlikely, this could mask upstream API changes or data corruption without any indication in logs.

Consider logging a warning when date parsing fails:

🔍 Proposed improvement
-                let (y_day, m_day, _) = parse_ymd(&day.date).unwrap_or((1970, 1, 1));
+                let (y_day, m_day, _) = parse_ymd(&day.date).unwrap_or_else(|| {
+                    tracing::warn!("Failed to parse date '{}', using fallback", day.date);
+                    (1970, 1, 1)
+                });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/contributions.rs` around lines 239 - 241, The code currently
uses parse_ymd(&day.date).unwrap_or((1970, 1, 1)) which silently hides parse
failures; change it to explicitly handle the Result from parse_ymd (e.g., match
or if let Err) so that on Err you log a warning including the problematic
day.date and context (e.g., which contribution record) and still provide a safe
fallback, then destructure the successful tuple into y_day and m_day; refer to
parse_ymd, day.date, and the y_day/m_day variables to locate and update the
handling.

105-119: Consider normalizing username for cache key.

GitHub usernames are case-insensitive, but the cache key uses the username as-is. Requests for "OctoCat" and "octocat" would create separate cache entries and trigger redundant API calls.

🔧 Proposed fix to normalize cache key
     pub async fn get_contributions(
         &self,
         username: &str,
     ) -> Result<ContributionsResponse, AppError> {
         let now = SystemTime::now();
-        let cache_key = username.to_string();
+        let cache_key = username.to_lowercase();

         if let Some(entry) = self.cache.get(&cache_key) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/contributions.rs` around lines 105 - 119, The cache key in
get_contributions is using the raw username, causing case variants to miss the
same cache entry; normalize the username (e.g., call to_lowercase() or
to_ascii_lowercase()) before creating cache_key and use that normalized key for
both cache lookups and when inserting into the cache so "OctoCat" and "octocat"
map to the same entry; update references to cache_key in get_contributions and
any cache insert logic to use the normalized value.

195-201: Let-chain syntax is fully supported with edition 2024.

The project uses edition = "2024", which requires Rust 1.85+, and let chains stabilized in Rust 1.88 on the 2024 edition, so the if let ... && ... syntax is compatible on current toolchains and requires no changes.

Consider logging the actual GraphQL errors for better debugging:

Optional improvement: Enhanced error logging
         if let Some(errs) = gh_resp.errors
             && !errs.is_empty()
         {
             return Err(AppError::Internal(anyhow::anyhow!(
-                "GitHub GraphQL returned errors"
+                "GitHub GraphQL returned errors: {:?}",
+                errs
             )));
         }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/contributions.rs` around lines 195 - 201, The current check for
GraphQL errors uses gh_resp.errors and returns a generic AppError::Internal;
update this to capture and include the actual GraphQL error details in the
failure path (either by logging them with the same logger used in this module or
by embedding the error text into the AppError::Internal message) so debugging
shows the returned errors; specifically modify the branch that inspects
gh_resp.errors to format the errs (from gh_resp.errors) into a string and pass
that into the AppError::Internal (or call the module logger) instead of the
generic "GitHub GraphQL returned errors".
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/app.rs`:
- Around line 56-64: Fail fast on missing GITHUB_PAT: replace the use of
unwrap_or_default() for config.github_pat when constructing ContributionsService
(ContributionsService::new) so the app validates at startup instead of passing
an empty token to the service; locate the ContributionsService construction in
app.rs and either propagate a configuration error from main (return Result and
error out) or call expect()/panic with a clear message like "GITHUB_PAT is
required" before creating ContributionsService, keeping the github_username
fallback unchanged.

In `@src/controllers/api/v1/contributions_controller.rs`:
- Around line 18-23: The username fallback currently uses
query.username.unwrap_or_else(...) which treats "?username=" or whitespace as a
valid empty string; update the logic around query.username in
contributions_controller.rs to trim whitespace and treat empty/whitespace-only
strings as None before falling back to
state.contributions_service.get_default_username().to_string() — e.g.,
map/and_then or similar on query.username to call .trim(), reject empty results,
and only then unwrap_or_else to the default, referencing the variable username
and the get_default_username() method.

In `@src/models/contributions_model.rs`:
- Around line 43-44: The serialized field week_index in the Contributions model
is currently typed as usize which is platform-dependent; change its type to u16
in the struct definition (the #[serde(rename = "weekIndex")] pub week_index
field in src/models/contributions_model.rs) and update all assignments that
build this struct to cast the source index to u16 (e.g. replace uses of
week_index: week_idx with week_index: week_idx as u16) – specifically update the
assignment sites in src/services/contributions.rs where week_index is set (the
three places noted in the review) to use week_idx as u16 so the public JSON
contract is stable.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro Plus

Run ID: dc48171b-1620-441c-9613-3a5c4f1051bc

📥 Commits

Reviewing files that changed from the base of the PR and between 3d6dc0e and 6254fd8.

📒 Files selected for processing (17)
  • infra/common/cloud-init.template
  • infra/digitalocean/accounts/naduns-team/variables.tf
  • postman/collections/Nadzu API/Contributions (Default User).request.yaml
  • postman/collections/Nadzu API/Contributions (Specific User).request.yaml
  • src/app.rs
  • src/config.rs
  • src/controllers/api/v1/contributions_controller.rs
  • src/controllers/api/v1/mod.rs
  • src/models/contributions_model.rs
  • src/models/mod.rs
  • src/routes/api/v1/contributions_routes.rs
  • src/routes/api/v1/mod.rs
  • src/services/contributions.rs
  • src/services/mod.rs
  • src/state.rs
  • tests/api/common.rs
  • tests/layer_unit_tests.rs

nxdun (Owner, Author) commented Apr 24, 2026

@coderabbitai full review

coderabbitai Bot commented Apr 24, 2026

✅ Actions performed

Full review triggered.

coderabbitai (Bot) left a comment

Actionable comments posted: 4

♻️ Duplicate comments (3)
src/models/contributions_model.rs (1)

43-44: ⚠️ Potential issue | 🟠 Major

Use fixed-width integers for serialized weekIndex fields.

Lines 43-44 and 50-51 use usize in a public JSON schema. usize is platform-dependent; use u16 (or u32) for stable cross-platform contracts. This was already flagged earlier and is still unresolved.

Proposed model fix
 pub struct ContributionMonth {
     pub label: String,
     #[serde(rename = "weekIndex")]
-    pub week_index: usize,
+    pub week_index: u16,
 }
 ...
 pub struct ContributionCell {
     pub date: String,
     #[serde(rename = "weekIndex")]
-    pub week_index: usize,
+    pub week_index: u16,

Also update assignment sites in src/services/contributions.rs to cast: week_index: week_idx as u16.

#!/bin/bash
set -euo pipefail

echo "1) Check model field types:"
rg -n 'pub\s+week_index:\s*usize' src/models/contributions_model.rs || true

echo
echo "2) Check service assignment sites still using uncast week_idx:"
rg -n 'week_index:\s*week_idx\b' src/services/contributions.rs || true

echo
echo "Expected after fix:"
echo "- No matches in step 1."
echo "- No uncast matches in step 2 (should use week_idx as u16)."

Also applies to: 50-51

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/models/contributions_model.rs` around lines 43 - 44, The public
serialized model uses platform-dependent usize for week_index (in the
contributions model struct) which breaks cross-platform JSON contracts; change
the field type(s) pub week_index: usize to a fixed-width integer (e.g., pub
week_index: u16) in src/models/contributions_model.rs and update all assignment
sites (e.g., in functions that set week_index in src/services/contributions.rs)
to cast the source week_idx as the chosen width (week_idx as u16); run the grep
checks suggested to ensure no remaining pub week_index: usize or uncast
week_index: week_idx usages remain.
src/app.rs (1)

56-64: ⚠️ Potential issue | 🟠 Major

Do not initialize ContributionsService with an empty PAT fallback.

On Line 59, unwrap_or_default() masks missing config and causes runtime GitHub auth failures instead of failing predictably at startup.

Suggested fix
-    let contributions_service =
-        Arc::new(crate::services::contributions::ContributionsService::new(
-            http_client.clone(),
-            config.github_pat.clone().unwrap_or_default(),
-            config
-                .github_username
-                .clone()
-                .unwrap_or_else(|| "nxdun".to_string()),
-        ));
+    let github_pat = config.github_pat.clone().unwrap_or_else(|| {
+        error!("GITHUB_PAT must be set to a non-empty value");
+        std::process::exit(1);
+    });
+    let default_username = config
+        .github_username
+        .clone()
+        .unwrap_or_else(|| "nxdun".to_string());
+
+    let contributions_service =
+        Arc::new(crate::services::contributions::ContributionsService::new(
+            http_client.clone(),
+            github_pat,
+            default_username,
+        ));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/app.rs` around lines 56 - 64, ContributionsService is being constructed
with config.github_pat.unwrap_or_default(), which silently substitutes an empty
PAT; instead, fail fast or propagate the missing value: remove
unwrap_or_default() and either (A) require a PAT at startup by using
expect()/returning an Err with a clear message when config.github_pat is None
before calling ContributionsService::new, or (B) change the
ContributionsService::new call to accept an Option<String> (pass
config.github_pat.clone()) and have ContributionsService handle the missing
token explicitly; reference ContributionsService::new and config.github_pat when
making the change.
src/controllers/api/v1/contributions_controller.rs (1)

18-23: ⚠️ Potential issue | 🟡 Minor

Treat blank username query values as missing input.

At Line 18, unwrap_or_else accepts ?username= / whitespace as valid, so fallback is skipped and an empty username is sent downstream.

Suggested fix
-    let username = query.username.unwrap_or_else(|| {
-        state
-            .contributions_service
-            .get_default_username()
-            .to_string()
-    });
+    let username = query
+        .username
+        .as_deref()
+        .map(str::trim)
+        .filter(|u| !u.is_empty())
+        .map(ToOwned::to_owned)
+        .unwrap_or_else(|| {
+            state
+                .contributions_service
+                .get_default_username()
+                .to_string()
+        });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/controllers/api/v1/contributions_controller.rs` around lines 18 - 23, The
current unwrap_or_else on query.username treats empty or whitespace values as
present; change the logic so blank usernames are treated as missing by trimming
and checking emptiness before falling back: transform query.username into an
Option<String> that returns None when the contained string is empty after trim
(e.g. use query.username.and_then(|s| if s.trim().is_empty() { None } else {
Some(s.trim().to_string()) })) and then call unwrap_or_else to use
state.contributions_service.get_default_username().to_string(); update the code
around the username variable and keep references to query.username,
unwrap_or_else, and state.contributions_service.get_default_username().
🧹 Nitpick comments (1)
postman/collections/Nadzu API/Contributions (Specific User).request.yaml (1)

3-3: Parameterize the username instead of hardcoding nxdun.

Lines 3 and 14 hardcode a specific account, making the request brittle across environments. Use a variable (e.g., {{github_username}}) for portability.

Proposed refactor
-url: "{{base_url}}/api/v1/contributions?username=nxdun"
+url: "{{base_url}}/api/v1/contributions?username={{github_username}}"
...
-          pm.expect(jsonData.username).to.eql('nxdun');
+          pm.expect(jsonData.username).to.eql(pm.variables.get('github_username'));

Also applies to: 14-14

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@postman/collections/Nadzu` API/Contributions (Specific User).request.yaml at
line 3, Replace the hardcoded username in the request URL (the "url:" line
containing "?username=nxdun") with a templated variable so the collection is
portable; change the query parameter from "username=nxdun" to
"username={{github_username}}" and update the other occurrence on line 14
similarly, then ensure the Postman/collection environment defines
github_username or document adding it to environments.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@infra/common/cloud-init.template`:
- Line 137: The .env assignment for GITHUB_PAT is malformed ("GITHUB_PAT
=${GHCR_PAT}" contains a space before '='); update the assignment to remove the
space so the variable is exported correctly (set GITHUB_PAT=${GHCR_PAT}),
ensuring parsers read GITHUB_PAT and the backend receives it at runtime; locate
the assignment of GITHUB_PAT in the cloud-init.template and fix that token.

In `@infra/digitalocean/accounts/naduns-team/variables.tf`:
- Around line 74-77: The variable GHCR_PAT is being reused at runtime as
GITHUB_PAT which expands credential scope; add a separate variable GITHUB_PAT
(distinct from GHCR_PAT) in variables.tf and update any places that pass
GHCR_PAT into the backend/runtime to instead pass GITHUB_PAT (e.g., where
GHCR_PAT is forwarded to the backend contribution processing), document required
minimal scopes for each (GHCR_PAT for registry push/pull, GITHUB_PAT for GitHub
API read scopes), and ensure GHCR_PAT remains used only for registry auth while
GITHUB_PAT is used for API calls.

In `@src/services/contributions.rs`:
- Around line 74-80: The cache field currently uses an unbounded DashMap
(ContributionsService.cache) and only checks expiry on reads, so replace it with
a bounded TTL cache (e.g., moka::sync::Cache or lru_time_cache) with both a max
size and per-entry TTL, then update all places that reference
ContributionsService.cache (including the read/write logic around
ContributionsResponse and expiry checks found in the methods around the 112-135
region) to use the new cache API (get/insert/evict) so entries expire
automatically and LRU/size eviction prevents unbounded memory growth. Ensure the
cache type stores ContributionsResponse values, configure the TTL and max
capacity constants, and remove the manual SystemTime expiry tuple usage so reads
rely on the cache’s built-in TTL/eviction behavior.
- Around line 173-181: The GitHub GraphQL request can hang because the injected
reqwest::Client has no timeout; add a per-request timeout on the request builder
used in contributions.rs (the chain starting from
self.http_client.post(GITHUB_GRAPHQL_URL) and ending with .send()) by inserting
.timeout(Duration::from_secs(<N>)) before .send(), import std::time::Duration,
and ensure the existing .map_err(...) still maps timeout/network errors into
AppError::Internal (adjust the error message if desired) so the handler can fall
back to stale cache.
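
A minimal illustration of the timeout that prompt describes, using reqwest's per-request RequestBuilder::timeout; the function shape, URL, and 10-second value are placeholders, not code from the PR:

```rust
use std::time::Duration;

// Bound the upstream GraphQL call so a hung GitHub response cannot stall the
// handler; on error the caller can fall back to the stale cache entry.
async fn post_graphql(
    client: &reqwest::Client,
    url: &str,
    token: &str,
    body: &serde_json::Value,
) -> Result<reqwest::Response, reqwest::Error> {
    client
        .post(url)
        .bearer_auth(token)
        .json(body)
        .timeout(Duration::from_secs(10))
        .send()
        .await
}
```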


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro Plus

Run ID: d833c31c-5592-4ce0-8bbe-6861df589e4e

📥 Commits

Reviewing files that changed from the base of the PR and between 3d6dc0e and 6254fd8.

📒 Files selected for processing (17)
  • infra/common/cloud-init.template
  • infra/digitalocean/accounts/naduns-team/variables.tf
  • postman/collections/Nadzu API/Contributions (Default User).request.yaml
  • postman/collections/Nadzu API/Contributions (Specific User).request.yaml
  • src/app.rs
  • src/config.rs
  • src/controllers/api/v1/contributions_controller.rs
  • src/controllers/api/v1/mod.rs
  • src/models/contributions_model.rs
  • src/models/mod.rs
  • src/routes/api/v1/contributions_routes.rs
  • src/routes/api/v1/mod.rs
  • src/services/contributions.rs
  • src/services/mod.rs
  • src/state.rs
  • tests/api/common.rs
  • tests/layer_unit_tests.rs

coderabbitai (Bot) left a comment

Actionable comments posted: 1

🧹 Nitpick comments (2)
src/services/contributions.rs (1)

277-278: Minor timing inconsistency between current_date_str and fetched_at.

fetched_at is passed as a parameter but current_date_str is computed from a fresh SystemTime::now() call. In practice the difference is negligible, but for deterministic testing, consider deriving current_date_str from fetched_at as well.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/contributions.rs` around lines 277 - 278, The computed
current_date_str is derived from a fresh SystemTime::now() which can diverge
from the provided fetched_at; change the code to derive
current_year/current_month_num/current_day by calling get_utc_date(fetched_at)
(or otherwise converting fetched_at to the same UTC date tuple) and then format
current_date_str from those values so current_date_str and fetched_at are
consistent (refer to current_date_str, get_utc_date, and the fetched_at
parameter in contributions.rs).
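
A small sketch of that fix; it assumes the chrono crate purely for illustration, whereas the prompt's mention of get_utc_date suggests the service has its own helper that could be reused the same way:

```rust
use std::time::SystemTime;

use chrono::{DateTime, Utc};

// Derive "today" from the same instant the calendar was fetched, so the
// transform is deterministic when tests pass a fixed fetched_at.
fn current_date_str(fetched_at: SystemTime) -> String {
    let dt: DateTime<Utc> = fetched_at.into();
    dt.format("%Y-%m-%d").to_string()
}
```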
tests/api/contributions_tests.rs (1)

87-93: Minor: Avoid double Arc wrapping of config.

config is cloned into one Arc (line 88) and then moved into another Arc (line 89), creating two separate reference-counted pointers. This works but is slightly wasteful.

♻️ Cleaner approach
+    let config = Arc::new(config);
     let state = nadzu::state::AppState {
-        config: Arc::new(config.clone()),
-        ytdlp_manager: Arc::new(nadzu::services::ytdlp::YtdlpManager::new(Arc::new(config))),
+        config: config.clone(),
+        ytdlp_manager: Arc::new(nadzu::services::ytdlp::YtdlpManager::new(config.clone())),
         rate_limiters: Arc::new(nadzu::middleware::rate_limit::RateLimiters::new()),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/api/contributions_tests.rs` around lines 87 - 93, The code double-wraps
the config in two Arcs; create a single Arc for config and reuse it: allocate
let config_arc = Arc::new(config.clone()) (or otherwise obtain a single Arc),
then set AppState.config = Arc::clone(&config_arc) and call
YtdlpManager::new(Arc::clone(&config_arc)) instead of passing Arc::new(config)
into YtdlpManager::new; this removes the extra Arc allocation while using
Arc::clone for both AppState and YtdlpManager to share the same Arc instance.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/services/contributions.rs`:
- Line 111: Clippy flags the literal seconds as suboptimal units; change the
interval construction in the tokio::time::interval call (the let mut interval =
tokio::time::interval(...) line) to use a clearer expression like
Duration::from_secs(10 * 60) instead of Duration::from_secs(600) so the intent
(10 minutes) is explicit.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro Plus

Run ID: dff47d4c-4169-450b-a151-c9c50428513c

📥 Commits

Reviewing files that changed from the base of the PR and between 6254fd8 and 9423a8e.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (10)
  • Cargo.toml
  • infra/common/cloud-init.template
  • src/app.rs
  • src/config.rs
  • src/controllers/api/v1/contributions_controller.rs
  • src/services/contributions.rs
  • tests/api/common.rs
  • tests/api/contributions_tests.rs
  • tests/api_tests.rs
  • tests/layer_unit_tests.rs
✅ Files skipped from review due to trivial changes (3)
  • infra/common/cloud-init.template
  • tests/api_tests.rs
  • tests/layer_unit_tests.rs
🚧 Files skipped from review as they are similar to previous changes (3)
  • tests/api/common.rs
  • src/app.rs
  • src/controllers/api/v1/contributions_controller.rs

nxdun (Owner, Author) commented Apr 25, 2026

@coderabbitai full review

coderabbitai Bot commented Apr 25, 2026

✅ Actions performed

Full review triggered.

coderabbitai (Bot) left a comment

Actionable comments posted: 2


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro Plus

Run ID: ff0fc9e8-0c71-4f73-9fce-2f9406b1c66d

📥 Commits

Reviewing files that changed from the base of the PR and between 9423a8e and 9cf1f44.

📒 Files selected for processing (3)
  • postman/collections/Nadzu API/Contributions (Default User).request.yaml
  • postman/collections/Nadzu API/Contributions (Specific User).request.yaml
  • src/services/contributions.rs
✅ Files skipped from review due to trivial changes (1)
  • postman/collections/Nadzu API/Contributions (Default User).request.yaml
🚧 Files skipped from review as they are similar to previous changes (1)
  • postman/collections/Nadzu API/Contributions (Specific User).request.yaml

Comment on lines +161 to +179
if let Some(entry) = self.cache.get(&cache_key) {
    let (cached_resp, expires_at) = entry.value();
    if *expires_at > now {
        let mut resp = cached_resp.clone();
        resp.meta.cached = true;
        return Ok(resp);
    }
}

let resp_result = self.fetch_and_process(username, SystemTime::now()).await;

match resp_result {
    Ok(new_resp) => {
        let expires_at = now + u64::from(CACHE_TTL_SECONDS);
        // Bounded: only insert if under capacity to prevent memory leaks
        if self.cache.len() < CACHE_MAX_CAPACITY as usize {
            self.cache.insert(cache_key, (new_resp.clone(), expires_at));
        }
        Ok(new_resp)

⚠️ Potential issue | 🟠 Major

Serialize refreshes to avoid a cache stampede.

Two concurrent requests that both miss/expire the cache will both hit GitHub, because the read-check and fetch/insert path is unsynchronized. Since this endpoint only serves one username, cold start and TTL rollover can fan out identical upstream calls and burn rate limit unnecessarily.

Suggested fix
 pub struct ContributionsService {
     http_client: Client,
     pat: String,
     default_username: String,
     graphql_url: String,
     cache: Arc<DashMap<String, (ContributionsResponse, u64)>>,
+    refresh_lock: tokio::sync::Mutex<()>,
 }
@@
         Self {
             http_client,
             pat,
             default_username,
             graphql_url,
             cache,
+            refresh_lock: tokio::sync::Mutex::new(()),
         }
     }
@@
-        let resp_result = self.fetch_and_process(username, SystemTime::now()).await;
+        let _refresh_guard = self.refresh_lock.lock().await;
+
+        if let Some(entry) = self.cache.get(&cache_key) {
+            let (cached_resp, expires_at) = entry.value();
+            if *expires_at > now_unix() {
+                let mut resp = cached_resp.clone();
+                resp.meta.cached = true;
+                return Ok(resp);
+            }
+        }
+
+        let resp_result = self.fetch_and_process(username, SystemTime::now()).await;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/contributions.rs` around lines 161 - 179, Concurrent cache
misses on cache_key can cause a stampede because the read-check and
fetch_and_process calls are unsynchronized; serialize refreshes by acquiring a
lock per cache_key (or a global async mutex if simpler) before performing
fetch_and_process, re-checking the cache after acquiring the lock to avoid
redundant fetches, and only then insert into self.cache using the existing
expires_at logic (use the same CACHE_TTL_SECONDS and CACHE_MAX_CAPACITY
semantics). Implement this around the existing cache lookup/insert code that
references self.cache, cache_key, fetch_and_process, CACHE_TTL_SECONDS, and
CACHE_MAX_CAPACITY so only one task fetches and populates the cache for a given
key at a time.

Comment on lines +290 to +291
// Parse YYYY-MM-DD
let (y_day, m_day, _) = parse_ymd(&day.date).unwrap_or((1970, 1, 1));

⚠️ Potential issue | 🟡 Minor

Don’t silently coerce invalid upstream dates to 1970-01-01.

If GitHub ever returns an unexpected date format here, the service will emit corrupted month labels and current/future flags instead of failing and using the stale-cache fallback. This should be treated as a transform error, not normalized into a valid-looking response.

Suggested fix
-    fn transform_calendar(
+    fn transform_calendar(
         username: &str,
         calendar: &GithubContributionCalendar,
         fetched_at: SystemTime,
-    ) -> ContributionsResponse {
+    ) -> Result<ContributionsResponse, AppError> {
@@
-                let (y_day, m_day, _) = parse_ymd(&day.date).unwrap_or((1970, 1, 1));
+                let (y_day, m_day, _) = parse_ymd(&day.date).ok_or_else(|| {
+                    AppError::Internal(anyhow::anyhow!("Invalid GitHub date: {}", day.date))
+                })?;
@@
-        ContributionsResponse {
+        Ok(ContributionsResponse {
             username: username.to_string(),
             range: ContributionRange {
@@
-        }
+        })

And in fetch_and_process:

-        Ok(Self::transform_calendar(username, &calendar, fetched_at))
+        Self::transform_calendar(username, &calendar, fetched_at)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/contributions.rs` around lines 290 - 291, The code currently
coerces invalid dates by using parse_ymd(&day.date).unwrap_or((1970, 1, 1));
instead of treating parse failures as transform errors; update the transform to
propagate or return an error when parse_ymd fails (remove the unwrap_or
fallback), e.g. have parse_ymd return a Result and use the ? operator or map_err
to convert to the function's TransformError so fetch_and_process can fall back
to stale-cache logic; make sure to update the caller (fetch_and_process) to
handle the error result from the contribution parsing path rather than receiving
silently normalized 1970-01-01 values.

coderabbitai (Bot) left a comment

Actionable comments posted: 1

♻️ Duplicate comments (2)
infra/digitalocean/accounts/naduns-team/variables.tf (1)

74-77: ⚠️ Potential issue | 🟠 Major

Use a dedicated runtime token instead of reusing GHCR_PAT.

Line 76 confirms GHCR_PAT is still being reused as GITHUB_PAT. This expands secret scope and increases impact if leaked. Keep separate secrets for registry auth vs GitHub GraphQL API calls.

Suggested Terraform variable split
 variable "GHCR_PAT" {
   //SECRET: Expected to be set via root TF_VAR_GHCR_PAT. never Declare
-  //update: I use this to process contribtion grapth in backend server. to avoid confusion it passed to backend as GITHUB_PAT. im using the same 
   description = "GitHub Personal Access Token for GHCR login"
   type        = string
   sensitive   = true
 }
+
+variable "GITHUB_PAT" {
+  //SECRET: Expected to be set via root TF_VAR_GITHUB_PAT. never Declare
+  description = "GitHub Personal Access Token for GitHub GraphQL contributions API"
+  type        = string
+  sensitive   = true
+}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@infra/digitalocean/accounts/naduns-team/variables.tf` around lines 74 - 77,
The variable GHCR_PAT is being reused as GITHUB_PAT (registry token vs runtime
GraphQL token); instead define a separate Terraform variable (e.g., GITHUB_PAT
or GITHUB_RUNTIME_TOKEN) instead of repurposing GHCR_PAT, update the variable
declarations and descriptions accordingly, update any references where GHCR_PAT
is passed to the backend to use the new variable name, and ensure both variables
remain secret (not declared in repo) and are expected to be set via TF_VAR_* at
runtime.
src/app.rs (1)

56-65: ⚠️ Potential issue | 🟠 Major

Fail fast when GITHUB_PAT is missing.

Line 59 still uses unwrap_or_default(), which silently turns missing config into an empty token and defers failure to runtime requests.

Suggested fail-fast update
-    let contributions_service =
-        Arc::new(crate::services::contributions::ContributionsService::new(
-            http_client.clone(),
-            config.github_pat.clone().unwrap_or_default(),
-            config
-                .github_username
-                .clone()
-                .unwrap_or_else(|| "nxdun".to_string()),
-            config.github_graphql_url.clone(),
-        ));
+    let github_pat = config.github_pat.clone().unwrap_or_else(|| {
+        error!("GITHUB_PAT must be set to a non-empty value");
+        std::process::exit(1);
+    });
+    let default_username = config
+        .github_username
+        .clone()
+        .unwrap_or_else(|| "nxdun".to_string());
+
+    let contributions_service = Arc::new(
+        crate::services::contributions::ContributionsService::new(
+            http_client.clone(),
+            github_pat,
+            default_username,
+            config.github_graphql_url.clone(),
+        ),
+    );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/app.rs` around lines 56 - 65: the code currently uses
config.github_pat.clone().unwrap_or_default() when constructing
ContributionsService (ContributionsService::new), which hides a missing
GITHUB_PAT. Change this to fail fast by requiring the token: replace the
unwrap_or_default usage with an explicit requirement (e.g., call expect or
propagate a configuration error) so that ContributionsService::new receives a
real token and the app errors immediately if config.github_pat is None. Update
the surrounding initialization to propagate or handle the error accordingly
rather than passing an empty string.
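For comparison with the exit-based diff above, a minimal sketch that propagates
a configuration error instead of terminating the process; the Config shape and
the require_github_pat helper are assumptions for illustration, not the merged
implementation.

use anyhow::{Context, Result};

// Minimal stand-in for the PR's Config; github_pat is optional because it
// is read from the environment.
struct Config {
    github_pat: Option<String>,
}

// Hypothetical helper: reject a missing or empty token at startup so that
// ContributionsService::new always receives a real value.
fn require_github_pat(config: &Config) -> Result<String> {
    config
        .github_pat
        .clone()
        .filter(|pat| !pat.is_empty())
        .context("GITHUB_PAT must be set to a non-empty value")
}

fn main() -> Result<()> {
    let config = Config {
        github_pat: std::env::var("GITHUB_PAT").ok(),
    };
    // `?` aborts startup with a clear message when the token is absent,
    // rather than deferring the failure to the first GraphQL request.
    let github_pat = require_github_pat(&config)?;
    println!("token loaded ({} bytes)", github_pat.len());
    Ok(())
}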
🧹 Nitpick comments (2)
tests/api/contributions_tests.rs (1)

109-113: Consider adding more specific matchers to the mock.

The mock only matches on HTTP method POST, which could inadvertently match unrelated requests. Adding a path matcher and/or body matcher would make the test more robust and self-documenting.

🔧 Proposed enhancement
+use wiremock::matchers::{method, path, body_partial_json};
...
-    Mock::given(method("POST"))
+    Mock::given(method("POST"))
+        .and(path("/"))  // or the actual GraphQL endpoint path
+        .and(body_partial_json(serde_json::json!({"variables": {"username": "nxdun"}})))
         .respond_with(ResponseTemplate::new(200).set_body_json(github_response))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/api/contributions_tests.rs` around lines 109 - 113: the test's Mock
setup currently only matches method("POST"), which can falsely match other
requests. Update the Mock definition (the Mock::given(...) chain around
method("POST") that ends with .mount(&mock_server)) to include more specific
matchers, such as path("/expected/path") and/or a body matcher (e.g., matching
JSON keys or the exact body), by adding .and(path(...)) and/or
.and(body_json(...)) or .and(body_string(...)) calls before .respond_with(...).
Ensure the path string and body matcher correspond to the request produced by
the code under test so the mock only matches the intended request.
src/services/contributions.rs (1)

175-179: Capacity check silently drops new entries when cache is full.

The bounded cache check prevents unbounded growth, but when len() >= CACHE_MAX_CAPACITY, fresh responses are silently not cached. This means subsequent requests will repeatedly hit GitHub until an entry expires and is cleaned up. Consider an LRU eviction strategy or at least logging when entries are dropped.

♻️ Option: Log when cache is at capacity
                 // Bounded: only insert if under capacity to prevent memory leaks
                 if self.cache.len() < CACHE_MAX_CAPACITY as usize {
                     self.cache.insert(cache_key, (new_resp.clone(), expires_at));
+                } else {
+                    tracing::warn!("Contributions cache at capacity, not caching response for {}", username);
                 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/contributions.rs` around lines 175 - 179: the cache currently
skips inserting silently when self.cache.len() >= CACHE_MAX_CAPACITY, which
leads to repeated upstream hits. Change the insertion logic in the block that
calls self.cache.insert(cache_key, (new_resp.clone(), expires_at)) so that,
when at capacity, you evict an entry before inserting (either integrate an LRU
structure or, if keeping the current map, find and remove the oldest entry by
inspecting the stored expires_at or usage metadata, then insert), and emit a
log entry for the eviction (include cache_key and the reason). Reference
CACHE_MAX_CAPACITY, self.cache, cache_key, expires_at, and the insert call in
your changes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@infra/common/cloud-init.template`:
- Line 137: the line mapping GITHUB_PAT=${GHCR_PAT} in cloud-init.template
reuses the registry credential for runtime app calls. Remove that mapping and
instead inject a dedicated, minimally scoped GITHUB_PAT secret into the
environment during provisioning: update cloud-init.template to stop exporting
GITHUB_PAT from GHCR_PAT, and modify the provisioning/secret store to create
and pass a separate GITHUB_PAT value. Reference the environment variable names
GHCR_PAT and GITHUB_PAT in the changes so reviewers can locate the edit. Ensure
the app receives only the new low-scope GITHUB_PAT (not GHCR_PAT) at runtime,
and document that GHCR_PAT remains solely for registry pulls.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro Plus

Run ID: e09de899-a5b6-4c4f-a67d-6c2cc845730f

📥 Commits

Reviewing files that changed from the base of the PR and between 3d6dc0e and 9cf1f44.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (20)
  • Cargo.toml
  • infra/common/cloud-init.template
  • infra/digitalocean/accounts/naduns-team/variables.tf
  • postman/collections/Nadzu API/Contributions (Default User).request.yaml
  • postman/collections/Nadzu API/Contributions (Specific User).request.yaml
  • src/app.rs
  • src/config.rs
  • src/controllers/api/v1/contributions_controller.rs
  • src/controllers/api/v1/mod.rs
  • src/models/contributions_model.rs
  • src/models/mod.rs
  • src/routes/api/v1/contributions_routes.rs
  • src/routes/api/v1/mod.rs
  • src/services/contributions.rs
  • src/services/mod.rs
  • src/state.rs
  • tests/api/common.rs
  • tests/api/contributions_tests.rs
  • tests/api_tests.rs
  • tests/layer_unit_tests.rs
