feat: weekly Slack digest for release visibility #2749
Conversation
📝 Walkthrough
Adds a scheduled GitHub Actions workflow and a new Node.js ES module that fetches recent repository releases, classifies and deduplicates changes, requests an AI analysis from AkashML, and posts a formatted weekly digest to Slack; supports manual dispatch, dry-run, and a configurable lookback window.
Sequence Diagram

```mermaid
sequenceDiagram
    participant GH as GitHub API
    participant Script as Weekly Digest Script
    participant ML as AkashML API
    participant Slack as Slack API/Webhook
    Script->>GH: Fetch releases (paginated, until DAYS_BACK)
    activate GH
    GH-->>Script: Releases & commits
    deactivate GH
    Script->>Script: Parse releases → services, dedupe commits, classify changes
    Script->>ML: POST aggregated payload (DeepSeek-V3.2)
    activate ML
    ML-->>Script: JSON analysis / digest
    deactivate ML
    Script->>Slack: Post main digest (webhook or bot)
    activate Slack
    Slack-->>Script: Confirmation
    deactivate Slack
    Script->>Slack: Post per-service threaded messages (optional, sequential with pauses)
    activate Slack
    Slack-->>Script: Confirmations
    deactivate Slack
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: 2 passed, 1 failed (warning)
Actionable comments posted: 2
🤖 Fix all issues with AI agents
Verify each finding against the current code and only fix it if needed.
In `@.github/scripts/weekly-digest.mjs`:
- Around line 25-34: Add upfront validation after the env var assignments:
verify DAYS_BACK (parsed by parseInt) is a positive integer within a sensible
range (e.g., 1–365) and fail fast with a clear error if invalid; ensure required
credentials (GITHUB_TOKEN and AKASHML_API_KEY) are present and, unless DRY_RUN
is true, validate Slack credentials (SLACK_WEBHOOK_URL or both SLACK_BOT_TOKEN
and SLACK_CHANNEL_ID) and emit a descriptive error/exit when missing. Reference
the existing constants REPO_OWNER, REPO_NAME, DAYS_BACK, DRY_RUN, GITHUB_TOKEN,
AKASHML_API_KEY, SLACK_WEBHOOK_URL, SLACK_BOT_TOKEN, and SLACK_CHANNEL_ID when
implementing these guard checks so errors are clear and occur before any
date-range or Slack actions (a validation sketch follows this list).
- Around line 135-161: The commit-matching regex in weekly-digest.mjs currently
only captures bare or simple-bracketed hashes and therefore misses
semantic-release markdown links; update the regex used in the changeMatch call
so the third/fourth capture group also accepts a markdown link form like
([<hash>](<url>)) in addition to (hash) and ([hash]) and ensure the extracted
commitHash logic (the commitHash = changeMatch[3] || changeMatch[4]; line) still
points to the correct capture for all three forms; adjust the pattern used in
changeMatch and/or the capture ordering so seenCommits deduping and description
cleanup continue to work with these markdown link formats (a regex sketch follows this list).
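For the first item above, a minimal sketch of the guard checks, assuming the constants are read from `process.env` (the names follow the review comment; defaults and messages are illustrative):

```js
// Hypothetical fail-fast block for the top of weekly-digest.mjs.
const {
  GITHUB_TOKEN,
  AKASHML_API_KEY,
  SLACK_WEBHOOK_URL,
  SLACK_BOT_TOKEN,
  SLACK_CHANNEL_ID,
} = process.env;
const DRY_RUN = process.env.DRY_RUN === "true";
const DAYS_BACK = Number.parseInt(process.env.DAYS_BACK ?? "7", 10);

function fail(msg) {
  console.error(`weekly-digest: ${msg}`);
  process.exit(1);
}

// Run before any date-range math or Slack calls.
if (!Number.isInteger(DAYS_BACK) || DAYS_BACK < 1 || DAYS_BACK > 365) {
  fail(`DAYS_BACK must be an integer in 1-365, got "${process.env.DAYS_BACK}"`);
}
if (!GITHUB_TOKEN) fail("GITHUB_TOKEN is required");
if (!AKASHML_API_KEY) fail("AKASHML_API_KEY is required");
if (!DRY_RUN && !SLACK_WEBHOOK_URL && !(SLACK_BOT_TOKEN && SLACK_CHANNEL_ID)) {
  fail(
    "set SLACK_WEBHOOK_URL, or both SLACK_BOT_TOKEN and SLACK_CHANNEL_ID, or use DRY_RUN=true"
  );
}
```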
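For the second item, one shape the extended pattern could take so all three commit-reference forms yield a hash for `seenCommits` deduping; the group layout here is illustrative, not the script's actual `changeMatch` capture ordering:

```js
// Match "(abc1234)", "([abc1234])", and "([abc1234](https://…))" at the end
// of a changelog line; groups 1-3 cover the three alternatives.
const COMMIT_REF =
  /\((?:\[([0-9a-f]{7,40})\]\([^)]*\)|\[([0-9a-f]{7,40})\]|([0-9a-f]{7,40}))\)\s*$/;

const samples = [
  "fix: handle empty payload (abc1234)",
  "fix: handle empty payload ([abc1234])",
  "fix: handle empty payload ([abc1234](https://github.com/o/r/commit/abc1234))",
];
for (const line of samples) {
  const changeMatch = line.match(COMMIT_REF);
  // Coalesce across alternatives, mirroring the existing
  // `commitHash = changeMatch[3] || changeMatch[4]` pattern.
  const commitHash = changeMatch?.[1] || changeMatch?.[2] || changeMatch?.[3];
  console.log(commitHash); // "abc1234" in all three cases
}
```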
I'd like to clarify: who is the main audience? But I like the idea!
This would be us internally to start with. It'll help us highlight what has been done from every perspective of the repo, not just features, and give more visibility.
Force-pushed from 37339a2 to 3434fe3
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@script/weekly-digest.mjs`:
- Around line 459-497: The code builds changeLines from serviceDigest.changes
and pushes it into threadBlocks without size checks, which can exceed Slack's
3,000-character Block Kit limit and cause chat.postMessage to fail; fix by
validating changeLines (derived from serviceDigest.changes) before adding it to
threadBlocks: if its length > 3000 either (A) split the string into multiple
mrkdwn "section" blocks/chunks each <= 3000 chars and push each chunk as a
separate section block, or (B) truncate changeLines to ~2990 chars and append an
ellipsis and a “See full digest” link if available; implement this logic where
changeLines is computed and before the threadBlocks.push call so that
threadBlocks only contains safe-sized section.text values (a chunking sketch follows this list).
- Around line 81-107: In fetchRecentReleases, avoid silently truncating results
when the hard pagination cap is hit: replace the silent break on the page cap
(page > 10) with an explicit error path that indicates the pagination limit was
reached so callers know results may be incomplete; update the loop in
fetchRecentReleases to throw or return a clear error (with context like
REPO_OWNER/REPO_NAME, DAYS_BACK, page size and current page) when the page cap
is reached instead of simply breaking, so the caller can handle/retry or surface
the failure (a pagination sketch follows this list).
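For the first item above, a minimal sketch of option (A), assuming `changeLines` is a newline-joined mrkdwn string and no single line exceeds the cap:

```js
// Illustrative splitter: break an oversized mrkdwn string into multiple
// section blocks, each within Slack's 3,000-character section.text limit.
const MAX_SECTION_CHARS = 3000;

function toSectionBlocks(text) {
  const blocks = [];
  let chunk = "";
  for (const line of text.split("\n")) {
    // +1 accounts for the newline re-added when the line joins the chunk.
    if (chunk && chunk.length + line.length + 1 > MAX_SECTION_CHARS) {
      blocks.push({ type: "section", text: { type: "mrkdwn", text: chunk } });
      chunk = "";
    }
    chunk = chunk ? `${chunk}\n${line}` : line;
  }
  if (chunk) {
    blocks.push({ type: "section", text: { type: "mrkdwn", text: chunk } });
  }
  return blocks;
}

// At the push site (illustrative):
// threadBlocks.push(...toSectionBlocks(changeLines));
```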
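For the second item, one way the loop could fail loudly at the cap; `MAX_PAGES`, `PER_PAGE`, and the raw `fetch` against the GitHub releases endpoint are illustrative stand-ins for whatever client the script actually uses:

```js
// Sketch: surface the pagination cap as an error with context instead of
// silently truncating results.
const { REPO_OWNER, REPO_NAME, GITHUB_TOKEN } = process.env; // assumed set
const DAYS_BACK = Number(process.env.DAYS_BACK ?? "7");
const MAX_PAGES = 10;
const PER_PAGE = 100;

async function fetchRecentReleases(cutoff) {
  const releases = [];
  for (let page = 1; ; page++) {
    if (page > MAX_PAGES) {
      throw new Error(
        `Pagination cap hit for ${REPO_OWNER}/${REPO_NAME}: page ${page} > ` +
          `${MAX_PAGES} at ${PER_PAGE}/page (DAYS_BACK=${DAYS_BACK}); ` +
          "results would be incomplete."
      );
    }
    const res = await fetch(
      `https://api.github.com/repos/${REPO_OWNER}/${REPO_NAME}/releases?per_page=${PER_PAGE}&page=${page}`,
      { headers: { Authorization: `Bearer ${GITHUB_TOKEN}` } }
    );
    if (!res.ok) throw new Error(`GitHub API error: ${res.status}`);
    const batch = await res.json();
    if (batch.length === 0) return releases; // no more pages
    for (const release of batch) {
      // Releases come back newest-first; stop once past the window.
      if (new Date(release.published_at) < cutoff) return releases;
      releases.push(release);
    }
  }
}
```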
Force-pushed from 3434fe3 to 93152e6
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@script/weekly-digest.mjs`:
- Around line 296-300: The LLM response parsing assumes
data.choices[0].message.content always exists and is valid JSON; update the
parsing in the function that reads const data = await res.json() so it first
reads text via data.choices?.[0]?.message?.content and throws a clear error
including a truncated JSON.stringify(data) when that path is missing, then run
the existing cleaning (replace /```json\s*|```\s*/g) and wrap
JSON.parse(cleaned) in a try/catch that throws a new Error containing
parseErr.message plus a truncated raw cleaned string to provide context for
malformed JSON (a parsing sketch follows).
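A sketch of the defensive parsing described above, wrapped in a hypothetical helper; it assumes `res` is the `fetch` response from the AkashML call and reuses the script's existing fence-stripping regex:

````js
// Guard the optional chain and include truncated context in every error so a
// malformed or reshaped LLM response is debuggable from CI logs.
async function parseDigestResponse(res) {
  const data = await res.json();
  const raw = data.choices?.[0]?.message?.content;
  if (!raw) {
    throw new Error(
      `Unexpected AkashML response shape: ${JSON.stringify(data).slice(0, 500)}`
    );
  }
  // Strip markdown code fences the model may wrap around its JSON.
  const cleaned = raw.replace(/```json\s*|```\s*/g, "");
  try {
    return JSON.parse(cleaned);
  } catch (parseErr) {
    throw new Error(
      `Failed to parse digest JSON: ${parseErr.message}; ` +
        `raw (truncated): ${cleaned.slice(0, 500)}`
    );
  }
}
````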
Codecov Report
✅ All modified and coverable lines are covered by tests.

Additional details and impacted files

```diff
@@            Coverage Diff             @@
##             main    #2749      +/-   ##
==========================================
- Coverage   51.77%   50.96%   -0.81%
==========================================
  Files        1045     1010      -35
  Lines       27447    26615     -832
  Branches     6340     6245      -95
==========================================
- Hits        14211    13565     -646
+ Misses      12809    12632     -177
+ Partials      427      418       -9
```

*This pull request uses carry forward flags.*
Force-pushed from 349da42 to ec08b4b
🧹 Nitpick comments (2)
script/weekly-digest.mjs (2)
376-382: Consider truncating `digest.summary` for consistency with `changeLines`.
While the LLM is instructed to produce "2-3 sentences," there's no guarantee it will comply. The `changeLines` field correctly applies a 3000-character truncation guard (lines 508-512), but `digest.summary` does not. For defensive consistency, consider applying the same pattern.

🛡️ Optional truncation guard
```diff
+ const MAX_BLOCK_CHARS = 3000;
+ const safeSummary =
+   digest.summary.length > MAX_BLOCK_CHARS
+     ? `${digest.summary.slice(0, MAX_BLOCK_CHARS - 15)}…(truncated)`
+     : digest.summary;
+
  {
    type: "section",
    text: {
      type: "mrkdwn",
-     text: digest.summary,
+     text: safeSummary,
    },
  },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@script/weekly-digest.mjs` around lines 376-382: The `digest.summary` text block needs the same 3000-character truncation guard used for `changeLines`: replace direct use of `digest.summary` in the Slack section block with a truncated version (e.g., compute `const summary = digest.summary?.length > 3000 ? digest.summary.slice(0, 3000) + '…' : digest.summary`) and then use that summary variable in the block; reference `digest.summary` and the existing `changeLines` truncation logic as the model to copy for consistency.
274-294: Add timeout to AkashML fetch to prevent indefinite hangs.
This scheduled job makes LLM inference calls that can take 30+ seconds and may hang if the service is degraded. Without a timeout, the fetch could block indefinitely. Use `AbortController` with a timeout (e.g., 120 seconds) to ensure the job completes or fails gracefully.

♻️ Proposed fix with AbortController timeout
```diff
+ const controller = new AbortController();
+ const timeoutId = setTimeout(() => controller.abort(), 120_000);
+
  const res = await fetch("https://api.akashml.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${AKASHML_API_KEY}`,
    },
    body: JSON.stringify({
      model: "deepseek-ai/DeepSeek-V3.2",
      messages: [
        { role: "system", content: systemPrompt },
        { role: "user", content: userPrompt },
      ],
      temperature: 0.3,
      max_tokens: 4096,
      top_p: 0.9,
    }),
+   signal: controller.signal,
  });
+ clearTimeout(timeoutId);

  if (!res.ok) {
    throw new Error(`AkashML API error: ${res.status} ${await res.text()}`);
  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@script/weekly-digest.mjs` around lines 274-294: Add an AbortController-based timeout around the AkashML fetch call to avoid indefinite hangs: create an AbortController, start a timer (e.g., 120000 ms) that calls controller.abort(), pass controller.signal to the fetch options used where `const res = await fetch(...)` is created, and ensure you clear the timeout after fetch completes; also catch/handle the abort error so the scheduled job fails gracefully when the request times out.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@script/weekly-digest.mjs`:
- Around line 376-382: The digest.summary text block needs the same
3000-character truncation guard used for changeLines: replace direct use of
digest.summary in the Slack section block with a truncated version (e.g.,
compute const summary = digest.summary?.length > 3000 ? digest.summary.slice(0,
3000) + '…' : digest.summary) and then use that summary variable in the block;
reference digest.summary and the existing changeLines truncation logic as the
model to copy for consistency.
- Around line 274-294: Add an AbortController-based timeout around the AkashML
fetch call to avoid indefinite hangs: create an AbortController, start a timer
(e.g., 120000 ms) that calls controller.abort(), pass controller.signal to the
fetch options used where `const res = await fetch(...)` is created, and ensure
you clear the timeout after fetch completes; also catch/handle the abort error
so the scheduled job fails gracefully when the request times out.
What
Adds an automated weekly digest that posts to Slack every Monday, summarizing all releases from the past week across every service in the monorepo.
Why
Architecture, infrastructure, and reliability improvements are invisible to most of the org because people fixate on features. This digest:
- Zero extra work for contributors: it reads directly from the existing conventional-commit / semantic-release workflow.
Files
- `.github/workflows/weekly-digest.yml`
- `.github/scripts/weekly-digest.mjs`
- `.github/scripts/package.json`

Required secrets
- `AKASHML_API_KEY`
- `SLACK_DIGEST_WEBHOOK_URL`
- `SLACK_BOT_TOKEN` + `SLACK_CHANNEL_ID`

Example output
Each service then gets its own threaded reply with categorized changes.
Testing
Run manually via Actions → Weekly Console Digest → Run workflow with `dry_run = true` to see output in logs without posting to Slack.