feat: weekly Slack digest for release visibility #2749

Open · baktun14 wants to merge 2 commits into main from feat/weekly-slack-digest

Conversation

@baktun14 (Contributor) commented Feb 16, 2026

What

Adds an automated weekly digest that posts to Slack every Monday, summarizing all releases from the past week across every service in the monorepo.

Why

Architecture, infrastructure, and reliability improvements are invisible to most of the org because people fixate on features. This digest:

  • Groups releases by service (Console Web, Console API, Notifications, TX Signer, etc.)
  • Deduplicates shared commits (e.g. chain SDK upgrades that touch 4 services show once with a cross-cutting note)
  • Adds AI-generated impact context via AkashML (DeepSeek-V3.2) — answers "so what?" for each change
  • Highlights non-feature work explicitly (architecture, reliability, DX)
  • Posts per-service threads in Slack so people can drill into what they care about

Zero extra work for contributors — it reads directly from the existing conventional-commit / semantic-release workflow.

Files

File                                  Purpose
.github/workflows/weekly-digest.yml   Cron trigger (Monday 9 AM UTC) + manual dispatch
.github/scripts/weekly-digest.mjs     Fetches releases → parses → AI analysis → Slack posting
.github/scripts/package.json          ESM module config for the script

Required secrets

Secret                                 Purpose
AKASHML_API_KEY                        AkashML API key for DeepSeek-V3.2
SLACK_DIGEST_WEBHOOK_URL               Slack webhook (simple mode)
SLACK_BOT_TOKEN + SLACK_CHANNEL_ID     Slack bot (threaded mode, recommended)

Example output

📋 Console Weekly Digest — Feb 10 – Feb 17, 2026

A reliability-focused week: Notifications got a NestJS upgrade and
connection pooling, Console Web shipped better error handling, and
the chain SDK was upgraded across 4 services.

11 releases across 6 services │ 9 unique changes

🌐 Console Web v3.28.3  │  ⚙️ Console API v3.30.3  │  🔔 Notifications v2.16.0

💡 Worth noting: The Notifications NestJS + pg-boss upgrade modernizes
   the job queue layer, reducing future maintenance burden.

🔗 Cross-cutting: Chain SDK upgrade touched TX Signer, Stats Web,
   Provider Proxy, and Console Web.

🩺 Health signal: Console API payment deadlock fix suggests the
   payment flow may need a broader concurrency review.

Each service then gets its own threaded reply with categorized changes.

Testing

Run manually via Actions → Weekly Console Digest → Run workflow with dry_run = true to see output in logs without posting to Slack.

Summary by CodeRabbit

  • Chores
    • Added an automated weekly digest workflow scheduled Mondays at 09:00 UTC with manual trigger and configurable lookback/dry-run options.
  • New Features
    • Generates per-service release summaries and highlights, aggregates changes, and posts a main digest plus optional per-service threaded messages to Slack.
    • Supports dry-run mode, multiple Slack posting modes, ML-based analysis for summaries, and improved logging and error handling.

@coderabbitai bot (Contributor) commented Feb 16, 2026

📝 Walkthrough

Adds a scheduled GitHub Actions workflow and a new Node.js ES module that fetches recent repository releases, classifies and deduplicates changes, requests an AI analysis from AkashML, and posts a formatted weekly digest to Slack; supports manual dispatch, dry-run, and a configurable lookback.

Changes

Cohort / File(s)          Summary

GitHub Actions Workflow — .github/workflows/weekly-digest.yml
New "Weekly Console Digest" workflow scheduled Mondays 09:00 UTC with workflow_dispatch inputs (days_back, dry_run). Checks out the repository, sets up Node.js 20, and runs script/weekly-digest.mjs, passing secrets and inputs as environment variables, with read-only contents permission.

Weekly Digest Script — script/weekly-digest.mjs
New ESM script that fetches paginated GitHub releases, groups and deduplicates per-service changes, classifies sections, validates env inputs, posts an aggregated payload to AkashML and parses the JSON response, and posts main + per-service Slack messages (webhook or bot, optional threading). Includes DRY_RUN, error handling, a pagination cap, logging, and rate-limiting between posts.

Sequence Diagram

sequenceDiagram
    participant GH as GitHub API
    participant Script as Weekly Digest Script
    participant ML as AkashML API
    participant Slack as Slack API/Webhook

    Script->>GH: Fetch releases (paginated, until DAYS_BACK)
    activate GH
    GH-->>Script: Releases & commits
    deactivate GH

    Script->>Script: Parse releases → services, dedupe commits, classify changes

    Script->>ML: POST aggregated payload (DeepSeek-V3.2)
    activate ML
    ML-->>Script: JSON analysis / digest
    deactivate ML

    Script->>Slack: Post main digest (webhook or bot)
    activate Slack
    Slack-->>Script: Confirmation
    deactivate Slack

    Script->>Slack: Post per-service threaded messages (optional, sequential with pauses)
    activate Slack
    Slack-->>Script: Confirmations
    deactivate Slack

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐰 I hop through commits and gather cheer,
Lines and releases, tidy and clear.
AI hums summaries, neat and bright,
Slack gets my basket — weekly delight! 🥕✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
Docstring Coverage ⚠️ Warning: docstring coverage is 50.00%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)
Title check ✅: the title clearly and concisely summarizes the main change: adding a weekly automated Slack digest for release visibility.
Description check ✅: the description comprehensively covers the Why and What sections, with detailed context, files, required secrets, example output, and testing instructions.



@coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Fix all issues with AI agents
Verify each finding against the current code and only fix it if needed.


In @.github/scripts/weekly-digest.mjs:
- Around line 25-34: Add upfront validation after the env var assignments:
verify DAYS_BACK (parsed by parseInt) is a positive integer within a sensible
range (e.g., 1–365) and fail fast with a clear error if invalid; ensure required
credentials (GITHUB_TOKEN and AKASHML_API_KEY) are present and, unless DRY_RUN
is true, validate Slack credentials (SLACK_WEBHOOK_URL or both SLACK_BOT_TOKEN
and SLACK_CHANNEL_ID) and emit a descriptive error/exit when missing. Reference
the existing constants REPO_OWNER, REPO_NAME, DAYS_BACK, DRY_RUN, GITHUB_TOKEN,
AKASHML_API_KEY, SLACK_WEBHOOK_URL, SLACK_BOT_TOKEN, and SLACK_CHANNEL_ID when
implementing these guard checks so errors are clear and occur before any
date-range or Slack actions.
- Around line 135-161: The commit-matching regex in weekly-digest.mjs currently
only captures bare or simple-bracketed hashes and therefore misses
semantic-release markdown links; update the regex used in the changeMatch call
so the third/fourth capture group also accepts a markdown link form like
([<hash>](<url>)) in addition to (hash) and ([hash]) and ensure the extracted
commitHash logic (the commitHash = changeMatch[3] || changeMatch[4]; line) still
points to the correct capture for all three forms; adjust the pattern used in
changeMatch and/or the capture ordering so seenCommits deduping and description
cleanup continue to work with these markdown link formats.
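The second finding above can be sketched as follows. This is a hypothetical matcher, not the PR's actual code: the regex, capture layout, and `parseChangeLine` helper are illustrative, showing one way to accept all three commit-reference forms the reviewer lists.

```javascript
// Hypothetical sketch (not the PR's code): match a changelog bullet and
// extract the commit hash whether it appears as a bare hash "(a1b2c3d)",
// a bracketed hash "([a1b2c3d])", or a semantic-release markdown link
// "([a1b2c3d](https://...))".
const CHANGE_RE =
  /^[-*]\s+(.+?)\s*\((?:\[([0-9a-f]{7,40})\](?:\([^)]+\))?|([0-9a-f]{7,40}))\)\s*$/;

function parseChangeLine(line) {
  const m = line.match(CHANGE_RE);
  if (!m) return null;
  // Group 2 covers the bracketed and markdown-link forms; group 3 the bare form.
  return { description: m[1], commitHash: m[2] || m[3] || null };
}
```

With a matcher like this, dedupe logic (e.g. a seenCommits set) keys on the same hash regardless of which link format the release notes used.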

@stalniy (Contributor) commented Feb 17, 2026

I'd like to clarify: who is the main audience? I like the idea, though!

@baktun14 (Author) replied, quoting the question above:

"I'd like to clarify: who is the main audience? I like the idea, though!"

This would be for us internally to start with. It'll help us highlight what has been done across every part of the repo, not just features, and give that work more visibility.

@baktun14 baktun14 requested a review from ygrishajev February 17, 2026 18:11
@baktun14 baktun14 force-pushed the feat/weekly-slack-digest branch 2 times, most recently from 37339a2 to 3434fe3, on February 18, 2026 04:30
@coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@script/weekly-digest.mjs`:
- Around line 459-497: The code builds changeLines from serviceDigest.changes
and pushes it into threadBlocks without size checks, which can exceed Slack's
3,000-character Block Kit limit and cause chat.postMessage to fail; fix by
validating changeLines (derived from serviceDigest.changes) before adding it to
threadBlocks: if its length > 3000 either (A) split the string into multiple
mrkdwn "section" blocks/chunks each <= 3000 chars and push each chunk as a
separate section block, or (B) truncate changeLines to ~2990 chars and append an
ellipsis and a “See full digest” link if available; implement this logic where
changeLines is computed and before the threadBlocks.push call so that
threadBlocks only contains safe-sized section.text values.
- Around line 81-107: In fetchRecentReleases, avoid silently truncating results
when the hard pagination cap is hit: replace the silent break on the page cap
(page > 10) with an explicit error path that indicates the pagination limit was
reached so callers know results may be incomplete; update the loop in
fetchRecentReleases to throw or return a clear error (with context like
REPO_OWNER/REPO_NAME, DAYS_BACK, page size and current page) when the page cap
is reached instead of simply breaking, so the caller can handle/retry or surface
the failure.
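The Slack Block Kit finding above (section text capped at 3,000 characters) can be sketched as a chunking helper. `toSectionBlocks` is a hypothetical name, and the newline-preferring split is one reasonable policy, not the PR's implementation:

```javascript
// Hypothetical sketch (not the PR's code): split over-long mrkdwn text into
// multiple Slack "section" blocks, each within the 3,000-character Block Kit
// limit for section.text, so chat.postMessage does not reject the payload.
const MAX_BLOCK_CHARS = 3000;

function toSectionBlocks(text, maxChars = MAX_BLOCK_CHARS) {
  const blocks = [];
  let rest = text;
  while (rest.length > maxChars) {
    // Prefer breaking on a newline so no change line is cut mid-entry.
    let cut = rest.lastIndexOf("\n", maxChars);
    if (cut <= 0) cut = maxChars; // no usable newline: hard cut at the limit
    blocks.push({ type: "section", text: { type: "mrkdwn", text: rest.slice(0, cut) } });
    rest = rest.slice(cut).replace(/^\n/, "");
  }
  if (rest.length > 0) {
    blocks.push({ type: "section", text: { type: "mrkdwn", text: rest } });
  }
  return blocks;
}
```

Each returned block is safe to push into threadBlocks individually; text at or under the limit comes back as a single block.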

@baktun14 baktun14 force-pushed the feat/weekly-slack-digest branch from 3434fe3 to 93152e6, on February 18, 2026 04:42
@coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@script/weekly-digest.mjs`:
- Around line 296-300: The LLM response parsing assumes
data.choices[0].message.content always exists and is valid JSON; update the
parsing in the function that reads const data = await res.json() so it first
reads text via data.choices?.[0]?.message?.content and throws a clear error
including a truncated JSON.stringify(data) when that path is missing, then run
the existing cleaning (replace /```json\s*|```\s*/g) and wrap
JSON.parse(cleaned) in a try/catch that throws a new Error containing
parseErr.message plus a truncated raw cleaned string to provide context for
malformed JSON.
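The defensive parsing described above can be sketched like this. `parseAnalysis` and its error messages are hypothetical, assembled from the steps the reviewer lists rather than taken from the PR:

```javascript
// Hypothetical sketch (not the PR's code): defensively extract JSON from a
// chat-completions response. Guards the choices path, strips markdown code
// fences, and includes truncated context in errors instead of letting an
// undefined property access or opaque SyntaxError surface.
function parseAnalysis(data) {
  const raw = data?.choices?.[0]?.message?.content;
  if (typeof raw !== "string") {
    throw new Error(`Unexpected response shape: ${JSON.stringify(data).slice(0, 500)}`);
  }
  // Models sometimes wrap JSON output in ```json ... ``` fences; strip them.
  const cleaned = raw.replace(/```json\s*|```\s*/g, "").trim();
  try {
    return JSON.parse(cleaned);
  } catch (parseErr) {
    throw new Error(`Failed to parse analysis JSON: ${parseErr.message}; raw: ${cleaned.slice(0, 500)}`);
  }
}
```

Truncating the raw payload to a few hundred characters keeps the error log useful without dumping an entire LLM response into CI output.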

@codecov bot commented Feb 18, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 50.96%. Comparing base (292820f) to head (349da42).
⚠️ Report is 14 commits behind head on main.
✅ All tests successful. No failed tests found.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2749      +/-   ##
==========================================
- Coverage   51.77%   50.96%   -0.81%     
==========================================
  Files        1045     1010      -35     
  Lines       27447    26615     -832     
  Branches     6340     6245      -95     
==========================================
- Hits        14211    13565     -646     
+ Misses      12809    12632     -177     
+ Partials      427      418       -9     
Flag               Coverage Δ       *Carryforward flag
api                76.67% <ø> (ø)   Carried forward from 292820f
deploy-web         37.19% <ø> (ø)   Carried forward from 292820f
log-collector      ?
notifications      85.56% <ø> (ø)   Carried forward from 292820f
provider-console   81.48% <ø> (ø)   Carried forward from 292820f
provider-proxy     82.41% <ø> (ø)   Carried forward from 292820f
tx-signer          ?

*This pull request uses carry-forward flags.
see 35 files with indirect coverage changes


@baktun14 baktun14 force-pushed the feat/weekly-slack-digest branch from 349da42 to ec08b4b, on February 18, 2026 22:10
@coderabbitai bot left a comment
🧹 Nitpick comments (2)
script/weekly-digest.mjs (2)

376-382: Consider truncating digest.summary for consistency with changeLines.

While the LLM is instructed to produce "2-3 sentences," there's no guarantee it will comply. The changeLines field correctly applies a 3000-character truncation guard (lines 508-512), but digest.summary does not. For defensive consistency, consider applying the same pattern.

🛡️ Optional truncation guard
+    const MAX_BLOCK_CHARS = 3000;
+    const safeSummary =
+      digest.summary.length > MAX_BLOCK_CHARS
+        ? `${digest.summary.slice(0, MAX_BLOCK_CHARS - 15)}…(truncated)`
+        : digest.summary;
+
     {
       type: "section",
       text: {
         type: "mrkdwn",
-        text: digest.summary,
+        text: safeSummary,
       },
     },

274-294: Add timeout to AkashML fetch to prevent indefinite hangs.

This scheduled job makes LLM inference calls that can take 30+ seconds and may hang if the service is degraded. Without a timeout, the fetch could block indefinitely. Use AbortController with a timeout (e.g., 120 seconds) to ensure the job completes or fails gracefully.

♻️ Proposed fix with AbortController timeout
+  const controller = new AbortController();
+  const timeoutId = setTimeout(() => controller.abort(), 120_000);
+
   const res = await fetch("https://api.akashml.com/v1/chat/completions", {
     method: "POST",
     headers: {
       "Content-Type": "application/json",
       Authorization: `Bearer ${AKASHML_API_KEY}`,
     },
     body: JSON.stringify({
       model: "deepseek-ai/DeepSeek-V3.2",
       messages: [
         { role: "system", content: systemPrompt },
         { role: "user", content: userPrompt },
       ],
       temperature: 0.3,
       max_tokens: 4096,
       top_p: 0.9,
     }),
+    signal: controller.signal,
   });
+  clearTimeout(timeoutId);

   if (!res.ok) {
     throw new Error(`AkashML API error: ${res.status} ${await res.text()}`);
   }
