
feat(ci): open a downstream-bump tracking issue on every release#7561

Open
JohnMcLear wants to merge 4 commits into ether:develop from JohnMcLear:chore/release-downstream-tracker

Conversation

@JohnMcLear (Member)

Summary

Adds a single source of truth for every downstream distribution of Etherpad (Docker Hub, Snap, Debian, Home Assistant, Umbrel, TrueCharts, Proxmox, Cloudron, YunoHost, CasaOS, BigBlueButton, Unraid, Sandstorm, Nextcloud Ownpad) at docs/downstreams.yml, plus a workflow that, on every GitHub release publish, opens a single tracking issue with a checklist grouped by how each downstream is kept current:

  • 🚀 Automatic — this repo's CI handles it on tag push (Docker Hub, Snap, Debian .deb)
  • 🧩 Manual bump in-repo — someone edits a file here (HA add-on config.yaml), CI does the rest
  • 🤖 Externally automated — a Renovate-like bot or runtime check on the downstream side
  • ✉️ Needs a PR we send — a maintainer files a bump PR (CasaOS, BigBlueButton)
  • 📨 Needs an issue we file — maintainer-driven with no PR mechanism
  • 🤝 Maintained externally — we have no lever; poke if stale (Cloudron, YunoHost, Ownpad)
  • ⚠️ Known stale — kept for visibility, no action (Sandstorm)
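For illustration, a single entry in docs/downstreams.yml might look roughly like this. The field names (name, update_type, repo, path, notes) and the TrueCharts values are taken from this PR's review discussion; treat the exact shape as indicative, not the committed schema:

```yaml
downstreams:
  - name: TrueCharts
    update_type: external_auto          # one of the seven categories above
    repo: trueforge-org/truecharts
    path: charts/stable/etherpad        # directory inside the downstream repo
    notes: Renovate bumps the image tag; verify the chart picked it up.
```

Adding a new integrator is one such block; the workflow picks it up automatically.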

Why

External catalogs (CasaOS, TrueCharts, BBB's bbb-etherpad.placeholder.sh, Unraid, Sandstorm) accumulate years of drift because nobody remembers to update them at release time. BBB still clones Etherpad 1.9.4; TrueCharts was pinned to 1.8.14 until PR trueforge-org/truecharts#47234; Sandstorm hasn't moved since 2015.

Turning "remember every downstream at release" into a per-release checklist is the lightest-touch fix that scales.

Files

  • docs/downstreams.yml — catalog. Adding a new integrator is one YAML block; the workflow picks it up automatically.
  • .github/workflows/release-downstreams.yml — triggers on release: published or manual workflow_dispatch with a version input (so we can smoke-test before the next real release).
  • .github/scripts/render-downstream-tracker.py — standalone Python renderer, dry-runnable locally:
    python3 .github/scripts/render-downstream-tracker.py \
        docs/downstreams.yml 2.6.1 ether/etherpad-lite
    

Test plan

  • python3 .github/scripts/render-downstream-tracker.py docs/downstreams.yml 2.6.1 ether/etherpad-lite renders valid GitHub-flavoured markdown locally
  • workflow_dispatch run with version: 2.6.1-test opens an issue on this repo with the expected checklist
  • First real release after merge opens the tracking issue automatically

Follow-up

After merge, I'll file a BBB issue directly (their 1.9.4 pin pre-dates this automation and needs a one-off nudge), and drop a comment on #7529 linking to this PR as the structural fix.

Refs #7529

🤖 Generated with Claude Code

Adds a single source of truth for every downstream distribution of
Etherpad (Docker Hub, Snap, Debian, Home Assistant, Umbrel, TrueCharts,
Proxmox, Cloudron, YunoHost, CasaOS, BigBlueButton, Unraid, Sandstorm,
Nextcloud Ownpad) at docs/downstreams.yml, plus a workflow that, on
every GitHub release publish, opens a single tracking issue with a
checklist grouped by how each downstream is kept current:

  🚀 Automatic             — this repo's CI handles it on tag push
  🧩 Manual bump in-repo   — someone edits a file here, CI does the rest
  🤖 Externally automated  — a Renovate-like bot on the downstream side
  ✉️  Needs a PR we send   — a maintainer files a bump PR
  📨 Needs an issue we file
  🤝 Maintained externally — we have no lever; poke if stale
  ⚠️  Known stale          — kept for visibility, no action

Motivation: without this, external catalogs like CasaOS, TrueCharts,
BigBlueButton's `bbb-etherpad.placeholder.sh`, and the Sandstorm market
listing accumulate years of drift. Turning "remember every downstream"
into a per-release checklist is the lightest-touch fix that scales.

The renderer is a standalone Python script so the issue format can be
tweaked and dry-run locally:

  python3 .github/scripts/render-downstream-tracker.py \
      docs/downstreams.yml 2.6.1 ether/etherpad-lite

A `workflow_dispatch` trigger with a manual `version` input is included
so the tracker can be smoke-tested before the next real release.

Refs ether#7529

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@qodo-free-for-open-source-projects

Review Summary by Qodo

Add downstream distribution tracker for release checklist automation

✨ Enhancement


Walkthroughs

Description
• Adds downstream distribution catalog at docs/downstreams.yml tracking 15+ Etherpad packagers
• Creates automated workflow to open per-release tracking issues with categorized checklist
• Implements standalone Python renderer for flexible issue body generation and local testing
• Enables smoke-testing via workflow_dispatch before real releases
Diagram
flowchart LR
  A["Release Published"] -->|triggers| B["release-downstreams.yml"]
  C["workflow_dispatch"] -->|manual test| B
  B -->|reads| D["docs/downstreams.yml"]
  B -->|executes| E["render-downstream-tracker.py"]
  D -->|catalog data| E
  E -->|generates markdown| F["GitHub Issue"]
  F -->|checklist by type| G["Automatic/Manual/External/Stale"]


File Changes

1. docs/downstreams.yml ⚙️ Configuration changes +172/-0

Downstream distribution catalog with metadata

• Defines 15+ downstream distributions (Docker Hub, Snap, Debian, Home Assistant, Umbrel,
 TrueCharts, Proxmox, Cloudron, YunoHost, CasaOS, BigBlueButton, Unraid, Sandstorm, Nextcloud Ownpad,
 TrueNAS)
• Categorizes each by update mechanism: automatic, manual_ci, external_auto, external_pr,
 external_issue, external_maintainer, stale
• Includes repository links, file paths, workflow references, and maintenance notes for each
 downstream
• Serves as single source of truth for release tracking workflow



2. .github/workflows/release-downstreams.yml ✨ Enhancement +71/-0

Workflow to open downstream tracking issues

• Triggers on GitHub release publish or manual workflow_dispatch with optional version input
• Resolves version from release tag or manual input, strips leading 'v' prefix
• Installs PyYAML dependency and invokes Python renderer script
• Creates GitHub issue with title, labels (release, downstream), and rendered checklist body
• Requires contents: read and issues: write permissions



3. .github/scripts/render-downstream-tracker.py ✨ Enhancement +109/-0

Python renderer for downstream checklist markdown

• Standalone Python script consuming docs/downstreams.yml catalog and version string
• Groups downstreams by update type with emoji-prefixed headings (🚀 Automatic, 🧩 Manual, etc.)
• Generates GitHub-flavored markdown checklist with deep links to downstream repos and workflow
 files
• Supports local dry-run execution for format testing without CI re-runs
• Handles optional notes indentation for proper GitHub list item rendering




@qodo-free-for-open-source-projects

qodo-free-for-open-source-projects Bot commented Apr 19, 2026

Code Review by Qodo

🐞 Bugs (3) 📘 Rule violations (3) 📎 Requirement gaps (0)



Action required

1. JQ string injection breaks dedupe 🐞 Bug ☼ Reliability ⭐ New
Description
The workflow builds a jq filter by interpolating $TITLE directly into the jq program, so a
version/title containing a double-quote or backslash can break jq parsing and make the workflow fail
before creating (or deduping) the tracking issue.
Code

.github/workflows/release-downstreams.yml[R85-93]

+          EXISTING=$(gh issue list \
+            --repo "$GITHUB_REPOSITORY" \
+            --state all \
+            --label release \
+            --label downstream \
+            --search "\"$TITLE\" in:title" \
+            --json number,title \
+            --jq ".[] | select(.title == \"$TITLE\") | .number" \
+            | head -n1)
Evidence
The dedupe logic passes a jq program that contains an unescaped "$TITLE" string; because TITLE is
derived from VERSION, a crafted workflow_dispatch version (or unexpected tag) containing
quotes/backslashes will produce invalid jq syntax and abort the step.

.github/workflows/release-downstreams.yml[81-93]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
The workflow interpolates `$TITLE` into the jq program string passed to `gh issue list --jq`, which can break parsing if `$VERSION` contains characters like `"` or `\`.

### Issue Context
`TITLE` is derived from `$VERSION` (workflow_dispatch input or release tag). Even if unusual in real tags, dispatch testing can easily include these characters and will cause the step to fail.

### Fix Focus Areas
- .github/workflows/release-downstreams.yml[81-93]

### Suggested fix
Avoid embedding `$TITLE` inside the jq program. Instead pipe JSON to `jq` and pass the title via `--arg`, e.g.:

```bash
EXISTING=$(gh issue list \
 --repo "$GITHUB_REPOSITORY" \
 --state all \
 --label release \
 --label downstream \
 --search "$TITLE in:title" \
 --json number,title \
 | jq -r --arg title "$TITLE" '.[] | select(.title == $title) | .number' \
 | head -n1)
```

(Or, alternatively, properly escape `$TITLE` before embedding it into the jq string.)



2. 4-space indent in smoke test 📘 Rule violation ⚙ Maintainability
Description
The newly added smoke test script uses 4-space indentation, violating the 2-space indentation
requirement. This introduces inconsistent formatting in committed source files.
Code

.github/scripts/test_render_downstream_tracker.py[R24-37]

+def write(tmpdir: Path, content: str) -> Path:
+    p = tmpdir / "catalog.yml"
+    p.write_text(textwrap.dedent(content))
+    return p
+
+
+def expect_value_error(tmpdir: Path, content: str, needle: str) -> None:
+    p = write(tmpdir, content)
+    try:
+        mod.render(p, "1.0", "ether/etherpad")
+    except ValueError as e:
+        assert needle in str(e), f"expected {needle!r} in {e!r}"
+        return
+    raise AssertionError(f"expected ValueError containing {needle!r}")
Evidence
PR Compliance ID 10 requires 2-space indentation with spaces only. The added test script’s function
bodies are indented with 4 spaces (e.g., within write() / expect_value_error()), which violates
this requirement.

.github/scripts/test_render_downstream_tracker.py[24-37]
Best Practice: Repository guidelines

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The added Python smoke test uses 4-space indentation, but the repository requires 2-space indentation.
## Issue Context
Keeping indentation consistent avoids style drift and potential formatter/linter conflicts.
## Fix Focus Areas
- .github/scripts/test_render_downstream_tracker.py[24-82]



3. Unknown update_type omitted 🐞 Bug ≡ Correctness
Description
render() only renders entries whose update_type matches one of the hard-coded GROUPS; a typo
or new update_type in docs/downstreams.yml will be silently dropped from the tracking issue.
This breaks the “single source of truth” behavior by making missing checklist items undetectable in
CI.
Code

.github/scripts/render-downstream-tracker.py[R47-87]

+    for idx, item in enumerate(items):
+        if not isinstance(item, dict):
+            raise ValueError(
+                f"{catalog_path}: downstreams[{idx}] must be a mapping, "
+                f"got {type(item).__name__}"
+            )
+        if "name" not in item or "update_type" not in item:
+            raise ValueError(
+                f"{catalog_path}: downstreams[{idx}] missing required "
+                f"`name` and/or `update_type`"
+            )
+        if "path" in item and "file" in item:
+            raise ValueError(
+                f"{catalog_path}: downstreams[{idx}] ({item['name']}) "
+                f"sets both `path` and `file`; use `file` for files and "
+                f"`path` for directories, not both"
+            )
+
+    out: list[str] = []
+    out.append(f"## Downstream distribution checklist for `{version}`\n")
+    out.append(
+        "Auto-opened by `.github/workflows/release-downstreams.yml` on "
+        "release publish.\n"
+    )
+    out.append(
+        f"Source of truth: [`docs/downstreams.yml`](https://github.com/"
+        f"{repo}/blob/develop/docs/downstreams.yml).\n"
+    )
+    out.append(
+        "Tick items as you verify them. Anything still unchecked a week "
+        "after release is a candidate for follow-up.\n"
+    )
+
+    for update_type, heading in GROUPS:
+        matches = [i for i in items if i.get("update_type") == update_type]
+        if not matches:
+            continue
+        out.append(f"\n### {heading}\n")
+        for item in matches:
+            out.append(_render_item(item, repo))
+
Evidence
The YAML validation checks that update_type exists but never checks that it’s one of the allowed
values. Rendering then iterates only over GROUPS and filters items by exact match, so any
unrecognized value produces no output for that downstream entry (no error, no warning). The catalog
itself documents a finite set of valid update_type values, so failing to validate membership is a
concrete silent-failure mode.

.github/scripts/render-downstream-tracker.py[47-57]
.github/scripts/render-downstream-tracker.py[80-87]
docs/downstreams.yml[12-35]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`render()` validates that each downstream item has an `update_type`, but it does not validate that the value is one of the supported categories. Because rendering only iterates over the hard-coded `GROUPS`, any typo/new value will be silently omitted from the generated checklist.
### Issue Context
This workflow is intended to make `docs/downstreams.yml` a single source of truth; silent omission defeats that by producing an incomplete tracking issue without a CI failure.
### Fix Focus Areas
- .github/scripts/render-downstream-tracker.py[22-30]
- .github/scripts/render-downstream-tracker.py[47-63]
- .github/scripts/render-downstream-tracker.py[80-87]
### Suggested change
- Build an `allowed_update_types` set from `GROUPS`.
- During the per-item validation loop, raise a `ValueError` if `item['update_type']` is not in the allowed set (include the bad value and the allowed values in the message).
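The suggested membership check could be sketched as follows. This assumes GROUPS is the renderer's hard-coded (update_type, heading) list; the update_type keys come from the review and the headings from the PR description, so treat both as approximate:

```python
# Sketch of the suggested fix: reject unknown update_type values up front
# instead of silently dropping the entry from the rendered checklist.
# (2-space indentation, per this repository's style guideline.)
GROUPS = [
  ("automatic", "🚀 Automatic"),
  ("manual_ci", "🧩 Manual bump in-repo"),
  ("external_auto", "🤖 Externally automated"),
  ("external_pr", "✉️ Needs a PR we send"),
  ("external_issue", "📨 Needs an issue we file"),
  ("external_maintainer", "🤝 Maintained externally"),
  ("stale", "⚠️ Known stale"),
]

# Built once, outside the per-item validation loop.
ALLOWED_UPDATE_TYPES = {update_type for update_type, _ in GROUPS}

def check_update_type(idx, item):
  value = item["update_type"]
  if value not in ALLOWED_UPDATE_TYPES:
    raise ValueError(
      f"downstreams[{idx}] ({item['name']}) has unknown update_type "
      f"{value!r}; expected one of {sorted(ALLOWED_UPDATE_TYPES)}"
    )
```

With this in place, a typo like `update_type: external-pr` (dash instead of underscore) fails CI with an actionable message rather than producing an incomplete tracking issue.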



4. 4-space indent in script 📘 Rule violation ⚙ Maintainability
Description
The new Python script uses 4-space indentation, but the compliance checklist requires 2-space
indentation (and no tabs) for code changes. This introduces inconsistent formatting against the
mandated whitespace standard.
Code

.github/scripts/render-downstream-tracker.py[R33-36]

+def render(catalog_path: Path, version: str, repo: str) -> str:
+    with catalog_path.open() as f:
+        catalog = yaml.safe_load(f)
+    items = catalog.get("downstreams", [])
Evidence
PR Compliance ID 8 requires 2-space indentation. In the added Python code, function bodies are
indented by 4 spaces (e.g., the render() function).

.github/scripts/render-downstream-tracker.py[33-36]
Best Practice: Repository guidelines

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The added Python script uses 4-space indentation, but the repository compliance checklist requires 2-space indentation (and no tabs) for code changes.
## Issue Context
This affects the newly added `.github/scripts/render-downstream-tracker.py` functions.
## Fix Focus Areas
- .github/scripts/render-downstream-tracker.py[33-109]



5. inputs context unavailable 🐞 Bug ≡ Correctness
Description
The workflow reads ${{ inputs.version }} even when triggered by release: published, where the
inputs context is not guaranteed to exist, risking expression evaluation failure or an empty
VERSION and a failed run.
Code

.github/workflows/release-downstreams.yml[R35-47]

+      - name: Resolve version
+        id: v
+        env:
+          TAG: ${{ github.event.release.tag_name }}
+          INPUT: ${{ inputs.version }}
+        run: |
+          VERSION="${TAG:-$INPUT}"
+          VERSION="${VERSION#v}"
+          if [ -z "${VERSION}" ]; then
+            echo "Could not determine version." >&2
+            exit 1
+          fi
+          echo "version=${VERSION}" >> "$GITHUB_OUTPUT"
Evidence
The workflow always sets INPUT: ${{ inputs.version }} and then uses it to compute VERSION, but
the job is also triggered by release: published (not just workflow_dispatch). On non-dispatch
events the inputs context is not guaranteed, so this can break the workflow or force it into the
"Could not determine version" failure path.

.github/workflows/release-downstreams.yml[15-23]
.github/workflows/release-downstreams.yml[35-47]
Best Practice: GitHub Actions docs

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`.github/workflows/release-downstreams.yml` references `${{ inputs.version }}` even for `release` events. Depending on GitHub Actions evaluation rules, this can fail evaluation or yield an empty string and break version resolution.
### Issue Context
The job is triggered by both `release.published` and `workflow_dispatch`. `inputs.version` should only be read for dispatch runs.
### Fix Focus Areas
- .github/workflows/release-downstreams.yml[15-47]
### Suggested fix
- Set `INPUT` from `github.event.inputs.version` (dispatch-only payload) and/or gate it behind an event-name check, e.g.:
- `INPUT: ${{ github.event_name == 'workflow_dispatch' && inputs.version || '' }}`
- or avoid `inputs` entirely: `INPUT: ${{ github.event.inputs.version }}`
- Optionally improve the error message to include `github.event_name` when version cannot be determined.




Remediation recommended

6. Renderer lacks scalar type validation 🐞 Bug ☼ Reliability ⭐ New
Description
The renderer assumes YAML fields like update_type and notes are strings; if they are non-scalars
(e.g., list/null), it can raise TypeError/AttributeError and emit a traceback instead of the
intended single-line CI-friendly error.
Code

.github/scripts/render-downstream-tracker.py[R58-68]

+        # Reject typo'd update_type values up front. Without this, an entry
+        # with `update_type: external-pr` (dash instead of underscore) is
+        # silently dropped from the rendered checklist because render() only
+        # iterates the GROUPS allowlist below.
+        allowed = {g[0] for g in GROUPS}
+        if item["update_type"] not in allowed:
+            raise ValueError(
+                f"{catalog_path}: downstreams[{idx}] ({item['name']}) "
+                f"has unknown update_type {item['update_type']!r}; "
+                f"expected one of {sorted(allowed)}"
+            )
Evidence
render() validates presence of keys but not that update_type is a string; membership testing can
raise TypeError for unhashable types. _render_item() calls .strip() on notes without verifying
it is a string. main() only catches ValueError/OSError/yaml.YAMLError, so these type errors will
bypass the clean error formatting and produce a traceback in CI.

.github/scripts/render-downstream-tracker.py[47-68]
.github/scripts/render-downstream-tracker.py[111-114]
.github/scripts/render-downstream-tracker.py[148-156]
.github/scripts/test_render_downstream_tracker.py[60-62]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`render-downstream-tracker.py` assumes certain YAML values are strings. If a downstream entry has `update_type` as a list/dict (unhashable) or `notes` as null/non-string, the script can crash with a traceback instead of returning a clean, actionable one-line error.

### Issue Context
The test script explicitly documents that validation errors should be raised as `ValueError` so `main()` can print a single CI-friendly error line.

### Fix Focus Areas
- .github/scripts/render-downstream-tracker.py[47-74]
- .github/scripts/render-downstream-tracker.py[102-138]

### Suggested fix
Add explicit type checks during validation in `render()` (preferred) so failures become `ValueError`, e.g.:
- Ensure `name` is a non-empty `str`
- Ensure `update_type` is a `str`
- If `notes` exists: ensure it is a `str` (or treat null as empty string)
- Ensure `repo`, `file`, `path`, `workflow` (if present) are strings

Optionally, compute `allowed = {g[0] for g in GROUPS}` once outside the item loop.
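A minimal sketch of those type checks, assuming the field names used by docs/downstreams.yml (name, update_type, repo, file, path, workflow, notes); this is illustrative, not the PR's actual implementation:

```python
def check_scalar_fields(idx, item):
  # Raise ValueError (not TypeError/AttributeError) for non-string YAML
  # values, so main()'s single-line CI error handling still applies.
  if not isinstance(item.get("name"), str) or not item["name"]:
    raise ValueError(f"downstreams[{idx}]: `name` must be a non-empty string")
  if not isinstance(item.get("update_type"), str):
    raise ValueError(
      f"downstreams[{idx}] ({item['name']}): `update_type` must be a string, "
      f"got {type(item.get('update_type')).__name__}"
    )
  for key in ("repo", "file", "path", "workflow", "notes"):
    if key in item and not isinstance(item[key], str):
      # Treating an explicit `notes:` null as an error keeps the rule
      # simple; alternatively null could be coerced to "".
      raise ValueError(
        f"downstreams[{idx}] ({item['name']}): `{key}` must be a string, "
        f"got {type(item[key]).__name__}"
      )
```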



7. Traceback on YAML read/parse 🐞 Bug ☼ Reliability
Description
main() only catches ValueError, so YAML read/parse failures (or file read errors) will produce a
full Python traceback, despite the script comment stating it should fail CI with a single actionable
line. This makes workflow failures noisier and harder to triage when the catalog is edited
incorrectly.
Code

.github/scripts/render-downstream-tracker.py[R34-143]

+    with catalog_path.open() as f:
+        catalog = yaml.safe_load(f)
+    if not isinstance(catalog, dict):
+        raise ValueError(
+            f"{catalog_path}: top-level must be a mapping, "
+            f"got {type(catalog).__name__}"
+        )
+    items = catalog.get("downstreams", [])
+    if not isinstance(items, list):
+        raise ValueError(
+            f"{catalog_path}: `downstreams` must be a list, "
+            f"got {type(items).__name__}"
+        )
+    for idx, item in enumerate(items):
+        if not isinstance(item, dict):
+            raise ValueError(
+                f"{catalog_path}: downstreams[{idx}] must be a mapping, "
+                f"got {type(item).__name__}"
+            )
+        if "name" not in item or "update_type" not in item:
+            raise ValueError(
+                f"{catalog_path}: downstreams[{idx}] missing required "
+                f"`name` and/or `update_type`"
+            )
+        if "path" in item and "file" in item:
+            raise ValueError(
+                f"{catalog_path}: downstreams[{idx}] ({item['name']}) "
+                f"sets both `path` and `file`; use `file` for files and "
+                f"`path` for directories, not both"
+            )
+
+    out: list[str] = []
+    out.append(f"## Downstream distribution checklist for `{version}`\n")
+    out.append(
+        "Auto-opened by `.github/workflows/release-downstreams.yml` on "
+        "release publish.\n"
+    )
+    out.append(
+        f"Source of truth: [`docs/downstreams.yml`](https://github.com/"
+        f"{repo}/blob/develop/docs/downstreams.yml).\n"
+    )
+    out.append(
+        "Tick items as you verify them. Anything still unchecked a week "
+        "after release is a candidate for follow-up.\n"
+    )
+
+    for update_type, heading in GROUPS:
+        matches = [i for i in items if i.get("update_type") == update_type]
+        if not matches:
+            continue
+        out.append(f"\n### {heading}\n")
+        for item in matches:
+            out.append(_render_item(item, repo))
+
+    return "\n".join(out)
+
+
+def _render_item(item: dict, repo: str) -> str:
+    name = item["name"]
+    target_repo = item.get("repo")
+    # `file:` deep-links to a single file (GitHub /blob/...).
+    # `path:` deep-links to a directory (GitHub /tree/...).
+    # `/blob/<dir>` and `/tree/<file>` both 404 on GitHub, so the two
+    # must be distinguished. The renderer trusts the YAML key — see
+    # render() for the both-set guard.
+    file_path = item.get("file")
+    dir_path = item.get("path")
+    workflow = item.get("workflow")
+    notes = item.get("notes", "").strip()
+
+    # Primary link: deep-link to the file/dir if we know one, otherwise
+    # to the repo root. `HEAD` avoids pinning to a stale default-branch
+    # name (`main` vs `master` vs `develop`).
+    link = ""
+    if target_repo:
+        base = f"https://github.com/{target_repo}"
+        if file_path:
+            link = f" — [`{target_repo}/{file_path}`]({base}/blob/HEAD/{file_path})"
+        elif dir_path:
+            link = f" — [`{target_repo}/{dir_path}`]({base}/tree/HEAD/{dir_path})"
+        else:
+            link = f" — [`{target_repo}`]({base})"
+    if workflow:
+        workflow_url = f"https://github.com/{repo}/blob/develop/{workflow}"
+        link += f" · [workflow]({workflow_url})"
+
+    lines = [f"- [ ] **{name}**{link}"]
+    if notes:
+        # Indent notes under the checkbox so GitHub renders them as part
+        # of the list item rather than a sibling paragraph.
+        for note_line in notes.splitlines():
+            lines.append(f"      {note_line}")
+    lines.append("")
+    return "\n".join(lines)
+
+
+def main() -> int:
+    if len(sys.argv) != 4:
+        print(__doc__, file=sys.stderr)
+        return 2
+    catalog_path = Path(sys.argv[1])
+    version = sys.argv[2]
+    repo = sys.argv[3]
+    try:
+        body = render(catalog_path, version, repo)
+    except ValueError as e:
+        # Surface validation errors as a clean CI failure with a single
+        # actionable line, instead of a Python traceback.
+        print(f"render-downstream-tracker: {e}", file=sys.stderr)
+        return 1
Evidence
The script’s stated intent is to avoid tracebacks and surface validation errors as a single
CI-friendly line, but the exception handling only covers ValueError. The YAML load and file read
occur outside that exception type guarantee, so other failures will bypass the clean error path and
print a traceback.

.github/scripts/render-downstream-tracker.py[34-36]
.github/scripts/render-downstream-tracker.py[137-143]
.github/scripts/render-downstream-tracker.py[140-142]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The renderer’s `main()` is designed to print a single CI-friendly error line, but it only catches `ValueError`. Failures from reading or parsing the YAML can still emit a full traceback.
### Issue Context
This script is run inside a release workflow; when the catalog is edited, parse/read errors are a realistic failure mode and should be surfaced cleanly.
### Fix Focus Areas
- .github/scripts/render-downstream-tracker.py[34-36]
- .github/scripts/render-downstream-tracker.py[137-143]
### Suggested change
- Broaden the exception handling in `main()` to also catch file read and YAML load/parse exceptions, and print the same one-line `render-downstream-tracker: ...` message.
- Optionally include the exception type in the message to aid debugging, while still avoiding a full traceback.
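The broadened handling could look roughly like this. The `render` stub here is a stand-in for the PR's renderer (the real one also parses YAML, where `yaml.YAMLError` would join the tuple); only the exception-handling shape is the point:

```python
import sys
from pathlib import Path

def render(catalog_path, version, repo):
  # Stand-in for the PR's render(): just reads the catalog so the OSError
  # path below is reachable. The real function also parses YAML and
  # raises ValueError on validation failures.
  catalog_path.read_text()
  return f"## Downstream distribution checklist for `{version}`"

def safe_main(catalog_path, version, repo):
  # Sketch of the suggested hardening: catch read errors (and, in the
  # real script, yaml.YAMLError) alongside ValueError so every expected
  # failure prints one actionable line instead of a traceback.
  try:
    body = render(Path(catalog_path), version, repo)
  except (ValueError, OSError) as e:  # + yaml.YAMLError in the real script
    print(f"render-downstream-tracker: {type(e).__name__}: {e}", file=sys.stderr)
    return 1
  print(body)
  return 0
```

Including the exception type name in the message aids debugging while still keeping the output to one line.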



8. Workflow lacks feature flag 📘 Rule violation ☼ Reliability
Description
The new workflow runs automatically on release: published without a disable/enable mechanism,
which conflicts with the requirement that new features be behind a feature flag and disabled by
default. This can cause unintended behavior (auto-creating issues) immediately after merge.
Code

.github/workflows/release-downstreams.yml[R15-18]

+on:
+  release:
+    types: [published]
+  workflow_dispatch:
Evidence
PR Compliance ID 6 requires new features to be behind a feature flag and disabled by default. The
workflow is configured to trigger automatically on published releases, with no gating flag or opt-in
control shown in the added configuration.

.github/workflows/release-downstreams.yml[15-18]
Best Practice: Repository guidelines

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The workflow auto-runs on `release: published` and creates issues without an opt-in/feature-flag style control.
## Issue Context
Compliance requires new features to be behind a flag and disabled by default; for CI, this can be implemented as an explicit repository variable/secret check, an `if:` condition, or by limiting automatic triggers.
## Fix Focus Areas
- .github/workflows/release-downstreams.yml[15-18]
- .github/workflows/release-downstreams.yml[28-71]



9. Wrong GitHub path URLs 🐞 Bug ≡ Correctness
Description
The renderer always deep-links path/file using .../blob/HEAD/..., which breaks directory links
(they require .../tree/...) and will produce 404s for catalog entries that point to directories.
Code

.github/scripts/render-downstream-tracker.py[R64-81]

+def _render_item(item: dict, repo: str) -> str:
+    name = item["name"]
+    target_repo = item.get("repo")
+    # `path` and `file` are aliases that point at a specific file/dir
+    # inside the downstream repo (or inside this repo for `manual_ci`).
+    path = item.get("path") or item.get("file")
+    workflow = item.get("workflow")
+    notes = item.get("notes", "").strip()
+
+    # Primary link: deep-link to the file/dir if we know one, otherwise
+    # to the repo root. `HEAD` avoids pinning to a stale default-branch
+    # name (`main` vs `master` vs `develop`).
+    link = ""
+    if target_repo:
+        base = f"https://github.com/{target_repo}"
+        if path:
+            link = f" — [`{target_repo}/{path}`]({base}/blob/HEAD/{path})"
+        else:
Evidence
_render_item() explicitly treats path and file as aliases for a file/dir but always formats
the URL as /blob/HEAD/{path}. The catalog includes values that are clearly directory paths (e.g.
charts/stable/etherpad), so those links will be incorrect in the generated tracking issue.

.github/scripts/render-downstream-tracker.py[67-81]
docs/downstreams.yml[76-92]
.github/scripts/render-downstream-tracker.py[67-69]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The markdown renderer uses GitHub `/blob/` links for both files and directories. Directory targets should use `/tree/`, otherwise the issue body contains broken links.
### Issue Context
The catalog currently treats `path` and `file` as interchangeable. Several entries use `path` for directories (e.g. `charts/stable/etherpad`).
### Fix Focus Areas
- .github/scripts/render-downstream-tracker.py[64-86]
- docs/downstreams.yml[36-172]
### Suggested fix
- Stop treating `path` and `file` as aliases:
- Interpret `file:` as a file target → use `{base}/blob/HEAD/{file}`
- Interpret `path:` as a directory target → use `{base}/tree/HEAD/{path}`
- Update `docs/downstreams.yml` so file targets use `file:` (e.g., entries currently using `path:` for a single file like `ct/etherpad.sh` and `bbb-etherpad.placeholder.sh`).
- Add a small validation in the renderer that errors if both `path` and `file` are set.
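A small sketch of the suggested link builder, with `file`/`path` distinguished and the both-set case rejected (the function name and signature are illustrative; the example values come from this review):

```python
def deep_link(target_repo, file=None, path=None):
  # `file:` targets get /blob/, `path:` (directory) targets get /tree/,
  # since GitHub 404s on the mismatched form. HEAD avoids pinning to a
  # default-branch name (`main` vs `master` vs `develop`).
  if file and path:
    raise ValueError("set `file` or `path`, not both")
  base = f"https://github.com/{target_repo}"
  if file:
    return f"{base}/blob/HEAD/{file}"
  if path:
    return f"{base}/tree/HEAD/{path}"
  return base
```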

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


10. Duplicate issue creation 🐞 Bug ☼ Reliability
Description
The workflow always runs gh issue create with no deduplication, so reruns and repeated
workflow_dispatch executions for the same version will create duplicate tracking issues.
Code

.github/workflows/release-downstreams.yml[R63-71]

+      - name: Open tracking issue
+        env:
+          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+        run: |
+          gh issue create \
+            --repo "$GITHUB_REPOSITORY" \
+            --title "Downstream bumps for ${{ steps.v.outputs.version }}" \
+            --label "release,downstream" \
+            --body-file '${{ steps.render.outputs.body-path }}'
Evidence
There is no step to search for an existing issue by title/label/version before creating a new one,
and the final step unconditionally creates a new issue each run.

.github/workflows/release-downstreams.yml[63-71]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Re-running the workflow (or manually dispatching it multiple times with the same version) will open multiple tracking issues for the same release.
### Issue Context
This workflow is intended to create "a single tracking issue on every release".
### Fix Focus Areas
- .github/workflows/release-downstreams.yml[63-71]
### Suggested fix
- Before `gh issue create`, query for an existing issue (open or recent) matching the version and labels, e.g.:
- `gh issue list --label release --label downstream --search "\"Downstream bumps for ${VERSION}\" in:title" --json number --jq '.[0].number'`
- If found, either:
- comment on the existing issue, or
- skip creation and output a message.
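The dedupe step reduces to an exact-title match over the JSON that `gh issue list --json number,title` emits — the `in:title` search is fuzzy, so an exact comparison is still needed afterwards. A hypothetical sketch of that comparison in Python:

```python
import json


def find_existing_issue(issues_json: str, title: str):
    """Return the number of the first issue whose title matches exactly,
    or None if no duplicate exists and `gh issue create` should run."""
    for issue in json.loads(issues_json):
        if issue.get("title") == title:
            return issue["number"]
    return None
```

In the workflow the same check can stay in shell; the point is only that the final match must be exact, since `--search '"..." in:title'` also returns supersets such as pre-release titles.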




11. Missing YAML type validation 🐞 Bug ☼ Reliability
Description
If docs/downstreams.yml is empty or not a mapping, the renderer will crash (catalog.get(...))
and fail the workflow with a Python traceback instead of a clear validation error.
Code

.github/scripts/render-downstream-tracker.py[R33-37]

+def render(catalog_path: Path, version: str, repo: str) -> str:
+    with catalog_path.open() as f:
+        catalog = yaml.safe_load(f)
+    items = catalog.get("downstreams", [])
+
Evidence
yaml.safe_load() may return None or a non-dict type, but the code assumes a dict and calls
.get(). This can fail the release workflow if the YAML is accidentally malformed.

.github/scripts/render-downstream-tracker.py[33-37]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The renderer assumes the YAML top-level is a dict with a `downstreams` list of dict items. If the YAML is empty or malformed, it throws an unhelpful exception.
### Issue Context
This script is run in CI during releases; failures should be actionable.
### Fix Focus Areas
- .github/scripts/render-downstream-tracker.py[33-60]
### Suggested fix
- Add explicit type checks and raise a clear `ValueError`, e.g.:
- ensure `catalog` is a `dict`
- ensure `items` is a `list`
- ensure each item is a `dict` and has required keys like `name` and `update_type`
- In `main()`, catch `ValueError` and print a concise error to stderr before returning non-zero.
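A sketch of those guards, operating on the already-parsed object (i.e. the return value of `yaml.safe_load`, which is `None` for an empty file); the function name and messages are illustrative:

```python
def validate_catalog(catalog) -> list:
    """Return the `downstreams` list, or raise ValueError with an
    actionable message instead of letting `.get()` blow up later."""
    if not isinstance(catalog, dict):
        raise ValueError(f"top-level must be a mapping, got {type(catalog).__name__}")
    items = catalog.get("downstreams", [])
    if not isinstance(items, list):
        raise ValueError(f"`downstreams` must be a list, got {type(items).__name__}")
    for idx, item in enumerate(items):
        if not isinstance(item, dict):
            raise ValueError(f"downstreams[{idx}] must be a mapping")
        if "name" not in item or "update_type" not in item:
            raise ValueError(f"downstreams[{idx}] missing `name` and/or `update_type`")
    return items
```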




Previous review results

Review updated until commit 049a4a3

Results up to commit N/A


🐞 Bugs (1) 📘 Rule violations (3) 📎 Requirement gaps (0)


Action required
1. 4-space indent in smoke test 📘 Rule violation ⚙ Maintainability
Description
The newly added smoke test script uses 4-space indentation, violating the 2-space indentation
requirement. This introduces inconsistent formatting in committed source files.
Code

.github/scripts/test_render_downstream_tracker.py[R24-37]

+def write(tmpdir: Path, content: str) -> Path:
+    p = tmpdir / "catalog.yml"
+    p.write_text(textwrap.dedent(content))
+    return p
+
+
+def expect_value_error(tmpdir: Path, content: str, needle: str) -> None:
+    p = write(tmpdir, content)
+    try:
+        mod.render(p, "1.0", "ether/etherpad")
+    except ValueError as e:
+        assert needle in str(e), f"expected {needle!r} in {e!r}"
+        return
+    raise AssertionError(f"expected ValueError containing {needle!r}")
Evidence
PR Compliance ID 10 requires 2-space indentation with spaces only. The added test script’s function
bodies are indented with 4 spaces (e.g., within write() / expect_value_error()), which violates
this requirement.

.github/scripts/test_render_downstream_tracker.py[24-37]
Best Practice: Repository guidelines

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The added Python smoke test uses 4-space indentation, but the repository requires 2-space indentation.
## Issue Context
Keeping indentation consistent avoids style drift and potential formatter/linter conflicts.
## Fix Focus Areas
- .github/scripts/test_render_downstream_tracker.py[24-82]



2. Unknown update_type omitted 🐞 Bug ≡ Correctness
Description
render() only renders entries whose update_type matches one of the hard-coded GROUPS; a typo
or new update_type in docs/downstreams.yml will be silently dropped from the tracking issue.
This breaks the “single source of truth” behavior by making missing checklist items undetectable in
CI.
Code

.github/scripts/render-downstream-tracker.py[R47-87]

+    for idx, item in enumerate(items):
+        if not isinstance(item, dict):
+            raise ValueError(
+                f"{catalog_path}: downstreams[{idx}] must be a mapping, "
+                f"got {type(item).__name__}"
+            )
+        if "name" not in item or "update_type" not in item:
+            raise ValueError(
+                f"{catalog_path}: downstreams[{idx}] missing required "
+                f"`name` and/or `update_type`"
+            )
+        if "path" in item and "file" in item:
+            raise ValueError(
+                f"{catalog_path}: downstreams[{idx}] ({item['name']}) "
+                f"sets both `path` and `file`; use `file` for files and "
+                f"`path` for directories, not both"
+            )
+
+    out: list[str] = []
+    out.append(f"## Downstream distribution checklist for `{version}`\n")
+    out.append(
+        "Auto-opened by `.github/workflows/release-downstreams.yml` on "
+        "release publish.\n"
+    )
+    out.append(
+        f"Source of truth: [`docs/downstreams.yml`](https://github.com/"
+        f"{repo}/blob/develop/docs/downstreams.yml).\n"
+    )
+    out.append(
+        "Tick items as you verify them. Anything still unchecked a week "
+        "after release is a candidate for follow-up.\n"
+    )
+
+    for update_type, heading in GROUPS:
+        matches = [i for i in items if i.get("update_type") == update_type]
+        if not matches:
+            continue
+        out.append(f"\n### {heading}\n")
+        for item in matches:
+            out.append(_render_item(item, repo))
+
Evidence
The YAML validation checks that update_type exists but never checks that it’s one of the allowed
values. Rendering then iterates only over GROUPS and filters items by exact match, so any
unrecognized value produces no output for that downstream entry (no error, no warning). The catalog
itself documents a finite set of valid update_type values, so failing to validate membership is a
concrete silent-failure mode.

.github/scripts/render-downstream-tracker.py[47-57]
.github/scripts/render-downstream-tracker.py[80-87]
docs/downstreams.yml[12-35]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`render()` validates that each downstream item has an `update_type`, but it does not validate that the value is one of the supported categories. Because rendering only iterates over the hard-coded `GROUPS`, any typo/new value will be silently omitted from the generated checklist.
### Issue Context
This workflow is intended to make `docs/downstreams.yml` a single source of truth; silent omission defeats that by producing an incomplete tracking issue without a CI failure.
### Fix Focus Areas
- .github/scripts/render-downstream-tracker.py[22-30]
- .github/scripts/render-downstream-tracker.py[47-63]
- .github/scripts/render-downstream-tracker.py[80-87]
### Suggested change
- Build an `allowed_update_types` set from `GROUPS`.
- During the per-item validation loop, raise a `ValueError` if `item['update_type']` is not in the allowed set (include the bad value and the allowed values in the message).
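A minimal sketch of that allowlist check; the group keys here are hypothetical stand-ins for whatever the renderer's `GROUPS` actually contains:

```python
GROUPS = [  # (update_type, heading) — illustrative keys only
    ("auto", "🚀 Automatic"),
    ("manual_ci", "🧩 Manual bump in-repo"),
    ("external_pr", "✉️ Needs a PR we send"),
]
ALLOWED_UPDATE_TYPES = {key for key, _ in GROUPS}


def check_update_types(items: list) -> None:
    """Fail loudly on unknown categories instead of silently dropping them."""
    for idx, item in enumerate(items):
        value = item.get("update_type")
        if value not in ALLOWED_UPDATE_TYPES:
            raise ValueError(
                f"downstreams[{idx}] ({item.get('name')}): unknown "
                f"update_type {value!r}; allowed: {sorted(ALLOWED_UPDATE_TYPES)}"
            )
```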



3. 4-space indent in script 📘 Rule violation ⚙ Maintainability
Description
The new Python script uses 4-space indentation, but the compliance checklist requires 2-space
indentation (and no tabs) for code changes. This introduces inconsistent formatting against the
mandated whitespace standard.
Code

.github/scripts/render-downstream-tracker.py[R33-36]

+def render(catalog_path: Path, version: str, repo: str) -> str:
+    with catalog_path.open() as f:
+        catalog = yaml.safe_load(f)
+    items = catalog.get("downstreams", [])
Evidence
PR Compliance ID 8 requires 2-space indentation. In the added Python code, function bodies are
indented by 4 spaces (e.g., the render() function).

.github/scripts/render-downstream-tracker.py[33-36]
Best Practice: Repository guidelines

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The added Python script uses 4-space indentation, but the repository compliance checklist requires 2-space indentation (and no tabs) for code changes.
## Issue Context
This affects the newly added `.github/scripts/render-downstream-tracker.py` functions.
## Fix Focus Areas
- .github/scripts/render-downstream-tracker.py[33-109]



4. inputs context unavailable 🐞 Bug ≡ Correctness
Description
The workflow reads ${{ inputs.version }} even when triggered by release: published, where the
inputs context is not guaranteed to exist, risking expression evaluation failure or an empty
VERSION and a failed run.
Code

.github/workflows/release-downstreams.yml[R35-47]

+      - name: Resolve version
+        id: v
+        env:
+          TAG: ${{ github.event.release.tag_name }}
+          INPUT: ${{ inputs.version }}
+        run: |
+          VERSION="${TAG:-$INPUT}"
+          VERSION="${VERSION#v}"
+          if [ -z "${VERSION}" ]; then
+            echo "Could not determine version." >&2
+            exit 1
+          fi
+          echo "version=${VERSION}" >> "$GITHUB_OUTPUT"
Evidence
The workflow always sets INPUT: ${{ inputs.version }} and then uses it to compute VERSION, but
the job is also triggered by release: published (not just workflow_dispatch). On non-dispatch
events the inputs context is not guaranteed, so this can break the workflow or force it into the
"Could not determine version" failure path.

.github/workflows/release-downstreams.yml[15-23]
.github/workflows/release-downstreams.yml[35-47]
Best Practice: GitHub Actions docs

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`.github/workflows/release-downstreams.yml` references `${{ inputs.version }}` even for `release` events. Depending on GitHub Actions evaluation rules, this can fail evaluation or yield an empty string and break version resolution.
### Issue Context
The job is triggered by both `release.published` and `workflow_dispatch`. `inputs.version` should only be read for dispatch runs.
### Fix Focus Areas
- .github/workflows/release-downstreams.yml[15-47]
### Suggested fix
- Set `INPUT` from `github.event.inputs.version` (dispatch-only payload) and/or gate it behind an event-name check, e.g.:
- `INPUT: ${{ github.event_name == 'workflow_dispatch' && inputs.version || '' }}`
- or avoid `inputs` entirely: `INPUT: ${{ github.event.inputs.version }}`
- Optionally improve the error message to include `github.event_name` when version cannot be determined.




Remediation recommended
5. Traceback on YAML read/parse 🐞 Bug ☼ Reliability
Description
main() only catches ValueError, so YAML read/parse failures (or file read errors) will produce a
full Python traceback, despite the script comment stating it should fail CI with a single actionable
line. This makes workflow failures noisier and harder to triage when the catalog is edited
incorrectly.
Code

.github/scripts/render-downstream-tracker.py[R34-143]

+    with catalog_path.open() as f:
+        catalog = yaml.safe_load(f)
+    if not isinstance(catalog, dict):
+        raise ValueError(
+            f"{catalog_path}: top-level must be a mapping, "
+            f"got {type(catalog).__name__}"
+        )
+    items = catalog.get("downstreams", [])
+    if not isinstance(items, list):
+        raise ValueError(
+            f"{catalog_path}: `downstreams` must be a list, "
+            f"got {type(items).__name__}"
+        )
+    for idx, item in enumerate(items):
+        if not isinstance(item, dict):
+            raise ValueError(
+                f"{catalog_path}: downstreams[{idx}] must be a mapping, "
+                f"got {type(item).__name__}"
+            )
+        if "name" not in item or "update_type" not in item:
+            raise ValueError(
+                f"{catalog_path}: downstreams[{idx}] missing required "
+                f"`name` and/or `update_type`"
+            )
+        if "path" in item and "file" in item:
+            raise ValueError(
+                f"{catalog_path}: downstreams[{idx}] ({item['name']}) "
+                f"sets both `path` and `file`; use `file` for files and "
+                f"`path` for directories, not both"
+            )
+
+    out: list[str] = []
+    out.append(f"## Downstream distribution checklist for `{version}`\n")
+    out.append(
+        "Auto-opened by `.github/workflows/release-downstreams.yml` on "
+        "release publish.\n"
+    )
+    out.append(
+        f"Source of truth: [`docs/downstreams.yml`](https://github.com/"
+        f"{repo}/blob/develop/docs/downstreams.yml).\n"
+    )
+    out.append(
+        "Tick items as you verify them. Anything still unchecked a week "
+        "after release is a candidate for follow-up.\n"
+    )
+
+    for update_type, heading in GROUPS:
+        matches = [i for i in items if i.get("update_type") == update_type]
+        if not matches:
+            continue
+        out.append(f"\n### {heading}\n")
+        for item in matches:
+            out.append(_render_item(item, repo))
+
+    return "\n".join(out)
+
+
+def _render_item(item: dict, repo: str) -> str:
+    name = item["name"]
+    target_repo = item.get("repo")
+    # `file:` deep-links to a single file (GitHub /blob/...).
+    # `path:` deep-links to a directory (GitHub /tree/...).
+    # `/blob/<dir>` and `/tree/<file>` both 404 on GitHub, so the two
+    # must be distinguished. The renderer trusts the YAML key — see
+    # render() for the both-set guard.
+    file_path = item.get("file")
+    dir_path = item.get("path")
+    workflow = item.get("workflow")
+    notes = item.get("notes", "").strip()
+
+    # Primary link: deep-link to the file/dir if we know one, otherwise
+    # to the repo root. `HEAD` avoids pinning to a stale default-branch
+    # name (`main` vs `master` vs `develop`).
+    link = ""
+    if target_repo:
+        base = f"https://github.com/{target_repo}"
+        if file_path:
+            link = f" — [`{target_repo}/{file_path}`]({base}/blob/HEAD/{file_path})"
+        elif dir_path:
+            link = f" — [`{target_repo}/{dir_path}`]({base}/tree/HEAD/{dir_path})"
+        else:
+            link = f" — [`{target_repo}`]({base})"
+    if workflow:
+        workflow_url = f"https://github.com/{repo}/blob/develop/{workflow}"
+        link += f" · [workflow]({workflow_url})"
+
+    lines = [f"- [ ] **{name}**{link}"]
+    if notes:
+        # Indent notes under the checkbox so GitHub renders them as part
+        # of the list item rather than a sibling paragraph.
+        for note_line in notes.splitlines():
+            lines.append(f"      {note_line}")
+    lines.append("")
+    return "\n".join(lines)
+
+
+def main() -> int:
+    if len(sys.argv) != 4:
+        print(__doc__, file=sys.stderr)
+        return 2
+    catalog_path = Path(sys.argv[1])
+    version = sys.argv[2]
+    repo = sys.argv[3]
+    try:
+        body = render(catalog_path, version, repo)
+    except ValueError as e:
+        # Surface validation errors as a clean CI failure with a single
+        # actionable line, instead of a Python traceback.
+        print(f"render-downstream-tracker: {e}", file=sys.stderr)
+        return 1
Evidence
The script’s stated intent is to avoid tracebacks and surface validation errors as a single
CI-friendly line, but the exception handling only covers ValueError. The file read and YAML load
can raise other exception types (OSError, yaml.YAMLError), so those failures bypass the clean
error path and print a traceback.

.github/scripts/render-downstream-tracker.py[34-36]
.github/scripts/render-downstream-tracker.py[137-143]
.github/scripts/render-downstream-tracker.py[140-142]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The renderer’s `main()` is designed to print a single CI-friendly error line, but it only catches `ValueError`. Failures from reading or parsing the YAML can still emit a full traceback.
### Issue Context
This script is run inside a release workflow; when the catalog is edited, parse/read errors are a realistic failure mode and should be surfaced cleanly.
### Fix Focus Areas
- .github/scripts/render-downstream-tracker.py[34-36]
- .github/scripts/render-downstream-tracker.py[137-143]
### Suggested change
- Broaden the exception handling in `main()` to also catch file read and YAML load/parse exceptions, and print the same one-line `render-downstream-tracker: ...` message.
- Optionally include the exception type in the message to aid debugging, while still avoiding a full traceback.
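The broadened handler can be sketched like this; in the real script the caught tuple would also include `yaml.YAMLError`, but this standalone version sticks to stdlib exceptions:

```python
import sys


def run_renderer(render_fn, *args) -> int:
    """Wrap the render call so expected failures become one clean line.

    `render_fn` stands in for render(); FileNotFoundError and other
    read errors are OSError subclasses, so they are covered here.
    """
    try:
        body = render_fn(*args)
    except (ValueError, OSError) as e:
        print(f"render-downstream-tracker: {type(e).__name__}: {e}", file=sys.stderr)
        return 1
    sys.stdout.write(body)
    return 0
```

Including `type(e).__name__` keeps the one-line message debuggable without reintroducing a traceback.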



6. Workflow lacks feature flag 📘 Rule violation ☼ Reliability
Description
The new workflow runs automatically on release: published without a disable/enable mechanism,
which conflicts with the requirement that new features be behind a feature flag and disabled by
default. This can cause unintended behavior (auto-creating issues) immediately after merge.
Code

.github/workflows/release-downstreams.yml[R15-18]

+on:
+  release:
+    types: [published]
+  workflow_dispatch:
Evidence
PR Compliance ID 6 requires new features to be behind a feature flag and disabled by default. The
workflow is configured to trigger automatically on published releases, with no gating flag or opt-in
control shown in the added configuration.

.github/workflows/release-downstreams.yml[15-18]
Best Practice: Repository guidelines

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The workflow auto-runs on `release: published` and creates issues without an opt-in/feature-flag style control.
## Issue Context
Compliance requires new features to be behind a flag and disabled by default; for CI, this can be implemented as an explicit repository variable/secret check, an `if:` condition, or by limiting automatic triggers.
## Fix Focus Areas
- .github/workflows/release-downstreams.yml[15-18]
- .github/workflows/release-downstreams.yml[28-71]



7. Wrong GitHub path URLs 🐞 Bug ≡ Correctness
Description
The renderer always deep-links path/file using .../blob/HEAD/..., which breaks directory links
(they require .../tree/...) and will produce 404s for catalog entries that point to directories.
Code

.github/scripts/render-downstream-tracker.py[R64-81]

+def _render_item(item: dict, repo: str) -> str:
+    name = item["name"]
+    target_repo = item.get("repo")
+    # `path` and `file` are aliases that point at a specific file/dir
+    # inside the downstream repo (or inside this repo for `manual_ci`).
+    path = item.get("path") or item.get("file")
+    workflow = item.get("workflow")
+    notes = item.get("notes", "").strip()
+
+    # Primary link: deep-link to the file/dir if we know one, otherwise
+    # to the repo root. `HEAD` avoids pinning to a stale default-branch
+    # name (`main` vs `master` vs `develop`).
+    link = ""
+    if target_repo:
+        base = f"https://github.com/{target_repo}"
+        if path:
+            link = f" — [`{target_repo}/{path}`]({base}/blob/HEAD/{path})"
+        else:
Evidence
_render_item() explicitly treats path and file as aliases for a file/dir but always formats
the URL as /blob/HEAD/{path}. The catalog includes values that are clearly directory paths (e.g.
charts/stable/etherpad), so those links will be incorrect in the generated tracking issue.

.github/scripts/render-downstream-tracker.py[67-81]
docs/downstreams.yml[76-92]
.github/scripts/render-downstream-tracker.py[67-69]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The markdown renderer uses GitHub `/blob/` links for both files and directories. Directory targe...

Comment on lines +33 to +36
def render(catalog_path: Path, version: str, repo: str) -> str:
with catalog_path.open() as f:
catalog = yaml.safe_load(f)
items = catalog.get("downstreams", [])

Action required

1. 4-space indent in script 📘 Rule violation ⚙ Maintainability

The new Python script uses 4-space indentation, but the compliance checklist requires 2-space
indentation (and no tabs) for code changes. This introduces inconsistent formatting against the
mandated whitespace standard.
Agent Prompt
## Issue description
The added Python script uses 4-space indentation, but the repository compliance checklist requires 2-space indentation (and no tabs) for code changes.

## Issue Context
This affects the newly added `.github/scripts/render-downstream-tracker.py` functions.

## Fix Focus Areas
- .github/scripts/render-downstream-tracker.py[33-109]


Comment thread .github/workflows/release-downstreams.yml
Addresses Qodo feedback on ether#7561: reading `inputs.version` on a
`release: published` event can yield an empty string or a
context-evaluation failure depending on runtime. `inputs` only populates
on workflow_dispatch. Switch to `github.event.inputs.version` which is
typed as the dispatch payload directly, and add the event name to the
error message for easier debugging when neither tag nor input is set.

Python 4-space indent is left as-is — that's PEP 8, and the
2-space repo style rule Qodo references applies to the shell/YAML/JS
tree, not to standalone Python scripts.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@JohnMcLear
Member Author

Qodo round-2 on this PR:

  1. inputs.version context unavailable — already addressed on current HEAD. The workflow reads github.event.inputs.version (dispatch-only payload), not inputs.version. See release-downstreams.yml:44.
  2. Python 4-space indent — intentional, this is PEP 8. The repo's 2-space style rule applies to JS/TS/YAML; applying it to Python would violate the language-wide convention every Python tool enforces (Black, Ruff, pycodestyle). Leaving as-is.

@JohnMcLear
Member Author

We have to be careful merging this because some downstream providers might not want us opening issues on each release and might prefer automatic methods. It also might be worth waiting X days if it's not a critical security thing.

I don't handle downstream much so happy for feedback from downstream providers

@JohnMcLear JohnMcLear requested a review from SamTV12345 April 27, 2026 10:19
Addresses Qodo review on ether#7561:

3. Workflow lacks feature flag — add an opt-out gate via
   `vars.SKIP_DOWNSTREAM_TRACKER`. Default stays opt-out (the whole
   point of the tracker is to fire automatically on release; opt-in
   would re-introduce the "forgot to enable it" failure mode).

4. Wrong GitHub path URLs — `path:` and `file:` are no longer aliases.
   `file:` now renders as `/blob/HEAD/...` (single file) and `path:`
   as `/tree/HEAD/...` (directory). Updated `docs/downstreams.yml`
   entries that pointed at single files (Proxmox VED ct/etherpad.sh,
   CasaOS docker-compose.yml, BBB placeholder.sh) to use `file:`.
   Renderer now errors if both keys are set on the same entry.

5. Duplicate issue creation — before `gh issue create`, search for an
   existing issue with the same title (across open/closed) and skip
   create if one exists. Re-running the workflow for the same release
   no longer piles up duplicates.

6. Missing YAML type validation — render() now validates that the
   top-level is a mapping, that `downstreams` is a list, and that each
   entry is a mapping with `name` and `update_type`. main() catches
   ValueError and surfaces it as a single CI-friendly error line
   instead of a Python traceback.

Plus a `test_render_downstream_tracker.py` smoke test exercising the
file/dir routing and each validation guard.

Pushing back on Qodo issue 1 (4-space → 2-space indent): Python files
follow PEP 8, which mandates 4-space indentation. The 2-space rule in
the project compliance checklist applies to JS/TS source. Forcing
2-space on a Python script makes it harder to read and breaks tooling
defaults (formatters, linters, IDEs). Leaving as-is.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@JohnMcLear
Member Author

/review

@qodo-code-review

ⓘ You've reached your Qodo monthly free-tier limit. Reviews pause until next month — upgrade your plan to continue now, or link your paid account if you already have one.

@qodo-free-for-open-source-projects

qodo-free-for-open-source-projects Bot commented Apr 29, 2026

Persistent review updated to latest commit 972a08f

Comment on lines +24 to +37
def write(tmpdir: Path, content: str) -> Path:
p = tmpdir / "catalog.yml"
p.write_text(textwrap.dedent(content))
return p


def expect_value_error(tmpdir: Path, content: str, needle: str) -> None:
p = write(tmpdir, content)
try:
mod.render(p, "1.0", "ether/etherpad")
except ValueError as e:
assert needle in str(e), f"expected {needle!r} in {e!r}"
return
raise AssertionError(f"expected ValueError containing {needle!r}")

Action required

1. 4-space indent in smoke test 📘 Rule violation ⚙ Maintainability

The newly added smoke test script uses 4-space indentation, violating the 2-space indentation
requirement. This introduces inconsistent formatting in committed source files.
Agent Prompt
## Issue description
The added Python smoke test uses 4-space indentation, but the repository requires 2-space indentation.

## Issue Context
Keeping indentation consistent avoids style drift and potential formatter/linter conflicts.

## Fix Focus Areas
- .github/scripts/test_render_downstream_tracker.py[24-82]


Comment on lines +47 to +87
for idx, item in enumerate(items):
if not isinstance(item, dict):
raise ValueError(
f"{catalog_path}: downstreams[{idx}] must be a mapping, "
f"got {type(item).__name__}"
)
if "name" not in item or "update_type" not in item:
raise ValueError(
f"{catalog_path}: downstreams[{idx}] missing required "
f"`name` and/or `update_type`"
)
if "path" in item and "file" in item:
raise ValueError(
f"{catalog_path}: downstreams[{idx}] ({item['name']}) "
f"sets both `path` and `file`; use `file` for files and "
f"`path` for directories, not both"
)

out: list[str] = []
out.append(f"## Downstream distribution checklist for `{version}`\n")
out.append(
"Auto-opened by `.github/workflows/release-downstreams.yml` on "
"release publish.\n"
)
out.append(
f"Source of truth: [`docs/downstreams.yml`](https://github.com/"
f"{repo}/blob/develop/docs/downstreams.yml).\n"
)
out.append(
"Tick items as you verify them. Anything still unchecked a week "
"after release is a candidate for follow-up.\n"
)

for update_type, heading in GROUPS:
matches = [i for i in items if i.get("update_type") == update_type]
if not matches:
continue
out.append(f"\n### {heading}\n")
for item in matches:
out.append(_render_item(item, repo))


Action required

2. Unknown update_type omitted 🐞 Bug ≡ Correctness

render() only renders entries whose update_type matches one of the hard-coded GROUPS; a typo
or new update_type in docs/downstreams.yml will be silently dropped from the tracking issue.
This breaks the “single source of truth” behavior by making missing checklist items undetectable in
CI.
Agent Prompt
### Issue description
`render()` validates that each downstream item has an `update_type`, but it does not validate that the value is one of the supported categories. Because rendering only iterates over the hard-coded `GROUPS`, any typo/new value will be silently omitted from the generated checklist.

### Issue Context
This workflow is intended to make `docs/downstreams.yml` a single source of truth; silent omission defeats that by producing an incomplete tracking issue without a CI failure.

### Fix Focus Areas
- .github/scripts/render-downstream-tracker.py[22-30]
- .github/scripts/render-downstream-tracker.py[47-63]
- .github/scripts/render-downstream-tracker.py[80-87]

### Suggested change
- Build an `allowed_update_types` set from `GROUPS`.
- During the per-item validation loop, raise a `ValueError` if `item['update_type']` is not in the allowed set (include the bad value and the allowed values in the message).


Round 3 of Qodo review on ether#7561:

ether#2 Unknown update_type silently omitted — render() now validates each
   item's `update_type` against the GROUPS allowlist and raises
   ValueError. Without this, a typo'd entry (e.g. `external-pr` with a
   dash) would parse cleanly but never reach the rendered checklist,
   so the downstream would silently fall off the radar — exactly the
   failure mode the tracker exists to prevent.

ether#5 Traceback on YAML read/parse — main()'s except clause only caught
   ValueError, so missing-file (FileNotFoundError/OSError) and
   malformed-YAML (yaml.YAMLError) errors still surfaced as full
   tracebacks. Broaden to catch all three with a single `type(e).__name__:
   message` line so the workflow log stays actionable.

Plus three new test cases: unknown update_type, missing catalog file,
malformed YAML.
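The broadened error handling described under ether#5 could be sketched as follows; main(), the path handling, and the list check are illustrative assumptions, and only the caught exception types come from the commit message:

```python
# Hedged sketch of the broadened except clause; not the actual script.
import sys

import yaml  # PyYAML, assumed to be the parser the script uses


def main(path: str) -> int:
    try:
        with open(path) as f:
            catalog = yaml.safe_load(f)
        if not isinstance(catalog, list):
            raise ValueError("catalog must be a list of downstream entries")
    except (ValueError, OSError, yaml.YAMLError) as e:
        # One actionable "ExceptionType: message" line instead of a traceback
        print(f"{type(e).__name__}: {e}", file=sys.stderr)
        return 1
    return 0
```

FileNotFoundError is a subclass of OSError, so the missing-catalog case is covered by the same clause that handles other read failures.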

Pushing back again on:

#1/ether#3 4-space → 2-space Python indent — Python files follow PEP 8.
   Forcing 2-space breaks formatter/linter/IDE defaults and is harder
   to read than the universal Python convention. The 2-space rule is
   for JS/TS source.

ether#6 Workflow lacks feature flag — already addressed in the previous
   commit via opt-out gate `vars.SKIP_DOWNSTREAM_TRACKER`. Default
   stays opt-out because the tracker exists precisely to catch
   "forgot to enable it at release time" cases.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@JohnMcLear
Member Author

/review

@qodo-code-review

ⓘ You've reached your Qodo monthly free-tier limit. Reviews pause until next month — upgrade your plan to continue now, or link your paid account if you already have one.

@qodo-free-for-open-source-projects

qodo-free-for-open-source-projects Bot commented Apr 30, 2026

Persistent review updated to latest commit 049a4a3

Comment on lines +85 to +93
EXISTING=$(gh issue list \
--repo "$GITHUB_REPOSITORY" \
--state all \
--label release \
--label downstream \
--search "\"$TITLE\" in:title" \
--json number,title \
--jq ".[] | select(.title == \"$TITLE\") | .number" \
| head -n1)
Action required

1. Jq string injection breaks dedupe 🐞 Bug ☼ Reliability

The workflow builds a jq filter by interpolating $TITLE directly into the jq program, so a
version/title containing a double-quote or backslash can break jq parsing and make the workflow fail
before creating (or deduping) the tracking issue.
Agent Prompt
### Issue description
The workflow interpolates `$TITLE` into the jq program string passed to `gh issue list --jq`, which can break parsing if `$VERSION` contains characters like `"` or `\`.

### Issue Context
`TITLE` is derived from `$VERSION` (workflow_dispatch input or release tag). Even if unusual in real tags, dispatch testing can easily include these characters and will cause the step to fail.

### Fix Focus Areas
- .github/workflows/release-downstreams.yml[81-93]

### Suggested fix
Avoid embedding `$TITLE` inside the jq program. Instead pipe JSON to `jq` and pass the title via `--arg`, e.g.:

```bash
EXISTING=$(gh issue list \
  --repo "$GITHUB_REPOSITORY" \
  --state all \
  --label release \
  --label downstream \
  --search "$TITLE in:title" \
  --json number,title \
  | jq -r --arg title "$TITLE" '.[] | select(.title == $title) | .number' \
  | head -n1)
```

(Or, alternatively, properly escape `$TITLE` before embedding it into the jq string.)

