Conversation
Walkthrough

Updates development and CI configurations: bumps multiple tool and dependency versions, adds three template timeout context variables, and adds job-level timeout-minutes to several GitHub Actions workflows and reusable templates.

Changes
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10–15 minutes
Possibly related PRs
Pre-merge checks

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 5
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (1)
uv.lock is excluded by !**/*.lock
📒 Files selected for processing (13)
.devcontainer/devcontainer.json (2 hunks)
.devcontainer/install-ci-tooling.py (1 hunks)
.github/workflows/ci.yaml (3 hunks)
.github/workflows/tag-on-merge.yaml (1 hunks)
extensions/context.py (2 hunks)
pyproject.toml (1 hunks)
template/.devcontainer/devcontainer.json.jinja-base (1 hunks)
template/.github/reusable_workflows/build-docker-image.yaml.jinja-base (1 hunks)
template/.github/workflows/ci.yaml.jinja-base (2 hunks)
template/.github/workflows/get-values.yaml.jinja-base (1 hunks)
template/.github/workflows/pre-commit.yaml.jinja-base (1 hunks)
template/extensions/context.py.jinja-base (1 hunks)
template/template/.github/workflows/{% if template_uses_pulumi %}pulumi-aws.yml{% endif %}.jinja-base (1 hunks)
🔇 Additional comments (14)
.github/workflows/tag-on-merge.yaml (1)
13-13: LGTM! The 2-minute timeout is appropriate for this lightweight workflow that only checks out code and creates a tag.
template/extensions/context.py.jinja-base (1)
98-100: LGTM! The new timeout context variables are correctly added and will be properly propagated to child templates for use in GitHub Actions workflows.
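The propagation this comment describes can be pictured with a minimal sketch. The function name and plain-dict wiring below are illustrative stand-ins for the repository's real copier-template-extensions hook, not its actual code; only the variable names and values come from this review.

```python
# Illustrative sketch only: mirrors the three timeout tiers the template
# context exposes, using a plain dict instead of the real ContextHook class.
def add_gha_timeouts(context: dict) -> dict:
    context["gha_short_timeout_minutes"] = "2"   # status checks, tagging
    context["gha_medium_timeout_minutes"] = "8"  # lint, pre-commit, builds
    context["gha_long_timeout_minutes"] = "15"   # deployments
    return context

ctx = add_gha_timeouts({})
```

Because the values are strings, child templates can substitute them directly into `timeout-minutes:` fields during rendering.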
template/.devcontainer/devcontainer.json.jinja-base (1)
30-30: LGTM! The VS Code extension version updates are appropriate and maintain environment consistency across the development team.
Also applies to: 34-34
pyproject.toml (1)
8-8: No action required: package versions are valid and secure. Both pytest 9.0.1 and copier 9.11.0 exist on PyPI and are free from known vulnerabilities. The copier version is above the patched version (9.9.1), ensuring the previously reported HIGH and MODERATE severity advisories are resolved.
.devcontainer/install-ci-tooling.py (1)
8-11: All specified tool versions are available and functional.
The verification confirms:
- copier version 9.11.0 is available on PyPI
- pre-commit version 4.5.0 is available on PyPI
- uv version 0.9.11 is accessible via its install script
- copier-template-extensions version 0.3.3 exists on PyPI
All version bumps in lines 8–11 are valid and functional.
template/.github/workflows/pre-commit.yaml.jinja-base (1)
31-31: Job timeout insufficient for mutex wait strategy — verify with actual CI execution data. The review comment correctly identifies a conflict: the job-level timeout (8 minutes) and mutex step timeout (30 minutes) create a logical inconsistency because job timeouts take precedence, canceling the entire job before the step's 30-minute timeout can be reached.
This becomes problematic when jobs are queued — if mutex acquisition takes more than 8 minutes, the job will be terminated before the mutex step can fail gracefully at 30 minutes. Combined with pre-commit hook execution time on large codebases, the workflow may frequently timeout.
However, the appropriate timeout value depends on typical execution patterns in downstream repositories. Verify actual pre-commit execution times from CI logs to determine whether:
- 8 minutes is adequate for typical runs
- A smaller increase (e.g., 15–20 minutes) suffices
- Full alignment to 30 minutes is necessary
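The precedence rule behind this conflict can be sanity-checked mechanically before merging. The helper below is hypothetical (not part of this repository): it just encodes the invariant that a job-level `timeout-minutes` must exceed the largest step-level timeout plus a buffer for the remaining steps.

```python
# A job-level timeout-minutes cancels the whole job, so it must exceed the
# largest step-level timeout plus a small buffer for the remaining steps.
def job_timeout_is_safe(job_timeout: int, step_timeouts: list[int], buffer: int = 2) -> bool:
    return job_timeout >= max(step_timeouts, default=0) + buffer

# The conflict flagged above: an 8-minute job containing a 30-minute mutex step.
conflict = not job_timeout_is_safe(8, [30])
```

A check like this could run over parsed workflow YAML in CI so the tier values cannot silently fall out of alignment with step timeouts.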
.github/workflows/ci.yaml (3)
25-25: LGTM! Timeout added to prevent hanging. The 8-minute timeout for the pre-commit job is appropriate given it runs pre-commit hooks, manages caching, and uses mutex locking. This aligns with the PR objective to prevent CI jobs from hanging indefinitely.
79-79: LGTM! Timeout added to prevent hanging. The 8-minute timeout for the lint-matrix job is reasonable given it instantiates the copier template, installs dependencies, and runs pre-commit checks across multiple Python versions.
160-160: LGTM! Timeout added to prevent hanging. The 2-minute timeout for the required-check job is appropriate and generous given it only validates the status of previous jobs.
template/.github/workflows/ci.yaml.jinja-base (2)
135-135: LGTM! Timeout appropriate for simple status check. The short timeout for the required-check job is suitable since it only validates previous job results and optionally updates a commit status.
46-46: Template variable is properly defined, no issues found. The gha_medium_timeout_minutes variable is correctly defined in extensions/context.py:106 with a value of "8" minutes. The code change is sound.

.devcontainer/devcontainer.json (2)
64-64: LGTM! Hash update managed by automation. The devcontainer context hash update is managed by a pre-commit hook as indicated by the comment, which is the correct approach.
24-24: Based on a web search, the CodeRabbit VS Code extension exists as a free AI code review tool, and it was released as a new integration in Visual Studio Code along with support for the Cursor and Windsurf IDEs. However, no evidence of version 0.16.0 could be found in publicly available sources, release notes, or changelogs.

Update the CodeRabbit extension version in .devcontainer/devcontainer.json line 24 after verifying the correct available version from the VS Code Marketplace. The specific version 0.16.0 could not be verified as existing in available public documentation or release history.

template/.github/workflows/get-values.yaml.jinja-base (1)
22-22: LGTM! Template variable is properly defined. The gha_short_timeout_minutes template variable is defined in extensions/context.py:105 with the value "2", confirming that the timeout addition in the workflow is correctly configured and will be properly resolved during template rendering.
  "MS-vsliveshare.vsliveshare@1.0.5905",
  "github.copilot@1.388.0",
- "github.copilot-chat@0.33.2025102701",
+ "github.copilot-chat@0.34.2025112401",
🧩 Analysis chain
Verify the GitHub Copilot Chat extension version exists.
Please verify that GitHub Copilot Chat extension version 0.34.2025112401 exists:
🌐 Web query:
GitHub Copilot Chat VS Code extension version 0.34.2025112401
💡 Result:
I couldn't find any official record of version 0.34.2025112401. The latest published Copilot Chat builds I found are 0.34.2025112102 (listed on a VSIX mirror) and nearby 0.34.2025111403; the extension repo and Marketplace are the authoritative sources to check for a new release. [1][2]
- Repo / general info: microsoft/vscode-copilot-chat on GitHub. [1]
- VSIX listing showing 0.34.2025112102 (latest mirror listing). [2]
If you want, I can check the VS Code Marketplace or fetch the extension manifest now and confirm whether 0.34.2025112401 is available.
Update devcontainer.json to use an available version of GitHub Copilot Chat.
The latest available version of GitHub Copilot Chat is 0.34.2025111403 (updated November 15, 2025). The pinned version 0.34.2025112401 does not exist and will cause the devcontainer configuration to fail. Update to 0.34.2025111403 or remove the version pin to use the latest automatically.
🤖 Prompt for AI Agents
In .devcontainer/devcontainer.json around line 28, the pinned extension version
"github.copilot-chat@0.34.2025112401" does not exist and will break the
devcontainer; replace it with the valid version
"github.copilot-chat@0.34.2025111403" or remove the version suffix entirely (use
"github.copilot-chat") so the devcontainer uses the available/latest extension;
update the JSON string accordingly and validate the file is still valid JSON.
  # These are duplicated in the install-ci-tooling.py script in this repository
- context["uv_version"] = "0.9.7"
- context["pre_commit_version"] = "4.3.0"
+ context["uv_version"] = "0.9.11"
🧩 Analysis chain
Verify that critical dependency versions are available and secure.
Multiple infrastructure and tooling versions have been updated. Ensure that the updated versions exist and are free from known vulnerabilities, particularly for critical tools.
Also applies to: 17-17, 20-20, 23-23, 28-29, 34-34, 36-36
🌐 Web query:
Latest versions of uv pre-commit copier pnpm pytest pulumi boto3 pydantic November 2025
💡 Result:
Here are the latest released versions (checked Nov 24, 2025):
- uv — 0.9.11. [1]
- pre-commit — 4.4.0. [2]
- copier — 9.10.3. [3]
- pnpm — 11.0.0-dev.1005 (latest listed build). [4]
- pytest — 9.0.1. [5]
- Pulumi (core/registry packages vary) — example: azure-native v3.10.1 (Pulumi registry shows v3.10.1 published Nov 7, 2025). Check specific Pulumi provider for exact package version. [6]
- boto3 — 1.41.2. [7]
- pydantic — 2.12.4. [8]
Sources: PyPI / project registries for each package above.
Fix non-existent dependency versions that will cause installation failures.
The following versions specified do not exist in package registries and must be corrected:
- pre_commit_version: "4.5.0" should be "4.4.0" (line 17)
- copier_version: "9.11.0" should be "9.10.3" (line 20)
Additionally, pnpm_version: "10.23.0" (line 23) is outdated; latest stable is 11.0.0-dev.1005. Verify if 10.23.0 is intentionally pinned for compatibility reasons.
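When auditing pins like these, it helps to compare version strings numerically rather than lexically. The snippet below is a minimal sketch (real checks should use `packaging.version` and query the registry); it only demonstrates the comparison logic on the versions discussed above.

```python
# Compare dotted version strings as integer tuples, not as raw strings,
# so "4.10.0" correctly sorts above "4.9.0".
def parse_version(v: str) -> tuple[int, ...]:
    return tuple(int(part) for part in v.split("."))

# Pins flagged in the review vs. the versions reported available:
newer = parse_version("4.5.0") > parse_version("4.4.0")
```

Note this sketch handles only plain numeric versions; pre-release suffixes like "11.0.0-dev.1005" need a real version parser.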
🤖 Prompt for AI Agents
In extensions/context.py around lines 14–23, the pinned dependency versions are
incorrect or outdated: change pre_commit_version from "4.5.0" to "4.4.0" and
copier_version from "9.11.0" to "9.10.3"; for pnpm_version currently "10.23.0"
either update to the latest stable if intended (e.g., "11.0.0-dev.1005") or add
a comment explaining why 10.23.0 is pinned for compatibility and verify
installation succeeds; update the string literals accordingly and run a quick
install/test to confirm no registry errors.
context["gha_short_timeout_minutes"] = "2"
context["gha_medium_timeout_minutes"] = "8"
context["gha_long_timeout_minutes"] = "15"
🧹 Nitpick | 🔵 Trivial
Consider documenting the timeout tier values and their intended use cases.
The three timeout tiers are defined here as the source of truth for all workflows. Adding inline comments explaining the intended use case for each tier would help future maintainers understand when to use short/medium/long timeouts.
Apply this diff to add documentation:
- context["gha_short_timeout_minutes"] = "2"
- context["gha_medium_timeout_minutes"] = "8"
- context["gha_long_timeout_minutes"] = "15"
+ context["gha_short_timeout_minutes"] = "2" # For simple operations: checkout, tag creation
+ context["gha_medium_timeout_minutes"] = "8" # For linting, pre-commit, build/test
+ context["gha_long_timeout_minutes"] = "15" # For deployments, complex operations📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
context["gha_short_timeout_minutes"] = "2"  # For simple operations: checkout, tag creation
context["gha_medium_timeout_minutes"] = "8"  # For linting, pre-commit, build/test
context["gha_long_timeout_minutes"] = "15"  # For deployments, complex operations
jobs:
  build-image:
    name: Build Docker Image
    timeout-minutes: {% endraw %}{{ gha_medium_timeout_minutes }}{% raw %}
🧩 Analysis chain
Good addition - verify timeout value is sufficient.
Adding a job-level timeout is a best practice to prevent hung jobs, which aligns with the PR objective.
However, ensure that gha_medium_timeout_minutes is:
- Defined in the template context
- Set to at least 30 minutes (the mutex step at line 98 has a 30-minute timeout)
- Sufficient for Docker image builds, which can vary in duration depending on image complexity
Run the following script to verify the timeout configuration:
🏁 Script executed:
#!/bin/bash
# Description: Find the definition of gha_medium_timeout_minutes and other timeout variables
# Search for timeout variable definitions in template context files
rg -n "gha_medium_timeout_minutes|gha_short_timeout_minutes|gha_long_timeout_minutes" --type yaml --type json

Length of output: 0
🏁 Script executed:
#!/bin/bash
# Search for the timeout variables across the entire repository
# Try multiple search patterns to find where these are defined
echo "=== Searching for gha_medium_timeout_minutes in all files ==="
find . -type f \( -name "*.py" -o -name "*.yaml" -o -name "*.yml" -o -name "*.json" -o -name "*.txt" -o -name "*.md" \) ! -path "./\.git/*" -exec grep -l "gha_medium_timeout_minutes\|gha_short_timeout_minutes\|gha_long_timeout_minutes" {} \;
echo ""
echo "=== Searching for files related to workflow or template context ==="
find . -type f \( -name "*.py" -o -name "*context*" -o -name "*config*" \) ! -path "./\.git/*" ! -path "./.git/*" | head -20
echo ""
echo "=== Looking for Jinja2 rendering or template code ==="
find . -type f -name "*.py" ! -path "./\.git/*" -exec grep -l "jinja\|template\|render" {} \; | head -20
echo ""
echo "=== Check template directory structure ==="
ls -la template/ 2>/dev/null | head -30

Length of output: 2824
🏁 Script executed:
#!/bin/bash
# Examine the context.py file where timeout variables are defined
echo "=== Content of ./extensions/context.py ==="
cat ./extensions/context.py
echo ""
echo "=== Content of ./template/extensions/context.py.jinja-base ==="
cat ./template/extensions/context.py.jinja-base

Length of output: 12983
Timeout value is too short—job will be cancelled by timeout before mutex step completes.
The variable gha_medium_timeout_minutes is defined in ./extensions/context.py with a value of 8 minutes. However, the workflow contains a mutex step with a 30-minute timeout (line 98), meaning the job-level timeout will fire before the mutex step can complete, causing premature job cancellation.
Increase gha_medium_timeout_minutes in ./extensions/context.py to at least 40–45 minutes to allow the mutex operation (up to 30 min) plus buffer time for Docker build and artifact operations.
🤖 Prompt for AI Agents
In template/.github/reusable_workflows/build-docker-image.yaml.jinja-base around
line 46, the workflow timeout pulls gha_medium_timeout_minutes (currently 8
minutes) which is too short because a later mutex step can take up to 30
minutes; update the value of gha_medium_timeout_minutes in
./extensions/context.py to at least 40–45 minutes (preferably 45) so the
job-level timeout exceeds the mutex timeout plus buffer, then run a quick
workflow lint or test to ensure no other workflows rely on the old shorter
timeout.
jobs:
  pulumi:
    runs-on: {% endraw %}{{ gha_linux_runner }}{% raw %}
    timeout-minutes: {% endraw %}{{ gha_long_timeout_minutes }}{% raw %}
🧩 Analysis chain
Verify that the 15-minute job timeout is sufficient for Pulumi operations.
The job-level timeout of 15 minutes (from gha_long_timeout_minutes) may be too short for this workflow:
- The mutex step has a 30-minute timeout (line 114), which exceeds the job-level timeout
- Pulumi operations (up, destroy, refresh) can take longer than 15 minutes for complex AWS infrastructure
If the mutex acquisition takes more than 15 minutes, or if Pulumi operations are slow, legitimate runs will fail.
Consider either:
- Increasing gha_long_timeout_minutes to 30-45 minutes in extensions/context.py
- Adding a specific timeout value for Pulumi workflows (e.g., gha_pulumi_timeout_minutes)
- Documenting that users should override this timeout for complex infrastructure
Run the following script to check if there are other workflows with similar timeout conflicts:
🏁 Script executed:
#!/bin/bash
# Description: Find workflow files with step-level timeouts that might exceed job-level timeouts.
# Search for workflow files with timeout-minutes
rg -n -A2 -B2 'timeout-minutes:' --type yaml -g '*.yml' -g '*.yaml' -g '*.jinja-base'

Length of output: 0
🏁 Script executed:
#!/bin/bash
# Find workflow files with pulumi in the name
fd -t f 'pulumi' --extension yml --extension yaml --extension jinja-base

Length of output: 0
🏁 Script executed:
#!/bin/bash
# Look for any .jinja-base files that might be workflows
fd -t f '\.jinja-base$' '.github/workflows'

Length of output: 0
🏁 Script executed:
#!/bin/bash
# Search for context.py and check gha_long_timeout_minutes definition
fd -t f 'context\.py' -x grep -n 'gha_long_timeout_minutes' {}

Length of output: 137
🏁 Script executed:
#!/bin/bash
# Find all jinja-base files anywhere in the repo
fd -t f '\.jinja-base$'

Length of output: 285
🏁 Script executed:
#!/bin/bash
# Look for any workflow template files or pulumi-related files
fd -t f 'pulumi.*\.(yml|yaml|jinja)'

Length of output: 0
🏁 Script executed:
#!/bin/bash
# List template directory structure to understand layout
find template -type f -name '*pulumi*' 2>/dev/null | head -20

Length of output: 195
🏁 Script executed:
#!/bin/bash
# Check if the file exists with the conditional in the name
find . -type f -path '*workflows*' -name '*pulumi*' 2>/dev/null

Length of output: 103
🏁 Script executed:
#!/bin/bash
# Examine the pulumi workflow file
cat './template/template/.github/workflows/{% if template_uses_pulumi %}pulumi-aws.yml{% endif %}.jinja-base'

Length of output: 6696
Timeout conflict is confirmed: job-level timeout (15 minutes) conflicts with mutex timeout (30 minutes).
Your concern is valid. The workflow has a genuine timeout conflict:
- Job-level timeout: 15 minutes (line 96, from gha_long_timeout_minutes)
- Mutex lock acquisition timeout: 30 minutes (line 114)
If the mutex cannot acquire the lock within 15 minutes due to queueing, the entire job will timeout before the mutex action can complete its intended 30-minute wait. This undermines the mutex mechanism designed to handle concurrent Pulumi deployments.
Additionally, Pulumi operations (up, destroy, refresh) can easily exceed 15 minutes for complex infrastructure, causing legitimate runs to fail with timeout errors.
Recommend increasing gha_long_timeout_minutes in extensions/context.py (currently hardcoded to "15") to at least 30-45 minutes to accommodate mutex contention and typical Pulumi operation durations.
🤖 Prompt for AI Agents
In template/template/.github/workflows/{% if template_uses_pulumi
%}pulumi-aws.yml{% endif %}.jinja-base around line 96, the job-level timeout
uses gha_long_timeout_minutes (currently 15) which conflicts with the mutex lock
acquisition timeout of 30 minutes at line 114; update the hardcoded value for
gha_long_timeout_minutes in extensions/context.py (now "15") to a higher value
(recommend 30–45 minutes) so the workflow job timeout is at least as long as the
mutex timeout and accommodates longer Pulumi operations, and ensure any
documentation/comments reflect the new timeout choice.
Actionable comments posted: 3
♻️ Duplicate comments (1)
template/template/.github/workflows/{% if template_uses_pulumi %}pulumi-aws.yml{% endif %}.jinja-base (1)
96-96: Verify timeout values support long-running Pulumi operations and mutex contention. The workflow now uses gha_long_timeout_minutes for the job timeout and gha_medium_timeout_minutes for the mutex step. Previous reviews indicated that gha_long_timeout_minutes was 15 minutes and the mutex previously had a fixed 30-minute timeout. This creates two concerns:
- Reducing the mutex timeout from 30 minutes to gha_medium_timeout_minutes (potentially 8 minutes based on past values) could cause premature failures when jobs are queued
- Pulumi operations (up, destroy, refresh) can easily exceed 15 minutes for complex infrastructure
Ensure that:
- gha_medium_timeout_minutes is at least 30 minutes (to match the original mutex timeout)
- gha_long_timeout_minutes is at least 45-60 minutes to accommodate both mutex wait time and Pulumi operations
This concern was previously raised. Please confirm these timeout values have been updated appropriately in extensions/context.py.
Also applies to: 114-114
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (4)
template/.github/reusable_workflows/build-docker-image.yaml.jinja-base (2 hunks)
template/.github/workflows/ci.yaml.jinja-base (3 hunks)
template/.github/workflows/pre-commit.yaml.jinja-base (2 hunks)
template/template/.github/workflows/{% if template_uses_pulumi %}pulumi-aws.yml{% endif %}.jinja-base (2 hunks)
🔇 Additional comments (1)
template/.github/workflows/ci.yaml.jinja-base (1)
135-135: LGTM — short timeout is appropriate for the required-check job. The required-check job performs lightweight operations (checking prior job results and updating commit status), so gha_short_timeout_minutes is suitable.
jobs:
  build-image:
    name: Build Docker Image
    timeout-minutes: {% endraw %}{{ gha_long_timeout_minutes }}{% raw %}
🧩 Analysis chain
Verify timeout values are sufficient for mutex and Docker build operations.
The workflow now uses gha_long_timeout_minutes for the job timeout and gha_medium_timeout_minutes for the mutex step. Previous reviews indicated that gha_medium_timeout_minutes was set to 8 minutes, while the mutex previously had a fixed 30-minute timeout. Reducing the mutex timeout from 30 to 8 minutes could cause premature failures when multiple jobs are queued.
Ensure that:
- gha_medium_timeout_minutes is at least 30 minutes (to match the original mutex timeout)
- gha_long_timeout_minutes exceeds the mutex timeout plus sufficient buffer for Docker build operations (recommend 45+ minutes)
Run the following script to verify the timeout configuration:
Also applies to: 98-98
🏁 Script executed:
#!/bin/bash
# Description: Check timeout variable values in context files
echo "=== Checking timeout values in extensions/context.py ==="
rg -A2 -B2 'gha_short_timeout_minutes|gha_medium_timeout_minutes|gha_long_timeout_minutes' extensions/context.py
echo -e "\n=== Checking timeout values in template/extensions/context.py.jinja-base ==="
rg -A2 -B2 'gha_short_timeout_minutes|gha_medium_timeout_minutes|gha_long_timeout_minutes' template/extensions/context.py.jinja-base
echo -e "\n=== Checking if these variables were added/modified in this PR ==="
git diff main --no-index -- extensions/context.py template/extensions/context.py.jinja-base | grep -A2 -B2 'timeout_minutes' || echo "No diff available or variables not in diff"

Length of output: 2581
I need to examine the actual workflow file to understand which timeout is being used for the mutex step and what's at line 98.
---
🏁 Script executed:
```shell
#!/bin/bash
# Check the workflow file at lines 46 and 98 to see timeout usage
echo "=== Lines 40-55 of build-docker-image.yaml.jinja-base ==="
sed -n '40,55p' template/.github/reusable_workflows/build-docker-image.yaml.jinja-base
echo -e "\n=== Lines 92-105 of build-docker-image.yaml.jinja-base ==="
sed -n '92,105p' template/.github/reusable_workflows/build-docker-image.yaml.jinja-base
echo -e "\n=== Full file showing structure ==="
cat template/.github/reusable_workflows/build-docker-image.yaml.jinja-base | grep -n "timeout-minutes\|steps:" | head -20
```

Length of output: 2019
I need to check the git history to see if these timeout values were actually changed in this PR.
---
🏁 Script executed:
```shell
#!/bin/bash
# Check git history for changes to timeout values
echo "=== Git log showing changes to context.py ==="
git log --oneline -20 extensions/context.py
echo -e "\n=== Git show of previous version of gha_*_timeout_minutes ==="
git show HEAD:extensions/context.py | grep -A2 -B2 'gha_.*_timeout_minutes' || echo "Could not show previous version"
echo -e "\n=== Checking git status to see what changed ==="
git status extensions/context.py
echo -e "\n=== Git diff for timeout values ==="
git diff HEAD extensions/context.py | grep -A3 -B3 'timeout_minutes' || echo "No diff in working directory"
```

Length of output: 1255
I need to check what files were actually changed in this PR to understand the context better.
---
🏁 Script executed:
```shell
#!/bin/bash
# Check what changed in this PR branch compared to main
echo "=== Files changed in this PR (branch vs main) ==="
git diff --name-only main..HEAD
echo -e "\n=== Summary of changes ==="
git diff --stat main..HEAD
echo -e "\n=== Specific changes to workflow files ==="
git diff main..HEAD -- template/.github/reusable_workflows/build-docker-image.yaml.jinja-base | head -100
echo -e "\n=== Checking if timeout values changed from main ==="
git diff main..HEAD extensions/context.py | grep -A3 -B3 'timeout_minutes' || echo "No timeout_minutes changes in this PR"
```

Length of output: 3517
Update timeout values to prevent premature job failures under concurrent load.
Your PR reduces the mutex timeout from 30 minutes to 8 minutes (via gha_medium_timeout_minutes), and sets the job timeout to 15 minutes (via gha_long_timeout_minutes).
The current values are insufficient:
- gha_medium_timeout_minutes = 8: Too short for queued jobs waiting to acquire the mutex lock. Original 30-minute timeout provided necessary buffer when multiple jobs are competing for the same lock.
- gha_long_timeout_minutes = 15: Marginal for Docker build operations, which typically take 10-16 minutes. This leaves minimal buffer for variance or failures.
Update extensions/context.py:
- Set gha_medium_timeout_minutes to at least 30 (to accommodate queued job wait times)
- Set gha_long_timeout_minutes to at least 45 (to safely exceed mutex timeout plus Docker build duration)
The template variables will automatically propagate to all workflow files that reference them.
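That propagation can be pictured with a toy substitution, with Python string formatting standing in for the actual copier/Jinja rendering step. The revised values below are the review's recommendation, not confirmed repository values.

```python
# Toy stand-in for template rendering: context values are substituted into
# workflow templates wherever the timeout variables appear.
context = {
    "gha_medium_timeout_minutes": "30",  # assumed revised value per the review
    "gha_long_timeout_minutes": "45",    # assumed revised value per the review
}

template = "timeout-minutes: {gha_long_timeout_minutes}"
rendered = template.format(**context)
```

Changing the value once in the context therefore updates every workflow file that references the variable.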
🤖 Prompt for AI Agents
In template/.github/reusable_workflows/build-docker-image.yaml.jinja-base around
line 46, the timeout values are set too low and risk premature job failures
under contention; update extensions/context.py to set gha_medium_timeout_minutes
to at least 30 and gha_long_timeout_minutes to at least 45 so the mutex wait and
Docker build have sufficient buffers, then regenerate or ensure the template
variables propagate to this workflow so the timeout-minutes uses the revised
values.
          '--data-file tests/copier_data/data2.yaml',
        ]
    runs-on: ${{ matrix.os }}
    timeout-minutes: {% endraw %}{{ gha_medium_timeout_minutes }}{% raw %}
Job timeout may be insufficient—lint-matrix and mutex use the same timeout value.
Both the lint-matrix job timeout (line 46) and its mutex step timeout (line 109) use gha_medium_timeout_minutes. If mutex acquisition takes the full timeout duration due to queue contention, the job will be cancelled immediately after acquiring the lock, leaving no time for the copier instantiation, dependency installation, and pre-commit checks.
Consider using a longer timeout for the lint-matrix job (e.g., gha_long_timeout_minutes) while keeping the mutex at gha_medium_timeout_minutes, or ensure that gha_medium_timeout_minutes provides substantial buffer beyond typical mutex acquisition times.
Also applies to: 109-109
🤖 Prompt for AI Agents
In template/.github/workflows/ci.yaml.jinja-base around lines 46 and 109, the
workflow timeout at line 46 uses gha_medium_timeout_minutes which matches the
mutex step timeout at line 109; if the mutex takes the full medium timeout the
job will be canceled immediately after acquiring the lock. Change the job-level
timeout on line 46 to a longer value (e.g., gha_long_timeout_minutes) while
leaving the mutex step timeout at gha_medium_timeout_minutes — or otherwise
ensure the job timeout is sufficiently larger than the mutex timeout — so
there’s buffer time for copier instantiation, dependency install, and pre-commit
checks.
jobs:
  pre-commit:
    runs-on: {% endraw %}{{ gha_linux_runner }}{% raw %}
    timeout-minutes: {% endraw %}{{ gha_medium_timeout_minutes }}{% raw %}
Job timeout may be insufficient—mutex and job use the same timeout value.
Both the job-level timeout (line 31) and the mutex step timeout (line 59) use gha_medium_timeout_minutes. If the mutex takes the full timeout duration to acquire the lock (e.g., due to queue contention), the job will be cancelled immediately after mutex acquisition, leaving no time for the actual pre-commit operations.
Consider using a longer timeout for the job (e.g., gha_long_timeout_minutes) while keeping the mutex at gha_medium_timeout_minutes, or ensure that gha_medium_timeout_minutes provides sufficient buffer beyond typical mutex acquisition times.
Also applies to: 59-59
🤖 Prompt for AI Agents
In template/.github/workflows/pre-commit.yaml.jinja-base around lines 31 and 59,
the job-level timeout at line 31 currently uses gha_medium_timeout_minutes which
matches the mutex step timeout at line 59 and risks the job being cancelled
immediately after the lock is acquired; change the job-level timeout to use a
longer variable (e.g., gha_long_timeout_minutes) while keeping the mutex step
using gha_medium_timeout_minutes (or otherwise ensure gha_medium_timeout_minutes
+ buffer is sufficient), and update any references or defaults so the long
timeout variable exists and is documented.
Pull request overview
This PR addresses CI timeout issues by introducing standardized timeout configurations across GitHub Actions workflows to prevent jobs from hanging indefinitely. It also updates various development tool versions including uv, copier, pre-commit, pytest, and VS Code extensions.
Key changes:
- Introduces four timeout tiers (2, 8, 15, and 45 minutes) as reusable context variables
- Adds job-level timeouts to workflow templates and repository workflows
- Updates tool versions: uv (0.9.7→0.9.11), copier (9.10.x→9.11.0), pre-commit (4.3.0→4.5.0), pytest (9.0.0→9.0.1), and others
- Updates VS Code extension versions in devcontainer configurations
Reviewed changes
Copilot reviewed 13 out of 14 changed files in this pull request and generated 1 comment.
Show a summary per file
| File | Description |
|---|---|
| extensions/context.py | Defines timeout variables (short, medium, long, xlong) and updates tool version constants |
| template/extensions/context.py.jinja-base | Adds timeout variable placeholders for template generation (missing xlong) |
| template/.github/workflows/*.jinja-base | Adds job-level timeouts and standardizes mutex timeouts using variables |
| template/.github/reusable_workflows/build-docker-image.yaml.jinja-base | Adds long timeout for build job and medium timeout for mutex |
| .github/workflows/ci.yaml | Adds job-level timeouts to pre-commit, lint-matrix, and required-check jobs |
| .github/workflows/tag-on-merge.yaml | Adds short timeout for tag creation job |
| pyproject.toml | Updates pytest and copier version requirements |
| uv.lock | Updates locked versions for pytest and copier dependencies |
| .devcontainer/install-ci-tooling.py | Updates CI tooling version constants |
| .devcontainer/devcontainer.json | Updates VS Code extension versions and devcontainer context hash |
| template/.devcontainer/devcontainer.json.jinja-base | Updates VS Code extension version placeholders |
Comments suppressed due to low confidence (2)
.github/workflows/ci.yaml:50
- Inconsistent timeout values: The job has `timeout-minutes: 8` (line 25), but the mutex step has `timeout-minutes: 30` (line 50). The step timeout should be less than the job timeout; otherwise the job will time out before the mutex can complete its wait. Consider reducing the mutex timeout to align with the job timeout, or increasing the job timeout to accommodate the mutex wait time.
```yaml
    timeout-minutes: 8
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v5.0.0
        with:
          persist-credentials: false
      - name: Setup Python
        uses: actions/setup-python@v6.0.0
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install Tooling
        run: python .devcontainer/install-ci-tooling.py
      - name: Install Dependencies
        run: |
          uv sync --frozen
          uv pip list
      - name: Set up mutex # Github concurrency management is horrible, things get arbitrarily cancelled if queued up. So using mutex until github fixes itself. When multiple jobs are modifying cache at once, weird things can happen. possible issue is https://github.com/actions/toolkit/issues/658
        if: ${{ runner.os != 'Windows' }} # we're just gonna have to YOLO on Windows, because this action doesn't support it yet https://github.com/ben-z/gh-action-mutex/issues/14
        uses: ben-z/gh-action-mutex@1ebad517141198e08d47cf72f3c0975316620a65 # v1.0.0-alpha.10
        with:
          branch: mutex-venv-${{ matrix.os }}-${{ matrix.python-version }}
          timeout-minutes: 30 # this is the amount of time this action will wait to attempt to acquire the mutex lock before failing, e.g. if other jobs are queued up in front of it
```
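The mismatch flagged above is mechanical enough to lint for. A hypothetical helper (regex-based, treating the first `timeout-minutes` in a job snippet as the job-level ceiling) could catch step timeouts that exceed it:

```python
import re


def overlong_step_timeouts(job_yaml: str) -> list[int]:
    """Return step-level timeout-minutes values that exceed the job-level one.

    Assumes the first timeout-minutes in the snippet is the job-level setting
    and every later one belongs to a step (true for the excerpt above; a real
    linter would parse the YAML structure instead).
    """
    values = [int(m.group(1))
              for m in re.finditer(r"timeout-minutes:\s*(\d+)", job_yaml)]
    if not values:
        return []
    job_ceiling = values[0]
    return [v for v in values[1:] if v > job_ceiling]
```

Run over the excerpt above, this would report the 30-minute mutex wait against the 8-minute job ceiling.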
.github/workflows/ci.yaml:134
- Inconsistent timeout values: The job has `timeout-minutes: 8` (line 79), but the mutex step has `timeout-minutes: 30` (line 134). The step timeout should be less than the job timeout; otherwise the job will time out before the mutex can complete its wait. Consider reducing the mutex timeout to align with the job timeout, or increasing the job timeout to accommodate the mutex wait time.
```yaml
    timeout-minutes: 8
    env:
      UV_PYTHON: ${{ matrix.python-version }}
    permissions:
      contents: write # needed for mutex
    steps:
      - uses: actions/checkout@v5.0.0
        with:
          persist-credentials: false
      - name: Setup Python
        uses: actions/setup-python@v6.0.0
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install Tooling
        run: python .devcontainer/install-ci-tooling.py
      - name: Instantiate copier template
        run: |
          copier copy --trust --vcs-ref ${{ github.sha }} ${{ matrix.copier }} --data python_version=${{ matrix.python-version }} . ./new-template
      - name: Delete files from initial repo
        run: |
          # Delete everything except the folder containing the instantiated template
          # https://stackoverflow.com/questions/34057047/delete-all-directories-except-one
          shopt -s extglob
          rm -rf !(new-template)
          rm -rf .github # apparently this folder doesn't get removed with the previous command for some reason
          rm -rf .devcontainer # apparently this folder doesn't get removed with the previous command for some reason
          ls -la
      - name: Move the instantiated template into the repo root
        run: |
          # Move all the files from the instantiated template out of the subfolder
          shopt -s dotglob # https://unix.stackexchange.com/questions/6393/how-do-you-move-all-files-including-hidden-from-one-directory-to-another
          mv new-template/* .
          ls -la
          # delete the subfolder
          rm -frd new-template
          ls -la
      - name: install new dependencies
        run: |
          uv sync
          uv pip list
          # Add everything to git so that pre-commit recognizes the files and runs on them
          git add .
          git status
      - name: Set up mutex # Github concurrency management is horrible, things get arbitrarily cancelled if queued up. So using mutex until github fixes itself. When multiple jobs are modifying cache at once, weird things can happen. possible issue is https://github.com/actions/toolkit/issues/658
        if: ${{ runner.os != 'Windows' }} # we're just gonna have to YOLO on Windows, because this action doesn't support it yet https://github.com/ben-z/gh-action-mutex/issues/14
        uses: ben-z/gh-action-mutex@d3d5b354d460d4b6a1e3ee5b7951678658327812 # v1.0.0-alpha.9
        with:
          branch: mutex-venv-${{ matrix.os }}-${{ matrix.python-version }}
          timeout-minutes: 30 # this is the amount of time this action will wait to attempt to acquire the mutex lock before failing, e.g. if other jobs are queued up in front of it
```
```python
context["gha_short_timeout_minutes"] = "{{ gha_short_timeout_minutes }}"
context["gha_medium_timeout_minutes"] = "{{ gha_medium_timeout_minutes }}"
context["gha_long_timeout_minutes"] = "{{ gha_long_timeout_minutes }}"
```

Missing `gha_xlong_timeout_minutes` variable definition. This variable is defined in extensions/context.py (line 108) but is not included here, creating an inconsistency between the two files. Add:

```python
context["gha_xlong_timeout_minutes"] = "{{ gha_xlong_timeout_minutes }}"
```
Link to Issue or Message thread
#111
Why is this change necessary?
Need some sane defaults to stop jobs from hanging indefinitely
How does this change address the issue?
Adds them
What side effects does this change have?
N/A
How is this change tested?
Downstream repos
Other
Bumped some other misc versions