Conversation
**Warning: Rate limit exceeded.** @ejfine has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 17 minutes and 16 seconds before requesting another review.

⌛ **How to resolve this issue?** After the wait time has elapsed, a review can be triggered. We recommend that you space out your commits to avoid hitting the rate limit.

🚦 **How do rate limits work?** CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information.

📒 Files selected for processing (4)
**Walkthrough**

PR updates development infrastructure including PNPM (10.23.0→10.24.0) and AWS CLI (2.31.11→2.32.6) versions, conditionalizes AWS features in devcontainer templates, refactors Windows helper and Python scripts, adds a `pr-short-num` workflow output, improves CloudFront/S3 handling in infrastructure code, and expands gitignore patterns.

Changes

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Possibly related PRs
Pre-merge checks: ✅ Passed checks (3 passed)

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
template/infrastructure/src/infrastructure/program.py (1)
37-37: **Fix return type mismatch between function signature and usage.**

The function `_upload_assets_to_s3` is declared to return `None` (line 37), but its return value is assigned to `all_uploads` (line 100) and later used as a dependency (line 148). This type mismatch will cause issues because:

- `depends_on` expects a Resource or list of Resources, not `None`
- The CloudFront invalidation won't properly wait for uploads to complete
Apply this diff to fix the issue:

**Option 1 (Recommended): Collect and return the upload resources**

```diff
-def _upload_assets_to_s3(*, bucket_id: Output[str], base_dir: Path) -> None:
+def _upload_assets_to_s3(*, bucket_id: Output[str], base_dir: Path) -> list[BucketObjectv2]:
+    uploads = []
     for dirpath, _, filenames in os.walk(base_dir):
         for filename in filenames:
             file_path = Path(dirpath) / filename
             # Compute the S3 key relative to the base directory.
             # For example, if base_dir is "./upload-dir" and file_path is "./upload-dir/docs/readme.txt",
             # then s3_key will be "docs/readme.txt".
             s3_key = os.path.relpath(file_path, base_dir)
             # Since resource names cannot have slashes, we replace them with dashes.
             resource_name = append_resource_suffix(s3_key.replace(os.sep, "-"), max_length=200)
-            _ = BucketObjectv2(
+            obj = BucketObjectv2(
                 resource_name,
                 content_type=_get_mime_type(file_path),
                 bucket=bucket_id,
                 key=s3_key,
                 source=pulumi.FileAsset(str(file_path)),
             )
+            uploads.append(obj)
+    return uploads
```

**Option 2: Don't assign the return value if it truly returns None**

```diff
-    all_uploads = _upload_assets_to_s3(
+    _upload_assets_to_s3(
         bucket_id=app_website_bucket.id, base_dir=repo_root / APP_DIRECTORY_NAME / ".output" / "public"
     )
```

And remove the `depends_on` or replace it with a different dependency strategy.

Also applies to: 100-102
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (17)
- `.copier-answers.yml` (1 hunks)
- `.devcontainer/devcontainer.json` (1 hunks)
- `.devcontainer/install-ci-tooling.py` (1 hunks)
- `.devcontainer/windows-host-helper.sh` (2 hunks)
- `.github/workflows/get-values.yaml` (1 hunks)
- `.github/workflows/hash_git_files.py` (1 hunks)
- `.github/workflows/pre-commit.yaml` (1 hunks)
- `.gitignore` (1 hunks)
- `extensions/context.py` (1 hunks)
- `template/.devcontainer/devcontainer.json.jinja` (2 hunks)
- `template/.devcontainer/windows-host-helper.sh` (2 hunks)
- `template/.github/workflows/get-values.yaml` (1 hunks)
- `template/.github/workflows/hash_git_files.py` (1 hunks)
- `template/.github/workflows/pre-commit.yaml` (1 hunks)
- `template/.gitignore` (1 hunks)
- `template/README.md.jinja` (2 hunks)
- `template/infrastructure/src/infrastructure/program.py` (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
.github/workflows/hash_git_files.py (1)
template/.github/workflows/hash_git_files.py (1)
`find_devcontainer_hash_line` (75-88)
template/.github/workflows/hash_git_files.py (1)
.github/workflows/hash_git_files.py (1)
`find_devcontainer_hash_line` (75-88)
🔇 Additional comments (19)
.gitignore (1)
80-80: ✓ **Correct macOS ignore pattern update.**

The recursive pattern `**/.DS_Store` is the standard best practice for ignoring macOS system files across all subdirectories. This change improves coverage compared to single-level patterns.

template/.gitignore (1)

80-80: ✓ **Consistent macOS ignore pattern across files.**

The pattern matches the corresponding update in the root `.gitignore`, ensuring consistent `.DS_Store` suppression across the project and its templates.

template/infrastructure/src/infrastructure/program.py (4)
5-5: **LGTM!**

The new imports are appropriate for the CloudFront invalidation feature and resource dependency management.

Also applies to: 14-14, 22-22

77-98: **LGTM! Successfully moved BucketPolicy creation outside of apply.**

The refactoring correctly moves the `BucketPolicy` resource creation outside of the apply block by computing `policy_json` as an Output first, then passing it to the BucketPolicy constructor. This aligns with the PR objective.

105-105: **LGTM!**

The use of `removeprefix("http://")` to extract the domain from the S3 website URL is safe and appropriate for the CloudFront origin configuration.
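For reference, `str.removeprefix` (Python 3.9+) strips the scheme only when it is actually present, which is why it is safe here; the bucket URL below is a made-up example:

```python
# Hypothetical S3 website endpoint, for illustration only.
website_url = "http://my-bucket.s3-website-us-east-1.amazonaws.com"
origin_domain = website_url.removeprefix("http://")

# No-op when the prefix is absent, unlike lstrip("http://"),
# which would strip any leading 'h', 't', 'p', ':' or '/' characters.
already_bare = "my-bucket.example.com".removeprefix("http://")
```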
106-140: **LGTM! Successfully moved CloudFront Distribution creation outside of apply.**

The CloudFront Distribution is now created directly rather than inside an apply block, which aligns with the PR objective. The configuration correctly handles Output values (like `origin_domain`) as constructor arguments.

extensions/context.py (1)
13-15: **PNPM template context version bump looks consistent.**

`pnpm_version` is now `10.24.0`, matching the installer script and keeping template context aligned with CI tooling; no issues spotted here.

Please just confirm there are no remaining references to `10.23.0` in the repo after this bump (e.g., via a quick search or relying on CI/pre-commit).

.copier-answers.yml (1)
2-2: **Copier template commit pin update is fine.**

Updating `_commit` to `v0.0.85-10-gd009c10` cleanly tracks the newer base template revision; no concerns.

template/.github/workflows/pre-commit.yaml (1)
71-72: **Whitespace-only tweak to pre-commit step; behavior unchanged.**

The `run: pre-commit run -a` line still invokes the same command; this just normalizes the literal string.

Rely on your existing workflow runs to confirm the step still executes as expected.
.devcontainer/devcontainer.json (1)
61-61: **Devcontainer context hash comment stays in sync with hash utility.**

The comment line format still matches `DEVCONTAINER_COMMENT_LINE_PREFIX`/suffix from the hashing script, with an updated hash value (`f74dddc7`), which is what we expect after context changes.

.github/workflows/pre-commit.yaml (1)
71-72: **Repository pre-commit workflow matches the template change.**

The `run: pre-commit run -a` adjustment mirrors the template workflow and does not alter behavior.

Let CI runs confirm the job still passes with this formatting change.
.devcontainer/install-ci-tooling.py (1)
10-12: **PNPM_VERSION bump aligns installer with template context.**

Setting `PNPM_VERSION = "10.24.0"` keeps this installer in sync with `pnpm_version` in `extensions/context.py`; no other logic impact.

After merging, it's worth quickly checking for any remaining `10.23.0` references (and letting a CI/devcontainer build verify the new pnpm install path).

template/.github/workflows/hash_git_files.py (1)
61-70: **Hash computation robustness + reverse scan both look good.**

- Narrowing the `compute_adler32` except to `IsADirectoryError` cleanly ignores directory-like entries (e.g., symlinks misreported as dirs) without swallowing other IO problems.
- Switching `find_devcontainer_hash_line` to `for i in reversed(range(len(lines))):` preserves behavior while making the reverse scan a bit clearer.

No issues spotted with either change.

Please run the hash script (or the pre-commit hook that wraps it) once on a Windows checkout to confirm the `IsADirectoryError` handling behaves as expected with any devcontainer-related symlinks.

Also applies to: 77-87
.github/workflows/hash_git_files.py (1)
61-70: **Repository hash_git_files script stays in lockstep with template.**

The narrowed `IsADirectoryError` catch and the `reversed(range(len(lines)))` loop match the template version and maintain the intended behavior of skipping directory-like entries while scanning from the end of the devcontainer file.

As with the template script, it's worth re-running the devcontainer-hash pre-commit/check on a real repo to ensure no unexpected IO errors surface after the exception tightening.
Also applies to: 77-87
template/.github/workflows/get-values.yaml (1)
12-14: **`pr-short-num` output is wired correctly; ensure callers handle non-PR events.**

The new `pr-short-num` workflow_call output correctly forwards `${{ jobs.get-values.outputs.pr-short-num }}`, which in turn comes from `steps.find-pr-num.outputs.number`. That matches the step's logic of truncating to the last two digits.

For events other than `pull_request`/`merge_group`, `find-pr-num` is skipped, so this output will be empty; just make sure any workflows consuming `pr-short-num` are prepared for that case.

Also applies to: 27-29
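Assuming the step literally keeps the last two digits of the PR number, as described above, the truncation behaves like this; `short_pr_num` is an illustrative name, not the workflow's actual code:

```python
def short_pr_num(pr_number) -> str:
    # Keep at most the last two digits: 1234 -> "34", 7 -> "7".
    return str(pr_number)[-2:]
```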
template/README.md.jinja (1)
1-5: **Dynamic badge URLs and raw blocks look correct.**

The README badges now parameterize org/repo via `{{ repo_org_name }}/{{ repo_name }}`, and the surrounding `{% raw %}...{% endraw %}` usage keeps the markdown itself from being pre-templated. The update command line is still clearly rendered. No issues from a templating or markdown perspective.

Also applies to: 17-17
.github/workflows/get-values.yaml (1)
12-14: **Mirrored `pr-short-num` output wiring looks good.**

This workflow's new `pr-short-num` output matches the template version, forwarding `${{ steps.find-pr-num.outputs.number }}` via the job outputs. The behavior and caveats (empty value for non-PR/merge_group events) are consistent and look intentional.

Also applies to: 27-29
template/.devcontainer/devcontainer.json.jinja (2)
27-36: **Conditional AWS Toolkit extension removal with correct syntax.**

Prefixing an extension ID with `-` in `customizations.vscode.extensions` is the standard way to remove or opt out of an extension in devcontainer.json. The `"-AmazonWebServices.aws-toolkit-vscode"` entry correctly uses this syntax, and the conditional around it ensures child templates don't inherit this override, keeping the behavior consistent with how the AWS CLI feature is gated.
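Rendered, the opt-out looks roughly like this fragment of devcontainer.json (surrounding keys elided):

```json
{
  "customizations": {
    "vscode": {
      "extensions": [
        "-AmazonWebServices.aws-toolkit-vscode"
      ]
    }
  }
}
```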
5-22: **Original review comment is incorrect: no JSON syntax error exists.**

The code at line 5 correctly outputs `"features": {` as valid JSON. The opening brace `{` is positioned inside the `{% raw %}` block on line 1, so it is emitted literally before the Jinja conditional begins. There is no stray `{{` in the rendered output, and the `"features"` object is properly opened.

The interleaving of `{% raw %}...{% endraw %}` blocks with Jinja conditionals is a valid pattern for mixing static JSON with conditional template logic. The current structure correctly produces a valid devcontainer.json file.

Likely an incorrect or invalid review comment.
```sh
sudo rm -rf "./$repoName" || true
sudo rm -rf "./$repoName/*.md"
```
**Apply the same safety guard for `sudo rm -rf` as in the template helper.**

This copy of the helper has the same pattern:

- `repoName` derived from `gitUrl` is used in `sudo rm -rf "./$repoName"`, which is intentionally destructive but should be guarded against empty / unsafe values like `.` or `/`.
- `sudo rm -rf "./$repoName/*.md"` is redundant after removing the entire directory.
Recommend mirroring a guard and simplification here as well, e.g.:

```diff
-repoName=$(basename "$gitUrl" .git)
-
-echo "Repo name extracted as '$repoName'"
-
-sudo rm -rf "./$repoName" || true
-sudo rm -rf "./$repoName/*.md"
+repoName=$(basename "$gitUrl" .git)
+
+if [ -z "$repoName" ] || [ "$repoName" = "." ] || [ "$repoName" = "/" ]; then
+  echo "Refusing to operate on unsafe repoName='$repoName'" >&2
+  exit 1
+fi
+
+echo "Repo name extracted as '$repoName'"
+
+sudo rm -rf "./$repoName" || true
```

Please keep this script and the template version aligned so future fixes stay consistent.
Also applies to: 45-49
🤖 Prompt for AI Agents
.devcontainer/windows-host-helper.sh around lines 30-31 (also apply same fix to
45-49): the script calls sudo rm -rf "./$repoName" and sudo rm -rf
"./$repoName/*.md" without validating repoName, which is dangerous and
redundant; add the same safety guard used in the template helper to validate
repoName is non-empty and does not equal "." or "/" (or contain dangerous path
chars), only then run a single sudo rm -rf "./$repoName"; remove the redundant
sudo rm -rf "./$repoName/*.md" line and apply the same guarded validation to the
other occurrence at lines 45-49 so both places stay consistent with the
template.
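The suggested guard can be exercised in isolation; `is_safe_repo_name` is a hypothetical helper wrapping the same checks, shown without any destructive commands:

```shell
#!/bin/sh
# Hypothetical guard mirroring the suggested fix: reject names that would make
# `sudo rm -rf "./$repoName"` dangerous.
is_safe_repo_name() {
  name="$1"
  if [ -z "$name" ] || [ "$name" = "." ] || [ "$name" = "/" ]; then
    return 1
  fi
  return 0
}

if is_safe_repo_name "my-repo"; then
  echo "would remove ./my-repo"
else
  echo "refusing unsafe name" >&2
fi
```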
```sh
sudo rm -rf "./$repoName" || true
sudo rm -rf "./$repoName/*.md"
```
**Guard destructive `sudo rm -rf` against bad `$repoName` and drop the redundant second `rm`.**

`repoName` is derived from user input and then used in `sudo rm -rf "./$repoName"`. If `gitUrl` is malformed or templating changes, this could accidentally target `.` or some unexpected directory. The follow-up `sudo rm -rf "./$repoName/*.md"` is redundant because the whole directory was already removed.

Recommend adding a sanity check before the `rm` and simplifying the cleanup to a single call, e.g.:
```diff
-gitUrl="$1"
-
-# Extract repository name (removes .git suffix if present)
-repoName=$(basename "$gitUrl" .git)
-
-echo "Repo name extracted as '$repoName'"
-
-sudo rm -rf "./$repoName" || true
-sudo rm -rf "./$repoName/*.md"
+gitUrl="$1"
+
+# Extract repository name (removes .git suffix if present)
+repoName=$(basename "$gitUrl" .git)
+
+if [ -z "$repoName" ] || [ "$repoName" = "." ] || [ "$repoName" = "/" ]; then
+  echo "Refusing to operate on unsafe repoName='$repoName'" >&2
+  exit 1
+fi
+
+echo "Repo name extracted as '$repoName'"
+
+sudo rm -rf "./$repoName" || true
```

The rsync section and excludes look good once the target directory safety is ensured.

Also applies to: 45-49
🤖 Prompt for AI Agents
In template/.devcontainer/windows-host-helper.sh around lines 30-31 (and also
apply the same fix to lines 45-49): the script runs destructive sudo rm -rf on a
variable-derived path and then redundantly attempts to remove *.md; add a sanity
check that $repoName is non-empty and does not equal . or .. and does not start
with / (and optionally validate against a safe pattern like [A-Za-z0-9._-]+),
aborting with a clear error if the check fails; then replace the two rm calls
with a single safe removal using the quoted relative path (e.g. rm -rf --
"./$repoName") after the guard; apply identical validation and single-call
removal at the other occurrence (lines 45-49).
```python
_ = Command(
    append_resource_suffix("app-cloudfront-invalidation"),
    create=app_cloudfront.id.apply(
        lambda distribution_id: f'aws cloudfront create-invalidation --distribution-id {distribution_id} --paths "/*" && echo {uuid4()}'
    ),
    opts=ResourceOptions(depends_on=all_uploads),
)
```
**UUID generation timing may not achieve intended behavior.**

The `uuid4()` call on line 146 is evaluated at program-definition time (when the Python code runs), not at execution time (when Pulumi applies changes). This means:

- The same UUID will be used across multiple `pulumi up` runs if the code doesn't change
- The CloudFront invalidation might not run on every deployment as intended
- Pulumi's Command resource uses the `create` string as part of determining whether to re-run
Consider one of these approaches:

**Option 1: Use Pulumi's built-in change detection**

```diff
-    create=app_cloudfront.id.apply(
-        lambda distribution_id: f'aws cloudfront create-invalidation --distribution-id {distribution_id} --paths "/*" && echo {uuid4()}'
-    ),
+    create=app_cloudfront.id.apply(
+        lambda distribution_id: f'aws cloudfront create-invalidation --distribution-id {distribution_id} --paths "/*"'
+    ),
+    triggers=[app_cloudfront.urn],
```

**Option 2: Include a timestamp or content hash to force re-runs**

```diff
+from datetime import datetime
+
 ...
     create=app_cloudfront.id.apply(
-        lambda distribution_id: f'aws cloudfront create-invalidation --distribution-id {distribution_id} --paths "/*" && echo {uuid4()}'
+        lambda distribution_id: f'aws cloudfront create-invalidation --distribution-id {distribution_id} --paths "/*" && echo {datetime.now().isoformat()}'
     ),
```

Note: Also address the `depends_on=all_uploads` issue flagged in the previous comment.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
**Suggestion (Option 1, triggers-based):**

```diff
 _ = Command(
     append_resource_suffix("app-cloudfront-invalidation"),
     create=app_cloudfront.id.apply(
-        lambda distribution_id: f'aws cloudfront create-invalidation --distribution-id {distribution_id} --paths "/*" && echo {uuid4()}'
+        lambda distribution_id: f'aws cloudfront create-invalidation --distribution-id {distribution_id} --paths "/*"'
     ),
+    triggers=[app_cloudfront.urn],
     opts=ResourceOptions(depends_on=all_uploads),
 )
```

**Suggestion (Option 2, timestamp-based):**

```diff
+from datetime import datetime
+
 _ = Command(
     append_resource_suffix("app-cloudfront-invalidation"),
     create=app_cloudfront.id.apply(
-        lambda distribution_id: f'aws cloudfront create-invalidation --distribution-id {distribution_id} --paths "/*" && echo {uuid4()}'
+        lambda distribution_id: f'aws cloudfront create-invalidation --distribution-id {distribution_id} --paths "/*" && echo {datetime.now().isoformat()}'
     ),
     opts=ResourceOptions(depends_on=all_uploads),
 )
```
🤖 Prompt for AI Agents
In template/infrastructure/src/infrastructure/program.py around lines 143-149,
the uuid4() is evaluated when the Python runs (definition time) so the Command
create string becomes static across runs; move UUID/timestamp/hash generation
into something evaluated at Pulumi apply time (for example generate the token
inside an apply on app_cloudfront.id or use a Pulumi random.UUID resource or a
changing timestamp/content-hash and pass it via the Command's triggers argument)
so the command string changes each deployment and forces re-run; also ensure
depends_on=all_uploads is a proper list of Resource objects (not an Output or
unsupported type) — if all_uploads is an Output or collection, unwrap or convert
it to a list of resources or attach explicit depends_on list so Pulumi honors
the upload dependency.
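The definition-time vs. apply-time distinction is easy to demonstrate in plain Python; the command strings below are abbreviated placeholders, and the point is only when `uuid4()` runs:

```python
from uuid import uuid4

# Definition time: the UUID is interpolated exactly once, when this line executes.
# Re-reading the variable later always yields the same string.
static_cmd = f'create-invalidation --paths "/*" && echo {uuid4()}'


def fresh_cmd() -> str:
    # A factory defers interpolation, yielding a fresh token per call, which is
    # analogous to regenerating the token inside an apply or a triggers entry.
    return f'create-invalidation --paths "/*" && echo {uuid4()}'
```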
**Why is this change necessary?**

Some resources for the website hosting were being created inside apply statements.

**How does this change address the issue?**

Moves the resource creation outside of apply.

**What side effects does this change have?**

N/A

**How is this change tested?**

Downstream repo.

**Other**

Pulled in some upstream template fixes.
Summary by CodeRabbit
New Features
Bug Fixes
Chores