fix: Docker #1077
Conversation
khandrew1 commented Dec 12, 2025
- Fixes our Docker container
Walkthrough

The Dockerfile has been restructured to consolidate multiple build stages into a streamlined pipeline. The separate client, SDK, and server builder stages are unified into a single builder stage, while the dependencies stage remains distinct. The production stage now explicitly manages artifact copying with ownership flags and sets up the non-root user earlier in the build process. The runtime command has been updated from a direct Node.js invocation to npm start. Layer optimization has been improved through consolidated RUN commands and strategic file ordering for better caching efficiency, while dumb-init and healthcheck provisions remain intact.
Actionable comments posted: 1
🧹 Nitpick comments (1)
Dockerfile (1)
64-65: Minor: Consolidate SDK artifact copying for clarity.

Lines 64–65 copy SDK artifacts in two separate `COPY` commands. This works but is verbose. Consider consolidating into a single directory copy for cleaner layering. Optionally consolidate:

```diff
-# Copy SDK dist and package.json (required at runtime)
-COPY --from=builder --chown=mcpjam:nodejs /app/sdk/dist ./sdk/dist
-COPY --from=builder --chown=mcpjam:nodejs /app/sdk/package.json ./sdk/package.json
+# Copy SDK dist and package.json (required at runtime)
+COPY --from=builder --chown=mcpjam:nodejs /app/sdk ./sdk
```

This assumes `sdk/` in the builder image contains only `dist/` and `package.json` (no source code or node_modules). Verify before applying.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`sdk/package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (1)
Dockerfile (2 hunks)
🧰 Additional context used
🔍 Remote MCP
I'll search for relevant context about the Docker optimization patterns and best practices mentioned in this PR to help with the review.

Summary of Additional Context for PR #1077
Based on my research of Docker best practices and optimization patterns, here is relevant context for reviewing the Dockerfile changes:
Multi-Stage Build Optimization
Multi-stage builds create more efficient and smaller final images by separating different build stages within a single Dockerfile. The PR's consolidation from separate client/server/sdk builder stages into a unified builder stage aligns with this pattern. "Cherry-picking" static build artifacts from potentially massive build images is important for final image optimization.
COPY --chown Usage
The PR's addition of `COPY --chown` during artifact copying is a best practice with significant impact. Using the `--chown` option of Docker's `COPY` command reduces build time versus changing ownership manually. It also matters for image size: a separate `RUN chown -R` modifies every file, so each file is stored twice (once in the copy layer and once in the chown layer), roughly doubling the size those files contribute. By combining the copy operation and ownership assignment in a single step, `COPY --chown` reduces the layers created during the build and keeps the final image smaller.
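Illustratively, the two patterns look like this (the stage and user names below are placeholders for illustration, not taken from this PR):

```dockerfile
# Two-step pattern: COPY writes one layer, then RUN chown rewrites every file,
# storing each file a second time in the chown layer.
COPY --from=build /app/dist ./dist
RUN chown -R app:app ./dist

# One-step pattern: ownership is set during the copy, producing a single layer.
COPY --from=build --chown=app:app /app/dist ./dist
```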
CMD Changes (npm start vs. node)
The PR's switch from `node dist/server index.js` to `npm start` deserves scrutiny. It is generally recommended to bake the entrypoint directly into the image: invoking Node.js without the npm wrapper reduces the number of processes running and lets exit signals like SIGTERM and SIGINT reach Node.js instead of being swallowed by npm. The PR description notes this aligns with "GitHub workflow expectations," but it may conflict with signal-handling best practices unless the `npm start` script is properly configured.
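As a sketch of the two invocation styles (the server path and the init wrapper here are assumptions for illustration, not confirmed from the PR diff):

```dockerfile
# Indirect: npm runs between the init process and Node.js, and may not
# forward SIGTERM/SIGINT to the Node.js process it spawns.
CMD ["dumb-init", "npm", "start"]

# Direct: Node.js itself receives the forwarded signals.
CMD ["dumb-init", "node", "dist/server/index.js"]
```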
Layer and Cache Optimization
By copying only package.json files first and installing dependencies before copying the rest of the code, dependency installation layers can be reused even when source code changes. The PR's mention of "reworked file inclusion order" suggests attention to cache optimization, which is important to verify in the actual diff.
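The ordering described above can be sketched as follows (the base image and file names are assumptions, not taken from the PR):

```dockerfile
FROM node:20-slim AS deps
WORKDIR /app
# Manifests first: this layer, and the npm ci layer below it, stay cached
# until package.json or package-lock.json actually changes.
COPY package.json package-lock.json ./
RUN npm ci
# Source last: edits here invalidate only the layers from this point on.
COPY . .
RUN npm run build
```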
Production Dependencies
Multi-stage builds keep final images lean by including only required dependencies, and enhance security by running as a non-root user. The PR's focus on explicit ownership setup is aligned with these security practices.
🔇 Additional comments (4)
Dockerfile (4)
4-15: Dependencies stage is well-structured and optimized.

Consolidating both root and SDK dependency installation into a single `RUN` reduces layer count, and the `npm ci` approach with pinned versions ensures reproducibility.
17-38: Builder stage is correct and caching-aware.

Inheriting from `deps` provides access to pre-installed dependencies, and file ordering prioritizes infrequently-changing files (configs) before source code. The consolidated `RUN` statement is efficient.
93-97: Verify signal handling with npm start and dumb-init.

The switch from `node dist/server index.js` to `npm start` introduces an extra process layer. While `dumb-init` improves signal forwarding, npm itself may not reliably propagate signals (SIGTERM, SIGINT) to the Node.js process. This can prevent graceful shutdown. Ensure that: (1) your npm start script uses `exec` to replace the npm process with Node.js, or (2) you verify that signal handling works as expected in production.

Does your `npm start` script (likely in `package.json`) use `exec node ...` to ensure signals reach Node.js? Or do you have another mechanism to verify signals are properly forwarded?
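The process-replacement behavior that `exec` provides can be demonstrated in plain POSIX shell, independent of npm: a wrapper that forks gives its child a new PID, while a wrapper that execs keeps the same PID, so signals sent to the wrapper's PID reach the final program directly. A minimal sketch, assuming a POSIX `sh`:

```shell
#!/bin/sh
# Forking wrapper: the inner sh is a child with its own PID.
# (The trailing ':' prevents shells from tail-exec'ing the last command.)
forked=$(sh -c 'echo "$$"; sh -c "echo \$\$"; :')

# Exec'ing wrapper: the inner sh replaces the wrapper and inherits its PID.
execed=$(sh -c 'echo "$$"; exec sh -c "echo \$\$"')

first_exec=$(printf '%s\n' "$execed" | head -n1)
second_exec=$(printf '%s\n' "$execed" | tail -n1)
first_fork=$(printf '%s\n' "$forked" | head -n1)
second_fork=$(printf '%s\n' "$forked" | tail -n1)

# With exec the two printed PIDs match; with a plain fork they differ.
[ "$first_exec" = "$second_exec" ] && echo "exec preserves the PID"
[ "$first_fork" != "$second_fork" ] && echo "fork creates a new PID"
```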
74-77: Clarify necessity of shared/ and .env.production in production image.

Lines 74 and 77 copy `shared/` (TypeScript source files) and `.env.production` into the production image. These warrant clarification:

- `shared/`: contains source `.ts` files that are unlikely to be needed at runtime. Verify whether your `npm start` command or `bin/` scripts actually require them; if not, removing them would reduce image size by ~1–5 MB.
- `.env.production`: typically .gitignored for secrets. Confirm this file exists in the repository and only contains non-sensitive, runtime-safe configuration values.

Can you clarify whether `shared/` and `.env.production` are required at runtime, and confirm that `.env.production` is in the repository and safe for production deployment?
```diff
+# Copy built artifacts with correct ownership (avoids slow chown -R on large dirs)
+COPY --from=builder --chown=mcpjam:nodejs /app/dist ./dist

-# Copy public assets (logos, etc.) to be served at root level
-COPY --from=client-builder /app/client/public ./public
+# Copy SDK dist and package.json (required at runtime)
+COPY --from=builder --chown=mcpjam:nodejs /app/sdk/dist ./sdk/dist
+COPY --from=builder --chown=mcpjam:nodejs /app/sdk/package.json ./sdk/package.json

-# Copy package.json and node_modules for runtime dependencies
-COPY --from=deps-base /app/package.json ./package.json
-COPY --from=deps-base /app/node_modules ./node_modules
+# Copy runtime dependencies with correct ownership
+# Using --chown here is critical - node_modules can have 50k+ files!
+COPY --from=deps --chown=mcpjam:nodejs /app/package.json ./package.json
+COPY --from=deps --chown=mcpjam:nodejs /app/node_modules ./node_modules
+COPY --from=deps --chown=mcpjam:nodejs /app/sdk/node_modules ./sdk/node_modules

-# Copy shared types
-COPY shared/ ./shared/

-# Copy any startup scripts
-COPY bin/ ./bin/

-# Create non-root user
-RUN groupadd --gid 1001 nodejs && \
-    useradd --uid 1001 --gid nodejs --shell /bin/bash --create-home mcpjam
+# Copy shared types and startup script
+COPY --chown=mcpjam:nodejs shared/ ./shared/
+COPY --chown=mcpjam:nodejs bin/ ./bin/
+# Copy .env.production for runtime (not needed during build)
+COPY --chown=mcpjam:nodejs .env.production ./
```
Critical: SDK package-lock.json missing in production image.
Line 65 copies `sdk/package.json` but not `sdk/package-lock.json`. If any npm commands run at runtime (or during container initialization), dependency versions cannot be guaranteed. This breaks reproducibility for the SDK layer.
Add the missing package-lock.json:

```diff
 # Copy SDK dist and package.json (required at runtime)
 COPY --from=builder --chown=mcpjam:nodejs /app/sdk/dist ./sdk/dist
 COPY --from=builder --chown=mcpjam:nodejs /app/sdk/package.json ./sdk/package.json
+COPY --from=builder --chown=mcpjam:nodejs /app/sdk/package-lock.json ./sdk/package-lock.json
```

🤖 Prompt for AI Agents
In Dockerfile around lines 60 to 77, the SDK's package-lock.json is not copied
into the production image which risks non-reproducible SDK installs; add a COPY
--chown=mcpjam:nodejs /app/sdk/package-lock.json ./sdk/package-lock.json
alongside the existing sdk/package.json COPY so the lockfile is present in the
runtime image and preserves exact SDK dependency versions.