ci: add Docker Hub auto-publish workflow on v-tag push #56

banjuer wants to merge 4 commits into AmintaCCCP:main
Conversation
📝 Walkthrough

Converted the app to a single Node/Express container with JWT auth, per-user DB scoping, admin user management, a background scheduler and release monitor, notification delivery, proxy-based GitHub access, PWA support, and CI for multi-arch Docker image publishing.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Cron as Scheduler (node-cron)
    participant ReleaseMon as ReleaseMonitor
    participant DB as SQLite DB
    participant Proxy as Backend Proxy
    participant GitHub as GitHub API
    participant Notify as Notification Service (Apprise)
    Cron->>ReleaseMon: trigger (hourly / startup)
    ReleaseMon->>DB: SELECT users with apprise_url and encrypted github_token
    DB-->>ReleaseMon: users + tokens
    ReleaseMon->>DB: decrypt github_token (using encryptionKey)
    ReleaseMon->>DB: SELECT subscribed repositories for user
    DB-->>ReleaseMon: repo list
    loop per repo
        ReleaseMon->>Proxy: POST /api/proxy (include user token)
        Proxy->>GitHub: proxied request with Authorization
        GitHub-->>Proxy: release payload
        Proxy-->>ReleaseMon: release data
        alt new release detected
            ReleaseMon->>DB: INSERT new release (user_id scoped)
            ReleaseMon->>Notify: sendNotification(user.apprise_url, title, body)
            Notify-->>ReleaseMon: 200/OK or error
        end
    end
```
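The per-repo branch in the diagram is a check-then-insert dedup: only releases the DB has not seen yet trigger an INSERT and a notification. A minimal sketch of that filter, where `selectNewReleases`, `knownReleaseIds`, and `fetchedReleases` are hypothetical stand-ins for the DB lookup and the proxied GitHub payload, not the project's actual code:

```javascript
// Return only the releases that have not been recorded yet for this repo.
// knownReleaseIds: ids already in the releases table (user_id scoped).
// fetchedReleases: the release payload returned through the proxy.
function selectNewReleases(knownReleaseIds, fetchedReleases) {
  const seen = new Set(knownReleaseIds);
  return fetchedReleases.filter(r => !seen.has(r.id));
}
```

Each release in the returned list is what the monitor would INSERT and hand to `sendNotification`.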
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/docker-publish.yml:
- Around lines 24-28: The workflow currently initializes "Set up Docker Buildx"
before "Set up QEMU (for multi-platform builds)", which prevents Buildx from
detecting emulation support; swap the two steps so the QEMU setup
(docker/setup-qemu-action@v3) runs before the Docker Buildx setup
(docker/setup-buildx-action@v3) to ensure proper multi-platform builder
configuration.
📒 Files selected for processing (1)
.github/workflows/docker-publish.yml
```yaml
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Set up QEMU (for multi-platform builds)
        uses: docker/setup-qemu-action@v3
```
🧩 Analysis chain
🌐 Web query:
docker build-push-action multi-platform setup order QEMU Buildx
💡 Result:
For docker/build-push-action multi-platform builds on a typical GitHub-hosted ubuntu-latest runner, the usual setup order is:

1. Checkout
2. Set up QEMU (binfmt emulation): needed if you're building non-native architectures on the runner, e.g. building `linux/arm64` on an `amd64` runner (docs.docker.com)
3. Set up Buildx: creates/boots a Buildx builder; the default `docker-container` driver supports multi-platform builds (github.com)
4. Login to the registry (Docker Hub / GHCR / ECR / etc.)
5. Build & push with `platforms:` (docs.docker.com)
Minimal example:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-qemu-action@v3   # 2) QEMU first (for emulation)
      - uses: docker/setup-buildx-action@v3 # 3) then create/boot the builder
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          push: true
          tags: ghcr.io/OWNER/IMAGE:latest
          platforms: linux/amd64,linux/arm64
```

Notes:
- If you use native ARM runners (or otherwise don't need emulation), you can often skip the QEMU step. (docs.docker.com)
Citations:
- https://docs.docker.com/build/ci/github-actions/multi-platform/
- https://github.com/docker/setup-buildx-action
QEMU must be set up before Docker Buildx for multi-platform builds.
According to Docker's official documentation for multi-platform CI/CD on GitHub Actions, QEMU should be initialized first so that Buildx can detect emulation support when creating the builder. The current setup initializes Buildx before QEMU is available, which prevents proper multi-platform configuration.
🔧 Proposed fix to reorder setup steps

```diff
-      - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
-
       - name: Set up QEMU (for multi-platform builds)
         uses: docker/setup-qemu-action@v3
+      - name: Set up Docker Buildx
+        uses: docker/setup-buildx-action@v3
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
      - name: Set up QEMU (for multi-platform builds)
        uses: docker/setup-qemu-action@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
```
Actionable comments posted: 8
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
server/src/routes/repositories.ts (1)
59-75: ⚠️ Potential issue | 🟡 Minor

Reuse the escaped search pattern in the count query.

The page query escapes `%`, `_`, and `\`, but the COUNT query does not. Searches containing wildcard characters will return a `total` that does not match the actual result set.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/src/routes/repositories.ts` around lines 59 - 75, the COUNT query uses the raw `search` pattern while the main SELECT escapes wildcard chars and uses `searchPattern`, causing mismatched totals; update the count branch (where `countSql` and `countParams` are built) to reuse the escaped pattern and ESCAPE clause used earlier, i.e., apply the same escape of `%`, `_`, and `\` to produce `searchPattern`, push five copies of that into `countParams`, and include "ESCAPE '\\'" in the COUNT's WHERE LIKE clauses so the COUNT uses the identical matching logic as the SELECT that produces `rows` (see variables `escaped`, `searchPattern`, `sql`, `params`, `countSql`, and `countParams`).

server/src/routes/categories.ts (2)
153-162: ⚠️ Potential issue | 🔴 Critical

Use the inserted asset-filter id for the readback too.

This has the same bug as categories: Line 156 writes a TEXT id, but Line 161 queries by `lastInsertRowid`. Successful inserts can still fail during response construction.

🐛 Proposed fix

```diff
-    const result = db.prepare(
+    const assetFilterId = typeof id === 'string' && id ? id : crypto.randomUUID();
+    db.prepare(
       'INSERT INTO asset_filters (id, user_id, name, description, keywords, platform, sort_order) VALUES (?, ?, ?, ?, ?, ?, ?)'
     ).run(
-      id ?? crypto.randomUUID(), userId, name ?? '', description ?? null,
+      assetFilterId, userId, name ?? '', description ?? null,
       JSON.stringify(keywords ?? []), platform ?? null, sort_order ?? 0
     );
-    const row = db.prepare('SELECT * FROM asset_filters WHERE id = ?').get(result.lastInsertRowid) as Record<string, unknown>;
+    const row = db.prepare('SELECT * FROM asset_filters WHERE id = ? AND user_id = ?').get(assetFilterId, userId) as Record<string, unknown>;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/src/routes/categories.ts` around lines 153 - 162, the SELECT after inserting into asset_filters should use the actual text id used for the INSERT rather than result.lastInsertRowid; compute and store the id value first (e.g., const insertId = id ?? crypto.randomUUID()), pass insertId into the .run call for db.prepare('INSERT INTO asset_filters ...'), and then use that same insertId as the parameter to the subsequent SELECT ('SELECT * FROM asset_filters WHERE id = ?') so transformAssetFilter(row) reads the correct row.
50-59: ⚠️ Potential issue | 🔴 Critical

Read the new category back by its UUID, not `lastInsertRowid`.

Line 53 inserts a TEXT id, so `result.lastInsertRowid` is not the value stored in `categories.id`. The follow-up `SELECT` can return `undefined` and turn a successful create into a 500.

🐛 Proposed fix

```diff
-    const result = db.prepare(
+    const categoryId = typeof id === 'string' && id ? id : crypto.randomUUID();
+    db.prepare(
       'INSERT INTO categories (id, user_id, name, description, keywords, color, icon, sort_order) VALUES (?, ?, ?, ?, ?, ?, ?, ?)'
     ).run(
-      id ?? crypto.randomUUID(), userId, name ?? '', description ?? null,
+      categoryId, userId, name ?? '', description ?? null,
       JSON.stringify(keywords ?? []), color ?? null, icon ?? null, sort_order ?? 0
     );
-    const row = db.prepare('SELECT * FROM categories WHERE id = ?').get(result.lastInsertRowid) as Record<string, unknown>;
+    const row = db.prepare('SELECT * FROM categories WHERE id = ? AND user_id = ?').get(categoryId, userId) as Record<string, unknown>;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/src/routes/categories.ts` around lines 50 - 59, the SELECT that reads back the inserted category uses result.lastInsertRowid, but the INSERT sets a TEXT id (id ?? crypto.randomUUID()), so the read should query by the actual id used. Fix by capturing the final id value (e.g., const usedId = id ?? crypto.randomUUID()) before the INSERT, use usedId in the INSERT parameters and in the subsequent db.prepare('SELECT * FROM categories WHERE id = ?').get(usedId) call (fall back to result.lastInsertRowid only if absolutely necessary), then pass the retrieved row to transformCategory and return it.

src/services/githubApi.ts (1)
39-49: ⚠️ Potential issue | 🟡 Minor

Don't map every proxy 401 to a GitHub token failure.

After moving this traffic through the backend proxy, a 401 can now come from the app's own auth layer before GitHub is ever called. Returning "GitHub token expired or invalid" for that case will send users down the wrong recovery path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/githubApi.ts` around lines 39 - 49, the code currently maps every 401 to "GitHub token expired or invalid"; change the logic in the response handling (the block around the fetch to targetUrl) so you only treat 401 as a GitHub token failure when the response is actually from GitHub (e.g., check response.url.includes('api.github.com') or another GitHub-specific response header); otherwise throw the generic API error string (`GitHub API error: ${response.status} ${response.statusText}`); update the condition that throws inside the fetch response handling so unauthorized responses from the proxy are not misclassified as token issues.
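The distinction the comment asks for can be reduced to a small classifier; `classifyAuthFailure` and its return values are hypothetical names, and checking the final `response.url` for `api.github.com` is just one possible origin signal, not necessarily the project's chosen check:

```javascript
// Decide what a 401 means based on where the response actually came from.
// Returns a coarse category the caller can branch on for error messaging.
function classifyAuthFailure(status, responseUrl) {
  if (status !== 401) return 'other';
  // Only a 401 that GitHub itself produced implies a bad GitHub token.
  if (responseUrl.includes('api.github.com')) return 'github-token-invalid';
  // A 401 from the app's own /api proxy means the app session failed first.
  return 'app-auth-failed';
}
```

The UI can then route `github-token-invalid` to the token-settings flow and `app-auth-failed` to a re-login prompt.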
🟠 Major comments (13)
src/services/backendAdapter.ts-6-15 (1)
6-15: ⚠️ Potential issue | 🟠 Major

Don't report the backend as available before proving it.

Hard-coding `/api` and returning `true` here removes the previous fallback behavior. When the API is down or the frontend is started without the server, app startup still proceeds into `syncFromBackend()`/auto-sync as if the backend were healthy, and then fails on the first real request. Keep availability tied to a real health probe, or make the backend a hard requirement everywhere and remove the stale fallback paths/docs consistently.

Suggested fix

```diff
 class BackendAdapter {
-  private _backendUrl: string = '/api';
+  private _backendUrl: string | null = null;
+  private _isAvailable = false;

   async init(): Promise<void> {
-    // In monolith, we just use /api which is proxied in dev or same-origin in prod
-    console.log('✅ Backend initialized at /api');
+    try {
+      const res = await this.fetchWithTimeout('/api/health', undefined, 5000);
+      if (!res.ok) throw new Error('health check failed');
+      this._backendUrl = '/api';
+      this._isAvailable = true;
+      console.log('✅ Backend initialized at /api');
+    } catch {
+      this._backendUrl = null;
+      this._isAvailable = false;
+    }
   }

   get isAvailable(): boolean {
-    return true; // Assume available in monolith
+    return this._isAvailable;
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/backendAdapter.ts` around lines 6 - 15, the code currently hardcodes _backendUrl and always returns true from isAvailable, which hides backend outages; change init() to actually probe the backend (e.g., GET `${this._backendUrl}/health` or another health endpoint) and set a private flag (e.g., _available: boolean) based on the probe result, have get isAvailable() return that flag (not true), and ensure callers like syncFromBackend() check isAvailable() before proceeding (or make init() throw if the backend is required everywhere). Update _backendUrl initialization only if you still want a default, but don't assume availability without the health probe.

server/src/services/releaseMonitor.ts-66-100 (1)
66-100: ⚠️ Potential issue | 🟠 Major

Separate release ingestion from notification state.

Every unseen release is inserted and treated as handled immediately. That will backfill-alert on the last five historical releases, and a transient `sendNotification` failure is never retried because the row already exists on the next pass.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/src/services/releaseMonitor.ts` around lines 66 - 100, the current loop inserts every unseen ghRelease (INSERT INTO releases ...) and immediately treats it as handled, which prevents retrying sendNotification failures; change to separate ingestion from notification state: insert the new release row with a stable unread flag (keep is_read = 0) but do NOT mark it as notified on insert, then attempt sendNotification(...) and only after a successful send update the record with a distinct notification flag (add or use a column like notification_sent or notified_at instead of conflating with is_read); perform the insert + optional update inside a transaction (use the same db.prepare/transaction flow around the INSERT INTO releases and an UPDATE releases SET notification_sent = 1 WHERE id = ?) so failed notifications leave the row in a retryable state for the next run while keeping githubReleases/ghRelease and sendNotification as the integration points.

README_zh.md-168-173 (1)
168-173: ⚠️ Potential issue | 🟠 Major

Do not mark `API_SECRET` as optional here.

Line 171 says the server falls back to a default JWT secret, but `server/src/index.ts` lines 80-82 warn that auth is disabled when `API_SECRET` is missing. That mismatch can lead to an unintentionally unauthenticated deployment.

Suggested doc fix

```diff
 | 变量 | 必填 | 说明 |
 |----------|----------|-------------|
-| `API_SECRET` | 否 | JWT 签名密钥(如不设置将使用默认值)。 |
+| `API_SECRET` | 是(生产环境) | JWT 签名密钥。未设置时认证将被禁用,请务必在部署时配置。 |
 | `ENCRYPTION_KEY` | 否 | 用于加密存储密钥的 AES-256 密钥。未设置时自动生成。 |
 | `PORT` | 否 | 服务器端口(默认:3000) |
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README_zh.md` around lines 168 - 173, the README currently marks API_SECRET as optional, but the server startup logic checks process.env.API_SECRET and warns that "auth is disabled" when it's missing; update the README entry for `API_SECRET` to mark it as required (remove "否"/optional), and add a short note that omitting `API_SECRET` disables authentication (matching the server behavior), referencing the environment variable `API_SECRET` and the server startup auth warning so readers see the exact behavior; leave `ENCRYPTION_KEY` and `PORT` as-is.

server/src/services/releaseMonitor.ts-7-18 (1)
7-18: ⚠️ Potential issue | 🟠 Major

Prevent overlapping monitor sweeps.

`cron.schedule` won't wait for the previous async run, and the 10-second startup check can overlap the hourly tick. Because release detection is check-then-insert, concurrent sweeps can double-send notifications or race the same insert.

Suggested fix

```diff
 export function startReleaseMonitor() {
+  let running = false;
+  const run = async () => {
+    if (running) return;
+    running = true;
+    try {
+      await checkNewReleases();
+    } finally {
+      running = false;
+    }
+  };
+
   // Run every hour
   cron.schedule('0 * * * *', async () => {
     console.log('⏰ Running release monitor job...');
-    await checkNewReleases();
+    await run();
   });

   // Also run once on startup after a short delay
   setTimeout(() => {
     console.log('🚀 Initial release monitor check...');
-    checkNewReleases().catch(err => console.error('Error in initial release check:', err));
+    run().catch(err => console.error('Error in initial release check:', err));
   }, 10000);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/src/services/releaseMonitor.ts` around lines 7 - 18, the monitor can run overlapping sweeps; add a module-scoped guard (e.g., let isReleaseMonitorRunning = false) in startReleaseMonitor and use it in both the cron handler and the startup setTimeout to skip if already running; before calling checkNewReleases() set the flag, await checkNewReleases() inside a try/finally and clear the flag in finally (and handle/log errors in catch if desired) so concurrent executions are prevented; reference the existing startReleaseMonitor and checkNewReleases functions when applying this change.

server/src/index.ts-51-62 (1)
51-62: ⚠️ Potential issue | 🟠 Major

Keep the SPA fallback off `/api`.

An authenticated `GET /api/...` that misses every API route now falls through to `app.get('*')` and returns `index.html` with 200 instead of a JSON 404. Add an `/api` not-found handler, restrict the fallback to non-API paths, and keep `errorHandler` last so `sendFile` failures still hit it.

Suggested fix

```diff
-  // Global error handler
-  app.use(errorHandler);
-
   // Serve static UI in production
   const __dirname = path.dirname(new URL(import.meta.url).pathname);
   const frontendDistPath = path.resolve(__dirname, '../../dist');
   app.use(express.static(frontendDistPath));

+  app.use('/api', (_req, res) => {
+    res.status(404).json({ error: 'Route not found', code: 'ROUTE_NOT_FOUND' });
+  });
+
   // Handle SPA routing
-  app.get('*', (req, res) => {
+  app.get(/^(?!\/api(?:\/|$)).*/, (_req, res) => {
     res.sendFile(path.join(frontendDistPath, 'index.html'));
   });

+  // Global error handler
+  app.use(errorHandler);
+
   return app;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/src/index.ts` around lines 51 - 62, add an explicit API not-found handler and prevent the SPA fallback from catching API routes: after mounting static assets (express.static(frontendDistPath)) register a handler for API misses (e.g., app.use('/api', (req, res) => res.status(404).json({ error: 'Not Found' }))), then replace the catch-all app.get('*') with a route that only serves index.html for non-API paths (for example, a middleware that checks req.path and calls res.sendFile(frontendDistPath/index.html) when the path does not start with '/api'); ensure errorHandler (the errorHandler middleware used via app.use(errorHandler)) remains the last middleware so sendFile failures still propagate to it.

server/src/db/schema.ts-21-21 (1)
21-21: ⚠️ Potential issue | 🟠 Major

Add foreign keys for the new ownership columns.

These `user_id` fields are just integers right now. Deleting a user will leave orphaned repositories, releases, configs, filters, and settings behind, and nothing prevents rows from pointing at nonexistent users. Add `REFERENCES users(id)` and choose the delete behavior explicitly, typically `ON DELETE CASCADE`.

Also applies to: 50-50, 67-67, 76-76, 90-90, 101-101, 108-111

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/src/db/schema.ts` at line 21, the user_id columns are plain INTEGERs and must become foreign keys to prevent orphans; update each table definition (e.g., repositories.user_id, releases.user_id, configs.user_id, filters.user_id, settings.user_id) to add "REFERENCES users(id)" and an explicit delete policy (typically "ON DELETE CASCADE" or another chosen behavior) so deletions of users cascade or restrict as intended; modify the CREATE TABLE statements in server/src/db/schema.ts to replace "user_id INTEGER NOT NULL" with "user_id INTEGER NOT NULL REFERENCES users(id) ON DELETE CASCADE" (or your chosen policy) for each affected table.

server/src/db/migrations.ts-26-43 (1)
26-43: ⚠️ Potential issue | 🟠 Major

Rebuild the old tables in v2 instead of only adding `user_id`.

This migration never recreates the existing tables, so upgraded databases keep their old primary/unique constraints. That means `repositories` can remain globally keyed on the old shape, and `settings` can remain keyed only by `key`, which breaks the new multi-user model on existing installs.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/src/db/migrations.ts` around lines 26 - 43, the migration currently only adds user_id in the ALTER TABLE loop (tables variable, defaultUserId), but it must rebuild the old tables so the new primary/unique constraints include user_id; update the migration to, for each table in tables, create a new table with the v2 schema (including user_id in PK/unique indexes), copy rows from the old table into the new table (setting defaultUserId where missing), drop the old table, and rename the new table to the original name (recreating indexes/constraints afterwards); ensure this process runs inside a transaction and handle exceptions (skip if the new schema is already present) so repositories and settings gain the new multi-user keys rather than keeping the old global uniques.

server/src/db/migrations.ts-17-23 (1)
17-23: ⚠️ Potential issue | 🟠 Major

Bootstrap admin needs a valid bcrypt hash, not plain text.

The login flow uses `bcrypt.compare(password, user.password_hash)` to verify credentials. Seeding `password_hash` with the plain text string `'CHANGE_ME'` will cause authentication to fail for the default admin user on migrated instances, as bcrypt will reject a non-hash value. Store an actual bcrypt hash or create the account in a forced-reset state.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/src/db/migrations.ts` around lines 17 - 23, when inserting the bootstrap admin (the block that checks hasRepos.c > 0 && hasUsers.c === 0 and sets defaultUserId), replace the plain 'CHANGE_ME' password string with a real bcrypt hash or create the user in a forced-reset state; specifically, generate a bcrypt hash (e.g., via bcrypt.hash or bcrypt.hashSync) for the temporary password and store that value in the password_hash column, or add a reset_required/force_reset flag on the users row and set password_hash to a valid hash placeholder, so bcrypt.compare(password, user.password_hash) will not fail for the seeded SuperAdmin.

server/src/db/migrations.ts-45-47 (1)
45-47: ⚠️ Potential issue | 🟠 Major

`ALTER TABLE ... ADD COLUMN` rejects `DEFAULT (datetime('now'))` in SQLite; use a literal instead.

SQLite `ALTER TABLE ... ADD COLUMN` only permits literal (constant) defaults: `NULL`, numeric values, strings, or blobs. Expressions like `datetime('now')` are rejected, even though they work in `CREATE TABLE`. Since `updated_at` is `NOT NULL`, you must provide a non-NULL literal default (e.g., `DEFAULT '0001-01-01 00:00:00'` or a Unix timestamp). Without this fix, the migration fails; if caught by the try-catch block, the column silently goes missing on upgraded databases.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/src/db/migrations.ts` around lines 45 - 47, the ALTER TABLE in the migration uses a non-literal default (datetime('now')), which SQLite rejects; update the migration that executes db.exec("ALTER TABLE settings ADD COLUMN updated_at ...") so the ADD COLUMN uses a literal default (e.g., DEFAULT '0001-01-01 00:00:00' or a numeric timestamp), or alternatively add the column nullable, then backfill and enforce NOT NULL; modify the migration around the function handling db.exec for the settings table/updated_at column to apply one of these approaches and ensure existing rows are backfilled before enforcing NOT NULL.

DOCKER.md-12-16 (1)
12-16: ⚠️ Potential issue | 🟠 Major

Show a stable `ENCRYPTION_KEY` in the run examples.

This app stores encrypted credentials under `/app/data`. If the container auto-generates a new `ENCRYPTION_KEY` after a recreate, those persisted values become unreadable on the next start, so documenting it as optional is dangerous for real deployments.

📝 Proposed doc fix

```diff
 docker run -d \
   -p 8080:3000 \
+  -e ENCRYPTION_KEY=replace-with-a-stable-32-byte-secret \
   -v gsm-data:/app/data \
   --name gsm \
   banjuer/github-stars-manager:latest
@@
 services:
   app:
     image: banjuer/github-stars-manager:latest
+    environment:
+      ENCRYPTION_KEY: replace-with-a-stable-32-byte-secret
     ports:
       - "8080:3000"
@@
-| `ENCRYPTION_KEY` | 32-byte key for sensitive data (auto-generated if missing). |
+| `ENCRYPTION_KEY` | Required stable 32-byte key used to decrypt stored secrets across restarts. |
```

Also applies to: 25-30, 54-61

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@DOCKER.md` around lines 12 - 16, update the docker run examples to explicitly set a persistent ENCRYPTION_KEY environment variable instead of leaving it optional; modify the shown docker run commands (the blocks that currently mount /app/data and run banjuer/github-stars-manager:latest) to include -e ENCRYPTION_KEY="REPLACE_WITH_STABLE_KEY" (or instruct readers to fetch it from a secrets manager) so the persisted encrypted credentials remain readable across recreates, and make the same change in the other example blocks that demonstrate running the container.

src/components/SettingsPanel.tsx-507-519 (1)
507-519: ⚠️ Potential issue | 🟠 Major

Don't expose a test button that never sends a test.
This handler only shows an alert and flips local loading state. The new "Test Notification" action looks implemented in the UI, but it never exercises the Apprise URL at all.
Also applies to: 1138-1150
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/SettingsPanel.tsx` around lines 507 - 519, the Test button handler (handleTestApprise) currently only shows an alert and toggles setIsTestingApprise without sending anything; change it to perform a real test by calling the backend test endpoint (e.g., POST /api/notifications/test) or reuse the existing sendNotification backend logic (invoke the same server-side function used for notifications), passing the saved Apprise URL and a small test payload; handle success/error responses to show user feedback, and ensure setIsTestingApprise is set true before the request and false in finally; locate handleTestApprise and update its body to make the network request and surface errors instead of just alerting.

server/src/routes/auth.ts-9-10 (1)
9-10: ⚠️ Potential issue | 🟠 Major

Use a dedicated, required JWT secret here.

Reusing `config.encryptionKey` for signing and silently falling back to `'fallback_secret_for_dev_only'` weakens auth. If the encryption key rotates or is auto-generated, existing sessions break; if config is missing, the fallback becomes a shared signing secret.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/src/routes/auth.ts` around lines 9 - 10, replace the current reuse/fallback pattern for JWT signing: stop assigning JWT_SECRET = config.encryptionKey || 'fallback_secret_for_dev_only'. Instead require a dedicated signing secret (e.g., config.jwtSecret) and fail fast if it's absent (throw or exit during startup) so the app never runs with a weak default; keep JWT_EXPIRES_IN as-is. Update any references that read JWT_SECRET (in auth middleware/handlers) to use the new config.jwtSecret variable and document that it must be provided in production.

src/components/SettingsPanel.tsx-1-1 (1)
1-1: ⚠️ Potential issue | 🟠 Major

Keep the Apprise field synced and clearable.

`appriseUrlInput` is captured from `backendUser` only on first render, so it stays stale if the profile arrives later. Then `appriseUrlInput || undefined` turns an intentional clear into "leave it unchanged", so users cannot remove an existing notification URL.

🐛 Proposed fix

```diff
-import React, { useState } from 'react';
+import React, { useEffect, useState } from 'react';
@@
-  const [appriseUrlInput, setAppriseUrlInput] = useState(backendUser?.apprise_url || '');
+  const [appriseUrlInput, setAppriseUrlInput] = useState('');
+
+  useEffect(() => {
+    setAppriseUrlInput(backendUser?.apprise_url || '');
+  }, [backendUser?.apprise_url]);
@@
       const updated = await authService.updateProfile({
-        apprise_url: appriseUrlInput || undefined,
+        apprise_url: appriseUrlInput,
         password: newPasswordSync || undefined
       });
```

Also applies to: 72-73, 483-489

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/SettingsPanel.tsx` at line 1, the appriseUrlInput state in SettingsPanel is initialized from backendUser only on first render and remains stale; also, using `appriseUrlInput || undefined` treats an intentional empty string as "leave unchanged", so users can't clear the URL. Update the component to (1) set appriseUrlInput via a useEffect that watches backendUser (or its notification URL) so the input stays synced when the profile arrives/updates, and (2) change the submission/patch logic that currently uses `appriseUrlInput || undefined` to send undefined only when the user has not touched the field (or use a distinct sentinel) and send an empty string (or explicit null) when the user cleared the input so the backend removes the stored URL; key symbols: SettingsPanel component, appriseUrlInput state, backendUser, and the submit/patch function handling the apprise value.
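The clear-vs-unchanged distinction the prompt describes reduces to a small helper; `buildAppriseUrlPatch` and its `touched` flag are hypothetical illustration names, not the component's actual API:

```javascript
// Build the apprise_url field for a profile PATCH request.
// - If the user never edited the field, return undefined (omit = leave unchanged).
// - If the user cleared it, return '' so the backend deletes the stored URL.
// - Otherwise return the trimmed value.
function buildAppriseUrlPatch(touched, value) {
  if (!touched) return undefined;       // field untouched: don't send it at all
  const trimmed = value.trim();
  return trimmed === '' ? '' : trimmed; // '' is an explicit clear, not a skip
}
```

With this shape, the backend can distinguish "field absent" (no change) from "field is empty string" (remove the URL).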
🟡 Minor comments (4)
README_zh.md-114-118 (1)
114-118: ⚠️ Potential issue | 🟡 Minor

Remove the duplicated `目标用户` section.

This block duplicates the later `## 目标用户` heading at Line 175, and the leftover `Ollama...` text below now renders as orphaned content. Please keep a single section here and delete the stale leftovers.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README_zh.md` around lines 114 - 118, delete the duplicate "## 目标用户" block (the earlier occurrence) and any orphaned content that followed it (e.g., the leftover "Ollama..." text), keeping only the single, correct "## 目标用户" section later in the file; ensure there are no remaining duplicated headings or stray paragraphs so the README renders with one coherent 目标用户 section.

server/src/routes/admin.ts-67-68 (1)
67-68: ⚠️ Potential issue | 🟡 Minor

Return 404 when the target user does not exist.

This handler always reports success even when `DELETE` affects zero rows, so the admin UI can think an account was removed when nothing changed.

🩹 Minimal fix
```diff
-    db.prepare('DELETE FROM users WHERE id = ?').run(id);
-    res.json({ message: 'User deleted successfully' });
+    const result = db.prepare('DELETE FROM users WHERE id = ?').run(id);
+    if (result.changes === 0) {
+      return res.status(404).json({ error: 'User not found', code: 'USER_NOT_FOUND' });
+    }
+    res.json({ message: 'User deleted successfully' });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/src/routes/admin.ts` around lines 67 - 68, the DELETE handler currently always returns success after calling `db.prepare('DELETE FROM users WHERE id = ?').run(id)`; change it to capture the result of `run` (e.g., `const result = db.prepare(...).run(id)`) and inspect `result.changes`; if `changes === 0` respond with `res.status(404).json({ error: 'User not found' })`, otherwise return the existing `res.json({ message: 'User deleted successfully' })` so the admin UI gets a 404 when no row was deleted.

src/services/auth.ts (1)
62-65: ⚠️ Potential issue | 🟡 Minor

Variable shadowing: `data` is redeclared.

The variable `data` on line 63 shadows the function parameter `data` on line 40. This could lead to confusion and potential bugs.

🐛 Proposed fix
```diff
     if (!res.ok) {
-      const data = await res.json().catch(() => ({}));
-      throw new Error(data.error || 'Profile update failed');
+      const errorData = await res.json().catch(() => ({}));
+      throw new Error(errorData.error || 'Profile update failed');
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/auth.ts` around lines 62 - 65, in the function that accepts a parameter named `data` (the one making the fetch and checking `res.ok`), the local `const data` declared when parsing the response shadows the parameter; rename the parsed-response variable (e.g., `responseData` or `parsed`) and use that new name in the error throw (`throw new Error(responseData.error || 'Profile update failed')`) to avoid shadowing; update all references in that block (the `res.json()` catch and the Error construction) to the new identifier so the original parameter remains untouched.

src/store/useAppStore.ts (1)
270-272: ⚠️ Potential issue | 🟡 Minor

Remove unused `isBackendAvailable` property.

The `isBackendAvailable` property is initialized but never used anywhere in the codebase. There is also no setter action to update it. Remove it from both the store initialization (line 272) and the type definition in `src/types/index.ts` to eliminate unused code.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/store/useAppStore.ts` around lines 270 - 272, Remove the unused isBackendAvailable property from the app store initialization and its type definition: locate the property named isBackendAvailable in the state object in useAppStore (the backendUser/backendApiSecret block) and delete that key, then remove the corresponding isBackendAvailable field from the relevant interface/type in src/types/index.ts so the store type and implementation remain consistent and no unused symbol persists.
🧹 Nitpick comments (10)
src/components/SearchBar.tsx (3)
782-793: Variable shadowing: `language` shadows the store variable.

The loop variable `language` at line 782 shadows the `language` from the store (line 15), which is used for i18n throughout the component. While this works correctly here, it's a maintenance hazard that could lead to subtle bugs.

♻️ Proposed fix
```diff
-            {availableLanguages.slice(0, 12).map(language => (
+            {availableLanguages.slice(0, 12).map(lang => (
               <button
-                key={language}
-                onClick={() => handleLanguageToggle(language)}
+                key={lang}
+                onClick={() => handleLanguageToggle(lang)}
                 className={`px-3 py-1.5 rounded-lg text-sm transition-colors ${
-                  searchFilters.languages.includes(language)
+                  searchFilters.languages.includes(lang)
                     ? 'bg-blue-100 text-blue-700 dark:bg-blue-900 dark:text-blue-300'
                     : 'bg-gray-100 text-gray-700 dark:bg-gray-700 dark:text-gray-300 hover:bg-gray-200 dark:hover:bg-gray-600'
                 }`}
               >
-                {language}
+                {lang}
               </button>
             ))}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/SearchBar.tsx` around lines 782 - 793, The map callback uses a loop variable named `language` which shadows the component/store-level `language` identifier used for i18n; rename the loop variable (e.g., to `lang` or `availableLang`) in the `availableLanguages.slice(0, 12).map(...)` callback and update all usages inside that callback (the `key`, the `onClick` call to `handleLanguageToggle`, and the rendered value) so they reference the new variable, leaving the outer `language` store variable untouched and preventing shadowing with `searchFilters.languages` still used as before.
350-355: Remove commented-out code instead of leaving it in the codebase.

If the real-time search toggle is intentionally disabled per user request #5, delete this code entirely. Version control preserves history if it's ever needed again. Keeping commented code adds noise and maintenance burden.

♻️ Proposed fix
```diff
   const handleInputChange = (e: React.ChangeEvent<HTMLInputElement>) => {
     const value = e.target.value;
     setSearchQuery(value);
-
-    // Disable real-time search mode (following user request #5)
-    // if (value && !isRealTimeSearch) {
-    //   setIsRealTimeSearch(true);
-    // } else if (!value && isRealTimeSearch) {
-    //   setIsRealTimeSearch(false);
-    // }

     // Show search history when input is focused and empty
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/SearchBar.tsx` around lines 350 - 355, Remove the commented-out real-time search toggle block in SearchBar.tsx (the lines referencing value, isRealTimeSearch, and setIsRealTimeSearch); since the toggle is intentionally disabled per user request `#5`, delete the commented code entirely to reduce noise and rely on VCS for history, leaving no commented logic related to real-time search in the SearchBar component.
415-420: Consider using `onKeyDown` instead of `onKeyPress`.

The `onKeyPress` event (line 545) is deprecated. While this segment's comment update is accurate, the handler should migrate to `onKeyDown` for better compatibility.

♻️ Proposed fix
```diff
-  const handleKeyPress = (e: React.KeyboardEvent) => {
+  const handleKeyDown = (e: React.KeyboardEvent) => {
     if (e.key === 'Enter') {
       // Trigger AI search or basic search on Enter
       handleAISearch();
     }
   };
```

And update the input element (around line 545):

```diff
-            onKeyPress={handleKeyPress}
+            onKeyDown={handleKeyDown}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/SearchBar.tsx` around lines 415 - 420, the handler `handleKeyPress` should be migrated to use the `onKeyDown` event for modern browsers: change any use of `onKeyPress` on the SearchBar input to `onKeyDown` and update the handler signature if needed (e.g., `React.KeyboardEvent<HTMLInputElement>`), keeping the Enter check (`e.key === 'Enter'`) and calling `handleAISearch()`; optionally rename `handleKeyPress` to `handleKeyDown` for clarity and update the input element's prop from `onKeyPress` to `onKeyDown` in the SearchBar component.

src/components/LoginScreen.tsx (3)
55-62: Consider using `onKeyDown` instead of deprecated `onKeyPress`.

The `onKeyPress` event is deprecated in React. While functional, it's recommended to migrate to `onKeyDown` for consistency with modern standards. Additionally, the check on line 57 is somewhat redundant since the event target is already an input element.

♻️ Suggested refactor
```diff
-  const handleKeyPress = async (e: React.KeyboardEvent<HTMLInputElement>) => {
+  const handleKeyDown = (e: React.KeyboardEvent<HTMLInputElement>) => {
     if (e.key === 'Enter' && !isLoading) {
-      if (document.activeElement?.tagName === 'INPUT') {
-        const form = (e.target as HTMLElement).closest('form');
-        form?.requestSubmit();
-      }
+      const form = (e.target as HTMLElement).closest('form');
+      form?.requestSubmit();
     }
   };
```

Then update all `onKeyPress={handleKeyPress}` to `onKeyDown={handleKeyDown}` in the input elements.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/LoginScreen.tsx` around lines 55 - 62, Replace the deprecated onKeyPress handler by renaming handleKeyPress to handleKeyDown and switching listeners on your input elements to onKeyDown; update the function to accept React.KeyboardEvent<HTMLInputElement>, check e.key === 'Enter' and !isLoading, and directly call (e.target as HTMLElement).closest('form')?.requestSubmit() (removing the redundant document.activeElement tagName check); update all occurrences of onKeyPress={handleKeyPress} to onKeyDown={handleKeyDown} and ensure no other references to handleKeyPress remain.
149-167: Missing `autoComplete` attribute for GitHub token input.

The GitHub token input lacks an `autoComplete` attribute. For sensitive tokens, consider adding `autoComplete="off"` to prevent browsers from storing or suggesting this value.

🛡️ Suggested fix
```diff
                     <input
                       type="password"
                       placeholder="ghp_xxxxxxxxxxxxxxxxxxxx"
                       value={githubToken}
                       onChange={(e) => { setGithubTokenForm(e.target.value); setError(''); }}
                       onKeyPress={handleKeyPress}
                       disabled={isLoading}
+                      autoComplete="off"
                       className="w-full pl-10 pr-4 py-3 border border-gray-300 rounded-lg focus:ring-2 focus:ring-blue-500 focus:border-transparent bg-white text-gray-900 disabled:bg-gray-50 disabled:text-gray-500"
                     />
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/LoginScreen.tsx` around lines 149 - 167, The GitHub token input in LoginScreen.tsx (the password input bound to githubToken and updated via setGithubTokenForm, with onKeyPress={handleKeyPress} and disabled={isLoading}) is missing an autoComplete attribute; update that input to include autoComplete="off" (or another appropriate value) to prevent browsers from storing/suggesting the token.
197-217: Help text shown regardless of view mode.

The GitHub token creation instructions are always visible, but they're only relevant during registration. Consider conditionally rendering this section only when `!isLoginView`.

♻️ Suggested refactor

```diff
-
+          {!isLoginView && (
           <div className="mt-6 p-4 bg-gray-50 rounded-lg">
             <h3 className="font-medium text-gray-900 mb-2 text-sm">
               {t('如何创建GitHub token:', 'How to create a GitHub token:')}
             </h3>
             ...
           </div>
+          )}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/LoginScreen.tsx` around lines 197 - 217, the GitHub token help block should only render during registration, so wrap the entire JSX block that starts with `<div className="mt-6 p-4 bg-gray-50 rounded-lg">` (the GitHub token instructions) in a conditional check for `!isLoginView`; i.e., only render it when the component's `isLoginView` flag is false (ensure you're using the component prop/state named `isLoginView`), or move that block into the registration branch where `isLoginView` is false so the instructions don't appear on the login view.

src/services/auth.ts (2)
53-60: Missing authentication check before making authenticated request.

If `secret` is empty (e.g., user not logged in or localStorage parsing failed), the request proceeds with an empty Bearer token. This will likely fail on the backend, but adding an early check would provide a clearer error message.

🛡️ Suggested improvement
```diff
+    if (!secret) {
+      throw new Error('Not authenticated');
+    }
+
     const res = await fetch(`${url}/auth/profile`, {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/auth.ts` around lines 53 - 60, The code calls fetch with an Authorization header built from the variable secret without verifying it; add an early authentication guard before the fetch (check that secret is non-empty/valid) inside the function containing this fetch call (refer to the const res = await fetch(`${url}/auth/profile`, ...) block and the secret variable). If secret is missing, throw or return a clear authenticated error (e.g., "User not authenticated" or "Missing auth token") instead of proceeding to make the request, so callers get an explicit error rather than a backend failure.
44-51: Direct localStorage access couples this service to storage implementation details.

Reading the auth secret directly from localStorage with a hardcoded key (`github-stars-manager`) and path (`state.backendApiSecret`) creates tight coupling to the store's internal structure. If the storage key or structure changes, this code will silently fail. Consider accepting the token as a parameter or importing a method from the store/backendAdapter to retrieve it consistently.

♻️ Suggested refactor - accept token as parameter
```diff
-  async updateProfile(data: { apprise_url?: string; password?: string }): Promise<{ id: number; username: string; role: string; appriseUrl: string | null }> {
+  async updateProfile(data: { apprise_url?: string; password?: string }, authToken: string): Promise<{ id: number; username: string; role: string; appriseUrl: string | null }> {
     const url = backend.backendUrl;
     if (!url) throw new Error('Backend not available');

-    const storeData = localStorage.getItem('github-stars-manager');
-    let secret = '';
-    if (storeData) {
-      try {
-        const parsed = JSON.parse(storeData);
-        secret = parsed.state?.backendApiSecret || '';
-      } catch { /* ignore */ }
-    }
-
     const res = await fetch(`${url}/auth/profile`, {
       method: 'PATCH',
       headers: {
         'Content-Type': 'application/json',
-        'Authorization': `Bearer ${secret}`
+        'Authorization': `Bearer ${authToken}`
       },
       body: JSON.stringify(data),
     });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/auth.ts` around lines 44 - 51, the snippet reads the auth secret directly from localStorage using the hardcoded key `'github-stars-manager'` and path `state.backendApiSecret` (variables `storeData` and `secret`); replace this direct access by accepting the token as a parameter on the enclosing function (or, if the caller cannot supply it, import and call a single store/backendAdapter method to retrieve the token) and remove the localStorage JSON.parse logic; ensure the new parameter or adapter call supplies the secret used where `secret` is referenced and fail fast (throw or return an explicit error) if the token is missing instead of silently continuing.

src/store/useAppStore.ts (2)
470-472: Type cast `as any` indicates a type mismatch.

The `as any` cast on `backendUser` suggests the type doesn't match the expected schema in `PersistedAppState`. This should be addressed to maintain type safety.

♻️ Suggested approach

Ensure the `backendUser` type in `PersistedAppState` (lines 107-108) matches the actual state type, then remove the cast:

```diff
       // 持久化后端会话
       backendApiSecret: state.backendApiSecret,
-      backendUser: state.backendUser as any,
+      backendUser: state.backendUser,
```

If there's a type mismatch between the state and persisted types, align them in the type definitions.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/store/useAppStore.ts` around lines 470 - 472, The persisted state currently casts backendUser as any when building the persisted snapshot; fix this by aligning types instead of casting: update the PersistedAppState type's backendUser field to match the actual backendUser shape used in the store (or implement a serialization/mapper that converts the store's backendUser to the PersistedAppState shape) and then remove the `as any` cast in the useAppStore persist block where backendUser is assigned; ensure function/type names referenced are PersistedAppState and backendUser in useAppStore.
275-278: Potential state inconsistency between `user` and `isAuthenticated`.

The `setUser` action sets `isAuthenticated: !!user`, but `isAuthenticated` is also derived from `backendApiSecret` elsewhere (lines 152, 340, 440). This dual derivation could lead to inconsistent state if `user` and `backendApiSecret` get out of sync. Consider consolidating the authentication logic to use a single source of truth.

♻️ Suggested fix

Either remove the `isAuthenticated` update from `setUser`:

```diff
   setUser: (user) => {
     console.log('Setting user:', user);
-    set({ user, isAuthenticated: !!user });
+    set({ user });
   },
```

Or document clearly that `backendApiSecret` is now the sole authority for authentication status.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/store/useAppStore.ts` around lines 275 - 278, The setUser action currently sets isAuthenticated (setUser) while isAuthenticated is also derived from backendApiSecret elsewhere, causing potential inconsistency; remove the isAuthenticated update from setUser so it only sets the user, and centralize authentication state changes to the backendApiSecret updater (e.g., setBackendApiSecret) or convert isAuthenticated into a derived getter that returns !!backendApiSecret; update any places that previously relied on setUser toggling isAuthenticated to instead rely on the centralized backendApiSecret-based logic.
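To make the single-source-of-truth idea concrete, here is a minimal sketch that derives `isAuthenticated` from `backendApiSecret` instead of storing it separately. The `AuthSlice` type and `selectIsAuthenticated` name are illustrative, not the store's actual shape:

```typescript
// Hypothetical sketch: isAuthenticated as a derived selector, so it can
// never drift out of sync with backendApiSecret.
type AuthSlice = { backendApiSecret: string | null };

// A non-empty secret means the session is authenticated; null or '' means not.
const selectIsAuthenticated = (state: AuthSlice): boolean =>
  !!state.backendApiSecret;
```

Components would call `useAppStore(selectIsAuthenticated)` rather than reading a stored boolean, which removes the need for `setUser` to touch authentication state at all.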
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: fefd3456-88bf-47f7-9b2e-14419ea8b480
⛔ Files ignored due to path filters (3)
- `dist/index.html` is excluded by `!**/dist/**`
- `package-lock.json` is excluded by `!**/package-lock.json`
- `server/package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (40)
- `.github/workflows/docker-publish.yml`
- `DOCKER.md`
- `Dockerfile`
- `README.md`
- `README_zh.md`
- `docker-compose.yml`
- `index.html`
- `package.json`
- `server/Dockerfile`
- `server/data/data.db-shm`
- `server/data/data.db-wal`
- `server/package.json`
- `server/src/db/migrations.ts`
- `server/src/db/schema.ts`
- `server/src/index.ts`
- `server/src/middleware/auth.ts`
- `server/src/routes/admin.ts`
- `server/src/routes/auth.ts`
- `server/src/routes/categories.ts`
- `server/src/routes/configs.ts`
- `server/src/routes/health.ts`
- `server/src/routes/proxy.ts`
- `server/src/routes/releases.ts`
- `server/src/routes/repositories.ts`
- `server/src/services/notification.ts`
- `server/src/services/releaseMonitor.ts`
- `src/App.tsx`
- `src/components/Admin/UserManagement.tsx`
- `src/components/Header.tsx`
- `src/components/LoginScreen.tsx`
- `src/components/SearchBar.tsx`
- `src/components/SettingsPanel.tsx`
- `src/main.tsx`
- `src/services/auth.ts`
- `src/services/backendAdapter.ts`
- `src/services/githubApi.ts`
- `src/store/useAppStore.ts`
- `src/types/index.ts`
- `src/vite-env.d.ts`
- `vite.config.ts`
💤 Files with no reviewable changes (1)
- server/Dockerfile
🚧 Files skipped from review as they are similar to previous changes (1)
- .github/workflows/docker-publish.yml
> - **URL**: `http://localhost:8080`
> - **First Run**: The first registered user automatically becomes the **SuperAdmin**.
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
set -euo pipefail
# Verify whether registration is public and whether the first created user
# is automatically elevated to SuperAdmin.
echo "Auth/admin files:"
fd -p 'auth.ts' server
fd -p 'admin.ts' server
fd -p 'user*.ts' server
echo
echo "Potential bootstrap / registration logic:"
rg -n -C4 'register|signup|SuperAdmin|role|COUNT\(\*\)|count\(\*\)|first user|first-user|first registered' server
```

Repository: AmintaCCCP/GithubStarsManager
Length of output: 14087
Fix first-user SuperAdmin bootstrap—this is a critical security vulnerability on internet-facing deployments.
The /register endpoint currently grants SuperAdmin role to the first user who registers (checked via COUNT(*) = 0 in server/src/routes/auth.ts:29-32). In production, this lets any attacker who reaches the instance first take full admin control.
Remove automatic SuperAdmin assignment. Instead, bootstrap the initial admin out-of-band via:
- Seed user (CLI script run by operator)
- Environment variable checked only on empty database
- One-time operator token issued before first user registration
- Require admin registration to present a setup secret
The public /register endpoint itself is fine; only eliminate the automatic privilege escalation.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@README.md` around lines 68 - 69, The POST /register handler in
server/src/routes/auth.ts currently grants SuperAdmin when COUNT(*) == 0; remove
that automatic privilege escalation by eliminating the COUNT(*) == 0 branch in
the register logic (the route handler/function that assigns roles during
registration) and replace it with a secure bootstrap flow: accept a one-time
setup secret or operator-provided env var (checked only against an empty DB) or
require the operator to seed an admin via a CLI/seedUser function; ensure the
register handler only creates normal users unless the request presents a valid
bootstrap token/secret or the operator has explicitly seeded an admin via the
seedUser/CLI path.
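One way to remove the automatic escalation while keeping `/register` public is to gate the first-admin path behind an operator-supplied setup secret. The sketch below is a hypothetical helper; the `ADMIN_SETUP_SECRET` env var and the `determineRole` name are assumptions, not part of this PR:

```typescript
// Hypothetical helper: SuperAdmin is only granted when the database is
// empty AND the caller presents the operator's setup secret (e.g. from an
// ADMIN_SETUP_SECRET env var). In every other case the role is 'User'.
function determineRole(
  userCount: number,
  setupSecret: string | undefined,   // read from the environment by the operator
  providedSecret: string | undefined // taken from the registration request body
): 'SuperAdmin' | 'User' {
  const canBootstrap =
    userCount === 0 &&
    typeof setupSecret === 'string' &&
    setupSecret.length > 0 &&
    providedSecret === setupSecret;
  return canBootstrap ? 'SuperAdmin' : 'User';
}
```

In the route handler this would replace the bare `COUNT(*) === 0` check; with no secret configured, the first registrant is a plain `User` and the operator seeds an admin via a CLI script instead.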
```diff
 CREATE TABLE IF NOT EXISTS repositories (
   id INTEGER PRIMARY KEY,
   user_id INTEGER NOT NULL,
   name TEXT NOT NULL,
-  full_name TEXT NOT NULL UNIQUE,
+  full_name TEXT NOT NULL,
```
Make repository and release identities user-scoped.
repositories.id and releases.id are still global upstream IDs. As soon as two users sync the same GitHub repo or release, the second write will collide on the primary key and replace the first user's row, so per-user isolation is broken on fresh installs too. Use a composite key like (user_id, id) or a surrogate local PK plus UNIQUE(user_id, upstream_id).
Also applies to: 48-50
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/src/db/schema.ts` around lines 19 - 23, repositories.id and
releases.id are currently using global upstream IDs which will collide when
multiple users sync the same upstream items; update the schema for the
repositories and releases tables to make identities user-scoped by either (a)
replacing the current primary key with a composite primary key like (user_id,
id) where id is the upstream id, or (b) introduce a surrogate local primary key
(e.g. repo_pk, release_pk) and add a UNIQUE(user_id, id) constraint to preserve
the upstream id per-user; adjust the CREATE TABLE definitions for repositories
(columns id, user_id, name, full_name) and releases (their id and user_id
columns) to implement one of these patterns and ensure any foreign keys or
queries that referenced the old PK are updated to use the new key structure.
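A possible shape for the user-scoped table, as a sketch assuming SQLite and the column names in this PR (the surrogate-PK variant; `local_id` is an assumed name):

```sql
-- Surrogate local primary key plus a per-user uniqueness constraint on the
-- upstream GitHub id, so two users can sync the same repository safely.
CREATE TABLE IF NOT EXISTS repositories (
  local_id  INTEGER PRIMARY KEY AUTOINCREMENT, -- surrogate key (assumed name)
  id        INTEGER NOT NULL,                  -- upstream GitHub repo id
  user_id   INTEGER NOT NULL REFERENCES users(id) ON DELETE CASCADE,
  name      TEXT NOT NULL,
  full_name TEXT NOT NULL,
  UNIQUE (user_id, id)
);
```

Upserts would then target `ON CONFLICT(user_id, id)` instead of the global primary key, and `releases` would get the same `UNIQUE(user_id, id)` treatment.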
```typescript
  const __dirname = path.dirname(new URL(import.meta.url).pathname);
  const frontendDistPath = path.resolve(__dirname, '../../dist');
```
🧩 Analysis chain
🏁 Script executed:
```shell
cat -n server/src/index.ts | head -110 | tail -60
```

Repository: AmintaCCCP/GithubStarsManager
Length of output: 1957
🏁 Script executed:
```shell
head -20 server/src/index.ts
```

Repository: AmintaCCCP/GithubStarsManager
Length of output: 980
🏁 Script executed:
```shell
rg "fileURLToPath|new URL.*import.meta.url" server/
```

Repository: AmintaCCCP/GithubStarsManager
Length of output: 306
🌐 Web query:
Node.js fileURLToPath import.meta.url pathname Windows ESM best practices
💡 Result:
Best practices (ESM) for import.meta.url → file paths (especially on Windows)
- Don't use `new URL(import.meta.url).pathname` as a file path.
  `URL.pathname` is a URL component (percent-encoded, `/` separators) and on Windows file URLs it typically looks like `"/C:/..."` (note the leading slash), which is not a valid Win32 path. Use `fileURLToPath()` instead. [3]

- Use `fileURLToPath(import.meta.url)` for `__filename`:

  ```typescript
  import { fileURLToPath } from 'node:url';
  const __filename = fileURLToPath(import.meta.url);
  ```

  `fileURLToPath()` returns a platform-specific path (e.g., `C:\...` on Windows). [2]
- For `__dirname`, convert a directory URL (not `.pathname`):

  ```typescript
  import { fileURLToPath } from 'node:url';
  import { dirname } from 'node:path';
  const __dirname = dirname(fileURLToPath(import.meta.url));
  ```

  (The equivalent pattern is described in Node's ESM docs as the way to derive these values from `import.meta.url`.) [1]
- Prefer the newer built-ins when you can (Node ≥ 20.11): Node exposes `import.meta.filename` and `import.meta.dirname`, which are effectively the "no boilerplate" versions of the above. [4]
- When calling Node APIs, consider passing URLs directly: many Node APIs accept `file:` URLs, so if you already have `new URL('./data.json', import.meta.url)`, you can often pass that URL object directly instead of converting to a string path (and avoid Windows path edge cases entirely). (See Node's `node:url` module docs for URL support in Node APIs and URL utilities.) [5]
Sources: Node ESM docs [1][4], Node node:url docs (fileURLToPath) [2][5], and a Windows-focused explanation of why pathname is wrong for paths [3].
Use `fileURLToPath()` for module paths instead of `.pathname`.

`new URL(import.meta.url).pathname` is URL-encoded and produces `/C:/...` on Windows (an invalid filesystem path). This affects both the dist directory resolution and the main module check. Convert once at module level using `fileURLToPath(import.meta.url)` and use `path.resolve()` for the main module check.
Suggested fix
```diff
 import express from 'express';
 import cors from 'cors';
 import helmet from 'helmet';
 import morgan from 'morgan';
 import path from 'node:path';
+import { fileURLToPath } from 'node:url';
 import { config } from './config.js';
 import { authMiddleware } from './middleware/auth.js';
 import { errorHandler } from './middleware/errorHandler.js';
 import { getDb, closeDb } from './db/connection.js';
 import { runMigrations } from './db/migrations.js';
 import authRouter from './routes/auth.js';
 import healthRouter from './routes/health.js';
 import repositoriesRouter from './routes/repositories.js';
 import releasesRouter from './routes/releases.js';
 import categoriesRouter from './routes/categories.js';
 import configsRouter from './routes/configs.js';
 import syncRouter from './routes/sync.js';
 import proxyRouter from './routes/proxy.js';
 import adminRouter from './routes/admin.js';
 import { startReleaseMonitor } from './services/releaseMonitor.js';
+
+const moduleFilename = fileURLToPath(import.meta.url);
+const moduleDirname = path.dirname(moduleFilename);

 export function createApp(): express.Express {
   const app = express();
@@ -55,7 +59,7 @@ export function createApp(): express.Express {
   app.use(errorHandler);

   // Serve static UI in production
-  const __dirname = path.dirname(new URL(import.meta.url).pathname);
-  const frontendDistPath = path.resolve(__dirname, '../../dist');
+  const frontendDistPath = path.resolve(moduleDirname, '../../dist');

   app.use(express.static(frontendDistPath));

   // Handle SPA routing
@@ -100,7 +104,7 @@ function startServer(): void {
 }

 // Only start server when run directly (not imported for tests)
-const isMainModule = process.argv[1] && new URL(import.meta.url).pathname === new URL(`file://${process.argv[1]}`).pathname;
+const isMainModule = !!process.argv[1] && moduleFilename === path.resolve(process.argv[1]);

 if (isMainModule) {
   startServer();
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
import express from 'express';
import cors from 'cors';
import helmet from 'helmet';
import morgan from 'morgan';
import path from 'node:path';
import { fileURLToPath } from 'node:url';
import { config } from './config.js';
import { authMiddleware } from './middleware/auth.js';
import { errorHandler } from './middleware/errorHandler.js';
import { getDb, closeDb } from './db/connection.js';
import { runMigrations } from './db/migrations.js';
import authRouter from './routes/auth.js';
import healthRouter from './routes/health.js';
import repositoriesRouter from './routes/repositories.js';
import releasesRouter from './routes/releases.js';
import categoriesRouter from './routes/categories.js';
import configsRouter from './routes/configs.js';
import syncRouter from './routes/sync.js';
import proxyRouter from './routes/proxy.js';
import adminRouter from './routes/admin.js';
import { startReleaseMonitor } from './services/releaseMonitor.js';

const moduleFilename = fileURLToPath(import.meta.url);
const moduleDirname = path.dirname(moduleFilename);

export function createApp(): express.Express {
  const app = express();

  // ... middleware and route setup ...

  app.use(errorHandler);

  // Serve static UI in production
  const frontendDistPath = path.resolve(moduleDirname, '../../dist');

  app.use(express.static(frontendDistPath));

  // Handle SPA routing
  app.get('*', (req, res) => {
    res.sendFile(path.join(frontendDistPath, 'index.html'));
  });

  return app;
}

function startServer(): void {
  // ... server startup logic ...
}

// Only start server when run directly (not imported for tests)
const isMainModule = !!process.argv[1] && moduleFilename === path.resolve(process.argv[1]);

if (isMainModule) {
  startServer();
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/src/index.ts` around lines 55 - 56, Replace the URL.pathname usage by
converting import.meta.url to a filesystem path once with
fileURLToPath(import.meta.url), set __dirname using path.dirname on that result,
and then compute frontendDistPath via path.resolve(__dirname, '../../dist');
also update the main module check to use
path.resolve(fileURLToPath(import.meta.url)) (or compare resolved paths) so
Windows paths are correct; locate the current variables __dirname and
frontendDistPath and any main module check in this file and switch them to use
fileURLToPath(import.meta.url) + path.resolve.
```diff
-  if (!warnedOnce) {
-    console.warn('⚠️ API_SECRET not set — auth disabled (dev mode)');
-    warnedOnce = true;
+const JWT_SECRET = config.encryptionKey || 'fallback_secret_for_dev_only';
```
Fail closed when the JWT secret is missing.
Falling back to a public hard-coded secret makes token forgery trivial in any misconfigured environment. Reject startup instead of silently accepting `fallback_secret_for_dev_only`.
🔐 Minimal hardening
```diff
-import { config } from '../config.js';
-
-const JWT_SECRET = config.encryptionKey || 'fallback_secret_for_dev_only';
+import { config } from '../config.js';
+
+if (!config.encryptionKey) {
+  throw new Error('ENCRYPTION_KEY must be configured');
+}
+
+const JWT_SECRET = config.encryptionKey;
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
if (!config.encryptionKey) {
  throw new Error('ENCRYPTION_KEY must be configured');
}

const JWT_SECRET = config.encryptionKey;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/src/middleware/auth.ts` at line 5, Replace the silent fallback to
'fallback_secret_for_dev_only' so the app fails closed: require
config.encryptionKey to be set when constructing JWT_SECRET (symbol JWT_SECRET)
and if it's missing log a clear error and terminate startup (throw an Error or
call process.exit(1)) instead of using the hard-coded secret; remove the
fallback usage and ensure any calling code expects JWT_SECRET to be non-empty.
```typescript
const db = getDb();

// Check if user already exists
const existingUser = db.prepare('SELECT id FROM users WHERE username = ?').get(username);
if (existingUser) {
  return res.status(409).json({ error: 'Username already exists' });
}

// Determine role (first user is SuperAdmin)
const countRow = db.prepare('SELECT COUNT(*) as count FROM users').get() as { count: number };
const isFirstUser = countRow.count === 0;
const role = isFirstUser ? 'SuperAdmin' : 'User';

// Hash password
const salt = await bcrypt.genSalt(10);
const passwordHash = await bcrypt.hash(password, salt);

// Insert user
const insertResult = db.prepare(
  'INSERT INTO users (username, password_hash, role) VALUES (?, ?, ?)'
).run(username, passwordHash, role);

const newUserId = insertResult.lastInsertRowid;

// Save github token if provided
const { github_token } = req.body;
if (github_token) {
  const { encrypt } = await import('../services/crypto.js');
  const encryptedToken = encrypt(github_token, config.encryptionKey);
  db.prepare(
    'INSERT INTO settings (key, value, user_id) VALUES (?, ?, ?)'
  ).run('github_token', encryptedToken, newUserId);
}
```
There was a problem hiding this comment.
Make registration atomic before assigning SuperAdmin.
Line 30 decides the role before the awaited bcrypt work on Lines 35-36, so two concurrent first registrations can both observe zero users and both insert SuperAdmin. The later settings insert can also fail after the user row is already committed. Please wrap role selection, user creation, and initial settings in one DB transaction.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/src/routes/auth.ts` around lines 21 - 53, Race condition: role selection, the user insert, and the settings insert must be atomic. Using the db from getDb(), wrap the flow in a single transaction that re-reads the user count inside the transaction to decide isFirstUser/role, inserts the user row (using the already-computed passwordHash from bcrypt.genSalt/bcrypt.hash, or compute the hash before starting the transaction if preferred), captures insertResult.lastInsertRowid as newUserId, and then inserts the settings row (github_token via encrypt and config.encryptionKey) before committing; ensure the transaction rolls back on any error so you cannot end up with two SuperAdmin rows or a user without settings (refer to symbols: getDb, bcrypt.genSalt, bcrypt.hash, isFirstUser, role, insertResult.lastInsertRowid/newUserId, settings insert, encrypt).
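The atomicity requirement above can be sketched without SQLite. `TxnDb` and its snapshot-based `withTransaction` are hypothetical stand-ins for the real driver's transaction API; the point is that the count is re-read inside the transaction and any failure rolls everything back:

```typescript
// Hypothetical in-memory stand-in for the SQLite layer, illustrating the
// transaction shape the review asks for; real code would use the driver's
// own transaction API, not this snapshot/rollback mock.
interface UserRow {
  id: number;
  username: string;
  passwordHash: string;
  role: 'SuperAdmin' | 'User';
}

class TxnDb {
  users: UserRow[] = [];
  settings: Array<{ key: string; value: string; userId: number }> = [];

  // Run fn atomically: on any throw, restore the pre-transaction snapshot.
  withTransaction<T>(fn: () => T): T {
    const usersSnapshot = [...this.users];
    const settingsSnapshot = [...this.settings];
    try {
      return fn();
    } catch (err) {
      this.users = usersSnapshot;
      this.settings = settingsSnapshot;
      throw err;
    }
  }
}

function register(db: TxnDb, username: string, passwordHash: string, githubToken?: string): UserRow {
  // The (slow, async) bcrypt hash is computed before the transaction starts;
  // everything that reads or writes rows happens inside it.
  return db.withTransaction(() => {
    // Re-read the count *inside* the transaction: only the true first user
    // can become SuperAdmin, even under concurrent registrations.
    const role: UserRow['role'] = db.users.length === 0 ? 'SuperAdmin' : 'User';
    const user: UserRow = { id: db.users.length + 1, username, passwordHash, role };
    db.users.push(user);
    if (githubToken) {
      // Encrypted in the real code; stored raw here for the sketch.
      db.settings.push({ key: 'github_token', value: githubToken, userId: user.id });
    }
    return user;
  });
}
```

With this shape, a failure inserting the settings row also discards the user row, so there is never a half-created account.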
const userId = (req as any).user?.id;
if (!userId) {
  res.status(401).json({ error: 'Unauthorized', code: 'UNAUTHORIZED' });
  return;
}

const tokenRow = db.prepare('SELECT value FROM settings WHERE key = ? AND user_id = ?').get('github_token', userId) as { value: string } | undefined;
Scope AI/WebDAV configs to the authenticated user too.
GitHub token lookup is tenant-scoped now, but the AI and WebDAV proxies still resolve configs by bare id. Any user who obtains another tenant's configId can reuse that tenant's decrypted API key or WebDAV password.
Suggested fix
router.post('/api/proxy/ai', async (req, res) => {
try {
const db = getDb();
+ const userId = (req as any).user?.id;
+ if (!userId) {
+ res.status(401).json({ error: 'Unauthorized', code: 'UNAUTHORIZED' });
+ return;
+ }
const { configId, body: requestBody } = req.body as { configId: string; body: Record<string, unknown> };
@@
- const aiConfig = db.prepare('SELECT * FROM ai_configs WHERE id = ?').get(configId) as Record<string, unknown> | undefined;
+ const aiConfig = db.prepare('SELECT * FROM ai_configs WHERE id = ? AND user_id = ?').get(configId, userId) as Record<string, unknown> | undefined;
@@
router.post('/api/proxy/webdav', async (req, res) => {
try {
const db = getDb();
+ const userId = (req as any).user?.id;
+ if (!userId) {
+ res.status(401).json({ error: 'Unauthorized', code: 'UNAUTHORIZED' });
+ return;
+ }
const { configId, method, path, body: requestBody, headers: extraHeaders } = req.body as {
@@
- const webdavConfig = db.prepare('SELECT * FROM webdav_configs WHERE id = ?').get(configId) as Record<string, unknown> | undefined;
+ const webdavConfig = db.prepare('SELECT * FROM webdav_configs WHERE id = ? AND user_id = ?').get(configId, userId) as Record<string, unknown> | undefined;

Also applies to: 92-95, 159-162
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/src/routes/proxy.ts` around lines 39 - 45, The config lookup must be
tenant-scoped to the authenticated user: use the authenticated user id ((req as
any).user?.id) when querying settings and when resolving config by configId so a
configId from another tenant cannot be reused. Update the SELECTs that currently
fetch settings by only key or id (e.g., the github_token lookup pattern and the
other config lookups referenced around the proxy handlers) to include AND
user_id = ? (or otherwise add a WHERE user_id = :userId) and pass in userId, or
after fetching by id verify row.user_id === userId and return 401/403 if not.
Ensure the variables tokenRow, any ai/webdav config fetches (configId usage) and
their decrypt flows enforce this user_id check consistently.
  const stmt = db.prepare(`
    INSERT OR REPLACE INTO releases (
-     id, tag_name, name, body, html_url, published_at,
+     id, user_id, tag_name, name, body, html_url, published_at,
      prerelease, draft, is_read, assets,
      repo_id, repo_full_name, repo_name
-   ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
+   ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
  `);
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== releases upsert in route ==="
sed -n '101,107p' server/src/routes/releases.ts
echo
echo "=== releases schema / migrations ==="
rg -n -C3 "releases|PRIMARY KEY|UNIQUE|ON CONFLICT|user_id" server/src/db

Repository: AmintaCCCP/GithubStarsManager
Length of output: 6625
🏁 Script executed:
cat -n server/src/db/schema.ts | sed -n '48,63p'

Repository: AmintaCCCP/GithubStarsManager
Length of output: 622
Fix the releases table to prevent data overwrites across users.
The schema defines releases with only id INTEGER PRIMARY KEY (line 49 of schema.ts), not (id, user_id). This means the INSERT OR REPLACE at lines 101-107 will treat id as the sole conflict key. When two users sync releases with the same GitHub id, the second insert overwrites the first user's release data instead of upserting within that user's scope.
Change the primary key constraint to PRIMARY KEY (id, user_id) or add UNIQUE(id, user_id) in the schema and update the migration accordingly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/src/routes/releases.ts` around lines 101 - 107, The releases table
currently uses a single-column primary key (id) so the INSERT OR REPLACE in
releases.ts (the prepared statement inserting into releases) will overwrite rows
for different users sharing the same GitHub id; update the schema in schema.ts
to scope uniqueness to the user by changing the constraint to PRIMARY KEY (id,
user_id) or add UNIQUE(id, user_id), and add a migration that alters/creates the
releases table accordingly (so the INSERT OR REPLACE continues to upsert per
(id,user_id) rather than globally by id).
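The per-tenant upsert semantics can be illustrated without SQLite; the point is that the conflict key must be the pair (id, user_id), not id alone. The Map-based store below is a hypothetical stand-in for the releases table:

```typescript
interface ReleaseRow { id: number; userId: number; tagName: string }

// Hypothetical in-memory stand-in for the releases table. Keying by id alone
// reproduces the bug (user B's sync overwrites user A's row); keying by the
// composite (id, userId) upserts within each tenant's scope, which is what
// PRIMARY KEY (id, user_id) gives the real SQLite table.
class ReleaseStore {
  private rows = new Map<string, ReleaseRow>();

  upsert(row: ReleaseRow): void {
    // Composite conflict key: same GitHub release id, different users -> two rows.
    this.rows.set(`${row.id}:${row.userId}`, row);
  }

  get(id: number, userId: number): ReleaseRow | undefined {
    return this.rows.get(`${id}:${userId}`);
  }

  get size(): number {
    return this.rows.size;
  }
}
```

Repeated upserts for the same (id, user) pair replace the row; upserts for another user leave it untouched.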
export async function sendNotification(appriseUrl: string, title: string, message: string): Promise<boolean> {
  if (!appriseUrl) return false;

  try {
    const response = await fetch(appriseUrl, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        title,
        body: message,
      }),
    });
Validate notification targets before fetch.
appriseUrl is treated as trusted input and fetched server-side. In a multi-user app, that is an SSRF primitive against internal services or cloud metadata unless you constrain the destination.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/src/services/notification.ts` around lines 1 - 14, The
sendNotification function currently calls fetch on the unvalidated appriseUrl,
exposing SSRF risks; before calling fetch in sendNotification, parse and
validate appriseUrl using the URL constructor, ensure the scheme is http(s)
(preferably require https), and enforce a destination constraint — either a
configured allowlist of hostnames/IPs or deny private/local/routable addresses
(block 127.0.0.0/8, 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, link-local, and
metadata endpoints), and optionally perform a DNS resolution check to compare
resolved IPs against the deny/allow lists; only proceed to perform
fetch(appriseUrl, ...) if the URL passes these checks and otherwise return
false/log warning.
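A sketch of the destination guard the prompt describes, covering only the IPv4-literal case (DNS resolution and IPv6 are deliberately omitted); the ranges are the standard loopback, RFC1918, and link-local blocks, the last of which covers the 169.254.169.254 metadata endpoint:

```typescript
// Sketch only: handles IPv4 literals; a production guard must also resolve
// hostnames via DNS and apply the same checks to every resolved address
// (including IPv6) before each request.
function ipv4ToInt(ip: string): number {
  const parts = ip.split('.').map(Number);
  if (parts.length !== 4 || parts.some(p => !Number.isInteger(p) || p < 0 || p > 255)) {
    throw new Error(`not an IPv4 literal: ${ip}`);
  }
  return ((parts[0] << 24) | (parts[1] << 16) | (parts[2] << 8) | parts[3]) >>> 0;
}

// Loopback, RFC1918 private ranges, and link-local (which includes the
// 169.254.169.254 cloud metadata endpoint), as [base, prefixLength].
const BLOCKED_V4: Array<[string, number]> = [
  ['127.0.0.0', 8],
  ['10.0.0.0', 8],
  ['172.16.0.0', 12],
  ['192.168.0.0', 16],
  ['169.254.0.0', 16],
];

function inRange(ip: number, base: string, prefix: number): boolean {
  const mask = prefix === 0 ? 0 : (~0 << (32 - prefix)) >>> 0;
  return ((ip & mask) >>> 0) === ((ipv4ToInt(base) & mask) >>> 0);
}

export function isAllowedNotificationTarget(rawUrl: string): boolean {
  let url: URL;
  try {
    url = new URL(rawUrl);
  } catch {
    return false; // unparseable
  }
  if (url.protocol !== 'http:' && url.protocol !== 'https:') return false;
  try {
    const ip = ipv4ToInt(url.hostname);
    return !BLOCKED_V4.some(([base, prefix]) => inRange(ip, base, prefix));
  } catch {
    // Hostname, not an IPv4 literal: must be re-checked after DNS resolution.
    return true;
  }
}
```

In sendNotification this check would run on the computed target URL (after any apprise://, gotify://, etc. scheme rewriting) and abort with `false` before the fetch.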
New features:
- Multi-user management: user registration, login, and permission management
- Email registration: register with an email address, with email format validation
- Profile editing: username and avatar can be changed (email cannot)
- Notification system: supports Gotify, Discord, Telegram, email push, and other notification services
- Release monitoring: after subscribing to a repository, new releases are checked automatically and notifications are sent
- Scheduled tasks: sync frequency configurable via custom cron expressions
- AI analysis: newly starred repos now automatically trigger AI analysis, with results included in the notification
- Dynamic version number: read automatically from package.json

Fixes:
- Fixed the repository sync count limit (all repositories are now fetched via pagination)
- Fixed manual sync not sending a notification
- Fixed release subscription state not syncing to the backend
- Fixed nested buttons in CategorySidebar
- Fixed Gotify URL parsing (supports the priority parameter)
- Fixed data loading after login
Actionable comments posted: 9
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/components/ReleaseTimeline.tsx (1)
168-227: ⚠️ Potential issue | 🟠 Major
Prevent false-success refresh when auth is missing or expired.
handleRefresh now proceeds without an auth precheck, and the downstream service can suppress request failures as empty arrays (src/services/githubApi.ts, lines 174-214). That can lead to a misleading “Refresh completed! Found 0 new releases.” even when the refresh actually failed.
💡 Proposed guard in this component
  const handleRefresh = async () => {
    setIsRefreshing(true);
    try {
+     const hasBackendSecret = (() => {
+       const storeData = localStorage.getItem('github-stars-manager');
+       if (!storeData) return false;
+       try {
+         const parsed = JSON.parse(storeData);
+         return Boolean(parsed.state?.backendApiSecret);
+       } catch {
+         return false;
+       }
+     })();
+
+     if (!hasBackendSecret) {
+       alert(language === 'zh' ? '请先登录后再刷新。' : 'Please sign in before refreshing.');
+       return;
+     }
+
      const githubApi = new GitHubApiService();
      const subscribedRepos = repositories.filter(repo => releaseSubscriptions.has(repo.id));

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/ReleaseTimeline.tsx` around lines 168 - 227, handleRefresh currently continues without verifying authentication and treats empty arrays from GitHubApiService as a successful refresh; add an auth precheck and fail-fast behavior: before looping call a GitHubApiService auth check (e.g., githubApi.isAuthenticated() or validate the token), and if not authenticated show an appropriate alert, setIsRefreshing(false) and return; also treat null/undefined/error responses from getRepositoryReleases/getIncrementalRepositoryReleases as a failure (not as empty arrays) — on such failure alert the user, setIsRefreshing(false), and skip calling addReleases/setLastRefreshTime so the UI won't display a false “Refresh completed” success; ensure setIsRefreshing(false) is called in all early-return and catch paths (including the catch block).
♻️ Duplicate comments (3)
README.md (1)
68-70: ⚠️ Potential issue | 🔴 Critical
Remove “first registered user becomes SuperAdmin” from deployment guidance.
This documents an unsafe bootstrap pattern for internet-facing deployments. Replace it with an operator-controlled bootstrap flow (seed script, one-time setup token/secret, or explicit env-gated initialization), and keep normal /register users non-admin by default.
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 68 - 70, Remove the insecure "First Run: The first registered user automatically becomes the SuperAdmin." line and replace it with guidance for an operator-controlled bootstrap (e.g., run a seed script, use a one-time setup token/secret, or an env-gated initialization step) and explicitly state that normal /register-created users are non-admin by default; update the README section heading or bullet labeled "First Run" or "Registration" to describe the safe bootstrap flow and reference the operator-managed setup mechanism instead.

server/src/routes/auth.ts (1)
30-58: ⚠️ Potential issue | 🔴 Critical
Registration still needs one transaction and per-user bootstrap.
The role decision, user insert, optional settings insert, and any default scheduled_tasks/notification_preferences rows all need to happen atomically. As written, concurrent first registrations can still produce two SuperAdmins, later failures can leave a half-created user, and users created here never get the background rows that migration v4 only backfills for existing accounts. If you seed tasks here, schedule them immediately too, because startScheduler() only loads jobs at startup.

server/src/services/notification.ts (1)
1-120: ⚠️ Potential issue | 🔴 Critical
Still validate and constrain notification destinations before fetch().
This function turns user-controlled notification URLs into server-side requests, and validateNotificationUrl() only checks prefixes. In a multi-user app that is enough to hit private networks and metadata endpoints through the test endpoint or scheduled notifications. Enforce destination allow/deny rules before issuing the request.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/src/services/notification.ts` around lines 1 - 120, The code currently converts user-provided apprise/gotify/discord/telegram URLs into server-side HTTP requests without robust destination checks; update sendNotification to call and extend validateNotificationUrl (or a new validateDestination function) to enforce allow/deny rules before fetch: parse the computed targetUrl (use URL or new getTargetUrlFromApprise(appriseUrl) helper used inside sendNotification), resolve its hostname to IP(s) and reject loopback, link-local, private RFC1918 ranges and cloud metadata IPs (e.g., 169.254.169.254), and block non-whitelisted hostnames; fail early with false and log when a destination is disallowed. Ensure this validation runs after you construct targetUrl for schemes that transform the URL (apprise://, gotify://, discord://, telegram://) and before the fetch call, and update validateNotificationUrl to reflect allowed schemes while keeping it separate from network-level destination checks.
🟡 Minor comments (12)
README_zh.md-119-121 (1)
119-121: ⚠️ Potential issue | 🟡 Minor
Orphaned content appears incomplete.
Lines 119-121 contain orphaned text ("Ollama等本地AI服务", "其他: 任何兼容OpenAI API的服务") that appears to be leftover from a previous section or incomplete editing. This breaks the document flow after the "目标用户" section.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README_zh.md` around lines 119 - 121, Remove or integrate the orphaned fragment "Ollama等本地AI服务" and the bullet "**其他**: 任何兼容OpenAI API的服务" — either delete these leftover lines or move them under the appropriate section (e.g., the list of supported local/compatible AI services) so the "目标用户" flow is restored; ensure any moved items are correctly formatted as list entries and that surrounding headings/paragraphs remain coherent.

README_zh.md-32-32 (1)
32-32: ⚠️ Potential issue | 🟡 Minor
Typo: "镜、" should be "镜像".
Proposed fix
-您可以直接从 Docker Hub 拉取预构建的镜、,快速启动应用。
+您可以直接从 Docker Hub 拉取预构建的镜像,快速启动应用。

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README_zh.md` at line 32, Fix the typo in README_zh.md by replacing the incorrect fragment "镜、" with the correct word "镜像" in the sentence that currently reads "您可以直接从 Docker Hub 拉取预构建的镜、,快速启动应用。", producing "您可以直接从 Docker Hub 拉取预构建的镜像,快速启动应用。"

README_zh.md-168-173 (1)
168-173: ⚠️ Potential issue | 🟡 Minor
Add blank lines around the table for Markdown compliance.
Per markdownlint MD058, tables should be surrounded by blank lines for consistent rendering.
Proposed fix
 #### 环境变量
+
 | 变量 | 必填 | 说明 |
 |----------|----------|-------------|
 | `ENCRYPTION_KEY` | 否 | 用于加密存储密钥的 AES-256 密钥。未设置时自动生成。 |
 | `PORT` | 否 | 端口(默认:3000) |
+
 ## 目标用户

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README_zh.md` around lines 168 - 173, The Markdown table under the "#### 环境变量" heading needs blank lines before and after it to satisfy markdownlint MD058; edit the README_zh.md content so there is an empty line between the "#### 环境变量" heading and the table start, and another empty line after the table end (before the next paragraph or heading) to ensure proper table rendering.

README.md-109-109 (1)
109-109: ⚠️ Potential issue | 🟡 Minor
Use a complete clone command instead of placeholder text.
git clone ... is not actionable for users. Provide the full repository URL to avoid setup friction.
Verify each finding against the current code and only fix it if needed. In `@README.md` at line 109, Replace the placeholder text "git clone ..." with a runnable clone command that includes the repository's full HTTPS URL; specifically update the README line containing the string git clone ... so it reads the complete git clone command using the repo's HTTPS URL (for example: https://github.com/OWNER/REPO.git) so users can copy-and-paste to clone the repository directly.

src/components/SettingsPanel.tsx-73-73 (1)
73-73: ⚠️ Potential issue | 🟡 Minor
Incorrect property name: apprise_url vs appriseUrl.
Line 73 uses backendUser?.apprise_url (snake_case) but the backendUser type uses appriseUrl (camelCase). Line 102 correctly uses appriseUrl.
🐛 Proposed fix
-  const [appriseUrlInput, setAppriseUrlInput] = useState(backendUser?.apprise_url || '');
+  const [appriseUrlInput, setAppriseUrlInput] = useState(backendUser?.appriseUrl || '');

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/SettingsPanel.tsx` at line 73, The state initializer for appriseUrlInput uses the wrong backendUser property name (snake_case); change the initializer in SettingsPanel (the useState call that defines appriseUrlInput and setAppriseUrlInput) to read backendUser?.appriseUrl instead of backendUser?.apprise_url so it matches the backendUser type and the usage elsewhere.

src/components/LoginScreen.tsx-48-52 (1)
48-52: ⚠️ Potential issue | 🟡 Minor
Type mismatch: register response lacks avatarUrl and appriseUrl fields.
Per the context snippet from src/services/auth.ts, the register() method returns a user object with only { id, email, username, role, displayName }, while the store's backendUser type expects avatarUrl and appriseUrl. This will set undefined for those fields after registration, which may cause issues in components that access them (e.g., SettingsPanel).
🛡️ Suggested fix
      } else {
        const response = await authService.register(email, password, githubToken, displayName || undefined);
        setBackendApiSecret(response.token);
-       setBackendUser(response.user);
+       setBackendUser({
+         ...response.user,
+         avatarUrl: null,
+         appriseUrl: null,
+       });
      }

Alternatively, update the register() return type in auth.ts to include these fields.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/LoginScreen.tsx` around lines 48 - 52, The register() response lacks avatarUrl and appriseUrl but setBackendUser(response.user) expects them, so either update the authService.register implementation and its return type to include avatarUrl and appriseUrl on the returned user object, or change the LoginScreen.tsx call site to merge defaults before calling setBackendUser (e.g., create a user object from response.user with avatarUrl: '' and appriseUrl: '' when those fields are missing); update the types for register, the returned user shape, and any uses of backendUser (e.g., SettingsPanel) to match the chosen fix so no components encounter undefined fields.

src/services/auth.ts-76-78 (1)
76-78: ⚠️ Potential issue | 🟡 Minor
Variable data shadows parameter in outer scope.
The error handling block declares a new data variable that shadows the data parameter from line 40. This could cause confusion and potential bugs during maintenance.
🐛 Proposed fix
    if (!res.ok) {
-     const data = await res.json().catch(() => ({}));
-     throw new Error(data.error || 'Profile update failed');
+     const errData = await res.json().catch(() => ({}));
+     throw new Error(errData.error || 'Profile update failed');
    }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/auth.ts` around lines 76 - 78, The error block in src/services/auth.ts declares a local "data" that shadows the outer "data" parameter; rename the local variable (e.g., to "errorData" or "parsedError") in the res.ok check where you call res.json().catch(() => ({})) and update the subsequent throw to use that new name so the outer "data" parameter (from the enclosing function or scope) is not shadowed; ensure only the local parsed error variable is used for error.message selection.

src/components/Header.tsx-60-66 (1)
60-66: ⚠️ Potential issue | 🟡 Minor
Hardcoded Chinese strings ignore language setting.
The alert messages are hardcoded in Chinese, but the component uses the t() helper for bilingual support elsewhere. These should use the same pattern for consistency.
🌐 Suggested fix for bilingual alerts
  if (result.added > 0) {
-   alert(`同步完成!发现 ${result.added} 个新仓库,${result.removed} 个仓库被移除。`);
+   alert(t(`同步完成!发现 ${result.added} 个新仓库,${result.removed} 个仓库被移除。`, `Sync complete! Found ${result.added} new repositories, ${result.removed} removed.`));
  } else if (result.removed > 0) {
-   alert(`同步完成!${result.removed} 个仓库被移除。`);
+   alert(t(`同步完成!${result.removed} 个仓库被移除。`, `Sync complete! ${result.removed} repositories removed.`));
  } else {
-   alert('同步完成!所有仓库都是最新的。');
+   alert(t('同步完成!所有仓库都是最新的。', 'Sync complete! All repositories are up to date.'));
  }

Also update the error messages on lines 72 and 75.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/Header.tsx` around lines 60 - 66, Replace the hardcoded Chinese alert strings in the Header component with calls to the i18n helper t(), using interpolation for numeric values (e.g., t('sync.addedRemoved', { added: result.added, removed: result.removed }) etc.) so the three branches (added>0, removed>0, else) use localized keys; also update the error alert messages in the same component (the error paths referenced near the sync flow) to use t() with appropriate keys (e.g., t('sync.error') and t('sync.partialError', { message })) to keep bilingual consistency and include any error details via interpolation.

src/store/useAppStore.ts-383-411 (1)
383-411: ⚠️ Potential issue | 🟡 Minor
Fire-and-forget backend sync loses subscription state on failure.
The backend sync in toggleReleaseSubscription updates local state optimistically but only logs errors on sync failure. If the backend fails, the local state will be inconsistent with the server. Consider adding a rollback mechanism or at least surfacing the error to the user.
🛡️ Suggested improvement with rollback
  toggleReleaseSubscription: (repoId) => {
    const state = get();
    const wasSubscribed = state.releaseSubscriptions.has(repoId);
    const newSubscribed = !wasSubscribed;
    set((state) => {
      const newSubscriptions = new Set(state.releaseSubscriptions);
      if (wasSubscribed) {
        newSubscriptions.delete(repoId);
      } else {
        newSubscriptions.add(repoId);
      }
      return { releaseSubscriptions: newSubscriptions };
    });
    // Sync to backend
    if (state.isBackendAvailable) {
      fetch(`/api/repositories/${repoId}`, {
        method: 'PATCH',
        headers: {
          'Content-Type': 'application/json',
          ...state.backendApiSecret ? { 'Authorization': `Bearer ${state.backendApiSecret}` } : {},
        },
        body: JSON.stringify({ subscribed_to_releases: newSubscribed }),
-     }).catch(err => console.error('Failed to sync release subscription:', err));
+     }).catch(err => {
+       console.error('Failed to sync release subscription:', err);
+       // Rollback on failure
+       set((state) => {
+         const rollbackSubscriptions = new Set(state.releaseSubscriptions);
+         if (newSubscribed) {
+           rollbackSubscriptions.delete(repoId);
+         } else {
+           rollbackSubscriptions.add(repoId);
+         }
+         return { releaseSubscriptions: rollbackSubscriptions };
+       });
+     });
    }
  },
Verify each finding against the current code and only fix it if needed. In `@src/store/useAppStore.ts` around lines 383 - 411, toggleReleaseSubscription currently does an optimistic update (uses get() and set() to flip state.releaseSubscriptions) but only logs backend errors, causing divergence if the PATCH fails; capture the previous subscription state (wasSubscribed) before the optimistic set and, in the fetch.catch handler, revert the change by calling set() to restore releaseSubscriptions (use the same Set logic with repoId and wasSubscribed) and also set an observable error flag (e.g., lastSyncError or releaseSyncError) in the store so the UI can surface the failure; reference toggleReleaseSubscription, releaseSubscriptions, isBackendAvailable, backendApiSecret and ensure the rollback and error-state update happen only on fetch failure.src/components/SettingsPanel.tsx-163-163 (1)
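The optimistic-update-with-rollback pattern from the suggestion, stripped of Zustand and fetch. `SubscriptionStore` and the sync callback are hypothetical stand-ins; the shape is: flip locally first, then revert the flip if the backend call rejects:

```typescript
type SyncFn = (repoId: number, subscribed: boolean) => Promise<void>;

// Minimal stand-in for the store slice: flips membership optimistically,
// then reverts the flip if the backend sync rejects.
class SubscriptionStore {
  subscriptions = new Set<number>();

  async toggle(repoId: number, sync: SyncFn): Promise<void> {
    const wasSubscribed = this.subscriptions.has(repoId);
    const newSubscribed = !wasSubscribed;

    // Optimistic local update.
    if (newSubscribed) this.subscriptions.add(repoId);
    else this.subscriptions.delete(repoId);

    try {
      await sync(repoId, newSubscribed);
    } catch {
      // Rollback: restore the pre-toggle state so the UI matches the server.
      if (wasSubscribed) this.subscriptions.add(repoId);
      else this.subscriptions.delete(repoId);
    }
  }
}
```

A real store would additionally surface the failure (e.g., set an error flag) rather than rolling back silently.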
163-163: ⚠️ Potential issue | 🟡 Minor
Remove unnecessary backend.isAvailable from the dependency array.
backend.isAvailable is a getter that always returns true and never changes. Including it in the dependency array is redundant and potentially misleading. The actual reactive dependency is backendUser (from useAppStore), which is already correctly included. Consider either removing backend.isAvailable from the dependency array or, if the condition needs to remain in the effect, keep it as a guard but don't list it as a dependency.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/SettingsPanel.tsx` at line 163, Remove the unnecessary backend.isAvailable entry from the useEffect dependency array: the reactive dependency is backendUser from useAppStore, while backend.isAvailable is a static getter that never changes; keep any guard using backend.isAvailable inside the effect body if needed, but update the dependency array to only include backendUser (i.e., change the dependency array referenced around the useEffect that currently lists backend.isAvailable and backendUser).

server/src/index.ts-54-62 (1)
54-62: ⚠️ Potential issue | 🟡 Minor
Static file serving and SPA routing placed after error handler.
The static file serving and catch-all SPA route are placed after app.use(errorHandler) on line 52. Express middleware runs in order, so the error handler won't catch errors from the static middleware or the catch-all route. Consider placing the error handler last.
🔧 Suggested fix
  // Admin routes
  app.use(adminRouter);

- // Global error handler
- app.use(errorHandler);
-
  // Serve static UI in production
  const __dirname = path.dirname(new URL(import.meta.url).pathname);
  const frontendDistPath = path.resolve(__dirname, '../../dist');
  app.use(express.static(frontendDistPath));

  // Handle SPA routing
  app.get('*', (req, res) => {
    res.sendFile(path.join(frontendDistPath, 'index.html'));
  });

+ // Global error handler (must be last)
+ app.use(errorHandler);
+
  return app;
}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/src/index.ts` around lines 54 - 62, The error handler middleware (errorHandler) is currently registered before the static file serving (app.use(express.static(frontendDistPath))) and the SPA catch-all route (app.get('*', ...)), so it won't catch errors from those handlers; move the errorHandler registration to after the static middleware and the app.get('*', ...) route so errorHandler is the last app.use()/middleware registered, ensuring it can catch and handle errors from express.static and the wildcard route.

server/src/services/scheduler.ts-658-660 (1)
658-660: ⚠️ Potential issue | 🟡 Minor
Manual release checks always report zero work.
checkReleasesManually() returns { checked: 0, newReleases: 0 } regardless of what checkReleases() just did, so the caller cannot tell whether anything happened.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/src/services/scheduler.ts` around lines 658 - 660, The function checkReleasesManually currently calls checkReleases(userId) but always returns { checked: 0, newReleases: 0 }; fix it by awaiting and returning the actual result from checkReleases—e.g., const result = await checkReleases(userId); return result;—and ensure the return type of checkReleasesManually matches whatever checkReleases returns (adjust the Promise<{checked:number; newReleases:number}> signature if necessary) so callers receive the real work counts.
🧹 Nitpick comments (9)
src/components/CategorySidebar.tsx (1)
148-158: Consider removing stopPropagation and adding aria-label for accessibility.
- e.stopPropagation() is now unnecessary since the edit button is no longer nested inside the category selection button; no parent element would intercept this click event.
- Icon-only buttons should have aria-label for screen reader accessibility; title alone is not sufficient.
♻️ Suggested improvement
  <button
    onClick={(e) => {
-     e.stopPropagation();
      handleEditCategory(category);
    }}
    className="p-1 rounded opacity-0 group-hover:opacity-100 transition-opacity duration-200 hover:bg-gray-200 dark:hover:bg-gray-700"
    title={t('编辑分类', 'Edit category')}
+   aria-label={t('编辑分类', 'Edit category')}
  >
    <Edit3 className="w-3 h-3" />
  </button>

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/CategorySidebar.tsx` around lines 148 - 158, Remove the unnecessary e.stopPropagation() from the onClick handler on the Edit button and add an accessible label to the icon-only button: update the button that calls handleEditCategory(category) (the one rendering the Edit3 icon) to remove the e.stopPropagation() call and add an aria-label (e.g., aria-label={t('编辑分类', 'Edit category')}) so screen readers can announce its purpose; you can keep the existing title if desired but ensure aria-label is present for accessibility.

vite.config.ts (1)
25-36: Consider using appropriately sized icons for PWA.
Both icon entries reference icon.png but declare different sizes (512x512 and 192x192). If the actual file is 512x512, browsers will scale it down for the 192x192 use case, which may result in suboptimal quality. For best results, provide separate optimized assets for each size.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@vite.config.ts` around lines 25 - 36, The icons array in the PWA config uses the same file 'icon.png' for both sizes ('512x512' and '192x192'), which can degrade quality when scaled; replace the duplicated src with separate optimized assets (e.g., 'icon-512.png' for sizes:'512x512' and 'icon-192.png' for sizes:'192x192') and update the icons entries accordingly so each entry in the icons array references the correctly sized file and type.

README.md (1)
37-42: Avoid :latest in production deployment examples.
Using banjuer/github-stars-manager:latest makes deployments non-reproducible and can introduce unplanned upgrades. Prefer a versioned tag (e.g., v1.0.1) in both Docker CLI and Compose examples.
Also applies to: 51-53
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 37 - 42, Replace usage of the unpinned image tag "banjuer/github-stars-manager:latest" with a specific versioned tag (for example "banjuer/github-stars-manager:v1.0.1") in the Docker run command and the Docker Compose examples referenced (the occurrences of the image string in the README); update the two places (the docker run block and the compose snippet around lines 51-53) so deployments are reproducible and avoid using the :latest tag.

src/components/LoginScreen.tsx (1)
132-132: `onKeyPress` is deprecated; use `onKeyDown` instead.

React's `onKeyPress` event is deprecated. Use `onKeyDown` for keyboard event handling.

♻️ Replace with onKeyDown

```diff
- onKeyPress={handleKeyPress}
+ onKeyDown={handleKeyPress}
```

The handler type `React.KeyboardEvent<HTMLInputElement>` is already correct; change the event name reference if needed.

Also applies to: 152-152, 172-172, 192-192
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/LoginScreen.tsx` at line 132, Replace deprecated onKeyPress props with onKeyDown for all input elements using the keyboard handler (e.g., change onKeyPress={handleKeyPress} to onKeyDown={handleKeyPress} or rename the function to handleKeyDown); ensure the handler signature stays React.KeyboardEvent<HTMLInputElement> and update any internal event name usage if you renamed the handler. Apply this change for every occurrence of onKeyPress in the file (the handler function referenced is handleKeyPress) so keyboard events are handled via onKeyDown instead of the deprecated onKeyPress.

src/components/SettingsPanel.tsx (1)
601-606: Prefer using store action over direct setState.

Directly calling `useAppStore.setState()` bypasses the defined `setBackendUser` action and introduces inconsistent state update patterns. Additionally, line 604 uses `apprise_url` (snake_case), which doesn't match the `appriseUrl` property in the type definition.

♻️ Use setBackendUser action

```diff
+ const { setBackendUser, backendUser } = useAppStore.getState();
  const updated = await authService.updateProfile({
    apprise_url: appriseUrlInput || undefined
  });
- useAppStore.setState(state => ({
-   backendUser: state.backendUser ? {
-     ...state.backendUser,
-     apprise_url: updated.appriseUrl || null
-   } : null
- }));
+ if (backendUser) {
+   setBackendUser({
+     ...backendUser,
+     appriseUrl: updated.appriseUrl,
+   });
+ }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/SettingsPanel.tsx` around lines 601 - 606, The code is directly calling useAppStore.setState to update backendUser and uses snake_case apprise_url which mismatches the type; replace this direct setState call with the store action setBackendUser (invoke setBackendUser with an updater or new object) so updates go through the defined action, and set the property appriseUrl (not apprise_url) using updated.appriseUrl || null while preserving other backendUser fields (spread existing state.backendUser) and handling the null case when state.backendUser is falsy.

src/store/useAppStore.ts (1)
495-497: Avoid `as any` type assertion.

The `as any` cast on `backendUser` bypasses type safety. If the type doesn't match the partialize return type, consider updating the type definition instead.

♻️ Type-safe alternative

```diff
  // 持久化后端会话
  backendApiSecret: state.backendApiSecret,
- backendUser: state.backendUser as any,
+ backendUser: state.backendUser,
```

If this causes a type error, update the `PersistedAppState` type to properly include the `backendUser` type.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/store/useAppStore.ts` around lines 495 - 497, The persist partialization is using an unsafe cast `backendUser as any`; remove the `as any` and make the types line up by updating the PersistedAppState (or the partialize return type) to include the actual type of backendUser used in useAppStore so the object returned from the partialize function matches PersistedAppState; specifically update the type definitions for PersistedAppState or adjust the partialize mapping for backendUser in useAppStore to return the correctly typed value instead of casting.

src/services/auth.ts (1)
58-65: Duplicated localStorage parsing logic.

The secret extraction from localStorage is duplicated in both `updateProfile` and `getProfile`. Consider extracting a private helper method.

♻️ Suggested refactor

```diff
+ private getAuthSecret(): string {
+   const storeData = localStorage.getItem('github-stars-manager');
+   if (!storeData) return '';
+   try {
+     const parsed = JSON.parse(storeData);
+     return parsed.state?.backendApiSecret || '';
+   } catch {
+     return '';
+   }
+ }
+
  async updateProfile(data: {
    // ... params
  }): Promise<...> {
    const url = backend.backendUrl;
    if (!url) throw new Error('Backend not available');
-   const storeData = localStorage.getItem('github-stars-manager');
-   let secret = '';
-   if (storeData) {
-     try {
-       const parsed = JSON.parse(storeData);
-       secret = parsed.state?.backendApiSecret || '';
-     } catch { /* ignore */ }
-   }
+   const secret = this.getAuthSecret();
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/auth.ts` around lines 58 - 65, The localStorage parsing and secret extraction duplicated in getProfile and updateProfile should be refactored into a single private helper (e.g., getStoredBackendApiSecret or parseGithubStarsManagerState) that reads 'github-stars-manager', JSON.parse safely (catching errors) and returns the state.backendApiSecret or empty string; replace the duplicate blocks in getProfile and updateProfile with calls to this helper and update any tests/usages accordingly so both functions reuse the shared method.

server/src/routes/sync.ts (1)
91-92: Consider using proper request typing instead of `as any`.

The `(req as any).user?.id` pattern bypasses TypeScript's type checking. Consider extending the Express Request type or using a typed middleware.

♻️ Type-safe alternative

Create a typed request interface:

```typescript
interface AuthenticatedRequest extends express.Request {
  user?: { id: number; role: string; /* other fields */ };
}
```

Then use it in the handler:

```diff
-router.post('/api/sync/stars', async (req, res) => {
+router.post('/api/sync/stars', async (req: AuthenticatedRequest, res) => {
   try {
-    const userId = (req as any).user?.id;
+    const userId = req.user?.id;
```

Also applies to: 105-106
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/src/routes/sync.ts` around lines 91 - 92, The route handler uses (req as any).user?.id which bypasses TypeScript checks; define a typed request interface (e.g., AuthenticatedRequest extends express.Request with user?: { id: number; role?: string; ... }) and replace the untyped cast by typing the handler parameter as that interface so you can safely access req.user.id (update the handler signature where userId is read and the other occurrences around the user access at the later block referenced in the diff to use AuthenticatedRequest instead of express.Request/any); ensure any middleware that attaches req.user is consistent with this type.

src/components/Header.tsx (1)
67-68: Full page reload may cause poor UX.

`window.location.reload()` forces a complete page refresh, losing any in-memory state and causing a flash. Consider refreshing the store data instead using the existing sync mechanisms.

♻️ Alternative: Refresh store data

```diff
- window.location.reload();
+ // Trigger store refresh instead of full reload
+ await syncFromBackend();
```

This requires importing `syncFromBackend` from the autoSync service.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/Header.tsx` around lines 67 - 68, The full-page reload in the Header component (remove the call to window.location.reload()) should be replaced with a data-only sync using the existing autoSync mechanism: import syncFromBackend from the autoSync service and call it (or await syncFromBackend()) at the same point (e.g., inside the Header's logout/refresh handler such as handleLogout or whatever function contains the reload) so the store refreshes without a hard reload; if you instead use a Redux/Vuex/Context action, dispatch the existing store refresh action from that same handler rather than reloading the page.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@server/src/db/migrations.ts`:
- Around line 26-43: The migration that only runs ALTER TABLE to add user_id
leaves prior PK/UNIQUE constraints intact and won't complete multi-user
migration; update the migration logic (around initializeSchema, the tables
array, and the ALTER TABLE loop that uses defaultUserId) to rebuild affected
tables (at minimum settings, repositories, releases — and any table that had
global unique/PK constraints) by: 1) creating a new temp table with the new
schema that includes user_id in the primary key/unique constraints, 2) copying
existing rows into the temp table populating user_id with defaultUserId, 3)
dropping the old table, 4) renaming the temp table to the original name, and 5)
recreating any indexes/foreign keys; ensure this replace-flow runs instead of a
simple ALTER TABLE for those identified tables and keep error handling around
each step.
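The rebuild flow described above can be sketched as a pure helper that emits the ordered SQL statements for one table. The `settings` schema shown here is illustrative, not the project's actual column list; only the create/copy/drop/rename ordering and the `defaultUserId` backfill follow the review.

```typescript
// Sketch of the SQLite "rebuild" migration pattern for one table.
// The new column set and constraints are assumptions; the step order is what matters.
function buildRebuildStatements(
  table: string,
  newSchemaSql: string, // column definitions including user_id in the PK/UNIQUE
  copyColumns: string[], // columns that already exist in the old table
  defaultUserId: number
): string[] {
  const cols = copyColumns.join(', ');
  return [
    // 1) new temp table with the multi-user schema
    `CREATE TABLE ${table}_new (${newSchemaSql})`,
    // 2) copy rows, backfilling user_id with the migration default
    `INSERT INTO ${table}_new (user_id, ${cols}) SELECT ${defaultUserId}, ${cols} FROM ${table}`,
    // 3) drop the old table
    `DROP TABLE ${table}`,
    // 4) rename the temp table into place (recreate indexes afterwards)
    `ALTER TABLE ${table}_new RENAME TO ${table}`,
  ];
}

// Example for a hypothetical settings table keyed per user:
const stmts = buildRebuildStatements(
  'settings',
  'user_id INTEGER NOT NULL, key TEXT NOT NULL, value TEXT, PRIMARY KEY (user_id, key)',
  ['key', 'value'],
  1
);
```

Each statement would then run via `db.exec()` inside a transaction, with index/foreign-key recreation appended per table.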
- Around line 17-23: The seeded admin row in migrations.ts inserts a user
without an email and with a plaintext 'CHANGE_ME' password_hash, which prevents
login because authentication in routes/auth.ts uses email and expects a bcrypt
hash; update the INSERT in migrations.ts (the db.prepare(...) that sets
username/password_hash/role and assigns defaultUserId) to also set a valid email
and to store a real bcrypt digest (generate a secure random password and
bcrypt-hash it before inserting, or hash a generated placeholder and mark the
account as requiring reset) so the seeded account is reachable by auth.ts;
ensure the INSERT column list includes email and the value uses the hashed
password string.
- Around line 45-52: Replace the single batch ALTER TABLE approach: use PRAGMA
table_info('settings') to check if the 'updated_at' column is missing, then add
it with a syntactically-valid SQLite default (DEFAULT CURRENT_TIMESTAMP) via a
single db.exec() call; repeat this pattern for each column you need to add (do
not combine multiple ALTERs in one exec), and perform any backfill for
updated_at in a separate transaction after the column exists; ensure you do not
swallow errors that indicate an incomplete migration—only ignore a
missing-column case when PRAGMA confirmed the column already exists and
otherwise surface/throw errors so the migration does not advance prematurely.
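The PRAGMA-guarded ALTER can be sketched as a pure decision function over the rows `PRAGMA table_info` returns, keeping the DB call at the edge. The row shape mirrors better-sqlite3's `.all()` output; only `name` is consulted.

```typescript
// Sketch: decide whether a column needs adding from PRAGMA table_info rows.
interface PragmaRow { name: string }

function missingColumnSql(
  table: string,
  column: string,
  columnDef: string,
  pragmaRows: PragmaRow[]
): string | null {
  const exists = pragmaRows.some(r => r.name === column);
  // One ALTER per db.exec() call; null means the column is already present.
  return exists ? null : `ALTER TABLE ${table} ADD COLUMN ${column} ${columnDef}`;
}

// e.g. rows from db.prepare(`PRAGMA table_info('settings')`).all()
const rows: PragmaRow[] = [{ name: 'key' }, { name: 'value' }];
const stmt = missingColumnSql('settings', 'updated_at', 'TEXT DEFAULT CURRENT_TIMESTAMP', rows);
```

Because the check is explicit, a thrown error from the ALTER no longer needs a blanket catch: any failure is a genuine migration problem and should surface.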
In `@server/src/routes/auth.ts`:
- Around line 10-11: Replace the current JWT_SECRET usage that falls back to a
public string by requiring a dedicated signing secret: stop using
config.encryptionKey for JWT signing and remove the
'fallback_secret_for_dev_only' default; instead read a dedicated secret (e.g.,
config.jwtSecret or process.env.JWT_SECRET) into JWT_SECRET and make startup
fail fast (throw or process.exit with a clear error) if that value is missing or
empty; keep JWT_EXPIRES_IN as is but ensure any code that references JWT_SECRET
(token creation/verification) now uses the dedicated secret.
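The fail-fast requirement can be sketched as a small startup guard; the `JWT_SECRET` env name follows the review, and the error message is illustrative.

```typescript
// Sketch: refuse to start without a dedicated JWT signing secret.
function requireJwtSecret(env: Record<string, string | undefined>): string {
  const secret = env.JWT_SECRET;
  if (!secret || secret.trim() === '') {
    // Fail fast: never fall back to a hardcoded or reused key.
    throw new Error('JWT_SECRET is required; refusing to start with a default signing key');
  }
  return secret;
}

// At startup: const JWT_SECRET = requireJwtSecret(process.env);
```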
- Around line 134-158: The current update flow hashes and writes `password` and
`apprise_url` before validating `username` uniqueness, allowing partial commits
on a later 409; change the flow to validate all inputs first (perform the
`SELECT id FROM users WHERE username = ? AND id != ?` check for `username`) and
only after validation apply updates inside a single atomic transaction (use the
DB's transaction/BEGIN...COMMIT or `db.transaction`), precompute the
`passwordHash` before starting the transaction if needed, and then run the
`db.prepare(...).run(...)` statements for `password`, `apprise_url`, `username`,
`display_name`, and `avatar_url` within that transaction so either all updates
succeed or none are applied.
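The validate-first flow can be split into a pure planning step that returns the statements to run, with the caller executing the whole plan inside `db.transaction(...)`. Field names follow the review comment; the exact route code may differ, and the password hash is assumed to be precomputed with bcrypt before the transaction starts.

```typescript
// Sketch: plan all profile updates after validation, before any write.
interface ProfileUpdate {
  password?: string;
  apprise_url?: string;
  username?: string;
  display_name?: string;
  avatar_url?: string;
}

function planProfileUpdates(
  userId: number,
  input: ProfileUpdate,
  passwordHash?: string // precomputed bcrypt digest, if password is changing
): Array<{ sql: string; params: unknown[] }> {
  const plan: Array<{ sql: string; params: unknown[] }> = [];
  if (input.password !== undefined) {
    if (!passwordHash) throw new Error('passwordHash must be precomputed before the transaction');
    plan.push({ sql: 'UPDATE users SET password_hash = ? WHERE id = ?', params: [passwordHash, userId] });
  }
  for (const field of ['apprise_url', 'username', 'display_name', 'avatar_url'] as const) {
    const value = input[field];
    if (value !== undefined) {
      plan.push({ sql: `UPDATE users SET ${field} = ? WHERE id = ?`, params: [value, userId] });
    }
  }
  // Caller: db.transaction(() => { for (const step of plan) db.prepare(step.sql).run(...step.params); })();
  return plan;
}
```

The username-uniqueness SELECT runs before planning, so a 409 leaves the row untouched.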
In `@server/src/services/notification.ts`:
- Around line 72-78: The outbound POST using fetch(targetUrl, ...) can hang
indefinitely; wrap the call in an AbortController with a configurable timeout
(e.g., 5s) so requests are bounded: create an AbortController, pass
controller.signal into fetch, start a setTimeout that calls controller.abort()
after the timeout, and clear that timeout when fetch resolves; update the code
around the fetch/response handling (the fetch call that assigns to `response`)
to catch abort errors and treat them as a failed request so scheduler and
/api/notifications/test don't hang.
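The AbortController pattern can be wrapped in a generic helper so the same bound applies to any signal-aware call such as `fetch(targetUrl, { signal })`. The 5s default mirrors the review's suggestion and is an assumption.

```typescript
// Sketch: bound a signal-aware async call with an abort-based timeout.
async function withAbortTimeout<T>(
  run: (signal: AbortSignal) => Promise<T>,
  timeoutMs = 5000
): Promise<T> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    // e.g. run = (signal) => fetch(targetUrl, { method: 'POST', signal, ... })
    return await run(controller.signal);
  } finally {
    clearTimeout(timer); // clear the timer whether we resolved or aborted
  }
}
```

Callers catch the abort error and treat it as a failed delivery, so neither the scheduler nor `/api/notifications/test` hangs.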
- Line 86: Replace the current success log that prints the raw URL
(console.log(`✅ Notification sent successfully to ${appriseUrl.split('?')[0]}`))
with a redacted identifier: parse appriseUrl using the URL constructor, extract
only non-sensitive parts (hostname and the first path segment that identifies
the service, e.g., discord/telegram/gotify) and omit any path segments or query
parameters that contain tokens, then log "Notification sent successfully to
<redactedIdentifier>" instead; update this in
server/src/services/notification.ts where appriseUrl is used to ensure no
path-embedded tokens are ever logged.
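One possible redaction helper, keeping only scheme and hostname: the exact identifier format (here `scheme://host/…`) is an assumption; the review also suggests keeping a first path segment, which could be added if it never carries tokens.

```typescript
// Sketch: reduce a notification URL to a log-safe identifier.
// Drops path and query entirely, since discord/telegram/gotify embed tokens there.
function redactNotificationUrl(raw: string): string {
  try {
    const u = new URL(raw);
    const service = u.protocol.replace(':', ''); // e.g. 'discord', 'gotify', 'https'
    return `${service}://${u.hostname}/…`;
  } catch {
    return '<invalid-url>';
  }
}

// console.log(`✅ Notification sent successfully to ${redactNotificationUrl(appriseUrl)}`);
```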
In `@server/src/services/scheduler.ts`:
- Around line 536-550: updateNotificationPreferences currently uses "?? 1" which
forces omitted toggles back to true and fails when the row doesn't exist;
instead fetch the existing row via getNotificationPreferences(userId) (or query
notification_preferences), merge its fields with the provided
Partial<NotificationPreferences> so unspecified fields are preserved, then
perform an upsert (INSERT ... ON CONFLICT(user_id) DO UPDATE ...) or explicit
INSERT when absent and UPDATE when present to persist the merged values; ensure
you reference updateNotificationPreferences and getNotificationPreferences and
operate on the notification_preferences columns notify_new_release,
notify_star_added, notify_star_removed to avoid overwriting omitted toggles.
- Around line 124-222: The syncStars routine computes a diff then mutates
user-scoped tables without any per-user serialization, so concurrent runs can
race and produce duplicate inserts/notifications; fix by performing the
read-diff-and-write sequence inside a single DB transaction and making inserts
idempotent: wrap the selection of existingRepos, the deletion of removedStars,
and the insertion of newStars in one transaction (use an explicit transaction
via getDb().transaction / BEGIN IMMEDIATE or equivalent), change the INSERT into
an idempotent form (INSERT OR IGNORE / upsert) and determine the number of
actually inserted rows from the statement result (use run.changes) so
analyzeNewStars/sendStarNotification are only invoked for rows that were truly
inserted; apply the same transactional + idempotent-insert pattern to the other
affected functions (the blocks referenced at 428-492 and 553-656) and ensure
notifications are based on committed changes.
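The diff step itself is pure and can be sketched separately from the transactional write; the surrounding code would run the SELECT/DELETE/INSERT inside one `db.transaction`, use `INSERT OR IGNORE`, and notify only when `run().changes > 0`. Numeric repo IDs are an assumption here.

```typescript
// Sketch: compute star additions/removals from repo id sets.
function diffStars(existingIds: number[], fetchedIds: number[]) {
  const existing = new Set(existingIds);
  const fetched = new Set(fetchedIds);
  return {
    added: fetchedIds.filter(id => !existing.has(id)),   // candidates for INSERT OR IGNORE
    removed: existingIds.filter(id => !fetched.has(id)), // candidates for DELETE
  };
}

const diff = diffStars([1, 2, 3], [2, 3, 4]);
```

Because the insert is idempotent, a concurrent run that wins the race simply yields `changes === 0` for the loser, which suppresses the duplicate notification.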
---
Outside diff comments:
In `@src/components/ReleaseTimeline.tsx`:
- Around line 168-227: handleRefresh currently continues without verifying
authentication and treats empty arrays from GitHubApiService as a successful
refresh; add an auth precheck and fail-fast behavior: before looping call a
GitHubApiService auth check (e.g., githubApi.isAuthenticated() or validate the
token), and if not authenticated show an appropriate alert,
setIsRefreshing(false) and return; also treat null/undefined/error responses
from getRepositoryReleases/getIncrementalRepositoryReleases as a failure (not as
empty arrays) — on such failure alert the user, setIsRefreshing(false), and skip
calling addReleases/setLastRefreshTime so the UI won't display a false “Refresh
completed” success; ensure setIsRefreshing(false) is called in all early-return
and catch paths (including the catch block).
---
Minor comments:
In `@README_zh.md`:
- Around line 119-121: Remove or integrate the orphaned fragment "Ollama等本地AI服务"
and the bullet "**其他**: 任何兼容OpenAI API的服务" — either delete these leftover lines
or move them under the appropriate section (e.g., the list of supported
local/compatible AI services) so the "目标用户" flow is restored; ensure any moved
items are correctly formatted as list entries and that surrounding
headings/paragraphs remain coherent.
- Line 32: Fix the typo in README_zh.md by replacing the incorrect fragment "镜、"
with the correct word "镜像" in the sentence that currently reads "您可以直接从 Docker
Hub 拉取预构建的镜、,快速启动应用。", producing "您可以直接从 Docker Hub 拉取预构建的镜像,快速启动应用。"
- Around line 168-173: The Markdown table under the "#### 环境变量" heading needs
blank lines before and after it to satisfy markdownlint MD058; edit the
README_zh.md content so there is an empty line between the "#### 环境变量" heading
and the table start, and another empty line after the table end (before the next
paragraph or heading) to ensure proper table rendering.
In `@README.md`:
- Line 109: Replace the placeholder text "git clone ..." with a runnable clone
command that includes the repository's full HTTPS URL; specifically update the
README line containing the string git clone ... so it reads the complete git
clone command using the repo's HTTPS URL (for example:
https://github.com/OWNER/REPO.git) so users can copy-and-paste to clone the
repository directly.
In `@server/src/index.ts`:
- Around line 54-62: The error handler middleware (errorHandler) is currently
registered before the static file serving
(app.use(express.static(frontendDistPath))) and the SPA catch-all route
(app.get('*', ...)), so it won't catch errors from those handlers; move the
errorHandler registration to after the static middleware and the app.get('*',
...) route so errorHandler is the last app.use()/middleware registered, ensuring
it can catch and handle errors from express.static and the wildcard route.
In `@server/src/services/scheduler.ts`:
- Around line 658-660: The function checkReleasesManually currently calls
checkReleases(userId) but always returns { checked: 0, newReleases: 0 }; fix it
by awaiting and returning the actual result from checkReleases—e.g., const
result = await checkReleases(userId); return result;—and ensure the return type
of checkReleasesManually matches whatever checkReleases returns (adjust the
Promise<{checked:number; newReleases:number}> signature if necessary) so callers
receive the real work counts.
In `@src/components/Header.tsx`:
- Around line 60-66: Replace the hardcoded Chinese alert strings in the Header
component with calls to the i18n helper t(), using interpolation for numeric
values (e.g., t('sync.addedRemoved', { added: result.added, removed:
result.removed }) etc.) so the three branches (added>0, removed>0, else) use
localized keys; also update the error alert messages in the same component (the
error paths referenced near the sync flow) to use t() with appropriate keys
(e.g., t('sync.error') and t('sync.partialError', { message })) to keep
bilingual consistency and include any error details via interpolation.
In `@src/components/LoginScreen.tsx`:
- Around line 48-52: The register() response lacks avatarUrl and appriseUrl but
setBackendUser(response.user) expects them, so either update the
authService.register implementation and its return type to include avatarUrl and
appriseUrl on the returned user object, or change the LoginScreen.tsx call site
to merge defaults before calling setBackendUser (e.g., create a user object from
response.user with avatarUrl: '' and appriseUrl: '' when those fields are
missing); update the types for register, the returned user shape, and any uses
of backendUser (e.g., SettingsPanel) to match the chosen fix so no components
encounter undefined fields.
In `@src/components/SettingsPanel.tsx`:
- Line 73: The state initializer for appriseUrlInput uses the wrong backendUser
property name (snake_case); change the initializer in SettingsPanel (the
useState call that defines appriseUrlInput and setAppriseUrlInput) to read
backendUser?.appriseUrl instead of backendUser?.apprise_url so it matches the
backendUser type and the usage elsewhere.
- Line 163: Remove the unnecessary backend.isAvailable entry from the useEffect
dependency array: the reactive dependency is backendUser from useAppStore, while
backend.isAvailable is a static getter that never changes; keep any guard using
backend.isAvailable inside the effect body if needed, but update the dependency
array to only include backendUser (i.e., change the dependency array referenced
around the useEffect that currently lists backend.isAvailable and backendUser).
In `@src/services/auth.ts`:
- Around line 76-78: The error block in src/services/auth.ts declares a local
"data" that shadows the outer "data" parameter; rename the local variable (e.g.,
to "errorData" or "parsedError") in the res.ok check where you call
res.json().catch(() => ({})) and update the subsequent throw to use that new
name so the outer "data" parameter (from the enclosing function or scope) is not
shadowed; ensure only the local parsed error variable is used for error.message
selection.
In `@src/store/useAppStore.ts`:
- Around line 383-411: toggleReleaseSubscription currently does an optimistic
update (uses get() and set() to flip state.releaseSubscriptions) but only logs
backend errors, causing divergence if the PATCH fails; capture the previous
subscription state (wasSubscribed) before the optimistic set and, in the
fetch.catch handler, revert the change by calling set() to restore
releaseSubscriptions (use the same Set logic with repoId and wasSubscribed) and
also set an observable error flag (e.g., lastSyncError or releaseSyncError) in
the store so the UI can surface the failure; reference
toggleReleaseSubscription, releaseSubscriptions, isBackendAvailable,
backendApiSecret and ensure the rollback and error-state update happen only on
fetch failure.
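The rollback pattern described above can be sketched against a simplified store shape; `releaseSubscriptions` as a `Set` and `lastSyncError` follow the review comment, but the actual Zustand store fields may differ.

```typescript
// Sketch: optimistic toggle with rollback and an observable error flag.
interface SubState {
  releaseSubscriptions: Set<number>;
  lastSyncError: string | null;
}

async function toggleSubscription(
  state: SubState,
  repoId: number,
  patchBackend: () => Promise<void> // the PATCH call, assumed
): Promise<void> {
  const wasSubscribed = state.releaseSubscriptions.has(repoId);
  // Optimistic flip.
  if (wasSubscribed) state.releaseSubscriptions.delete(repoId);
  else state.releaseSubscriptions.add(repoId);
  try {
    await patchBackend();
  } catch (err) {
    // Revert to the captured pre-toggle state and surface the failure.
    if (wasSubscribed) state.releaseSubscriptions.add(repoId);
    else state.releaseSubscriptions.delete(repoId);
    state.lastSyncError = err instanceof Error ? err.message : String(err);
  }
}
```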
---
Duplicate comments:
In `@README.md`:
- Around line 68-70: Remove the insecure "First Run: The first registered user
automatically becomes the SuperAdmin." line and replace it with guidance for an
operator-controlled bootstrap (e.g., run a seed script, use a one-time setup
token/secret, or an env-gated initialization step) and explicitly state that
normal /register-created users are non-admin by default; update the README
section heading or bullet labeled "First Run" or "Registration" to describe the
safe bootstrap flow and reference the operator-managed setup mechanism instead.
In `@server/src/services/notification.ts`:
- Around line 1-120: The code currently converts user-provided
apprise/gotify/discord/telegram URLs into server-side HTTP requests without
robust destination checks; update sendNotification to call and extend
validateNotificationUrl (or a new validateDestination function) to enforce
allow/deny rules before fetch: parse the computed targetUrl (use URL or new
getTargetUrlFromApprise(appriseUrl) helper used inside sendNotification),
resolve its hostname to IP(s) and reject loopback, link-local, private RFC1918
ranges and cloud metadata IPs (e.g., 169.254.169.254), and block non-whitelisted
hostnames; fail early with false and log when a destination is disallowed.
Ensure this validation runs after you construct targetUrl for schemes that
transform the URL (apprise://, gotify://, discord://, telegram://) and before
the fetch call, and update validateNotificationUrl to reflect allowed schemes
while keeping it separate from network-level destination checks.
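The network-level part of that check can be sketched for IPv4: a real implementation must first resolve the hostname (e.g. `dns.lookup`) and also reject IPv6 equivalents; the blocked ranges below follow the review's list.

```typescript
// Sketch: reject loopback, RFC1918, and link-local/metadata IPv4 destinations.
function isBlockedIPv4(ip: string): boolean {
  const parts = ip.split('.').map(Number);
  // Malformed addresses are treated as blocked (fail closed).
  if (parts.length !== 4 || parts.some(p => Number.isNaN(p) || p < 0 || p > 255)) return true;
  const [a, b] = parts;
  if (a === 127) return true;                       // loopback
  if (a === 10) return true;                        // RFC1918
  if (a === 172 && b >= 16 && b <= 31) return true; // RFC1918
  if (a === 192 && b === 168) return true;          // RFC1918
  if (a === 169 && b === 254) return true;          // link-local incl. 169.254.169.254 metadata
  return false;
}
```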
---
Nitpick comments:
In `@README.md`:
- Around line 37-42: Replace usage of the unpinned image tag
"banjuer/github-stars-manager:latest" with a specific versioned tag (for example
"banjuer/github-stars-manager:v1.0.1") in the Docker run command and the Docker
Compose examples referenced (the occurrences of the image string in the README);
update the two places (the docker run block and the compose snippet around lines
51-53) so deployments are reproducible and avoid using the :latest tag.
In `@server/src/routes/sync.ts`:
- Around line 91-92: The route handler uses (req as any).user?.id which bypasses
TypeScript checks; define a typed request interface (e.g., AuthenticatedRequest
extends express.Request with user?: { id: number; role?: string; ... }) and
replace the untyped cast by typing the handler parameter as that interface so
you can safely access req.user.id (update the handler signature where userId is
read and the other occurrences around the user access at the later block
referenced in the diff to use AuthenticatedRequest instead of
express.Request/any); ensure any middleware that attaches req.user is consistent
with this type.
In `@src/components/CategorySidebar.tsx`:
- Around line 148-158: Remove the unnecessary e.stopPropagation() from the
onClick handler on the Edit button and add an accessible label to the icon-only
button: update the button that calls handleEditCategory(category) (the one
rendering the Edit3 icon) to remove the e.stopPropagation() call and add an
aria-label (e.g., aria-label={t('编辑分类', 'Edit category')}) so screen readers can
announce its purpose; you can keep the existing title if desired but ensure
aria-label is present for accessibility.
In `@src/components/Header.tsx`:
- Around line 67-68: The full-page reload in the Header component (remove the
call to window.location.reload()) should be replaced with a data-only sync using
the existing autoSync mechanism: import syncFromBackend from the autoSync
service and call it (or await syncFromBackend()) at the same point (e.g., inside
the Header's logout/refresh handler such as handleLogout or whatever function
contains the reload) so the store refreshes without a hard reload; if you
instead use a Redux/Vuex/Context action, dispatch the existing store refresh
action from that same handler rather than reloading the page.
In `@src/components/LoginScreen.tsx`:
- Line 132: Replace deprecated onKeyPress props with onKeyDown for all input
elements using the keyboard handler (e.g., change onKeyPress={handleKeyPress} to
onKeyDown={handleKeyPress} or rename the function to handleKeyDown); ensure the
handler signature stays React.KeyboardEvent<HTMLInputElement> and update any
internal event name usage if you renamed the handler. Apply this change for
every occurrence of onKeyPress in the file (the handler function referenced is
handleKeyPress) so keyboard events are handled via onKeyDown instead of the
deprecated onKeyPress.
In `@src/components/SettingsPanel.tsx`:
- Around line 601-606: The code is directly calling useAppStore.setState to
update backendUser and uses snake_case apprise_url which mismatches the type;
replace this direct setState call with the store action setBackendUser (invoke
setBackendUser with an updater or new object) so updates go through the defined
action, and set the property appriseUrl (not apprise_url) using
updated.appriseUrl || null while preserving other backendUser fields (spread
existing state.backendUser) and handling the null case when state.backendUser is
falsy.
In `@src/services/auth.ts`:
- Around line 58-65: The localStorage parsing and secret extraction duplicated
in getProfile and updateProfile should be refactored into a single private
helper (e.g., getStoredBackendApiSecret or parseGithubStarsManagerState) that
reads 'github-stars-manager', JSON.parse safely (catching errors) and returns
the state.backendApiSecret or empty string; replace the duplicate blocks in
getProfile and updateProfile with calls to this helper and update any
tests/usages accordingly so both functions reuse the shared method.
In `@src/store/useAppStore.ts`:
- Around line 495-497: The persist partialization is using an unsafe cast
`backendUser as any`; remove the `as any` and make the types line up by updating
the PersistedAppState (or the partialize return type) to include the actual type
of backendUser used in useAppStore so the object returned from the partialize
function matches PersistedAppState; specifically update the type definitions for
PersistedAppState or adjust the partialize mapping for backendUser in
useAppStore to return the correctly typed value instead of casting.
In `@vite.config.ts`:
- Around line 25-36: The icons array in the PWA config uses the same file
'icon.png' for both sizes ('512x512' and '192x192'), which can degrade quality
when scaled; replace the duplicated src with separate optimized assets (e.g.,
'icon-512.png' for sizes:'512x512' and 'icon-192.png' for sizes:'192x192') and
update the icons entries accordingly so each entry in the icons array references
the correctly sized file and type.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 570e1cc2-b48f-4433-aae8-a54ecd1fd4fb
⛔ Files ignored due to path filters (2)
- server/better_sqlite3.tar.gz is excluded by !**/*.gz
- server/package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (31)
- DOCKER.md
- README.md
- README_zh.md
- UPDATE_FEATURE_GUIDE.md
- docker-compose.yml
- package.json
- scripts/update-version.cjs
- server/data/data.db-shm
- server/data/data.db-wal
- server/package.json
- server/src/config.ts
- server/src/db/migrations.ts
- server/src/index.ts
- server/src/routes/auth.ts
- server/src/routes/configs.ts
- server/src/routes/sync.ts
- server/src/services/notification.ts
- server/src/services/scheduler.ts
- src/components/CategorySidebar.tsx
- src/components/Header.tsx
- src/components/LoginScreen.tsx
- src/components/ReleaseTimeline.tsx
- src/components/SettingsPanel.tsx
- src/services/auth.ts
- src/services/autoSync.ts
- src/services/updateService.ts
- src/store/useAppStore.ts
- src/vite-env.d.ts
- versions/README.md
- versions/version-info.xml
- vite.config.ts
💤 Files with no reviewable changes (1)
- server/src/config.ts
✅ Files skipped from review due to trivial changes (3)
- scripts/update-version.cjs
- UPDATE_FEATURE_GUIDE.md
- versions/README.md
🚧 Files skipped from review as they are similar to previous changes (1)
- server/package.json
```typescript
if (hasRepos.c > 0 && hasUsers.c === 0) {
  // Create a default SuperAdmin if migrating existing instance
  const result = db.prepare(`
    INSERT INTO users (username, password_hash, role)
    VALUES ('admin', 'CHANGE_ME', 'SuperAdmin')
  `).run();
  defaultUserId = result.lastInsertRowid as number;
```
The seeded migration admin can never log in.
This row has no email and stores password_hash = 'CHANGE_ME', but server/src/routes/auth.ts authenticates by email and expects a real bcrypt digest. Upgraded instances with preexisting repositories will end up with data assigned to an unreachable owner account.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/src/db/migrations.ts` around lines 17 - 23, The seeded admin row in
migrations.ts inserts a user without an email and with a plaintext 'CHANGE_ME'
password_hash, which prevents login because authentication in routes/auth.ts
uses email and expects a bcrypt hash; update the INSERT in migrations.ts (the
db.prepare(...) that sets username/password_hash/role and assigns defaultUserId)
to also set a valid email and to store a real bcrypt digest (generate a secure
random password and bcrypt-hash it before inserting, or hash a generated
placeholder and mark the account as requiring reset) so the seeded account is
reachable by auth.ts; ensure the INSERT column list includes email and the value
uses the hashed password string.
```typescript
// 3. Migrate existing data to have user_id if they don't have it defined properly
// Note: Since we used 'initializeSchema', tables might be fresh ones with new columns
// depending on SQLite's ALTER abilities. Actually, since SQLite doesn't support adding
// columns with NOT NULL constraints trivially without a default, we should ideally recreate
// tables or trust that initialize schema handles the 'CREATE TABLE IF NOT EXISTS' nicely
// but wouldn't alter existing. Let's do a safe alter table for existing users:

const tables = ['repositories', 'releases', 'categories', 'ai_configs', 'webdav_configs', 'asset_filters', 'settings'];

for (const table of tables) {
  try {
    db.exec(`ALTER TABLE ${table} ADD COLUMN user_id INTEGER NOT NULL DEFAULT ${defaultUserId}`);
  } catch (e: any) {
    if (!e.message.includes('duplicate column name')) {
      console.error(`Error adding user_id to ${table}:`, e.message);
    }
  }
}
```
Adding user_id alone does not finish the multi-user migration.
initializeSchema(db) plus ALTER TABLE ... ADD COLUMN user_id preserves whatever PK/UNIQUE rules the old single-user tables already had. If settings, repositories, or releases were globally keyed before, they still are now, so a migrated instance will overwrite or reject another user's rows. These tables need to be rebuilt in their new shape and copied across instead of only appending user_id.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/src/db/migrations.ts` around lines 26 - 43, The migration that only
runs ALTER TABLE to add user_id leaves prior PK/UNIQUE constraints intact and
won't complete multi-user migration; update the migration logic (around
initializeSchema, the tables array, and the ALTER TABLE loop that uses
defaultUserId) to rebuild affected tables (at minimum settings, repositories,
releases — and any table that had global unique/PK constraints) by: 1) creating
a new temp table with the new schema that includes user_id in the primary
key/unique constraints, 2) copying existing rows into the temp table populating
user_id with defaultUserId, 3) dropping the old table, 4) renaming the temp
table to the original name, and 5) recreating any indexes/foreign keys; ensure
this replace-flow runs instead of a simple ALTER TABLE for those identified
tables and keep error handling around each step.
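A sketch of the rebuild dance for one of the affected tables. The column list here is illustrative (not the app's real schema); the point is the create-copy-drop-rename sequence with `user_id` folded into the primary key, wrapped in a transaction.

```typescript
// Minimal shape of the better-sqlite3 handle this sketch assumes.
interface MigrationDb { exec(sql: string): void; }

// Rebuild a table so its uniqueness rules become per-user:
// create the new shape, copy rows (stamping user_id), swap, commit.
export function rebuildSettingsPerUser(db: MigrationDb, defaultUserId: number): void {
  db.exec('BEGIN');
  try {
    db.exec(`
      CREATE TABLE settings_new (
        user_id INTEGER NOT NULL,
        key TEXT NOT NULL,
        value TEXT,
        PRIMARY KEY (user_id, key)
      )
    `);
    db.exec(`
      INSERT INTO settings_new (user_id, key, value)
      SELECT ${defaultUserId}, key, value FROM settings
    `);
    db.exec('DROP TABLE settings');
    db.exec('ALTER TABLE settings_new RENAME TO settings');
    db.exec('COMMIT');
  } catch (e) {
    db.exec('ROLLBACK');
    throw e;
  }
}
```

The same shape would be repeated for `repositories` and `releases`, recreating any indexes after the rename.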
3: (db) => {
  try {
    db.exec('ALTER TABLE settings ADD COLUMN updated_at TEXT NOT NULL DEFAULT (datetime(\'now\'))');
  } catch (e: any) {
    if (!e.message.includes('duplicate column name')) {
      console.error('Error adding updated_at to settings:', e.message);
    }
  }
🧩 Analysis chain
🌐 Web query:
What restrictions does SQLite place on `ALTER TABLE ... ADD COLUMN` default values, and if one statement in a multi-statement `ALTER TABLE ...; ALTER TABLE ...;` batch fails with `duplicate column name`, are later statements still executed?
💡 Result:
SQLite ALTER TABLE ... ADD COLUMN DEFAULT restrictions
When using ALTER TABLE ... ADD COLUMN, SQLite allows most CREATE TABLE column syntax except these relevant restrictions on defaults: the new column may not have a default value of CURRENT_TIME, CURRENT_DATE, CURRENT_TIMESTAMP, or any expression in parentheses (e.g. DEFAULT (1) is rejected). If you add the column as NOT NULL, it must have a non-NULL default. If foreign keys are enabled and you add a REFERENCES column, its default must be NULL. (sqlite.org)
If one statement fails with duplicate column name, do later statements run?
It depends how you execute the “batch”:
- Single API call with multiple semicolon-separated statements (e.g., `sqlite3_exec`): if one statement errors (such as `duplicate column name`), SQLite stops and skips all subsequent statements in that SQL string. (sqlite.org)
- `sqlite3` CLI / script input: by default it continues after an error; it only stops if you enable `.bail on` (or run with `-bail`). (sqlite.org)
Citations:
- 1: https://sqlite.org/draft/lang_altertable.html
- 2: https://sqlite.org/forum/info/c97ef71dd755f6087fc96847f077c5dcd73c270375bc52665bd726cbb14da235?utm_source=openai
- 3: https://sqlite.org/cli.html?utm_source=openai
🏁 Script executed:
# Check the code at lines 99-111 to see if similar pattern exists
sed -n '99,111p' server/src/db/migrations.ts
Repository: AmintaCCCP/GithubStarsManager
Length of output: 546
🏁 Script executed:
# Find where db.exec is defined or used to understand its behavior
fd migrations.ts | head -5
cat server/src/db/migrations.ts | wc -l
Repository: AmintaCCCP/GithubStarsManager
Length of output: 104
🏁 Script executed:
# Check the beginning of the file and imports to understand what db is
head -30 server/src/db/migrations.ts
Repository: AmintaCCCP/GithubStarsManager
Length of output: 1460
🏁 Script executed:
# Search for db initialization or setup
rg "new Database|require.*sqlite|import.*sqlite" server/src/db/
Repository: AmintaCCCP/GithubStarsManager
Length of output: 343
🏁 Script executed:
# Check better-sqlite3 documentation behavior by searching for any exec() usage patterns in the codebase
rg "\.exec\(" server/src/db/
Repository: AmintaCCCP/GithubStarsManager
Length of output: 633
The SQLite ALTER TABLE pattern here is not upgrade-safe.
DEFAULT (datetime('now')) is forbidden syntax in SQLite's ALTER TABLE ... ADD COLUMN (you must use DEFAULT CURRENT_TIMESTAMP without parentheses, or a literal value). Additionally, better-sqlite3's db.exec() stops and rolls back the transaction on the first error, so in migrations with multiple ALTER TABLE statements in one batch (e.g., lines 99–111), any failure skips remaining columns. The error handler silently ignores 'duplicate column name' and allows the migration version to advance despite an incomplete schema.
Check PRAGMA table_info(...) for missing columns, add them one by one in separate statements, and backfill updated_at in a subsequent transaction.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/src/db/migrations.ts` around lines 45 - 52, Replace the single batch
ALTER TABLE approach: use PRAGMA table_info('settings') to check if the
'updated_at' column is missing, then add it with a syntactically-valid SQLite
default (DEFAULT CURRENT_TIMESTAMP) via a single db.exec() call; repeat this
pattern for each column you need to add (do not combine multiple ALTERs in one
exec), and perform any backfill for updated_at in a separate transaction after
the column exists; ensure you do not swallow errors that indicate an incomplete
migration—only ignore a missing-column case when PRAGMA confirmed the column
already exists and otherwise surface/throw errors so the migration does not
advance prematurely.
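A sketch of the check-then-alter pattern the prompt describes. The helper name `ensureColumn` is illustrative; it assumes a better-sqlite3-style handle where `prepare(...).all()` returns the `PRAGMA table_info` rows.

```typescript
interface PragmaDb {
  prepare(sql: string): { all(): Array<{ name: string }> };
  exec(sql: string): void;
}

// Add a column only when PRAGMA table_info says it is missing, one
// statement per exec, with a default SQLite accepts in ADD COLUMN.
export function ensureColumn(db: PragmaDb, table: string, column: string, ddl: string): boolean {
  const cols = db.prepare(`PRAGMA table_info(${table})`).all();
  if (cols.some(c => c.name === column)) return false; // nothing to add, nothing to swallow
  db.exec(`ALTER TABLE ${table} ADD COLUMN ${column} ${ddl}`); // real failures now throw loudly
  return true;
}

// Usage in a migration — DEFAULT CURRENT_TIMESTAMP is legal in ADD COLUMN,
// DEFAULT (datetime('now')) is not:
// ensureColumn(db, 'settings', 'updated_at', "TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP");
```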
const JWT_SECRET = config.encryptionKey || 'fallback_secret_for_dev_only';
const JWT_EXPIRES_IN = '7d';
Don't sign JWTs with the encryption key or a known fallback.
If config.encryptionKey is unset, every deployment falls back to the same public string, so tokens are forgeable. Even when it is set, reusing the encryption key for signing widens the impact of a single secret leak. Fail fast unless a dedicated JWT secret is configured.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/src/routes/auth.ts` around lines 10 - 11, Replace the current
JWT_SECRET usage that falls back to a public string by requiring a dedicated
signing secret: stop using config.encryptionKey for JWT signing and remove the
'fallback_secret_for_dev_only' default; instead read a dedicated secret (e.g.,
config.jwtSecret or process.env.JWT_SECRET) into JWT_SECRET and make startup
fail fast (throw or process.exit with a clear error) if that value is missing or
empty; keep JWT_EXPIRES_IN as is but ensure any code that references JWT_SECRET
(token creation/verification) now uses the dedicated secret.
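A fail-fast sketch of the dedicated-secret check (the function name and the 32-character minimum are illustrative choices, not from the codebase):

```typescript
// Require a dedicated JWT signing secret; never reuse the encryption key
// and never fall back to a public default string.
export function requireJwtSecret(env: Record<string, string | undefined>): string {
  const secret = env.JWT_SECRET?.trim();
  if (!secret) {
    throw new Error('JWT_SECRET must be set; refusing to start with a forgeable token secret');
  }
  if (secret.length < 32) {
    throw new Error('JWT_SECRET must be at least 32 characters');
  }
  return secret;
}

// At startup: const JWT_SECRET = requireJwtSecret(process.env);
```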
if (password) {
  const salt = await bcrypt.genSalt(10);
  const passwordHash = await bcrypt.hash(password, salt);
  db.prepare('UPDATE users SET password_hash = ? WHERE id = ?').run(passwordHash, userId);
}

if (apprise_url !== undefined) {
  db.prepare('UPDATE users SET apprise_url = ? WHERE id = ?').run(apprise_url, userId);
}

if (username !== undefined) {
  const existing = db.prepare('SELECT id FROM users WHERE username = ? AND id != ?').get(username, userId);
  if (existing) {
    return res.status(409).json({ error: 'Username already taken' });
  }
  db.prepare('UPDATE users SET username = ? WHERE id = ?').run(username, userId);
}

if (display_name !== undefined) {
  db.prepare('UPDATE users SET display_name = ? WHERE id = ?').run(display_name, userId);
}

if (avatar_url !== undefined) {
  db.prepare('UPDATE users SET avatar_url = ? WHERE id = ?').run(avatar_url, userId);
}
Profile updates are non-atomic on validation failure.
password and apprise_url are written before the username uniqueness check. A request that later returns 409 for username can still mutate other fields, which makes the failed update partially committed. Validate first, then apply the changes in one transaction.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/src/routes/auth.ts` around lines 134 - 158, The current update flow
hashes and writes `password` and `apprise_url` before validating `username`
uniqueness, allowing partial commits on a later 409; change the flow to validate
all inputs first (perform the `SELECT id FROM users WHERE username = ? AND id !=
?` check for `username`) and only after validation apply updates inside a single
atomic transaction (use the DB's transaction/BEGIN...COMMIT or
`db.transaction`), precompute the `passwordHash` before starting the transaction
if needed, and then run the `db.prepare(...).run(...)` statements for
`password`, `apprise_url`, `username`, `display_name`, and `avatar_url` within
that transaction so either all updates succeed or none are applied.
const response = await fetch(targetUrl, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
  },
  body: JSON.stringify(body),
});
Outbound notification calls need a timeout.
A dead endpoint can leave /api/notifications/test and scheduler jobs waiting indefinitely. Wrap the fetch() in an AbortController timeout so failures are bounded.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/src/services/notification.ts` around lines 72 - 78, The outbound POST
using fetch(targetUrl, ...) can hang indefinitely; wrap the call in an
AbortController with a configurable timeout (e.g., 5s) so requests are bounded:
create an AbortController, pass controller.signal into fetch, start a setTimeout
that calls controller.abort() after the timeout, and clear that timeout when
fetch resolves; update the code around the fetch/response handling (the fetch
call that assigns to `response`) to catch abort errors and treat them as a
failed request so scheduler and /api/notifications/test don't hang.
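A sketch of the bounded fetch. The helper names are illustrative; `timeoutSignal` owns the AbortController and timer, and the wrapper always clears the timer whether the request succeeds, fails, or aborts.

```typescript
// Bound an outbound notification POST with an AbortController so a dead
// endpoint cannot hang the test route or the scheduler.
export function timeoutSignal(ms: number): { signal: AbortSignal; cancel: () => void } {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  return { signal: controller.signal, cancel: () => clearTimeout(timer) };
}

export async function postWithTimeout(url: string, body: unknown, ms = 5000): Promise<Response> {
  const { signal, cancel } = timeoutSignal(ms);
  try {
    return await fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(body),
      signal,
    });
  } finally {
    cancel(); // always clear the timer, success or failure
  }
}
```

Callers should catch the abort rejection and treat it as a failed delivery rather than letting it propagate as an unhandled error.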
  return false;
}

console.log(`✅ Notification sent successfully to ${appriseUrl.split('?')[0]}`);
Success logging is leaking webhook secrets.
appriseUrl.split('?')[0] still contains path-embedded tokens for Discord, Telegram, Gotify, and Apprise URLs. Log only a redacted host/service identifier.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/src/services/notification.ts` at line 86, Replace the current success
log that prints the raw URL (console.log(`✅ Notification sent successfully to
${appriseUrl.split('?')[0]}`)) with a redacted identifier: parse appriseUrl
using the URL constructor, extract only non-sensitive parts (hostname and the
first path segment that identifies the service, e.g., discord/telegram/gotify)
and omit any path segments or query parameters that contain tokens, then log
"Notification sent successfully to <redactedIdentifier>" instead; update this in
server/src/services/notification.ts where appriseUrl is used to ensure no
path-embedded tokens are ever logged.
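A conservative redaction sketch (the function name is illustrative); it keeps only scheme and host, since for Discord, Telegram, and Gotify the token lives in the path:

```typescript
// Log only a scheme/host identifier; webhook tokens are embedded in the
// URL path for several services, so the path must never be printed.
export function redactNotifyTarget(rawUrl: string): string {
  try {
    const u = new URL(rawUrl);
    return `${u.protocol}//${u.hostname}`;
  } catch {
    return '<unparseable notification URL>';
  }
}
```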
async function syncStars(userId: number): Promise<void> {
  const db = getDb();

  const tokenRow = db.prepare('SELECT value FROM settings WHERE user_id = ? AND key = ?').get(userId, 'github_token') as { value: string } | undefined;
  if (!tokenRow) {
    console.log(`No GitHub token for user ${userId}, skipping sync_stars`);
    return;
  }

  const githubToken = decrypt(tokenRow.value, config.encryptionKey);

  const allStars: any[] = [];
  let page = 1;
  const perPage = 100;

  while (true) {
    const response = await fetch(`https://api.github.com/user/starred?per_page=${perPage}&page=${page}&sort=updated`, {
      headers: {
        'Authorization': `token ${githubToken}`,
        'Accept': 'application/vnd.github.v3+json',
        'User-Agent': 'GithubStarsManager'
      }
    });

    if (!response.ok) {
      throw new Error(`GitHub API error: ${response.status}`);
    }

    const stars = await response.json() as any[];

    if (stars.length === 0) break;

    allStars.push(...stars);

    if (stars.length < perPage) break;

    page++;

    await new Promise(resolve => setTimeout(resolve, 100));
  }

  console.log(`Fetched ${allStars.length} starred repos for user ${userId}`);

  const existingRepos = db.prepare('SELECT id, full_name FROM repositories WHERE user_id = ?').all(userId) as { id: number; full_name: string }[];
  const existingMap = new Map(existingRepos.map(r => [r.full_name, r.id]));
  const newStars: any[] = [];
  const removedStars: string[] = [];

  for (const star of allStars) {
    if (!existingMap.has(star.full_name)) {
      newStars.push(star);
    }
    existingMap.delete(star.full_name);
  }

  for (const [fullName] of existingMap) {
    removedStars.push(fullName);
  }

  const insertRepo = db.prepare(`
    INSERT INTO repositories (
      id, user_id, name, full_name, description, html_url, stargazers_count,
      language, created_at, updated_at, pushed_at, starred_at, owner_login, owner_avatar_url, topics
    ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
  `);

  for (const repo of newStars) {
    insertRepo.run(
      repo.id,
      userId,
      repo.name,
      repo.full_name,
      repo.description,
      repo.html_url,
      repo.stargazers_count,
      repo.language,
      repo.created_at,
      repo.updated_at,
      repo.pushed_at,
      new Date().toISOString(),
      repo.owner?.login,
      repo.owner?.avatar_url,
      JSON.stringify(repo.topics || [])
    );
  }

  for (const fullName of removedStars) {
    db.prepare('DELETE FROM repositories WHERE user_id = ? AND full_name = ?').run(userId, fullName);
  }

  console.log(`Synced stars for user ${userId}: ${newStars.length} added, ${removedStars.length} removed`);

  if (newStars.length > 0) {
    const analyzedRepos = await analyzeNewStars(userId, newStars);
    await sendStarNotification(userId, newStars.length, removedStars.length, analyzedRepos);
  } else if (removedStars.length > 0) {
    await sendStarNotification(userId, 0, removedStars.length, []);
  }
}
Serialize manual and scheduled jobs per user.
These paths all compute a diff from current DB state and then mutate the same user-scoped tables without any user-level lock or transactional write phase. If a cron run overlaps a manual run, both can classify the same repo/release as new, then one side fails on insert or emits duplicate notifications.
Also applies to: 428-492, 553-656
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/src/services/scheduler.ts` around lines 124 - 222, The syncStars
routine computes a diff then mutates user-scoped tables without any per-user
serialization, so concurrent runs can race and produce duplicate
inserts/notifications; fix by performing the read-diff-and-write sequence inside
a single DB transaction and making inserts idempotent: wrap the selection of
existingRepos, the deletion of removedStars, and the insertion of newStars in
one transaction (use an explicit transaction via getDb().transaction / BEGIN
IMMEDIATE or equivalent), change the INSERT into an idempotent form (INSERT OR
IGNORE / upsert) and determine the number of actually inserted rows from the
statement result (use run.changes) so analyzeNewStars/sendStarNotification are
only invoked for rows that were truly inserted; apply the same transactional +
idempotent-insert pattern to the other affected functions (the blocks referenced
at 428-492 and 553-656) and ensure notifications are based on committed changes.
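A sketch of the transactional, idempotent write phase the prompt asks for. The function name and trimmed column list are illustrative (the real insert carries many more columns); the key ideas are `INSERT OR IGNORE`, counting `run().changes`, and doing the whole diff application inside one better-sqlite3 transaction.

```typescript
interface SyncDb {
  prepare(sql: string): { run(...p: unknown[]): { changes: number } };
  transaction<T extends (...a: any[]) => any>(fn: T): T;
}

// Apply the star diff atomically and idempotently: INSERT OR IGNORE makes a
// concurrent duplicate a no-op, and run().changes reports what actually landed,
// so notifications fire only for rows this run truly inserted.
export function applyStarDiff(
  db: SyncDb,
  userId: number,
  newStars: Array<{ id: number; full_name: string }>,
  removed: string[]
): { inserted: number; deleted: number } {
  const insert = db.prepare('INSERT OR IGNORE INTO repositories (id, user_id, full_name) VALUES (?, ?, ?)');
  const del = db.prepare('DELETE FROM repositories WHERE user_id = ? AND full_name = ?');
  const tx = db.transaction(() => {
    let inserted = 0;
    let deleted = 0;
    for (const s of newStars) inserted += insert.run(s.id, userId, s.full_name).changes;
    for (const name of removed) deleted += del.run(userId, name).changes;
    return { inserted, deleted };
  });
  return tx();
}
```

The caller would then invoke `analyzeNewStars`/`sendStarNotification` only when `inserted > 0`, so an overlapping cron and manual run cannot both announce the same repo.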
export function updateNotificationPreferences(userId: number, updates: Partial<NotificationPreferences>): NotificationPreferences | null {
  const db = getDb();

  db.prepare(`
    UPDATE notification_preferences
    SET notify_new_release = ?, notify_star_added = ?, notify_star_removed = ?
    WHERE user_id = ?
  `).run(
    updates.notify_new_release ?? 1,
    updates.notify_star_added ?? 1,
    updates.notify_star_removed ?? 1,
    userId
  );

  return getNotificationPreferences(userId);
Partial preference updates currently re-enable omitted toggles.
Using ?? 1 means a request that updates one field silently turns every omitted field back on. If the row does not exist yet, the UPDATE is a no-op and getNotificationPreferences() recreates defaults instead of applying the requested values. Merge with the current row or use an upsert.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/src/services/scheduler.ts` around lines 536 - 550,
updateNotificationPreferences currently uses "?? 1" which forces omitted toggles
back to true and fails when the row doesn't exist; instead fetch the existing
row via getNotificationPreferences(userId) (or query notification_preferences),
merge its fields with the provided Partial<NotificationPreferences> so
unspecified fields are preserved, then perform an upsert (INSERT ... ON
CONFLICT(user_id) DO UPDATE ...) or explicit INSERT when absent and UPDATE when
present to persist the merged values; ensure you reference
updateNotificationPreferences and getNotificationPreferences and operate on the
notification_preferences columns notify_new_release, notify_star_added,
notify_star_removed to avoid overwriting omitted toggles.
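A merge-then-upsert sketch (the `savePreferences` name is illustrative, and the `ON CONFLICT(user_id)` clause assumes `user_id` is the table's primary key or has a unique constraint):

```typescript
interface PrefDb {
  prepare(sql: string): { get(...p: unknown[]): unknown; run(...p: unknown[]): unknown };
}

interface Prefs { notify_new_release: number; notify_star_added: number; notify_star_removed: number; }

// Merge the partial update with the stored row (or defaults), then upsert,
// so an omitted toggle keeps its current value instead of snapping back to 1.
export function savePreferences(db: PrefDb, userId: number, updates: Partial<Prefs>): Prefs {
  const current = (db.prepare('SELECT notify_new_release, notify_star_added, notify_star_removed FROM notification_preferences WHERE user_id = ?').get(userId) as Prefs | undefined)
    ?? { notify_new_release: 1, notify_star_added: 1, notify_star_removed: 1 };
  const merged: Prefs = { ...current, ...updates };
  db.prepare(`
    INSERT INTO notification_preferences (user_id, notify_new_release, notify_star_added, notify_star_removed)
    VALUES (?, ?, ?, ?)
    ON CONFLICT(user_id) DO UPDATE SET
      notify_new_release = excluded.notify_new_release,
      notify_star_added = excluded.notify_star_added,
      notify_star_removed = excluded.notify_star_removed
  `).run(userId, merged.notify_new_release, merged.notify_star_added, merged.notify_star_removed);
  return merged;
}
```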
Actionable comments posted: 2
🧹 Nitpick comments (1)
server/src/index.ts (1)
30-30: Consider tightening CSP when feasible. Using `'unsafe-inline'` and `'unsafe-eval'` in `scriptSrc` significantly weakens CSP protection against XSS. While this may be necessary for the current SPA setup, consider migrating to nonce-based or hash-based CSP directives in the future for stronger security posture.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/src/index.ts` at line 30, The Content-Security-Policy entry uses scriptSrc: ["'self'", "'unsafe-inline'", "'unsafe-eval'"] which weakens XSS protections; remove or avoid "'unsafe-inline'" and "'unsafe-eval'" and instead implement nonce- or hash-based script allowances for the CSP middleware (the scriptSrc configuration in server/src/index.ts). Update the server code that sets CSP to generate a per-request nonce (e.g., attach to response.locals or request context) and inject that nonce into your HTML templates for inline scripts, or compute script hashes for static inline scripts, then replace "'unsafe-inline'" and "'unsafe-eval'" with "nonce-<value>" or the hash values so only authorized scripts run; ensure any code that relies on eval is refactored away or explicitly reviewed before re-allowing similar behavior.
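A minimal nonce-middleware sketch of the suggested direction. The `req`/`res` shapes are simplified stand-ins for the Express types, and the directive string is illustrative rather than the app's full helmet configuration:

```typescript
import { randomBytes } from 'node:crypto';

// Per-request nonce middleware: attach a nonce for templates to inject
// into <script nonce="..."> tags, and build the CSP header from it.
interface Res { locals: Record<string, unknown>; setHeader(name: string, value: string): void; }

export function cspNonce(_req: unknown, res: Res, next: () => void): void {
  const nonce = randomBytes(16).toString('base64');
  res.locals.cspNonce = nonce;
  res.setHeader('Content-Security-Policy', `script-src 'self' 'nonce-${nonce}'`);
  next();
}
```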
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@server/src/index.ts`:
- Around line 75-77: The SPA catch-all app.get('*') is returning index.html for
missing API endpoints; to fix, add an explicit API 404 handler using
app.use('/api', ...) placed before the SPA catch-all so any unmatched /api/*
request returns a JSON 404 (e.g., call res.status(404).json({ error: 'Not found'
})), and ensure the existing app.get('*') remains last to serve index.html only
for non-API routes.
- Around line 67-72: The errorHandler is registered before the static file
serving, so errors thrown while serving static assets (and any SPA routing
middleware) won't be caught; move the static middleware registration (the
app.use(express.static(frontendDistPath)) call and any SPA routing middleware)
to occur before the app.use(errorHandler) registration so errorHandler is the
last middleware registered and can catch errors from static/SPA routes.
---
Nitpick comments:
In `@server/src/index.ts`:
- Line 30: The Content-Security-Policy entry uses scriptSrc: ["'self'",
"'unsafe-inline'", "'unsafe-eval'"] which weakens XSS protections; remove or
avoid "'unsafe-inline'" and "'unsafe-eval'" and instead implement nonce- or
hash-based script allowances for the CSP middleware (the scriptSrc configuration
in server/src/index.ts). Update the server code that sets CSP to generate a
per-request nonce (e.g., attach to response.locals or request context) and
inject that nonce into your HTML templates for inline scripts, or compute script
hashes for static inline scripts, then replace "'unsafe-inline'" and
"'unsafe-eval'" with "nonce-<value>" or the hash values so only authorized
scripts run; ensure any code that relies on eval is refactored away or
explicitly reviewed before re-allowing similar behavior.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 5cd0935c-972e-4263-b884-c4dfd65a8efa
📒 Files selected for processing (3)
- .gitignore
- server/data/.gitkeep
- server/src/index.ts
✅ Files skipped from review due to trivial changes (1)
- .gitignore
app.use(errorHandler);

// Serve static UI in production
const __dirname = path.dirname(new URL(import.meta.url).pathname);
const frontendDistPath = path.resolve(__dirname, '../../dist');
app.use(express.static(frontendDistPath));
Error handler should be registered after static middleware.
The errorHandler is registered at line 67, but static file serving is added at line 72. Errors occurring during static file serving won't be caught by the error handler. Consider moving static serving and SPA routing before the error handler registration.
♻️ Suggested reordering
+ // Serve static UI in production
+ const __dirname = path.dirname(new URL(import.meta.url).pathname);
+ const frontendDistPath = path.resolve(__dirname, '../../dist');
+ app.use(express.static(frontendDistPath));
+
+ // Handle SPA routing (exclude API routes)
+ app.get('*', (req, res, next) => {
+ if (req.path.startsWith('/api')) {
+ return next();
+ }
+ res.sendFile(path.join(frontendDistPath, 'index.html'));
+ });
+
// Global error handler
app.use(errorHandler);
-
- // Serve static UI in production
- const __dirname = path.dirname(new URL(import.meta.url).pathname);
- const frontendDistPath = path.resolve(__dirname, '../../dist');
- app.use(express.static(frontendDistPath));
-
- // Handle SPA routing
- app.get('*', (req, res) => {
- res.sendFile(path.join(frontendDistPath, 'index.html'));
- });🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/src/index.ts` around lines 67 - 72, The errorHandler is registered
before the static file serving, so errors thrown while serving static assets
(and any SPA routing middleware) won't be caught; move the static middleware
registration (the app.use(express.static(frontendDistPath)) call and any SPA
routing middleware) to occur before the app.use(errorHandler) registration so
errorHandler is the last middleware registered and can catch errors from
static/SPA routes.
app.get('*', (req, res) => {
  res.sendFile(path.join(frontendDistPath, 'index.html'));
});
SPA catch-all will return HTML for API 404s.
The app.get('*') handler matches all unhandled GET requests, including /api/* routes that don't exist. This means API 404s will return index.html with status 200 instead of a proper JSON error response, breaking API clients.
🛠️ Proposed fix to exclude API routes from SPA fallback
// Handle SPA routing
- app.get('*', (req, res) => {
+ app.get('*', (req, res, next) => {
+ // Don't serve SPA for API routes - let them 404 properly
+ if (req.path.startsWith('/api')) {
+ return next();
+ }
res.sendFile(path.join(frontendDistPath, 'index.html'));
});
You may also want to add an explicit API 404 handler before the SPA catch-all:
app.use('/api', (req, res) => {
res.status(404).json({ error: 'Not found' });
});
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/src/index.ts` around lines 75 - 77, The SPA catch-all app.get('*') is
returning index.html for missing API endpoints; to fix, add an explicit API 404
handler using app.use('/api', ...) placed before the SPA catch-all so any
unmatched /api/* request returns a JSON 404 (e.g., call res.status(404).json({
error: 'Not found' })), and ensure the existing app.get('*') remains last to
serve index.html only for non-API routes.
Required secrets: DOCKERHUB_USERNAME, DOCKERHUB_TOKEN
Once these variables are configured in the repository, each new version release will build the image and push it to Docker Hub.
Summary by CodeRabbit
New Features
Documentation
Chores