fix: post-merge follow-ups for direct API fallback #3
Conversation
Address CodeRabbit review items and code quality issues from PR #2:

- Check persisted keys (not just `process.env`) in `applyDirectApiFallback`
- Normalize model IDs before saving to config (prepend provider prefix)
- Seed deepseek in `readApiKeys`/`readApiKeyHints` default objects
- Add DeepSeek test coverage to api-key-store tests
- Update CLAUDE.md for new model-validator and model-fetcher modules
- Extract `handleStart` helpers to stay under 50-line function limit
- Split `config.test.js` (825 lines → 4 files) to stay under 300-line file limit

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Caution: Review failed — the pull request is closed.

ℹ️ Recent review info · ⚙️ Run configuration: defaults · Review profile: CHILL · Plan: Pro · Run ID:
📒 Files selected for processing (1)
📝 Walkthrough

Introduces model-fetching and validation utilities, alias auto-repair and direct-API fallback, extracts CLI handlers, refactors sidecar startup to use resolve/validate helpers, tightens alias input sanitization, adds DeepSeek API-key support, and adds extensive tests for config, API keys, and model validation.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant CLI as CLI (bin/sidecar.js)
    participant StartHelpers as start-helpers
    participant Config as config module
    participant Validator as model-validator
    participant AliasResolver as alias-resolver
    participant Sidecar as startSidecar

    CLI->>StartHelpers: resolveModelFromArgs(args)
    StartHelpers->>Config: resolveModel(modelArg / alias)
    Config->>AliasResolver: applyDirectApiFallback / autoRepairAlias (if needed)
    AliasResolver->>Config: (may call) saveConfig
    StartHelpers->>Validator: validateFallbackModel(args, alias)
    Validator-->>StartHelpers: validation result / error (exit on failure)
    StartHelpers->>CLI: return normalized model & agent
    CLI->>Sidecar: startSidecar(args with validated model & agent)
    Sidecar-->>CLI: running
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ 3 passed
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 8
🧹 Nitpick comments (3)
bin/sidecar.js (1)
118-208: This entrypoint is still over the 300-line cap.

The helper extraction improves `handleStart()`, but `bin/sidecar.js` is still 430+ lines. Please move the start-model resolution/validation helpers or command handlers into dedicated modules so future CLI changes do not keep expanding the entrypoint. As per coding guidelines, "Keep any single file under 300 lines of code".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/sidecar.js` around lines 118 - 208, The entrypoint exceeds the 300-line limit; extract the helper functions and command handler into separate modules: move resolveModelFromArgs and validateFallbackModel into a new module (e.g., src/cli/model-resolver.js) and move handleStart into its own module (e.g., src/cli/commands/start.js), export them and replace the in-file definitions with require/imports in bin/sidecar.js; ensure resolveModelFromArgs, validateFallbackModel, and handleStart keep the same signatures and that handleStart still requires startSidecar from ../src/index and returns/awaits the startSidecar call as before so behavior is unchanged.

tests/config-misc.test.js (1)
12-39: Extract the shared config-test harness.

The `beforeEach`/`afterEach`, the temp-dir env setup, and `loadModule()` are now repeated here, in `tests/config-resolve.test.js`, and in `tests/config-null-alias.test.js`. Pulling that into a small helper will keep these suites from drifting as the config bootstrap changes. As per coding guidelines, "Extract duplicate logic to shared utilities instead of repeating code".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/config-misc.test.js` around lines 12 - 39, Extract the duplicated test harness (the beforeEach/afterEach that creates tempDir, saves/restores originalEnv, sets SIDECAR_CONFIG_DIR and clears API key env vars, calls jest.resetModules, and the helper function loadModule()) into a shared test utility module (e.g., export a setupTestHarness function and loadModule helper) and import it from each test file; update tests/config-misc.test.js, tests/config-resolve.test.js, and tests/config-null-alias.test.js to call the shared setup/teardown helper instead of duplicating the beforeEach/afterEach and the loadModule function, ensuring the helper exposes the tempDir and a teardown restore if needed and that tests still call loadModule() from the shared utility.

tests/api-key-store.test.js (1)
243-250: Split this suite before it grows further.

The DeepSeek additions are fine, but this file is already ~650 lines and mixes persistence, masking, validation, and removal flows in one place. Please split provider-specific and validation-heavy cases into focused suites before this gets harder to maintain.
As per coding guidelines, "Keep any single file under 300 lines of code".
Also applies to: 397-419, 475-479, 524-531
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/api-key-store.test.js` around lines 243 - 250, The test file is too large and mixes unrelated responsibilities; split provider-specific and validation-heavy cases into smaller suites by moving the DeepSeek provider tests (the "it('should save deepseek key with correct env var name')" case and other provider-specific specs) into a new focused test file (e.g., api-key-store.providers.test.js) and move validation/masking/removal tests (those referencing saveApiKey, loadApiKey, maskApiKey, removeApiKey) into another file (e.g., api-key-store.validation.test.js); ensure each new file reuses or reinitializes the shared fixtures (tmpDir, fs setup/teardown, and process.env cleanup) so tests remain isolated, update any imports to reference the same exported functions (saveApiKey, loadApiKey, maskApiKey, removeApiKey), and keep each resulting test file under ~300 lines.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@CLAUDE.md`:
- Around line 205-206: The CLAUDE.md documentation is stale: add
normalizeModelId to the Key Modules table and add tests/config-hash.test.js to
the directory/testing tables so docs match the code; update each referenced
section (including the lines around the model-fetcher/model-validator entries
and the directory/testing lists at the other noted locations) to list
normalizeModelId (from src/utils/model-validator.js) and include
tests/config-hash.test.js in the tests file list, ensuring wording and
formatting follow existing table conventions.
In `@src/sidecar/setup.js`:
- Around line 34-42: The input validation in addAlias() currently accepts
whitespace-only and the literal 'null'; trim both inputs and reject empty or
placeholder values: call name = name.trim() and modelString =
modelString.trim(), then throw errors if name === '' or name === 'null', or if
modelString === '' or modelString === 'null'; update the error messages to
reference the trimmed values so callers (and saveConfig()) won't silently drop
aliases.
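The trimming and placeholder rejection described above can be sketched as a small validation helper. The function name `sanitizeAliasInputs` and the exact error wording are illustrative, not the project's actual implementation:

```javascript
// Sketch of the tightened input validation for addAlias().
// Rejects empty, whitespace-only, and literal 'null' values after trimming.
function sanitizeAliasInputs(name, modelString) {
  const alias = typeof name === 'string' ? name.trim() : '';
  const model = typeof modelString === 'string' ? modelString.trim() : '';

  if (!alias || alias === 'null') {
    throw new Error(`Invalid alias name: '${alias}'`);
  }
  if (!model || model === 'null') {
    throw new Error(`Invalid model string for alias '${alias}': '${model}'`);
  }
  return { alias, model };
}
```

With this in place, `addAlias()` can persist `{ alias, model }` knowing `saveConfig()` will never silently drop a whitespace-only or `'null'` entry.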
In `@src/utils/config.js`:
- Around line 151-155: The file src/utils/config.js and the function
resolveModel exceed repository size/function length limits; extract one of the
two logical groups into a new sibling utility module: either move the
alias-repair and fallback flow (functions autoRepairAlias,
applyDirectApiFallback and related logic used by resolveModel) into a new utils
module and import them into config.js, or move the hash/table helper functions
(the helpers referenced around the other flagged sections) out to a new module
and import them; update resolveModel to call the relocated functions
(referencing resolveModel, autoRepairAlias, applyDirectApiFallback, and the
hash/table helper names) and adjust exports/imports and any tests or call sites
accordingly so config.js and resolveModel fall below the 300-line/50-line
limits.
- Around line 201-211: autoRepairAlias currently writes a repaired alias to disk
using saveConfig inside autoRepairAlias (which also uses DEFAULT_ALIASES,
config.aliases, process.stderr.write and returns applyDirectApiFallback), and if
saveConfig throws it aborts resolution; change this to best-effort by wrapping
the config mutation + saveConfig call in a try/catch so any thrown error is
caught and handled (log a non-fatal message to stderr via process.stderr.write)
but do not rethrow — always return applyDirectApiFallback(defaultModel) so the
in-memory fallback works even when persisting fails.
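A best-effort version of that flow could look like the sketch below. The signature with injected collaborators (`saveConfig`, `applyDirectApiFallback` passed as parameters) is an assumption for illustration; the real `autoRepairAlias` may capture them from module scope:

```javascript
// Illustrative shape of the best-effort repair: persisting the repaired
// alias may fail (read-only dir, disk full), but the in-memory fallback
// must still be returned.
function autoRepairAlias(alias, config, defaultAliases, saveConfig, applyDirectApiFallback) {
  const defaultModel = defaultAliases[alias];
  if (config && config.aliases) {
    config.aliases[alias] = defaultModel;
    try {
      saveConfig(config);
    } catch (err) {
      // Non-fatal: log and continue with the session-only repair.
      process.stderr.write(
        `Notice: Could not persist repaired alias '${alias}' (${err.message}). ` +
        'Using default for this session only.\n'
      );
    }
  }
  return applyDirectApiFallback(defaultModel);
}
```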
In `@src/utils/model-validator.js`:
- Around line 15-27: The non-TTY remediation message uses raw model IDs (e.g.
relevant[0]?.id) so aliases like "gemini=gemini-3-flash" get persisted
incorrectly; update the code that builds the headless remediation path to call
normalizeModelId(provider, relevant[0].id) (and any other spots that interpolate
relevant[0]?.id, e.g., the block that emits the "--validate-model" suggestion
and the similar spot around line 127) so the provider prefix is always included
when constructing alias suggestions and saved aliases.
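A plausible implementation of the normalizer, consistent with the tests added in this PR (bare IDs get the provider prefix; already-prefixed IDs pass through). The real helper lives in `src/utils/model-validator.js` and may differ in detail:

```javascript
// Assumed implementation: prefix the provider only when it is not
// already present at the start of the id.
function normalizeModelId(provider, id) {
  return id.startsWith(`${provider}/`) ? id : `${provider}/${id}`;
}
```

In the headless remediation message, `normalizeModelId('google', relevant[0].id)` then yields `google/gemini-3-flash` even when the provider API returns the bare `gemini-3-flash`, so the suggested alias is always saved with its prefix.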
In `@tests/config-fallback.test.js`:
- Around line 8-17: The test currently fully mocks readApiKeyValues so it never
exercises the real persisted-key/.env lookup; add one non-mocked integration
test that creates a real .env file with a provider API key, ensures getEnvPath()
can find it (i.e., don't mock getEnvPath/readApiKeyValues for this test), then
call config.resolveModel() with an alias that depends on that persisted key and
assert the resolved provider/key is correct; reference the existing test file
tests/config-fallback.test.js, the mocked symbol readApiKeyValues, and the
functions getEnvPath() and config.resolveModel() so you place the new test
alongside the others but without the jest.mock for this specific case (use
jest.unmock or a separate test block that writes the .env, invokes
resolveModel(), and cleans up the file afterwards).
In `@tests/config-null-alias.test.js`:
- Around line 120-132: The tests currently only assert the repaired alias is
"some string with a slash"; tighten them by asserting the resolved value equals
the expected default alias from getDefaultAliases(): for the first case compare
result to getDefaultAliases().gemini, and for the other case compare to
getDefaultAliases()['gemini-pro']; update the assertions in the blocks that call
loadConfig() and config.resolveModel('gemini') (and the 'gemini-pro' case) to
use strict equality against those getDefaultAliases() entries instead of just
checking for a slash.
In `@tests/model-validator.test.js`:
- Around line 243-299: The file is getting too long; split the normalization and
persistence test suites into a new spec so the original stays under 300 lines.
Create a new test file and move the two top-level describe blocks that reference
normalizeModelId and the "promptModelSelection saves normalized id" test (they
use normalizeModelId and validateDirectModel, plus mocks for model-fetcher,
api-key-store, config, and readline) into it, preserving the same beforeEach
require of ../src/utils/model-validator and the mock setups
(keyStore.readApiKeyValues, fetcher.fetchModelsFromProvider, jest.spyOn for
readline and process.stderr). Ensure the new spec resets modules/cleans mocks
the same way the original file does so tests remain isolated and update any
test-runner/config if needed.
---
Nitpick comments:
In `@bin/sidecar.js`:
- Around line 118-208: The entrypoint exceeds the 300-line limit; extract the
helper functions and command handler into separate modules: move
resolveModelFromArgs and validateFallbackModel into a new module (e.g.,
src/cli/model-resolver.js) and move handleStart into its own module (e.g.,
src/cli/commands/start.js), export them and replace the in-file definitions with
require/imports in bin/sidecar.js; ensure resolveModelFromArgs,
validateFallbackModel, and handleStart keep the same signatures and that
handleStart still requires startSidecar from ../src/index and returns/awaits the
startSidecar call as before so behavior is unchanged.
In `@tests/api-key-store.test.js`:
- Around line 243-250: The test file is too large and mixes unrelated
responsibilities; split provider-specific and validation-heavy cases into
smaller suites by moving the DeepSeek provider tests (the "it('should save
deepseek key with correct env var name')" case and other provider-specific
specs) into a new focused test file (e.g., api-key-store.providers.test.js) and
move validation/masking/removal tests (those referencing saveApiKey, loadApiKey,
maskApiKey, removeApiKey) into another file (e.g.,
api-key-store.validation.test.js); ensure each new file reuses or reinitializes
the shared fixtures (tmpDir, fs setup/teardown, and process.env cleanup) so
tests remain isolated, update any imports to reference the same exported
functions (saveApiKey, loadApiKey, maskApiKey, removeApiKey), and keep each
resulting test file under ~300 lines.
In `@tests/config-misc.test.js`:
- Around line 12-39: Extract the duplicated test harness (the
beforeEach/afterEach that creates tempDir, saves/restores originalEnv, sets
SIDECAR_CONFIG_DIR and clears API key env vars, calls jest.resetModules, and the
helper function loadModule()) into a shared test utility module (e.g., export a
setupTestHarness function and loadModule helper) and import it from each test
file; update tests/config-misc.test.js, tests/config-resolve.test.js, and
tests/config-null-alias.test.js to call the shared setup/teardown helper instead
of duplicating the beforeEach/afterEach and the loadModule function, ensuring
the helper exposes the tempDir and a teardown restore if needed and that tests
still call loadModule() from the shared utility.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: ea584d7c-3a6f-4bc4-ad72-7122727cbdc2
📒 Files selected for processing (14)
CLAUDE.md, bin/sidecar.js, src/sidecar/setup.js, src/utils/api-key-store.js, src/utils/config.js, src/utils/model-validator.js, tests/api-key-store.test.js, tests/config-fallback.test.js, tests/config-hash.test.js, tests/config-misc.test.js, tests/config-null-alias.test.js, tests/config-resolve.test.js, tests/config.test.js, tests/model-validator.test.js
```js
describe('normalizeModelId', () => {
  let normalizeModelId;

  beforeEach(() => {
    const validator = require('../src/utils/model-validator');
    normalizeModelId = validator.normalizeModelId;
  });

  it('should prepend provider when id lacks prefix', () => {
    expect(normalizeModelId('google', 'gemini-3-flash')).toBe('google/gemini-3-flash');
  });

  it('should return as-is when id already has provider prefix', () => {
    expect(normalizeModelId('google', 'google/gemini-3-flash')).toBe('google/gemini-3-flash');
  });

  it('should handle nested model ids (provider/org/model)', () => {
    expect(normalizeModelId('openai', 'openai/gpt-4o')).toBe('openai/gpt-4o');
  });

  it('should prepend provider for bare model name', () => {
    expect(normalizeModelId('anthropic', 'claude-sonnet-4.6')).toBe('anthropic/claude-sonnet-4.6');
  });
});

describe('promptModelSelection saves normalized id', () => {
  it('should normalize model id before saving to config', async () => {
    const config = require('../src/utils/config');
    const fetcher = require('../src/utils/model-fetcher');
    const keyStore = require('../src/utils/api-key-store');

    keyStore.readApiKeyValues.mockReturnValue({ google: 'test-key' });
    // Models returned WITHOUT provider prefix (simulating provider API)
    fetcher.fetchModelsFromProvider.mockResolvedValue([
      { id: 'gemini-3-flash', name: 'Gemini 3 Flash' },
      { id: 'gemini-3-pro', name: 'Gemini 3 Pro' },
    ]);

    const mockRl = { question: jest.fn(), close: jest.fn() };
    mockRl.question.mockImplementation((_prompt, cb) => cb('1'));
    jest.spyOn(require('readline'), 'createInterface').mockReturnValue(mockRl);
    jest.spyOn(process.stderr, 'write').mockImplementation(() => true);

    await validateDirectModel('google/gemini-old', 'gemini');

    // Should save with provider prefix even though selected.id was bare
    expect(config.saveConfig).toHaveBeenCalledWith(
      expect.objectContaining({
        aliases: expect.objectContaining({
          gemini: 'google/gemini-3-flash',
        }),
      })
    );

    process.stderr.write.mockRestore();
  });
});
```
🛠️ Refactor suggestion | 🟠 Major
Split this spec before it grows further.
Adding these suites pushes the file to 344 lines. Please peel the normalization/persistence cases into their own spec so the heavy module-reset setup stays under the 300-line limit.
As per coding guidelines, "Keep any single file under 300 lines of code."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/model-validator.test.js` around lines 243 - 299, The file is getting
too long; split the normalization and persistence test suites into a new spec so
the original stays under 300 lines. Create a new test file and move the two
top-level describe blocks that reference normalizeModelId and the
"promptModelSelection saves normalized id" test (they use normalizeModelId and
validateDirectModel, plus mocks for model-fetcher, api-key-store, config, and
readline) into it, preserving the same beforeEach require of
../src/utils/model-validator and the mock setups (keyStore.readApiKeyValues,
fetcher.fetchModelsFromProvider, jest.spyOn for readline and process.stderr).
Ensure the new spec resets modules/cleans mocks the same way the original file
does so tests remain isolated and update any test-runner/config if needed.
@claude review this PR
Claude finished @jrenaldi79's task in 3m 0s — View job

PR Review

Summary

This PR addresses CodeRabbit follow-ups from #2. The intent is solid — persisted key fallback, null alias defense, DeepSeek support, helper extraction, and test splits.

Bugs

1. Headless remediation message uses un-normalized model IDs

The non-TTY remediation path interpolates raw model IDs (e.g. `relevant[0]?.id`), so aliases like `gemini=gemini-3-flash` get persisted without the provider prefix:

```js
// Line 73-78 — apply normalizeModelId here too
const list = relevant.slice(0, 10).map(m => `  ${normalizeModelId(provider, m.id)}`).join('\n');
// ...
`Fix with: sidecar setup --add-alias ${alias}=${relevant[0] ? normalizeModelId(provider, relevant[0].id) : 'provider/model'}`
```

2. `autoRepairAlias` aborts resolution when `saveConfig` throws

```js
if (config && config.aliases) {
  config.aliases[alias] = defaultModel;
  saveConfig(config); // ← no try/catch; throws on disk-full / readonly dir
}
return applyDirectApiFallback(defaultModel);
```

If persisting fails, the in-memory fallback should still apply:

```js
try {
  saveConfig(config);
} catch (err) {
  process.stderr.write(`Notice: Could not persist repaired alias '${alias}' (${err.message}). Using default for this session only.\n`);
}
return applyDirectApiFallback(defaultModel);
```

Hard Limit Violations (Code Quality Rules)

3. This PR extracted helpers out of `handleStart()`, but `bin/sidecar.js` is still over the 300-line cap.

4. `src/utils/config.js` and `resolveModel` exceed the 300-line file / 50-line function limits.

5. The 6 new DeepSeek tests are fine in isolation, but the file is now 659 lines. This was already large before this PR; the addition tips it significantly past the limit. Worth splitting at least the provider-specific and validation suites into separate files.

6. The new suites push `tests/model-validator.test.js` to 344 lines, past the 300-line limit.

Minor Issues

7. The current check in `addAlias()` accepts whitespace-only names and the literal `'null'`. A trimmed check fixes both:

```js
const alias = typeof name === 'string' ? name.trim() : '';
if (!alias || alias === 'null') { throw new Error(...); }
```

8. Test assertions too loose in `tests/config-null-alias.test.js`:

```js
expect(result).toContain('/'); // ← passes for any model string
```

Tightening to strict equality against the `getDefaultAliases()` entries would catch regressions.

Test Coverage Gap

9. All 5 fallback tests mock `readApiKeyValues`, so the persisted-key `.env` path is never exercised end-to-end.

CLAUDE.md Stale

10. Missing entries: `normalizeModelId` in the Key Modules table and `tests/config-hash.test.js` in the testing tables.

Positives
Bugs fixed:

- Normalize model IDs in headless error path (model-validator.js)
- Wrap autoRepairAlias saveConfig in try/catch for best-effort persistence

File size violations resolved:

- bin/sidecar.js 434 → 283 lines (extracted start-helpers.js, cli-handlers.js)
- config.js 350 → 299 lines (extracted alias-resolver.js)
- api-key-store.test.js 659 lines → 3 files, all under 300 lines
- model-validator.test.js 344 lines → 2 files, all under 300 lines

Minor fixes:

- addAlias rejects 'null' name and trims whitespace
- Tightened config-null-alias.test.js assertions to check exact defaults
- Added un-mocked integration test for persisted key fallback
- Updated CLAUDE.md with normalizeModelId + 5 new test files

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 1
🧹 Nitpick comments (4)
src/utils/config.js (1)
112-166: `resolveModel` is slightly over the 50-line function limit.

The function is currently 54 lines. While the logic is clear and the delegation to `autoRepairAlias` is good, this exceeds the coding guideline of "Keep any single function under 50 lines of code."

Consider extracting the default-value resolution (lines 139-165) into a helper function like `resolveDefaultModel(config, effectiveAliases)`.

♻️ Suggested extraction
```diff
+/** Resolve model from config default */
+function resolveDefaultModel(config, effectiveAliases) {
+  if (!config || !config.default) {
+    throw new Error(
+      'No model specified and no default configured. Run \'sidecar setup\' to set a default model.'
+    );
+  }
+
+  const defaultValue = config.default;
+
+  if (defaultValue.includes('/')) {
+    return defaultValue;
+  }
+
+  if (effectiveAliases[defaultValue] !== undefined) {
+    const resolved = effectiveAliases[defaultValue];
+    if (!resolved || resolved === 'null') {
+      return autoRepairAlias(defaultValue, config, DEFAULT_ALIASES, saveConfig);
+    }
+    return applyDirectApiFallback(resolved);
+  }
+
+  throw new Error(
+    `Default alias '${defaultValue}' not found in aliases. Run 'sidecar setup' to fix configuration.`
+  );
+}

 function resolveModel(modelArg) {
   const config = loadConfig();
   const effectiveAliases = getEffectiveAliases();

   if (modelArg !== undefined && modelArg !== null) {
     // ... existing modelArg handling ...
   }

-  // modelArg is undefined - use default
-  if (!config || !config.default) {
-    throw new Error(
-      'No model specified and no default configured. Run \'sidecar setup\' to set a default model.'
-    );
-  }
-  // ... rest of default handling ...
+  return resolveDefaultModel(config, effectiveAliases);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/config.js` around lines 112 - 166, The resolveModel function exceeds the 50-line limit; extract the default-resolution block (logic that handles when modelArg is undefined: checking config.default, handling full model strings, resolving aliases, calling autoRepairAlias, and applyDirectApiFallback) into a new helper named resolveDefaultModel(config, effectiveAliases) and replace that block in resolveModel with a single call to resolveDefaultModel(config, effectiveAliases). Ensure the new helper uses the same symbols as the original block (config.default, DEFAULT_ALIASES, saveConfig, autoRepairAlias, applyDirectApiFallback) and preserves the same thrown error messages for missing defaults or unknown aliases so behavior remains identical.

tests/model-validator-normalize.test.js (1)
27-34: Redundant mock declarations in beforeEach.

The mocks at lines 28-33 duplicate the file-level mocks at lines 7-12. After `jest.resetModules()` (line 24), the file-level mocks are still active. These inner mock calls are unnecessary and add noise.

♻️ Suggested simplification
```diff
   beforeEach(() => {
     jest.resetModules();
     jest.clearAllMocks();
-    // Re-require after reset to get fresh module with mocks
-    jest.mock('../src/utils/model-fetcher');
-    jest.mock('../src/utils/api-key-store');
-    jest.mock('../src/utils/config');
-    jest.mock('../src/utils/logger', () => ({
-      logger: { debug: jest.fn(), info: jest.fn(), warn: jest.fn(), error: jest.fn() }
-    }));
     const validator = require('../src/utils/model-validator');
```
Verify each finding against the current code and only fix it if needed. In `@tests/model-validator-normalize.test.js` around lines 27 - 34, The beforeEach contains redundant jest.mock calls for '../src/utils/model-fetcher', '../src/utils/api-key-store', '../src/utils/config', and '../src/utils/logger' that duplicate the file-level mocks; remove those mock declarations from the beforeEach (keep jest.resetModules() there) so the top-level mocks remain active and the test re-requires a fresh module without re-declaring the same mocks, ensuring the mocked logger object (logger.debug/info/warn/error) stays consistent.

bin/sidecar.js (1)
123-127: Consider clarifying the consecutive model assignments.

Lines 125-126 assign to `args.model` twice in succession:

```js
args.model = model; // from resolveModelFromArgs
args.model = await validateFallbackModel(args, alias); // potentially corrected
```

The first assignment is needed because `validateFallbackModel` reads from `args.model` (line 46 of start-helpers.js). This works correctly but the intent isn't immediately obvious.

💡 Suggested clarification
```diff
 async function handleStart(args) {
   const { model, alias } = resolveModelFromArgs(args);
-  args.model = model;
-  args.model = await validateFallbackModel(args, alias);
+  args.model = model; // Set resolved model for validation
+  args.model = await validateFallbackModel(args, alias); // May return corrected model
```

Or consolidate:

```diff
 async function handleStart(args) {
   const { model, alias } = resolveModelFromArgs(args);
+  // validateFallbackModel needs args.model set; may return a corrected model
   args.model = model;
   args.model = await validateFallbackModel(args, alias);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/sidecar.js` around lines 123 - 127, The consecutive assignments to args.model in handleStart are confusing because resolveModelFromArgs returns model which is then overwritten by validateFallbackModel that reads args.model; either make the flow explicit by setting args.model = model, then call validateFallbackModel(args, alias) (and keep that order) with a brief inline comment explaining that validateFallbackModel reads args.model, or refactor validateFallbackModel to accept the model directly (change its signature and callers to pass model) and then assign args.model = await validateFallbackModel(model, alias) so only one clear assignment occurs; update handleStart and any callers of validateFallbackModel (and its definition) accordingly.

tests/api-key-store-validation.test.js (1)
80-92: Network error test uses `setTimeout(0)`, which is generally reliable but worth noting.

The async error simulation pattern works because Jest's test runner will wait for the promise. However, this pattern relies on event-loop timer ordering. Consider using `setImmediate` or explicit promise resolution for more deterministic behavior, though this is a minor concern.
Verify each finding against the current code and only fix it if needed. In `@tests/api-key-store-validation.test.js` around lines 80 - 92, In the 'should resolve invalid for network error' test, replace the non-deterministic setTimeout(() => ...) used inside https.get.mockImplementation with a deterministic scheduling method (e.g., setImmediate or process.nextTick or resolving a Promise) so the mocked 'error' handler on the fake req fires reliably; update the mockImplementation that locates req.on mock calls and invokes the 'error' callback to use setImmediate (or Promise.resolve().then(...)) instead, referencing the existing https.get.mockImplementation and validateApiKey usage in the test.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/cli-handlers.js`:
- Around line 79-84: The code reads and parses metadata.json without error
handling; wrap the JSON.parse(fs.readFileSync(metaPath, 'utf-8')) in a try/catch
around the read+parse (referencing metaPath and the variable meta) to catch
malformed JSON, log a clear error including taskId and the parse error via
console.error or the existing logger, and exit or return non-zero instead of
letting an exception bubble; ensure you don't overwrite the file when parse
fails and keep the subsequent meta.status update and writeFileSync only in the
successful path.
---
Nitpick comments:
In `@bin/sidecar.js`:
- Around line 123-127: The consecutive assignments to args.model in handleStart
are confusing because resolveModelFromArgs returns model which is then
overwritten by validateFallbackModel that reads args.model; either make the flow
explicit by setting args.model = model, then call validateFallbackModel(args,
alias) (and keep that order) with a brief inline comment explaining that
validateFallbackModel reads args.model, or refactor validateFallbackModel to
accept the model directly (change its signature and callers to pass model) and
then assign args.model = await validateFallbackModel(model, alias) so only one
clear assignment occurs; update handleStart and any callers of
validateFallbackModel (and its definition) accordingly.
In `@src/utils/config.js`:
- Around line 112-166: The resolveModel function exceeds the 50-line limit;
extract the default-resolution block (logic that handles when modelArg is
undefined: checking config.default, handling full model strings, resolving
aliases, calling autoRepairAlias, and applyDirectApiFallback) into a new helper
named resolveDefaultModel(config, effectiveAliases) and replace that block in
resolveModel with a single call to resolveDefaultModel(config,
effectiveAliases). Ensure the new helper uses the same symbols as the original
block (config.default, DEFAULT_ALIASES, saveConfig, autoRepairAlias,
applyDirectApiFallback) and preserves the same thrown error messages for missing
defaults or unknown aliases so behavior remains identical.
In `@tests/api-key-store-validation.test.js`:
- Around line 80-92: In the 'should resolve invalid for network error' test,
replace the non-deterministic setTimeout(() => ...) used inside
https.get.mockImplementation with a deterministic scheduling method (e.g.,
setImmediate or process.nextTick or resolving a Promise) so the mocked 'error'
handler on the fake req fires reliably; update the mockImplementation that
locates req.on mock calls and invokes the 'error' callback to use setImmediate
(or Promise.resolve().then(...)) instead, referencing the existing
https.get.mockImplementation and validateApiKey usage in the test.
In `@tests/model-validator-normalize.test.js`:
- Around line 27-34: The beforeEach contains redundant jest.mock calls for
'../src/utils/model-fetcher', '../src/utils/api-key-store',
'../src/utils/config', and '../src/utils/logger' that duplicate the file-level
mocks; remove those mock declarations from the beforeEach (keep
jest.resetModules() there) so the top-level mocks remain active and the test
re-requires a fresh module without re-declaring the same mocks, ensuring the
mocked logger object (logger.debug/info/warn/error) stays consistent.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 2fb06e27-ed51-4284-b962-8e3a74c1d862
📒 Files selected for processing (14)
CLAUDE.md, bin/sidecar.js, src/cli-handlers.js, src/sidecar/setup.js, src/utils/alias-resolver.js, src/utils/config.js, src/utils/model-validator.js, src/utils/start-helpers.js, tests/api-key-store-readwrite.test.js, tests/api-key-store-validation.test.js, tests/api-key-store.test.js, tests/config-fallback.test.js, tests/config-null-alias.test.js, tests/model-validator-normalize.test.js
🚧 Files skipped from review as they are similar to previous changes (3)
- tests/config-null-alias.test.js
- CLAUDE.md
- src/sidecar/setup.js
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…renaldi79#2)

* fix: fall back to direct provider API when OpenRouter key is missing

  When a user has GEMINI_API_KEY, OPENAI_API_KEY, or ANTHROPIC_API_KEY set but not OPENROUTER_API_KEY, aliases like --model gemini now automatically route through the direct provider API instead of failing with "OPENROUTER_API_KEY required". Adds applyDirectApiFallback() which strips the openrouter/ prefix from alias-resolved models when the direct provider key is available. OpenRouter is still preferred when both keys are present.

* feat: validate direct API fallback models exist on provider

  When the direct API fallback triggers (no OpenRouter key), validate that the model actually exists on the provider before proceeding. If validation fails:
  - Interactive mode: prompts user to pick from available models, saves choice to config.json so they're never prompted again
  - Headless mode: fails with available models list and fix command

  Adds model-validator.js with validateDirectModel() using the existing model-fetcher.js infrastructure. Network errors skip validation gracefully (don't block the user).

* fix: address PR feedback — opt-in validation, TTY safety, dedup

  - Make --validate-model opt-in instead of always-on (feedback jrenaldi79#2)
  - Add TTY guard in both bin/sidecar.js and model-validator.js (feedback jrenaldi79#3/7)
  - Detect default alias when --model is omitted (feedback jrenaldi79#6)
  - Replace duplicate DIRECT_API_KEYS with PROVIDER_ENV_MAP import (feedback jrenaldi79#4)
  - Add deepseek to PROVIDER_ENV_MAP (feedback #1)
  - Use 'openrouter/'.length instead of magic number 11 (feedback jrenaldi79#4)
  - Add malformed config safety in promptModelSelection (feedback jrenaldi79#8)
  - Add tests for TTY guard and malformed config behavior

* fix: address round 2 PR feedback — logging, saveConfig safety, deepseek

  - Add logger.warn + stderr notice when direct API fallback activates (#1)
  - Register --validate-model as boolean flag in cli.js, add to usage text (jrenaldi79#2)
  - Add stderr notice pointing users to --validate-model on fallback (jrenaldi79#3)
  - Wrap saveConfig in try/catch in promptModelSelection (jrenaldi79#4)
  - Add deepseek to VALIDATION_ENDPOINTS in api-key-store.js (jrenaldi79#5)
  - Add CLI tests for --validate-model flag
  - Suppress stderr in config fallback tests

* fix: update api-key-store test to expect 5 providers (includes deepseek)

* fix: fix pre-existing test failures in mcp-server and api-key-store tests

  - Add model param to mcp-server sidecar_start test calls (resolveModel now requires a model or configured default)
  - Update api-key-store provider count assertion from 4 to 5 (deepseek)

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Summary
Addresses CodeRabbit review items and code quality issues from PR #2:
- `applyDirectApiFallback` — now reads `~/.config/sidecar/.env` via `readApiKeyValues()`, not just `process.env`
- `normalizeModelId()` helper ensures provider prefix before saving to config
- `readApiKeys`/`readApiKeyHints` — default objects now include all 5 providers
- CLAUDE.md updated for the new `model-validator.js` and `model-fetcher.js` modules
- Extracted `handleStart` helpers — function now 47 lines (was 75), under 50-line limit
- Split `config.test.js` — 825 lines → 4 files, all under 300 lines

Test plan
- `.env` file

Closes follow-up items from #2
🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Bug Fixes
Tests