Enhance Gemini CLI server management and documentation #54

mtdewwolf wants to merge 3 commits into aaditagrawal:main
Conversation
📝 Walkthrough

This pull request substantially refactors the project as a Gemini-focused fork: adding cross-platform command resolution utilities, enhancing Gemini CLI server management with Windows-specific handling and error capture, implementing provider-kind normalization across persistence/runtime layers via migrations, consolidating model-options logic into a shared module with custom-model support, and updating the provider settings UI to display both discovered and configured models. Documentation reflects the fork's identity and contribution guidelines.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Web Client
    participant Server as Server Runtime
    participant DB as Database
    participant Cache as Effect Cache
    Client->>Server: Load thread with provider
    Server->>DB: Query persisted session (provider: "gemini")
    DB-->>Server: Return legacy provider value
    Server->>Server: normalizePersistedProviderKindName("gemini")
    Server->>Cache: Cache.set(threadId, normalized)
    Server-->>Client: Return thread with provider: "geminiCli"
    note over Server,Cache: Subsequent requests use cached normalized value
    Client->>Server: Query provider again
    Server->>Cache: Cache.getOption(threadId)
    Cache-->>Server: Return cached ModelSelection
    Server-->>Client: Normalized provider immediately
```
```mermaid
sequenceDiagram
    participant UI as Settings UI
    participant Logic as Model Logic Module
    participant Config as Custom Models Config
    participant Discovered as Discovered Models
    participant Built as Built-in Catalogs
    UI->>Logic: resolveModelOptionsByProvider(settings)
    Logic->>Built: buildModelOptionsByProvider(customModels)
    Built-->>Logic: { provider: [...built models] }
    alt Has Discovered Models
        Logic->>Discovered: mergeDiscoveredModels(base, discovered)
        Discovered->>Discovered: Dedupe, normalize slugs
        Discovered->>Discovered: Enrich with pricing tiers
        Discovered-->>Logic: Merged model options
    end
    Logic-->>UI: Record<ProviderKind, ModelOptionEntry[]>
    UI->>UI: Render model picker with all options
```
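The normalization step in the first diagram can be sketched as a pure lookup. This is a hypothetical TypeScript sketch, not the repository's code: the alias table and the set of canonical kinds are assumptions, and the real `normalizePersistedProviderKindName` lives in `apps/server/src/provider/providerKind.ts`.

```typescript
type ProviderKind =
  | "codex" | "copilot" | "claudeAgent" | "cursor"
  | "opencode" | "geminiCli" | "amp" | "kilo";

// Assumed mapping of legacy persisted names to canonical kinds.
const LEGACY_PROVIDER_ALIASES: Record<string, ProviderKind> = {
  gemini: "geminiCli",
};

const CANONICAL_KINDS = new Set<string>([
  "codex", "copilot", "claudeAgent", "cursor",
  "opencode", "geminiCli", "amp", "kilo",
]);

// Returns the canonical kind for a persisted provider string, or null if unknown.
function normalizePersistedProviderKindName(value: string): ProviderKind | null {
  if (CANONICAL_KINDS.has(value)) {
    return value as ProviderKind;
  }
  return LEGACY_PROVIDER_ALIASES[value] ?? null;
}
```

Persistence layers can then patch only rows whose normalized value actually differs from the stored one.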
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed
❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
packages/shared/src/shell.ts (1)
91-99: ⚠️ Potential issue | 🟡 Minor: Don't trim real environment values.
Lines 92-97 already remove the delimiter newlines. Line 99 then trims all remaining leading/trailing whitespace, which changes legitimate env values instead of just treating empty captures as missing.
Proposed fix
```diff
- return value.trim().length > 0 ? value.trim() : undefined;
+ return value.length > 0 ? value : undefined;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/shared/src/shell.ts` around lines 91 - 99, The function currently strips delimiter newlines then calls value.trim(), which can alter legitimate environment values; instead preserve internal leading/trailing whitespace by removing the final .trim() usage. Update the return logic that references the variable value (the slice computed into value after the startsWith/endsWith newline removals) to return value if value.length > 0, otherwise undefined (i.e., replace the value.trim() checks with direct value.length checks and return the raw value).

packages/contracts/src/model.ts (1)
405-414: ⚠️ Potential issue | 🟠 Major: Use canonical prefixed defaults for `opencode` and `kilo`.
`MODEL_OPTIONS_BY_PROVIDER` now defines canonical slugs like `openai/gpt-5`, but defaults remain `gpt-5`. This creates avoidable inconsistency and can break paths that consume defaults without normalization. Suggested fix
```diff
 export const DEFAULT_MODEL_BY_PROVIDER: Record<ProviderKind, ModelSlug> = {
   codex: "gpt-5.4",
   copilot: "claude-sonnet-4.6",
   claudeAgent: "claude-sonnet-4-6",
   cursor: "opus-4.6-thinking",
-  opencode: "gpt-5",
+  opencode: "openai/gpt-5",
   geminiCli: "gemini-2.5-pro",
   amp: "smart",
-  kilo: "gpt-5",
+  kilo: "openai/gpt-5",
 } as const satisfies Record<ProviderKind, ModelSlug>;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/contracts/src/model.ts` around lines 405 - 414, DEFAULT_MODEL_BY_PROVIDER contains unprefixed slugs for the opencode and kilo providers which conflicts with canonical slugs in MODEL_OPTIONS_BY_PROVIDER; update the entries in DEFAULT_MODEL_BY_PROVIDER for the ProviderKind keys "opencode" and "kilo" to use the canonical prefixed slug "openai/gpt-5" so defaults match MODEL_OPTIONS_BY_PROVIDER and downstream consumers that expect prefixed values.
🧹 Nitpick comments (3)
apps/server/src/persistence/Layers/OrchestrationEventStore.ts (1)
44-55: Skip patching already-canonical provider strings.
`patchStringProvider()` currently marks any valid provider as patched, even when normalization returns the same value. That forces an avoidable payload clone for already-normalized rows on every replay. Suggested refactor
```diff
 const patchStringProvider = (field: string) => {
   const value = p[field];
   if (typeof value !== "string") {
     return;
   }
   const normalizedProvider = normalizePersistedProviderKindName(value);
-  if (normalizedProvider === null) {
+  if (normalizedProvider === null || normalizedProvider === value) {
     return;
   }
   p[field] = normalizedProvider;
   patched = true;
 };
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/server/src/persistence/Layers/OrchestrationEventStore.ts` around lines 44 - 55, The patchStringProvider function marks rows as patched even when normalization is a no-op; change patchStringProvider (which reads p[field] and calls normalizePersistedProviderKindName) to only assign p[field] = normalizedProvider and set patched = true when normalizedProvider is non-null AND normalizedProvider !== value (i.e., the normalized string actually differs from the original). Keep the existing typeof check for string and the null check for normalizedProvider, but avoid modifying p or toggling patched when the value is already canonical.

apps/web/src/providerModelOptions.ts (1)
72-78: Consider documenting the merge ordering behavior. For non-copilot providers, discovered models that aren't in base are prepended (line 78: `[...additions, ...merged]`), while for copilot they're appended (line 69). This difference may be intentional for UI ordering purposes, but adding a brief comment explaining why would help maintainability.
📝 Suggested documentation
```diff
+ // For most providers, prepend discovered-only models so newly available
+ // models appear at the top of the picker.
  const additions = dedupedModels.filter((m) => !existing.has(m.slug));
  result[provider] = [...additions, ...merged];
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/src/providerModelOptions.ts` around lines 72 - 78, Add a brief comment explaining the merge ordering behavior around discoveredBySlug, merged, additions and result[provider]: state that for non-copilot providers the deduped discovered models (additions) are intentionally prepended via result[provider] = [...additions, ...merged] to prioritize newly discovered models in the UI, whereas for the copilot path the discovered models are appended (refer to the code handling "copilot") — place the comment above the merged/additions block near the map creation to clarify intent for future maintainers.

apps/web/src/components/chat/ProviderModelPicker.logic.test.ts (1)
26-31: Model slug assertions may be brittle if contracts change. The test hardcodes specific model slugs like "claude-sonnet-4.6", "gpt-5.3-codex", "openai/gpt-5", and "gemini-2.5-pro". If the built-in model catalogs in packages/contracts are updated, these assertions will fail.
Consider adding a comment noting that these slugs must stay in sync with `MODEL_OPTIONS_BY_PROVIDER` in contracts, or extract the expected slugs from the contracts to make the test more resilient.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/src/components/chat/ProviderModelPicker.logic.test.ts` around lines 26 - 31, The test hardcodes specific model slugs (e.g., in assertions against modelOptions.copilot, modelOptions.cursor, modelOptions.opencode, modelOptions.geminiCli, modelOptions.amp, modelOptions.kilo) which will break if the contracts' built-in catalogs change; update the test to derive expected slugs from the canonical source (MODEL_OPTIONS_BY_PROVIDER in packages/contracts) or at minimum add a clear comment in ProviderModelPicker.logic.test explaining that these hardcoded slugs must remain in sync with MODEL_OPTIONS_BY_PROVIDER, and preferably replace the hardcoded values by importing MODEL_OPTIONS_BY_PROVIDER and asserting membership against that source instead of literal strings.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/server/src/geminiCliServerManager.test.ts`:
- Around line 22-30: The test "uses piped stdio without a shell" is incomplete
because it doesn't assert the spawn options' shell setting; update the test that
calls buildGeminiSpawnOptions to also assert options.shell is false (or rename
the test to not claim "without a shell" if you intend a different behavior).
Specifically, inside the it block referencing buildGeminiSpawnOptions and the
returned options, add an assertion for options.shell === false (or change the
test description) so regressions that enable a shell will fail.
In `@apps/server/src/geminiCliServerManager.ts`:
- Around line 218-268: The Windows npm shim detection fails because
resolveGeminiShimEntryPoint builds
node_modules/@google/gemini-cli/bundle/gemini.js which doesn't exist; update
resolveGeminiShimEntryPoint to check for
node_modules/@google/gemini-cli/dist/index.js (use existsSync on that path)
instead of bundle/gemini.js so the shim branch in resolveGeminiSpawnPlan
triggers correctly, and update the related test assertion to expect
dist/index.js as the shim entry point; keep references to
resolveGeminiShimEntryPoint, resolveGeminiSpawnPlan, and the existsSync check
when making the change.
In `@apps/server/src/orchestration/Layers/ProviderCommandReactor.ts`:
- Around line 168-172: threadModelSelections is being created as an
expiring/evictable Cache which allows active thread model selections to be
dropped; change it to authoritative, non-evicting storage so active sessions
cannot lose their custom model slug. Replace the Cache.make call that constructs
threadModelSelections with a non-evicting structure (e.g., a plain Map<string,
ModelSelection> or a Cache configured with no timeToLive and no capacity-based
eviction) and ensure downstream fallbacks that read thread.modelSelection
continue to consult this authoritative store; update any helper code that relied
on Cache.make semantics so get/set/remove on threadModelSelections behave
correctly for long-lived sessions.
In `@apps/server/src/provider/Layers/CursorAdapter.ts`:
- Around line 1615-1620: The forced-stop fallback currently calls
stopSessionInternal(context, { emitExitEvent: true }) which makes the emitted
session.exited appear "graceful"; modify the call to thread an explicit exit
kind/reason (e.g., stopSessionInternal(context, { emitExitEvent: true, exitKind:
"forced-stop", exitReason: "cancel-fallback" })) and update stopSessionInternal
(and any session.exited emission logic) to prefer this passed
exitKind/exitReason so telemetry/restart logic sees a non-graceful termination
from the session/cancel fallback.
- Around line 1621-1622: The current code calls completeTurn using the live
context.turnState which can be overwritten by a new turn before the original
prompt resolves; instead, capture the original turn identifier (e.g., store a
local turnId when sendTurn/session/prompt is started) and use that captured id
to gate completion (pass it to completeTurn or verify context.turnState.id
matches the captured id before clearing); do not clear or overwrite
context.turnState until the in-flight prompt settles — alternatively keep the
original turn pinned (e.g., await the prompt response while holding the captured
turn) and only then call completeTurn for that specific turnId so a newly
started turn cannot be closed by a late completion.
In `@CONTRIBUTING.md`:
- Line 27: Update the clone target in CONTRIBUTING.md so contributors clone this
fork instead of mtdewwolf's repo: replace the URL string
"https://github.com/mtdewwolf/t3code-gemini.git" with the fork URL
"https://github.com/aaditagrawal/t3code.git" and add a brief note to always use
--repo aaditagrawal/t3code when running gh pr create to ensure PRs target this
repository.
In `@package.json`:
- Line 79: Remove "electron" and "node-pty" from the root package.json
trustedDependencies list and instead add "electron" to the trustedDependencies
of the desktop workspace package.json (the workspace that uses the desktop app)
and add "node-pty" to the trustedDependencies of the server workspace
package.json (the workspace that runs the server). Update the respective
workspace package.json files' trustedDependencies arrays to include those exact
package names so each workspace declares only the dependencies it trusts (leave
other root entries unchanged).
In `@packages/shared/src/shell.ts`:
- Around line 67-70: The captureCmd branch for isFish && name === "PATH"
currently uses echo $PATH piped to string replace which corrupts entries with
spaces; change that branch so captureCmd uses fish's idiomatic serialization of
the PATH by emitting the PATH as a single quoted value (so it is joined with
':'), or alternatively use fish's string join ':' on the PATH list, instead of
the echo $PATH | string replace approach.
- Around line 117-120: The current execFile call always uses ["-lc",
buildEnvironmentCaptureCommand(...)] which skips interactive startup files for
shells like zsh and can drop user PATH/exports; update the flag selection before
the execFile invocation: detect zsh (e.g. shell includes "zsh") and use ["-ilc",
buildEnvironmentCaptureCommand(names, isFish)] so zsh sources interactive rc,
detect fish via isFish and use ["-ic", buildEnvironmentCaptureCommand(names,
isFish)] (or fish's appropriate interactive flag) so interactive blocks run, and
fall back to ["-lc", ...] for other shells; keep the same execFile(...) call but
pass the chosen flags.
In `@README.md`:
- Around line 55-56: Replace all occurrences of the installer and clone URLs
that point to "mtdewwolf/t3code-gemini" with the fork target
"aaditagrawal/t3code"; specifically update the curl install URL
"https://raw.githubusercontent.com/mtdewwolf/t3code-gemini/main/scripts/install.sh"
and any git clone or raw.githubusercontent links elsewhere (noted around the
other occurrences) so they reference
"https://raw.githubusercontent.com/aaditagrawal/t3code/..." or
"https://github.com/aaditagrawal/t3code.git" as appropriate to ensure all README
install/clone commands target the fork.
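The two `shell.ts` suggestions above (fish-safe PATH serialization and per-shell interactive flags) can be sketched together. Everything below is a hedged illustration: the helper names, the capture commands, and the exact flag choices are assumptions, not the repository's actual implementation.

```typescript
// Build the command that serializes PATH inside the target shell.
function buildPathCapture(isFish: boolean): string {
  // fish stores $PATH as a list; `string join ':'` serializes it losslessly
  // even when entries contain spaces, unlike `echo $PATH | string replace`.
  return isFish ? "string join ':' $PATH" : 'printf \'%s\' "$PATH"';
}

// Choose execFile flags so interactive startup files are sourced where needed.
function chooseShellFlags(shellPath: string, captureCommand: string): string[] {
  if (shellPath.includes("fish")) {
    return ["-ic", captureCommand]; // interactive so config.fish blocks run
  }
  if (shellPath.includes("zsh")) {
    return ["-ilc", captureCommand]; // zsh sources ~/.zshrc only when interactive
  }
  return ["-lc", captureCommand]; // login shell default for bash and others
}
```

The same `execFile(shellPath, flags)` call can then be kept; only the flag array changes per shell.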
---
Outside diff comments:
In `@packages/contracts/src/model.ts`:
- Around line 405-414: DEFAULT_MODEL_BY_PROVIDER contains unprefixed slugs for
the opencode and kilo providers which conflicts with canonical slugs in
MODEL_OPTIONS_BY_PROVIDER; update the entries in DEFAULT_MODEL_BY_PROVIDER for
the ProviderKind keys "opencode" and "kilo" to use the canonical prefixed slug
"openai/gpt-5" so defaults match MODEL_OPTIONS_BY_PROVIDER and downstream
consumers that expect prefixed values.
In `@packages/shared/src/shell.ts`:
- Around line 91-99: The function currently strips delimiter newlines then calls
value.trim(), which can alter legitimate environment values; instead preserve
internal leading/trailing whitespace by removing the final .trim() usage. Update
the return logic that references the variable value (the slice computed into
value after the startsWith/endsWith newline removals) to return value if
value.length > 0, otherwise undefined (i.e., replace the value.trim() checks
with direct value.length checks and return the raw value).
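The whitespace-preserving extraction described above can be sketched as follows. The helper name and delimiter handling are assumptions about how `shell.ts` slices captured values; the key point is stripping only the delimiter newlines and treating an empty capture as missing.

```typescript
// Extract a captured env value: remove the surrounding delimiter newlines,
// but keep any legitimate leading/trailing whitespace inside the value.
function extractCapturedValue(raw: string): string | undefined {
  let value = raw;
  if (value.startsWith("\n")) {
    value = value.slice(1);
  }
  if (value.endsWith("\n")) {
    value = value.slice(0, -1);
  }
  // No .trim() here: env values like "  x  " are legal and must survive.
  return value.length > 0 ? value : undefined;
}
```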
---
Nitpick comments:
In `@apps/server/src/persistence/Layers/OrchestrationEventStore.ts`:
- Around line 44-55: The patchStringProvider function marks rows as patched even
when normalization is a no-op; change patchStringProvider (which reads p[field]
and calls normalizePersistedProviderKindName) to only assign p[field] =
normalizedProvider and set patched = true when normalizedProvider is non-null
AND normalizedProvider !== value (i.e., the normalized string actually differs
from the original). Keep the existing typeof check for string and the null check
for normalizedProvider, but avoid modifying p or toggling patched when the value
is already canonical.
In `@apps/web/src/components/chat/ProviderModelPicker.logic.test.ts`:
- Around line 26-31: The test hardcodes specific model slugs (e.g., in
assertions against modelOptions.copilot, modelOptions.cursor,
modelOptions.opencode, modelOptions.geminiCli, modelOptions.amp,
modelOptions.kilo) which will break if the contracts' built-in catalogs change;
update the test to derive expected slugs from the canonical source
(MODEL_OPTIONS_BY_PROVIDER in packages/contracts) or at minimum add a clear
comment in ProviderModelPicker.logic.test explaining that these hardcoded slugs
must remain in sync with MODEL_OPTIONS_BY_PROVIDER, and preferably replace the
hardcoded values by importing MODEL_OPTIONS_BY_PROVIDER and asserting membership
against that source instead of literal strings.
In `@apps/web/src/providerModelOptions.ts`:
- Around line 72-78: Add a brief comment explaining the merge ordering behavior
around discoveredBySlug, merged, additions and result[provider]: state that for
non-copilot providers the deduped discovered models (additions) are
intentionally prepended via result[provider] = [...additions, ...merged] to
prioritize newly discovered models in the UI, whereas for the copilot path the
discovered models are appended (refer to the code handling "copilot") — place
the comment above the merged/additions block near the map creation to clarify
intent for future maintainers.
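The prepend-versus-append ordering described in this nitpick can be illustrated with a small standalone function. `ModelOption` and `mergeDiscovered` are hypothetical names; the real logic lives in `apps/web/src/providerModelOptions.ts`.

```typescript
interface ModelOption {
  readonly slug: string;
}

function mergeDiscovered(
  base: ReadonlyArray<ModelOption>,
  discovered: ReadonlyArray<ModelOption>,
  provider: string,
): ModelOption[] {
  const existing = new Set(base.map((m) => m.slug));
  // Discovered models not already in the built-in catalog.
  const additions = discovered.filter((m) => !existing.has(m.slug));
  // copilot appends discovered-only models; other providers prepend them
  // so newly available models surface at the top of the picker.
  return provider === "copilot" ? [...base, ...additions] : [...additions, ...base];
}
```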
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 3c217672-44ff-4be1-95fd-a63479fd56d7
⛔ Files ignored due to path filters (1)
`bun.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (32)
- CLAUDE.md
- CONTRIBUTING.md
- README.md
- apps/server/src/commandPath.ts
- apps/server/src/geminiCliServerManager.test.ts
- apps/server/src/geminiCliServerManager.ts
- apps/server/src/open.ts
- apps/server/src/orchestration/Layers/ProviderCommandReactor.ts
- apps/server/src/persistence/Layers/OrchestrationEventStore.test.ts
- apps/server/src/persistence/Layers/OrchestrationEventStore.ts
- apps/server/src/persistence/Migrations.ts
- apps/server/src/persistence/Migrations/020_NormalizeLegacyProviderKinds.test.ts
- apps/server/src/persistence/Migrations/020_NormalizeLegacyProviderKinds.ts
- apps/server/src/persistence/Migrations/021_RepairProjectionThreadProposedPlanImplementationColumns.test.ts
- apps/server/src/persistence/Migrations/021_RepairProjectionThreadProposedPlanImplementationColumns.ts
- apps/server/src/provider/Layers/CursorAdapter.ts
- apps/server/src/provider/Layers/ProviderSessionDirectory.test.ts
- apps/server/src/provider/Layers/ProviderSessionDirectory.ts
- apps/server/src/provider/providerKind.ts
- apps/web/src/components/ChatView.tsx
- apps/web/src/components/chat/ProviderModelPicker.logic.test.ts
- apps/web/src/components/chat/ProviderModelPicker.tsx
- apps/web/src/components/settings/SettingsPanels.browser.tsx
- apps/web/src/components/settings/SettingsPanels.tsx
- apps/web/src/components/ui/sidebar.tsx
- apps/web/src/providerModelOptions.ts
- apps/web/src/providerModels.ts
- apps/web/src/routes/_chat.$threadId.tsx
- package.json
- packages/contracts/src/model.ts
- packages/shared/src/model.ts
- packages/shared/src/shell.ts
```ts
it("uses piped stdio without a shell", () => {
  const options = buildGeminiSpawnOptions({
    cwd: "/tmp",
    env: {},
  });

  expect(options.cwd).toBe("/tmp");
  expect(options.stdio).toEqual(["pipe", "pipe", "pipe"]);
});
```
Assert shell: false or rename the test.
The test says "without a shell", but it never checks options.shell, so a shell-based regression would still pass.
Suggested test fix

```diff
 it("uses piped stdio without a shell", () => {
   const options = buildGeminiSpawnOptions({
     cwd: "/tmp",
     env: {},
   });
   expect(options.cwd).toBe("/tmp");
   expect(options.stdio).toEqual(["pipe", "pipe", "pipe"]);
+  expect(options.shell).toBe(false);
 });
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
it("uses piped stdio without a shell", () => {
  const options = buildGeminiSpawnOptions({
    cwd: "/tmp",
    env: {},
  });
  expect(options.cwd).toBe("/tmp");
  expect(options.stdio).toEqual(["pipe", "pipe", "pipe"]);
  expect(options.shell).toBe(false);
});
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/server/src/geminiCliServerManager.test.ts` around lines 22 - 30, The
test "uses piped stdio without a shell" is incomplete because it doesn't assert
the spawn options' shell setting; update the test that calls
buildGeminiSpawnOptions to also assert options.shell is false (or rename the
test to not claim "without a shell" if you intend a different behavior).
Specifically, inside the it block referencing buildGeminiSpawnOptions and the
returned options, add an assertion for options.shell === false (or change the
test description) so regressions that enable a shell will fail.
```ts
export function resolveGeminiSpawnPlan(
  input: {
    readonly binaryPath: string;
    readonly args: ReadonlyArray<string>;
    readonly cwd: string;
    readonly env: NodeJS.ProcessEnv;
  },
  platform: NodeJS.Platform = process.platform,
): GeminiSpawnPlan {
  const options = buildGeminiSpawnOptions({
    cwd: input.cwd,
    env: input.env,
  });

  if (platform !== "win32") {
    return {
      command: input.binaryPath,
      args: [...input.args],
      options,
    };
  }

  const resolvedBinaryPath =
    resolveCommandPath(input.binaryPath, {
      platform,
      env: input.env,
    }) ?? input.binaryPath;

  if (extname(resolvedBinaryPath).toLowerCase() === ".js") {
    return {
      command: resolveNodeCommand(input.env),
      args: [resolvedBinaryPath, ...input.args],
      options,
    };
  }

  const bundledEntryPoint = resolveGeminiShimEntryPoint(resolvedBinaryPath);
  if (bundledEntryPoint) {
    return {
      command: resolveNodeCommand(input.env),
      args: [bundledEntryPoint, ...input.args],
      options,
    };
  }

  return {
    command: resolvedBinaryPath,
    args: [...input.args],
    options,
  };
}
```
🧩 Analysis chain
🌐 Web query:
What is the npm global installation structure for @google/gemini-cli on Windows?
💡 Result:
On Windows, a global `npm install -g @google/gemini-cli` installs into your global prefix (`npm config get prefix`) using npm's standard global layout: packages go under `{prefix}/lib/node_modules` (effectively `{prefix}/node_modules` on Windows) and command shims go in `{prefix}` as `.cmd` (and PowerShell) launchers. [1]
So you’ll typically see (paths vary with your prefix):
```
{prefix}\
├─ gemini.cmd        <-- Windows shim created by npm for the CLI "gemini" [1]
├─ gemini.ps1        <-- PowerShell shim (often created alongside .cmd) [1]
└─ node_modules\
   └─ @google\
      └─ gemini-cli\
         ├─ package.json
         ├─ dist\
         │  └─ index.js   <-- actual entrypoint; package publishes "bin": { "gemini": "dist/index.js" } [2]
         └─ node_modules\  <-- dependencies
```
Common default per-user prefix example (if installed with typical settings) is:
C:\Users\<you>\AppData\Roaming\npm
…so the key locations become:
- CLI shim: C:\Users\<you>\AppData\Roaming\npm\gemini.cmd
- Package: C:\Users\<you>\AppData\Roaming\npm\node_modules\@google\gemini-cli\... [3]
Quick commands to confirm on your machine
- `npm config get prefix`
- `npm root -g` (shows where `@google/gemini-cli` is installed)
Sources: npm global folder/shim behavior [1], gemini-cli published bin entry [2], Windows prefix example [3].
[1] (docs.npmjs.com)
[2] (github.com)
[3] (pad.carpentries.org)
Citations:
- 1: https://docs.npmjs.com/cli/v8/commands/npm-install/?utm_source=openai
- 2: google-gemini/gemini-cli#2077 ("npx https://github.com/google-gemini/gemini-cli fails silently while npm install -g @google/gemini-cli works")
- 3: https://pad.carpentries.org/2025-11-28-sejong/export/pdf?utm_source=openai
🏁 Scripts executed:

```shell
# Find the resolveGeminiShimEntryPoint function definition
rg -A 10 "function resolveGeminiShimEntryPoint" apps/server/src/

# Check the test file for expected paths
rg -B 3 -A 3 "bundle/gemini.js\|bundledEntryPoint" apps/server/src/geminiCliServerManager.test.ts

# Verify the actual structure by examining imports and path resolution in the main file
head -50 apps/server/src/geminiCliServerManager.ts

# Get the complete resolveGeminiShimEntryPoint function
rg -A 20 "function resolveGeminiShimEntryPoint" apps/server/src/geminiCliServerManager.ts

# Search for test file with different patterns
fd "geminiCliServerManager.test" apps/server/src/

# Check if test file exists and search for any test assertions about the shim
rg "resolveGeminiShimEntryPoint" apps/server/

# Check the test file for relevant test cases
cat apps/server/src/geminiCliServerManager.test.ts

# Check if @google/gemini-cli actually has a bundle/gemini.js file
# by searching for any references to dist/index.js or bundle in the codebase
rg "dist/index\.js|bundle" apps/server/src/geminiCliServerManager.ts
```
Fix Windows npm shim detection path: expected bundle/gemini.js does not exist in actual package
The resolveGeminiShimEntryPoint function constructs the path node_modules/@google/gemini-cli/bundle/gemini.js, but the actual @google/gemini-cli package publishes its entry point as dist/index.js (per the "bin": { "gemini": "dist/index.js" } declaration in its package.json). The existsSync check will always fail on Windows, causing the shim detection to fall through and spawn the binary path directly, breaking the intended npm shim handling. Change the path construction to use dist/index.js instead of bundle/gemini.js, and update the corresponding test assertion.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/server/src/geminiCliServerManager.ts` around lines 218 - 268, The
Windows npm shim detection fails because resolveGeminiShimEntryPoint builds
node_modules/@google/gemini-cli/bundle/gemini.js which doesn't exist; update
resolveGeminiShimEntryPoint to check for
node_modules/@google/gemini-cli/dist/index.js (use existsSync on that path)
instead of bundle/gemini.js so the shim branch in resolveGeminiSpawnPlan
triggers correctly, and update the related test assertion to expect
dist/index.js as the shim entry point; keep references to
resolveGeminiShimEntryPoint, resolveGeminiSpawnPlan, and the existsSync check
when making the change.
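The suggested fix can be sketched as below. The `exists` predicate is injected here purely to keep the sketch testable without touching the filesystem; the real helper would call `fs.existsSync` directly. Surrounding details (how the shim path is obtained, error handling) are assumptions.

```typescript
import { dirname, join } from "node:path";

// Resolve the package entry point that sits next to npm's Windows shim:
// {shimDir}\node_modules\@google\gemini-cli\dist\index.js
function resolveGeminiShimEntryPoint(
  shimPath: string,
  exists: (p: string) => boolean,
): string | null {
  const entryPoint = join(
    dirname(shimPath),
    "node_modules",
    "@google",
    "gemini-cli",
    "dist",
    "index.js", // the package's published bin target; bundle/gemini.js is not shipped
  );
  return exists(entryPoint) ? entryPoint : null;
}
```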
```ts
const threadModelSelections = yield* Cache.make<string, ModelSelection>({
  capacity: 10_000,
  timeToLive: Duration.hours(24),
  lookup: () => Effect.fail(new Error("not found")),
});
```
Don't expire per-thread model selections while a session is still alive.
This state used to live in a plain Map, but it now ages out after 24 hours and can also be evicted under pressure. The downstream lookups on Lines 313-315, 399-401, and 759-765 fall back to thread.modelSelection on a cache miss, so a long-lived thread can silently lose a custom model slug and restart/send subsequent turns with the wrong model. Please keep this authoritative state non-evicting for active sessions, or persist it with the thread/session state instead.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/server/src/orchestration/Layers/ProviderCommandReactor.ts` around lines
168 - 172, threadModelSelections is being created as an expiring/evictable Cache
which allows active thread model selections to be dropped; change it to
authoritative, non-evicting storage so active sessions cannot lose their custom
model slug. Replace the Cache.make call that constructs threadModelSelections
with a non-evicting structure (e.g., a plain Map<string, ModelSelection> or a
Cache configured with no timeToLive and no capacity-based eviction) and ensure
downstream fallbacks that read thread.modelSelection continue to consult this
authoritative store; update any helper code that relied on Cache.make semantics
so get/set/remove on threadModelSelections behave correctly for long-lived
sessions.
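A minimal sketch of the non-evicting alternative suggested above: a plain `Map` keyed by thread id, cleared only when the session is explicitly torn down. The shape of `ModelSelection` is assumed here for illustration.

```typescript
interface ModelSelection {
  readonly provider: string;
  readonly modelSlug: string;
}

// Authoritative, non-evicting storage: entries never age out or get evicted
// under pressure, so long-lived threads keep their custom model slug.
const threadModelSelections = new Map<string, ModelSelection>();

function setSelection(threadId: string, selection: ModelSelection): void {
  threadModelSelections.set(threadId, selection);
}

function getSelection(threadId: string): ModelSelection | undefined {
  return threadModelSelections.get(threadId);
}

// Called from session teardown, replacing TTL-based expiry.
function clearSelection(threadId: string): void {
  threadModelSelections.delete(threadId);
}
```

Downstream fallbacks to `thread.modelSelection` would keep consulting this store first, as before.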
```ts
  "Cursor ACP session/cancel failed or is unavailable; terminating process as fallback.",
  cancelResult.error,
);
// Terminate the process to ensure the turn actually stops.
// This will trigger the 'exit' handler which cleans up the session.
yield* stopSessionInternal(context, { emitExitEvent: true });
```
Forced-stop fallback is reported as a graceful exit.
This path is entered because session/cancel failed, but stopSessionInternal(..., { emitExitEvent: true }) will emit session.exited with exitKind: "graceful" and bypass the real exit handler. That makes restart/recovery logic and telemetry see a clean shutdown when we actually killed the process.
Please thread an explicit exit reason/kind into stopSessionInternal, or let the child exit event publish the final process outcome for this fallback.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/server/src/provider/Layers/CursorAdapter.ts` around lines 1615 - 1620,
The forced-stop fallback currently calls stopSessionInternal(context, {
emitExitEvent: true }) which makes the emitted session.exited appear "graceful";
modify the call to thread an explicit exit kind/reason (e.g.,
stopSessionInternal(context, { emitExitEvent: true, exitKind: "forced-stop",
exitReason: "cancel-fallback" })) and update stopSessionInternal (and any
session.exited emission logic) to prefer this passed exitKind/exitReason so
telemetry/restart logic sees a non-graceful termination from the session/cancel
fallback.
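A hedged sketch of what threading the exit kind could look like. The `exitKind`/`exitReason` option names are the ones proposed in the prompt above, not the adapter's current API:

```typescript
// Option and field names follow the review suggestion; they are assumptions,
// not CursorAdapter's real signatures.
type ExitKind = "graceful" | "forced-stop";

interface StopOptions {
  emitExitEvent: boolean;
  exitKind?: ExitKind;
  exitReason?: string;
}

interface SessionExitedEvent {
  exitKind: ExitKind;
  exitReason?: string;
}

// Prefer a caller-supplied kind; only default to "graceful" when none is given,
// so the forced-stop fallback no longer masquerades as a clean shutdown.
function buildSessionExitedEvent(options: StopOptions): SessionExitedEvent {
  return {
    exitKind: options.exitKind ?? "graceful",
    exitReason: options.exitReason,
  };
}
```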
      } else {
        yield* completeTurn(context, "interrupted", "Turn interrupted by user.", "cancelled");
Don't complete the turn before the cancelled prompt settles.
sendTurn() still has an in-flight session/prompt tied to shared context, so clearing context.turnState here reopens the session too early. If a new turn starts before the old prompt resolves, that late completion path can close the new turn instead of the interrupted one.
Please keep the original turn pinned until the prompt response arrives, or gate completion on a captured turnId rather than the current context.turnState.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/server/src/provider/Layers/CursorAdapter.ts` around lines 1621 - 1622,
The current code calls completeTurn using the live context.turnState which can
be overwritten by a new turn before the original prompt resolves; instead,
capture the original turn identifier (e.g., store a local turnId when
sendTurn/session/prompt is started) and use that captured id to gate completion
(pass it to completeTurn or verify context.turnState.id matches the captured id
before clearing); do not clear or overwrite context.turnState until the
in-flight prompt settles — alternatively keep the original turn pinned (e.g.,
await the prompt response while holding the captured turn) and only then call
completeTurn for that specific turnId so a newly started turn cannot be closed
by a late completion.
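A small sketch of gating completion on a captured turn id. The shapes are illustrative, not the adapter's real context type; the idea is that a late completion for a superseded turn becomes a no-op:

```typescript
// Illustrative shapes only.
interface TurnState { id: string }
interface AdapterContext { turnState: TurnState | null }

// Complete the turn only if it is still the one captured when the prompt was
// sent. A stale completion cannot close a turn it does not own.
function completeTurnIfCurrent(context: AdapterContext, capturedTurnId: string): boolean {
  if (context.turnState?.id !== capturedTurnId) return false;
  context.turnState = null;
  return true;
}
```

The caller would record `const capturedTurnId = context.turnState.id` when starting `session/prompt`, then pass it here once the cancelled prompt settles.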
## Development Setup

Small, focused bug fixes.

1. Clone the repository: `git clone https://github.com/mtdewwolf/t3code-gemini.git`
Fix clone target to the active fork repository.
The setup command currently clones mtdewwolf/t3code-gemini, which can route contributors away from this fork’s contribution flow.
Suggested doc fix
-1. Clone the repository: `git clone https://github.com/mtdewwolf/t3code-gemini.git`
+1. Clone the repository: `git clone https://github.com/aaditagrawal/t3code.git`

Based on learnings: "Always explicitly target the fork with --repo aaditagrawal/t3code when running gh pr create. Never run gh write commands against any repo other than aaditagrawal/t3code."
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-1. Clone the repository: `git clone https://github.com/mtdewwolf/t3code-gemini.git`
+1. Clone the repository: `git clone https://github.com/aaditagrawal/t3code.git`
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@CONTRIBUTING.md` at line 27, Update the clone target in CONTRIBUTING.md so
contributors clone this fork instead of mtdewwolf's repo: replace the URL string
"https://github.com/mtdewwolf/t3code-gemini.git" with the fork URL
"https://github.com/aaditagrawal/t3code.git" and add a brief note to always use
--repo aaditagrawal/t3code when running gh pr create to ensure PRs target this
repository.
      ]
    },
    "trustedDependencies": [
      "electron",
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify where electron is declared and whether lifecycle-script trust is justified.
set -euo pipefail
echo "== package.json files declaring electron =="
fd -i '^package\.json$' | xargs -I{} sh -c 'echo "--- {} ---"; rg -n "\"electron\"\\s*:" "{}" || true'
echo
echo "== occurrences of trustedDependencies across repo =="
rg -n -C2 '"trustedDependencies"|"electron"|"node-pty"' --glob '**/package.json'

Repository: aaditagrawal/t3code
Length of output: 1155
🏁 Script executed:
# Check if electron in apps/desktop has any lifecycle scripts that would justify trustedDependencies
cat apps/desktop/package.json | jq -r '.scripts // empty'
# Also check what Bun's documentation context might be for trustedDependencies
# Search for any comments or context around trustedDependencies in the repo
rg -n -B3 -A3 "trustedDependencies" --type json

Repository: aaditagrawal/t3code
Length of output: 607
🌐 Web query:
Bun trustedDependencies monorepo workspace packages
💡 Result:
In a Bun workspaces monorepo (e.g. "workspaces": ["packages/*"]), each workspace package should declare its own dependencies in its own packages/<name>/package.json. [2]
To allow lifecycle scripts (postinstall, native builds, etc.) for a dependency, add it to trustedDependencies in the package.json of the workspace that uses it (or run bun pm trust <pkg> from that workspace, since it edits the current workspace’s package.json). [1][3]
Notes:
- `trustedDependencies` only trusts the exact package(s) listed, not their transitive dependencies. [1]
- After adding trust, do a fresh install so Bun re-runs the previously-blocked scripts (Bun's docs suggest deleting `node_modules` and `bun.lock`, then running `bun install`). [1]
- To see what got blocked, use `bun pm untrusted`, then trust what you need with `bun pm trust <name>` (or `--all`). [3]
Sources: Bun trusted dependencies guide [1], Bun workspaces guide [2], bun pm CLI docs (untrusted/trust) [3].
🏁 Script executed:
# Check if apps/desktop and apps/server have trustedDependencies declarations
echo "=== apps/desktop/package.json trustedDependencies ==="
jq '.trustedDependencies // "not defined"' apps/desktop/package.json
echo
echo "=== apps/server/package.json trustedDependencies ==="
jq '.trustedDependencies // "not defined"' apps/server/package.json

Repository: aaditagrawal/t3code
Length of output: 198
Move electron and node-pty to their respective workspace package.json files instead of root.
Per Bun monorepo guidelines, trustedDependencies should be declared in each workspace that uses the package. electron belongs in apps/desktop/package.json and node-pty in apps/server/package.json. Keeping them at root unnecessarily expands the trust surface to all workspaces.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@package.json` at line 79, Remove "electron" and "node-pty" from the root
package.json trustedDependencies list and instead add "electron" to the
trustedDependencies of the desktop workspace package.json (the workspace that
uses the desktop app) and add "node-pty" to the trustedDependencies of the
server workspace package.json (the workspace that runs the server). Update the
respective workspace package.json files' trustedDependencies arrays to include
those exact package names so each workspace declares only the dependencies it
trusts (leave other root entries unchanged).
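As a sketch of the suggested layout, `apps/desktop/package.json` would carry only the electron trust entry (the `name` field here is an assumption; keep whatever the workspace already declares):

```json
{
  "name": "desktop",
  "trustedDependencies": ["electron"]
}
```

Analogously, `apps/server/package.json` would list `"trustedDependencies": ["node-pty"]`, and both entries would be removed from the root `package.json`. Per Bun's docs, delete `node_modules` and `bun.lock` and re-run `bun install` afterward so the previously-blocked lifecycle scripts run.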
    const captureCmd =
      isFish && name === "PATH"
        ? `echo $PATH | string replace -a ' ' ':' || true`
        : `printenv ${name} || true`;
🧩 Analysis chain
🌐 Web query:
In fish shell, does `echo $PATH` preserve path entries containing spaces, and is `string join ":" $PATH` the correct way to serialize the PATH list?
💡 Result:
- `echo $PATH` in fish does not reliably preserve/round-trip entries containing spaces. In fish, variables are lists; outside quotes `$PATH` expands to multiple arguments (one per entry) and `echo` prints them separated by spaces. Fish does not do word-splitting on spaces inside an element, but the printed output is ambiguous because `echo` doesn't add quoting. (fishshell.com)
- To serialize PATH as a colon string, prefer `echo "$PATH"` (or `printf '%s\n' "$PATH"`). For path variables (names ending in PATH, including PATH itself), fish joins elements with `:` when expanded inside double quotes, and path variables are defined to be implicitly joined/split on `:` for compatibility. (fishshell.com)
- `string join ":" $PATH` works but is usually unnecessary for PATH. It will join list elements with `:` (equivalent intent), but in fish the idiomatic/correct way for PATH specifically is simply `echo "$PATH"`. (fishshell.com)
Citations:
- 1: https://fishshell.com/docs/3.5/language.html
- 2: https://fishshell.com/docs/3.5/language.html
- 3: https://fishshell.com/docs/3.5/language.html
Use the idiomatic fish approach for serializing PATH.
The current code `echo $PATH | string replace -a ' ' ':'` corrupts path entries containing spaces because unquoted `$PATH` expands to multiple space-separated arguments, losing the list structure. In fish, path variables automatically join with `:` when quoted, so the correct solution is:
Proposed fix
const captureCmd =
isFish && name === "PATH"
- ? `echo $PATH | string replace -a ' ' ':' || true`
+ ? `echo "$PATH" || true`
: `printenv ${name} || true`;Alternatively, string join ':' $PATH || true also works but is less idiomatic for PATH in fish.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/shared/src/shell.ts` around lines 67 - 70, The captureCmd branch for
isFish && name === "PATH" currently uses echo $PATH piped to string replace
which corrupts entries with spaces; change that branch so captureCmd uses fish's
idiomatic serialization of the PATH by emitting the PATH as a single quoted
value (so it is joined with ':'), or alternatively use fish's string join ':' on
the PATH list, instead of the echo $PATH | string replace approach.
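The corrected branch can be sketched as a small builder; the helper name is illustrative, not `shell.ts`'s actual export:

```typescript
// Quoting $PATH makes fish join the path list with ':' (path variables are
// implicitly colon-joined inside double quotes), so entries containing spaces
// survive intact. Non-PATH names and non-fish shells use printenv as before.
function buildCaptureCommand(name: string, isFish: boolean): string {
  return isFish && name === "PATH"
    ? `echo "$PATH" || true`
    : `printenv ${name} || true`;
}
```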
    const isFish = shell.endsWith("fish");
    // C023: Use -lc instead of -ilc to avoid shell pollution from interactive mode.
    // Login shell (-l) is sufficient to source profile/login scripts.
    const output = execFile(shell, ["-lc", buildEnvironmentCaptureCommand(names, isFish)], {
🧩 Analysis chain
🌐 Web query:
For zsh and fish, which startup files are sourced by `zsh -lc '…'` / `fish -lc '…'` versus `-ilc`, and do `~/.zshrc` or `config.fish` run in the non-interactive case?
💡 Result:
zsh (`zsh -lc '…'` vs `zsh -ilc '…'`)

zsh startup files depend on whether the shell is login (`-l`) and/or interactive (`-i`). zsh reads: `zshenv` for every zsh; `zprofile`/`zlogin` for login shells; `zshrc` for interactive shells. [1]

| Invocation | Login? | Interactive? | Files sourced (user files; system-wide equivalents may also run) |
|---|---|---|---|
| `zsh -lc '…'` | yes | no | `~/.zshenv`, `~/.zprofile`, `~/.zlogin` (not `~/.zshrc`) [1] |
| `zsh -ilc '…'` | yes | yes | `~/.zshenv`, `~/.zprofile`, `~/.zshrc`, `~/.zlogin` [1] |

So: `~/.zshrc` does not run in the non-interactive (`-lc`) case. [1]
fish (`fish -lc '…'` vs `fish -ilc '…'`)

fish reads its configuration on startup (conf.d snippets, then `~/.config/fish/config.fish`) for every shell, whether login and/or interactive. [2][3] You can disable this with `--no-config`. [4]

| Invocation | Login? | Interactive? | Does config.fish run? | Notes |
|---|---|---|---|---|
| `fish -lc '…'` | yes | no | yes [3] | `if status --is-interactive` blocks won't run; `if status --is-login` blocks will. [2][3] |
| `fish -ilc '…'` | yes | yes | yes [3] | Both `status --is-interactive` and `status --is-login` tests are true. [2][3] |

So: fish `config.fish` does run in the non-interactive (`-lc`) case (unless `--no-config`). [3][4]
Sources:
[1] zsh guide (startup files and when they run) (zsh.sourceforge.io)
[2] fish docs: config is read for every shell; use status --is-interactive/--is-login (fishshell.com)
[3] fish language docs: config file order; executed on startup of every shell (fishshell.com)
[4] fish(1) man page: -l, -i, -c, and --no-config (manpages.debian.org)
Citations:
- 1: https://zsh.sourceforge.io/Guide/zshguide_us.pdf
- 2: https://fishshell.com/docs/3.5/index.html
- 3: https://fishshell.com/docs/3.5/language.html
- 4: https://manpages.debian.org/bookworm/fish-common/fish.1.en.html
-lc skips interactive startup files, affecting zsh more than fish.
Line 120 uses a non-interactive login shell (`-lc`). For zsh, this means `~/.zshrc` is not sourced, so users who add PATH entries or exports there will lose those values. For fish, `config.fish` is still sourced, but `if status --is-interactive` blocks within it won't execute, so environment exports nested in interactive blocks will be lost.
This could cause command discovery regression for users relying on interactive-only config blocks, particularly in zsh where the entire interactive rc file is skipped.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/shared/src/shell.ts` around lines 117 - 120, The current execFile
call always uses ["-lc", buildEnvironmentCaptureCommand(...)] which skips
interactive startup files for shells like zsh and can drop user PATH/exports;
update the flag selection before the execFile invocation: detect zsh (e.g. shell
includes "zsh") and use ["-ilc", buildEnvironmentCaptureCommand(names, isFish)]
so zsh sources interactive rc, detect fish via isFish and use ["-ic",
buildEnvironmentCaptureCommand(names, isFish)] (or fish's appropriate
interactive flag) so interactive blocks run, and fall back to ["-lc", ...] for
other shells; keep the same execFile(...) call but pass the chosen flags.
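A sketch of the flag selection the prompt proposes; the helper name is illustrative. Note the trade-off: interactive mode can reintroduce the shell pollution the C023 comment was avoiding, so this is one option, not a settled fix.

```typescript
// zsh gets an interactive login shell so ~/.zshrc is sourced; fish gets -ic so
// `if status --is-interactive` blocks in config.fish run (config.fish itself
// runs for every fish shell regardless); all other shells keep the current -lc.
function selectShellFlags(shell: string): string[] {
  if (shell.endsWith("zsh")) return ["-ilc"];
  if (shell.endsWith("fish")) return ["-ic"];
  return ["-lc"];
}
```

The call site would become `execFile(shell, [...selectShellFlags(shell), buildEnvironmentCaptureCommand(names, isFish)], {...})`.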
    bash <(curl -fsSL https://raw.githubusercontent.com/mtdewwolf/t3code-gemini/main/scripts/install.sh)
Update installer and clone URLs to this fork target.
Current commands point to mtdewwolf/t3code-gemini, which can send users to a different repository than this fork’s expected workflow.
Suggested doc fix
-bash <(curl -fsSL https://raw.githubusercontent.com/mtdewwolf/t3code-gemini/main/scripts/install.sh)
+bash <(curl -fsSL https://raw.githubusercontent.com/aaditagrawal/t3code/main/scripts/install.sh)
-bash <(curl -fsSL https://raw.githubusercontent.com/mtdewwolf/t3code-gemini/main/scripts/install.sh)
+bash <(curl -fsSL https://raw.githubusercontent.com/aaditagrawal/t3code/main/scripts/install.sh)
-git clone https://github.com/mtdewwolf/t3code-gemini.git
-cd t3code-gemini
+git clone https://github.com/aaditagrawal/t3code.git
+cd t3code

Based on learnings: "Never create PRs, push branches, post comments, or perform any write operation against pingdotgg/t3code or any upstream/third-party repo. All write operations must target aaditagrawal/t3code fork only."
Also applies to: 60-61, 70-71
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@README.md` around lines 55 - 56, Replace all occurrences of the installer and
clone URLs that point to "mtdewwolf/t3code-gemini" with the fork target
"aaditagrawal/t3code"; specifically update the curl install URL
"https://raw.githubusercontent.com/mtdewwolf/t3code-gemini/main/scripts/install.sh"
and any git clone or raw.githubusercontent links elsewhere (noted around the
other occurrences) so they reference
"https://raw.githubusercontent.com/aaditagrawal/t3code/..." or
"https://github.com/aaditagrawal/t3code.git" as appropriate to ensure all README
install/clone commands target the fork.
Follow-up fixes for this PR are pushed on the fork-owned branch. Commit:

This batch addresses the unresolved review items I validated locally:
Validation run locally:
I did not push to the contributor fork branch because this checkout is restricted to writes against `aaditagrawal/t3code`.
Rebased my follow-up branch onto current `main`.

Updated branch:

Conflict handling:
Validation on the rebased branch:
If you want, I can keep using this follow-up branch for any review comments that come in on #54.
Rebased and conflict resolution on a Friday night? The dedication is real. 🍞 The surgical conflict handling and continuous validation are much appreciated. I'm totally on board with keeping this branch alive for any remaining #54 review items—way better than playing branch bingo every time a new tweak comes in. Let me know if you need another set of eyes on anything, otherwise I'll be watching from the toaster slot.
The conflict banner on this PR still appears because GitHub is evaluating the original cross-repo head branch. I verified locally that the conflict is resolvable, but clearing that banner would require writing back to the third-party head repo. Per this fork's repo policy, I am not doing writes to third-party repos. To keep this fully inside `aaditagrawal/t3code`:
Validation on the replacement branch:
So the short answer is: the original #54 banner will keep showing until
Addressed in #58
Pull Request Description: Gemini CLI Enhancements & Persistence Improvements
This PR introduces several significant improvements to the Gemini CLI provider, core orchestration persistence, and overall system reliability.
Key Changes
1. Gemini CLI Provider Enhancements

- `geminiCliServerManager.ts` for more robust lifecycle management of the Gemini CLI process.
- `geminiCliServerManager.test.ts` to ensure reliable behavior across different runtime scenarios.
- `providerKind.ts` and `providerModelOptions.ts` to unify how different providers are handled across the system.

2. Persistence Layer Upgrades

- `OrchestrationEventStore.ts` for better reliability when persisting and retrieving orchestration events.
- `020_NormalizeLegacyProviderKinds.ts`: Migration to normalize provider kind names in existing databases.
- `021_RepairProjectionThreadProposedPlanImplementationColumns.ts`: Fix for missing or inconsistent columns in thread projections.
- `ProviderSessionDirectory.ts` to more accurately track active and stale provider sessions.

3. UI/UX Refinements

- `SettingsPanels.tsx` and `SettingsPanels.browser.tsx` to provide a more intuitive configuration experience for custom model slugs and themes.
- `ProviderModelPicker.tsx` (and added logic tests) for smoother provider and model selection in the chat view.
- `sidebar.tsx` for better alignment and styling.

4. Core System Improvements

- `commandPath.ts` to centralize and improve how CLI binary paths are resolved.
- `packages/contracts/src/model.ts` and `packages/shared/src/model.ts` to reflect the improved provider and event structures.
- `packages/shared/src/shell.ts` for more reliable command execution across different platforms.

Why These Changes?
These updates were driven by a need for more reliable Gemini CLI integration and better persistence for long-running orchestration sessions. The migrations ensure that existing users' data is correctly transitioned to the new, more consistent internal models.
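The provider-kind normalization the migration and runtime apply can be sketched as a lookup with pass-through. Only the `"gemini"` → `"geminiCli"` mapping is shown in this PR's walkthrough; any other entries in the real migration are assumptions:

```typescript
// Legacy persisted names mapped to current provider kinds. The real table in
// 020_NormalizeLegacyProviderKinds.ts may contain additional entries.
const LEGACY_PROVIDER_KINDS: Record<string, string> = {
  gemini: "geminiCli",
};

// Unknown or already-current names pass through unchanged, so the
// normalization is safe to apply on every read.
function normalizePersistedProviderKindName(raw: string): string {
  return LEGACY_PROVIDER_KINDS[raw] ?? raw;
}
```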
Verification Results

- `apps/server` and `apps/web` pass.

Maintained by mtdewwolf | Forked from aaditagrawal/t3code
Summary by CodeRabbit
New Features
Bug Fixes
Documentation
Chores