feat: add cross-platform service management helper (#421)
KooshaPari wants to merge 3 commits into main from …
Conversation
📝 Walkthrough
This pull request introduces cross-platform service management capabilities via new shell scripts and task definitions, adds flag validation for CLI incognito options, enhances authentication security with fallback logic and path validation, filters sensitive model fields from API responses, refactors authentication and configuration handling across multiple modules, and reorganizes imports throughout the codebase for consistency.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Warning: there were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it is a critical failure.
🔧 golangci-lint (2.5.0) — Error: can't load config: the Go language version (go1.25) used to build golangci-lint is lower than the targeted Go version (1.26.0)
Found overlap with replay recovery work in PR #425: this PR adds service-management helpers/tasks and docs language around service control. Since #425 replays the same legacy service intent on a clean base and removes unresolved merge markers, please consolidate to avoid duplicated helper paths (cliproxy-service.sh vs service) and task drift. If both remain, please explicitly call out the delta from #425 (especially Windows behavior and tests). |
Actionable comments posted: 18
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (33)
sdk/auth/kilo.go (3)
34-36: 🧹 Nitpick | 🔵 Trivial
Silently recovering from a nil context masks caller bugs — return an error instead.
The Go documentation states: "Do not pass a nil Context, even if a function permits it." Replacing a nil ctx with context.Background() hides a programming error in the caller. The nil branch should be a hard error.
🛡️ Proposed fix
```diff
-	if ctx == nil {
-		ctx = context.Background()
-	}
+	if ctx == nil {
+		return nil, fmt.Errorf("cliproxy auth: context must not be nil")
+	}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sdk/auth/kilo.go` around lines 34 - 36, The function currently swallows a nil context by replacing ctx with context.Background(); instead, detect ctx == nil and return a hard error to surface caller bugs—update the nil-check in sdk/auth/kilo.go so that when ctx is nil the function returns an error (e.g., fmt.Errorf or errors.New) rather than assigning context.Background(), and keep all downstream logic unchanged; locate the nil branch referencing the local variable ctx to make this change.
30-121: 🛠️ Refactor suggestion | 🟠 Major
Login exceeds the 40-line function length limit by more than 2×.
The function body runs from line 30 to line 121 (~91 lines). As per coding guidelines, **/*.go sets a "Maximum function length: 40 lines". Extract at minimum:
- org selection logic (lines 66–91) → selectOrg(ctx, profile, opts) (string, error)
- token/metadata construction (lines 99–120) → buildAuth(status, orgID, defaults, provider) (*coreauth.Auth, error)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sdk/auth/kilo.go` around lines 30 - 121, The Login function is too long; extract the org selection block that inspects profile.Orgs and prompts (currently using profile, opts, and fmt) into a new function selectOrg(ctx, profile, opts) (string, error) and extract the token/metadata/storage construction that builds kilo.KiloTokenStorage, computes fileName via kilo.CredentialFileName, and returns the *coreauth.Auth (currently using status, orgID, defaults and a.Provider()) into a new function buildAuth(status, orgID, defaults, provider) (*coreauth.Auth, error); update Login to call selectOrg to obtain orgID (handling returned error) and call buildAuth to produce the final Auth, preserving existing error handling and defaults fallback logic and reusing symbols kilocodeAuth, profile, opts, status, defaults, kilo.KiloTokenStorage, kilo.CredentialFileName, coreauth.Auth, and a.Provider().
43-95: ⚠️ Potential issue | 🟠 Major
Replace all fmt.Print* calls with zerolog — and avoid logging PII.
Two distinct issues here:
1. Guideline violation: Every status/warning line in Login (lines 43, 49, 50, 52, 58, 67, 69, 83, 84, 87, 95) uses fmt.Println/fmt.Printf instead of zerolog. As per coding guidelines, **/*.go must "Use zerolog for logging (NOT fmt.Print)".
2. PII leak: Line 58 emits the user's email address to stdout:
```go
fmt.Printf("Authentication successful for %s\n", status.UserEmail)
```
Email addresses are PII and must not appear in unstructured log output. Replace with a non-identifying token (e.g. a truncated user ID) or omit the value entirely. The same concern applies to org names printed at lines 83, 84, and 87.
♻️ Sketch of zerolog replacement (line 58 as example)
```diff
-	fmt.Printf("Authentication successful for %s\n", status.UserEmail)
+	log.Info().Msg("kilo: authentication successful")
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sdk/auth/kilo.go` around lines 43 - 95, Replace all fmt.Print*/fmt.Printf calls inside the Login function with zerolog logger calls (e.g., logger.Info()/Warn()/Debug()) and remove any direct PII values from logs: stop printing status.UserEmail and org names/IDs to stdout and instead log non-identifying info (e.g., "authentication succeeded" or a truncated/non-sensitive user token) or omit sensitive fields; update the blocks around kilocodeAuth.InitiateDeviceFlow, kilocodeAuth.PollForToken, kilocodeAuth.GetProfile, the org-selection logic (profile.Orgs and opts.Prompt) and the kilocodeAuth.GetDefaults warning to use logger (zerolog) and ensure messages are structured (use logger.*().Str/Bool/... for non-PII fields if needed) rather than fmt.Printf, preserving behavior for defaulting to profile.Orgs[0].ID but not printing org.Name or IDs as raw PII.
internal/translator/codex/claude/codex_claude_response.go (6)
373-378: 🧹 Nitpick | 🔵 Trivial
Non-idiomatic index loop — use range.
♻️ Proposed refactor
```diff
-	for i := 0; i < len(arr); i++ {
-		n := arr[i].Get("name").String()
-		if n != "" {
-			names = append(names, n)
-		}
-	}
+	for _, item := range arr {
+		if n := item.Get("name").String(); n != "" {
+			names = append(names, n)
+		}
+	}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/translator/codex/claude/codex_claude_response.go` around lines 373 - 378, The loop over arr is non-idiomatic by using an index; change it to a range loop to improve readability and safety: replace the for i := 0; i < len(arr); i++ { n := arr[i].Get("name").String() ... } pattern with for _, item := range arr { n := item.Get("name").String(); if n != "" { names = append(names, n) } }, keeping the same variables (arr, names) and behavior.
181-182: 🧹 Nitpick | 🔵 Trivial
Inconsistent output += vs output = — both branches should use output =.
Lines 181–182 and 195–196 use output += while all other branches use output =. Since output is always "" at that point this is not a bug today, but it is misleading and fragile if initialisation ever changes.
♻️ Proposed fix
```diff
-	output += "event: content_block_delta\n"
-	output += fmt.Sprintf("data: %s\n\n", template)
+	output = "event: content_block_delta\n"
+	output += fmt.Sprintf("data: %s\n\n", template)
```
Apply the same change at lines 195–196.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/translator/codex/claude/codex_claude_response.go` around lines 181 - 182, The code uses "output +=" in the branches that append the "event: content_block_delta" and corresponding data line (building the variable output with template) while other branches use "output ="; change both instances that currently do "output +=" (the ones creating "event: content_block_delta" and its data with fmt.Sprintf using template) to "output =" so the code consistently assigns to the output variable (replace the two occurrences that concatenate with direct assignment).
47-52: 🛠️ Refactor suggestion | 🟠 Major
Cache the type assertion to avoid ~15 repeated (*param).(*ConvertCodexResponseToClaudeParams) casts.
After the nil guard, assign once and reuse throughout:
♻️ Proposed refactor
```diff
 	if *param == nil {
 		*param = &ConvertCodexResponseToClaudeParams{
 			HasToolCall: false,
 			BlockIndex:  0,
 		}
 	}
+	cp := (*param).(*ConvertCodexResponseToClaudeParams)
```
Then replace every subsequent (*param).(*ConvertCodexResponseToClaudeParams) reference with cp.
(*param).(*ConvertCodexResponseToClaudeParams)reference withcp.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/translator/codex/claude/codex_claude_response.go` around lines 47 - 52, The nil-check initializes *param to a *ConvertCodexResponseToClaudeParams but the code then repeatedly does the expensive type assertion (*param).(*ConvertCodexResponseToClaudeParams); fix by caching the assertion: after the nil guard perform a single assertion into a local variable (e.g., cp := (*param).(*ConvertCodexResponseToClaudeParams)) and then replace all subsequent occurrences of (*param).(*ConvertCodexResponseToClaudeParams) with cp; ensure cp is used consistently in functions handling ConvertCodexResponseToClaudeParams to remove repeated casts.
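The assert-once pattern this finding describes, reduced to a runnable sketch. Only the params struct fields come from the snippet above; the `handle` function and its body are illustrative:

```go
package main

import "fmt"

// ConvertCodexResponseToClaudeParams mirrors the shape shown in the review;
// the surrounding logic is a stand-in for the real translator state machine.
type ConvertCodexResponseToClaudeParams struct {
	HasToolCall bool
	BlockIndex  int
}

// handle shows the pattern: assert once into cp after the nil guard, then
// reuse cp instead of repeating the interface type assertion at every site.
func handle(param *any) int {
	if *param == nil {
		*param = &ConvertCodexResponseToClaudeParams{}
	}
	cp := (*param).(*ConvertCodexResponseToClaudeParams) // single assertion
	cp.BlockIndex++                                      // reuse the cached pointer
	cp.HasToolCall = true
	return cp.BlockIndex
}

func main() {
	var param any
	fmt.Println(handle(&param)) // 1
	fmt.Println(handle(&param)) // 2 — state persists across calls via *param
}
```

Because `cp` points at the same struct stored in `*param`, mutations through it are visible on subsequent calls, so behavior is unchanged.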
388-390: ⚠️ Potential issue | 🟡 Minor
ctx parameter in ClaudeTokenCount is never used — rename to _ or remove.
An unused named parameter in an exported function will be flagged by staticcheck/golint, conflicting with the "all new code must pass golint" requirement. If the context is not needed, rename it:
♻️ Proposed fix
```diff
-func ClaudeTokenCount(ctx context.Context, count int64) string {
+func ClaudeTokenCount(_ context.Context, count int64) string {
```
As per coding guidelines: "All new code must pass: go fmt, go vet, golint."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/translator/codex/claude/codex_claude_response.go` around lines 388 - 390, The function ClaudeTokenCount has an unused ctx parameter; update the signature to remove the unused parameter or rename it to the blank identifier to satisfy linters — e.g., change ClaudeTokenCount(ctx context.Context, count int64) to ClaudeTokenCount(_ context.Context, count int64) or ClaudeTokenCount(count int64); update any call sites accordingly and ensure the function body remains unchanged (it should still return fmt.Sprintf(`{"input_tokens":%d}`, count)).
54-54: 🧹 Nitpick | 🔵 Trivial
Remove commented-out debug log; if logging is needed, use zerolog.
```diff
-	// log.Debugf("rawJSON: %s", string(rawJSON))
```
If debug logging is ever needed here, use zerolog per the project's logging standard (e.g., log.Debug().Bytes("rawJSON", rawJSON).Msg("")).
As per coding guidelines: "Use zerolog for logging (NOT fmt.Print)."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/translator/codex/claude/codex_claude_response.go` at line 54, Remove the commented-out fmt-style debug line in codex_claude_response.go; either delete that commented log or replace it with a zerolog debug call that logs the rawJSON variable using a Bytes field (e.g., use log.Debug(), add Bytes("rawJSON", rawJSON), and call Msg with a short message) to conform to the project's zerolog logging standard.
46-202: 🛠️ Refactor suggestion | 🟠 Major
Function exceeds maximum 40-line limit — split into focused helpers.
ConvertCodexResponseToClaude is ~156 lines. Each typeStr branch is an independent state-machine handler and can be extracted into a small, named function (e.g., handleResponseCreated, handleContentPartAdded, handleFunctionCallDelta, etc.), leaving the top-level function as a dispatcher.
The same applies to ConvertCodexResponseToClaudeNonStream (lines 217–342, ~125 lines).
As per coding guidelines: "Maximum function length: 40 lines."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/translator/codex/claude/codex_claude_response.go` around lines 46 - 202, ConvertCodexResponseToClaude is too long; extract each typeStr branch into focused helpers (e.g., handleResponseCreated, handleReasoningSummaryPartAdded, handleReasoningSummaryTextDelta, handleContentPartAdded, handleOutputTextDelta, handleContentPartDone, handleCompleted, handleOutputItemAdded, handleOutputItemDone, handleFunctionCallArgumentsDelta, handleFunctionCallArgumentsDone) that accept the parsed rootResult (gjson.Result), originalRequestRawJSON, and the param pointer and return the event output string and any param mutations; then replace the big switch/if chain in ConvertCodexResponseToClaude with a short dispatcher that calls the appropriate helper based on typeStr and returns its result; apply the same refactor to ConvertCodexResponseToClaudeNonStream so each branch is a small helper, preserve all existing behavior (sjson mutations, BlockIndex and HasToolCall updates, buildReverseMapFromClaudeOriginalShortToOriginal usage, extractResponsesUsage) and keep helpers in the same package.
internal/auth/kiro/codewhisperer_client.go (4)
83-127: 🛠️ Refactor suggestion | 🟠 Major
GetUsageLimits exceeds the 40-line function length limit.
The function spans 45 lines (lines 83–127). Extract the request-header construction block (lines 91–100) into a helper such as buildGetUsageLimitsRequest(ctx, url, accessToken string) (*http.Request, error) to bring both functions within the limit.
As per coding guidelines: "Maximum function length: 40 lines".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/auth/kiro/codewhisperer_client.go` around lines 83 - 127, GetUsageLimits exceeds the 40-line limit; extract the request header construction into a helper function named buildGetUsageLimitsRequest(ctx, url, accessToken string) (*http.Request, error). Move the block that builds xAmzUserAgent, userAgent, sets Authorization, x-amz-user-agent, User-Agent, amz-sdk-invocation-id, amz-sdk-request, Connection headers and the http.NewRequestWithContext call into that helper, return the prepared *http.Request (or error), then update GetUsageLimits to call buildGetUsageLimitsRequest and proceed with the existing httpClient.Do/response handling.
92-93: ⚠️ Potential issue | 🟠 Major
Hardcoded os/windows in User-Agent is incorrect on Linux and macOS.
The userAgent string on line 93 unconditionally reports os/windows, but this is a cross-platform binary. This misrepresents the runtime environment to the AWS CodeWhisperer backend and could affect server-side telemetry or future OS-specific policy enforcement.
🐛 Proposed fix using runtime.GOOS
```diff
+import "runtime"
+
+func goosToUserAgentOS() string {
+	switch runtime.GOOS {
+	case "darwin":
+		return "macos"
+	case "windows":
+		return "windows"
+	default:
+		return "linux"
+	}
+}
-userAgent := fmt.Sprintf("aws-sdk-js/1.0.0 ua/2.1 os/windows lang/js md/nodejs#20.16.0 api/codewhispererruntime#1.0.0 m/E KiroIDE-%s-%s", kiroVersion, c.machineID)
+userAgent := fmt.Sprintf("aws-sdk-js/1.0.0 ua/2.1 os/%s lang/js md/nodejs#20.16.0 api/codewhispererruntime#1.0.0 m/E KiroIDE-%s-%s", goosToUserAgentOS(), kiroVersion, c.machineID)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/auth/kiro/codewhisperer_client.go` around lines 92 - 93, The User-Agent string construction (variables xAmzUserAgent and userAgent in codewhisperer_client.go) hardcodes "os/windows"; change it to detect the OS at runtime using runtime.GOOS and interpolate that value into both fmt.Sprintf calls (preserving kiroVersion and c.machineID usage) so the UA accurately reflects the host OS on Linux/macOS/Windows.
15-15: ⚠️ Potential issue | 🟠 Major
Replace logrus with zerolog per project logging standards.
The file imports github.com/sirupsen/logrus, which violates the coding guideline mandating zerolog. Additionally, logrus is now maintenance-only and its developers recommend migrating to other libraries.
All log.Debugf(...) call sites in this file should be converted to zerolog's chained API, e.g.:
```diff
-log "github.com/sirupsen/logrus"
+log "github.com/rs/zerolog/log"
```
Then update each call site, for example:
```diff
-log.Debugf("codewhisperer: GET %s", url)
+log.Debug().Str("url", url).Msg("codewhisperer: GET request")
```
As per coding guidelines: "Use zerolog for logging (NOT fmt.Print)".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/auth/kiro/codewhisperer_client.go` at line 15, Replace the logrus import and call sites with zerolog: remove/import alias "github.com/sirupsen/logrus" and add "github.com/rs/zerolog" (and optionally "github.com/rs/zerolog/log") then change every log.Debugf(...) usage in this file to zerolog's chained API (e.g., log.Debug().Msgf(...) or log.Debug().Str("key", val).Msgf(...)) so formatting and structured fields are preserved; update the import alias and all references to the log symbol accordingly (search for occurrences of log.Debugf in this file and update them to the zerolog pattern).
115-115: ⚠️ Potential issue | 🟠 Major
Remove PII from debug logs — email and userId are being leaked.
Two locations expose user PII in log output:
- Line 115: The raw response body is logged in full. The response includes userInfo.email and userInfo.userId (see the UserInfo struct, lines 40–42), meaning every successful API call logs the user's email and internal identifier.
- Line 139: The user's email is explicitly interpolated into a log message.
Debug logs are frequently persisted or forwarded to log aggregation systems, making this a compliance risk under GDPR/CCPA.
🔒 Proposed fix
```diff
-log.Debugf("codewhisperer: status=%d, body=%s", resp.StatusCode, string(body))
+log.Debugf("codewhisperer: status=%d, body_length=%d", resp.StatusCode, len(body))
```
```diff
-log.Debugf("codewhisperer: got email from API: %s", resp.UserInfo.Email)
+log.Debugf("codewhisperer: got email from API")
```
Also applies to: 139-139
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/auth/kiro/codewhisperer_client.go` at line 115, The debug logs currently print the raw response body and interpolate user email (leaking PII); update the logging in the code that handles the CodeWhisperer response (the log.Debugf that logs resp.StatusCode and body and the later log that prints user email) to avoid emitting UserInfo.email and UserInfo.userId. Instead either omit the response body entirely from debug logs or parse it and redact/mask userInfo.email and userInfo.userId (e.g., replace with "<redacted>" or a truncated/hash form) before logging; also change the explicit email interpolation to log a non-PII marker (e.g., "<redacted>" or user existence/status) so no raw email or internal id appears in logs.
sdk/cliproxy/auth/conductor.go (1)
2302-2313: ⚠️ Potential issue | 🔴 Critical
Data race: raw *Auth pointer escapes the read lock and is passed to PrepareRequest.
a is the live pointer stored in m.auths. After m.mu.RUnlock() (line 2308), refreshAuth can concurrently mutate fields on that exact pointer (e.g., current.NextRefreshAfter, current.LastError) while holding the write lock. Passing a to p.PrepareRequest(req, a) at line 2313 creates an unsynchronised concurrent read/write, which is a data race.
Every other path in this file that reads an auth for use outside the lock first calls .Clone() — see pickNext (line 1713), pickNextMixed (line 1796), snapshotAuths (line 1895), etc. The same pattern must be applied here.
🔒 Proposed fix
```diff
 	m.mu.RLock()
 	a := m.auths[authID]
 	var exec ProviderExecutor
 	if a != nil {
 		exec = m.executors[executorKeyFromAuth(a)]
 	}
 	m.mu.RUnlock()
-	if a == nil || exec == nil {
+	var authCopy *Auth
+	if a != nil {
+		authCopy = a.Clone()
+	}
+	if authCopy == nil || exec == nil {
 		return nil
 	}
 	if p, ok := exec.(RequestPreparer); ok && p != nil {
-		return p.PrepareRequest(req, a)
+		return p.PrepareRequest(req, authCopy)
 	}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sdk/cliproxy/auth/conductor.go` around lines 2302 - 2313, Read-locking currently reads the live *Auth from m.auths into variable a and then unlocks, which allows concurrent mutation and causes a data race when passing a to p.PrepareRequest; instead, while holding the lock clone the auth and lookup the executor: get a := m.auths[authID], if a != nil replace it with aClone := a.Clone() (or call Clone() on the stored auth) and set exec := m.executors[executorKeyFromAuth(a)], then release the lock and use aClone when calling p.PrepareRequest(req, aClone); ensure you still check for nil on the cloned auth and executor and keep all references to the live pointer from escaping the lock.
pkg/llmproxy/translator/antigravity/openai/chat-completions/antigravity_openai_request.go (4)
435-436: 🧹 Nitpick | 🔵 Trivial
itoa helper is redundant — and its comment is misleading.
The comment claims it avoids importing strconv, but fmt is already imported. Either inline fmt.Sprintf("%d", i) at call sites (they're already using fmt throughout) or use strconv.Itoa, which is more idiomatic and avoids the fmt formatting overhead.
♻️ Proposed fix
```diff
-// itoa converts int to string without strconv import for few usages.
-func itoa(i int) string { return fmt.Sprintf("%d", i) }
```
Add to imports:
```diff
+	"strconv"
```
Replace each itoa(p) call site with strconv.Itoa(p).
🤖 Prompt for AI Agents
32-32: 🛠️ Refactor suggestion | 🟠 Major
Hardcoded default model name violates config-driven guideline.
"gemini-2.5-pro" is baked into the envelope literal. If the default model needs to change (or differs per deployment), this requires a code change rather than a config update.
Based on learnings: "Use template strings for messages instead of hardcoded messages and config-driven logic instead of code-driven".
♻️ Suggested approach
```diff
-out := []byte(`{"project":"","request":{"contents":[]},"model":"gemini-2.5-pro"}`)
+out := []byte(`{"project":"","request":{"contents":[]},"model":""}`)
+defaultModel := config.GetString("llmproxy.antigravity.default_model") // viper-backed
+out, _ = sjson.SetBytes(out, "model", defaultModel)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/llmproxy/translator/antigravity/openai/chat-completions/antigravity_openai_request.go` at line 32, The JSON envelope is using a hardcoded model name in the `out` byte literal (`"model":"gemini-2.5-pro"`), which must be driven from configuration; change creation of the envelope in antigravity_openai_request.go so the model value comes from a config/option (e.g., a `cfg.DefaultModel`, `opts.Model`, or a `GetDefaultModel()` call) instead of the literal, and construct the envelope programmatically (using a small struct and encoding/json or by templating) to inject the configured model into the `model` field while keeping `out` as the serialized result.
29-433: 🛠️ Refactor suggestion | 🟠 Major
ConvertOpenAIRequestToAntigravity far exceeds the 40-line function limit.
The function body spans ~404 lines. The coding guidelines impose a hard maximum of 40 lines per function. This single function should be decomposed into focused helpers (e.g., buildGenerationConfig, buildSystemInstruction, buildContents, buildTools).
As per coding guidelines: "Maximum function length: 40 lines".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/llmproxy/translator/antigravity/openai/chat-completions/antigravity_openai_request.go` around lines 29 - 433, ConvertOpenAIRequestToAntigravity is ~400 lines and violates the 40-line function rule; refactor it by extracting the large logical blocks into small helpers and keep the top-level function to 10–15 lines that composes them. Specifically: move temperature/top_p/top_k/max_tokens/modalities/reasoning_effort/image_config/video_config mapping into buildGenerationConfig(rawJSON []byte) ([]byte, error) (referencing the "request.generationConfig" keys and the reasoning_effort handling), move the messages -> systemInstruction and messages -> request.contents logic into buildSystemInstructionAndContents(rawJSON []byte) ([]byte, error) (use symbols tcID2Name, toolResponses, hasAntigravityParts, geminiCLIFunctionThoughtSignature, itoa), and move the tools -> request.tools logic into buildToolsNode(rawJSON []byte) ([]byte, error) (reuse util.RenameKey, common.SanitizeToolSearchForGemini and the function/tool node assembly). Then rewrite ConvertOpenAIRequestToAntigravity to create the base envelope, call these helpers to set fields on the envelope, and finally call common.AttachDefaultSafetySettings(out, "request.safetySettings"); ensure error handling/log warnings are preserved and helper names match those suggested so reviewers can find the moved logic.
277-281: ⚠️ Potential issue | 🔴 Critical
[]byte(fargs) will be base64-encoded by encoding/json, not stored as a string.
When gjson.Valid(fargs) is false, the fallback on line 280 passes []byte(fargs) to sjson.SetBytes. Because encoding/json marshals []byte as a base64 string, the stored value will be the base64 representation of the raw string bytes rather than the plain string — almost certainly not the intended behavior.
🐛 Proposed fix
```diff
-node, _ = sjson.SetBytes(node, "parts."+itoa(p)+".functionCall.args.params", []byte(fargs))
+node, _ = sjson.SetBytes(node, "parts."+itoa(p)+".functionCall.args.params", fargs)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/llmproxy/translator/antigravity/openai/chat-completions/antigravity_openai_request.go` around lines 277 - 281, The code currently calls sjson.SetBytes(node, "parts."+itoa(p)+".functionCall.args.params", []byte(fargs)) when gjson.Valid(fargs) is false, which causes encoding/json to marshal the []byte as base64; instead set the param as a JSON string (not a byte slice). Replace that call with sjson.Set(node, "parts."+itoa(p)+".functionCall.args.params", fargs) (or use sjson.SetBytes with a properly quoted JSON string like []byte(strconv.Quote(fargs))) so the stored value is the plain string; keep the gjson.Valid branch using sjson.SetRawBytes as-is.pkg/llmproxy/translator/gemini/openai/chat-completions/gemini_openai_request.go (4)
398-399: 🛠️ Refactor suggestion | 🟠 Major
Replace custom itoa with strconv.Itoa.
The comment acknowledges this exists only to avoid importing strconv. That is precisely the pattern the guideline prohibits.
♻️ Proposed fix
```diff
-// itoa converts int to string without strconv import for few usages.
-func itoa(i int) string { return fmt.Sprintf("%d", i) }
```
Add "strconv" to the import block and replace all itoa(p) call-sites with strconv.Itoa(p).
As per coding guidelines: "NEVER create custom implementations when an OSS library exists."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/llmproxy/translator/gemini/openai/chat-completions/gemini_openai_request.go` around lines 398 - 399, Replace the custom itoa helper with the standard library: remove the itoa function and add "strconv" to the import block, then replace all call sites of itoa(...) with strconv.Itoa(...); ensure no other references to itoa remain (search for "itoa(") and run go vet/build to confirm imports are cleaned up.
197-207: ⚠️ Potential issue | 🟡 Minor
Magic-number slicing on imageURL without a data: prefix guard.
imageURL[5:] assumes the URL starts with "data:" (5 bytes) and pieces[1][7:] assumes the data segment starts with "base64," (7 bytes). Neither assumption is validated; a non-data-URL or a URL with a different encoding (e.g. URL-encoded) silently produces a garbage mime/data split. The identical pattern is repeated for assistant messages at lines 246–249.
🛡️ Proposed fix (both occurrences)
```diff
-	if len(imageURL) > 5 {
-		pieces := strings.SplitN(imageURL[5:], ";", 2)
-		if len(pieces) == 2 && len(pieces[1]) > 7 {
-			mime := pieces[0]
-			data := pieces[1][7:]
+	if strings.HasPrefix(imageURL, "data:") {
+		pieces := strings.SplitN(imageURL[len("data:"):], ";", 2)
+		if len(pieces) == 2 && strings.HasPrefix(pieces[1], "base64,") {
+			mime := pieces[0]
+			data := pieces[1][len("base64,"):]
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/llmproxy/translator/gemini/openai/chat-completions/gemini_openai_request.go` around lines 197 - 207, The code slices imageURL with imageURL[5:] and pieces[1][7:] assuming a "data:" prefix and "base64," marker; update the logic around the imageURL parsing (the block that reads imageURL, builds pieces, and sets parts.+itoa(p)+.inlineData.* and .thoughtSignature) to first validate strings.HasPrefix(imageURL, "data:"), then locate the first comma (e.g. strings.Index(imageURL, ",")) to split metadata vs payload rather than fixed offsets, validate the metadata contains a mime (non-empty) and that the payload marker indicates base64 when expected (e.g. strings.Contains(metadata, "base64")), and only then assign mime and data; otherwise skip or handle as a non-data URL. Apply the same validated parsing change to the duplicate assistant-message parsing block (the other imageURL handling section).
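The validated parsing the fix describes can also be factored into one small helper shared by both occurrences; `parseDataURL` is a hypothetical name, not a function in the codebase:

```go
package main

import (
	"fmt"
	"strings"
)

// parseDataURL validates the "data:" scheme and "base64," marker before
// slicing, instead of relying on the fixed offsets imageURL[5:] and
// pieces[1][7:]. It reports ok=false for anything that is not a base64
// data URL, so callers can skip the part cleanly.
func parseDataURL(imageURL string) (mime, data string, ok bool) {
	if !strings.HasPrefix(imageURL, "data:") {
		return "", "", false
	}
	pieces := strings.SplitN(imageURL[len("data:"):], ";", 2)
	if len(pieces) != 2 || !strings.HasPrefix(pieces[1], "base64,") {
		return "", "", false
	}
	return pieces[0], pieces[1][len("base64,"):], true
}

func main() {
	fmt.Println(parseDataURL("data:image/png;base64,iVBORw0K"))
	fmt.Println(parseDataURL("https://example.com/cat.png")) // not a data URL: ok=false
}
```

Using one helper for both the user-message and assistant-message paths also removes the duplication the finding notes at lines 246–249.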
28-396: ⚠️ Potential issue | 🟠 Major
ConvertOpenAIRequestToGemini far exceeds the 40-line function length limit (~368 lines).
The function handles message role routing, image/file parsing, tool declarations, modality mapping, thinking config, and generation config — all as a monolith. This needs to be decomposed into focused helpers (e.g., buildSystemInstruction, buildContents, buildTools, buildGenerationConfig).
As per coding guidelines: "Maximum function length: 40 lines."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/llmproxy/translator/gemini/openai/chat-completions/gemini_openai_request.go` around lines 28 - 396, The ConvertOpenAIRequestToGemini function is too large and must be split into focused helpers; refactor it by extracting the major responsibilities into functions like buildGenerationConfig (handle reasoning_effort -> thinkingConfig, temperature/top_p/top_k, candidateCount, modalities, image_config/video_config), buildSystemInstruction (system/developer -> system_instruction parts logic), buildContents (messages -> contents including user/assistant/tool parsing, image_url/file handling, function/tool call parts, tcID2Name and toolResponses logic, hasGeminiParts usage), and buildTools (tools -> functionDeclarations, googleSearch/codeExecution/urlContext passthrough); each helper should accept the parsed rawJSON (or gjson.Result) and return the JSON fragment ([]byte or sjson-ready structure) and be called from ConvertOpenAIRequestToGemini to assemble the final out, preserving existing symbols like geminiFunctionThoughtSignature, tcID2Name, toolResponses, hasGeminiParts, and common.NormalizeOpenAIFunctionSchemaForGemini so behavior remains unchanged.
290-295: ⚠️ Potential issue | 🔴 Critical

Use `SetRawBytes` instead of `SetBytes` for embedding JSON responses.

`resp` contains raw JSON from `c.Raw` (line 148). When passed as `[]byte` to `sjson.SetBytes`, it is treated as a JSON string, resulting in escaping and quoting (`{"result":"ok"}` becomes `"{\"result\":\"ok\"}"`). This corrupts the tool response payload. Use `SetRawBytes` to insert the JSON directly without stringification.

Fix

```diff
- toolNode, _ = sjson.SetBytes(toolNode, "parts."+itoa(pp)+".functionResponse.response.result", []byte(resp))
+ toolNode, _ = sjson.SetRawBytes(toolNode, "parts."+itoa(pp)+".functionResponse.response.result", []byte(resp))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/llmproxy/translator/gemini/openai/chat-completions/gemini_openai_request.go` around lines 290 - 295, The code is currently using sjson.SetBytes to insert toolResponses into toolNode which causes JSON to be string-escaped; change the call in the block that handles toolResponses (variables: resp, toolResponses, toolNode, pp, itoa) to use sjson.SetRawBytes instead of sjson.SetBytes and pass the raw byte slice (e.g., []byte(resp) or ensure resp is a []byte) so the JSON from c.Raw is inserted as raw JSON into the path "parts."+itoa(pp)+".functionResponse.response.result" (keep the resp=="" => "{}" fallback).

sdk/auth/kiro.go (3)
153-157: 🛠️ Refactor suggestion | 🟠 Major
`fmt.Printf`/`fmt.Println` used instead of zerolog.

Success output at Lines 154–156, 236–238, and 322–324 uses `fmt.Printf`/`fmt.Println` directly. This is a guidelines violation throughout the file.

Replace with structured zerolog calls (e.g., `log.Info().Str("account", tokenData.Email).Msg("Kiro authentication completed")`).

As per coding guidelines: "Use zerolog for logging (NOT fmt.Print)".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sdk/auth/kiro.go` around lines 153 - 157, Replace the direct fmt.Printf/fmt.Println success prints that reference tokenData.Email and the literal "Kiro authentication completed successfully" with structured zerolog calls: use log.Info().Str("account", tokenData.Email).Msg("Kiro authentication completed") when tokenData.Email is non-empty, and log.Info().Msg("Kiro authentication completed") when empty; update all occurrences (the print statements that check tokenData.Email and the identical messages at the other two locations) to follow this pattern to comply with the zerolog logging guideline.
101-111: ⚠️ Potential issue | 🟠 Major

`client_secret` stored as plaintext in `Metadata` (and written to disk).

`"client_secret": tokenData.ClientSecret` appears in the metadata map in `createAuthRecord` (Line 110), `LoginWithAuthCode` (Line 223), and `ImportFromKiroIDE` (Line 304). This map is serialised to a JSON file on disk, leaving the OAuth client secret in cleartext in the credential store.

At minimum, consider omitting it from persisted metadata and reloading it from the device registration cache (`loadDeviceRegistrationCredentials`) at refresh time — which is already the fallback path for imported tokens (Lines 351-356).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sdk/auth/kiro.go` around lines 101 - 111, The metadata map currently includes the OAuth client secret (key "client_secret") in createAuthRecord and the callers LoginWithAuthCode and ImportFromKiroIDE which causes it to be written to disk; remove the "client_secret": tokenData.ClientSecret entry from the metadata map in these functions so the secret is not persisted, and update any code paths that expect metadata.ClientSecret during refresh to instead call loadDeviceRegistrationCredentials as the fallback (reuse the existing fallback logic used for imported tokens) so refreshes can retrieve the secret from the device registration cache when needed.
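A minimal sketch of the suggested change, with hypothetical helper names: `buildPersistedMetadata` and `resolveClientSecret` stand in for the real metadata construction in `createAuthRecord` and for `loadDeviceRegistrationCredentials`, and are not the project's actual signatures:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// buildPersistedMetadata builds the map that gets serialised to the
// credential file. The client secret is deliberately ignored so it never
// reaches disk.
func buildPersistedMetadata(accessToken, refreshToken, clientID, clientSecret string) map[string]string {
	_ = clientSecret // intentionally not persisted
	return map[string]string{
		"access_token":  accessToken,
		"refresh_token": refreshToken,
		"client_id":     clientID,
	}
}

// resolveClientSecret stands in for loadDeviceRegistrationCredentials:
// at refresh time the secret is re-read from the device registration cache.
func resolveClientSecret(cache map[string]string) (string, bool) {
	s, ok := cache["client_secret"]
	return s, ok
}

func main() {
	meta := buildPersistedMetadata("at", "rt", "cid", "super-secret")
	out, _ := json.Marshal(meta)
	fmt.Println(string(out)) // the secret never appears in the persisted JSON
}
```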
331-410: 🛠️ Refactor suggestion | 🟠 Major
Refresh(and several other functions) exceed the 40-line maximum.
Refreshis ~79 lines;createAuthRecordis ~95 lines;LoginWithAuthCodeis ~61 lines;ImportFromKiroIDEis ~69 lines. All violate the project's function-length limit. SinceRefreshis directly modified in this PR, it is the most immediate candidate for extraction.Suggested split for
Refresh: extract the credential-loading/resolution block (Lines 336-356) into a helper, and the post-refresh metadata-update block (Lines 390-407) into another.As per coding guidelines: "Maximum function length: 40 lines".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sdk/auth/kiro.go` around lines 331 - 410, The Refresh method in KiroAuthenticator is over the 40-line limit; extract two helper functions to shorten it: (1) create resolveClientCredentials(clientID, clientSecret, clientIDHash string) (clientIDOut, clientSecretOut string, err error) that encapsulates the device-registration lookup and IDC credential validation (uses loadDeviceRegistrationCredentials and checks authMethod == "idc"), and (2) create applyTokenDataToAuth(auth *coreauth.Auth, tokenData *kiroauth.KiroTokenData, clientID, clientSecret string) *coreauth.Auth that clones auth, parses ExpiresAt, sets UpdatedAt/LastRefreshedAt/Metadata keys (access_token, refresh_token, expires_at, last_refresh, client_id, client_secret) and NextRefreshAfter. Replace the credential-resolution block and the post-refresh metadata-update block in Refresh with calls to these two helpers (passing startURL/region/authMethod as needed) so Refresh remains under 40 lines while preserving existing behavior and error handling.

sdk/api/handlers/gemini/gemini_handlers.go (1)
110-116: ⚠️ Potential issue | 🔴 Critical

`targetModel` is a direct reference to the registry entry; mutating `targetModel["name"]` corrupts shared state.

In `GeminiGetHandler`, `targetModel` is assigned directly from `availableModels` (line 105), which is a slice from the global registry. The mutation at line 113 (`targetModel["name"] = "models/" + name`) writes back into the registry's map, permanently changing the stored name for all future requests.

In `GeminiModels`, the copy (`normalizedModel := make(map[string]any, ...)`) avoids this — the same defensive copy should be applied here.

🐛 Proposed fix

```diff
 if targetModel != nil {
+	// Defensive copy to avoid mutating the registry entry
+	safeCopy := make(map[string]any, len(targetModel))
+	for k, v := range targetModel {
+		safeCopy[k] = v
+	}
+	targetModel = safeCopy
 	if name, ok := targetModel["name"].(string); ok && name != "" && !strings.HasPrefix(name, "models/") {
 		targetModel["name"] = "models/" + name
 	}
-	c.JSON(http.StatusOK, filterGeminiModelFields(targetModel))
+	c.JSON(http.StatusOK, filterGeminiModelFields(targetModel))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sdk/api/handlers/gemini/gemini_handlers.go` around lines 110 - 116, In GeminiGetHandler the handler mutates targetModel (sourced from availableModels) which corrupts the global registry; create a defensive copy (like normalizedModel in GeminiModels) before altering the "name" field and before passing it to filterGeminiModelFields/ c.JSON so the global availableModels map is never modified; locate GeminiGetHandler, availableModels, targetModel and replace direct mutation with a shallow copy of the map, update the copy's "name" if needed, then return the filtered copy.

pkg/llmproxy/api/handlers/management/config_basic.go (1)
31-31: ⚠️ Potential issue | 🔴 Critical

Return the actual config instead of creating a zero-valued pointer.

`new(*h.cfg)` allocates a new zero-valued `*config.Config`, not the existing configuration. This causes `GetConfig` to respond with an empty config object regardless of the actual settings. All other getter methods in this file return `h.cfg` values directly (e.g., `h.cfg.Debug`, `h.cfg.UsageStatisticsEnabled`).

🐛 Fix

```diff
- c.JSON(200, new(*h.cfg))
+ c.JSON(200, h.cfg)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/llmproxy/api/handlers/management/config_basic.go` at line 31, The handler currently returns a newly allocated zero-valued pointer via new(*h.cfg) instead of the real configuration; update the response to return the actual config object (h.cfg) in the c.JSON call so GetConfig returns the real settings (replace new(*h.cfg) with h.cfg in the c.JSON(200, ...) call).

pkg/llmproxy/translator/gemini-cli/openai/chat-completions/gemini-cli_openai_request.go (6)
390-392: 🛠️ Refactor suggestion | 🟠 Major

Remove custom `itoa` — use `strconv.Itoa` instead.

`itoa` is a trivial wrapper around `fmt.Sprintf("%d", i)`, which is slower than `strconv.Itoa` and duplicates standard-library functionality. As per coding guidelines, custom implementations must not be created when an OSS/stdlib alternative exists.

♻️ Proposed fix

```diff
-// itoa converts int to string without strconv import for few usages.
-func itoa(i int) string { return fmt.Sprintf("%d", i) }
```

Add `"strconv"` to the import block and replace all `itoa(p)`/`itoa(pp)` call sites with `strconv.Itoa(p)`/`strconv.Itoa(pp)`. If `fmt` is then unused, remove it from the imports.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/llmproxy/translator/gemini-cli/openai/chat-completions/gemini-cli_openai_request.go` around lines 390 - 392, Remove the custom itoa function and switch to strconv.Itoa: add "strconv" to the import block, replace all calls to itoa(p) / itoa(pp) with strconv.Itoa(p) / strconv.Itoa(pp), delete the itoa(i int) function, and if fmt becomes unused after these changes remove fmt from the imports (or keep it only if still used elsewhere).
390-392: 🛠️ Refactor suggestion | 🟠 Major

Replace custom `itoa` with `strconv.Itoa`.

`itoa` is a thin wrapper around `fmt.Sprintf("%d", i)` — a standard-library function already covers this. As per coding guidelines, custom implementations must not be created when an OSS/stdlib alternative exists.

♻️ Proposed fix

```diff
+import "strconv"
 ...
-// itoa converts int to string without strconv import for few usages.
-func itoa(i int) string { return fmt.Sprintf("%d", i) }
```

Replace all `itoa(p)` call sites with `strconv.Itoa(p)`. `fmt` can then be removed from the import block if no other usages remain.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/llmproxy/translator/gemini-cli/openai/chat-completions/gemini-cli_openai_request.go` around lines 390 - 392, The file defines a custom itoa(i int) using fmt.Sprintf; replace all uses of itoa(...) (e.g., itoa(p)) with strconv.Itoa(...) and remove the itoa function; then remove the fmt import if there are no other fmt usages to rely on the standard strconv implementation instead.
28-388: 🛠️ Refactor suggestion | 🟠 Major
`ConvertOpenAIRequestToGeminiCLI` far exceeds the 40-line function limit — decompose into helpers.

The function body spans ~360 lines. Extract discrete translation stages (e.g., `applyThinkingConfig`, `applyGenerationConfig`, `buildSystemInstruction`, `buildContents`, `buildTools`) into focused, testable helpers. As per coding guidelines, maximum function length is 40 lines.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/llmproxy/translator/gemini-cli/openai/chat-completions/gemini-cli_openai_request.go` around lines 28 - 388, ConvertOpenAIRequestToGeminiCLI is too large; break it into smaller helpers: extract the thinkingConfig mapping into applyThinkingConfig(modelName string, rawJSON []byte, out []byte) []byte, temperature/top_p/top_k/candidateCount/modalities/image/video mapping into applyGenerationConfig(rawJSON []byte, out []byte) []byte, the messages => systemInstruction/contents logic into buildContents(rawJSON []byte, out []byte) []byte (internally using helper functions for tool ID/name mapping and toolResponses), and the tools => request.tools construction into buildTools(rawJSON []byte, out []byte) []byte; refactor ConvertOpenAIRequestToGeminiCLI to call these helpers in sequence and return common.AttachDefaultSafetySettings(out, "request.safetySettings") to preserve existing behavior and side-effects, and add or update unit tests for each new helper to cover the previous code paths.
28-388: 🛠️ Refactor suggestion | 🟠 Major
`ConvertOpenAIRequestToGeminiCLI` vastly exceeds the 40-line function limit — decompose into helpers.

The function spans ~360 lines. Extract each translation stage into a focused, independently testable helper, for example:

- `applyThinkingConfig(out []byte, rawJSON []byte) []byte`
- `applyGenerationConfig(out []byte, rawJSON []byte) []byte`
- `buildSystemInstruction(out []byte, messages []gjson.Result) []byte`
- `buildContents(out []byte, messages []gjson.Result, tcID2Name, toolResponses map[string]string) []byte`
- `buildTools(out []byte, rawJSON []byte) []byte`

As per coding guidelines, the maximum function length is 40 lines.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/llmproxy/translator/gemini-cli/openai/chat-completions/gemini-cli_openai_request.go` around lines 28 - 388, ConvertOpenAIRequestToGeminiCLI is far too large; split its stages into small helpers and have the top-level function compose them. Extract the reasoning_effort mapping into applyThinkingConfig(out []byte, rawJSON []byte) []byte, the temperature/top_p/top_k/candidate/modalities/image/video mapping into applyGenerationConfig(out []byte, rawJSON []byte) []byte, the messages->systemInstruction/tool responses logic into buildSystemInstruction(out []byte, messages []gjson.Result) []byte and buildContents(out []byte, messages []gjson.Result, tcID2Name map[string]string, toolResponses map[string]string) []byte (preserve tcID2Name/toolResponses construction), and the tools processing into buildTools(out []byte, rawJSON []byte) []byte; update ConvertOpenAIRequestToGeminiCLI to call these helpers in order and return common.AttachDefaultSafetySettings(out, "request.safetySettings") so behavior remains identical while each function stays under ~40 lines and remains unit-testable.
284-294: ⚠️ Potential issue | 🔴 Critical

Use `sjson.SetRawBytes` for tool response JSON to prevent base64 encoding.

`resp` is a raw JSON string from `c.Raw` (line 148). Passing `[]byte(resp)` to `sjson.SetBytes` causes the byte slice to be treated as a Go `[]byte` value and base64-encoded in the output, corrupting the tool-response payload. Use `sjson.SetRawBytes` instead, consistent with the correct pattern at line 267 for function arguments.

Fix

```diff
- toolNode, _ = sjson.SetBytes(toolNode, "parts."+itoa(pp)+".functionResponse.response.result", []byte(resp))
+ toolNode, _ = sjson.SetRawBytes(toolNode, "parts."+itoa(pp)+".functionResponse.response.result", []byte(resp))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/llmproxy/translator/gemini-cli/openai/chat-completions/gemini-cli_openai_request.go` around lines 284 - 294, The code sets tool response JSON using sjson.SetBytes which base64-encodes the []byte and corrupts the JSON; change the call that sets "parts."+itoa(pp)+".functionResponse.response.result" to use sjson.SetRawBytes (like the earlier pattern for function arguments) so the raw JSON string from toolResponses[fid] is embedded as JSON, not base64-encoded; update the call that currently passes []byte(resp) and leave the surrounding logic with toolNode, pp and out unchanged.
284-294: ⚠️ Potential issue | 🔴 Critical

Replace `sjson.SetBytes` with `sjson.SetRawBytes` for injecting raw JSON content.

`resp` is populated from `c.Raw` (line 148), a gjson raw JSON token containing the actual JSON structure. Using `sjson.SetBytes` with `[]byte(resp)` treats it as a string value and escapes/quotes it, embedding the JSON text as a literal string within the result (e.g., `"{\"status\":\"ok\"}"` instead of `{"status":"ok"}`). The tool response sent to Gemini CLI will be a mangled escaped string rather than structured JSON.

Use `sjson.SetRawBytes`, which injects the bytes as raw JSON verbatim. Line 267 already uses the correct pattern for `fargs` (another gjson raw JSON value).

Fix

```diff
- toolNode, _ = sjson.SetBytes(toolNode, "parts."+itoa(pp)+".functionResponse.response.result", []byte(resp))
+ toolNode, _ = sjson.SetRawBytes(toolNode, "parts."+itoa(pp)+".functionResponse.response.result", []byte(resp))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/llmproxy/translator/gemini-cli/openai/chat-completions/gemini-cli_openai_request.go` around lines 284 - 294, The code currently uses sjson.SetBytes when writing the tool response into toolNode (see variable resp and the assignment to "parts."+itoa(pp)+".functionResponse.response.result"), which causes raw JSON in resp to be escaped as a string; change that call to sjson.SetRawBytes so resp (derived from c.Raw) is injected as raw JSON; update the SetBytes call that sets the functionResponse.response.result to use sjson.SetRawBytes(toolNode, "parts."+itoa(pp)+".functionResponse.response.result", []byte(resp)) and ensure the surrounding logic that later writes out with sjson.SetRawBytes(out, "request.contents.-1", toolNode) remains consistent.
ℹ️ Review info
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (70)
README.md, Taskfile.yml, cmd/server/main.go, cmd/server/main_kiro_flags_test.go, internal/api/handlers/management/auth_files.go, internal/api/modules/amp/proxy.go, internal/auth/copilot/copilot_auth.go, internal/auth/kiro/aws.go, internal/auth/kiro/aws_auth.go, internal/auth/kiro/codewhisperer_client.go, internal/auth/kiro/cooldown.go, internal/auth/kiro/fingerprint.go, internal/auth/kiro/jitter.go, internal/auth/kiro/metrics.go, internal/auth/kiro/oauth.go, internal/auth/kiro/oauth_web.go, internal/auth/kiro/protocol_handler.go, internal/auth/kiro/social_auth.go, internal/auth/kiro/sso_oidc.go, internal/browser/browser.go, internal/translator/codex/claude/codex_claude_response.go, internal/translator/codex/openai/responses/codex_openai-responses_request_test.go, internal/translator/kiro/common/utils.go, internal/translator/kiro/openai/init.go, internal/translator/kiro/openai/kiro_openai_response.go, internal/translator/kiro/openai/kiro_openai_stream.go, pkg/llmproxy/access/reconcile.go, pkg/llmproxy/api/handlers/management/config_basic.go, pkg/llmproxy/api/server.go, pkg/llmproxy/cmd/config_cast.go, pkg/llmproxy/managementasset/updater.go, pkg/llmproxy/registry/registry_coverage_test.go, pkg/llmproxy/translator/antigravity/claude/antigravity_claude_request.go, pkg/llmproxy/translator/antigravity/claude/init.go, pkg/llmproxy/translator/antigravity/gemini/init.go, pkg/llmproxy/translator/antigravity/openai/chat-completions/antigravity_openai_request.go, pkg/llmproxy/translator/antigravity/openai/chat-completions/init.go, pkg/llmproxy/translator/antigravity/openai/responses/init.go, pkg/llmproxy/translator/claude/gemini-cli/init.go, pkg/llmproxy/translator/claude/gemini/init.go, pkg/llmproxy/translator/claude/openai/responses/init.go, pkg/llmproxy/translator/codex/claude/init.go, pkg/llmproxy/translator/codex/gemini-cli/init.go, pkg/llmproxy/translator/codex/gemini/init.go, pkg/llmproxy/translator/codex/openai/responses/init.go, pkg/llmproxy/translator/gemini-cli/claude/init.go, pkg/llmproxy/translator/gemini-cli/gemini/init.go, pkg/llmproxy/translator/gemini-cli/openai/chat-completions/gemini-cli_openai_request.go, pkg/llmproxy/translator/gemini-cli/openai/responses/init.go, pkg/llmproxy/translator/gemini/claude/init.go, pkg/llmproxy/translator/gemini/gemini-cli/init.go, pkg/llmproxy/translator/gemini/gemini/init.go, pkg/llmproxy/translator/gemini/openai/chat-completions/gemini_openai_request.go, pkg/llmproxy/translator/gemini/openai/responses/init.go, pkg/llmproxy/translator/kiro/claude/init.go, pkg/llmproxy/translator/kiro/openai/init.go, pkg/llmproxy/translator/openai/claude/init.go, pkg/llmproxy/translator/openai/gemini-cli/init.go, pkg/llmproxy/translator/openai/gemini/init.go, pkg/llmproxy/translator/openai/openai/responses/init.go, scripts/cliproxy-service-test.sh, scripts/cliproxy-service.sh, sdk/api/handlers/gemini/gemini_handlers.go, sdk/api/handlers/handlers.go, sdk/auth/antigravity.go, sdk/auth/filestore.go, sdk/auth/kilo.go, sdk/auth/kiro.go, sdk/cliproxy/auth/conductor.go, sdk/cliproxy/service.go
```diff
 // Priority 2: For IDC, use startUrl identifier
 if authMethod == "idc" && tokenData.StartURL != "" {
 	identifier := ExtractIDCIdentifier(tokenData.StartURL)
 	if identifier != "" {
-		return fmt.Sprintf("kiro-%s-%s-%05d.json", authMethod, identifier, seq)
+		return fmt.Sprintf("kiro-%s-%s.json", authMethod, identifier)
 	}
 }

-// Priority 3: Fallback to authMethod only with sequence
-return fmt.Sprintf("kiro-%s-%05d.json", authMethod, seq)
+// Priority 3: Fallback to authMethod only
+return fmt.Sprintf("kiro-%s.json", authMethod)
```
Token filename collisions when email is unavailable and multiple accounts share the same IDC startURL or authMethod.
The comment (Line 492) says the sequence suffix was only needed "when email is unavailable", yet both the IDC-identifier path (Line 513) and the fallback path (Line 518) have now dropped it entirely. Two users authenticating via the same IDC tenant (startURL) without email will produce the same filename (kiro-idc-d-1234567890.json), causing one token to silently overwrite the other. The fallback path is even broader — all accounts with the same authMethod and no email/startURL collide on a single file.
🐛 Proposed fix — restore a stable disambiguator for the non-email paths
// Priority 2: For IDC, use startUrl identifier
if authMethod == "idc" && tokenData.StartURL != "" {
identifier := ExtractIDCIdentifier(tokenData.StartURL)
if identifier != "" {
- return fmt.Sprintf("kiro-%s-%s.json", authMethod, identifier)
+ // Stable hash of startURL avoids collisions when multiple IDC
+ // tenants share the same subdomain prefix.
+ return fmt.Sprintf("kiro-%s-%s.json", authMethod, identifier)
+ // NOTE: If multi-user-per-tenant support is needed, re-add a
+ // sequence suffix here (e.g. -1, -2) or use a profile identifier.
}
}
- // Priority 3: Fallback to authMethod only
- return fmt.Sprintf("kiro-%s.json", authMethod)
+ // Priority 3: Fallback to authMethod only — risk of collision for
+ // multiple accounts; restore sequence suffix if multi-account needed.
+ return fmt.Sprintf("kiro-%s.json", authMethod)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/auth/kiro/aws.go` around lines 509 - 518, The filename generation
currently drops the "sequence" disambiguator causing token files to collide when
email is absent; update the IDC branch and the fallback to append a stable
disambiguator derived from the token (instead of removing it). Implement a
helper like computeTokenDisambiguator(tokenData) that returns a short stable
string (prefer an existing stable id field such as tokenData.Sub / AccountID if
available, or a short hash of StartURL+authMethod+tokenData.Sub), then change
the two returns in the function that use ExtractIDCIdentifier,
tokenData.StartURL and authMethod to include that disambiguator (e.g.,
fmt.Sprintf("kiro-%s-%s-%s.json", authMethod, identifierOrEmpty, disambiguator)
and fmt.Sprintf("kiro-%s-%s.json", authMethod, disambiguator)). Ensure the
helper is deterministic and short to avoid filename length issues and use it
wherever the code previously relied on the omitted "sequence" suffix.
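One way to implement the disambiguator described above is a short, deterministic hash over account-distinguishing fields, so two tokens sharing the same authMethod and startURL no longer collide on one filename. The field choice below (startURL + authMethod + subject) is an assumption for illustration, not the project's actual scheme:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// tokenDisambiguator returns a short, stable suffix derived from fields
// that distinguish accounts. Deterministic input -> deterministic output,
// so refreshing a token always maps back to the same file.
func tokenDisambiguator(startURL, authMethod, subject string) string {
	sum := sha256.Sum256([]byte(startURL + "|" + authMethod + "|" + subject))
	return hex.EncodeToString(sum[:4]) // 8 hex chars keep filenames short
}

func main() {
	a := tokenDisambiguator("https://d-1234567890.awsapps.com/start", "idc", "user-a")
	b := tokenDisambiguator("https://d-1234567890.awsapps.com/start", "idc", "user-b")
	fmt.Printf("kiro-idc-%s.json\n", a)
	fmt.Printf("kiro-idc-%s.json\n", b) // different file, same IDC tenant
}
```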
```go
configaccess.Register(&sdkconfig.SDKConfig{
	APIKeys: append([]string(nil), newCfg.APIKeys...),
})
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find the SDKConfig struct definition
rg -n 'type SDKConfig struct' --type go -A 20
Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 3698
🏁 Script executed:
#!/bin/bash
# Find the reconcile.go file and examine the context around lines 89-91
fd -e go -path '*/reconcile.go' -exec cat -n {} +
Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 243
🏁 Script executed:
#!/bin/bash
# Find server.go and the castToSDKConfig function
fd -e go -path '*/server.go' -exec cat -n {} + | head -200
Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 243
🏁 Script executed:
#!/bin/bash
# Search for all SDKConfig field accesses in the config_access package
rg -n 'SDKConfig\.' --type go -C 3
Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 57
🏁 Script executed:
#!/bin/bash
# Find reconcile.go file
fd reconcile.go
Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 136
🏁 Script executed:
#!/bin/bash
# Find server.go and castToSDKConfig
fd server.go
Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 354
🏁 Script executed:
#!/bin/bash
# Search for configaccess.Register calls and context
rg -n 'configaccess.Register' -C 5
Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 3556
🏁 Script executed:
#!/bin/bash
# Search for where SDKConfig fields are actually accessed in configaccess package
rg -n 'SDKConfig' pkg/llmproxy/access/ -C 3
Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 1691
🏁 Script executed:
#!/bin/bash
# Look for ProxyURL usage specifically
rg -n 'ProxyURL' --type go -C 2
Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 50388
🏁 Script executed:
#!/bin/bash
# Find the config_access provider implementation to see which fields it actually uses
cat -n pkg/llmproxy/access/config_access/provider.go
Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 4423
🏁 Script executed:
#!/bin/bash
# Look at the internal version for comparison
cat -n internal/access/reconcile.go | head -100
Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 3912
🏁 Script executed:
#!/bin/bash
# Check if there are any other usages of SDKConfig fields in config_access package
rg -n 'ProxyURL|ForceModelPrefix|RequestLog|PassthroughHeaders|Streaming' pkg/llmproxy/access/config_access/
Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 57
🏁 Script executed:
#!/bin/bash
# Look at the pkg/llmproxy version of reconcile.go around lines 89-91
cat -n pkg/llmproxy/access/reconcile.go | sed -n '80,100p'
Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 1079
🏁 Script executed:
#!/bin/bash
# Find castToSDKConfig in server.go files
rg -n 'castToSDKConfig' -A 15 --type go
Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 5746
🏁 Script executed:
#!/bin/bash
# Check if there are other places where reconcile is called
rg -n 'ApplyAccessProviders\|ReconcileProviders' --type go -B 2 -A 2
Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 57
Pass the full SDKConfig to configaccess.Register() for consistency with the internal version.
While configaccess.Register() currently only reads APIKeys (other fields are unused), the explicit partial construction creates inconsistency with internal/access/reconcile.go line 88, which passes the full config. If the function's usage later expands to other fields, this code would silently miss those updates. For maintainability, pass &newCfg.SDKConfig directly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/llmproxy/access/reconcile.go` around lines 89 - 91, Replace the partial
construction of SDKConfig with passing the full SDKConfig object to keep
behavior consistent with internal/access/reconcile.go: call
configaccess.Register(&newCfg.SDKConfig) instead of building a new struct with
only APIKeys; this ensures configaccess.Register receives all fields from
newCfg.SDKConfig (referenced symbols: configaccess.Register, newCfg, SDKConfig).
```go
func castToSDKConfig(cfg *config.SDKConfig) *sdkconfig.SDKConfig {
	return (*sdkconfig.SDKConfig)(unsafe.Pointer(cfg))
}
```
🧩 Analysis chain
🏁 Script executed:
# Find both SDKConfig struct definitions
ast-grep --pattern 'type SDKConfig struct { $$$ }'
Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 4608
🏁 Script executed:
# Also check the imports to understand which package is which
rg "package config|package sdkconfig" pkg/llmproxy -B 2 -A 2
Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 2724
🏁 Script executed:
# Check reconcile.go to understand the field-by-field copy mentioned
cat -n pkg/llmproxy/access/reconcile.go | head -100
Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 3805
🏁 Script executed:
# Check server.go imports to identify which config/sdkconfig packages are used
head -50 pkg/llmproxy/api/server.go | grep -E "import|config|sdkconfig"
Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 277
🏁 Script executed:
# Also verify the full definition of both SDKConfig structs with field alignment awareness
cat -n internal/config/sdk_config.go | sed -n '1,40p'
Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 2459
🏁 Script executed:
# And the pkg/llmproxy version
cat -n pkg/llmproxy/config/sdk_types.go | sed -n '1,40p'
Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 2460
Remove unsafe.Pointer cast and replace with explicit field copy.
Per Go's unsafe.Pointer rules, the conversion is valid only "provided that the two types share an equivalent memory layout." While both config.SDKConfig and sdkconfig.SDKConfig appear to have identical field layouts today, this cast creates two problems:
- Aliasing hazard: Mutations through either pointer mutate the same underlying memory, posing a concurrency risk on a concurrent HTTP server.
- Brittleness: There is no mechanism enforcing that the two struct definitions stay in sync. Future changes to one struct could silently break the cast without compile-time detection.
Replace with an explicit field-by-field copy, consistent with the approach used elsewhere in the codebase:
🔒 Proposed fix
-func castToSDKConfig(cfg *config.SDKConfig) *sdkconfig.SDKConfig {
- return (*sdkconfig.SDKConfig)(unsafe.Pointer(cfg))
-}
+func toSDKConfig(cfg *config.SDKConfig) *sdkconfig.SDKConfig {
+ if cfg == nil {
+ return nil
+ }
+ return &sdkconfig.SDKConfig{
+ ProxyURL: cfg.ProxyURL,
+ ForceModelPrefix: cfg.ForceModelPrefix,
+ RequestLog: cfg.RequestLog,
+ APIKeys: append([]string(nil), cfg.APIKeys...),
+ PassthroughHeaders: cfg.PassthroughHeaders,
+ Streaming: cfg.Streaming,
+ NonStreamKeepAliveInterval: cfg.NonStreamKeepAliveInterval,
+ }
+}

Also remove the `"unsafe"` import.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-func castToSDKConfig(cfg *config.SDKConfig) *sdkconfig.SDKConfig {
-	return (*sdkconfig.SDKConfig)(unsafe.Pointer(cfg))
-}
+func toSDKConfig(cfg *config.SDKConfig) *sdkconfig.SDKConfig {
+	if cfg == nil {
+		return nil
+	}
+	return &sdkconfig.SDKConfig{
+		ProxyURL:                   cfg.ProxyURL,
+		ForceModelPrefix:           cfg.ForceModelPrefix,
+		RequestLog:                 cfg.RequestLog,
+		APIKeys:                    append([]string(nil), cfg.APIKeys...),
+		PassthroughHeaders:         cfg.PassthroughHeaders,
+		Streaming:                  cfg.Streaming,
+		NonStreamKeepAliveInterval: cfg.NonStreamKeepAliveInterval,
+	}
+}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/llmproxy/api/server.go` around lines 70 - 72, Replace the unsafe.Pointer
cast in castToSDKConfig by creating and returning a new sdkconfig.SDKConfig
populated via explicit field-by-field copies from the input *config.SDKConfig
(mirror every exported field from config.SDKConfig to sdkconfig.SDKConfig), and
remove the "unsafe" import; update castToSDKConfig to allocate a new
sdkconfig.SDKConfig, copy each corresponding field, and return its pointer to
avoid aliasing/brittleness between the two struct types.
```go
"github.com/router-for-me/CLIProxyAPI/v6/pkg/llmproxy/misc"
"github.com/router-for-me/CLIProxyAPI/v6/pkg/llmproxy/translator/gemini/common"
"github.com/router-for-me/CLIProxyAPI/v6/pkg/llmproxy/util"
log "github.com/sirupsen/logrus"
```
🛠️ Refactor suggestion | 🟠 Major
Replace logrus with zerolog per coding guidelines.
logrus is imported and used throughout this file, but the project mandates zerolog for all logging.
As per coding guidelines: "Use zerolog for logging (NOT fmt.Print)" — logrus is not the approved logger.
♻️ Proposed import swap
-log "github.com/sirupsen/logrus"
+log "github.com/rs/zerolog/log"

All log.Warnf(...) call sites must then be updated to use zerolog's fluent API, e.g.:
-log.Warnf("Failed to rename parameters for tool '%s': %v", fn.Get("name").String(), errRename)
+log.Warn().Str("tool", fn.Get("name").String()).Err(errRename).Msg("failed to rename parameters for tool")

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@pkg/llmproxy/translator/antigravity/openai/chat-completions/antigravity_openai_request.go`
at line 12, Replace the logrus import with zerolog by swapping
"github.com/sirupsen/logrus" to "github.com/rs/zerolog/log" and update all
log.Warnf(...) call sites in this file (e.g., in functions that currently call
log.Warnf) to zerolog's fluent API like log.Warn().Msgf("...%s...", args...),
converting any structured fields to zerolog methods (e.g.,
log.Warn().Err(err).Msg("...") or log.Warn().Str("key", val).Msgf(...)) so the
file uses zerolog consistently.
	"github.com/router-for-me/CLIProxyAPI/v6/pkg/llmproxy/misc"
	"github.com/router-for-me/CLIProxyAPI/v6/pkg/llmproxy/translator/gemini/common"
	log "github.com/sirupsen/logrus"
🛠️ Refactor suggestion | 🟠 Major
Replace logrus with zerolog throughout this file.
logrus is imported at line 11 and used at lines 217, 330, 343, 353, and 363. As per coding guidelines, zerolog must be used for logging (not logrus or fmt.Print).
♻️ Proposed import swap
- log "github.com/sirupsen/logrus"
+ "github.com/rs/zerolog/log"

Update each call site from log.Warnf("msg %v", arg) to log.Warn().Msgf("msg %v", arg) (or use structured fields via log.Warn().Str("key", val).Msg("msg") for better observability).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@pkg/llmproxy/translator/gemini-cli/openai/chat-completions/gemini-cli_openai_request.go`
at line 11, Replace the logrus import with zerolog and update all logging call
sites that use the log alias in this file: change the import
"github.com/sirupsen/logrus" to use zerolog (e.g., "github.com/rs/zerolog/log"),
then replace calls like log.Warnf("...%v", v) with log.Warn().Msgf("...%v", v)
(or use structured calls like log.Warn().Str("key", val).Msg("...")), and
likewise convert any log.Infof/log.Errorf etc. to log.Info()/log.Error().Msgf or
structured Msg variants; also replace any fmt.Print usage with appropriate
zerolog calls so all logging in this file uses zerolog's API.
🛠️ Refactor suggestion | 🟠 Major
Replace logrus with zerolog throughout this file.
Line 11 imports logrus and it is used at lines 217, 330, 343, 353, and 363. The project mandates zerolog for all logging. As per coding guidelines, use zerolog for logging (NOT fmt.Print), and by extension not logrus.
♻️ Proposed fix (import swap)
- log "github.com/sirupsen/logrus"
+ "github.com/rs/zerolog/log"

Then update call sites from log.Warnf("...", args...) to log.Warn().Msgf("...", args...) (or log.Warn().Str(...).Msg(...) for structured fields).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@pkg/llmproxy/translator/gemini-cli/openai/chat-completions/gemini-cli_openai_request.go`
at line 11, Replace the logrus import alias (log "github.com/sirupsen/logrus")
with the zerolog logger import (github.com/rs/zerolog/log) and update every
callsite that uses logrus formatting methods to zerolog idioms: change
log.Warnf(...)/log.Infof(...)/log.Errorf(...) to
log.Warn().Msgf(...)/log.Info().Msgf(...)/log.Error().Msgf(...), or convert to
structured logging using log.Warn().Str("key", val).Msg("message") (apply
similarly for other levels) for the log occurrences in this file (the import and
all uses of the log identifier).
if authMethod == "idc" && (clientID == "" || clientSecret == "") {
	return nil, fmt.Errorf("missing idc client credentials for %s; re-authenticate with --kiro-aws-login", auth.ID)
}
auth.ID in the error message leaks PII.
auth.ID is set to a filename derived from the user's email via SanitizeEmailForFilename (e.g., kiro-idc-user-at-example-com.json). Embedding it in an error string that surfaces to callers, logs, or UI violates PII hygiene — the error context can identify the user.
Use a non-identifying token (e.g., just the auth method and a truncated hash) instead:
🛡️ Proposed fix
- if authMethod == "idc" && (clientID == "" || clientSecret == "") {
- return nil, fmt.Errorf("missing idc client credentials for %s; re-authenticate with --kiro-aws-login", auth.ID)
+ if authMethod == "idc" && (clientID == "" || clientSecret == "") {
+ return nil, fmt.Errorf("missing idc client credentials; re-authenticate with --kiro-aws-login")
	}

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
if authMethod == "idc" && (clientID == "" || clientSecret == "") {
	return nil, fmt.Errorf("missing idc client credentials for %s; re-authenticate with --kiro-aws-login", auth.ID)
}

if authMethod == "idc" && (clientID == "" || clientSecret == "") {
	return nil, fmt.Errorf("missing idc client credentials; re-authenticate with --kiro-aws-login")
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sdk/auth/kiro.go` around lines 357 - 359, The error message currently returns
the full auth.ID (PII) in the conditional that checks authMethod == "idc" and
empty clientID/clientSecret; change the returned error to avoid leaking auth.ID
by either omitting it and referencing only the auth method (e.g., "missing idc
client credentials; re-authenticate with --kiro-aws-login") or include a
non-identifying token such as a truncated hash of auth.ID (compute a short hash
from auth.ID and include that instead). Update the fmt.Errorf call in the block
that uses authMethod, clientID, clientSecret and auth.ID so the message no
longer contains raw auth.ID.
	}
}
if authMethod == "idc" && (clientID == "" || clientSecret == "") {
	return nil, fmt.Errorf("missing idc client credentials for %s; re-authenticate with --kiro-aws-login", auth.ID)
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find where kiro CLI flags are registered to confirm the exact flag name.
rg -n --type=go -C3 'kiro-aws'

Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 4321
🏁 Script executed:
#!/bin/bash
# Search for where the --kiro-aws-login flag is registered/defined
rg -n --type=go 'Flags\(\)|String\(|Bool\(' pkg/llmproxy/cmd/kiro_login.go internal/cmd/kiro_login.go -B2 -A2 | head -100

Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 57
🏁 Script executed:
#!/bin/bash
# Search for flag definitions more broadly in the kiro_login command files
rg -n --type=go 'kiro-aws-login|AddCommand|PersistentFlags|Flags' pkg/llmproxy/cmd/kiro_login.go internal/cmd/kiro_login.go

Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 300
🏁 Script executed:
#!/bin/bash
# Search for the actual command definition and flag setup
rg -n --type=go -B5 -A20 'var.*Command|func.*Command|&cobra.Command' pkg/llmproxy/cmd/kiro_login.go | head -80

Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 57
🏁 Script executed:
#!/bin/bash
# View the structure of the kiro_login command file
wc -l pkg/llmproxy/cmd/kiro_login.go
head -60 pkg/llmproxy/cmd/kiro_login.go

Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 2082
🏁 Script executed:
#!/bin/bash
# Search for where CLI commands are registered/added
rg -n --type=go 'kiro-aws-login' --max-count=20

Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 517
🏁 Script executed:
#!/bin/bash
# Search for where command-line flags are defined/parsed (likely in a root command or main file)
rg -n --type=go 'NewCommand|AddCommand|StringVar|BoolVar' --max-count=30 | grep -i kiro

Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 223
🏁 Script executed:
#!/bin/bash
# Search for the root command or main CLI setup
rg -n --type=go 'rootCmd|kiroCmd|LoginOptions' cmd/cliproxyctl/main.go | head -30

Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 714
🏁 Script executed:
#!/bin/bash
# Search for DoKiroAWSLogin or AWS login related functions
rg -n --type=go 'DoKiroAWSLogin|DoKiroGoogleLogin|aws.*[Ll]ogin' --max-count=20

Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 1443
🏁 Script executed:
#!/bin/bash
# Search for where DoKiroAWSLogin is called and how flags/options are passed
rg -n --type=go 'DoKiroAWSLogin' -B5 -A5

Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 2251
🏁 Script executed:
#!/bin/bash
# Search for where the kiro provider is handled and flag/option parsing
rg -n --type=go '"kiro".*aws|provider.*kiro' -B3 -A3 | head -100

Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 2864
🏁 Script executed:
#!/bin/bash
# Search for where provider is parsed and how it routes to different login functions
rg -n --type=go 'case.*kiro|"kiro"' cmd/cliproxyctl/main.go | head -50

Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 162
🏁 Script executed:
#!/bin/bash
# Look at the context around where kiro provider is handled
sed -n '80,120p' cmd/cliproxyctl/main.go

Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 1404
🏁 Script executed:
#!/bin/bash
# Search for main function and argument parsing to see how flags are registered
sed -n '1,100p' cmd/cliproxyctl/main.go | head -50

Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 1336
🏁 Script executed:
#!/bin/bash
# Search for where the provider argument is parsed and if there's a flag mechanism
sed -n '150,300p' cmd/cliproxyctl/main.go

Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 4479
🏁 Script executed:
#!/bin/bash
# Continue searching for the login flag definitions
sed -n '300,400p' cmd/cliproxyctl/main.go

Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 3649
The error message references a non-existent CLI flag --kiro-aws-login.
The --kiro-aws-login flag is not registered in the CLI argument parser. While DoKiroAWSLogin() function exists and is referenced in help text, there is no corresponding command-line flag to invoke it. Users encountering this error will be unable to follow the remediation instruction. Either register the flag, or update the error message to direct users to the actual recovery mechanism (e.g., re-authentication via the configured provider or API).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sdk/auth/kiro.go` at line 358, The error message returned where auth.ID is
used references a non-existent CLI flag; update the fmt.Errorf call that
currently says "missing idc client credentials for %s; re-authenticate with
--kiro-aws-login" to instruct users to re-authenticate via the actual mechanism
(e.g., call DoKiroAWSLogin or use the configured provider/API) or point to the
correct CLI command/flag if one exists; ensure you change the string returned by
that fmt.Errorf so it no longer mentions --kiro-aws-login but instead references
DoKiroAWSLogin or the proper recovery step and still includes auth.ID for
context.
func authLogRef(auth *Auth) string {
	if auth == nil {
		return "provider=unknown auth_id_hash="
	}
Inconsistent nil-guard output: auth_id_hash= has an empty value.
When auth == nil the function returns "provider=unknown auth_id_hash=", where the hash segment is an empty string. Every non-nil path produces a 16-char hex hash, so callers or log parsers expecting a fixed shape will get a structurally different value.
🛠 Proposed fix
func authLogRef(auth *Auth) string {
if auth == nil {
- return "provider=unknown auth_id_hash="
+ return "provider=unknown auth_id_hash=unknown"
 	}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sdk/cliproxy/auth/conductor.go` around lines 2282 - 2285, The nil guard in
authLogRef produces "provider=unknown auth_id_hash=" which breaks callers
expecting a 16-char hex hash; update authLogRef so when auth == nil it returns
the same shape by supplying a 16-character hex placeholder (e.g.,
"0000000000000000" or another explicit sentinel) and keep the "provider=unknown"
text; modify the authLogRef function to emit "provider=unknown
auth_id_hash=<16-char-hex>" to match the non-nil output format.
func authLogRef(auth *Auth) string {
	if auth == nil {
		return "provider=unknown auth_id_hash="
	}
	provider := strings.TrimSpace(auth.Provider)
	if provider == "" {
		provider = "unknown"
	}
	sum := sha256.Sum256([]byte(strings.TrimSpace(auth.ID)))
	hash := hex.EncodeToString(sum[:8])
	return "provider=" + provider + " auth_id_hash=" + hash
}
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find all call sites of authLogRef in the repository
rg -n --type go -C 2 '\bauthLogRef\s*\('

Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 57
🏁 Script executed:
#!/bin/bash
# Read the InjectCredentials method and surrounding context
sed -n '2295,2320p' sdk/cliproxy/auth/conductor.go

Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 920
🏁 Script executed:
#!/bin/bash
# Also check the refreshAuth pattern and how cloning is used elsewhere
sed -n '2142,2160p' sdk/cliproxy/auth/conductor.go

Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 588
🏁 Script executed:
#!/bin/bash
# Check pickNext and pickNextMixed for safe copy patterns
rg -n --type go -A 10 'func.*pickNext' sdk/cliproxy/auth/conductor.go | head -40

Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 1141
🏁 Script executed:
#!/bin/bash
# Read authLogRef to see the exact nil-path format
sed -n '2282,2293p' sdk/cliproxy/auth/conductor.go

Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 421
🏁 Script executed:
#!/bin/bash
# Check how pickNext clones the Auth before releasing lock
sed -n '1666,1720p' sdk/cliproxy/auth/conductor.go

Repository: KooshaPari/cliproxyapi-plusplus
Length of output: 1890
Fix data race in InjectCredentials by cloning the auth pointer before releasing the lock.
The method reads a live *Auth pointer from m.auths under read lock, then releases the lock and passes that pointer to PrepareRequest. Concurrent calls to refreshAuth mutate fields on the same pointer while holding the write lock, creating a race window between RUnlock() and PrepareRequest() return. Clone the auth before releasing the lock, following the pattern in pickNext (line 1708: authCopy := selected.Clone() before m.mu.RUnlock()).
Additionally:
- authLogRef has no call sites anywhere in the repository and should be removed.
- The nil-path return in authLogRef is inconsistent: "provider=unknown auth_id_hash=" (empty hash) versus non-nil "provider=... auth_id_hash=<16-char-hex>". If retained, the nil case should return a complete formatted string.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sdk/cliproxy/auth/conductor.go` around lines 2282 - 2293, In
InjectCredentials, avoid the data race by cloning the *Auth obtained from
m.auths (use the existing Clone method as in pickNext where authCopy :=
selected.Clone()) while still holding the read lock, then RUnlock() and pass the
cloned copy to PrepareRequest; ensure no fields of the original pointer are
accessed after RUnlock(). Also remove the unused authLogRef function entirely
(it's unreferenced), or if you prefer to keep it, change its nil path to return
a fully formatted string consistent with the non-nil path (include a stable hash
value) so the outputs are uniform.
# -- Service Operations --
service:status:
  desc: "Show service status for installed system service"
  cmds:
    - ./scripts/cliproxy-service.sh status

service:start:
  desc: "Start cliproxyapi++ service (systemd/launchd/Windows helper)"
  cmds:
    - ./scripts/cliproxy-service.sh start

service:stop:
  desc: "Stop cliproxyapi++ service (systemd/launchd/Windows helper)"
  cmds:
    - ./scripts/cliproxy-service.sh stop

service:restart:
  desc: "Restart cliproxyapi++ service (systemd/launchd/Windows helper)"
  cmds:
    - ./scripts/cliproxy-service.sh restart

service:install:
  desc: "Install cliproxyapi++ service scaffold (systemd/launchd)"
  cmds:
    - ./scripts/cliproxy-service.sh install

service:test:
  desc: "Run cliproxy service helper contract tests"
  cmds:
    - ./scripts/cliproxy-service-test.sh
service:start, service:stop, service:restart, and service:install invoke mutating system operations without any guard or dry-run mode.
Running task service:install or task service:start directly modifies the system (creates a launchd/systemd unit, starts a daemon). These are safe to list as DX helpers, but consider adding a confirmation prompt or an --install flag guard to avoid accidental execution. Also, service:test is placed before the fast lint:changed step in quality:ci (Line 347); if the test script is slow, it unnecessarily delays the lint gate.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Taskfile.yml` around lines 482 - 511, The Task targets service:start,
service:stop, service:restart, and service:install currently perform mutating
system operations unguarded; change them to require an explicit guard (e.g.,
only run when a TASK_CONFIRM or TASK_FORCE env var is set or when an explicit
--install/--force flag is passed) or implement an interactive confirmation
prompt before invoking ./scripts/cliproxy-service.sh, and ensure service:test is
moved later in the quality:ci sequence (after lint:changed) so slow service
helper tests do not block the fast lint gate; update the Taskfile targets
service:start/service:stop/service:restart/service:install and the quality:ci
target to implement these guards and reorder service:test accordingly.
Superseded by layered CI-fix PR #513.
Summary
Details
Validation
Rebased onto latest main (v6.8.87-0).
Checks run during development passed.
gui/501/com.router-for-me.cliproxyapi-plusplus = {
active count = 1
path = /Users/kooshapari/Library/LaunchAgents/com.router-for-me.cliproxyapi-plusplus.plist
type = LaunchAgent
state = running
program = /opt/homebrew/bin/cliproxyapi++
arguments = {
/opt/homebrew/bin/cliproxyapi++
--config
/opt/homebrew/etc/cliproxyapi/config.yaml
}
working directory = /opt/homebrew/etc/cliproxyapi
stdout path = /opt/homebrew/var/log/cliproxyapi-plusplus.log
stderr path = /opt/homebrew/var/log/cliproxyapi-plusplus.err
inherited environment = {
DISPLAY => /private/tmp/com.apple.launchd.UWJhyMfyKa/org.xquartz:0
SSH_AUTH_SOCK => /private/tmp/com.apple.launchd.7TXCaMgRj8/Listeners
}
default environment = {
PATH => /usr/bin:/bin:/usr/sbin:/sbin
}
environment = {
OSLogRateLimit => 64
XPC_SERVICE_NAME => com.router-for-me.cliproxyapi-plusplus
}
domain = gui/501 [100021]
asid = 100021
minimum runtime = 10
exit timeout = 5
runs = 2
pid = 23483
immediate reason = non-ipc demand
forks = 0
execs = 1
initialized = 1
trampolined = 1
started suspended = 0
proxy started suspended = 0
checked allocations = 0 (queried = 1)
checked allocations reason = no host
checked allocations flags = 0x0
last exit code = 0
semaphores = {
after crash => 1
}
resource coalition = {
ID = 40531
type = resource
state = active
active count = 1
name = com.router-for-me.cliproxyapi-plusplus
}
jetsam coalition = {
ID = 40532
type = jetsam
state = active
active count = 1
name = com.router-for-me.cliproxyapi-plusplus
}
spawn type = daemon (3)
jetsam priority = 40
jetsam memory limit (active) = (unlimited)
jetsam memory limit (inactive) = (unlimited)
jetsamproperties category = daemon
jetsam thread limit = 32
cpumon = default
properties = runatload | inferred program
}

Works for local launchd check (where available).
Summary by CodeRabbit
Release Notes
New Features
Bug Fixes
Improvements
Refactoring
Style