feat(commands): replace PromptPal with SSE streaming from server API #233
Conversation
Replace the PromptPal SDK-based AI service with direct SSE streaming from the ShellTime server API. Tokens are now displayed incrementally as they arrive, improving perceived responsiveness.

- Rewrite AIService to use a net/http SSE client targeting /api/v1/ai/command-suggest
- Update the query command for streaming with incremental token display
- Remove the PromptPal dependency, config files, and build-time variables
- Update goreleaser ldflags and the daemon to remove ppEndpoint/ppToken
- Update all query tests for QueryCommandStream with the callback pattern

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
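For readers who want a feel for the streaming shape described above, here is a minimal, hedged sketch of an SSE client with a token callback. The function name `streamCommandSuggestion`, the `"query"` request field, and the header set are illustrative assumptions, not the PR's actual identifiers.

```go
package aiclient

import (
	"bufio"
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"net/http"
	"strings"
)

// streamCommandSuggestion is an illustrative sketch, not the PR's actual API:
// it POSTs the prompt to the SSE endpoint and calls onToken for each data:
// line until the stream ends.
func streamCommandSuggestion(ctx context.Context, baseURL, token, prompt string, onToken func(string)) error {
	// Request body field name is assumed; the real payload shape may differ.
	body, err := json.Marshal(map[string]string{"query": prompt})
	if err != nil {
		return err
	}
	apiURL := strings.TrimRight(baseURL, "/") + "/api/v1/ai/command-suggest"
	req, err := http.NewRequestWithContext(ctx, http.MethodPost, apiURL, bytes.NewReader(body))
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "CLI "+token)
	req.Header.Set("Accept", "text/event-stream")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status: %s", resp.Status)
	}

	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		// SSE frames arrive as "data: <token>" lines separated by blank lines.
		if data, ok := strings.CutPrefix(scanner.Text(), "data: "); ok {
			onToken(data)
		}
	}
	return scanner.Err()
}
```

The callback style is what lets the query command print tokens as they arrive rather than waiting for the full response.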
Summary of Changes

Hello @AnnatarHe, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request fundamentally changes how the CLI interacts with the AI command suggestion service. Instead of relying on a third-party SDK, the application now directly consumes a Server-Sent Events stream from the ShellTime server. This strategic shift not only removes an external dependency but also delivers a more dynamic and responsive user experience by displaying AI suggestions incrementally as they are generated.

Highlights
Changelog
Activity
Code Review
This pull request is a great improvement, replacing the PromptPal SDK with a direct SSE streaming implementation from the server API. This enhances the perceived responsiveness of shelltime q by displaying tokens incrementally. The changes are well-contained, and the removal of the PromptPal dependency and related build-time variables cleans up the codebase nicely. The tests have also been updated to reflect the new streaming behavior. I have a couple of suggestions in model/ai_service.go to improve the robustness of URL construction and the efficiency of the HTTP client usage.
```diff
- clientOptions := promptpal.PromptPalClientOptions{
- 	Timeout:             &config.Timeout,
- 	ApplyTemporaryToken: &applyTokenFunc,
+ apiURL := strings.TrimRight(endpoint.APIEndpoint, "/") + "/api/v1/ai/command-suggest"
+ req, err := http.NewRequestWithContext(ctx, http.MethodPost, apiURL, bytes.NewReader(body))
```
The current method of constructing the URL via string concatenation can be fragile. For instance, if endpoint.APIEndpoint were to contain a path, it could lead to an incorrect URL. Using net/url.JoinPath is a more robust way to join URL components. You will need to add "net/url" to your imports.
```diff
- apiURL := strings.TrimRight(endpoint.APIEndpoint, "/") + "/api/v1/ai/command-suggest"
- req, err := http.NewRequestWithContext(ctx, http.MethodPost, apiURL, bytes.NewReader(body))
+ apiURL, err := url.JoinPath(endpoint.APIEndpoint, "api/v1/ai/command-suggest")
+ if err != nil {
+ 	return fmt.Errorf("failed to create api url: %w", err)
+ }
+ req, err := http.NewRequestWithContext(ctx, http.MethodPost, apiURL, bytes.NewReader(body))
```
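As a side note on the suggestion above, a tiny standalone example (host names are made up) showing how url.JoinPath inserts exactly one separator and preserves any path already present on the base, which is the robustness the reviewer is pointing at:

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	// Hypothetical endpoint values; only the joining behaviour is the point.
	bases := []string{
		"https://api.shelltime.example",
		"https://api.shelltime.example/",
		"https://api.shelltime.example/self-hosted/",
	}
	for _, base := range bases {
		u, err := url.JoinPath(base, "api/v1/ai/command-suggest")
		if err != nil {
			fmt.Println("join error:", err)
			continue
		}
		// Each base yields a single "/" before the path, and a base that
		// already carries a path keeps it, e.g.
		// https://api.shelltime.example/self-hosted/api/v1/ai/command-suggest
		fmt.Println(u)
	}
}
```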
```diff
  req.Header.Set("Authorization", "CLI "+endpoint.Token)
- client := promptpal.NewPromptPalClient(config.Endpoint, config.Token, clientOptions)
+ client := &http.Client{Timeout: 2 * time.Minute}
```
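The review summary above also flags the efficiency of the HTTP client usage. A common pattern that remark likely points at is reusing a single client across calls instead of constructing one per request; the sketch below assumes that reading, with illustrative values only.

```go
package aiclient

import (
	"net/http"
	"time"
)

// A single shared client lets the transport pool and reuse connections across
// suggestions instead of allocating &http.Client{...} on every call. For SSE
// specifically, a short overall Timeout would cut long-lived streams, so pair
// a generous (or zero) timeout with per-request context cancellation; the
// values here are illustrative, not the PR's settings.
var sseClient = &http.Client{
	Timeout: 2 * time.Minute,
	Transport: &http.Transport{
		MaxIdleConnsPerHost: 4,
		IdleConnTimeout:     90 * time.Second,
	},
}
```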
```diff
- return response.ResponseMessage, nil
- }
\ No newline at end of file
+ if err := scanner.Err(); err != nil {
+ 	return fmt.Errorf("error reading stream: %w", err)
+ }
+ return nil
```
🟡 SSE error event silently swallowed if stream ends before error data line arrives
If the SSE stream ends (EOF or connection drop) after receiving event: error but before the corresponding data: line arrives, the isError flag is true when the scanner loop exits, yet the function returns nil on line 92 — silently swallowing the error.
Root Cause
The isError flag is set to true on line 68 when event: error is encountered. The error is only surfaced on line 76 when a subsequent data: line is read while isError is true. However, if the stream terminates (e.g., server closes connection, network interruption) between the event: error line and the data: line, the scanner loop ends normally, scanner.Err() returns nil, and the function returns nil on line 92.
For example, this stream:
```
data: partial_token
event: error
<EOF>
```
Would deliver partial_token via onToken, set isError = true, then exit the loop and return nil. The caller (commandQuery at commands/query.go:79) would treat this as a successful completion, displaying an incomplete result with no error indication.
Impact: An error condition from the server is silently ignored, causing the user to see a partial/incomplete command suggestion with no indication that something went wrong.
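To make the reported failure mode easier to follow before the suggested fix below, here is a hedged reconstruction of the loop shape the comment describes; names and structure are illustrative, not the file's verbatim code.

```go
package aiclient

import (
	"bufio"
	"fmt"
	"io"
	"strings"
)

// readSuggestStream reconstructs the loop shape the comment describes;
// identifiers are illustrative, not copied from model/ai_service.go.
func readSuggestStream(body io.Reader, onToken func(string)) error {
	scanner := bufio.NewScanner(body)
	isError := false
	for scanner.Scan() {
		line := scanner.Text()
		switch {
		case strings.HasPrefix(line, "event: error"):
			// Remembered here, but only acted on when the *next* data: line arrives.
			isError = true
		case strings.HasPrefix(line, "data: "):
			payload := strings.TrimPrefix(line, "data: ")
			if isError {
				return fmt.Errorf("server error: %s", payload)
			}
			onToken(payload)
		}
	}
	if err := scanner.Err(); err != nil {
		return fmt.Errorf("error reading stream: %w", err)
	}
	// If EOF lands between "event: error" and its "data:" line, execution
	// reaches this point with isError == true and still returns nil, which
	// is exactly the gap flagged above.
	return nil
}
```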
```diff
  if err := scanner.Err(); err != nil {
  	return fmt.Errorf("error reading stream: %w", err)
  }
+ if isError {
+ 	return fmt.Errorf("server error: stream ended unexpectedly after error event")
+ }
  return nil
```
Summary
- Replace the PromptPal SDK with direct SSE streaming from the server API (POST /api/v1/ai/command-suggest)
- Display tokens incrementally as they arrive in `shelltime q`
- Remove the `ppEndpoint`/`ppToken` build-time variables and PromptPal dependency entirely

Dependencies
Test plan
- `go build ./...` compiles
- `go test ./commands/...` — all 17 query tests pass
- `shelltime q "list files modified today"` shows tokens streaming incrementally

🤖 Generated with Claude Code
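For completeness, a hedged sketch of what a callback-style streaming test can look like, using httptest to fake the SSE endpoint. It drives the `streamCommandSuggestion` sketch shown earlier on this page; tokens, prompt, and assertions are made up, not the repository's real fixtures.

```go
package aiclient

import (
	"context"
	"net/http"
	"net/http/httptest"
	"strings"
	"testing"
)

// TestStreamCollectsTokens sketches the callback-style streaming test shape.
func TestStreamCollectsTokens(t *testing.T) {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/event-stream")
		fl, _ := w.(http.Flusher)
		// Emit a few made-up tokens in SSE framing, flushing each frame.
		for _, tok := range []string{"ls", " -la", " --color"} {
			_, _ = w.Write([]byte("data: " + tok + "\n\n"))
			if fl != nil {
				fl.Flush()
			}
		}
	}))
	defer srv.Close()

	var got strings.Builder
	err := streamCommandSuggestion(context.Background(), srv.URL, "fake-token", "list files modified today", func(tok string) {
		got.WriteString(tok) // collect tokens in arrival order via the callback
	})
	if err != nil {
		t.Fatalf("stream failed: %v", err)
	}
	if got.String() != "ls -la --color" {
		t.Fatalf("unexpected suggestion: %q", got.String())
	}
}
```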