
feat(commands): replace PromptPal with SSE streaming from server API #233

Merged
AnnatarHe merged 1 commit into main from feat/ai-command-suggest-sse-streaming on Feb 13, 2026

Conversation


@AnnatarHe (Contributor) commented Feb 13, 2026

Summary

  • Replace PromptPal SDK with direct SSE streaming from ShellTime server (POST /api/v1/ai/command-suggest)
  • Tokens now display incrementally as they arrive, improving perceived responsiveness of shelltime q
  • Remove ppEndpoint/ppToken build-time variables and PromptPal dependency entirely
  • Clean up goreleaser ldflags and daemon unused variables

Dependencies

Test plan

  • go build ./... compiles
  • go test ./commands/... — all 17 query tests pass
  • shelltime q "list files modified today" shows tokens streaming incrementally
  • Verify auto-run classification still works after streaming completes
  • Verify goreleaser builds without PP_ENDPOINT/PP_TOKEN env vars

🤖 Generated with Claude Code



Replace the PromptPal SDK-based AI service with direct SSE streaming
from the ShellTime server API. Tokens are now displayed incrementally
as they arrive, improving perceived responsiveness.

- Rewrite AIService to use net/http SSE client targeting /api/v1/ai/command-suggest
- Update query command for streaming with incremental token display
- Remove PromptPal dependency, config files, and build-time variables
- Update goreleaser ldflags and daemon to remove ppEndpoint/ppToken
- Update all query tests for QueryCommandStream with callback pattern

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@chatgpt-codex-connector

You have reached your Codex usage limits for code reviews. You can see your limits in the Codex usage dashboard.

@gemini-code-assist
Contributor

Summary of Changes

Hello @AnnatarHe, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request fundamentally changes how the CLI interacts with the AI command suggestion service. Instead of relying on a third-party SDK, the application now directly consumes a Server-Sent Events stream from the ShellTime server. This strategic shift not only removes an external dependency but also delivers a more dynamic and responsive user experience by displaying AI suggestions incrementally as they are generated, making the shelltime q command feel much faster and more interactive.

Highlights

  • AI Service Migration: Replaced the PromptPal SDK with direct Server-Sent Events (SSE) streaming from the ShellTime server API (POST /api/v1/ai/command-suggest), eliminating the external dependency.
  • Improved Responsiveness: Implemented incremental display of AI-generated command tokens as they arrive, significantly enhancing the perceived speed and user experience of the shelltime q command.
  • Dependency Removal & Cleanup: Removed all PromptPal-related build-time variables (ppEndpoint, ppToken), configuration files (promptpal.yml), and associated code, leading to a cleaner codebase and reduced external dependencies.
  • Refactored AI Interface: Updated the internal AI service interface to support streaming responses, changing from a single QueryCommand call to a QueryCommandStream method with a token callback.
Changelog
  • .goreleaser.yaml
    • Removed ppEndpoint and ppToken ldflags from all build configurations.
  • cmd/cli/main.go
    • Removed global ppEndpoint and ppToken variables.
    • Modified NewAIService initialization to no longer require AIServiceConfig parameters, relying instead on the user's configured token.
  • cmd/daemon/main.go
    • Removed global ppEndpoint and ppToken variables.
  • commands/query.go
    • Updated error messages by removing emoji prefixes.
    • Introduced logic to read APIEndpoint and Token from the configuration service to construct an Endpoint for the AI service.
    • Replaced the blocking aiService.QueryCommand call with aiService.QueryCommandStream to handle incremental token reception.
    • Implemented streaming output logic, stopping the loading spinner and printing tokens as they arrive.
    • Removed the displayCommand helper function as its functionality is now integrated into the streaming output.
    • Simplified auto-run confirmation and tip messages by removing redundant print statements and emojis.
    • Changed the type of system context variables passed to the AI service from PPPromptGuessNextPromptVariables to CommandSuggestVariables.
  • commands/query_test.go
    • Updated all relevant test cases to use the new QueryCommandStream method and its streaming mock behavior.
    • Modified mock configurations to include APIEndpoint and Token for the ReadConfigFile calls.
    • Adjusted assertions for config read errors to reflect the new error handling flow.
    • Removed the test for the displayCommand function.
  • go.mod
    • Removed the github.com/PromptPal/go-sdk dependency.
  • go.sum
    • Cleaned up checksum entries related to the removed PromptPal/go-sdk and other no longer needed indirect dependencies.
  • model/ai_service.go
    • Completely refactored the AI service implementation to remove PromptPal/go-sdk.
    • Redefined the AIService interface to include QueryCommandStream for SSE-based communication.
    • Introduced sseAIService to handle HTTP POST requests and parse Server-Sent Events for command suggestions.
    • Implemented logic to send command suggestion requests, read event streams, and invoke a callback for each received token.
    • Added CommandSuggestVariables and Endpoint structs to define request parameters and server endpoint details.
  • promptpal.yml
    • Removed the promptpal.yml configuration file entirely.
Activity
  • The pull request introduces a significant architectural change by replacing the PromptPal SDK with direct SSE streaming.
  • The author, AnnatarHe, has provided a detailed summary and test plan, including steps to verify compilation, unit tests, incremental streaming, auto-run classification, and goreleaser builds.
  • A dependency on a server-side PR (https://github.com/shelltime/server/pull/295) has been identified, indicating a coordinated effort across repositories.


codecov bot commented Feb 13, 2026

Codecov Report

❌ Patch coverage is 40.27778% with 43 lines in your changes missing coverage. Please review.

Files with missing lines Patch % Lines
model/ai_service.go 0.00% 38 Missing ⚠️
commands/query.go 90.62% 2 Missing and 1 partial ⚠️
cmd/cli/main.go 0.00% 2 Missing ⚠️
Flag Coverage Δ
unittests 37.39% <40.27%> (?)

Flags with carried forward coverage won't be shown.

Files with missing lines Coverage Δ
cmd/daemon/main.go 0.00% <ø> (ø)
cmd/cli/main.go 0.00% <0.00%> (ø)
commands/query.go 85.04% <90.62%> (-1.50%) ⬇️
model/ai_service.go 0.00% <0.00%> (ø)

... and 2 files with indirect coverage changes



@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request is a great improvement, replacing the PromptPal SDK with a direct SSE streaming implementation from the server API. This enhances the perceived responsiveness of shelltime q by displaying tokens incrementally. The changes are well-contained, and the removal of the PromptPal dependency and related build-time variables cleans up the codebase nicely. The tests have also been updated to reflect the new streaming behavior. I have a couple of suggestions in model/ai_service.go to improve the robustness of URL construction and the efficiency of the HTTP client usage.

Comment on lines +41 to +43

```go
apiURL := strings.TrimRight(endpoint.APIEndpoint, "/") + "/api/v1/ai/command-suggest"

req, err := http.NewRequestWithContext(ctx, http.MethodPost, apiURL, bytes.NewReader(body))
```

medium

The current method of constructing the URL via string concatenation can be fragile. For instance, if endpoint.APIEndpoint were malformed, it could lead to an incorrect URL. Using net/url.JoinPath is a more robust way to join URL components. You will need to add "net/url" to your imports.

Suggested change:

```go
apiURL, err := url.JoinPath(endpoint.APIEndpoint, "api/v1/ai/command-suggest")
if err != nil {
	return fmt.Errorf("failed to create api url: %w", err)
}
req, err := http.NewRequestWithContext(ctx, http.MethodPost, apiURL, bytes.NewReader(body))
```

```go
req.Header.Set("Authorization", "CLI "+endpoint.Token)

client := &http.Client{Timeout: 2 * time.Minute}
```

medium

Creating a new http.Client for each request is inefficient as it prevents the reuse of underlying TCP connections. It's a best practice to create a single http.Client and reuse it. I'd recommend adding it as a field to the sseAIService struct and initializing it once in NewAIService.


@devin-ai-integration devin-ai-integration bot left a comment


Devin Review found 1 potential issue.

View 7 additional findings in Devin Review.


Comment on lines 88 to 92

```go
if err := scanner.Err(); err != nil {
	return fmt.Errorf("error reading stream: %w", err)
}

return nil
```

🟡 SSE error event silently swallowed if stream ends before error data line arrives

If the SSE stream ends (EOF or connection drop) after receiving event: error but before the corresponding data: line arrives, the isError flag is true when the scanner loop exits, yet the function returns nil on line 92 — silently swallowing the error.

Root Cause

The isError flag is set to true on line 68 when event: error is encountered. The error is only surfaced on line 76 when a subsequent data: line is read while isError is true. However, if the stream terminates (e.g., server closes connection, network interruption) between the event: error line and the data: line, the scanner loop ends normally, scanner.Err() returns nil, and the function returns nil on line 92.

For example, this stream:

```
data: partial_token
event: error
<EOF>
```

Would deliver partial_token via onToken, set isError = true, then exit the loop and return nil. The caller (commandQuery at commands/query.go:79) would treat this as a successful completion, displaying an incomplete result with no error indication.

Impact: An error condition from the server is silently ignored, causing the user to see a partial/incomplete command suggestion with no indication that something went wrong.

Suggested change:

```go
if err := scanner.Err(); err != nil {
	return fmt.Errorf("error reading stream: %w", err)
}
if isError {
	return fmt.Errorf("server error: stream ended unexpectedly after error event")
}
return nil
```


@AnnatarHe merged commit acc0341 into main on Feb 13, 2026
8 checks passed
@AnnatarHe deleted the feat/ai-command-suggest-sse-streaming branch on February 13, 2026 at 16:35
