feat: implement smooth typewriter-style streaming with batched writes #1
Conversation
- Add timer-based render queue in response.lua for 33ms cadence updates
- Replace per-char string concat with newline-aware chunk processing
- Implement incremental buffer updates to avoid full redraws
- Add batched transcript writes with size/time thresholds (4KB/250ms)
- Replace flush_response_buffer with maybe_flush for streaming chunks

This eliminates bulk text dumps when the LLM bursts data, creating a steady typewriter effect while reducing disk I/O jank.

Signed-off-by: assagman <ahmetsercansagman@gmail.com>
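Roughly, the render path works like the sketch below: a repeating ~33ms timer drains a queue of pending chunks and rewrites only the tail of the buffer instead of redrawing everything. This is a minimal illustration rather than the PR's actual response.lua; the module layout, field names, and helpers are assumptions.

```lua
-- Minimal sketch (hypothetical names, not the PR's actual code).
local M = { pending = {}, lines = { "" }, timer = nil }

local RENDER_INTERVAL_MS = 33

-- Newline-aware: extend the current last line, push completed lines.
local function consume_chunk(chunk)
  for i, piece in ipairs(vim.split(chunk, "\n", { plain = true })) do
    if i == 1 then
      M.lines[#M.lines] = M.lines[#M.lines] .. piece
    else
      table.insert(M.lines, piece)
    end
  end
end

function M.enqueue(chunk)
  table.insert(M.pending, chunk)
  if M.timer then return end
  M.timer = vim.loop.new_timer()
  M.timer:start(0, RENDER_INTERVAL_MS, vim.schedule_wrap(function()
    local next_chunk = table.remove(M.pending, 1)
    if not next_chunk then
      M.timer:stop(); M.timer:close(); M.timer = nil
      return
    end
    local first_dirty = #M.lines  -- only rewrite from here down
    consume_chunk(next_chunk)
    vim.api.nvim_buf_set_lines(0, first_dirty - 1, -1, false,
      vim.list_slice(M.lines, first_dirty, #M.lines))
  end))
end
```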
Avoid calling append_text for completion markers, since it resets the is_streaming and is_complete flags. Instead, add the separator and completion text directly to state.lines.

Signed-off-by: assagman <ahmetsercansagman@gmail.com>
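A minimal sketch of that change, with hypothetical field and marker names: the completion marker goes straight into the line store, so the flags that append_text manages are never touched.

```lua
-- Sketch only; state shape and marker text are assumptions.
local function append_completion(state, marker)
  table.insert(state.lines, "")      -- separator line
  table.insert(state.lines, marker)  -- e.g. a "response complete" marker
  -- note: state.is_streaming / state.is_complete are left untouched here
end
```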
- Add flush_pending_sync() to process pending text before headers
- Fix race condition where prompt text appeared after Response header
- Reduce chars/tick (12-128) for more natural typewriter feel
- Increase timer frequency to 60fps (16ms) for smoother updates
- Adjust adaptive thresholds for gradual speed-up with backlog

Signed-off-by: assagman <ahmetsercansagman@gmail.com>
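The adaptive pacing described above could look like the following sketch. Only the 12-128 range, the 16ms tick, and the flush_pending_sync() idea come from the commit message; the scaling factor and other names are assumptions.

```lua
-- Sketch only; thresholds besides 12/128 and 16ms are assumptions.
local TICK_MS = 16              -- ~60fps render cadence
local MIN_CHARS_PER_TICK = 12
local MAX_CHARS_PER_TICK = 128

-- Speed up gradually as the pending backlog grows, clamped to the range.
local function chars_for_tick(backlog_len)
  local adaptive = MIN_CHARS_PER_TICK + math.floor(backlog_len / 64)
  return math.min(adaptive, MAX_CHARS_PER_TICK)
end

-- Drain whatever is still queued before rendering a section header,
-- so prompt text cannot appear after the Response header.
local function flush_pending_sync(state, render_line)
  while #state.pending > 0 do
    render_line(table.remove(state.pending, 1))
  end
end
```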
Summary of Changes
Hello @assagman, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly enhances the user experience for streaming output by introducing a sophisticated typewriter-style rendering mechanism and optimizing transcript persistence. The changes aim to provide a consistently smooth UI update cadence, even when faced with large data bursts, and to eliminate UI jank caused by frequent disk I/O during streaming. This results in a more fluid and responsive interaction with streaming content.
Highlights
Pull request overview
This PR implements smooth typewriter-style streaming for response rendering by introducing a timer-based render scheduler and batched transcript writes. The implementation addresses UI jank from large data bursts and frequent disk I/O.
Changes:
- Add timer-based render scheduler with adaptive character consumption (12-128 chars per 16ms tick)
- Replace per-chunk transcript flushes with size/time-based batching (~4KB threshold, 250ms interval)
- Implement incremental buffer updates to avoid full redraws during streaming
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| lua/ghost/response.lua | Adds render timer, pending chunks queue, incremental buffer updates, and smooth typewriter rendering |
| lua/ghost/transcript.lua | Implements batched writes with ResponseBuffer structure and maybe_flush function |
| lua/ghost/receiver.lua | Switches from flush_response_buffer to maybe_flush_response_buffer for streaming |
| .opencode/plans/1768551208297-calm-island.md | Planning document describing the implementation approach |
Comments suppressed due to low confidence (1)
lua/ghost/response.lua:522
- The update_tool_call function now calls update_buffer(), which triggers a full redraw instead of leveraging incremental updates. For smooth rendering during streaming, tool call updates should also flush pending chunks first and potentially use incremental buffer updates. However, there's no test coverage for tool call updates during active streaming to verify this behavior works correctly.
function M.update_tool_call(tool_id, tool_name, status, kind)
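One way to apply the reviewer's suggestion, sketched with hypothetical helper names (only update_tool_call and flush_pending_sync appear in the PR text, and whether the latter is exported is an assumption): drain any queued stream text before rewriting the tool-call line.

```lua
-- Hypothetical wrapper; exported helpers are guarded since their API is assumed.
local response = require("ghost.response")

local function update_tool_call_smooth(tool_id, tool_name, status, kind)
  if response.flush_pending_sync then
    response.flush_pending_sync()  -- render queued chunks before the tool-call update
  end
  response.update_tool_call(tool_id, tool_name, status, kind)
end
```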
Code Review
This pull request introduces a sophisticated typewriter-style rendering mechanism for streaming AI responses, which significantly improves the user experience by providing smooth, continuous output. It also cleverly batches transcript writes to reduce I/O-related UI jank. The implementation is well-structured, using a timer-based scheduler and incremental buffer updates. My review focuses on improving maintainability by reducing code duplication and magic numbers, and addresses one potential bug in the rendering logic to ensure its robustness.
- Fix rendered_current_line empty string handling bug (HIGH)
- Replace duplicated flush logic with flush_pending_sync() call
- Deduplicate current line completion in handle_response()
- Extract magic numbers to named constants for readability
- Extract reset_state() helper to reduce clear()/close() duplication
- Add clarifying comment for constants documentation
- Default flush=true in cleanup_session() to prevent data loss

Signed-off-by: assagman <ahmetsercansagman@gmail.com>
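Two of those refactors, sketched under assumed names and state shape: a shared reset_state() used by both clear() and close(), and a cleanup_session() whose flush option defaults to true so buffered transcript text is written out before teardown.

```lua
-- Sketch only; the state fields and flush hook are assumptions.
local function reset_state(state)
  state.pending = {}
  state.lines = { "" }
  state.is_streaming = false
  state.is_complete = false
end

local function cleanup_session(state, opts)
  opts = opts or {}
  local flush = opts.flush ~= false   -- default true: don't drop pending writes
  if flush and state.flush_pending then
    state.flush_pending()
  end
  reset_state(state)
end
```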
Signed-off-by: assagman <ahmetsercansagman@gmail.com>
Summary
Implements a typewriter-style renderer for streaming output that maintains a smooth ~33ms UI update cadence even when upstream delivers large bursts. Also reduces UI jank by batching transcript writes instead of flushing per chunk.
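The batching half can be pictured like this; the ResponseBuffer name, maybe_flush, and the ~4KB/250ms thresholds come from the PR, while the fields and file handling are otherwise assumptions.

```lua
-- Sketch only; not the PR's actual transcript.lua.
local uv = vim.loop

local ResponseBuffer = { chunks = {}, bytes = 0, last_flush = uv.now() }

local FLUSH_BYTES = 4 * 1024
local FLUSH_INTERVAL_MS = 250

-- Write everything buffered so far and reset the accumulator.
local function flush(path)
  if ResponseBuffer.bytes == 0 then return end
  local f = assert(io.open(path, "a"))
  f:write(table.concat(ResponseBuffer.chunks))
  f:close()
  ResponseBuffer.chunks, ResponseBuffer.bytes = {}, 0
  ResponseBuffer.last_flush = uv.now()
end

-- Accumulate chunks; only hit disk when the size or time threshold is reached.
local function maybe_flush(path, chunk)
  table.insert(ResponseBuffer.chunks, chunk)
  ResponseBuffer.bytes = ResponseBuffer.bytes + #chunk
  local due = uv.now() - ResponseBuffer.last_flush >= FLUSH_INTERVAL_MS
  if ResponseBuffer.bytes >= FLUSH_BYTES or due then
    flush(path)
  end
end
```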
Changes
Breaking Changes
None
Testing