fix(streaming): add buffer size limit to StreamProcessor #84
Conversation
Greptile Overview

**Greptile Summary:** Added a memory safety guard to `StreamProcessor`'s event buffer.

**Confidence Score:** 4/5
| Filename | Overview |
|---|---|
| src/cortex-engine/src/streaming.rs | Added buffer size limit (10K events) to prevent unbounded memory growth; silently drops oldest events when full |
Sequence Diagram

```mermaid
sequenceDiagram
    participant Caller
    participant StreamProcessor
    participant Buffer as VecDeque<StreamEvent>
    Note over StreamProcessor: MAX_BUFFER_SIZE = 10,000
    Caller->>StreamProcessor: process(event)
    StreamProcessor->>Buffer: check len()
    alt Buffer length >= 10,000
        Buffer->>Buffer: pop_front() (drop oldest)
        Note over Buffer: Oldest event discarded
    end
    StreamProcessor->>Buffer: push_back(new event)
    Note over Buffer: Buffer maintains max 10K events
    Caller->>StreamProcessor: drain_events()
    StreamProcessor->>Buffer: drain(..)
    Buffer-->>Caller: Vec<StreamEvent>
    Note over Buffer: Buffer emptied
```
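The flow above can be sketched in code. The type and constant names (`StreamProcessor`, `StreamEvent`, `MAX_BUFFER_SIZE = 10,000`) come from the PR description; the field layout and the `StreamEvent` payload are simplified stand-ins for the real `cortex-engine` types.

```rust
use std::collections::VecDeque;

// Value taken from the PR description; the real constant lives in
// src/cortex-engine/src/streaming.rs.
const MAX_BUFFER_SIZE: usize = 10_000;

// Simplified stand-in for the real event type.
#[derive(Debug, Clone, PartialEq)]
struct StreamEvent(u64);

struct StreamProcessor {
    buffer: VecDeque<StreamEvent>,
}

impl StreamProcessor {
    fn new() -> Self {
        // Pre-allocating capacity avoids repeated reallocation as the
        // buffer fills toward its limit (one of the PR's changes).
        Self { buffer: VecDeque::with_capacity(MAX_BUFFER_SIZE) }
    }

    fn process(&mut self, event: StreamEvent) {
        // Enforce buffer size limit to prevent unbounded memory growth:
        // when full, the oldest event is dropped before the new one is kept.
        if self.buffer.len() >= MAX_BUFFER_SIZE {
            self.buffer.pop_front();
        }
        self.buffer.push_back(event);
    }

    fn drain_events(&mut self) -> Vec<StreamEvent> {
        // drain(..) empties the buffer and hands all buffered events back.
        self.buffer.drain(..).collect()
    }
}
```

With this guard in place the buffer never exceeds 10,000 entries, at the cost of silently losing the oldest events under sustained overflow (the point the review comment below raises).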
```rust
// Enforce buffer size limit to prevent unbounded memory growth
if self.buffer.len() >= MAX_BUFFER_SIZE {
    self.buffer.pop_front();
}
self.buffer.push_back(event);
```
Consider logging when dropping events - silent data loss could be hard to debug
```suggestion
// Enforce buffer size limit to prevent unbounded memory growth
if self.buffer.len() >= MAX_BUFFER_SIZE {
    self.buffer.pop_front();
    // Consider: log::warn!("StreamProcessor buffer full, dropping oldest event");
}
```
Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!
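A `log::warn!` on every drop can itself flood the logs under sustained overflow. One way to surface the data loss without that cost is to count drops and warn only periodically. This is a sketch of that alternative, not code from the PR; `BoundedBuffer` and `DROP_LOG_INTERVAL` are hypothetical names:

```rust
use std::collections::VecDeque;

const MAX_BUFFER_SIZE: usize = 10_000;
// Hypothetical: warn at most once per this many dropped events,
// so a sustained overflow does not spam the logs.
const DROP_LOG_INTERVAL: u64 = 1_000;

struct BoundedBuffer<T> {
    inner: VecDeque<T>,
    dropped: u64,
}

impl<T> BoundedBuffer<T> {
    fn new() -> Self {
        Self { inner: VecDeque::with_capacity(MAX_BUFFER_SIZE), dropped: 0 }
    }

    fn push(&mut self, item: T) {
        if self.inner.len() >= MAX_BUFFER_SIZE {
            self.inner.pop_front();
            self.dropped += 1;
            // Warn on the 1st, 1001st, 2001st, ... drop only.
            if self.dropped % DROP_LOG_INTERVAL == 1 {
                // In the real crate this would be log::warn! or tracing::warn!.
                eprintln!("buffer full: {} events dropped so far", self.dropped);
            }
        }
        self.inner.push_back(item);
    }
}
```

Keeping the counter also makes the loss observable after the fact (e.g. it could be exposed via a metrics endpoint), which the one-shot log line alone does not.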
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/cortex-engine/src/streaming.rs
Line: 291:295
Comment:
Consider logging when dropping events - silent data loss could be hard to debug
```suggestion
// Enforce buffer size limit to prevent unbounded memory growth
if self.buffer.len() >= MAX_BUFFER_SIZE {
self.buffer.pop_front();
// Consider: log::warn!("StreamProcessor buffer full, dropping oldest event");
}
```
<sub>Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!</sub>
How can I resolve this? If you propose a fix, please make it concise.
Closing to consolidate: This streaming buffer fix will be merged with PR #85 (ToolState transition validation) into a consolidated protocol robustness PR.
## Summary

This PR consolidates **2 protocol robustness improvements**.

### Included PRs:
- #84: Add buffer size limit to StreamProcessor
- #85: Add ToolState transition validation

### Key Changes:
- Added MAX_BUFFER_SIZE constant (10,000 events) for StreamProcessor
- Modified process() to drop oldest events when buffer is full
- Pre-allocated buffer capacity in new() for better performance
- Added can_transition_to() method to ToolState enum
- Updated update_tool_state to log warnings on invalid transitions
- Documented valid state machine transitions

### Files Modified:
- src/cortex-engine/src/streaming.rs
- src/cortex-protocol/src/protocol/message_parts.rs

Closes #84, #85
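The second half of the consolidated PR adds transition validation to `ToolState`. The real enum lives in src/cortex-protocol/src/protocol/message_parts.rs and its variants are not shown here, so the variant names and the transition table below are illustrative assumptions; only the `can_transition_to()` / warn-on-invalid pattern comes from the PR description:

```rust
// Illustrative variants; the real ToolState enum in
// cortex-protocol/src/protocol/message_parts.rs may differ.
#[derive(Debug, Clone, Copy, PartialEq)]
enum ToolState {
    Pending,
    Running,
    Completed,
    Failed,
}

impl ToolState {
    /// Returns true if moving from `self` to `next` is valid in the
    /// (assumed) state machine: Pending -> Running -> Completed | Failed.
    fn can_transition_to(self, next: ToolState) -> bool {
        use ToolState::*;
        matches!(
            (self, next),
            (Pending, Running) | (Running, Completed) | (Running, Failed)
        )
    }
}

fn update_tool_state(current: &mut ToolState, next: ToolState) {
    if current.can_transition_to(next) {
        *current = next;
    } else {
        // Per the PR description: log a warning instead of silently
        // applying an invalid transition (log::warn! in the real code).
        eprintln!("invalid ToolState transition: {:?} -> {:?}", current, next);
    }
}
```

Rejecting (rather than applying) invalid transitions keeps the state machine's invariants intact even when events arrive out of order.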
## Summary
Adds a maximum buffer size limit to StreamProcessor to prevent unbounded memory growth during long streaming sessions.
## Problem
The StreamProcessor's event buffer (VecDeque) has no size limit. If drain_events() is not called regularly, the buffer can grow indefinitely during long streams, potentially causing memory exhaustion.
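The failure mode is easy to reproduce: without a cap, the `VecDeque` grows linearly with every event received between `drain_events()` calls, while the capped version stays bounded. A minimal sketch contrasting the two push strategies (the cap is scaled down for illustration; the PR uses 10,000):

```rust
use std::collections::VecDeque;

// Scaled-down cap for illustration only.
const CAP: usize = 100;

fn push_uncapped(buf: &mut VecDeque<u64>, event: u64) {
    // Grows without bound if the caller never drains.
    buf.push_back(event);
}

fn push_capped(buf: &mut VecDeque<u64>, event: u64) {
    if buf.len() >= CAP {
        // Oldest event is sacrificed to bound memory use.
        buf.pop_front();
    }
    buf.push_back(event);
}
```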
## Changes

## Testing

## Verification