fix(opencode): scale prune thresholds to model context size #21209
Open
okuyam2y wants to merge 1 commit into anomalyco:dev from
Conversation
`PRUNE_PROTECT` (40K) and `PRUNE_MINIMUM` (20K) are hardcoded constants that worked well for 200K context models but are too aggressive for modern 1M+ models (Claude Opus, GPT-5.4, Gemini 2.5 Pro). At 1M context, 40K is only 4% — tool results get pruned far too early, causing the model to lose access to recent work.

Scale thresholds proportionally: protect = max(40K, context × 0.2), minimum = max(20K, protect × 0.5). This preserves the existing behavior for ≤200K models while giving larger contexts appropriate headroom (200K protected at 1M context).
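A minimal sketch of the scaling described above. The function name `pruneThresholds` comes from the PR; the constant names match the description, but the surrounding module structure is assumed:

```typescript
// Defaults tuned for ~200K-context models (per the PR description).
const PRUNE_PROTECT = 40_000;
const PRUNE_MINIMUM = 20_000;

// Scale prune thresholds with the model's context limit.
// A contextLimit of 0 (unknown model) falls back to the defaults.
function pruneThresholds(contextLimit: number): { protect: number; minimum: number } {
  const protect = Math.max(PRUNE_PROTECT, contextLimit * 0.2);
  const minimum = Math.max(PRUNE_MINIMUM, protect * 0.5);
  return { protect, minimum };
}
```

For a 200K model this yields the old constants (40K / 20K); for a 1M model it protects 200K and prunes down to no less than 100K.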
Author
CI unit test failures on Linux and Windows are both the same pre-existing flaky test (
Issue for this PR
Closes #21208
Type of change
What does this PR do?
`PRUNE_PROTECT` (40K) and `PRUNE_MINIMUM` (20K) are hardcoded constants that worked for 200K context models but are too aggressive for 1M+. At 1M context, only 4% of tool output is protected — results from recent turns get cleared well before the context is full.

Adds `pruneThresholds(contextLimit)`, which scales proportionally:

- protect = max(40K, context × 0.2)
- minimum = max(20K, protect × 0.5)

Preserves existing behavior for ≤200K models. The context limit is resolved from the session's last user message model via `provider.getModel`, falling back to 0 (default thresholds) when unavailable.

How did you verify your code works?

Unit tests for `pruneThresholds` (200K, 1M, 2M, fallback).

Checklist