
fix(opencode): scale prune thresholds to model context size#21209

Open
okuyam2y wants to merge 1 commit into anomalyco:dev from okuyam2y:fix/prune-threshold-scaling

Conversation


@okuyam2y okuyam2y commented Apr 6, 2026

Issue for this PR

Closes #21208

Type of change

  • Bug fix
  • New feature
  • Refactor / code improvement
  • Documentation

What does this PR do?

PRUNE_PROTECT (40K) and PRUNE_MINIMUM (20K) are hardcoded constants that worked for 200K-context models but are too aggressive for 1M+ context models. At 1M context, only 4% of tool output is protected — results from recent turns get cleared well before the context is full.

Adds pruneThresholds(contextLimit) that scales proportionally:

  • protect = max(40K, context × 0.2)
  • minimum = max(20K, protect × 0.5)

Preserves existing behavior for ≤200K models. The context limit is resolved from the session's last user message model via provider.getModel, falling back to 0 (default thresholds) when unavailable.
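The diff itself isn't shown here, so the exact shape of the helper is an assumption; but from the description above, a minimal sketch of `pruneThresholds` would look like this (constant names taken from the PR text, the rest hypothetical):

```typescript
// Default thresholds, sized for ~200K-context models (names from the PR).
const PRUNE_PROTECT = 40_000
const PRUNE_MINIMUM = 20_000

// Hypothetical sketch of the scaling described above: thresholds grow with
// the model's context limit but never drop below the old defaults, so
// behavior for <=200K models (and the contextLimit === 0 fallback) is
// unchanged.
function pruneThresholds(contextLimit: number): { protect: number; minimum: number } {
  const protect = Math.max(PRUNE_PROTECT, contextLimit * 0.2)
  const minimum = Math.max(PRUNE_MINIMUM, protect * 0.5)
  return { protect, minimum }
}
```

For example, a 1M-context model gets `protect = 200K` and `minimum = 100K`, while a 200K-context model gets exactly the old `40K`/`20K` pair, since `200K × 0.2 = 40K`.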

How did you verify your code works?

  • 4 new unit tests for pruneThresholds (200K, 1M, 2M, fallback)
  • All 42 compaction tests pass
  • Typecheck passes

Checklist

  • I have tested my changes locally
  • I have not included unrelated changes in this PR

PRUNE_PROTECT (40K) and PRUNE_MINIMUM (20K) are hardcoded constants
that worked well for 200K context models but are too aggressive for
modern 1M+ models (Claude Opus, GPT-5.4, Gemini 2.5 Pro). At 1M
context, 40K is only 4% — tool results get pruned far too early,
causing the model to lose access to recent work.

Scale thresholds proportionally: protect = max(40K, context * 0.2),
minimum = max(20K, protect * 0.5). This preserves the existing
behavior for ≤200K models while giving larger contexts appropriate
headroom (200K protected at 1M context).
Author

okuyam2y commented Apr 6, 2026

CI unit test failures on Linux and Windows are both the same pre-existing flaky test (concurrent loop callers get same result) — unrelated to this PR. All compaction tests pass, typecheck passes, and e2e passes on both platforms.



Development

Successfully merging this pull request may close these issues.

bug: prune thresholds are too aggressive for 1M+ context models

1 participant