
fix(opencode): cherry-pick GLM thinking recovery follow-up #65

Merged
terisuke merged 1 commit into dev from fix/cherry-pick-glm-followup on Apr 5, 2026

Conversation


@terisuke terisuke commented Apr 5, 2026

Cherry-pick of PR #63 (d33e51a) which was accidentally merged into the feature branch instead of dev.

Changes:

  • RepetitionError triggers compaction instead of stopping the turn
  • Filter stale unfinished assistant messages from resumed sessions
  • 3 regression tests added

Original PR: #63
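The first bullet can be illustrated with a minimal sketch. All names here (`RepetitionError`, `TurnOutcome`, `outcomeForError`) are hypothetical stand-ins for illustration, not the actual opencode API:

```typescript
// Hypothetical sketch, not the real opencode types.
class RepetitionError extends Error {}

type TurnOutcome = "stop" | "compact"

// Per the fix: a repetition loop now requests session compaction so the
// turn can recover, instead of ending the turn outright.
function outcomeForError(err: unknown): TurnOutcome {
  return err instanceof RepetitionError ? "compact" : "stop"
}
```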


Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Copilot AI review requested due to automatic review settings on April 5, 2026 at 08:16
@terisuke terisuke merged commit 0bbc6d1 into dev on Apr 5, 2026
4 of 8 checks passed

github-actions bot commented Apr 5, 2026

Thanks for your contribution!

This PR doesn't have a linked issue. All PRs must reference an existing issue.

Please:

  1. Open an issue describing the bug/feature (if one doesn't exist)
  2. Add Fixes #<number> or Closes #<number> to this PR description

See CONTRIBUTING.md for details.


github-actions bot commented Apr 5, 2026

This PR doesn't fully meet our contributing guidelines and PR template.

What needs to be fixed:

  • PR description is missing required template sections. Please use the PR template.

Please edit this PR description to address the above within 2 hours, or it will be automatically closed.

If you believe this was flagged incorrectly, please let a maintainer know.


Copilot AI left a comment


Pull request overview

Cherry-picks follow-up fixes to make session turns more resilient to GLM “thinking” corruption by (1) compacting instead of halting on repetition loops and (2) preventing previously-saved unfinished assistant output from being re-injected into resumed turns.

Changes:

  • Trigger session compaction on RepetitionError (rather than stopping the turn).
  • Sanitize prompt history by filtering stale unfinished assistant messages (missing finish) when building the next turn and when returning loop results.
  • Add regression coverage for repetition-loop compaction and unfinished-session recovery; remove duplicate imports in scenario harness.

Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 1 comment.

Per-file summary:

  • packages/opencode/src/session/processor.ts: Treat repetition-loop errors as compaction triggers in halt.
  • packages/opencode/src/session/prompt.ts: Sanitize message history to ignore unfinished assistant pollution and adjust loop return behavior.
  • packages/opencode/test/session/processor-effect.test.ts: Regression test: repetition loops in text output return "compact".
  • packages/opencode/test/session/prompt-effect.test.ts: Regression tests: ignore unfinished assistant pollution; return last finished assistant when stale unfinished trails.
  • packages/opencode/test/scenario/harness.ts: Remove duplicate imports.
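The stale-message filter in prompt.ts can be sketched in a self-contained form. The `MessageInfo` shape below is flattened for illustration (the real code operates on `MessageV2.WithParts`), and the exact semantics of `finish` are assumed:

```typescript
// Flattened stand-in for MessageV2.WithParts; illustration only.
interface MessageInfo {
  id: string
  role: "user" | "assistant"
  summary?: boolean
  finish?: string // assumed to be set once the assistant message completed
}

// Drop assistant messages that never finished (and are not summaries), so a
// resumed session does not re-inject truncated output into the next turn.
function sanitize(msgs: MessageInfo[]): MessageInfo[] {
  const stale = msgs.filter((m) => m.role === "assistant" && m.summary !== true && !m.finish)
  if (stale.length === 0) return msgs
  const ids = new Set(stale.map((m) => m.id))
  return msgs.filter((m) => !ids.has(m.id))
}
```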


Comment on lines +253 to +260
```ts
const sanitize = (msgs: MessageV2.WithParts[]) => {
  const drop = msgs.filter((msg) => msg.info.role === "assistant" && msg.info.summary !== true && !msg.info.finish)
  if (drop.length === 0) return msgs
  const ids = new Set(drop.map((msg) => msg.info.id))
  log.warn("ignoring incomplete assistant messages", {
    sessionID: drop[0]?.info.sessionID,
    ids: [...ids],
  })
```

Copilot AI Apr 5, 2026


sanitize() logs a warn every time it encounters the same unfinished assistant messages, but it doesn’t mutate/persist anything that would prevent repeated warnings on subsequent turns. If a session has a permanently-stale unfinished assistant message, this can spam logs on every loop() call. Consider de-duping (e.g., cache warned messageIDs per session/instance), lowering to debug, or only logging once per session until the stale message set changes.

Suggested change

```diff
-const sanitize = (msgs: MessageV2.WithParts[]) => {
-  const drop = msgs.filter((msg) => msg.info.role === "assistant" && msg.info.summary !== true && !msg.info.finish)
-  if (drop.length === 0) return msgs
-  const ids = new Set(drop.map((msg) => msg.info.id))
-  log.warn("ignoring incomplete assistant messages", {
-    sessionID: drop[0]?.info.sessionID,
-    ids: [...ids],
-  })
+const warnedIncompleteAssistantMessageIDsBySession = new Map<SessionID, Set<MessageID>>()
+const sanitize = (msgs: MessageV2.WithParts[]) => {
+  const drop = msgs.filter((msg) => msg.info.role === "assistant" && msg.info.summary !== true && !msg.info.finish)
+  if (drop.length === 0) return msgs
+  const sessionID = drop[0]?.info.sessionID
+  const ids = new Set(drop.map((msg) => msg.info.id))
+  const warned = warnedIncompleteAssistantMessageIDsBySession.get(sessionID) ?? new Set<MessageID>()
+  const unwarnedIDs = [...ids].filter((id) => !warned.has(id))
+  if (unwarnedIDs.length > 0) {
+    log.warn("ignoring incomplete assistant messages", {
+      sessionID,
+      ids: unwarnedIDs,
+    })
+  }
+  warnedIncompleteAssistantMessageIDsBySession.set(sessionID, ids)
```

