Feature hasn't been suggested before.
Describe the enhancement you want to request
When using models with large context windows (e.g., Claude Opus 4.6 with 1 million input context), the input context is rarely exhausted. However, the per-response output token cap (OUTPUT_TOKEN_MAX, default 32K) frequently causes the model to stop mid-task with finish_reason: "length". The user must then manually type "continue" to resume — which breaks the autonomous agent workflow.
Current behavior:
The loop in prompt.ts exits when finish_reason is anything other than "tool-calls" or "unknown":
if (
  lastAssistant?.finish &&
  !["tool-calls", "unknown"].includes(lastAssistant.finish) &&
  lastUser.id < lastAssistant.id
) {
  break
}
When finish_reason is "length", the loop breaks and waits for user input.
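To make the exit condition concrete, here is a minimal self-contained simulation of it. The `Msg` shape and `shouldExit` helper are illustrative only, not OpenCode's actual types; the point is that "length" is treated exactly like a deliberate "stop":

```typescript
// Illustrative message shape; OpenCode's real message types differ.
interface Msg {
  id: number
  role: "user" | "assistant"
  finish?: string // finish_reason on assistant messages
}

// Mirrors the exit test in prompt.ts: stop whenever the last assistant
// message finished for any reason other than "tool-calls" or "unknown".
function shouldExit(lastUser: Msg, lastAssistant: Msg | undefined): boolean {
  return Boolean(
    lastAssistant?.finish &&
      !["tool-calls", "unknown"].includes(lastAssistant.finish) &&
      lastUser.id < lastAssistant.id,
  )
}

const user: Msg = { id: 1, role: "user" }
console.log(shouldExit(user, { id: 2, role: "assistant", finish: "stop" }))       // true
console.log(shouldExit(user, { id: 2, role: "assistant", finish: "length" }))     // true — truncation also exits
console.log(shouldExit(user, { id: 2, role: "assistant", finish: "tool-calls" })) // false — loop continues
```

Because `"length"` falls through to the generic break, a truncated response is indistinguishable from a completed one as far as the loop is concerned.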
Requested behavior:
When finish_reason is "length", it would be nice for OpenCode to automatically inject a synthetic "continue" user message and re-enter the loop — the same pattern already used for subtask summaries. The model's output was truncated, not intentionally finished, so the loop should continue.
Suggested implementation
- Add "length" to the continuation conditions in prompt.ts, or handle it separately
- When finish_reason === "length" is detected, inject a synthetic user message (e.g., "Continue from where you left off.") and continue the loop instead of breaking
- The infrastructure for synthetic user messages already exists (used for subtask command summaries)
- Consider adding a config option (compaction.auto_continue or similar) to make this opt-in/out
- The MessageV2.OutputLengthError type is already defined in message-v2.ts but appears unused
- The plugin system cannot solve this, as there is no hook between the finish reason being set and the loop exit decision
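One possible shape for the change, sketched as a small standalone simulation. The names here (`runLoop`, the `autoContinue` flag, the literal continue text) are hypothetical and do not correspond to OpenCode's actual APIs; a fake sequence of finish reasons stands in for real model responses:

```typescript
interface Msg {
  id: number
  role: "user" | "assistant"
  text?: string
  finish?: string
}

// Simulated agent loop: each element of `finishes` plays the role of one
// model response. With autoContinue on, a "length" finish injects a
// synthetic user message (same pattern as subtask summaries) and keeps going.
function runLoop(finishes: string[], autoContinue: boolean): Msg[] {
  const history: Msg[] = [{ id: 1, role: "user", text: "do the task" }]
  let nextId = 2
  for (const finish of finishes) {
    history.push({ id: nextId++, role: "assistant", finish })
    if (finish === "tool-calls" || finish === "unknown") continue
    if (finish === "length" && autoContinue) {
      history.push({ id: nextId++, role: "user", text: "Continue from where you left off." })
      continue
    }
    break // any other finish reason ends the turn and waits for the user
  }
  return history
}

const withAuto = runLoop(["length", "length", "stop"], true)
const without = runLoop(["length", "length", "stop"], false)
console.log(withAuto.filter((m) => m.role === "user").length) // 3: original + 2 synthetic continues
console.log(without.filter((m) => m.role === "user").length)  // 1: loop broke at the first "length"
```

Gating the injection on a config flag, as suggested above, keeps today's behavior as the default while letting long-running agent sessions opt in.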