
fix(session): prevent assistant-last message array from reaching provider APIs#18421

Closed
brendandebeasi wants to merge 1 commit into anomalyco:dev from brendandebeasi:fork/fix/prefill-guard

Conversation

@brendandebeasi

Summary

  • Fixes "This model does not support assistant message prefill" errors that occur when the model message array ends with an assistant message
  • Adds a guard in toModelMessages that appends a synthetic user message whenever the filtered array ends with an assistant-role message

Problem

After compaction, error recovery, or when filtered-out user messages leave an assistant message at the tail of the model message array, some providers and the AI SDK reject the conversation. This manifests as:

AI_APICallError: This model does not support assistant message prefill. 
The conversation must end with a user message.

Observed with anthropic/claude-opus-4-6 where the error comes from the AI SDK's message validation before reaching the Anthropic API.

Solution

Added a guard at the end of toModelMessages() in message-v2.ts that detects when the filtered message array ends with an assistant role and appends a synthetic "Continue." user message to satisfy provider requirements.

This complements the existing synthetic user message injection in prompt.ts (line 506) which handles the task/command case but doesn't cover all edge cases (compaction, error recovery, etc.).
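The guard can be sketched roughly as follows. This is an illustrative reconstruction from the description above, not the actual opencode source: the `ensureUserFinal` helper name and the `ModelMessage` type shape are assumptions; only the assistant-last check and the synthetic "Continue." user message come from the PR.

```typescript
// Minimal sketch of the guard described above (hypothetical names/types).
type ModelMessage =
  | { role: "system"; content: string }
  | { role: "user"; content: string }
  | { role: "assistant"; content: string };

function ensureUserFinal(messages: ModelMessage[]): ModelMessage[] {
  const last = messages[messages.length - 1];
  if (last?.role === "assistant") {
    // Providers without prefill support require the conversation to end
    // with a user message, so append a synthetic one.
    return [...messages, { role: "user", content: "Continue." }];
  }
  return messages;
}
```

In the real fix this check would run as the final step of `toModelMessages()`, after compaction and message filtering, so every path that can leave an assistant message at the tail is covered.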

Related

  • Existing workaround in prompt.ts:505-528 for task command case
  • Affects all providers that don't support assistant prefill

fix(session): prevent assistant-last message array from reaching provider APIs

Some providers and the AI SDK reject conversations that end with an
assistant message (no prefill support). This can happen after compaction,
after error recovery, or when filtered-out user messages leave an
assistant message at the tail of the model message array.

Add a guard in toModelMessages that appends a synthetic user message
when the filtered array ends with an assistant role.
@github-actions github-actions bot added the needs:compliance (auto-close after 2 hours) and needs:issue labels on Mar 20, 2026
@github-actions
Contributor

This PR doesn't fully meet our contributing guidelines and PR template.

What needs to be fixed:

  • PR description is missing required template sections. Please use the PR template.

Please edit this PR description to address the above within 2 hours, or it will be automatically closed.

If you believe this was flagged incorrectly, please let a maintainer know.

@github-actions
Contributor

Thanks for your contribution!

This PR doesn't have a linked issue. All PRs must reference an existing issue.

Please:

  1. Open an issue describing the bug/feature (if one doesn't exist)
  2. Add Fixes #<number> or Closes #<number> to this PR description

See CONTRIBUTING.md for details.

@github-actions
Contributor

The following comment was made by an LLM, it may be inaccurate:

Based on the search results, I found three potentially related PRs:

Potential Duplicates/Related PRs:

  1. PR fix: disable assistant prefill for Claude 4.6 models #14772 - "fix: disable assistant prefill for Claude 4.6 models"

    • Related to the same issue of Claude models rejecting assistant message prefill
    • May address part of the same problem for a specific provider
  2. PR fix: keep Copilot Claude prompts user-final across continuation turns #16921 - "fix: keep Copilot Claude prompts user-final across continuation turns"

    • Related to ensuring conversations end with user messages (user-final)
    • Addresses a similar pattern of managing message endings across turns
  3. PR fix(session): prevent infinite loop in auto-compaction when assistant ended its turn #15532 - "fix(session): prevent infinite loop in auto-compaction when assistant ended its turn"

    • Related to the compaction edge case mentioned in your PR description as contributing to the issue

The current PR (18421) appears to be a more comprehensive solution addressing the root cause across all providers, whereas the earlier PRs may have been targeted fixes for specific scenarios.

@github-actions
Contributor

This pull request has been automatically closed because it was not updated to meet our contributing guidelines within the 2-hour window.

Feel free to open a new pull request that follows our guidelines.
