fix(session): prevent assistant-last message array from reaching provider APIs #18421
brendandebeasi wants to merge 1 commit into anomalyco:dev
Conversation
Some providers and the AI SDK reject conversations ending with an assistant message (no prefill support). This can happen after compaction, error recovery, or when filtered-out user messages leave an assistant message at the tail of the model message array. Add a guard in `toModelMessages` that appends a synthetic user message when the filtered array ends with an assistant role.
This PR doesn't fully meet our contributing guidelines and PR template. What needs to be fixed:
Please edit this PR description to address the above within 2 hours, or it will be automatically closed. If you believe this was flagged incorrectly, please let a maintainer know.
Thanks for your contribution! This PR doesn't have a linked issue. All PRs must reference an existing issue. Please:
See CONTRIBUTING.md for details.
The following comment was made by an LLM; it may be inaccurate: Based on the search results, I found two potentially related PRs.

Potential Duplicates/Related PRs:
The current PR (18421) appears to be a more comprehensive solution addressing the root cause across all providers, whereas the earlier PRs may have been targeted fixes for specific scenarios.
This pull request has been automatically closed because it was not updated to meet our contributing guidelines within the 2-hour window. Feel free to open a new pull request that follows our guidelines.
Summary
Add a guard in `toModelMessages` that appends a synthetic user message when the filtered array ends with an assistant role.

Problem
After compaction, error recovery, or when filtered-out user messages leave an assistant message at the tail of the model message array, some providers and the AI SDK reject the conversation. This manifests as:
Observed with `anthropic/claude-opus-4-6`, where the error comes from the AI SDK's message validation before reaching the Anthropic API.

Solution
Added a guard at the end of `toModelMessages()` in `message-v2.ts` that detects when the filtered message array ends with an assistant role and appends a synthetic `"Continue."` user message to satisfy provider requirements.

This complements the existing synthetic user message injection in
`prompt.ts` (line 506), which handles the task/command case but doesn't cover all edge cases (compaction, error recovery, etc.).

Related
`prompt.ts:505-528` for the task/command case
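The guard described in the Solution section can be sketched roughly as follows. This is a minimal illustration, not the actual diff: the message type is simplified, and `ensureUserLast` is a hypothetical name for logic that in the real PR lives inline at the end of `toModelMessages()` in `message-v2.ts`.

```typescript
// Simplified stand-in for the AI SDK's model message shape.
type ModelMessage = {
  role: "system" | "user" | "assistant" | "tool"
  content: string
}

// Hypothetical helper sketching the guard: if the filtered conversation
// ends with an assistant message, append a synthetic "Continue." user
// message so providers that reject assistant-last arrays (no prefill
// support) accept the payload. Otherwise return the array unchanged.
function ensureUserLast(messages: ModelMessage[]): ModelMessage[] {
  const last = messages[messages.length - 1]
  if (last?.role === "assistant") {
    return [...messages, { role: "user", content: "Continue." }]
  }
  return messages
}
```

Because the guard only fires on an assistant-last tail, conversations already ending with a user (or tool) message pass through untouched, so the common path is unaffected.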