Description
When a non-Claude model (e.g. kimi-k2.5) is configured via the Anthropic SDK (@ai-sdk/anthropic) with interleaved reasoning enabled, reasoning content from previous assistant turns is silently dropped from subsequent requests.
The root cause is that the Anthropic SDK only knows how to serialize thinking blocks — it has no concept of a top-level reasoning_content field on assistant messages. So when opencode replays prior assistant messages that contain reasoning parts, the SDK discards those parts and the upstream API never receives the reasoning history.
Steps to reproduce
- Configure a non-Claude model (e.g. kimi-k2.5) using the @ai-sdk/anthropic provider
- Enable thinking/reasoning
- Have a multi-turn conversation where the model produces reasoning content
- Observe that on subsequent turns, the prior reasoning content is missing from the request sent to the API
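The mismatch can be sketched in isolation. The types and function names below are illustrative, not the actual SDK source: they approximate the AI SDK's assistant-message parts on one side and Anthropic-style content blocks on the other, to show how a serializer that only understands text and thinking blocks silently drops reasoning parts, and what a preserving mapping might look like.

```typescript
// Hypothetical shapes approximating the AI SDK's assistant message parts
// and Anthropic's wire-format content blocks (names are assumptions).
type AssistantPart =
  | { type: "text"; text: string }
  | { type: "reasoning"; text: string };

type AnthropicBlock =
  | { type: "text"; text: string }
  | { type: "thinking"; thinking: string; signature: string };

// Mimics the reported behavior: only "text" parts survive serialization,
// so reasoning from prior turns never reaches the upstream API.
function serializeDroppingReasoning(parts: AssistantPart[]): AnthropicBlock[] {
  return parts.flatMap((p): AnthropicBlock[] =>
    p.type === "text" ? [{ type: "text", text: p.text }] : []
  );
}

// One possible fix: map reasoning parts onto thinking-style blocks so the
// reasoning history is preserved when prior turns are replayed.
function serializePreservingReasoning(parts: AssistantPart[]): AnthropicBlock[] {
  return parts.map((p): AnthropicBlock =>
    p.type === "reasoning"
      ? { type: "thinking", thinking: p.text, signature: "" }
      : { type: "text", text: p.text }
  );
}

// A prior assistant turn containing both reasoning and visible text.
const turn: AssistantPart[] = [
  { type: "reasoning", text: "The user wants X, so..." },
  { type: "text", text: "Here is the answer." },
];

console.log(serializeDroppingReasoning(turn).length);   // 1 — reasoning dropped
console.log(serializePreservingReasoning(turn).length); // 2 — reasoning kept
```

This is only a model of the failure, not a proposed patch: a real fix would need to decide where non-Claude reasoning belongs in the Anthropic wire format (e.g. whether a thinking block without a valid signature is acceptable to the upstream API).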