
fix: merge anthropic parallel tool results #7875

Merged
Soulter merged 1 commit into AstrBotDevs:master from daniel5u:fix/anthropicParallelTool
Apr 28, 2026

Conversation

Contributor

@daniel5u daniel5u commented Apr 28, 2026

Fixes the issue described in #7871: payload construction for the Anthropic API is adjusted to match the Anthropic SDK's requirements for parallel tool calls (consecutive tool_result blocks are merged and returned in a single message).
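The merging described above can be sketched roughly like this. This is a simplified stand-in, not AstrBot's actual `_prepare_payload` code; the `role`/`tool_call_id` message shape is an assumption based on the common OpenAI-style format:

```python
# Hedged sketch: collapse consecutive role=="tool" messages into a single
# user message whose content is a list of tool_result blocks, as the
# Anthropic API expects when an assistant turn issued parallel tool calls.

def merge_tool_results(messages):
    merged = []
    for msg in messages:
        if msg.get("role") == "tool":
            block = {
                "type": "tool_result",
                "tool_use_id": msg["tool_call_id"],
                "content": msg["content"],
            }
            last = merged[-1] if merged else None
            # Append to the previous user message only if it already holds
            # nothing but tool_result blocks; otherwise start a new one.
            if (
                last is not None
                and last["role"] == "user"
                and isinstance(last["content"], list)
                and all(c.get("type") == "tool_result" for c in last["content"])
            ):
                last["content"].append(block)
            else:
                merged.append({"role": "user", "content": [block]})
        else:
            merged.append(msg)
    return merged
```

With this shape, two tool results that directly follow one assistant turn end up as two tool_result blocks inside one user message, while a tool result after a later assistant turn starts a fresh user message.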

Modifications

  • This is NOT a breaking change.

Screenshots or Test Results


Checklist

  • 😊 If there are new features added in the PR, I have discussed it with the authors through issues/emails, etc.

  • 👀 My changes have been well-tested, and "Verification Steps" and "Screenshots" have been provided above.

  • 🤓 I have ensured that no new dependencies are introduced, OR if new dependencies are introduced, they have been added to the appropriate locations in requirements.txt and pyproject.toml.

  • 😮 My changes do not introduce malicious code.

Summary by Sourcery

Adjust Anthropic provider payload construction to correctly group tool call results for parallel tool usage.

Bug Fixes:

  • Merge consecutive Anthropic tool results into a single user message to align with the SDK’s parallel tool-calling expectations.

Tests:

  • Add unit tests covering merging of consecutive tool results, handling of single tool results, and ensuring non-consecutive results are not merged.

@dosubot dosubot Bot added size:M This PR changes 30-99 lines, ignoring generated files. area:provider The bug / feature is about AI Provider, Models, LLM Agent, LLM Agent Runner. labels Apr 28, 2026
Contributor

@sourcery-ai sourcery-ai Bot left a comment


Hey - I've found 1 issue, and left some high level feedback:

  • The can_append_to_previous_tool_results condition in _prepare_payload is fairly dense; consider extracting this check into a small helper (e.g., _is_tool_result_user_message(last_message)) to improve readability and make the intent clearer.
  • In the tool-result merge logic, you rely on new_messages[-1] when new_messages is non-empty; adding an early return or guard when the last message's content is not a list (instead of computing last_content and all(...)) could simplify the control flow and make it easier to follow.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- The `can_append_to_previous_tool_results` condition in `_prepare_payload` is fairly dense; consider extracting this check into a small helper (e.g., `_is_tool_result_user_message(last_message)`) to improve readability and make the intent clearer.
- In the tool-result merge logic, you rely on `new_messages[-1]` when `new_messages` is non-empty; adding an early return or guard when the last message's content is not a list (instead of computing `last_content` and `all(...)`) could simplify the control flow and make it easier to follow.

## Individual Comments

### Comment 1
<location path="tests/test_anthropic_kimi_code_provider.py" line_range="98-102" />
<code_context>
         )
+
+
+def _make_anthropic_provider_for_payload_tests() -> anthropic_source.ProviderAnthropic:
+    return anthropic_source.ProviderAnthropic(
+        provider_config={"model": "claude-test"},
+        provider_settings={},
+        use_api_key=False,
+    )
+
</code_context>
<issue_to_address>
**suggestion (testing):** Consider one test where the sequence starts with a tool message to guard the empty-`new_messages` path

The new `last_message = new_messages[-1] if new_messages else None` path isn’t covered by a case where the first input message is `role: "tool"`. Please add a test where `messages` begins with a tool message (no prior user/assistant), to confirm `_prepare_payload` correctly handles initially empty `new_messages` and still produces a single user message with one `tool_result`.
</issue_to_address>
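A standalone sketch of that suggested case. The `merge_tool_results` helper here is a local stand-in for the provider's merging step, not the real `_prepare_payload`, so field names are assumptions:

```python
# Hedged sketch: when the input starts with a role=="tool" message, the
# merged list is still empty at that point, so the `merged[-1] if merged
# else None` guard is what keeps this from raising IndexError.

def merge_tool_results(messages):
    merged = []
    for msg in messages:
        if msg.get("role") == "tool":
            block = {
                "type": "tool_result",
                "tool_use_id": msg["tool_call_id"],
                "content": msg["content"],
            }
            last = merged[-1] if merged else None  # guards the empty case
            if (
                last is not None
                and last["role"] == "user"
                and isinstance(last["content"], list)
            ):
                last["content"].append(block)
            else:
                merged.append({"role": "user", "content": [block]})
        else:
            merged.append(msg)
    return merged


def test_sequence_starting_with_tool_message():
    messages = [{"role": "tool", "tool_call_id": "t1", "content": "result"}]
    merged = merge_tool_results(messages)
    assert len(merged) == 1
    assert merged[0]["role"] == "user"
    assert merged[0]["content"] == [
        {"type": "tool_result", "tool_use_id": "t1", "content": "result"}
    ]
```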


Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request modifies the Anthropic provider's payload preparation to merge consecutive tool results into a single user message, ensuring compatibility with Anthropic's API requirements for tool usage. New unit tests have been added to validate this merging behavior and ensure non-consecutive results are handled correctly. I have no feedback to provide.

@dosubot dosubot Bot added the lgtm This PR has been approved by a maintainer label Apr 28, 2026
@Soulter Soulter merged commit 66d620d into AstrBotDevs:master Apr 28, 2026
21 checks passed