
fix(provider): skills-like re-query falls back to tool_choice auto instead of dropping all tools for models that do not support tool_choice #7856

Closed
Hola-Gracias wants to merge 3 commits into AstrBotDevs:master from Hola-Gracias:fix/#7853

Conversation

@Hola-Gracias Hola-Gracias commented Apr 27, 2026

fix: #7853

Modifications

  1. astrbot/core/provider/sources/openai_source.py — the "model does not support tools" check in the error handler now excludes the tool_choice keyword, so a tool_choice incompatibility is no longer misclassified as the model lacking tool-calling support. The original error message is also demoted to a DEBUG log.
  2. astrbot/core/agent/runners/tool_loop_agent_runner.py — the two re-queries in _resolve_tool_exec are extracted into a _requery() method: it first requests with tool_choice="required", and if the provider returns a tool_choice-related error it automatically retries with "auto", keeping the full tool definitions.
  • This is NOT a breaking change.

Screenshots or Test Results

[2026-04-27 23:02:16.274] [Core] [INFO] [runners.tool_loop_agent_runner:1340]: tool_choice='required' is not supported by the current model; retrying with 'auto'.

[2026-04-27 23:02:16.275] [Core] [DEBUG] [runners.tool_loop_agent_runner:1343]: Original error: Error code: 400 - {'error': {'message': 'deepseek-reasoner does not support this tool_choice', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_request_error'}}


Checklist

  • 😊 If there are new features added in the PR, I have discussed it with the authors through issues/emails, etc.

  • 👀 My changes have been well-tested, and "Verification Steps" and "Screenshots" have been provided above.

  • 🤓 I have ensured that no new dependencies are introduced, OR if new dependencies are introduced, they have been added to the appropriate locations in requirements.txt and pyproject.toml.

  • 😮 My changes do not introduce malicious code.

Summary by Sourcery

Handle models that do not support tool_choice="required" by degrading to tool_choice="auto" while preserving tool definitions, and avoid misclassifying tool_choice incompatibility as lack of tool support.

Bug Fixes:

  • Fix skills-like re-query so that models without tool_choice="required" support fall back to tool_choice="auto" instead of dropping all tools.
  • Avoid treating tool_choice-related errors as signals that a model does not support tools, and log the original error message at debug level.

@dosubot dosubot Bot added size:M This PR changes 30-99 lines, ignoring generated files. area:core The bug / feature is about astrbot's core, backend area:provider The bug / feature is about AI Provider, Models, LLM Agent, LLM Agent Runner. labels Apr 27, 2026
Contributor

@gemini-code-assist gemini-code-assist Bot left a comment

Code Review

This pull request introduces a _requery method in the tool loop agent runner to handle cases where a model does not support tool_choice='required', falling back to auto. It also refines error handling in the OpenAI source to distinguish between general tool support issues and specific tool_choice errors. Feedback suggests adding unit tests for the new fallback logic, addressing potential model name mismatches when using fallback providers, and implementing a caching mechanism for tool_choice support to avoid redundant API failures and reduce latency.

Comment on lines +1320 to +1353

```python
    async def _requery(
        self,
        contexts: list,
        param_subset: ToolSet,
    ) -> LLMResponse | None:
        """Send a re-query with tool_choice='required', falling back to 'auto' if
        the provider does not support the 'required' value (e.g. deepseek-reasoner).
        """
        try:
            return await self.provider.text_chat(
                contexts=self._sanitize_contexts_for_provider(contexts),
                func_tool=param_subset,
                model=self.req.model,
                session_id=self.req.session_id,
                extra_user_content_parts=self.req.extra_user_content_parts,
                tool_choice="required",
                abort_signal=self._abort_signal,
            )
        except Exception as e:
            if "tool_choice" in str(e).lower():
                logger.info(
                    "tool_choice='required' is not supported by the current model; retrying with 'auto'.",
                )
                logger.debug(f"Original error: {e}")
                return await self.provider.text_chat(
                    contexts=self._sanitize_contexts_for_provider(contexts),
                    func_tool=param_subset,
                    model=self.req.model,
                    session_id=self.req.session_id,
                    extra_user_content_parts=self.req.extra_user_content_parts,
                    tool_choice="auto",
                    abort_signal=self._abort_signal,
                )
            raise
```
Contributor


medium

The new re-query logic and its fallback handling should come with unit tests that verify tool_choice errors are correctly identified and retried as expected. Given how widely error messages vary across LLM providers, testing the robustness of this logic is especially important.

References
  1. New behavior (such as the tool_choice fallback logic) should be accompanied by unit tests.

```python
            return await self.provider.text_chat(
                contexts=self._sanitize_contexts_for_provider(contexts),
                func_tool=param_subset,
                model=self.req.model,
```
Contributor


medium

Passing self.req.model directly during the re-query may cause the request to fail when a fallback provider is enabled. Fallback providers usually have their own model configuration and may not recognize the model name from the original request. Consider letting the provider use its own default model configuration (e.g. by passing None) when the current provider is not the primary one.

Comment on lines +1338 to +1353
except Exception as e:
if "tool_choice" in str(e).lower():
logger.info(
f"tool_choice='required' 不被当前模型支持,降级为 'auto' 重试。",
)
logger.debug(f"原始错误: {e}")
return await self.provider.text_chat(
contexts=self._sanitize_contexts_for_provider(contexts),
func_tool=param_subset,
model=self.req.model,
session_id=self.req.session_id,
extra_user_content_parts=self.req.extra_user_content_parts,
tool_choice="auto",
abort_signal=self._abort_signal,
)
raise
Contributor


medium

If the model does not support tool_choice='required', every call to _requery first issues an API request that is guaranteed to fail. Since _resolve_tool_exec may call _requery twice in a single run, this adds latency. Consider caching whether the model supports 'required' on the runner instance. In a single-threaded asyncio event loop, synchronously mutating such shared state is atomic, so there is no race condition.

References
  1. In a single-threaded asyncio event loop, synchronous code (blocks without 'await') executes atomically and is safe for mutating shared state.
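A minimal sketch of the caching idea, assuming a small state object held on the runner instance; the class and method names are illustrative only, not part of the PR:

```python
# Per-runner cache of whether the model accepts tool_choice='required', as
# the review suggests, so repeated _requery calls skip the doomed request.
class ToolChoiceSupportCache:
    def __init__(self) -> None:
        self._required_supported: bool | None = None  # None means unknown

    def effective_choice(self) -> str:
        # Optimistically try 'required' until a failure has been observed.
        return "auto" if self._required_supported is False else "required"

    def record_required_rejected(self) -> None:
        # Call when the provider rejects tool_choice='required'; later
        # re-queries then go straight to 'auto'.
        self._required_supported = False
```

With this in place, _requery would consult effective_choice() before the first request and call record_required_rejected() in the fallback branch, so at most one guaranteed-to-fail request is made per runner.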

Contributor

@sourcery-ai sourcery-ai Bot left a comment


Hey - I've found 1 issue, and left some high level feedback:

  • In _requery, catching a broad Exception and checking for 'tool_choice' in the string is a bit fragile; consider narrowing the exception type (e.g., to your provider error class) or using structured error attributes if available to avoid misclassifying other failures.
  • The string checks in _handle_api_error and _requery are duplicated and rely on specific English phrases; it might be worth centralizing these model-capability checks or introducing a helper to encapsulate the logic so behavior stays consistent as providers or messages evolve.
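The second point might look like the following pair of helpers; the names and the message fragments matched are illustrative, not the PR's actual strings:

```python
# Centralized heuristics for classifying provider errors, so the checks in
# _handle_api_error and _requery stay consistent as messages evolve.
def is_tool_choice_error(exc: Exception) -> bool:
    """The provider rejected the tool_choice value specifically."""
    return "tool_choice" in str(exc).lower()

def is_tools_unsupported_error(exc: Exception) -> bool:
    """The model appears to lack tool calling entirely.

    tool_choice complaints are excluded so they are not misclassified.
    """
    msg = str(exc).lower()
    return "does not support" in msg and "tool" in msg and not is_tool_choice_error(exc)
```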
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- In `_requery`, catching a broad `Exception` and checking for `'tool_choice'` in the string is a bit fragile; consider narrowing the exception type (e.g., to your provider error class) or using structured error attributes if available to avoid misclassifying other failures.
- The string checks in `_handle_api_error` and `_requery` are duplicated and rely on specific English phrases; it might be worth centralizing these model-capability checks or introducing a helper to encapsulate the logic so behavior stays consistent as providers or messages evolve.

## Individual Comments

### Comment 1
<location path="astrbot/core/agent/runners/tool_loop_agent_runner.py" line_range="1320-1329" />
<code_context>

         return llm_resp, subset

+    async def _requery(
+        self,
+        contexts: list,
+        param_subset: ToolSet,
+    ) -> LLMResponse | None:
+        """Send a re-query with tool_choice='required', falling back to 'auto' if
+        the provider does not support the 'required' value (e.g. deepseek-reasoner).
+        """
+        try:
+            return await self.provider.text_chat(
+                contexts=self._sanitize_contexts_for_provider(contexts),
+                func_tool=param_subset,
+                model=self.req.model,
+                session_id=self.req.session_id,
+                extra_user_content_parts=self.req.extra_user_content_parts,
+                tool_choice="required",
+                abort_signal=self._abort_signal,
+            )
+        except Exception as e:
+            if "tool_choice" in str(e).lower():
+                logger.info(
</code_context>
<issue_to_address>
**issue (bug_risk):** Avoid catching and handling asyncio.CancelledError inside _requery

Catching `Exception` here will also swallow `asyncio.CancelledError`, breaking cooperative cancellation (e.g., aborted requests or upstream task cancels). Please re-raise `CancelledError` before handling other exceptions, e.g.:

```python
        except Exception as e:
            if isinstance(e, asyncio.CancelledError):
                raise
            if "tool_choice" in str(e).lower():
                ...
```

This keeps cancellation semantics intact while still handling the `tool_choice` fallback.
</issue_to_address>

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

Comment on lines +1320 to +1329

```python
    async def _requery(
        self,
        contexts: list,
        param_subset: ToolSet,
    ) -> LLMResponse | None:
        """Send a re-query with tool_choice='required', falling back to 'auto' if
        the provider does not support the 'required' value (e.g. deepseek-reasoner).
        """
        try:
            return await self.provider.text_chat(
```
Contributor


issue (bug_risk): Avoid catching and handling asyncio.CancelledError inside _requery

Catching Exception here will also swallow asyncio.CancelledError, breaking cooperative cancellation (e.g., aborted requests or upstream task cancels). Please re-raise CancelledError before handling other exceptions, e.g.:

```python
        except Exception as e:
            if isinstance(e, asyncio.CancelledError):
                raise
            if "tool_choice" in str(e).lower():
                ...
```

This keeps cancellation semantics intact while still handling the tool_choice fallback.

@Soulter Soulter closed this in 6b36e1a Apr 28, 2026
LIghtJUNction pushed a commit that referenced this pull request Apr 28, 2026

Labels

area:core — The bug / feature is about astrbot's core, backend.
area:provider — The bug / feature is about AI Provider, Models, LLM Agent, LLM Agent Runner.
size:M — This PR changes 30-99 lines, ignoring generated files.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[Bug] When the tool-calling mode is skills-like, the deepseek-v4-flash model reports an error

1 participant