Add supports_parallel_tool_calls flag to included mcps#17667
Merged
josiah-openai merged 1 commit into main on Apr 13, 2026
Conversation
Contributor
All contributors have signed the CLA ✍️ ✅
Force-pushed bf9bc5a to 6ffbc1b
Contributor
Author
I have read the CLA Document and I hereby sign the CLA
Force-pushed 6ffbc1b to 8b08616
Renamed: support_parallel_tool_calls flag to included mcps → supports_parallel_tool_calls flag to included mcps
jif-oai
approved these changes
Apr 13, 2026
jif-oai (Collaborator) left a comment
Ok once my comments are processed
.await
.list_all_tools()
.await;
let parallel_mcp_server_names = exec
Collaborator
this looks unused here...
Contributor
Author
Done. This one is slightly odd because js_repl appears to use its own tool-call router/dispatcher, but I didn't want to change the parameters enough to allow leaving out the server names.
Force-pushed 8b08616 to eb849d9
Why
For more advanced MCP usage, we want the model to be able to emit parallel MCP tool calls and have Codex execute eligible ones concurrently, instead of forcing all MCP calls through the serial block.
The main design choice was where to thread the config. I made this server-level because parallel safety depends on the MCP server implementation. Codex reads the flag from `mcp_servers`, threads the opted-in server names into `ToolRouter`, and checks the parsed `ToolPayload::Mcp { server, .. }` at execution time. That avoids relying on model-visible tool names, which can be incomplete in deferred/search-tool paths or ambiguous for similarly named servers/tools.

What was added
Added `supports_parallel_tool_calls` for MCP servers.

Before:

After:
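As a rough sketch of the shape of the change (the server name and command below are hypothetical; only the flag itself comes from this PR), the opt-in sits alongside the server's definition in the Codex `config.toml`:

```toml
# Hypothetical MCP server entry — name and command are illustrative.
[mcp_servers.docs]
command = "npx"
args = ["-y", "docs-mcp-server"]

# New in this PR: opt this server's tools into concurrent execution.
# Leave unset (the default) to keep calls on the serial path.
supports_parallel_tool_calls = true
```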
MCP calls remain serial by default. Only tools from opted-in servers are eligible to run in parallel. Docs also now warn to enable this only when the server’s tools are safe to run concurrently, especially around shared state or read/write races.
Testing
Tested with a local stdio MCP server exposing real delay tools. The model/Responses side was mocked only to deterministically emit two MCP calls in the same turn.
Each test called `query_with_delay` and `query_with_delay_2` with `{ "seconds": 25 }`. The three runs took 58.79s, 31.73s, and 56.70s; the PR with the flag enabled (the ~31.73s run) showed both tools starting before either completed, while main and the PR without the flag completed the first delay before starting the second.
Also added an integration test.
Additional checks:
- `cargo test -p codex-tools` passed
- `cargo test -p codex-core mcp_parallel_support_uses_exact_payload_server` passed
- `git diff --check` passed