Python: Fix FoundryAgent stripping model from PromptAgent requests #5526
benke520 wants to merge 2 commits into microsoft:main
Conversation
Move run_options.pop('model', None) inside the _uses_foundry_agent_session()
conditional so that model is only stripped for hosted agent sessions (where
the server manages the model) and preserved for PromptAgent requests that
require it in the Responses API call.
Fixes microsoft#5525
Pull request overview
Fixes a regression in the Python Foundry agent client where RawFoundryAgentChatClient._prepare_options() stripped the model field from all requests, breaking PromptAgent calls that use the OpenAI Responses API (which requires model).
Changes:
- Strip `model` only when a Foundry hosted agent session is detected (`_uses_foundry_agent_session(conversation_id)`), preserving `model` for non-session (PromptAgent) requests.
- Update the existing unit test to assert `model` is preserved for non-session requests.
- Add a new unit test to assert `model` is stripped for hosted agent sessions and that session fields are mapped into `extra_body`.
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| python/packages/foundry/agent_framework_foundry/_agent.py | Moves `run_options.pop("model", None)` into the hosted-session-only branch so PromptAgent requests keep `model`. |
| python/packages/foundry/tests/foundry/test_foundry_agent.py | Adjusts/extends coverage to validate `model` preservation for non-session requests and stripping for hosted sessions. |
Python Test Coverage Report • Python Unit Test Overview (coverage table not captured)
```python
if _uses_foundry_agent_session(conversation_id):
    run_options.pop("previous_response_id", None)
    run_options.pop("conversation", None)
    run_options.pop("model", None)
```
The fix correctly resolves the PromptAgent 400. One question on the gating choice: this uses _uses_foundry_agent_session(conversation_id) as a proxy for "model is server-managed," which covers the two cases under test (PromptAgent with no session, HostedAgent with session) but not HostedAgent-without-session, which is supported per the docstring at _agent.py:848-853, 900-903 (HostedAgents only need a session for preview-only APIs / isolation_key lazy creation).
Two questions:
- Is HostedAgent-without-session expected to send model to the Responses API? This PR preserves the pre-#5447 ("Python: update FoundryAgent for hosted agent sessions") behavior for that path, so it's not a regression, but if model is supposed to be agent-owned for all HostedAgent calls (which the _check_model_presence override at _agent.py:395 hints at as an intent for the agent class as a whole), the gate should probably key on agent type rather than session shape. A test for HostedAgent + no session would pin the intended behavior either way. @TaoChenOSU, thoughts here?
- It might be worth a one-line comment on `_uses_foundry_agent_session(...)` here explaining why it's the right gate (like, "Foundry sessions manage model server-side; the Responses-API path requires it"). Right now a future reader has to reverse-engineer the rationale from #5447 ("Python: update FoundryAgent for hosted agent sessions") plus this PR.
Yes, the platform tolerates model in the request for HostedAgent v1 and v2-no-session. Verified live against a Foundry endpoint. Typically users don't pass model for hosted agents (the server owns it), but if they do, the platform simply ignores it -- no error.
The strip is only functionally necessary for the agent_session_id path where the wire format is fundamentally different (it's a routing envelope, not a real Responses API call).
Live validation results with model explicitly in default_options:
| Scenario | Without fix (main) | With fix |
|---|---|---|
| PromptAgent | ❌ 400 `'model' is a required property` | ✅ model preserved |
| HostedAgent v1 (no allow_preview) + model | ✅ | ✅ (model stays, platform ignores it) |
| HostedAgent v2 no server session (allow_preview=True, no isolation_key) + model | ✅ | ✅ (model stays, platform ignores it) |
| HostedAgent v2 with server session (allow_preview=True + isolation_key) + model | ✅ (model stripped) | ✅ (gate fires, model stripped) |
So to answer directly: yes, HostedAgent-without-session sends model to the Responses API, and the platform is fine with it. This PR preserves pre-#5447 behavior for that path (no regression), and the platform's tolerance means there's no correctness issue either.
```python
    assert result == {}


async def test_raw_foundry_agent_chat_client_prepare_options_strips_model_for_hosted_session() -> None:
```
I'm not sure if this covers everything; we have three scenarios that we need to cover:
- Prompt agents in Foundry
- Hosted Agents v1
- Hosted Agents v2

Each has its own particulars in what it does and does not support. For now the allow_preview flag is the surefire way to talk to v2 Hosted Agents, but the others also need to be covered well.
The gate `_uses_foundry_agent_session(conversation_id)` correctly handles all three:
- PromptAgent: `conversation_id` is `None` (first turn) or `resp_*`/`conv_*` (subsequent turns) -- gate returns `False` -- model preserved ✅
- HostedAgent v1: same as PromptAgent from the wire perspective -- uses `resp_*` continuity, no agent session ID -- gate returns `False` -- model stays (harmless, platform ignores it) ✅
- HostedAgent v2 with session: only when `isolation_key` creates a server session does `conversation_id` become an opaque agent session handle -- gate returns `True` -- model stripped ✅
I validated all three live against a Foundry endpoint. The allow_preview flag doesn't affect the gate logic -- what matters is whether the platform returns an agent session handle (opaque string, not resp_*/conv_*) as the conversation_id, which only happens with v2 + isolation_key.
I've also added a new test (test_raw_foundry_agent_chat_client_prepare_options_preserves_model_for_resp_continuation) that explicitly covers the HostedAgent v1 / v2-no-session path -- verifying model and previous_response_id are preserved when conversation_id is resp_*. All 6 _prepare_options tests pass.
Adds test_raw_foundry_agent_chat_client_prepare_options_preserves_model_for_resp_continuation to explicitly verify that HostedAgent v1 / v2-no-session paths (where conversation_id starts with resp_) preserve model and previous_response_id without triggering the hosted-session gate.
Bug Fix
Fixes #5525
Problem
FoundryAgent._prepare_options() unconditionally strips the model key from run_options via run_options.pop('model', None). This was introduced in PR #5447 to support hosted agent sessions.
However, PromptAgents use the OpenAI Responses API, which requires model in every request. Stripping it causes a 400 Bad Request error:
BadRequestError: Error code: 400 - {'error': {'message': "'model' is a required property"}}
Root Cause
The run_options.pop('model', None) call was placed outside the if _uses_foundry_agent_session(conversation_id): block, so it ran for all FoundryAgent requests -- not just hosted agent sessions where the server manages the model.
Fix
Move run_options.pop('model', None) inside the if _uses_foundry_agent_session(conversation_id): conditional block, so model is only stripped for hosted agent sessions and preserved for PromptAgent requests.
Tests
All 35 tests pass (2 skipped).