Subagents stay stuck as "awaiting instruction" and do not reliably follow the default GPT-5.4 model #14866

@meAloex

Description

What version of the Codex App are you using (From “About Codex” dialog)?

26.313.4956.0

What subscription do you have?

Pro

What platform is your computer?

Microsoft Windows NT 10.0.26200.0 x64

What issue are you seeing?

When I ask Codex Desktop to spawn multiple subagents, some of them keep showing up in the background agents popup as "awaiting instruction" even after they have already returned their results in the parent chat. They stay pinned in the popup above the chat and look unfinished/stale.

[Screenshots: background agents popup with subagents stuck as "awaiting instruction"]

At the same time, subagent model selection feels inconsistent. My local Codex config is set to model = "gpt-5.4" and model_reasoning_effort = "high", and the main composer was on GPT-5.4. However, in the same general workflow I saw subagents labeled "Uses GPT-5.1-Codex-Mini" and "Uses GPT-5.2".

[Screenshots: main composer on GPT-5.4; spawned agents labeled "Uses GPT-5.1-Codex-Mini" and "Uses GPT-5.2"]

If I explicitly tell the assistant to use GPT-5.4 Medium for spawned agents, it complies. So from a user's perspective the current behavior feels non-deterministic: sometimes subagents use older/smaller models, and sometimes they follow the explicit instruction.

[Screenshot: spawned agents on GPT-5.4 Medium after an explicit instruction]

What steps can reproduce the bug?

  1. Open Codex Desktop on Windows and use GPT-5.4 in the main chat/composer
  2. Ask Codex to study a repository and spawn several subagents/explorers in parallel
  3. Wait for the subagents to finish and for their findings to be posted back into the parent thread
  4. Open the background agents popup or inspect the spawned agents in the left-side history / agent list
  5. Observe that some agents still remain visible as background agents and show "awaiting instruction" even though they already produced their result
  6. Also observe that the model labels/tooltips for spawned agents can show GPT-5.1-Codex-Mini or GPT-5.2 instead of GPT-5.4

In my case, the problematic session was:
019cf78a-90fb-74e3-9b04-5612b35b9c03

What is the expected behavior?

Completed subagents should not remain in an ambiguous "awaiting instruction" state after they have already returned their result to the parent thread.

If subagents are intentionally kept open for reuse, the UI should clearly distinguish:

  • completed and reusable
  • actively waiting for new input
  • actually still running

Model selection should also be predictable. Ideally there should be a setting for the default subagent model/effort, for example:

  • follow the current main chat model
  • or let the user choose a fixed default model/effort from a menu

It should still be possible to override the model explicitly for a specific spawned agent when needed.
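As a concrete sketch of what such a setting could look like in the local Codex config, something along these lines (the `[subagents]` section and both key names are hypothetical; they do not exist today as far as I know):

```toml
# Hypothetical sketch only – none of these keys currently exist.
[subagents]
model = "inherit"                  # follow the current main chat model, or:
# model = "gpt-5.4"                # a fixed default chosen by the user
# model_reasoning_effort = "medium"
```

An explicit per-agent instruction in the prompt would still override whichever default is configured here.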

Additional information

  • App package version confirmed locally: 26.313.4956.0
  • Platform confirmed locally: Microsoft Windows NT 10.0.26200.0 x64
  • Subscription: ChatGPT Pro
  • Problem session id: 019cf78a-90fb-74e3-9b04-5612b35b9c03
  • Session metadata showed cli_version = 0.115.0-alpha.11
  • My local Codex config had model = "gpt-5.4" and model_reasoning_effort = "high"
  • In the problematic session JSONL I can see spawn_agent entries using gpt-5.1-codex-mini
  • In the UI/screenshots I also saw spawned agents labeled GPT-5.2
  • In the same session's local JSONL there were many more spawn_agent entries than close_agent entries (62 vs 15), which may be related to the stuck "awaiting instruction" state
  • Attached screenshots show:
    • the main composer on GPT-5.4
    • spawned agents labeled GPT-5.1-Codex-Mini / GPT-5.2
    • multiple agents still shown as background agents / awaiting instruction after they already responded
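For reproducibility, this is the kind of crude tally I used to get the spawn_agent/close_agent numbers above. The exact record schema of Codex session JSONL files is an assumption here, so this just matches each line's raw text against the event names rather than parsing a specific field:

```python
from collections import Counter
from pathlib import Path

def tally_agent_events(jsonl_path, needles=("spawn_agent", "close_agent")):
    """Count how many JSONL lines mention each event name.

    Crude text match, not a schema-aware parse; it was enough to surface
    the spawn/close imbalance (62 vs 15) in my problematic session.
    """
    counts = Counter({needle: 0 for needle in needles})
    for line in Path(jsonl_path).read_text(encoding="utf-8").splitlines():
        for needle in needles:
            if needle in line:
                counts[needle] += 1
    return counts
```

Run over the problematic session's JSONL, this reported 62 spawn_agent entries against 15 close_agent entries.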

Metadata

Labels

  • app: Issues related to the Codex desktop app
  • bug: Something isn't working
  • model-behavior: Issues related to behaviors exhibited by the model
  • tool-calls: Issues related to tool calling
  • windows-os: Issues related to Codex on Windows systems
