We receive bug reports from users who attempt to override one of the three built-in model providers (openai, ollama, or lmstudio). Currently, these overrides are silently ignored. This PR makes it an error to override them. ## Summary - add validation for `model_providers` so `openai`, `ollama`, and `lmstudio` keys now produce clear configuration errors instead of being silently ignored
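A minimal sketch of the kind of check this adds (the names here are illustrative, not the actual `codex-rs` API):

```rust
use std::collections::BTreeMap;

// Provider ids that ship built in and must not be redefined by user config.
const BUILT_IN_PROVIDERS: [&str; 3] = ["openai", "ollama", "lmstudio"];

// Hypothetical validation pass over the user's `model_providers` table:
// reject reserved keys with a clear error instead of silently ignoring them.
fn validate_model_providers(overrides: &BTreeMap<String, String>) -> Result<(), String> {
    for key in overrides.keys() {
        if BUILT_IN_PROVIDERS.contains(&key.as_str()) {
            return Err(format!(
                "`model_providers.{key}` overrides a built-in provider; \
                 built-in providers cannot be redefined"
            ));
        }
    }
    Ok(())
}
```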
## Summary - regenerate `sdk/python` protocol-derived artifacts on latest `origin/main` - update `notification_registry.py` to match the regenerated notification set - fix the stale SDK test expectation for `GranularAskForApproval` ## Validation - `cd sdk/python && python scripts/update_sdk_artifacts.py generate-types` - `cd sdk/python && python -m pytest`
make plugins' `defaultPrompt` an array, but keep backcompat for strings. the array is limited by app-server to 3 entries of up to 128 chars (drops extra entries, `None`s-out ones that are too long) without erroring if those invariants are violated. added tests, tested locally.
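The normalization described above can be sketched roughly like this (hypothetical types and limits per the description, not the actual app-server code):

```rust
const MAX_PROMPTS: usize = 3;
const MAX_PROMPT_LEN: usize = 128;

// Backcompat input shape: either the legacy single string or the new array.
enum DefaultPrompt {
    One(String),
    Many(Vec<String>),
}

// Enforce the invariants without erroring: extra entries are dropped,
// too-long entries become `None`.
fn normalize_default_prompt(input: DefaultPrompt) -> Vec<Option<String>> {
    let entries = match input {
        DefaultPrompt::One(s) => vec![s],
        DefaultPrompt::Many(v) => v,
    };
    entries
        .into_iter()
        .take(MAX_PROMPTS) // silently drop entries beyond the cap
        .map(|s| {
            if s.chars().count() <= MAX_PROMPT_LEN {
                Some(s)
            } else {
                None // `None`-out over-long entries instead of failing
            }
        })
        .collect()
}
```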
…ol (openai#14501) This extends dynamic_tool_calls to allow us to hide a tool from the model context but still use it as part of the general tool calling runtime (e.g. from js_repl/code_mode)
## Summary - normalize effective readable, writable, and unreadable sandbox roots after resolving special paths so symlinked roots use canonical runtime paths - add a protocol regression test for a symlinked writable root with a denied child and update protocol expectations to canonicalized effective paths - update macOS seatbelt tests to assert against effective normalized roots produced by the shared policy helpers ## Testing - just fmt - cargo test -p codex-protocol - cargo test -p codex-core explicit_unreadable_paths_are_excluded_ - cargo clippy -p codex-protocol -p codex-core --tests -- -D warnings ## Notes - This is intended to fix the symlinked TMPDIR bind failure in bubblewrap described in openai#14672. Fixes openai#14672
CXC-392 [With 401](https://openai.sentry.io/issues/7333870443/?project=4510195390611458&query=019ce8f8-560c-7f10-a00a-c59553740674&referrer=issue-stream) <img width="1909" height="555" alt="401 auth tags in Sentry" src="https://github.com/user-attachments/assets/412ea950-61c4-4780-9697-15c270971ee3" /> - auth_401_*: preserved facts from the latest unauthorized response snapshot - auth_*: latest auth-related facts from the latest request attempt - auth_recovery_*: unauthorized recovery state and follow-up result Without 401 <img width="1917" height="522" alt="happy-path auth tags in Sentry" src="https://github.com/user-attachments/assets/3381ed28-8022-43b0-b6c0-623a630e679f" /> ###### Summary - Add client-visible 401 diagnostics for auth attachment, upstream auth classification, and 401 request id / cf-ray correlation. - Record unauthorized recovery mode, phase, outcome, and retry/follow-up status without changing auth behavior. - Surface the highest-signal auth and recovery fields on uploaded client bug reports so they are usable in Sentry. - Preserve original unauthorized evidence under `auth_401_*` while keeping follow-up result tags separate. ###### Rationale (from spec findings) - The dominant bucket needed proof of whether the client attached auth before send or upstream still classified the request as missing auth. - Client uploads needed to show whether unauthorized recovery ran and what the client tried next. - Request id and cf-ray needed to be preserved on the unauthorized response so server-side correlation is immediate. - The bug-report path needed the same auth evidence as the request telemetry path, otherwise the observability would not be operationally useful. ###### Scope - Add auth 401 and unauthorized-recovery observability in `codex-rs/core`, `codex-rs/codex-api`, and `codex-rs/otel`, including feedback-tag surfacing. 
- Keep auth semantics, refresh behavior, retry behavior, endpoint classification, and geo-denial follow-up work out of this PR. ###### Trade-offs - This exports only safe auth evidence: header presence/name, upstream auth classification, request ids, and recovery state. It does not export token values or raw upstream bodies. - This keeps websocket connection reuse as a transport clue because it can help distinguish stale reused sessions from fresh reconnects. - Misroute/base-url classification and geo-denial are intentionally deferred to a separate follow-up PR so this review stays focused on the dominant auth 401 bucket. ###### Client follow-up - PR 2 will add misroute/provider and geo-denial observability plus the matching feedback-tag surfacing. - A separate host/app-server PR should log auth-decision inputs so pre-send host auth state can be correlated with client request evidence. - `device_id` remains intentionally separate until there is a safe existing source on the feedback upload path. ###### Testing - `cargo test -p codex-core refresh_available_models_sorts_by_priority` - `cargo test -p codex-core emit_feedback_request_tags_` - `cargo test -p codex-core emit_feedback_auth_recovery_tags_` - `cargo test -p codex-core auth_request_telemetry_context_tracks_attached_auth_and_retry_phase` - `cargo test -p codex-core extract_response_debug_context_decodes_identity_headers` - `cargo test -p codex-core identity_auth_details` - `cargo test -p codex-core telemetry_error_messages_preserve_non_http_details` - `cargo test -p codex-core --all-features --no-run` - `cargo test -p codex-otel otel_export_routing_policy_routes_api_request_auth_observability` - `cargo test -p codex-otel otel_export_routing_policy_routes_websocket_connect_auth_observability` - `cargo test -p codex-otel otel_export_routing_policy_routes_websocket_request_transport_observability`
- [x] Add resource_uri and other things to _meta to shortcut resource lookup and speed things up.
- [x] Bypass tool search and stuff tool specs directly into model context when either a. Tool search is not available for the model or b. There are not that many tools to search for.
… to /stop (openai#14602) ### Motivation - Interrupting a running turn (Ctrl+C / Esc) currently also terminates long‑running background shells, which is surprising for workflows like local dev servers or file watchers. - The existing cleanup command name was confusing; callers expect an explicit command to stop background terminals rather than a UI clear action. - Make background‑shell termination explicit and surface a clearer command name while preserving backward compatibility. ### Description - Renamed the background‑terminal cleanup slash command from `Clean` (`/clean`) to `Stop` (`/stop`) and kept `clean` as an alias in the command parsing/visibility layer, updated the user descriptions and command popup wiring accordingly. - Updated the unified‑exec footer text and snapshots to point to `/stop` (and trimmed corresponding snapshot output to match the new label). - Changed interrupt behavior so `Op::Interrupt` (Ctrl+C / Esc interrupt) no longer closes or clears tracked unified exec / background terminal processes in the TUI or core cleanup path; background shells are now preserved after an interrupt. - Updated protocol/docs to clarify that `turn/interrupt` (or `Op::Interrupt`) interrupts the active turn but does not terminate background terminals, and that `thread/backgroundTerminals/clean` is the explicit API to stop those shells. - Updated unit/integration tests and insta snapshots in the TUI and core unified‑exec suites to reflect the new semantics and command name. ### Testing - Ran formatting with `just fmt` in `codex-rs` (succeeded). - Ran `cargo test -p codex-protocol` (succeeded). - Attempted `cargo test -p codex-tui` but the build could not complete in this environment due to a native build dependency that requires `libcap` development headers (the `codex-linux-sandbox` vendored build step); install `libcap-dev` / make `libcap.pc` available in `PKG_CONFIG_PATH` to run the TUI test suite locally. 
- Updated and accepted the affected `insta` snapshots for the TUI changes so visual diffs reflect the new `/stop` wording and preserved interrupt behavior. ------ [Codex Task](https://chatgpt.com/codex/tasks/task_i_69b39c44b6dc8323bd133ae206310fae)
## Summary - reuse a guardian subagent session across approvals so reviews keep a stable prompt cache key and avoid one-shot startup overhead - clear the guardian child history before each review so prior guardian decisions do not leak into later approvals - include the `smart_approvals` -> `guardian_approval` feature flag rename in the same PR to minimize release latency on a very tight timeline - add regression coverage for prompt-cache-key reuse without prior-review prompt bleed ## Request - Bug/enhancement request: internal guardian prompt-cache and latency improvement request --------- Co-authored-by: Codex <noreply@openai.com>
- [x] Preserve tool_params keys.
Fix the layering conflict when a project profile is used with agents. This PR cleans up the config layering and makes sure the agent config takes precedence over the project profile. Fixes openai#13849, openai#14671
…penai#14806) Fix openai#14161 This fixes sub-agent `[[skills.config]]` overrides being ignored when parent and child share the same cwd. The root cause was that turn skill loading rebuilt from cwd-only state and reused a cwd-scoped cache, so role-local skill enable/disable overrides did not reliably affect the spawned agent's effective skill set. This change switches turn construction to use the effective per-turn config and adds a config-aware skills cache keyed by skill roots plus final disabled paths.
Make `interrupted` an agent state and make it not final. As a result, a `wait` won't return on an interrupted agent and no notification will be sent to the parent agent. The rationale is: * If a user interrupts a sub-agent for any reason, you don't want the parent agent to instantaneously ask the sub-agent to restart * If a parent agent interrupts a sub-agent, there is no need for a noisy notification in the parent agent
The issue was a circular `Drop` dependency: the embedded app-server waited on listeners that themselves waited on the app-server. The fix is an explicit cleanup. **Repro:** * Start codex * Ask it to spawn a sub-agent * Close Codex * It takes 5s to exit
This PR replicates the `tui` code directory and creates a temporary parallel `tui_app_server` directory. It also implements a new feature flag `tui_app_server` to select between the two tui implementations. Once the new app-server-based TUI is stabilized, we'll delete the old `tui` directory and feature flag.
###### Why/Context/Summary - Exclude injected AGENTS.md instructions and standalone skill payloads from memory stage 1 inputs so memory generation focuses on conversation content instead of prompt scaffolding. - Strip only the AGENTS fragment from mixed contextual user messages during stage-1 serialization, which preserves environment context in the same message. - Keep subagent notifications in the memory input, and add focused unit coverage for the fragment classifier, rollout policy, and stage-1 serialization path. ###### Test plan - `just fmt` - `cargo test -p codex-core --lib contextual_user_message` - `cargo test -p codex-core --lib rollout::policy` - `cargo test -p codex-core --lib memories::phase1`
…penai#14139) # Summary This PR introduces the Windows sandbox runner IPC foundation that later unified_exec work will build on. The key point is that this is intentionally infrastructure-only. The new IPC transport, runner plumbing, and ConPTY helpers are added here, but the active elevated Windows sandbox path still uses the existing request-file bootstrap. In other words, this change prepares the transport and module layout we need for unified_exec without switching production behavior over yet. Part of this PR is also a source-layout cleanup: some Windows sandbox files are moved into more explicit `elevated/`, `conpty/`, and shared locations so it is clearer which code is for the elevated sandbox flow, which code is legacy/direct-spawn behavior, and which helpers are shared between them. That reorganization is intentional in this first PR so later behavioral changes do not also have to carry a large amount of file-move churn. # Why This Is Needed For unified_exec Windows elevated sandboxed unified_exec needs a long-lived, bidirectional control channel between the CLI and a helper process running under the sandbox user. That channel has to support: - starting a process and reporting structured spawn success/failure - streaming stdout/stderr back incrementally - forwarding stdin over time - terminating or polling a long-lived process - supporting both pipe-backed and PTY-backed sessions The existing elevated one-shot path is built around a request-file bootstrap and does not provide those primitives cleanly. Before we can turn on Windows sandbox unified_exec, we need the underlying runner protocol and transport layer that can carry those lifecycle events and streams. 
# Why Windows Needs More Machinery Than Linux Or macOS Linux and macOS can generally build unified_exec on top of the existing sandbox/process model: the parent can spawn the child directly, retain normal ownership of stdio or PTY handles, and manage the lifetime of the sandboxed process without introducing a second control process. Windows elevated sandboxing is different. To run inside the sandbox boundary, we cross into a different user/security context and then need to manage a long-lived process from outside that boundary. That means we need an explicit helper process plus an IPC transport to carry spawn, stdin, output, and exit events back and forth. The extra code here is mostly that missing Windows sandbox infrastructure, not a conceptual difference in unified_exec itself. # What This PR Adds - the framed IPC message types and transport helpers for parent <-> runner communication - the renamed Windows command runner with both the existing request-file bootstrap and the dormant IPC bootstrap - named-pipe helpers for the elevated runner path - ConPTY helpers and process-thread attribute plumbing needed for PTY-backed sessions - shared sandbox/process helpers that later PRs will reuse when switching live execution paths over - early file/module moves so later PRs can focus on behavior rather than layout churn # What This PR Does Not Yet Do - it does not switch the active elevated one-shot path over to IPC yet - it does not enable Windows sandbox unified_exec yet - it does not remove the existing request-file bootstrap yet So while this code compiles and the new path has basic validation, it is not yet the exercised production path. That is intentional for this first PR: the goal here is to land the transport and runner foundation cleanly before later PRs start routing real command execution through it. # Follow-Ups Planned follow-up PRs will: 1. switch elevated one-shot Windows sandbox execution to the new runner IPC path 2. 
layer Windows sandbox unified_exec sessions on top of the same transport 3. remove the legacy request-file path once the IPC-based path is live # Validation - `cargo build -p codex-windows-sandbox`
- **Summary** - expose `exit` through the code mode bridge and module so scripts can stop mid-flight - surface the helper in the description documentation - add a regression test ensuring `exit()` terminates execution cleanly - **Testing** - Not run (not requested)
## Stack Position 1/4. Base PR in the realtime stack. ## Base - `main` ## Unblocks - openai#14830 ## Scope - Split the realtime websocket request builders into `common`, `v1`, and `v2` modules. - Keep runtime behavior unchanged in this PR. --------- Co-authored-by: Codex <noreply@openai.com>
## Why Once the repo-local lint exists, `codex-rs` needs to follow the checked-in convention and CI needs to keep it from drifting. This commit applies the fallback `/*param*/` style consistently across existing positional literal call sites without changing those APIs. The longer-term preference is still to avoid APIs that require comments by choosing clearer parameter types and call shapes. This PR is intentionally the mechanical follow-through for the places where the existing signatures stay in place. After rebasing onto newer `main`, the rollout also had to cover newly introduced `tui_app_server` call sites. That made it clear the first cut of the CI job was too expensive for the common path: it was spending almost as much time installing `cargo-dylint` and re-testing the lint crate as a representative test job spends running product tests. The CI update keeps the full workspace enforcement but trims that extra overhead from ordinary `codex-rs` PRs. ## What changed - keep a dedicated `argument_comment_lint` job in `rust-ci` - mechanically annotate remaining opaque positional literals across `codex-rs` with exact `/*param*/` comments, including the rebased `tui_app_server` call sites that now fall under the lint - keep the checked-in style aligned with the lint policy by using `/*param*/` and leaving string and char literals uncommented - cache `cargo-dylint`, `dylint-link`, and the relevant Cargo registry/git metadata in the lint job - split changed-path detection so the lint crate's own `cargo test` step runs only when `tools/argument-comment-lint/*` or `rust-ci.yml` changes - continue to run the repo wrapper over the `codex-rs` workspace, so product-code enforcement is unchanged Most of the code changes in this commit are intentionally mechanical comment rewrites or insertions driven by the lint itself. 
## Verification - `./tools/argument-comment-lint/run.sh --workspace` - `cargo test -p codex-tui-app-server -p codex-tui` - parsed `.github/workflows/rust-ci.yml` locally with PyYAML --- * -> openai#14652 * openai#14651
### Motivation - Prevent newly-created skills from being placed in unexpected locations by prompting for an install path and defaulting to a discoverable location so skills are usable immediately. - Make the `skill-creator` instructions explicit about the recommended default (`~/.codex/skills` / `$CODEX_HOME/skills`) so the agent and users follow a consistent, discoverable convention. ### Description - Updated `codex-rs/skills/src/assets/samples/skill-creator/SKILL.md` to add a user prompt: "Where should I create this skill? If you do not have a preference, I will place it in ~/.codex/skills so Codex can discover it automatically.". - Added guidance before running `init_skill.py` that if the user does not specify a location, the agent should default to `~/.codex/skills` (equivalently `$CODEX_HOME/skills`) for auto-discovery. - Updated the `init_skill.py` examples in the same `SKILL.md` to use `~/.codex/skills` as the recommended default while keeping one custom path example. ### Testing - Ran `cargo test -p codex-skills` and the crate's unit test suite passed (`1 passed; 0 failed`). - Verified relevant discovery behavior in code by checking `codex-rs/utils/home-dir/src/lib.rs` (`find_codex_home` defaults to `~/.codex`) and `codex-rs/core/src/skills/loader.rs` (user skill roots include `$CODEX_HOME/skills`). ------ [Codex Task](https://chatgpt.com/codex/tasks/task_i_69b75a50bb008322a278e55eb0ddccd6)
Add display_name support to marketplace.json.
- Added forceRemoteSync to plugin/install and plugin/uninstall. - With forceRemoteSync=true, we update the remote plugin status first, then apply the local change only if the backend call succeeds. - Kept plugin/list(forceRemoteSync=true) as the main reconciliation path; for now it treats remote enabled=false as uninstall. We will eventually migrate to plugin/installed for more precise state handling.
## Stack Position 2/4. Built on top of openai#14828. ## Base - openai#14828 ## Unblocks - openai#14829 - openai#14827 ## Scope - Port the realtime v2 wire parsing, session, app-server, and conversation runtime behavior onto the split websocket-method base. - Branch runtime behavior directly on the current realtime session kind instead of parser-derived flow flags. - Keep regression coverage in the existing e2e suites. --------- Co-authored-by: Codex <noreply@openai.com>
…ns (openai#14886) 1. Use camelCase for `displayName`; 2. move `displayName` under `interface`.
Increase the space-hold delay to 1 second before voice capture starts, and mirror the change in tui_app_server.
### Summary Make `FileWatcher` a reusable core component which can be built upon. Extract skills-related logic into a separate `SkillWatcher`. Introduce a composable `ThrottledWatchReceiver` to throttle filesystem events, coalescing affected paths among them. ### Testing Updated existing unit tests.
…15547) * Add `OutgoingMessageSender::send_server_notification_to_connection_and_wait`, which returns only once the message is written to the websocket (or has failed to be) * Use this mechanism to apply back pressure to the stdout/stderr streams of processes spawned by `command/exec`, limiting them to at most one in-memory message at a time * Use the back-pressure signal to also batch smaller chunks into ≈64KiB ones This should make command execution more robust over high-latency/low-throughput networks
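The batching step can be sketched like this (a hypothetical helper, not the actual sender code): coalesce small output chunks into payloads of at most ~64KiB before each back-pressured send.

```rust
// Target payload size for each websocket message.
const TARGET: usize = 64 * 1024;

// Coalesce a stream of small chunks into batches of at most TARGET bytes.
// A single oversized chunk is still emitted as its own batch.
fn batch_chunks(chunks: Vec<Vec<u8>>) -> Vec<Vec<u8>> {
    let mut batches = Vec::new();
    let mut current: Vec<u8> = Vec::new();
    for chunk in chunks {
        // Flush the current batch if adding this chunk would exceed the target.
        if !current.is_empty() && current.len() + chunk.len() > TARGET {
            batches.push(std::mem::take(&mut current));
        }
        current.extend_from_slice(&chunk);
    }
    if !current.is_empty() {
        batches.push(current);
    }
    batches
}
```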
See internal communication
…#15545) Follow up to openai#15357 by making proactive ChatGPT auth refresh depend on the access token's JWT expiration instead of treating `last_refresh` age as the primary source of truth.
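The refresh decision itself reduces to a small predicate (a hypothetical helper, assuming the `exp` claim has already been decoded from the access token's JWT payload):

```rust
// Refresh proactively once we are within `margin_secs` of the JWT `exp`
// claim, instead of inferring staleness from the age of `last_refresh`.
fn should_refresh(exp_unix: u64, now_unix: u64, margin_secs: u64) -> bool {
    now_unix.saturating_add(margin_secs) >= exp_unix
}
```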
…ure (openai#15530) built from openai#14256. PR description from @etraut-openai: This PR addresses a hole in [PR 11802](openai#11802). The previous PR assumed that app server clients would respond to token refresh failures by presenting the user with an error ("you must log in again") and then not making further attempts to call network endpoints using the expired token. While they do present the user with this error, they don't prevent further attempts to call network endpoints and can repeatedly call `getAuthStatus(refreshToken=true)`, resulting in many failed calls to the token refresh endpoint. There are three solutions I considered here: 1. Change the getAuthStatus app server call to return a null auth if the caller specified "refreshToken" on input and the refresh attempt fails. This will cause clients to immediately log out the user and return them to the log in screen. This is a really bad user experience. It's also a breaking change in the app server contract that could break third-party clients. 2. Augment the getAuthStatus app server call to return an additional field that indicates the state of "token could not be refreshed". This is a non-breaking change to the app server API, but it requires non-trivial changes for all clients to handle this new field properly. 3. Change the getAuthStatus implementation to handle the case where a token refresh fails by marking the AuthManager's in-memory access and refresh tokens as "poisoned" so they are no longer used. This is the simplest fix that requires no client changes. I chose option 3. Here's Codex's explanation of this change: When an app-server client asks `getAuthStatus(refreshToken=true)`, we may try to refresh a stale ChatGPT access token. If that refresh fails permanently (for example `refresh_token_reused`, expired, or revoked), the old behavior was bad in two ways: 1. We kept the in-memory auth snapshot alive as if it were still usable. 2. 
Later auth checks could retry refresh again and again, creating a storm of doomed `/oauth/token` requests and repeatedly surfacing the same failure. This is especially painful for app-server clients because they poll auth status and can keep driving the refresh path without any real chance of recovery. This change makes permanent refresh failures terminal for the current managed auth snapshot without changing the app-server API contract. What changed: - `AuthManager` now poisons the current managed auth snapshot in memory after a permanent refresh failure, keyed to the unchanged `AuthDotJson`. - Once poisoned, later refresh attempts for that same snapshot fail fast locally without calling the auth service again. - The poison is cleared automatically when auth materially changes, such as a new login, logout, or reload of different auth state from storage. - `getAuthStatus(includeToken=true)` now omits `authToken` after a permanent refresh failure instead of handing out the stale cached bearer token. This keeps the current auth method visible to clients, avoids forcing an immediate logout flow, and stops repeated refresh attempts for credentials that cannot recover. --------- Co-authored-by: Eric Traut <etraut@openai.com>
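The poisoning mechanism can be sketched as follows (hypothetical simplified types; the real code keys the poison to the unchanged `AuthDotJson` and has a richer error model):

```rust
// Stand-in for the identity of an auth snapshot (the real code uses
// `AuthDotJson`). Equality is what ties the poison to one snapshot.
#[derive(Clone, PartialEq)]
struct AuthSnapshot {
    account_id: String,
    refresh_token: String,
}

#[derive(Default)]
struct AuthManager {
    poisoned: Option<AuthSnapshot>,
}

impl AuthManager {
    // After a permanent upstream failure, remember the failing snapshot and
    // fail fast locally on later attempts instead of calling the auth service.
    fn try_refresh(
        &mut self,
        snap: &AuthSnapshot,
        upstream_permanent_failure: bool,
    ) -> Result<(), &'static str> {
        if self.poisoned.as_ref() == Some(snap) {
            return Err("refresh poisoned: previous permanent failure");
        }
        if upstream_permanent_failure {
            self.poisoned = Some(snap.clone());
            return Err("permanent refresh failure");
        }
        Ok(())
    }

    // Any material auth change (new login, logout, reload) clears the poison.
    fn on_auth_changed(&mut self) {
        self.poisoned = None;
    }
}
```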
## Summary - trim contiguous developer/contextual-user pre-turn updates when rollback cuts back to a user turn - add a focused history regression test for the trim behavior - update the rollback request-boundary snapshots to show the fixed non-duplicating context shape --------- Co-authored-by: Codex <noreply@openai.com>
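The trim can be sketched like this (hypothetical item kinds; the real history items are richer): when rolling back to a user turn, drop the contiguous run of developer/contextual-user pre-turn items sitting immediately before the cut point so re-injected context is not duplicated.

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum Item {
    User,
    Assistant,
    PreTurnContext, // developer / contextual-user updates
}

// Cut history back to `cut`, then trim the contiguous pre-turn updates that
// immediately precede the cut so they are not duplicated on the next turn.
fn rollback_to_user_turn(history: &[Item], cut: usize) -> Vec<Item> {
    let mut end = cut.min(history.len());
    while end > 0 && history[end - 1] == Item::PreTurnContext {
        end -= 1;
    }
    history[..end].to_vec()
}
```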
- Remove numeric prefixes for disabled rows in shared list rendering. These numbers are shortcuts, e.g. pressing "2" selects option `#2`. Disabled items cannot be selected, so keeping numbers on these items is misleading. - Apply the same behavior in both tui and tui_app_server. - Update affected snapshots for apps/plugins loading and plugin detail rows. _**This is a global change.**_ Before: <img width="1680" height="488" alt="image" src="https://github.com/user-attachments/assets/4bcf94ad-285f-48d3-a235-a85b58ee58e2" /> After: <img width="1706" height="484" alt="image" src="https://github.com/user-attachments/assets/76bb6107-a562-42fe-ae94-29440447ca77" />
…ai#15644) ## Why `shell-tool-mcp` and the Bash fork are no longer needed, but the patched zsh fork is still relevant for shell escalation and for the DotSlash-backed zsh-fork integration tests. Deleting the old `shell-tool-mcp` workflow also deleted the only pipeline that rebuilt those patched zsh binaries. This keeps the package removal, while preserving a small release path that can be reused whenever `codex-rs/shell-escalation/patches/zsh-exec-wrapper.patch` changes. ## What changed - removed the `shell-tool-mcp` workspace package, its npm packaging/release jobs, the Bash test fixture, and the remaining Bash-specific compatibility wiring - deleted the old `.github/workflows/shell-tool-mcp.yml` and `.github/workflows/shell-tool-mcp-ci.yml` workflows now that their responsibilities have been replaced or removed - kept the zsh patch under `codex-rs/shell-escalation/patches/zsh-exec-wrapper.patch` and updated the `codex-rs/shell-escalation` docs/code to describe the zsh-based flow directly - added `.github/workflows/rust-release-zsh.yml` to build only the three zsh binaries that `codex-rs/app-server/tests/suite/zsh` needs today: - `aarch64-apple-darwin` on `macos-15` - `x86_64-unknown-linux-musl` on `ubuntu-24.04` - `aarch64-unknown-linux-musl` on `ubuntu-24.04` - extracted the shared zsh build/smoke-test/stage logic into `.github/scripts/build-zsh-release-artifact.sh`, made that helper directly executable, and now invoke it directly from the workflow so the Linux and macOS jobs only keep the OS-specific setup in YAML - wired those standalone `codex-zsh-*.tar.gz` assets into `rust-release.yml` and added `.github/dotslash-zsh-config.json` so releases also publish a `codex-zsh` DotSlash file - updated the checked-in `codex-rs/app-server/tests/suite/zsh` fixture comments to explain that new releases come from the standalone zsh assets, while the checked-in fixture remains pinned to the latest historical release until a newer zsh artifact is published - tightened a 
couple of follow-on cleanups in `codex-rs/shell-escalation`: the `ExecParams::command` comment now describes the shell `-c`/`-lc` string more clearly, and the README now points at the same `git.code.sf.net` zsh source URL that the workflow uses ## Testing - `cargo test -p codex-shell-escalation` - `just argument-comment-lint` - `bash -n .github/scripts/build-zsh-release-artifact.sh` - attempted `cargo test -p codex-core`; unrelated existing failures remain, but the touched `tools::runtimes::shell::unix_escalation::*` coverage passed during that run
…#15670) ## Summary Fixes a `tui_app_server` bootstrap failure when launching the CLI while logged out. ## Root cause During TUI bootstrap, `tui_app_server` fetched `account/rateLimits/read` unconditionally and treated failures as fatal. When the user was logged out, there was no ChatGPT account available, so that RPC failed and aborted startup with: ``` Error: account/rateLimits/read failed during TUI bootstrap ``` ## Changes - Only fetch bootstrap rate limits when OpenAI auth is required and a ChatGPT account is present - Treat bootstrap rate-limit fetch failures as non-fatal and fall back to empty snapshots - Log the fetch failure at debug level instead of aborting startup
- create `codex-git-utils` and move the shared git helpers into it with file moves preserved for diff readability - move the `GitInfo` helpers out of `core` so stacked rollout work can depend on the shared crate without carrying its own git info module --------- Co-authored-by: Ahmed Ibrahim <219906144+aibrahim-oai@users.noreply.github.com> Co-authored-by: Codex <noreply@openai.com>
- Remove marketplace from left column. - Change `Can be installed` to `Available`. - Align right-column marketplace + selected-row hint text across states. - Changes applied to both `tui` and `tui_app_server`. - Update related snapshots/tests. <img width="2142" height="590" alt="image" src="https://github.com/user-attachments/assets/6e60b783-2bea-46d4-b353-f2fd328ac4d0" />
## Summary Fixes early TUI exit paths that could leave the terminal in a dirty state and cause a stray `%` prompt marker after the app quit. ## Root cause Both `tui` and `tui_app_server` had early returns after `tui::init()` that did not guarantee terminal restore. When that happened, shells like `zsh` inherited the altered terminal state. ## Changes - Add a restore guard around `run_ratatui_app()` in both `tui` and `tui_app_server` - Route early exits through the guard instead of relying on scattered manual restore calls - Ensure terminal restore still happens on normal shutdown
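The restore guard pattern can be sketched as an RAII `Drop` type (hypothetical names; the real guard would leave the alternate screen and disable raw mode):

```rust
// Guard that restores the terminal exactly once, on every exit path:
// explicit shutdown, early return via `?`, or unwind.
struct TerminalRestoreGuard {
    restored: bool,
}

impl TerminalRestoreGuard {
    fn new() -> Self {
        Self { restored: false }
    }

    fn restore(&mut self) {
        if !self.restored {
            self.restored = true;
            // Real code would restore terminal state here (leave alt screen,
            // disable raw mode, show cursor, ...).
        }
    }
}

impl Drop for TerminalRestoreGuard {
    fn drop(&mut self) {
        self.restore();
    }
}

// Route the app through the guard so early exits cannot skip restore.
fn run_app_with_guard(app: impl FnOnce() -> Result<(), String>) -> Result<(), String> {
    let mut guard = TerminalRestoreGuard::new();
    let result = app();
    guard.restore(); // normal shutdown still restores explicitly
    result
}
```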
## Summary Fixes ChatGPT login in `tui_app_server` so the local browser opens again during in-process login flows. ## Root cause The app-server backend intentionally starts ChatGPT login with browser auto-open disabled, expecting the TUI client to open the returned `auth_url`. The app-server TUI was not doing that, so the login URL was shown in the UI but no browser window opened. ## Changes - Add a helper that opens the returned ChatGPT login URL locally - Call it from the main ChatGPT login flow - Call it from the device-code fallback-to-browser path as well - Limit auto-open to in-process app-server handles so remote sessions do not try to open a browser against a remote localhost callback
## Summary Fixes slow `Ctrl+C` exit from the ChatGPT browser-login screen in `tui_app_server`. ## Root cause Onboarding-level `Ctrl+C` quit bypassed the auth widget's cancel path. That let the active ChatGPT login keep running, and in-process app-server shutdown then waited on the stale login attempt before finishing. ## Changes - Extract a shared `cancel_active_attempt()` path in the auth widget - Use that path from onboarding-level `Ctrl+C` before exiting the TUI - Add focused tests for canceling browser-login and device-code attempts - Add app-server shutdown cleanup that explicitly drops any active login before draining background work
- Updated `/plugin` UI messaging for clearer wording. - Synced the same copy changes across `tui` and `tui_app_server`.
Switch plugin-install background MCP OAuth to a silent login path so the raw authorization URL is no longer printed in normal success cases. OAuth behavior is otherwise unchanged, with fallback URL output via stderr still shown only if browser launch fails. Before: https://github.com/user-attachments/assets/4bf387af-afa8-4b83-bcd6-4ca6b55da8db
- [x] Additional gating for tool suggest and apps.
## Summary
- drop `sandbox_permissions` from the sandboxing `ExecOptions` and `ExecRequest` adapter types
- remove the now-unused plumbing from shell, unified exec, JS REPL, and apply-patch runtime call sites
- default reconstructed `ExecParams` to `SandboxPermissions::UseDefault` where the lower-level API still requires the field

## Testing
- `just fmt`
- `just argument-comment-lint`
- `cargo test -p codex-core` (still running locally; first failures observed in `suite::cli_stream::responses_mode_stream_cli`, `suite::cli_stream::responses_mode_stream_cli_supports_openai_base_url_config_override`, and `suite::cli_stream::responses_mode_stream_cli_supports_openai_base_url_env_fallback`)
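The defaulting pattern in the last summary bullet can be sketched as follows. These are illustrative stand-ins for the real codex types (field names and variants assumed); the point is only that reconstruction supplies a derived default where the adapter types no longer carry the field.

```rust
// Hypothetical mirror of the lower-level types; not the actual codex API.
#[allow(dead_code)]
#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)]
enum SandboxPermissions {
    #[default]
    UseDefault,
    ReadOnly,
}

struct ExecParams {
    command: Vec<String>,
    sandbox_permissions: SandboxPermissions,
}

// The adapter request no longer carries `sandbox_permissions`, so the
// reconstruction falls back to the default the lower-level API expects.
fn reconstruct_exec_params(command: Vec<String>) -> ExecParams {
    ExecParams {
        command,
        sandbox_permissions: SandboxPermissions::default(),
    }
}

fn main() {
    let params = reconstruct_exec_params(vec!["ls".to_string()]);
    assert_eq!(params.sandbox_permissions, SandboxPermissions::UseDefault);
}
```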
- move the shared byte-based middle truncation logic from `core` into `codex-utils-string`
- keep token-specific truncation in `codex-core` so rollout can reuse the shared helper in the next stacked PR

---------

Co-authored-by: Codex <noreply@openai.com>
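Byte-based middle truncation of the kind described above can be sketched like this; the function name, signature, and exact boundary handling are assumptions for illustration, not the `codex-utils-string` API.

```rust
// Hedged sketch: keep the head and tail of a string within a byte budget,
// replacing the middle with a marker, without splitting UTF-8 sequences.
fn truncate_middle_bytes(s: &str, max_bytes: usize, marker: &str) -> String {
    if s.len() <= max_bytes {
        return s.to_string();
    }
    // Budget left for content once the marker is inserted.
    let keep = max_bytes.saturating_sub(marker.len());
    let head_len = keep / 2;
    let tail_len = keep - head_len;
    // Snap both cut points to char boundaries so multi-byte sequences
    // are never split mid-character.
    let head_end = (0..=head_len)
        .rev()
        .find(|&i| s.is_char_boundary(i))
        .unwrap_or(0);
    let tail_start = (s.len() - tail_len..=s.len())
        .find(|&i| s.is_char_boundary(i))
        .unwrap_or(s.len());
    format!("{}{}{}", &s[..head_end], marker, &s[tail_start..])
}

fn main() {
    assert_eq!(truncate_middle_bytes("abcdefghijklmnop", 10, "..."), "abc...mnop");
    assert_eq!(truncate_middle_bytes("short", 10, "..."), "short");
    // Multi-byte input stays within the byte budget and remains valid UTF-8.
    let t = truncate_middle_bytes(&"α".repeat(9), 10, "…");
    assert!(t.len() <= 10);
}
```

Snapping to `is_char_boundary` means the result can come in slightly under the budget on multi-byte input, which is the usual trade-off for byte-budgeted truncation of UTF-8 text.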
### Summary
Add the v2 app-server filesystem watch RPCs and notifications, wire them through the message processor, and implement connection-scoped watches with notify-backed change delivery. This also updates the schema fixtures, app-server documentation, and the v2 integration coverage for watch and unwatch behavior.

This allows clients to efficiently watch for filesystem updates, e.g. to react to branch changes.

### Testing
- exercise watch lifecycles for directory changes, atomic file replacement, missing-file targets, and unwatch cleanup
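The connection-scoped bookkeeping can be sketched as below. `WatchRegistry` and its methods are illustrative assumptions, not the app-server's actual types; in the real implementation each watch is backed by `notify` and changes are delivered to the client as notifications.

```rust
use std::collections::HashMap;
use std::path::PathBuf;

// Hedged sketch: map each watch id to its owning connection so that
// unwatch and disconnect cleanup are both straightforward.
#[derive(Default)]
struct WatchRegistry {
    next_id: u64,
    // watch id -> (owning connection id, watched path)
    watches: HashMap<u64, (u64, PathBuf)>,
}

impl WatchRegistry {
    fn watch(&mut self, conn: u64, path: PathBuf) -> u64 {
        let id = self.next_id;
        self.next_id += 1;
        self.watches.insert(id, (conn, path));
        id
    }

    fn unwatch(&mut self, id: u64) -> bool {
        self.watches.remove(&id).is_some()
    }

    // When a client disconnects, every watch it owns is cleaned up.
    fn remove_connection(&mut self, conn: u64) {
        self.watches.retain(|_, (owner, _)| *owner != conn);
    }
}

fn main() {
    let mut reg = WatchRegistry::default();
    let head = reg.watch(1, PathBuf::from("/repo/.git/HEAD"));
    reg.watch(1, PathBuf::from("/repo/src"));
    reg.watch(2, PathBuf::from("/other"));
    assert!(reg.unwatch(head));
    assert!(!reg.unwatch(head)); // already removed
    reg.remove_connection(1);
    assert_eq!(reg.watches.len(), 1);
}
```

Scoping watches to the connection is what makes the unwatch-cleanup tests in the Testing section meaningful: a dropped connection must not leave dangling notify watchers behind.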
…ver (openai#15674)

- Add a small delayed loading header for plugin list/detail loading messages in the TUI. Keep existing text for the first 1s, then show shimmer on the loading line.
- Apply the same behavior in both `tui` and `tui_app_server`.

https://github.com/user-attachments/assets/71dd35e4-7e3b-4e7b-867a-3c13dc395d3a
This allows clients to get enough information to interact with codex skills, configuration, and related resources.
## Why
This is a follow-up to openai#15360. That change fixed the `arg0` helper setup, but `rmcp-client` still coerced stdio transport environment values into UTF-8 `String`s before program resolution and process spawn. If `PATH` or another inherited environment value contains non-UTF-8 bytes, that loses fidelity before it reaches `which` and `Command`.

## What changed
- change `create_env_for_mcp_server()` to return `HashMap<OsString, OsString>` and read inherited values with `std::env::var_os()`
- change `TransportRecipe::Stdio.env`, `RmcpClient::new_stdio_client()`, and `program_resolver::resolve()` to keep stdio transport env values in `OsString` form within `rmcp-client`
- keep the `codex-core` config boundary stringly typed, but convert configured stdio env values to `OsString` once when constructing the transport
- update the rmcp-client stdio test fixtures and callers to use `OsString` env maps
- add a Unix regression test that verifies `create_env_for_mcp_server()` preserves a non-UTF-8 `PATH`

## How to verify
- `cargo test -p codex-rmcp-client`
- `cargo test -p codex-core mcp_connection_manager`
- `just argument-comment-lint`

Targeted coverage in this change includes `utils::tests::create_env_preserves_path_when_it_is_not_utf8`, while the updated stdio transport path is exercised by the existing rmcp-client tests that construct `RmcpClient::new_stdio_client()`.
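The `OsString`-based construction can be sketched as follows; `build_stdio_env` and its signature are illustrative assumptions, not the real `create_env_for_mcp_server()` API.

```rust
use std::collections::HashMap;
use std::ffi::OsString;

// Hedged sketch: inherit selected variables via `var_os` so their raw
// bytes survive, then layer configured entries on top.
fn build_stdio_env(
    inherit: &[&str],
    configured: HashMap<OsString, OsString>,
) -> HashMap<OsString, OsString> {
    let mut env = HashMap::new();
    for key in inherit {
        // `var_os` keeps the platform's raw bytes; `std::env::var` would
        // fail on non-UTF-8 values such as a binary `PATH` component.
        if let Some(value) = std::env::var_os(*key) {
            env.insert(OsString::from(*key), value);
        }
    }
    // Configured entries override inherited ones on key collision.
    env.extend(configured);
    env
}

fn main() {
    let mut configured = HashMap::new();
    configured.insert(OsString::from("MCP_FLAG"), OsString::from("1"));
    let env = build_stdio_env(&["PATH"], configured);
    assert!(env.contains_key(&OsString::from("PATH")));
    assert_eq!(env.get(&OsString::from("MCP_FLAG")), Some(&OsString::from("1")));
}
```

Keeping the map in `OsString` form end to end is what lets `which`-style program resolution and `Command::envs` see the same bytes the parent process did.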
…gins (openai#15700)

- Removes provenance filtering in the mentions feature for apps and skills that were installed as part of a plugin.
- All skills and apps for a plugin are mentionable with this change.
- Adds language and a "[learn more](https://help.openai.com/en/articles/11487775-apps-in-chatgpt)" link to plugin details pages.
- The message is hidden when the plugin is installed.

<img width="1970" height="498" alt="image" src="https://github.com/user-attachments/assets/f14330f7-661e-4860-8538-6dc9e8bbd90a" />
# Conflicts:
#	codex-rs/core/config.schema.json
#	codex-rs/core/src/codex.rs
#	codex-rs/core/src/config/mod.rs
#	codex-rs/hooks/src/lib.rs
#	codex-rs/hooks/src/registry.rs
#	codex-rs/tui/src/app.rs
Merges the latest upstream/main into the fork main branch while preserving the session-manager fork features.
This branch resolves conflicts to the same tree as the locally built and smoke-tested refresh/upstream-main-20260324 branch.
Validation: