Sync upstream: automated sync (phenotype/upstream-sync-20260324)#487
KooshaPari merged 805 commits into `main` from `phenotype/upstream-sync-20260324`
Conversation
## Summary - reuse a guardian subagent session across approvals so reviews keep a stable prompt cache key and avoid one-shot startup overhead - clear the guardian child history before each review so prior guardian decisions do not leak into later approvals - include the `smart_approvals` -> `guardian_approval` feature flag rename in the same PR to minimize release latency on a very tight timeline - add regression coverage for prompt-cache-key reuse without prior-review prompt bleed ## Request - Bug/enhancement request: internal guardian prompt-cache and latency improvement request --------- Co-authored-by: Codex <noreply@openai.com>
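A minimal sketch of the reuse pattern above (the type and field names are hypothetical, not the actual codex-core API): one long-lived guardian session keeps a stable prompt-cache key across approvals, while its child history is cleared before each review so earlier decisions cannot leak forward.

```rust
// Hypothetical sketch: a persistent guardian session with per-review history reset.
struct GuardianSession {
    prompt_cache_key: String, // stable across reviews -> prompt cache stays warm
    history: Vec<String>,
}

impl GuardianSession {
    /// Run one review; returns how many history entries the review can see.
    fn review(&mut self, request: &str) -> usize {
        self.history.clear(); // wipe prior guardian decisions before each review
        self.history.push(request.to_string());
        self.history.len() // always 1: only the current review is visible
    }
}

fn main() {
    let mut session = GuardianSession {
        prompt_cache_key: "guardian-session-abc".to_string(),
        history: Vec::new(),
    };
    session.review("approve change 1");
    let visible = session.review("approve change 2");
    println!("entries visible to second review: {visible}");
}
```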
- [x] Preserve tool_params keys.
Fix the layering conflict when a project profile is used with agents. This PR cleans up the config layering and ensures that the agent config takes precedence over the project profile. Fixes openai#13849, openai#14671
…penai#14806) Fix openai#14161 This fixes sub-agent [[skills.config]] overrides being ignored when parent and child share the same cwd. The root cause was that turn skill loading rebuilt from cwd-only state and reused a cwd-scoped cache, so role-local skill enable/disable overrides did not reliably affect the spawned agent's effective skill set. This change switches turn construction to use the effective per-turn config and adds a config-aware skills cache keyed by skill roots plus final disabled paths.
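A sketch of what a config-aware cache key of the kind described above might look like (the type and helper are hypothetical, not the real codex-core implementation): keying on cwd alone lets a parent and a sub-agent with the same cwd collide, so the key also incorporates the effective skill roots and disabled-path overrides.

```rust
use std::collections::BTreeSet;
use std::path::PathBuf;

// Hypothetical cache key: skill roots plus final disabled paths, so two
// agents sharing a cwd but with different overrides get distinct entries.
#[derive(Clone, PartialEq, Eq, Hash, Debug)]
struct SkillsCacheKey {
    skill_roots: BTreeSet<PathBuf>,
    disabled_paths: BTreeSet<PathBuf>,
}

fn cache_key(roots: &[&str], disabled: &[&str]) -> SkillsCacheKey {
    SkillsCacheKey {
        skill_roots: roots.iter().map(PathBuf::from).collect(),
        disabled_paths: disabled.iter().map(PathBuf::from).collect(),
    }
}

fn main() {
    let parent = cache_key(&["/repo/.codex/skills"], &[]);
    // Same roots (same cwd), but the sub-agent disables one skill:
    // distinct key, so the parent's cached skill set is not reused.
    let child = cache_key(&["/repo/.codex/skills"], &["/repo/.codex/skills/linter"]);
    println!("keys equal: {}", parent == child);
}
```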
Make `interrupted` an agent state and make it non-final. As a result, a `wait` won't return on an interrupted agent and no notification will be sent to the parent agent. The rationale: * If a user interrupts a sub-agent for any reason, you don't want the parent agent to instantaneously ask the sub-agent to restart * If a parent agent interrupts a sub-agent, there is no need to add a noisy notification in the parent agent
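A minimal sketch of the state model change (the enum is illustrative, not the actual codex type): `Interrupted` becomes a regular, non-final state, so a `wait` keeps blocking and nothing is reported to the parent agent.

```rust
// Hypothetical agent state model: only Completed/Failed are terminal.
#[derive(Clone, Copy, PartialEq, Debug)]
enum AgentState {
    Running,
    Interrupted, // no longer terminal: wait() keeps blocking here
    Completed,
    Failed,
}

fn is_final(state: AgentState) -> bool {
    matches!(state, AgentState::Completed | AgentState::Failed)
}

fn main() {
    // A wait() implementation would only return (and notify the parent)
    // for final states.
    println!("interrupted is final: {}", is_final(AgentState::Interrupted));
    println!("completed is final: {}", is_final(AgentState::Completed));
}
```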
The issue was due to a circular `Drop` dependency: the embedded app-server waited on listeners that themselves waited on the app-server. The fix is an explicit cleanup step. **Repro:** * Start codex * Ask it to spawn a sub-agent * Close Codex * It takes 5s to exit
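An illustration of the general shape of such a fix (the types are hypothetical, not the actual codex app-server code): listeners hold only a `Weak` reference back to the server, and an explicit shutdown clears the listener list instead of relying on `Drop` order to break the cycle.

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// Hypothetical server/listener pair that could otherwise keep each
// other alive through Drop ordering.
struct AppServer {
    listeners: RefCell<Vec<Rc<Listener>>>,
}

struct Listener {
    // Weak back-reference: the listener never extends the server's lifetime.
    _server: Weak<AppServer>,
}

fn shutdown(server: &AppServer) {
    // Explicit cleanup: after this, nothing is left waiting on the server,
    // so teardown is immediate instead of stalling on a timeout.
    server.listeners.borrow_mut().clear();
}

fn strong_count_after_shutdown() -> usize {
    let server = Rc::new(AppServer { listeners: RefCell::new(Vec::new()) });
    let listener = Rc::new(Listener { _server: Rc::downgrade(&server) });
    server.listeners.borrow_mut().push(listener);
    shutdown(&server);
    Rc::strong_count(&server) // 1: only the local handle remains
}

fn main() {
    println!("strong count after shutdown: {}", strong_count_after_shutdown());
}
```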
This PR replicates the `tui` code directory and creates a temporary parallel `tui_app_server` directory. It also implements a new feature flag `tui_app_server` to select between the two tui implementations. Once the new app-server-based TUI is stabilized, we'll delete the old `tui` directory and feature flag.
###### Why/Context/Summary - Exclude injected AGENTS.md instructions and standalone skill payloads from memory stage 1 inputs so memory generation focuses on conversation content instead of prompt scaffolding. - Strip only the AGENTS fragment from mixed contextual user messages during stage-1 serialization, which preserves environment context in the same message. - Keep subagent notifications in the memory input, and add focused unit coverage for the fragment classifier, rollout policy, and stage-1 serialization path. ###### Test plan - `just fmt` - `cargo test -p codex-core --lib contextual_user_message` - `cargo test -p codex-core --lib rollout::policy` - `cargo test -p codex-core --lib memories::phase1`
…penai#14139) # Summary This PR introduces the Windows sandbox runner IPC foundation that later unified_exec work will build on. The key point is that this is intentionally infrastructure-only. The new IPC transport, runner plumbing, and ConPTY helpers are added here, but the active elevated Windows sandbox path still uses the existing request-file bootstrap. In other words, this change prepares the transport and module layout we need for unified_exec without switching production behavior over yet. Part of this PR is also a source-layout cleanup: some Windows sandbox files are moved into more explicit `elevated/`, `conpty/`, and shared locations so it is clearer which code is for the elevated sandbox flow, which code is legacy/direct-spawn behavior, and which helpers are shared between them. That reorganization is intentional in this first PR so later behavioral changes do not also have to carry a large amount of file-move churn. # Why This Is Needed For unified_exec Windows elevated sandboxed unified_exec needs a long-lived, bidirectional control channel between the CLI and a helper process running under the sandbox user. That channel has to support: - starting a process and reporting structured spawn success/failure - streaming stdout/stderr back incrementally - forwarding stdin over time - terminating or polling a long-lived process - supporting both pipe-backed and PTY-backed sessions The existing elevated one-shot path is built around a request-file bootstrap and does not provide those primitives cleanly. Before we can turn on Windows sandbox unified_exec, we need the underlying runner protocol and transport layer that can carry those lifecycle events and streams. 
# Why Windows Needs More Machinery Than Linux Or macOS Linux and macOS can generally build unified_exec on top of the existing sandbox/process model: the parent can spawn the child directly, retain normal ownership of stdio or PTY handles, and manage the lifetime of the sandboxed process without introducing a second control process. Windows elevated sandboxing is different. To run inside the sandbox boundary, we cross into a different user/security context and then need to manage a long-lived process from outside that boundary. That means we need an explicit helper process plus an IPC transport to carry spawn, stdin, output, and exit events back and forth. The extra code here is mostly that missing Windows sandbox infrastructure, not a conceptual difference in unified_exec itself. # What This PR Adds - the framed IPC message types and transport helpers for parent <-> runner communication - the renamed Windows command runner with both the existing request-file bootstrap and the dormant IPC bootstrap - named-pipe helpers for the elevated runner path - ConPTY helpers and process-thread attribute plumbing needed for PTY-backed sessions - shared sandbox/process helpers that later PRs will reuse when switching live execution paths over - early file/module moves so later PRs can focus on behavior rather than layout churn # What This PR Does Not Yet Do - it does not switch the active elevated one-shot path over to IPC yet - it does not enable Windows sandbox unified_exec yet - it does not remove the existing request-file bootstrap yet So while this code compiles and the new path has basic validation, it is not yet the exercised production path. That is intentional for this first PR: the goal here is to land the transport and runner foundation cleanly before later PRs start routing real command execution through it. # Follow-Ups Planned follow-up PRs will: 1. switch elevated one-shot Windows sandbox execution to the new runner IPC path 2. layer Windows sandbox unified_exec sessions on top of the same transport 3. remove the legacy request-file path once the IPC-based path is live # Validation - `cargo build -p codex-windows-sandbox`
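A sketch of the kind of framing such a transport typically uses (the exact codex wire format is not shown here; this assumes a simple 4-byte little-endian length prefix followed by a serialized payload):

```rust
// Hypothetical length-prefixed framing for a parent <-> runner pipe.
fn encode_frame(payload: &[u8]) -> Vec<u8> {
    let mut frame = Vec::with_capacity(4 + payload.len());
    frame.extend_from_slice(&(payload.len() as u32).to_le_bytes());
    frame.extend_from_slice(payload);
    frame
}

/// Returns (payload, remaining bytes), or None if the buffer does not yet
/// hold a complete frame (the reader should then wait for more data).
fn decode_frame(buf: &[u8]) -> Option<(&[u8], &[u8])> {
    let len_bytes: [u8; 4] = buf.get(..4)?.try_into().ok()?;
    let len = u32::from_le_bytes(len_bytes) as usize;
    let rest = buf.get(4..)?;
    if rest.len() < len {
        return None; // partial frame: keep buffering
    }
    Some((&rest[..len], &rest[len..]))
}

fn main() {
    let frame = encode_frame(br#"{"type":"spawn"}"#);
    let (payload, rest) = decode_frame(&frame).unwrap();
    println!("payload {} bytes, {} bytes left over", payload.len(), rest.len());
}
```

Framing like this is what lets spawn results, incremental stdout/stderr chunks, stdin forwarding, and exit events share one bidirectional channel.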
- **Summary** - expose `exit` through the code mode bridge and module so scripts can stop mid-flight - surface the helper in the description documentation - add a regression test ensuring `exit()` terminates execution cleanly - **Testing** - Not run (not requested)
## Stack Position 1/4. Base PR in the realtime stack. ## Base - `main` ## Unblocks - openai#14830 ## Scope - Split the realtime websocket request builders into `common`, `v1`, and `v2` modules. - Keep runtime behavior unchanged in this PR. --------- Co-authored-by: Codex <noreply@openai.com>
## Why Once the repo-local lint exists, `codex-rs` needs to follow the checked-in convention and CI needs to keep it from drifting. This commit applies the fallback `/*param*/` style consistently across existing positional literal call sites without changing those APIs. The longer-term preference is still to avoid APIs that require comments by choosing clearer parameter types and call shapes. This PR is intentionally the mechanical follow-through for the places where the existing signatures stay in place. After rebasing onto newer `main`, the rollout also had to cover newly introduced `tui_app_server` call sites. That made it clear the first cut of the CI job was too expensive for the common path: it was spending almost as much time installing `cargo-dylint` and re-testing the lint crate as a representative test job spends running product tests. The CI update keeps the full workspace enforcement but trims that extra overhead from ordinary `codex-rs` PRs. ## What changed - keep a dedicated `argument_comment_lint` job in `rust-ci` - mechanically annotate remaining opaque positional literals across `codex-rs` with exact `/*param*/` comments, including the rebased `tui_app_server` call sites that now fall under the lint - keep the checked-in style aligned with the lint policy by using `/*param*/` and leaving string and char literals uncommented - cache `cargo-dylint`, `dylint-link`, and the relevant Cargo registry/git metadata in the lint job - split changed-path detection so the lint crate's own `cargo test` step runs only when `tools/argument-comment-lint/*` or `rust-ci.yml` changes - continue to run the repo wrapper over the `codex-rs` workspace, so product-code enforcement is unchanged Most of the code changes in this commit are intentionally mechanical comment rewrites or insertions driven by the lint itself. 
## Verification - `./tools/argument-comment-lint/run.sh --workspace` - `cargo test -p codex-tui-app-server -p codex-tui` - parsed `.github/workflows/rust-ci.yml` locally with PyYAML --- * -> openai#14652 * openai#14651
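An illustration of the convention the lint enforces (the function here is hypothetical): opaque positional literals get an exact `/*param*/` comment, while string and char literals stay uncommented.

```rust
// Hypothetical API with opaque positional parameters.
fn schedule(attempts: u32, backoff_ms: u64, jittered: bool) -> u64 {
    let base = attempts as u64 * backoff_ms;
    if jittered { base / 2 } else { base }
}

fn main() {
    // Without comments the call site is opaque: schedule(3, 250, false).
    // The fallback style annotates each literal with its parameter name:
    let total = schedule(/*attempts*/ 3, /*backoff_ms*/ 250, /*jittered*/ false);
    println!("total backoff: {total} ms"); // 750
}
```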
### Motivation - Prevent newly-created skills from being placed in unexpected locations by prompting for an install path and defaulting to a discoverable location so skills are usable immediately. - Make the `skill-creator` instructions explicit about the recommended default (`~/.codex/skills` / `$CODEX_HOME/skills`) so the agent and users follow a consistent, discoverable convention. ### Description - Updated `codex-rs/skills/src/assets/samples/skill-creator/SKILL.md` to add a user prompt: "Where should I create this skill? If you do not have a preference, I will place it in ~/.codex/skills so Codex can discover it automatically.". - Added guidance before running `init_skill.py` that if the user does not specify a location, the agent should default to `~/.codex/skills` (equivalently `$CODEX_HOME/skills`) for auto-discovery. - Updated the `init_skill.py` examples in the same `SKILL.md` to use `~/.codex/skills` as the recommended default while keeping one custom path example. ### Testing - Ran `cargo test -p codex-skills` and the crate's unit test suite passed (`1 passed; 0 failed`). - Verified relevant discovery behavior in code by checking `codex-rs/utils/home-dir/src/lib.rs` (`find_codex_home` defaults to `~/.codex`) and `codex-rs/core/src/skills/loader.rs` (user skill roots include `$CODEX_HOME/skills`). ------ [Codex Task](https://chatgpt.com/codex/tasks/task_i_69b75a50bb008322a278e55eb0ddccd6)
Add display_name support to marketplace.json.
- Added forceRemoteSync to plugin/install and plugin/uninstall. - With forceRemoteSync=true, we update the remote plugin status first, then apply the local change only if the backend call succeeds. - Kept plugin/list(forceRemoteSync=true) as the main recon path, and for now it treats remote enabled=false as uninstall. We will eventually migrate to plugin/installed for more precise state handling.
## Stack Position 2/4. Built on top of openai#14828. ## Base - openai#14828 ## Unblocks - openai#14829 - openai#14827 ## Scope - Port the realtime v2 wire parsing, session, app-server, and conversation runtime behavior onto the split websocket-method base. - Branch runtime behavior directly on the current realtime session kind instead of parser-derived flow flags. - Keep regression coverage in the existing e2e suites. --------- Co-authored-by: Codex <noreply@openai.com>
…ns (openai#14886) 1. Use camelCase for `displayName`; 2. move `displayName` under `interface`.
This change adds Jason to codex-core's built-in subagent nickname pool so spawned agents can pick it without any custom role configuration. The default list was simply missing that predefined name (a grave mistake).
Inspired by the work done over in openai/codex-action#74, this tightens up our use of GitHub expressions as shell/environment variables.
) ## Stack Position 3/4. Top-of-stack sibling built on openai#14830. ## Base - openai#14830 ## Sibling - openai#14827 ## Scope - Extend the realtime startup context with a bounded summary of the latest thread turns for continuity. --------- Co-authored-by: Codex <noreply@openai.com>
…i#14827) ## Stack Position 4/4. Top-of-stack sibling built on openai#14830. ## Base - openai#14830 ## Sibling - openai#14829 ## Scope - Gate low-level mic chunks while speaker playback is active, while still allowing spoken barge-in. --------- Co-authored-by: Codex <noreply@openai.com>
## Problem On Linux, Codex can be launched from a workspace path that is a symlink (for example, a symlinked checkout or a symlinked parent directory). Our sandbox policy intentionally canonicalizes writable/readable roots to the real filesystem path before building the bubblewrap mounts. That part is correct and needed for safety. The remaining bug was that bubblewrap could still inherit the helper process's logical cwd, which might be the symlinked alias instead of the mounted canonical path. In that case, the sandbox starts in a cwd that does not exist inside the sandbox namespace even though the real workspace is mounted. This can cause sandboxed commands to fail in symlinked workspaces. ## Fix This PR keeps the sandbox policy behavior the same, but separates two concepts that were previously conflated: - the canonical cwd used to define sandbox mounts and permissions - the caller's logical cwd used when launching the command On the Linux bubblewrap path, we now thread the logical command cwd through the helper explicitly and only add `--chdir <canonical path>` when the logical cwd differs from the mounted canonical path. That means: - permissions are still computed from canonical paths - bubblewrap starts the command from a cwd that definitely exists inside the sandbox - we do not widen filesystem access or undo the earlier symlink hardening ## Why This Is Safe This is a narrow Linux-only launch fix, not a policy change. - Writable/readable root canonicalization stays intact. - Protected metadata carveouts still operate on canonical roots. - We only override bubblewrap's inherited cwd when the logical path would otherwise point at a symlink alias that is not mounted in the sandbox. 
## Tests - kept the existing protocol/core regression coverage for symlink canonicalization - added regression coverage for symlinked cwd handling in the Linux bubblewrap builder/helper path Local validation: - `just fmt` - `cargo test -p codex-protocol` - `cargo test -p codex-core normalize_additional_permissions_canonicalizes_symlinked_write_paths` - `cargo clippy -p codex-linux-sandbox -p codex-protocol -p codex-core --tests -- -D warnings` - `cargo build --bin codex` ## Context This is related to openai#14694. The earlier writable-root symlink fix addressed the mount/permission side; this PR fixes the remaining symlinked-cwd launch mismatch in the Linux sandbox path.
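A minimal sketch of the decision described above (function and argument shapes are hypothetical, not the actual helper's API): `--chdir` is only added when the caller's logical cwd differs from the canonical mounted path.

```rust
use std::path::Path;

// Hypothetical sketch: compute the extra bubblewrap args for the cwd.
fn chdir_args(logical_cwd: &Path, canonical_cwd: &Path) -> Vec<String> {
    if logical_cwd == canonical_cwd {
        Vec::new() // inherited cwd already matches the mounted path
    } else {
        // logical cwd is a symlink alias that is not mounted in the
        // sandbox namespace: start from the canonical path instead
        vec!["--chdir".to_string(), canonical_cwd.display().to_string()]
    }
}

fn main() {
    let args = chdir_args(Path::new("/home/u/link/repo"), Path::new("/data/real/repo"));
    println!("{args:?}");
}
```

Permissions are still computed from canonical paths; this only ensures the command starts in a directory that exists inside the sandbox.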
The in-process app-server currently emits both typed `ServerNotification`s and legacy `codex/event/*` notifications for the same live turn updates. `tui_app_server` was consuming both paths, so message deltas and completed items could be enqueued twice and rendered as duplicated output in the transcript. Ignore legacy notifications for event types that already have typed (app server) notification handling, while keeping legacy fallback behavior for events that still only arrive on the old path. This preserves compatibility without duplicating streamed commentary or final agent output. We will remove all of the legacy event handlers over time; they're here only during the short window where we're moving the tui to use the app server.
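A sketch of the dedup rule (event names and the function are illustrative, not the actual handler code): a legacy `codex/event/*` notification is skipped whenever the same event type already arrives via a typed notification, while legacy-only events are still handled.

```rust
use std::collections::HashSet;

// Hypothetical filter: should this legacy event still be handled?
fn should_handle_legacy(event_type: &str, typed_events: &HashSet<&str>) -> bool {
    // false -> the typed ServerNotification path already covers it,
    // so handling the legacy copy would duplicate transcript output
    !typed_events.contains(event_type)
}

fn main() {
    let typed: HashSet<&str> = HashSet::from(["agent_message_delta", "item_completed"]);
    println!("{}", should_handle_legacy("agent_message_delta", &typed)); // skipped
    println!("{}", should_handle_legacy("legacy_only_event", &typed)); // still handled
}
```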
…14899) PR openai#14512 added an in-process app server and started to wire up the tui to use it. We were originally planning to modify the `tui` code in place, converting it to use the app server a bit at a time using a hybrid adapter. We've since decided to create an entirely new parallel `tui_app_server` implementation and do the conversion all at once but retain the existing `tui` while we work the bugs out of the new implementation. This PR undoes the changes to the `tui` made in the PR openai#14512 and restores the old initialization to its previous state. This allows us to modify the `tui_app_server` without the risk of regressing the old `tui` code. For example, we can start to remove support for all legacy core events, like the ones that PR openai#14892 needed to ignore. Testing: * I manually verified that the old `tui` starts and shuts down without a problem.
## Summary - skip nonexistent `workspace-write` writable roots in the Linux bubblewrap mount builder instead of aborting sandbox startup - keep existing writable roots mounted normally so mixed Windows/WSL configs continue to work - add unit and Linux integration regression coverage for the missing-root case ## Context This addresses regression A from openai#14875. Regression B will be handled in a separate PR. The old bubblewrap integration added `ensure_mount_targets_exist` as a preflight guard because bubblewrap bind targets must exist, and failing early let Codex return a clearer error than a lower-level mount failure. That policy turned out to be too strict once bubblewrap became the default Linux sandbox: shared Windows/WSL or mixed-platform configs can legitimately contain a well-formed writable root that does not exist on the current machine. This PR keeps bubblewrap's existing-target requirement, but changes Codex to skip missing writable roots instead of treating them as fatal configuration errors.
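The behavior change can be sketched as a filter over the configured roots (a simplified stand-in for the real mount builder): missing roots are dropped instead of aborting sandbox startup.

```rust
use std::path::PathBuf;

// Hypothetical sketch: keep only writable roots that exist on this
// machine; a mixed Windows/WSL config may list roots for the other OS.
fn existing_writable_roots(roots: Vec<PathBuf>) -> Vec<PathBuf> {
    roots.into_iter().filter(|root| root.exists()).collect()
}

fn main() {
    let roots = vec![PathBuf::from("/"), PathBuf::from("/mnt/missing-on-this-host")];
    // Previously the missing root would have been a fatal startup error.
    println!("{:?}", existing_writable_roots(roots));
}
```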
Switch plugin-install background MCP OAuth to a silent login path so the raw authorization URL is no longer printed in normal success cases. OAuth behavior is otherwise unchanged, with fallback URL output via stderr still shown only if browser launch fails. Before: https://github.com/user-attachments/assets/4bf387af-afa8-4b83-bcd6-4ca6b55da8db
- [x] Additional gating for tool suggest and apps.
## Summary - drop `sandbox_permissions` from the sandboxing `ExecOptions` and `ExecRequest` adapter types - remove the now-unused plumbing from shell, unified exec, JS REPL, and apply-patch runtime call sites - default reconstructed `ExecParams` to `SandboxPermissions::UseDefault` where the lower-level API still requires the field ## Testing - `just fmt` - `just argument-comment-lint` - `cargo test -p codex-core` (still running locally; first failures observed in `suite::cli_stream::responses_mode_stream_cli`, `suite::cli_stream::responses_mode_stream_cli_supports_openai_base_url_config_override`, and `suite::cli_stream::responses_mode_stream_cli_supports_openai_base_url_env_fallback`)
- move the shared byte-based middle truncation logic from `core` into `codex-utils-string` - keep token-specific truncation in `codex-core` so rollout can reuse the shared helper in the next stacked PR --------- Co-authored-by: Codex <noreply@openai.com>
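A sketch of byte-based middle truncation of the kind such a shared helper might expose (this is an assumed signature, not the actual `codex-utils-string` API): keep the head and tail within a byte budget and elide the middle, backing off to UTF-8 char boundaries so no sequence is split.

```rust
// Hypothetical middle-truncation helper with a byte budget.
fn truncate_middle(s: &str, max_bytes: usize) -> String {
    const ELLIPSIS: &str = "...";
    if s.len() <= max_bytes {
        return s.to_string();
    }
    if max_bytes <= ELLIPSIS.len() {
        return ELLIPSIS[..max_bytes].to_string();
    }
    let keep = max_bytes - ELLIPSIS.len();
    let mut head_end = keep - keep / 2;
    // back off to char boundaries so we never split a UTF-8 sequence
    while !s.is_char_boundary(head_end) {
        head_end -= 1;
    }
    let mut tail_start = s.len() - keep / 2;
    while !s.is_char_boundary(tail_start) {
        tail_start += 1;
    }
    format!("{}{}{}", &s[..head_end], ELLIPSIS, &s[tail_start..])
}

fn main() {
    println!("{}", truncate_middle("abcdefghij", 7)); // "ab...ij"
}
```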
### Summary Add the v2 app-server filesystem watch RPCs and notifications, wire them through the message processor, and implement connection-scoped watches with notify-backed change delivery. This also updates the schema fixtures, app-server documentation, and the v2 integration coverage for watch and unwatch behavior. This allows clients to efficiently watch for filesystem updates, e.g. to react on branch changes. ### Testing - exercise watch lifecycles for directory changes, atomic file replacement, missing-file targets, and unwatch cleanup
…ver (openai#15674) - Add a small delayed loading header for plugin list/detail loading messages in the TUI. Keep existing text for the first 1s, then show shimmer on the loading line. - Apply the same behavior in both tui and tui_app_server. https://github.com/user-attachments/assets/71dd35e4-7e3b-4e7b-867a-3c13dc395d3a
This allows clients to get enough information to interact with the codex skills/configuration/etc.
## Why This is a follow-up to openai#15360. That change fixed the `arg0` helper setup, but `rmcp-client` still coerced stdio transport environment values into UTF-8 `String`s before program resolution and process spawn. If `PATH` or another inherited environment value contains non-UTF-8 bytes, that loses fidelity before it reaches `which` and `Command`. ## What changed - change `create_env_for_mcp_server()` to return `HashMap<OsString, OsString>` and read inherited values with `std::env::var_os()` - change `TransportRecipe::Stdio.env`, `RmcpClient::new_stdio_client()`, and `program_resolver::resolve()` to keep stdio transport env values in `OsString` form within `rmcp-client` - keep the `codex-core` config boundary stringly, but convert configured stdio env values to `OsString` once when constructing the transport - update the rmcp-client stdio test fixtures and callers to use `OsString` env maps - add a Unix regression test that verifies `create_env_for_mcp_server()` preserves a non-UTF-8 `PATH` ## How to verify - `cargo test -p codex-rmcp-client` - `cargo test -p codex-core mcp_connection_manager` - `just argument-comment-lint` Targeted coverage in this change includes `utils::tests::create_env_preserves_path_when_it_is_not_utf8`, while the updated stdio transport path is exercised by the existing rmcp-client tests that construct `RmcpClient::new_stdio_client()`.
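The core of the fidelity fix can be sketched like this (the function name mirrors the one in the PR, but the body is an assumed simplification): reading with `std::env::var_os()` keeps values as `OsString`, whereas `std::env::var()` would fail or lose non-UTF-8 bytes.

```rust
use std::collections::HashMap;
use std::env;
use std::ffi::OsString;

// Hypothetical sketch: build an env map for a child process while
// preserving non-UTF-8 bytes in inherited values such as PATH.
fn inherited_env(keys: &[&str]) -> HashMap<OsString, OsString> {
    keys.iter()
        .filter_map(|key| {
            // var_os never performs lossy UTF-8 conversion
            env::var_os(key).map(|value| (OsString::from(key), value))
        })
        .collect()
}

fn main() {
    let env_map = inherited_env(&["PATH", "HOME"]);
    println!("inherited {} values byte-for-byte", env_map.len());
}
```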
…gins (openai#15700) - Removes provenance filtering in the mentions feature for apps and skills that were installed as part of a plugin. - All skills and apps for a plugin are mentionable with this change.
- Adds language and "[learn more](https://help.openai.com/en/articles/11487775-apps-in-chatgpt)" link to plugin details pages. - Message is hidden when plugin is installed <img width="1970" height="498" alt="image" src="https://github.com/user-attachments/assets/f14330f7-661e-4860-8538-6dc9e8bbd90a" />
## Summary - Reuse the existing config path resolver for the macOS MDM managed preferences layer so `writable_roots = ["~/code"]` expands the same way as file-backed config - keep the change scoped to the MDM branch in `config_loader`; the current net diff is only `config_loader/mod.rs` plus focused regression tests in `config_loader/tests.rs` and `config/service_tests.rs` - research note: `resolve_relative_paths_in_config_toml(...)` is already used in several existing configuration paths, including [CLI overrides](https://github.com/openai/codex/blob/74fda242d3651f0a43ec8657bdbc7bde426dce0e/codex-rs/core/src/config_loader/mod.rs#L152-L163), [file-backed managed config](https://github.com/openai/codex/blob/74fda242d3651f0a43ec8657bdbc7bde426dce0e/codex-rs/core/src/config_loader/mod.rs#L274-L285), [normal config-file loading](https://github.com/openai/codex/blob/74fda242d3651f0a43ec8657bdbc7bde426dce0e/codex-rs/core/src/config_loader/mod.rs#L311-L331), [project `.codex/config.toml` loading](https://github.com/openai/codex/blob/74fda242d3651f0a43ec8657bdbc7bde426dce0e/codex-rs/core/src/config_loader/mod.rs#L863-L865), and [role config loading](https://github.com/openai/codex/blob/74fda242d3651f0a43ec8657bdbc7bde426dce0e/codex-rs/core/src/agent/role.rs#L105-L109) ## Validation - `cargo fmt --all --check` - `cargo test -p codex-core managed_preferences_expand_home_directory_in_workspace_write_roots -- --nocapture` - `cargo test -p codex-core write_value_succeeds_when_managed_preferences_expand_home_directory_paths -- --nocapture` --------- Co-authored-by: Michael Bolin <mbolin@openai.com> Co-authored-by: Michael Bolin <bolinfest@gmail.com>
## Summary - remove the fork-startup `build_initial_context` injection - keep the reconstructed `reference_context_item` as the fork baseline until the first real turn - update fork-history tests and the request snapshot, and add a `TODO(ccunningham)` for remaining nondiffable initial-context inputs ## Why Fork startup was appending current-session initial context immediately after reconstructing the parent rollout, then the first real turn could emit context updates again. That duplicated model-visible context in the child rollout. ## Impact Forked sessions now behave like resume for context seeding: startup reconstructs history and preserves the prior baseline, and the first real turn handles any current-session context emission. --------- Co-authored-by: Codex <noreply@openai.com>
- [x] Add a method to override feature flags globally and not just thread level.
- Hide App ID from plugin details page.
TL;DR: update the quickstart integration assertion to match the current example output. - replace the stale `Status:` expectation for `01_quickstart_constructor` with `Server:`, `Items:`, and `Text:` - keep the existing guard against `Server: unknown`
- [x] Flip the `plugins` and `apps` flags.
- [x] Flip on additional flags.
…nai#14172) ## Summary - keep legacy Windows restricted-token sandboxing as the supported baseline - support the split-policy subset that restricted-token can enforce directly today - support full-disk read, the same writable root set as legacy `WorkspaceWrite`, and extra read-only carveouts under those writable roots via additional deny-write ACLs - continue to fail closed for unsupported split-only shapes, including explicit unreadable (`none`) carveouts, reopened writable descendants under read-only carveouts, and writable root sets that do not match the legacy workspace roots ## Example Given a filesystem policy like: ```toml ":root" = "read" ":cwd" = "write" "./docs" = "read" ``` the restricted-token backend can keep the workspace writable while denying writes under `docs` by layering an extra deny-write carveout on top of the legacy workspace-write roots. A policy like: ```toml "/workspace" = "write" "/workspace/docs" = "read" "/workspace/docs/tmp" = "write" ``` still fails closed, because the unelevated backend cannot reopen the nested writable descendant safely. ## Stack -> fix: support split carveouts in windows restricted-token sandbox openai#14172 fix: support split carveouts in windows elevated sandbox openai#14568
This PR adds code to recover from a narrow app-server timing race where a follow-up can be sent after the previous turn has already ended but before the TUI has observed that completion. Instead of surfacing `turn/steer failed: no active turn to steer`, the client now treats that as a stale active-turn cache and falls back to starting a fresh turn, matching the intended submit behavior more closely. This is similar to the strategy employed by other app-server clients (notably, the IDE extension and desktop app). This race exists because the current app-server API makes the client choose between two separate RPCs, `turn/steer` and `turn/start`, based on its local view of whether a turn is still active. That view is replicated from asynchronous notifications, so it can be stale for a brief window. The server may already have ended the turn while the client still believes it is in progress. Since the choice is made client-side rather than atomically on the server, `tui_app_server` can occasionally send `turn/steer` for a turn that no longer exists.
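The fallback can be sketched as follows (a simplified model, not the actual client code: in reality the client only learns the server's view from the steer RPC failing, which is modeled here as the `server_turn` argument):

```rust
#[derive(Debug, PartialEq)]
enum Submit {
    Steered,
    StartedFresh,
}

// Hypothetical decision: steer only when the cached turn is still live on
// the server; otherwise drop the stale cache entry and start a new turn.
fn submit(cached_turn: &mut Option<u64>, server_turn: Option<u64>) -> Submit {
    match (*cached_turn, server_turn) {
        (Some(cached), Some(live)) if cached == live => Submit::Steered,
        _ => {
            *cached_turn = None; // stale active-turn cache: clear it
            Submit::StartedFresh
        }
    }
}

fn main() {
    let mut cached = Some(42);
    println!("{:?}", submit(&mut cached, Some(42))); // turn still live: steer
    let mut stale = Some(42);
    println!("{:?}", submit(&mut stale, None)); // turn already ended: fresh turn
}
```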
This merge brings in 804 upstream commits from openai/codex. Strategy: Accepted upstream versions for: - All Rust source code in codex-rs - TypeScript/JSON schema files - Cargo, Bazel, and module configurations - GitHub workflows - Protocol definitions Preserved Phenotype-specific: - AGENTS.md governance file - shell-tool-mcp Phenotype extensions - Phenotype-specific CI configurations Note: 100+ files had conflicts due to significant architectural divergence between the Phenotype fork and upstream OpenAI codebase.
Automated upstream sync branch. Please review.