
[hooks] userpromptsubmit - hook before user's prompt is executed#14626

Merged
eternal-openai merged 10 commits into main from eternal/hooks-userpromptsubmit
Mar 18, 2026

Conversation


@eternal-openai eternal-openai commented Mar 13, 2026

  • this allows blocking the user's prompt from executing, and also prevents it from entering history
  • handles the edge case where a hook can both block the user's prompt AND add any number of additionalContext entries
  • refactors some old code into common.rs where hooks overlap in functionality
  • refactors additionalContext handling: previously it was appended to user messages; we now use developer messages instead
  • handles queued messages correctly

Sample hook for testing - if you write "[block-user-submit]" this hook will stop the thread:

example run

› sup


• Running UserPromptSubmit hook: reading the observatory notes

UserPromptSubmit hook (completed)
  warning: wizard-tower UserPromptSubmit demo inspected: sup
  hook context: Wizard Tower UserPromptSubmit demo fired. For this reply only, include the exact
phrase 'observatory lanterns lit' exactly once near the end.

• Just riding the cosmic wave and ready to help, my friend. What are we building today? observatory
  lanterns lit


› and [block-user-submit]


• Running UserPromptSubmit hook: reading the observatory notes

UserPromptSubmit hook (stopped)
  warning: wizard-tower UserPromptSubmit demo blocked the prompt on purpose.
  stop: Wizard Tower demo block: remove [block-user-submit] to continue.

.codex/config.toml

[features]
codex_hooks = true

.codex/hooks.json

{
  "hooks": {
    "UserPromptSubmit": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "/usr/bin/python3 .codex/hooks/user_prompt_submit_demo.py",
            "timeoutSec": 10,
            "statusMessage": "reading the observatory notes"
          }
        ]
      }
    ]
  }
}

.codex/hooks/user_prompt_submit_demo.py

#!/usr/bin/env python3

import json
import sys
from pathlib import Path


def prompt_from_payload(payload: dict) -> str:
    prompt = payload.get("prompt")
    if isinstance(prompt, str) and prompt.strip():
        return prompt.strip()

    event = payload.get("event")
    if isinstance(event, dict):
        user_prompt = event.get("user_prompt")
        if isinstance(user_prompt, str):
            return user_prompt.strip()

    return ""


def main() -> int:
    payload = json.load(sys.stdin)
    prompt = prompt_from_payload(payload)
    cwd = Path(payload.get("cwd", ".")).name or "wizard-tower"

    if "[block-user-submit]" in prompt:
        print(
            json.dumps(
                {
                    "systemMessage": (
                        f"{cwd} UserPromptSubmit demo blocked the prompt on purpose."
                    ),
                    "decision": "block",
                    "reason": (
                        "Wizard Tower demo block: remove [block-user-submit] to continue."
                    ),
                }
            )
        )
        return 0

    prompt_preview = prompt or "(empty prompt)"
    if len(prompt_preview) > 80:
        prompt_preview = f"{prompt_preview[:77]}..."

    print(
        json.dumps(
            {
                "systemMessage": (
                    f"{cwd} UserPromptSubmit demo inspected: {prompt_preview}"
                ),
                "hookSpecificOutput": {
                    "hookEventName": "UserPromptSubmit",
                    "additionalContext": (
                        "Wizard Tower UserPromptSubmit demo fired. "
                        "For this reply only, include the exact phrase "
                        "'observatory lanterns lit' exactly once near the end."
                    ),
                },
            }
        )
    )
    return 0


if __name__ == "__main__":
    raise SystemExit(main())

@eternal-openai eternal-openai marked this pull request as ready for review March 13, 2026 21:20
@eternal-openai eternal-openai force-pushed the eternal/hooks-userpromptsubmit branch 5 times, most recently from 0a31f07 to f91c2e5 on March 16, 2026 06:32
@etraut-openai etraut-openai added the oai PRs contributed by OpenAI employees label Mar 16, 2026
@eternal-openai eternal-openai force-pushed the eternal/hooks-userpromptsubmit branch from 2902d26 to 8003348 on March 17, 2026 04:03
turn_id: Option<String>,
parse: fn(&ConfiguredHandler, CommandRunResult, Option<String>) -> ParsedHandler<T>,
) -> Vec<ParsedHandler<T>> {
let results = join_all(
Collaborator

This still launches every handler concurrently, so a blocking user-prompt-submit hook cannot prevent later handlers from running/injecting context. That broke the pre-submit stop semantics + conflicts with the reported Sync execution mode

Contributor Author

I'm under the impression that the code matches the spec here -- the spec doesn't have a concept of sequential sync, all hooks are run in parallel all the time (to save time, etc). What sync vs async means is that async hooks are run purely in the background, i.e. the thread doesn't wait for them to complete (we haven't implemented this yet). Sync means that we block the loop until all sync hooks complete. Agree that the naming in the code could be a bit more clear, but I'm leaning towards leaving this as-is
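To make the semantics described above concrete, here is a minimal sketch in Python's asyncio (purely illustrative, not the actual Rust implementation): all handlers launch concurrently, and "sync" only means the loop waits for all of them before continuing.

```python
import asyncio

async def run_handler(name: str, delay: float) -> str:
    # Every handler is launched concurrently (like the join_all above);
    # no handler waits for another to finish first.
    await asyncio.sleep(delay)
    return f"{name} done"

async def run_sync_hooks() -> list[str]:
    # "Sync" means the loop blocks here until all handlers complete.
    # An "async" hook would instead be spawned in the background and
    # never awaited by the loop.
    return await asyncio.gather(
        run_handler("a", 0.02),
        run_handler("b", 0.01),
    )

# gather preserves argument order even though "b" finishes first.
results = asyncio.run(run_sync_hooks())
print(results)
```

Note that in this model, one handler returning a block decision cannot prevent its siblings from running; they were all already started.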

for pending_input in accepted_pending_input {
record_pending_input(&sess, &turn_context, pending_input).await;
}
record_additional_contexts(&sess, &turn_context, blocked_pending_input_contexts).await;
Collaborator

If a later queued prompt is blocked after an earlier queued prompt was accepted, this injects the blocked prompt’s context into the accepted follow-up request?

Contributor Author

Yes, that was a detail that I was paying attention to. The spec indeed wants us to accept additionalContext regardless of block, so: if queued prompt A is accepted, queued prompt B is blocked, and B returns additionalContext, we record A, then record B’s additional context, then build the next sampling request. So yes, B’s context can influence the follow-up request that contains A

Basically, the mechanics of additionalContext are that it's added no matter what, if the hook sets it
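A hypothetical hook output illustrating this behavior: even when the decision is "block", any additionalContext the hook sets is still recorded (field names follow the demo script in this PR).

```python
import json

# Hypothetical output from a blocked queued prompt's hook. The
# "decision": "block" stops that prompt, but the additionalContext
# below is still recorded and can land in the follow-up request built
# from the accepted prompts.
output = {
    "decision": "block",
    "reason": "demo block: remove the trigger phrase to continue.",
    "hookSpecificOutput": {
        "hookEventName": "UserPromptSubmit",
        "additionalContext": "context from the blocked prompt",
    },
}
print(json.dumps(output))
```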

_ => false,
},
HookEventName::Stop => true,
HookEventName::UserPromptSubmit | HookEventName::Stop => true,
Collaborator

UserPromptSubmit matchers are parsed and validated, but this makes them unconditional at runtime. Either honor matchers or reject them in config

Contributor Author

Great catch, addressing...

Contributor Author

UserPromptSubmit matchers were already ignored at runtime, but discovery was still parsing/validating/storing them, which meant an invalid matcher could still warn or prevent registration. I changed discovery to ignore matchers for UserPromptSubmit (and Stop) as well

This matches what the spec wants:

UserPromptSubmit, Stop, [...] don’t support matchers and always fire on every occurrence. If you add a matcher field to these events, it is silently ignored.
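For example, reusing the hooks.json shape from above (the matcher value here is a made-up placeholder), a matcher on a UserPromptSubmit entry would be silently ignored rather than warned about or rejected:

```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "matcher": "some-matcher",
        "hooks": [
          {
            "type": "command",
            "command": "/usr/bin/python3 .codex/hooks/user_prompt_submit_demo.py"
          }
        ]
      }
    ]
  }
}
```

Under the behavior described above, this hook still fires on every prompt; the "matcher" field has no effect.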

let request = codex_hooks::SessionStartRequest {
session_id: sess.conversation_id,
cwd: turn_context.cwd.clone(),
transcript_path: sess.current_rollout_path().await,
Collaborator

Is the file at the current_rollout_path guaranteed to exist at this point? It seems like it's only materialized when the prompt is recorded below by record_user_prompt_and_emit_turn_item

Contributor Author

Thanks! Fixed it so that we make sure to materialize the transcript before the hook runs

Collaborator

Haven't looked into this too deeply, but I want to be sure we aren't reverting intentional work. It seems from this PR that deferring the materialization until the first UserMessage was intentional -- is it OK to basically undo that?

Contributor Author

Yeah @etraut-openai would love to have your thoughts here, but @sayan-oai I guess it's not exactly a revert, because this only runs if hooks require it -- and I'm not sure how to get around that requirement without breaking the spec. I'm also contemplating how closely we want to match the transcript style that's used in the industry, since ours is fairly different of course -- so maybe one day this becomes a different transcript mechanism altogether

PendingInputHookDisposition::Blocked {
additional_contexts,
} => {
record_additional_contexts(self, &turn_context, additional_contexts).await;
Collaborator

@sayan-oai sayan-oai Mar 17, 2026

The similar pending_input loop in run_turn seems to break and requeue remaining items on Blocked, but this one seems to continue. Why are they different?

Collaborator

@charley-oai, why do we drain all pending items on task_finished and persist them in conversation history? I see this PR. Just curious if this is behavior we want to continue with prompt-submission turn hooks.

Contributor Author

Yeah, thank you Sayan & Charley for helping me understand and untangle this. I think the reef that I hit surfing through here is that:

  • Previously this was introduced to handle a kind of weird race between turn ending & queued messages coming in. So my read here is that we wanted to at least persist those loose-hanging prompts into history, then if the model ever started up that thread again, it would catch em?
  • But my little catch-22 here is that this mechanism would bypass hooks by doing so

So it sounds like we need to either (a) make sure this race can't happen in a different way, like automatically starting the next real turn or (b) make some kind of new concept of queue persistence so that the next time the thread gets started, it'll run through the normal hook processing pipeline. But let me know if I'm catching the wave wrong

Collaborator

Yep @eternal-openai that's exactly right

Contributor Author

Alright, thanks @charley-oai -- I'm going to descope that from this PR then and come back to fixing this issue in its own PR; it seems like it would be a bit much here.

let response_item: ResponseItem = initial_input_for_turn.clone().into();
sess.record_user_prompt_and_emit_turn_item(turn_context.as_ref(), &input, response_item)
.await;
record_additional_contexts(&sess, &turn_context, additional_contexts).await;
Collaborator

Developer messages are dropped on remote compaction, so the additional context will disappear. Is it OK for this to disappear from model history?

Contributor Author

Hmm, thanks for telling me about that! I'll think about it some more, but my instinct is that that's okay:

  • various hooks are likely to run again after compaction
  • developers can use pre/post compaction hooks to ensure what they want gets reinserted

Collaborator

Yeah, I think this is non-blocking; we can see if the model becomes confused.

developers can use pre/post compaction hooks to ensure what they want gets reinserted

Fair, but I don't think many users will be aware of the quirks of dev messages and the compaction lifecycle :)

Collaborator

@sayan-oai sayan-oai left a comment

Thanks for digging into the details to make sure this lands well!

@eternal-openai eternal-openai merged commit 6fef421 into main Mar 18, 2026
33 checks passed
@eternal-openai eternal-openai deleted the eternal/hooks-userpromptsubmit branch March 18, 2026 05:09
@github-actions github-actions bot locked and limited conversation to collaborators Mar 18, 2026

5 participants