Feature hasn't been suggested before.
Describe the enhancement you want to request
OpenAI is silently routing requests from GPT-5.3-Codex to GPT-5.2 when their systems detect potential "cyber activities" (source). This affects legitimate users doing security research or development work.
Request:

OpenCode should detect when requests are routed to a different model and notify users with a clear warning. This could include:

- Monitoring API responses for routing indicators
- Displaying a warning message when a downgrade is detected
- Providing a link to chatgpt.com/cyber for regaining access
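A minimal sketch of the detection idea, in TypeScript since that is OpenCode's own language. This is a hypothetical illustration, not OpenCode's actual code: the `detectModelDowngrade` function and `CompletionResponse` shape are assumptions, based only on the fact that OpenAI-style completion responses echo the serving model in a `model` field.

```typescript
// Hypothetical sketch: detect a silent model downgrade by comparing the
// model we asked for with the model reported back in the API response.
// Field names here mirror the OpenAI chat completions response shape,
// where `model` identifies the model that actually served the request.

interface CompletionResponse {
  model: string; // model that actually handled the request
}

function detectModelDowngrade(
  requestedModel: string,
  response: CompletionResponse,
): string | null {
  if (response.model !== requestedModel) {
    return (
      `Warning: requested "${requestedModel}" but the request was served by ` +
      `"${response.model}". If you believe this is a misclassification, ` +
      `see chatgpt.com/cyber to regain access.`
    );
  }
  return null; // requested model matches the serving model
}

// Example: a request to GPT-5.3-Codex answered by GPT-5.2
const warning = detectModelDowngrade("gpt-5.3-codex", { model: "gpt-5.2" });
console.log(warning ?? "No downgrade detected");
```

A real implementation would also need to normalize model identifiers (providers sometimes append snapshot suffixes to the requested name), so a plain string comparison is only a starting point.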
Benefits:

- Users understand why performance suddenly degrades
- Reduces time spent debugging issues caused by silent model downgrades
- Users can take action to regain proper access when misclassified
Context:

A recent incident affected ~9% of users. OpenAI plans to add notifications eventually, but detection at the OpenCode level gives users immediate visibility.