What version of the Codex App are you using (From “About Codex” dialog)?
0.111
What subscription do you have?
Pro
What platform is your computer?
No response
What issue are you seeing?
Probably related to the other complaints about limits, but specifically in my case: I used Codex 5.3 Spark for about 20 seconds to create a commit message and noticed my weekly Spark limit was already down to 84%, despite not having used Spark for anything else. I then switched to GPT-5.2, with no Spark tasks running, and watched my Spark usage drop further to 78% while my normal usage didn't move. Things normalised after a while.
Before this, I had been experiencing very high usage with 5.4, as reported in a number of other complaints.
It seems to me that either usage is being counted against the wrong model, or usage is being double-counted (against both the correct model and another one).
What steps can reproduce the bug?
Switch to 5.3-codex-spark and run a very small task; observe a possibly abnormally high impact on the Spark limit. Then switch to a model that counts against the normal limits and use it; possibly observe the Spark limit still being consumed.
What is the expected behavior?
No response
Additional information
No response