Currently, if I specify a `timeout` query context when querying a broker, that timeout is not honored. It is entirely possible to set a timeout of 90s and have the broker hog resources for the query for 15m while it tries to complete the request.
The expected behavior is that the broker terminates the returning data stream once `timeout` has expired, and also that it cancels any outstanding query efforts on the historicals related to the query.
Timeouts currently appear to be enforced on a per-segment basis rather than a per-query basis, meaning that if historical nodes take an abnormally long time on queries, the overall query can run far longer than the timeout.
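To illustrate the difference, here is a minimal sketch of per-query enforcement (the `QueryDeadline` helper is hypothetical, not Druid's actual API): the deadline is computed once when the query arrives, and every per-segment call draws from the same shrinking budget instead of each getting a fresh timeout.

```java
import java.util.concurrent.TimeoutException;

public class QueryDeadline {
    private final long deadlineMillis;

    public QueryDeadline(long timeoutMillis) {
        // Fixed once for the whole query, never reset per segment.
        this.deadlineMillis = System.currentTimeMillis() + timeoutMillis;
    }

    // Remaining budget to hand to the next per-segment call; throws as
    // soon as the whole-query deadline has elapsed, so a slow segment
    // cannot push the total runtime past the user's timeout.
    public long remainingMillis() throws TimeoutException {
        long remaining = deadlineMillis - System.currentTimeMillis();
        if (remaining <= 0) {
            throw new TimeoutException("Query timed out");
        }
        return remaining;
    }

    public static void main(String[] args) throws Exception {
        QueryDeadline deadline = new QueryDeadline(90_000); // 90s budget
        // Each segment fetch would be bounded by what is left:
        System.out.println(deadline.remainingMillis() <= 90_000);
    }
}
```

Under the current per-segment scheme, each segment effectively restarts the clock, which is how a 90s timeout stretches into minutes.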
This stems from 2 things:
Part of this comes from the assumption stated in `io.druid.query.AsyncQueryRunner::run`, because a lot of the query workload is actually done lazily in `yield`.
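That laziness problem can be reproduced in isolation (a minimal sketch; `lazyResults` and `slowCompute` are hypothetical stand-ins for the broker's lazy result sequence, not Druid code): constructing the lazy iterator returns almost instantly, so a timeout enforced only around the runner's return never observes the real cost, which is paid later when the results are iterated.

```java
import java.util.Iterator;
import java.util.function.Supplier;

public class LazyYieldDemo {
    // Returns immediately; the expensive work is deferred until next() is
    // called, so a timeout wrapped only around this call measures ~0 time.
    static Iterator<Integer> lazyResults(Supplier<Integer> expensive, int n) {
        return new Iterator<Integer>() {
            int produced = 0;
            public boolean hasNext() { return produced < n; }
            public Integer next() { produced++; return expensive.get(); }
        };
    }

    static int slowCompute() {
        try { Thread.sleep(10); } catch (InterruptedException ignored) { }
        return 42;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        Iterator<Integer> it = lazyResults(LazyYieldDemo::slowCompute, 3);
        long constructed = System.nanoTime() - start; // tiny: nothing ran yet
        while (it.hasNext()) { it.next(); }           // the real cost lands here
        long consumed = System.nanoTime() - start;
        System.out.println(constructed < consumed);
    }
}
```

This matches the symptom above: the async runner's future completes quickly with a lazy sequence, and the per-query timeout check has already passed by the time the broker starts streaming (and paying for) the results.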