Httpx not production ready under load #3100
Replies: 13 comments · 5 replies
-
Ah, locust looks really useful here, thanks. First up: don't compare HTTP/2 requests vs. HTTP/1.1 requests. We deliberately don't have HTTP/2 on by default in httpx. So then... I tried your setup.
Let's start by figuring out if we're ~on the same page here. Here's a sample of the results I get there...
[sample results comparing httpx and requests omitted]
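For what it's worth, a minimal sketch of putting both clients on the same protocol before benchmarking (`http2=True` needs the `httpx[http2]` extra; the URL is a placeholder):

```python
import httpx
import requests

# requests only speaks HTTP/1.1, so compare against an HTTP/1.1 httpx client.
requests_client = requests.Session()
httpx_h1 = httpx.Client()  # HTTP/1.1 is the httpx default

# HTTP/2 is opt-in and needs the extra: pip install 'httpx[http2]'
httpx_h2 = httpx.Client(http2=True)

for name, client in [("requests", requests_client), ("httpx/1.1", httpx_h1), ("httpx/2", httpx_h2)]:
    r = client.get("https://www.example.com")  # placeholder URL
    # httpx responses expose http_version; requests responses don't.
    print(name, getattr(r, "http_version", "HTTP/1.1"))
```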
-
Hi @tomchristie, really appreciate the response!
-
The problem we have is with the ...
-
Let's try to keep things really simple...

```python
import concurrent.futures
import time

import httpx
import requests

NUM_REQUESTS = 100
NUM_THREADS = 10
URL = "https://www.example.com"

print("URL: ", URL)
print("NUM_REQUESTS: ", NUM_REQUESTS)
print("NUM_THREADS: ", NUM_THREADS)
print()

# requests: one shared Session across all threads.
client = requests.Session()
durations = []

def send():
    for _ in range(NUM_REQUESTS):
        start = time.monotonic()
        client.get(URL)
        end = time.monotonic()
        durations.append(end - start)

all_start = time.monotonic()
with concurrent.futures.ThreadPoolExecutor(max_workers=NUM_THREADS) as executor:
    for _ in range(NUM_THREADS):
        executor.submit(send)
all_end = time.monotonic()

durations = sorted(durations)
print("requests")
print("--------")
print("Total requests: ", len(durations))
print("Total time: %.3f" % (all_end - all_start))
print("Average: %.3f" % (sum(durations) / len(durations)))
print("Median: %.3f" % (durations[int(len(durations) * 0.5)]))
print("95th: %.3f" % (durations[int(len(durations) * 0.95)]))
print("99th: %.3f" % (durations[int(len(durations) * 0.99)]))
print()

# httpx: one shared Client across all threads.
client = httpx.Client()
durations = []

def send():
    for _ in range(NUM_REQUESTS):
        start = time.monotonic()
        client.get(URL)
        end = time.monotonic()
        durations.append(end - start)

all_start = time.monotonic()
with concurrent.futures.ThreadPoolExecutor(max_workers=NUM_THREADS) as executor:
    for _ in range(NUM_THREADS):
        executor.submit(send)
all_end = time.monotonic()

durations = sorted(durations)
print("httpx")
print("-----")
print("Total requests: ", len(durations))
print("Total time: %.3f" % (all_end - all_start))
print("Average: %.3f" % (sum(durations) / len(durations)))
print("Median: %.3f" % (durations[int(len(durations) * 0.5)]))
print("95th: %.3f" % (durations[int(len(durations) * 0.95)]))
print("99th: %.3f" % (durations[int(len(durations) * 0.99)]))
print()
```

I'm seeing these kinds of results... Trying out a few different combinations, and the results I'm seeing seem ~comparable?...
-
@tomchristie Thanks for creating a separate script. I am going to try this out for an actual scenario. Please note that we need to increase ...
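For reference, a sketch of the pool configuration discussed later in this thread (these specific numbers come from the original post below; httpx's documented defaults are `max_connections=100` and `max_keepalive_connections=20`):

```python
import httpx

# Explicitly sized pool, matching the limits quoted in the original post.
limits = httpx.Limits(
    max_connections=100,
    max_keepalive_connections=100,
    keepalive_expiry=120,
)
client = httpx.Client(limits=limits)
```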
-
Okay, when I run it for ...
-
On my local Windows machine, ...
-
For ...
-
Okay, it seems things start falling over when I set NUM_THREADS >= 25...
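One hedged explanation for a ~25-thread threshold (an assumption, not confirmed by the maintainers): httpx's documented defaults keep at most 20 connections alive between requests, so with 25+ worker threads some requests will keep tearing down and re-opening connections (TCP + TLS handshake each time). A sketch of sizing the pool to the thread count to test that theory:

```python
import httpx

NUM_THREADS = 25  # assumption: matches the benchmark's thread count

# Defaults are max_connections=100, max_keepalive_connections=20. With
# >= 25 threads, more threads exist than keepalive slots; matching the
# keepalive pool to the thread count may remove the reconnection churn.
client = httpx.Client(
    limits=httpx.Limits(
        max_connections=NUM_THREADS,
        max_keepalive_connections=NUM_THREADS,
    )
)
```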
-
Link to the script -> https://github.com/sacOO7/locust-httpx-testing/blob/main/cli-script.py
-
@tomchristie, do you have any updates on the above?
-
This might be related: #3215
-
@MarkusSintonen yes, that surely seems related. I think requests is the only one that gives good performance under concurrent users without causing spikes. @tomchristie maybe it's time to test the library against HTTP and HTTPS (with HTTP/2 enabled) with a proper sandbox setup in place.
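On the sandbox point, one stdlib-only option is to benchmark against a local server, so internet latency and any remote rate limiting drop out of the comparison. This is a sketch under that assumption, not the project's actual test setup (the port is arbitrary, and it only serves HTTP/1.1; an HTTP/2 sandbox would need a real server such as hypercorn behind TLS):

```python
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # enable keep-alive, like a real server

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep benchmark output clean

server = ThreadingHTTPServer(("127.0.0.1", 8900), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Point the earlier benchmark script at this instead of a remote host:
URL = "http://127.0.0.1:8900/"
```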
-
I used an `httpx` sync client singleton instance to make requests against https://rest.ably.io/time. You can check https://github.com/sacOO7/locust-httpx-testing/blob/47df57a9beee398a4f625dde736a39435de6e807/httpx_user.py#L41. The following were my observations as the number of users increased:

With `httpx.Limits(max_keepalive_connections=100, max_connections=100, keepalive_expiry=120)`, we can see fewer spikes, but it's still not as good as `python-requests`. Average response time is still greater than the time taken by `python-requests`. Observations posted here:

- `python-requests` vs. `httpx` (http2=True)
- `python-requests` vs. `httpx` (http2=True)
- `python-requests` vs. `httpx` (http2=True) with max connections set to 100

[response-time charts attached in the original post omitted]
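For readers who want to reproduce this kind of run, here is a minimal sketch of a Locust user driving a shared `httpx` sync client, in the spirit of the linked `httpx_user.py`. The class, task, and port-of-call names are illustrative assumptions, not the actual script; the reporting call is Locust 2.x's unified request event.

```python
import time

import httpx
from locust import User, task, between

# Assumption: a single shared sync client, mirroring the singleton in the
# linked httpx_user.py (this is a sketch, not that script).
client = httpx.Client()

class HttpxUser(User):
    wait_time = between(1, 2)

    @task
    def get_time(self):
        start = time.monotonic()
        exc = None
        length = 0
        try:
            response = client.get("https://rest.ably.io/time")
            length = len(response.content)
        except Exception as e:
            exc = e
        # Report the sample to Locust's statistics (response_time is in ms).
        self.environment.events.request.fire(
            request_type="GET",
            name="/time",
            response_time=(time.monotonic() - start) * 1000,
            response_length=length,
            exception=exc,
        )
```

Run with something like `locust -f httpx_user.py`, then ramp users up in the web UI to look for the latency spikes described above.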