Problem
usage_request should probably use PSS instead of RSS, since RSS overestimates memory usage when pages are shared copy-on-write (e.g. after a fork). See the reproducer below:
import multiprocessing
import time

import psutil


def foo():
    # Children just sleep, keeping a copy-on-write view of the parent's memory.
    time.sleep(10000)


# Allocate ~1 GB in the parent before forking.
parents_original_memory = bytearray(int(1e9))  # 1GB

for i in range(10):
    multiprocessing.Process(target=foo).start()


def get_memory_info(type):
    # Sum the given memory_full_info() field over this process and all of its children.
    process_metric_value = lambda process: getattr(process.memory_full_info(), type)
    current_process = psutil.Process()
    all_processes = [current_process] + current_process.children(recursive=True)
    return (
        f"{sum([process_metric_value(process) for process in all_processes]) / 1e9} GB"
    )


print("RSS: ", get_memory_info("rss"))
print("PSS: ", get_memory_info("pss"))

Output:
RSS: 11.590012928 GB
PSS: 1.082778624 GB
PSS seems to be more accurate here.
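A possible direction (just a sketch, not the current usage_request implementation; the helper name and the fallback behaviour are my assumptions) would be to sum pss from memory_full_info() where psutil exposes it and fall back to rss elsewhere, since psutil only reports PSS on Linux:

import psutil


def total_memory_bytes(pid):
    # Hypothetical helper: sum PSS over a process tree, falling back to RSS
    # where psutil does not expose .pss (Linux-only) or where reading
    # /proc/<pid>/smaps is not permitted.
    parent = psutil.Process(pid)
    total = 0
    for proc in [parent] + parent.children(recursive=True):
        try:
            info = proc.memory_full_info()
            total += getattr(info, "pss", info.rss)
        except psutil.AccessDenied:
            total += proc.memory_info().rss
        except psutil.NoSuchProcess:
            pass  # the process exited between enumeration and measurement
    return total

On Linux this should give numbers close to the PSS line above; on platforms without PSS it degrades to the current RSS behaviour.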
Additional context
This is very similar to jupyter-server/jupyter-resource-usage#130.