fix(profiling): fix SystemError when collecting memory profiler events#12075
Conversation
We added locking to the memory profiler to address crashes. These locks are mostly "try" locks, meaning we bail out if we can't acquire them right away. This was done defensively to mitigate the possibility of deadlock until we fully understood why the locks are needed and could guarantee their correctness. But as a result of using try locks, the `iter_events` function in particular can fail if the memory profiler lock is contended when it tries to collect profiling events. The function then returns NULL, leading to SystemError exceptions because we don't set an error.

Even if we set an error, returning NULL isn't the right thing to do. It basically means we wait until the next profile iteration, still accumulating events in the same buffer, and try again to upload the events. So we're going to get multiple iterations' worth of events.

The right thing to do is to take the lock unconditionally in `iter_events`. This is safe because the only thing we're guarding is allocating a new sample buffer, which doesn't call back into the Python object allocator, and rotating out the pointer to the allocation buffer.

Fixes #11831
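To make the failure mode concrete, here is a minimal sketch, not the real memalloc code, of an extension function guarded by a try-lock; the lock and function names are illustrative. In CPython, a C function that returns NULL without setting an exception surfaces as a SystemError at the call site, which is exactly what happens on the contended-lock path:

```c
#include <Python.h>
#include <pthread.h>

/* Illustrative stand-in for the profiler lock; not the real memalloc symbol. */
static pthread_mutex_t profiler_lock = PTHREAD_MUTEX_INITIALIZER;

static PyObject*
iter_events_with_try_lock(PyObject* Py_UNUSED(self), PyObject* Py_UNUSED(args))
{
    /* Try-lock: bail out immediately if another thread holds the lock. */
    if (pthread_mutex_trylock(&profiler_lock) != 0) {
        /* Bug: returning NULL without calling a PyErr_* function makes
         * CPython raise "SystemError: ... returned NULL without setting
         * an exception" at the call site. Setting an error here would
         * silence that, but the events collected so far would still be
         * skipped and keep piling up in the same buffer. */
        return NULL;
    }

    /* ... collect events into a list, then release the lock ... */
    pthread_mutex_unlock(&profiler_lock);
    Py_RETURN_NONE;
}
```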
Datadog Report. Branch report: ✅ 0 Failed, 130 Passed, 1468 Skipped, 4m 40s total duration (36m 30.45s time saved)
Benchmarks. Benchmark execution time: 2025-01-27 21:22:46. Comparing candidate commit 27a42e6 in the PR branch: found 1 performance improvement and 0 performance regressions. Performance is the same for 393 metrics; 2 unstable metrics. Scenario: iast_aspects-ospathdirname_aspect
We don't currently instrument the raw allocator domain, but if we did, we wouldn't want to allocate the new tracker while holding the memalloc lock, or we would miss that allocation in our profiles. Plus, it's preferable to avoid any reentrancy or deadlock concerns if we can help it. Allocate the new tracker outside the lock in iter_events, and just swap out the global alloc tracker pointer under the lock.
Failure seems unlikely, but we need to clean up after ourselves if allocating an allocation tracker fails in memalloc.iter_events.
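Putting those two follow-ups together, a minimal sketch of the fixed flow might look like the following. The names (`g_memalloc_lock`, `g_alloc_tracker`) and the struct layout are illustrative, not the actual ddtrace symbols: the replacement tracker is allocated before the lock, the only work under the lock is the pointer swap, and an allocation failure raises a real MemoryError instead of returning bare NULL.

```c
#include <Python.h>
#include <pthread.h>
#include <stdlib.h>

/* Illustrative types and globals; not the actual ddtrace memalloc symbols. */
typedef struct {
    size_t count;
    /* ...sampled allocation events... */
} alloc_tracker_t;

static pthread_mutex_t g_memalloc_lock = PTHREAD_MUTEX_INITIALIZER;
static alloc_tracker_t* g_alloc_tracker;

static PyObject*
iter_events_sketch(PyObject* Py_UNUSED(self), PyObject* Py_UNUSED(args))
{
    /* 1. Allocate the replacement tracker before taking the lock. Plain
     *    calloc never calls back into the Python object allocator, so there
     *    is no reentrancy concern and nothing to miss in the profile. */
    alloc_tracker_t* fresh = calloc(1, sizeof(*fresh));
    if (fresh == NULL)
        return PyErr_NoMemory(); /* raise MemoryError, never a bare NULL */

    /* 2. Take the lock unconditionally: the only guarded work is the
     *    pointer swap, so the critical section is tiny and can't deadlock. */
    pthread_mutex_lock(&g_memalloc_lock);
    alloc_tracker_t* full = g_alloc_tracker;
    g_alloc_tracker = fresh;
    pthread_mutex_unlock(&g_memalloc_lock);

    /* 3. Turn the full tracker into Python objects outside the lock, and
     *    free it on both the success and failure paths so nothing leaks. */
    PyObject* events = PyList_New(0);
    free(full);
    return events; /* NULL with an exception already set if PyList_New failed */
}
```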
The backport to 2.18 failed. To backport manually, run these commands in your terminal:
# Fetch latest updates from GitHub
git fetch
# Create a new working tree
git worktree add .worktrees/backport-2.18 2.18
# Navigate to the new working tree
cd .worktrees/backport-2.18
# Create a new branch
git switch --create backport-12075-to-2.18
# Cherry-pick the merged commit of this pull request and resolve the conflicts
git cherry-pick -x --mainline 1 e3045a19a20e0348f040aa589ef3c2517db74618
# Push it to GitHub
git push --set-upstream origin backport-12075-to-2.18
# Go back to the original working tree
cd ../..
# Delete the working tree
git worktree remove .worktrees/backport-2.18
Then, create a pull request where the base branch is 2.18 and the compare/head branch is backport-12075-to-2.18.
The backport to 2.19 failed. To backport manually, run these commands in your terminal:
# Fetch latest updates from GitHub
git fetch
# Create a new working tree
git worktree add .worktrees/backport-2.19 2.19
# Navigate to the new working tree
cd .worktrees/backport-2.19
# Create a new branch
git switch --create backport-12075-to-2.19
# Cherry-pick the merged commit of this pull request and resolve the conflicts
git cherry-pick -x --mainline 1 e3045a19a20e0348f040aa589ef3c2517db74618
# Push it to GitHub
git push --set-upstream origin backport-12075-to-2.19
# Go back to the original working tree
cd ../..
# Delete the working tree
git worktree remove .worktrees/backport-2.19
Then, create a pull request where the base branch is 2.19 and the compare/head branch is backport-12075-to-2.19.
The same description was merged as commit e3045a1 and repeated in the backport and forwardport pull requests:
[backport 2.18] Backports #12075 to 2.18
[backport 2.19] Backports #12075 to 2.19 (#12125)
[backport 2.20] Backport e3045a1 from #12075 to 2.20 (#12110)
[forwardport 3.x] Forwardporting #12075 (#12140)
Add a regression test for races in the memory allocation profiler. The test is marked skip for now, for a few reasons:
- It doesn't trigger the crash in a deterministic amount of time, so it's not really reasonable for CI or the local dev loop as-is.
- It probably benefits more from having the thread sanitizer enabled, which we don't currently do for the memalloc extension.

I'm adding the test so that we have an actual reproducer of the problem, committed somewhere people can find it and easy for any dd-trace-py developer to run. It's currently only really useful for local development. I plan to tweak and optimize some of the synchronization code to reduce memalloc overhead, and we need a reliable reproducer of the crashes the synchronization was meant to fix in order to be confident we don't reintroduce them.

The test reproduces the crash fixed by #11460, as well as the exception fixed by #12075. Both issues stem from the same problem: at one point, memalloc had no synchronization beyond the GIL protecting its internal state. It turns out that calling back into C Python APIs, as we do when collecting tracebacks, can in some cases lead to the GIL being released. So we need additional synchronization for state modification that straddles C Python API calls.

We previously only reliably saw this in a demo program and weren't able to reproduce it locally. Now that I understand the crash much better, I was able to create a standalone reproducer. The key elements are: allocate a lot, trigger GC a lot (including from memalloc traceback collection), and release the GIL during GC.

Important note: this only reliably crashes on Python 3.11. The very specific path to releasing the GIL that we hit was modified in 3.12 and later (see python/cpython#97922). We will probably support 3.11 for a while longer, so it's still worth having this test.
We added locking to memalloc, the memory profiler, in #11460 in order to address crashes. These locks made the crashes go away, but significantly increased the baseline overhead of the profiler and introduced subtle bugs. The locks we added turned out to be fundamentally incompatible with the global interpreter lock (GIL), at least with the implementation from #11460. This PR refactors the profiler to use the GIL exclusively for locking.

First, we should acknowledge no-GIL and subinterpreters. As of right now, our module does not support either. A module has to explicitly opt in to support either, so there is no risk of those modes being enabled under our feet. Supporting either mode is likely a repo-wide project. For now, we can assume the GIL exists.

This work was motivated by overhead. We currently acquire and release locks in every memory allocation and free. Even when the locks aren't contended, allocations and frees are very frequent, and the extra work adds up. We add about ~8x overhead to the baseline cost of allocation just with our locking, not including the cost of actually sampling an allocation. We can't get rid of this overhead just by reducing sampling frequency.

There are a few rules to follow in order to use the GIL correctly for locking:
1) The GIL is held when a C extension function is called, _except_ possibly in the raw allocator, which we do not profile.
2) The GIL may be released during C Python API calls. Even if it is released, though, it will be held again after the call.
3) Thus, the GIL creates critical sections only between C Python API calls, and the beginning and end of C extension functions. Modifications to shared state across those points are not atomic.
4) If we take a lock of our own in C extension code (e.g. a pthread_mutex), and the extension code releases the GIL, then the program will deadlock due to lock order inversion. We can only safely take locks in C extension code when the GIL is released.

The crashes that #11460 addressed were due to breaking the first three rules. In particular, we could race on accessing the shared scratch buffer used when collecting tracebacks, which led to double-frees. See #13185 for more details. Our mitigation involved using C locks around any access to the shared profiler state. We nearly broke rule 4 in the process. However, we used try-locks specifically out of a fear of introducing deadlocks. Try-locks mean that we attempt to acquire the lock, but return a failure if the lock is already held. This stopped deadlocks, but introduced bugs, for example:
- If we failed to take the lock when trying to report allocation profile events, we'd raise an exception when it was in fact not reasonable for that to fail. See #12075.
- memalloc_heap_untrack, which removes tracked allocations, was guarded with a try-lock. If we couldn't acquire the lock, we would fail to remove a record for an allocation and effectively leak memory. See #13317.
- We attempted to make our locking fork-safe. The first attempt was inefficient; we made it less inefficient, but the fix only "worked" because of try-locks. See #11848.

Try-locks hide concurrency problems and we shouldn't use them. Using our own locks requires releasing the GIL before acquisition and then re-acquiring the GIL. That adds unnecessary overhead. We don't inherently need to do any off-GIL work. So, we should try to just use the GIL as long as it is available. The basic refactor is actually pretty simple.

In a nutshell, we rearrange the memalloc_add_event and memalloc_heap_track functions so that they make the sampling decision, then take a traceback, then insert the traceback into the appropriate data structure. Collecting a traceback can release the GIL, so we make sure that modifying the data structure happens completely after the traceback is collected. We also safeguard against the possibility that the profiler was stopped during sampling, if the GIL was released. This requires a small rearrangement of memalloc_stop to make sure that the sampling functions don't see partially-freed profiler data structures. A sketch of the rearranged sampling path follows.

For testing, I have mainly used the code from test_memealloc_data_race_regression. I also added a debug mode, enabled by compiling with MEMALLOC_TESTING_GIL_RELEASE, which releases the GIL at places where it would be expected. For performance, I examined the overhead of profiling on a basic Flask application.
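Here is a compressed sketch of that ordering, assuming illustrative names (should_sample, traceback_collect, buffer_insert, and g_profiler are stand-ins, not the actual memalloc symbols). The invariant is that the traceback collection, which may release the GIL, finishes before any shared profiler state is modified, and that the state is re-checked afterwards in case the profiler was stopped in the meantime:

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct traceback traceback_t;       /* opaque, illustrative */
typedef struct profiler_state {
    bool running;
    /* event buffers, heap tracker, ... */
} profiler_state_t;

static profiler_state_t* g_profiler;        /* protected by the GIL only */

extern bool should_sample(size_t size);             /* sampling decision */
extern traceback_t* traceback_collect(void);        /* may release the GIL */
extern void buffer_insert(profiler_state_t*, traceback_t*);
extern void traceback_free(traceback_t*);

static void
memalloc_add_event_sketch(size_t size)
{
    /* 1. Decide whether to sample; cheap, no C Python API calls. */
    if (g_profiler == NULL || !should_sample(size))
        return;

    /* 2. Collect the traceback. This calls back into CPython and may
     *    release (and later re-acquire) the GIL. */
    traceback_t* tb = traceback_collect();
    if (tb == NULL)
        return;

    /* 3. Only now touch shared state, re-checking that the profiler was
     *    not stopped while the GIL was released in step 2. */
    if (g_profiler == NULL || !g_profiler->running) {
        traceback_free(tb);
        return;
    }
    buffer_insert(g_profiler, tb);
}
```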