
refactor(profiling): remove redundant locks from memalloc#13305

Merged
nsrip-dd merged 9 commits into main from nick.ripley/use-gil-as-a-lock-test
Jun 2, 2025

Conversation

Contributor

@nsrip-dd nsrip-dd commented Apr 30, 2025

We added locking to memalloc, the memory profiler, in #11460 in order to
address crashes. These locks made the crashes go away, but significantly
increased the baseline overhead of the profiler and introduced subtle
bugs. The locks we added turned out to be fundamentally incompatible
with the global interpreter lock (GIL), at least with the implementation
from #11460. This PR refactors the profiler to use the GIL exclusively
for locking.

First, we should acknowledge no-GIL and subinterpreters. As of right
now, our module does not support either. A module has to explicitly
opt-in to support either, so there is no risk of those modes being
enabled under our feet. Supporting either mode is likely a repo-wide
project. For now, we can assume the GIL exists.

This work was motivated by overhead. We currently acquire and release
locks in every memory allocation and free. Even when the locks aren't
contended, allocations and frees are very frequent, and the extra work
adds up. We add roughly 8x overhead to the baseline cost of allocation
just with our locking, not including the cost of actually sampling an
allocation. We can't get rid of this overhead just by reducing sampling
frequency.

There are a few rules to follow in order to use the GIL correctly for
locking:

  1. The GIL is held when a C extension function is called, except
    possibly in the raw allocator, which we do not profile
  2. The GIL may be released during C Python API calls. Even if it is
    released, though, it will be held again after the call
  3. Thus, the GIL creates critical sections only between C Python API
    calls, and the beginning and end of C extension functions. Modifications
    to shared state across those points are not atomic.
  4. If we take a lock of our own in C extension code (e.g. a
    pthread_mutex), and the extension code releases the GIL, then the
    program can deadlock due to lock-order inversion. We can only safely
    take our own locks in C extension code while the GIL is released.

The crashes that #11460 addressed were due to breaking the first three
rules. In particular, we could race on accessing the shared scratch
buffer used when collecting tracebacks, which led to double-frees.
See #13185 for more details.

Our mitigation involved using C locks around any access to the shared
profiler state. We nearly broke rule 4 in the process. However, we used
try-locks specifically out of a fear of introducing deadlocks. Try-locks
mean that we attempt to acquire the lock, but return a failure if the
lock is already held. This stopped the deadlocks but introduced subtle
bugs. For example:

  • If we failed to take the lock when trying to report allocation
    profile events, we'd raise an exception, even though it was not
    reasonable for that to fail. See #12075.
  • memalloc_heap_untrack, which removes tracked allocations, was guarded
    with a try-lock. If we couldn't acquire the lock, we would fail to
    remove a record for an allocation and effectively leak memory.
    See #13317.
  • We attempted to make our locking fork-safe. The first attempt was
    inefficient; we made it less inefficient, but the fix only "worked"
    because of try-locks. See #11848.

Try-locks hide concurrency problems, and we shouldn't use them. Using our
own locks requires releasing the GIL before acquiring them and then
re-acquiring the GIL afterwards, which adds unnecessary overhead. We don't
inherently need to do any off-GIL work, so we should just use the GIL for
locking as long as it is available.

The basic refactor is actually pretty simple. In a nutshell, we
rearrange the memalloc_add_event and memalloc_heap_track functions so
that they make the sampling decision, then take a traceback, then insert
the traceback into the appropriate data structure. Collecting a
traceback can release the GIL, so we make sure that modifying the data
structure happens completely after the traceback is collected. We also
safeguard against the possibility that the profiler was stopped during
sampling, if the GIL was released. This requires a small rearrangement
of memalloc_stop to make sure that the sampling functions don't see
partially-freed profiler data structures.

For testing, I have mainly used the code from test_memalloc_data_race_regression.
I also added a debug mode, enabled by compiling with
MEMALLOC_TESTING_GIL_RELEASE, which releases the GIL at the places where it
could be released. For performance, I examined the overhead of profiling
on a basic Flask application.

Checklist

  • PR author has checked that all the criteria below are met
  • The PR description includes an overview of the change
  • The PR description articulates the motivation for the change
  • The change includes tests OR the PR description describes a testing strategy
  • The PR description notes risks associated with the change, if any
  • Newly-added code is easy to change
  • The change follows the library release note guidelines
  • The change includes or references documentation updates if necessary
  • Backport labels are set (if applicable)

Reviewer Checklist

  • Reviewer has checked that all the criteria below are met
  • Title is accurate
  • All changes are related to the pull request's stated goal
  • Avoids breaking API changes
  • Testing strategy adequately addresses listed risks
  • Newly-added code is easy to change
  • Release note makes sense to a user of the library
  • If necessary, author has acknowledged and discussed the performance implications of this PR as reported in the benchmarks PR comment
  • Backport labels are set in a manner that is consistent with the release branch maintenance policy


github-actions Bot commented Apr 30, 2025

CODEOWNERS have been resolved as:

ddtrace/profiling/collector/_memalloc_debug.h                           @DataDog/profiling-python
releasenotes/notes/profiling-memalloc-remove-redundant-locks-56b58cbed98c1330.yaml  @DataDog/apm-python
ddtrace/profiling/collector/_memalloc.c                                 @DataDog/profiling-python
ddtrace/profiling/collector/_memalloc_heap.c                            @DataDog/profiling-python
ddtrace/profiling/collector/_memalloc_heap.h                            @DataDog/profiling-python
ddtrace/profiling/collector/_memalloc_heap_map.c                        @DataDog/profiling-python
ddtrace/profiling/collector/_memalloc_heap_map.h                        @DataDog/profiling-python
ddtrace/profiling/collector/_memalloc_reentrant.h                       @DataDog/profiling-python
ddtrace/profiling/collector/_memalloc_tb.c                              @DataDog/profiling-python


github-actions Bot commented Apr 30, 2025

Bootstrap import analysis

Comparison of import times between this PR and base.

Summary

The average import time from this PR is: 244 ± 3 ms.

The average import time from base is: 246 ± 3 ms.

The import time difference between this PR and base is: -1.7 ± 0.1 ms.

Import time breakdown

The following import paths have shrunk:

ddtrace.auto 1.986 ms (0.81%)
ddtrace.bootstrap.sitecustomize 1.311 ms (0.54%)
ddtrace.bootstrap.preload 1.311 ms (0.54%)
ddtrace.internal.remoteconfig.client 0.649 ms (0.27%)
ddtrace 0.675 ms (0.28%)
ddtrace.internal._unpatched 0.023 ms (0.01%)

@nsrip-dd nsrip-dd force-pushed the nick.ripley/use-gil-as-a-lock-test branch from 9b408da to 5e2cba5 Compare May 14, 2025 18:44
@nsrip-dd nsrip-dd changed the title wip: use the GIL as a lock refactor(profiling): remove redundant locks from memalloc May 14, 2025
@nsrip-dd nsrip-dd force-pushed the nick.ripley/use-gil-as-a-lock-test branch from 5e2cba5 to f982d8f Compare May 15, 2025 19:16

pr-commenter Bot commented May 15, 2025

Benchmarks

Benchmark execution time: 2025-06-02 18:52:06

Comparing candidate commit 62ce1a0 in PR branch nick.ripley/use-gil-as-a-lock-test with baseline commit 94a030b in branch main.

Found 3 performance improvements and 2 performance regressions. Performance is unchanged for 500 metrics; 3 metrics were unstable.

scenario:djangosimple-profiler

  • 🟩 execution_time [-2.215ms; -2.072ms] or [-12.533%; -11.723%]

scenario:djangosimple-tracer-and-profiler

  • 🟩 execution_time [-2.709ms; -2.461ms] or [-10.998%; -9.994%]

scenario:flasksimple-profiler

  • 🟩 execution_time [-182.199µs; -173.394µs] or [-8.456%; -8.047%]

scenario:iastaspectsospath-ospathsplit_aspect

  • 🟥 execution_time [+357.340ns; +587.246ns] or [+7.321%; +12.032%]

scenario:iastaspectsospath-ospathsplitext_aspect

  • 🟥 execution_time [+448.797ns; +606.484ns] or [+9.956%; +13.454%]

@nsrip-dd nsrip-dd force-pushed the nick.ripley/use-gil-as-a-lock-test branch 2 times, most recently from 5136c8c to 24e1f79 Compare May 20, 2025 15:01
@nsrip-dd nsrip-dd force-pushed the nick.ripley/use-gil-as-a-lock-test branch from 24e1f79 to 840710b Compare May 20, 2025 15:01
@nsrip-dd nsrip-dd marked this pull request as ready for review May 20, 2025 15:08
@nsrip-dd nsrip-dd requested review from a team as code owners May 20, 2025 15:08
Contributor

@taegyunkim taegyunkim left a comment


I believe this is the way to go, but feels like it's going to be harder in general to modify this part of the code. We need to think really hard about what operations we can safely do. Hope they would be surfaced easily by our tests but as always I'd want to see more of extreme test cases that we could imagine in general. I'd be happy to discuss together on those and this PR looks good as is.

Contributor Author

@nsrip-dd nsrip-dd left a comment


> I believe this is the way to go, but feels like it's going to be harder in general to modify this part of the code. We need to think really hard about what operations we can safely do. Hope they would be surfaced easily by our tests but as always I'd want to see more of extreme test cases that we could imagine in general. I'd be happy to discuss together on those and this PR looks good as is.

Thanks! If you have ideas for how to test this better I'd definitely be happy to hear them. Especially if we can get something that is reasonable to run as part of our CI. And also agreed this will be tricky to modify... I'm wondering if we can add some more annotations/debug mode to make it clear what the order of different operations are supposed to be?

Comment thread ddtrace/profiling/collector/_memalloc.c
@taegyunkim taegyunkim added the Profiling Continous Profling label and removed the proposal label May 21, 2025
Parts where the GIL must be held and not released are pulled into their
own functions, with helpers to assert that the GIL is held and that
critical sections are maintained.
nsrip-dd added 2 commits May 29, 2025 15:43
Address a few TODOs, and use the heap_tracker parameter consistently in
memalloc_heap_add_sample_no_cpython
@nsrip-dd nsrip-dd merged commit 1ed7332 into main Jun 2, 2025
332 checks passed
@nsrip-dd nsrip-dd deleted the nick.ripley/use-gil-as-a-lock-test branch June 2, 2025 19:07

Labels

Profiling Continous Profling
