
[don't merge] #31802 with additional patches #90

Closed
Sjors wants to merge 17 commits into 2025/02/ipc-yea from 2025/06/ipc-yea-plus

Conversation


Sjors commented Jun 24, 2025

This is a PR to run CI against bitcoin#31802 with additional patches.

Current update procedure (note to self):

git fetch ryanofsky
git reset --hard 2025/02/ipc-yea
git merge ryanofsky/pr/ipc-stop
# If there are any libmultiprocess commits to include:
# https://github.com/bitcoin-core/libmultiprocess/pull/186
# HASH=... 
# git fetch libmultiprocess $HASH
# git subtree merge --prefix=src/ipc/libmultiprocess --squash $HASH
# N= # commits minus 1
# git cherry-pick sjors/2025/06/ipc-yea-plus~N^..sjors/2025/06/ipc-yea-plus


Sjors commented Jun 24, 2025

Other than the linter failure, some of these failures may be spurious because my CI machine needs a reboot. I'll re-run them.


Sjors commented Jun 24, 2025

The remaining test failures do seem real, but perhaps they're a result of not fully including bitcoin#32345?

Sjors force-pushed the 2025/06/ipc-yea-plus branch from 35f9fa4 to 8acbe58 (June 25, 2025 08:23)

Sjors commented Jun 25, 2025

I'm now using the full bitcoin#32345 and updated bitcoin-core/libmultiprocess#184 with its linter fix.

I used git merge for the first PR branch, which is why there are commits from master here.


Sjors commented Jun 25, 2025

@ryanofsky the previous releases build fails with:

[10:28:02.512] [ 35%] Building CXX object src/ipc/CMakeFiles/bitcoin_ipc.dir/interfaces.cpp.o
[10:28:02.513] cd /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc && /usr/bin/ccache /usr/bin/g++-11 -DABORT_ON_FAILED_ASSUME -DDEBUG -DDEBUG_LOCKCONTENTION -DDEBUG_LOCKORDER -DKJ_USE_FIBERS -DRPC_DOC_CHECK -I/ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src -I/ci_container_base/src -I/ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu -I/ci_container_base/src/ipc/libmultiprocess/include -I/ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/include -I/ci_container_base/src/univalue/include -isystem /ci_container_base/depends/x86_64-pc-linux-gnu/include -funsigned-char -g2 -O2 -fPIC -fvisibility=hidden -pthread -fno-extended-identifiers -fdebug-prefix-map=/ci_container_base/src=. -fmacro-prefix-map=/ci_container_base/src=. -fstack-reuse=none -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=3 -Wstack-protector -fstack-protector-all -fcf-protection=full -fstack-clash-protection -Werror -Wall -Wextra -Wformat -Wformat-security -Wvla -Wredundant-decls -Wdate-time -Wduplicated-branches -Wduplicated-cond -Wlogical-op -Woverloaded-virtual -Wsuggest-override -Wimplicit-fallthrough -Wunreachable-code -Wundef -Wno-unused-parameter -std=c++20 -MD -MT src/ipc/CMakeFiles/bitcoin_ipc.dir/interfaces.cpp.o -MF CMakeFiles/bitcoin_ipc.dir/interfaces.cpp.o.d -o CMakeFiles/bitcoin_ipc.dir/interfaces.cpp.o -c /ci_container_base/src/ipc/interfaces.cpp -DBOOST_MULTI_INDEX_ENABLE_SAFE_MODE 
[10:28:06.739] /ci_container_base/src/ipc/interfaces.cpp: In function ‘void ipc::{anonymous}::HandleCtrlC(int)’:
[10:28:06.739] /ci_container_base/src/ipc/interfaces.cpp:35:16: error: ignoring return value of ‘ssize_t write(int, const void*, size_t)’ declared with attribute ‘warn_unused_result’ [-Werror=unused-result]
[10:28:06.739]    35 |     (void)write(STDOUT_FILENO, g_ignore_ctrl_c.data(), g_ignore_ctrl_c.size());
[10:28:06.739]       |           ~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[10:28:06.739] cc1plus: all warnings being treated as errors

https://cirrus-ci.com/task/5168529512595456

The linter found new things to complain about:

[08:29:25.285] Missing "export LC_ALL=C" (to avoid locale dependence) as first non-comment non-empty line in src/ipc/libmultiprocess/ci/configs/default.sh
[08:29:25.285] Missing "export LC_ALL=C" (to avoid locale dependence) as first non-comment non-empty line in src/ipc/libmultiprocess/ci/configs/llvm.sh
[08:29:25.285] Missing "export LC_ALL=C" (to avoid locale dependence) as first non-comment non-empty line in src/ipc/libmultiprocess/ci/scripts/ci.sh
[08:29:25.285] Missing "export LC_ALL=C" (to avoid locale dependence) as first non-comment non-empty line in src/ipc/libmultiprocess/ci/scripts/run.sh
[08:29:25.290] ^---- ⚠️ Failure generated from lint-shell-locale.py

https://cirrus-ci.com/task/5520373233483776

Here's a segfault:

[10:33:38.485] 
./test/ipc_tests.cpp(12): Entering test case "ipc_tests"
[10:33:38.485] 
[10:33:40.585] 139/145 Test #140: wallet_crypto_tests ..................   Passed    4.80 sec
[10:33:41.420] 140/145 Test #115: txrequest_tests ......................   Passed   17.80 sec
[10:33:48.876] 141/145 Test #141: wallet_tests .........................   Passed   11.52 sec
[10:33:52.230] 142/145 Test  #81: random_tests .........................   Passed   40.58 sec
[10:34:31.152] 143/145 Test #131: coinselector_tests ...................   Passed   57.29 sec
[10:35:06.435] 144/145 Test  #10: bench_sanity_check ...................   Passed  134.67 sec
[10:35:24.840] 145/145 Test  #33: coins_tests ..........................   Passed  149.20 sec
[10:35:24.843] 
[10:35:24.843] 99% tests passed, 1 tests failed out of 145
[10:35:24.844] 
[10:35:24.844] Total Test time (real) = 153.12 sec
[10:35:24.844] 
[10:35:24.844] The following tests FAILED:
[10:35:24.845] 	145 - ipc_tests (SEGFAULT)
[10:35:24.845] Errors while running CTest
[10:35:24.885] 
[10:35:24.885] Exit status: 8

https://cirrus-ci.com/task/5731479466016768

@ryanofsky

Thanks for testing and finding all those problems. I guess this is an early lesson that bitcoin-core/libmultiprocess#184 will not be sufficient to catch many problems and is not a substitute for running the full set of Bitcoin CI jobs.


Sjors commented Jun 27, 2025

Trying with bitcoin-core/libmultiprocess#186 now (and bitcoin#32345 minus the subtree update).


Same complaint from the linter as above:

[06:35:37.929] Missing "export LC_ALL=C" (to avoid locale dependence) as first non-comment non-empty line in src/ipc/libmultiprocess/ci/configs/default.sh
...

I tested locally that bitcoin-core/libmultiprocess@8ac8b4c is enough to quiet the linter.


CentOS still hits a segfault in ipc_tests: https://cirrus-ci.com/task/6110371448094720

The previous releases build has a warning about an unused result:

[08:59:33.978] cd /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc && /usr/bin/ccache /usr/bin/g++-11 -DABORT_ON_FAILED_ASSUME -DDEBUG -DDEBUG_LOCKCONTENTION -DDEBUG_LOCKORDER -DKJ_USE_FIBERS -DRPC_DOC_CHECK -I/ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src -I/ci_container_base/src -I/ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu -I/ci_container_base/src/ipc/libmultiprocess/include -I/ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/include -I/ci_container_base/src/univalue/include -isystem /ci_container_base/depends/x86_64-pc-linux-gnu/include -funsigned-char -g2 -O2 -fPIC -fvisibility=hidden -pthread -fno-extended-identifiers -fdebug-prefix-map=/ci_container_base/src=. -fmacro-prefix-map=/ci_container_base/src=. -fstack-reuse=none -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=3 -Wstack-protector -fstack-protector-all -fcf-protection=full -fstack-clash-protection -Werror -Wall -Wextra -Wformat -Wformat-security -Wvla -Wredundant-decls -Wdate-time -Wduplicated-branches -Wduplicated-cond -Wlogical-op -Woverloaded-virtual -Wsuggest-override -Wimplicit-fallthrough -Wunreachable-code -Wundef -Wno-unused-parameter -std=c++20 -MD -MT src/ipc/CMakeFiles/bitcoin_ipc.dir/interfaces.cpp.o -MF CMakeFiles/bitcoin_ipc.dir/interfaces.cpp.o.d -o CMakeFiles/bitcoin_ipc.dir/interfaces.cpp.o -c /ci_container_base/src/ipc/interfaces.cpp -DBOOST_MULTI_INDEX_ENABLE_SAFE_MODE 
[08:59:40.173] /ci_container_base/src/ipc/interfaces.cpp: In function ‘void ipc::{anonymous}::HandleCtrlC(int)’:
[08:59:40.173] /ci_container_base/src/ipc/interfaces.cpp:35:16: error: ignoring return value of ‘ssize_t write(int, const void*, size_t)’ declared with attribute ‘warn_unused_result’ [-Werror=unused-result]
[08:59:40.173]    35 |     (void)write(STDOUT_FILENO, g_ignore_ctrl_c.data(), g_ignore_ctrl_c.size());
[08:59:40.173]       |           ~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[08:59:40.173] cc1plus: all warnings being treated as errors

https://cirrus-ci.com/task/5547421494673408

Tidy complains of a deprecated header:

[08:57:34.287] [427/707][9.3s] clang-tidy-20 -p=/ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu -quiet -load=/tidy-build/libbitcoin-tidy.so /ci_container_base/src/ipc/interfaces.cpp
[08:57:34.287] /ci_container_base/src/ipc/interfaces.cpp:21:10: error: inclusion of deprecated C++ header 'signal.h'; consider using 'csignal' instead [modernize-deprecated-headers,-warnings-as-errors]
[08:57:34.287]    21 | #include <signal.h>
[08:57:34.287]       |          ^~~~~~~~~~
[08:57:34.287]       |          <csignal>

https://cirrus-ci.com/task/6391846424805376

TSAN found another data race: https://cirrus-ci.com/task/6673321401516032?logs=ci#L3648


Sjors commented Jun 27, 2025

Pushed fixes for some of the previous failures.

Still getting a data race (which I didn't try to fix): https://cirrus-ci.com/task/4902764083412992?logs=ci#L3539. It seems to be between m_sync_cleanup_fns in Connection::~Connection and server_connection->onDisconnect([&] { server_connection.reset(); }); in the test.

Previous releases now builds fine, but segfaults during the ipc test: https://cirrus-ci.com/task/6310138966966272

UndefinedBehaviorSanitizer: null-pointer-use:

 6/145 Test   #5: mptest ...............................***Failed    1.01 sec
[ TEST ] test.cpp:108: Call FooInterface methods
LOG0: {mptest-8187/mptest-8187} IPC client first request from current thread, constructing waiter
LOG0: {mptest-8187/mptest-8187} IPC client send FooInterface.add$Params (a = 1, b = 2)
LOG0: {mptest-8187/mptest-8196} IPC server recv request  #1 FooInterface.add$Params (a = 1, b = 2)
LOG0: {mptest-8187/mptest-8196} IPC server send response #1 FooInterface.add$Results (result = 3)
LOG0: {mptest-8187/mptest-8187} IPC client recv FooInterface.add$Results (result = 3)
LOG0: {mptest-8187/mptest-8187} IPC client send FooInterface.pass$Params (arg = (name = "name", setint = [1, 2], vbool = [false, true, false]))
LOG0: {mptest-8187/mptest-8196} IPC server recv request  #2 FooInterface.pass$Params (arg = (name = "name", setint = [1, 2], vbool = [false, true, false]))
LOG0: {mptest-8187/mptest-8196} IPC server send response #2 FooInterface.pass$Results (result = (name = "name", setint = [1, 2], vbool = [false, true, false]))
LOG0: {mptest-8187/mptest-8187} IPC client recv FooInterface.pass$Results (result = (name = "name", setint = [1, 2], vbool = [false, true, false]))
LOG0: {mptest-8187/mptest-8187} IPC client send FooInterface.raise$Params (arg = (name = "name", setint = [1, 2], vbool = [false, true, false]))
LOG0: {mptest-8187/mptest-8196} IPC server recv request  #3 FooInterface.raise$Params (arg = (name = "name", setint = [1, 2], vbool = [false, true, false]))
LOG0: {mptest-8187/mptest-8196} IPC server send response #3 FooInterface.raise$Results (error = (name = "name", setint = [1, 2], vbool = [false, true, false]))
LOG0: {mptest-8187/mptest-8187} IPC client recv FooInterface.raise$Results (error = (name = "name", setint = [1, 2], vbool = [false, true, false]))
LOG0: {mptest-8187/mptest-8187} IPC client send FooInterface.initThreadMap$Params (threadMap = <external capability>)
LOG0: {mptest-8187/mptest-8198 (from mptest-8187/mptest-8187)} IPC client send FooCallback.destroy$Params (context = (thread = <external capability>, callbackThread = <external capability>))
LOG0: {mptest-8187/mptest-8196} IPC server recv request  #23 FooCallback.destroy$Params (context = (thread = <external capability>, callbackThread = <external capability>))
LOG0: {mptest-8187/mptest-8196} IPC server post request  #23 {mptest-8187/mptest-8187}
LOG0: {mptest-8187/mptest-8196} IPC server send response #23 FooCallback.destroy$Results ()
LOG0: {mptest-8187/mptest-8198 (from mptest-8187/mptest-8187)} IPC client recv FooCallback.destroy$Results ()
LOG0: {mptest-8187/mptest-8196} IPC server send response #21 FooInterface.callbackExtended$Results (result = 12)
LOG0: {mptest-8187/mptest-8187} IPC client recv FooInterface.callbackExtended$Results (result = 12)
LOG0: {mptest-8187/mptest-8196} IPC server destroy N2mp11ProxyServerINS_4test8messages16ExtendedCallbackEEE
LOG0: {mptest-8187/mptest-8187} IPC client send FooInterface.passCustom$Params (arg = (v1 = "v1", v2 = 5))
LOG0: {mptest-8187/mptest-8196} IPC server recv request  #24 FooInterface.passCustom$Params (arg = (v1 = "v1", v2 = 5))
LOG0: {mptest-8187/mptest-8196} IPC server send response #24 FooInterface.passCustom$Results (result = (v1 = "v1", v2 = 5))
LOG0: {mptest-8187/mptest-8187} IPC client recv FooInterface.passCustom$Results (result = (v1 = "v1", v2 = 5))
LOG0: {mptest-8187/mptest-8187} IPC client send FooInterface.passEmpty$Params (arg = ())
LOG0: {mptest-8187/mptest-8196} IPC server recv request  #25 FooInterface.passEmpty$Params (arg = ())
LOG0: {mptest-8187/mptest-8196} IPC server send response #25 FooInterface.passEmpty$Results (result = ())
LOG0: {mptest-8187/mptest-8187} IPC client recv FooInterface.passEmpty$Results (result = ())
LOG0: {mptest-8187/mptest-8187} IPC client send FooInterface.passMessage$Params (arg = (message = "init build"))
LOG0: {mptest-8187/mptest-8196} IPC server recv request  #26 FooInterface.passMessage$Params (arg = (message = "init build"))
LOG0: {mptest-8187/mptest-8196} IPC server send response #26 FooInterface.passMessage$Results (result = (message = "init build read call build"))
LOG0: {mptest-8187/mptest-8187} IPC client recv FooInterface.passMessage$Results (result = (message = "init build read call build"))
LOG0: {mptest-8187/mptest-8187} IPC client send FooInterface.passMutable$Params (arg = (message = "init build"))
LOG0: {mptest-8187/mptest-8196} IPC server recv request  #27 FooInterface.passMutable$Params (arg = (message = "init build"))
LOG0: {mptest-8187/mptest-8196} IPC server send response #27 FooInterface.passMutable$Results (arg = (message = "init build pass call return"))
LOG0: {mptest-8187/mptest-8187} IPC client recv FooInterface.passMutable$Results (arg = (message = "init build pass call return"))
LOG0: {mptest-8187/mptest-8187} IPC client send FooInterface.passFn$Params (context = (thread = <external capability>, callbackThread = <external capability>), fn = <external capability>)
LOG0: {mptest-8187/mptest-8196} IPC server recv request  #28 FooInterface.passFn$Params (context = (thread = <external capability>, callbackThread = <external capability>), fn = <external capability>)
LOG0: {mptest-8187/mptest-8196} IPC server post request  #28 {mptest-8187/mptest-8198 (from mptest-8187/mptest-8187)}
LOG0: {mptest-8187/mptest-8198 (from mptest-8187/mptest-8187)} IPC client send FooFn.call$Params (context = (thread = <external capability>, callbackThread = <external capability>))
LOG0: {mptest-8187/mptest-8196} IPC server recv request  #29 FooFn.call$Params (context = (thread = <external capability>, callbackThread = <external capability>))
LOG0: {mptest-8187/mptest-8196} IPC server post request  #29 {mptest-8187/mptest-8187}
LOG0: {mptest-8187/mptest-8196} IPC server send response #29 FooFn.call$Results (result = 10)
LOG0: {mptest-8187/mptest-8198 (from mptest-8187/mptest-8187)} IPC client recv FooFn.call$Results (result = 10)
LOG0: {mptest-8187/mptest-8198 (from mptest-8187/mptest-8187)} IPC client destroy N2mp11ProxyClientINS_4test8messages5FooFnEEE
LOG0: {mptest-8187/mptest-8198 (from mptest-8187/mptest-8187)} IPC client send FooFn.destroy$Params (context = (thread = <external capability>, callbackThread = <external capability>))
LOG0: {mptest-8187/mptest-8196} IPC server recv request  #30 FooFn.destroy$Params (context = (thread = <external capability>, callbackThread = <external capability>))
LOG0: {mptest-8187/mptest-8196} IPC server post request  #30 {mptest-8187/mptest-8187}
LOG0: {mptest-8187/mptest-8196} IPC server send response #30 FooFn.destroy$Results ()
LOG0: {mptest-8187/mptest-8198 (from mptest-8187/mptest-8187)} IPC client recv FooFn.destroy$Results ()
LOG0: {mptest-8187/mptest-8196} IPC server send response #28 FooInterface.passFn$Results (result = 10)
LOG0: {mptest-8187/mptest-8187} IPC client recv FooInterface.passFn$[ PASS ] test.cpp:108: Call FooInterface methods (94576 μs)
[ TEST ] test.cpp:195: Call IPC method after client connection is closed
/home/runner/work/_temp/src/ipc/libmultiprocess/include/mp/proxy-types.h:612:40: runtime error: reference binding to null pointer of type 'Connection'
    #0 0x55f6c823b850 in void mp::clientInvoke<mp::ProxyClient<mp::test::messages::FooInterface>, capnp::Request<mp::test::messages::FooInterface::AddParams, mp::test::messages::FooInterface::AddResults> (mp::test::messages::FooInterface::Client::*)(kj::Maybe<capnp::MessageSize>), mp::ClientParam<mp::Accessor<mp::foo_fields::A, 1>, int>, mp::ClientParam<mp::Accessor<mp::foo_fields::B, 1>, int>, mp::ClientParam<mp::Accessor<mp::foo_fields::Result, 2>, int&>>(mp::ProxyClient<mp::test::messages::FooInterface>&, capnp::Request<mp::test::messages::FooInterface::AddParams, mp::test::messages::FooInterface::AddResults> (mp::test::messages::FooInterface::Client::* const&)(kj::Maybe<capnp::MessageSize>), mp::ClientParam<mp::Accessor<mp::foo_fields::A, 1>, int>&&, mp::ClientParam<mp::Accessor<mp::foo_fields::B, 1>, int>&&, mp::ClientParam<mp::Accessor<mp::foo_fields::Result, 2>, int&>&&) /home/runner/work/_temp/build-asan/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/include/mp/proxy-types.h:612:25
    #1 0x55f6c823722a in mp::ProxyClient<mp::test::messages::FooInterface>::add(int, int) /home/runner/work/_temp/build-asan/src/ipc/libmultiprocess/test/mp/test/foo.capnp.proxy-client.c++:20:5
    #2 0x55f6c81f69df in mp::test::TestCase195::run() /home/runner/work/_temp/build-asan/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/test/mp/test/test.cpp:204:14
    #3 0x7f5efeb7d0e1  (/lib/x86_64-linux-gnu/libkj-test-1.0.1.so+0x50e1) (BuildId: 2ff7f524274168e50347a2d6dd423f273ae8d268)
    #4 0x7f5efeb7d657  (/lib/x86_64-linux-gnu/libkj-test-1.0.1.so+0x5657) (BuildId: 2ff7f524274168e50347a2d6dd423f273ae8d268)
    #5 0x7f5efe90c37f in kj::MainBuilder::MainImpl::operator()(kj::StringPtr, kj::ArrayPtr<kj::StringPtr const>) (/lib/x86_64-linux-gnu/libkj-1.0.1.so+0x5537f) (BuildId: 4b52c0e2756bcb53e58e705fcb10bab8f63f24fd)
    #6 0x7f5efe907499 in kj::runMainAndExit(kj::ProcessContext&, kj::Function<void (kj::StringPtr, kj::ArrayPtr<kj::StringPtr const>)>&&, int, char**) (/lib/x86_64-linux-gnu/libkj-1.0.1.so+0x50499) (BuildId: 4b52c0e2756bcb53e58e705fcb10bab8f63f24fd)
    #7 0x7f5efeb7bfcb in main (/lib/x86_64-linux-gnu/libkj-test-1.0.1.so+0x3fcb) (BuildId: 2ff7f524274168e50347a2d6dd423f273ae8d268)
    #8 0x7f5efe3251c9  (/lib/x86_64-linux-gnu/libc.so.6+0x2a1c9) (BuildId: 42c84c92e6f98126b3e2230ebfdead22c235b667)
    #9 0x7f5efe32528a in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x2a28a) (BuildId: 42c84c92e6f98126b3e2230ebfdead22c235b667)
    #10 0x55f6c8108604 in _start (/home/runner/work/_temp/build-asan/src/ipc/libmultiprocess/test/mptest+0x11f604) (BuildId: 1d9786de21dd346184acb11afd6cab2561d8182f)

SUMMARY: UndefinedBehaviorSanitizer: null-pointer-use /home/runner/work/_temp/src/ipc/libmultiprocess/include/mp/proxy-types.h:612:40

https://github.com/Sjors/bitcoin/actions/runs/15921104847/job/44907875067?pr=90

ARM was just a timeout, which happens a lot on my CI machine.

ryanofsky added a commit to bitcoin-core/libmultiprocess that referenced this pull request Jun 27, 2025
…ence.

3a6db38 ci: rename configs to .bash (Sjors Provoost)
401e0ce ci: add copyright to bash scripts (Sjors Provoost)
e956467 ci: export LC_ALL (Sjors Provoost)

Pull request description:

  Prevents linter issues like reported here: Sjors/bitcoin#90 (comment)

  But probably also just a good idea.

  Also makes the use of `#!/usr/bin/env bash` consistent and adds a copyright header (mostly for aesthetic reasons).

ACKs for top commit:
  ryanofsky:
    Code review ACK 3a6db38. Thanks!

Tree-SHA512: de4a3fe62126f0f37e9b33c6e443ee07b2968ac8df18b7d91ce482ce63aeab896b216a7efcc638945ef4f83b4429471433c3515b5bbe744fb238bec08792f7af
Sjors force-pushed the 2025/06/ipc-yea-plus branch from 35caa56 to f3a5285 (June 28, 2025 09:23)

Sjors commented Jun 28, 2025

Updated to use the latest bitcoin-core/libmultiprocess@258b83c

Sjors force-pushed the 2025/06/ipc-yea-plus branch from f3a5285 to 95de562 (June 28, 2025 09:26)

Sjors commented Jun 28, 2025

CentOS still happily segfaults: https://cirrus-ci.com/task/5925562629226496

As does previous releases: https://cirrus-ci.com/task/5362612675805184

MSan complains of MemorySanitizer: use-of-uninitialized-value: https://cirrus-ci.com/task/4658925234028544

As does ASan: SUMMARY: UndefinedBehaviorSanitizer: null-pointer-use /home/runner/work/_temp/src/ipc/libmultiprocess/include/mp/proxy-types.h:612:40
https://github.com/Sjors/bitcoin/actions/runs/15942739184/job/44973004692?pr=90

@ryanofsky

MSan complains of MemorySanitizer: use-of-uninitialized-value: https://cirrus-ci.com/task/4658925234028544

For this I think the following fix should work (not sure about the other failures yet, but they might be related):


--- a/test/mp/test/test.cpp
+++ b/test/mp/test/test.cpp
@@ -268,12 +268,12 @@ KJ_TEST("Calling IPC method, disconnecting and blocking during the call")
     // ProxyServer objects associated with the connection. Having an in-progress
     // RPC call requires keeping the ProxyServer longer.
 
+    std::promise<void> signal;
     TestSetup setup{/*client_owns_connection=*/false};
     ProxyClient<messages::FooInterface>* foo = setup.client.get();
     KJ_EXPECT(foo->add(1, 2) == 3);
 
     foo->initThreadMap();
-    std::promise<void> signal;
     setup.server->m_impl->m_fn = [&] {
         EventLoopRef loop{*setup.server->m_context.loop};
         setup.client_disconnect();
@@ -289,6 +289,11 @@ KJ_TEST("Calling IPC method, disconnecting and blocking during the call")
     }
     KJ_EXPECT(disconnected);
 
+    // Now that the disconnect has been detected, set signal allowing the
+    // callFnAsync() IPC call to return. Since signalling may not wake up the
+    // thread right away, it is important for the signal variable to be declared
+    // *before* the TestSetup variable so is available during the entire
+    // TestSetup shutdown process.
     signal.set_value();
 }
 

@ryanofsky

Actually the other failures don't look related. The previous releases and CentOS jobs are failing with a segfault in ipc_tests, not mptest, so they are not the same unit tests. I will probably need to reproduce these failures locally in podman and get a stack trace to debug them.

The UndefinedBehaviorSanitizer failure is also calling out technically undefined behavior that should be fixed, but not behavior that should cause any problems in practice. It just catches code initializing an unused reference variable from a null pointer.

Probably also would be good to add more CI jobs in libmultiprocess to test the other sanitizers.

So lots more to do here.

@ryanofsky

The following should fix the UndefinedBehaviorSanitizer failure:


--- a/include/mp/proxy-types.h
+++ b/include/mp/proxy-types.h
@@ -609,42 +609,44 @@ void clientInvoke(ProxyClient& proxy_client, const GetRequest& get_request, Fiel
             << "{" << g_thread_context.thread_name
             << "} IPC client first request from current thread, constructing waiter";
     }
-    ClientInvokeContext invoke_context{*proxy_client.m_context.connection, g_thread_context};
+    ThreadContext& thread_context{g_thread_context};
+    std::optional<ClientInvokeContext> invoke_context;
     std::exception_ptr exception;
     std::string kj_exception;
     bool done = false;
     const char* disconnected = nullptr;
     proxy_client.m_context.loop->sync([&]() {
         if (!proxy_client.m_context.connection) {
-            const std::unique_lock<std::mutex> lock(invoke_context.thread_context.waiter->m_mutex);
+            const std::unique_lock<std::mutex> lock(thread_context.waiter->m_mutex);
             done = true;
             disconnected = "IPC client method called after disconnect.";
-            invoke_context.thread_context.waiter->m_cv.notify_all();
+            thread_context.waiter->m_cv.notify_all();
             return;
         }
 
         auto request = (proxy_client.m_client.*get_request)(nullptr);
         using Request = CapRequestTraits<decltype(request)>;
         using FieldList = typename ProxyClientMethodTraits<typename Request::Params>::Fields;
-        IterateFields().handleChain(invoke_context, request, FieldList(), typename FieldObjs::BuildParams{&fields}...);
+        invoke_context.emplace(*proxy_client.m_context.connection, thread_context);
+        IterateFields().handleChain(*invoke_context, request, FieldList(), typename FieldObjs::BuildParams{&fields}...);
         proxy_client.m_context.loop->logPlain()
-            << "{" << invoke_context.thread_context.thread_name << "} IPC client send "
+            << "{" << thread_context.thread_name << "} IPC client send "
             << TypeName<typename Request::Params>() << " " << LogEscape(request.toString());
 
         proxy_client.m_context.loop->m_task_set->add(request.send().then(
             [&](::capnp::Response<typename Request::Results>&& response) {
                 proxy_client.m_context.loop->logPlain()
-                    << "{" << invoke_context.thread_context.thread_name << "} IPC client recv "
+                    << "{" << thread_context.thread_name << "} IPC client recv "
                     << TypeName<typename Request::Results>() << " " << LogEscape(response.toString());
                 try {
                     IterateFields().handleChain(
-                        invoke_context, response, FieldList(), typename FieldObjs::ReadResults{&fields}...);
+                        *invoke_context, response, FieldList(), typename FieldObjs::ReadResults{&fields}...);
                 } catch (...) {
                     exception = std::current_exception();
                 }
-                const std::unique_lock<std::mutex> lock(invoke_context.thread_context.waiter->m_mutex);
+                const std::unique_lock<std::mutex> lock(thread_context.waiter->m_mutex);
                 done = true;
-                invoke_context.thread_context.waiter->m_cv.notify_all();
+                thread_context.waiter->m_cv.notify_all();
             },
             [&](const ::kj::Exception& e) {
                 if (e.getType() == ::kj::Exception::Type::DISCONNECTED) {
@@ -652,16 +654,16 @@ void clientInvoke(ProxyClient& proxy_client, const GetRequest& get_request, Fiel
                 } else {
                     kj_exception = kj::str("kj::Exception: ", e).cStr();
                     proxy_client.m_context.loop->logPlain()
-                        << "{" << invoke_context.thread_context.thread_name << "} IPC client exception " << kj_exception;
+                        << "{" << thread_context.thread_name << "} IPC client exception " << kj_exception;
                 }
-                const std::unique_lock<std::mutex> lock(invoke_context.thread_context.waiter->m_mutex);
+                const std::unique_lock<std::mutex> lock(thread_context.waiter->m_mutex);
                 done = true;
-                invoke_context.thread_context.waiter->m_cv.notify_all();
+                thread_context.waiter->m_cv.notify_all();
             }));
     });
 
-    std::unique_lock<std::mutex> lock(invoke_context.thread_context.waiter->m_mutex);
-    invoke_context.thread_context.waiter->wait(lock, [&done]() { return done; });
+    std::unique_lock<std::mutex> lock(thread_context.waiter->m_mutex);
+    thread_context.waiter->wait(lock, [&done]() { return done; });
     if (exception) std::rethrow_exception(exception);
     if (!kj_exception.empty()) proxy_client.m_context.loop->raise() << kj_exception;
     if (disconnected) proxy_client.m_context.loop->raise() << disconnected;

Sjors force-pushed the 2025/06/ipc-yea-plus branch from 95de562 to 9f9c8b1 (June 30, 2025 06:16)

Sjors commented Jun 30, 2025

Updated to the latest commit from bitcoin-core/libmultiprocess@71d3107 and included the above two patches.

Sjors force-pushed the 2025/06/ipc-yea-plus branch from 9f9c8b1 to 2f22406 (June 30, 2025 07:08)

Sjors commented Jun 30, 2025

@ryanofsky

I reproduced centos ipc_tests crash locally in podman, and could get a stack trace and fix the bug pretty easily. The fix is:

--- a/src/test/ipc_test.cpp
+++ b/src/test/ipc_test.cpp
@@ -62,7 +62,8 @@ void IpcPipeTest()
         auto connection_client = std::make_unique<mp::Connection>(loop, kj::mv(pipe.ends[0]));
         auto foo_client = std::make_unique<mp::ProxyClient<gen::FooInterface>>(
             connection_client->m_rpc_system->bootstrap(mp::ServerVatId().vat_id).castAs<gen::FooInterface>(),
-            connection_client.release(), /* destroy_connection= */ true);
+            connection_client.get(), /* destroy_connection= */ true);
+        connection_client.release();
         foo_promise.set_value(std::move(foo_client));
 
         auto connection_server = std::make_unique<mp::Connection>(loop, kj::mv(pipe.ends[1]), [&](mp::Connection& connection) {

(The cause is just connection_client.release() causing connection_client->... to segfault, depending on the order in which the function arguments are evaluated, which varies by compiler.)

I'll try to reproduce the mptest timeout next.

@Sjors
Copy link
Copy Markdown
Owner Author

Sjors commented Jun 30, 2025


ryanofsky commented Jun 30, 2025

Thanks! The following patch should fix mptest timeouts:


--- a/src/ipc/libmultiprocess/include/mp/type-context.h
+++ b/src/ipc/libmultiprocess/include/mp/type-context.h
@@ -69,7 +69,6 @@ auto PassField(Priority<1>, TypeList<>, ServerContext& server_context, const Fn&
                 const auto& params = call_context.getParams();
                 Context::Reader context_arg = Accessor::get(params);
                 ServerContext server_context{server, call_context, req};
-                bool disconnected{false};
                 {
                     // Before invoking the function, store a reference to the
                     // callbackThread provided by the client in the
@@ -101,7 +100,7 @@ auto PassField(Priority<1>, TypeList<>, ServerContext& server_context, const Fn&
                     // recursive call (IPC call calling back to the caller which
                     // makes another IPC call), so avoid modifying the map.
                     const bool erase_thread{inserted};
-                    KJ_DEFER({
+                    KJ_DEFER(if (erase_thread) {
                         std::unique_lock<std::mutex> lock(thread_context.waiter->m_mutex);
                         // Call erase here with a Connection* argument instead
                         // of an iterator argument, because the `request_thread`
@@ -112,24 +111,10 @@ auto PassField(Priority<1>, TypeList<>, ServerContext& server_context, const Fn&
                         // erases the thread from the map, and also because the
                         // ProxyServer<Thread> destructor calls
                         // request_threads.clear().
-                        if (erase_thread) {
-                            disconnected = !request_threads.erase(server.m_context.connection);
-                        } else {
-                            disconnected = !request_threads.count(server.m_context.connection);
-                        }
+                        request_threads.erase(server.m_context.connection);
                     });
                     fn.invoke(server_context, args...);
                 }
-                if (disconnected) {
-                    // If disconnected is true, the Connection object was
-                    // destroyed during the method call. Deal with this by
-                    // returning without ever fulfilling the promise, which will
-                    // cause the ProxyServer object to leak. This is not ideal,
-                    // but fixing the leak will require nontrivial code changes
-                    // because there is a lot of code assuming ProxyServer
-                    // objects are destroyed before Connection objects.
-                    return;
-                }
                 KJ_IF_MAYBE(exception, kj::runCatchingExceptions([&]() {
                     server.m_context.loop->sync([&] {
                         auto fulfiller_dispose = kj::mv(fulfiller);

I was able to reproduce the problem somewhat reliably by running the "native_asan" CI locally in podman and running mptest in a loop, then debugging it by getting a stack trace at the point of the hang. I was later able to reproduce the bug much more reliably by adding a sleep call to the "disconnecting and blocking" test:

--- a/test/mp/test/test.cpp
+++ b/test/mp/test/test.cpp
@@ -278,6 +278,7 @@ KJ_TEST("Calling IPC method, disconnecting and blocking during the call")
         EventLoopRef loop{*setup.server->m_context.loop};
         setup.client_disconnect();
         signal.get_future().get();
+        sleep(1); // Wait an extra second to test what happens if shutdown proceeds while callFnAsync is still held up.
     };
 
     bool disconnected{false};

So I want to add a new test case to cover this and prevent any regressions.

The fix above just reverts an old workaround introduced in bitcoin-core/libmultiprocess#118, which is no longer necessary after bitcoin-core/libmultiprocess@315ff53 from bitcoin-core/libmultiprocess#160.

ryanofsky added a commit to ryanofsky/libmultiprocess that referenced this pull request Jul 1, 2025
Fix MemorySanitizer: use-of-uninitialized-value error in "disconnecting and
blocking" promise::get_future call reported:

- Sjors/bitcoin#90 (comment)
- https://cirrus-ci.com/task/4658925234028544

and fixed:

- Sjors/bitcoin#90 (comment)

An issue exists to add an MSAN CI job to catch errors like this more quickly in
the future: bitcoin-core#188

Error looks like:

[ TEST ] test.cpp:251: Calling IPC method, disconnecting and blocking during the call
...
MemorySanitizer: use-of-uninitialized-value
    #0 0x7f83ecb19853 in std::__1::promise<void>::get_future() /msan/llvm-project/libcxx/src/future.cpp:154:16
    bitcoin-core#1 0x55bf563af03c in mp::test::TestCase251::run()::$_0::operator()() const /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/test/mp/test/test.cpp:280:16
    bitcoin-core#2 0x55bf563af03c in decltype(std::declval<mp::test::TestCase251::run()::$_0&>()()) std::__1::__invoke[abi:de200100]<mp::test::TestCase251::run()::$_0&>(mp::test::TestCase251::run()::$_0&) /msan/cxx_build/include/c++/v1/__type_traits/invoke.h:179:25
    bitcoin-core#3 0x55bf563af03c in void std::__1::__invoke_void_return_wrapper<void, true>::__call[abi:de200100]<mp::test::TestCase251::run()::$_0&>(mp::test::TestCase251::run()::$_0&) /msan/cxx_build/include/c++/v1/__type_traits/invoke.h:251:5
    bitcoin-core#4 0x55bf563af03c in void std::__1::__invoke_r[abi:de200100]<void, mp::test::TestCase251::run()::$_0&>(mp::test::TestCase251::run()::$_0&) /msan/cxx_build/include/c++/v1/__type_traits/invoke.h:273:10
    bitcoin-core#5 0x55bf563af03c in std::__1::__function::__alloc_func<mp::test::TestCase251::run()::$_0, std::__1::allocator<mp::test::TestCase251::run()::$_0>, void ()>::operator()[abi:de200100]() /msan/cxx_build/include/c++/v1/__functional/function.h:167:12
    bitcoin-core#6 0x55bf563af03c in std::__1::__function::__func<mp::test::TestCase251::run()::$_0, std::__1::allocator<mp::test::TestCase251::run()::$_0>, void ()>::operator()() /msan/cxx_build/include/c++/v1/__functional/function.h:319:10
    bitcoin-core#7 0x55bf565f1b25 in std::__1::__function::__value_func<void ()>::operator()[abi:de200100]() const /msan/cxx_build/include/c++/v1/__functional/function.h:436:12
    bitcoin-core#8 0x55bf565f1b25 in std::__1::function<void ()>::operator()() const /msan/cxx_build/include/c++/v1/__functional/function.h:995:10
    bitcoin-core#9 0x55bf565f1b25 in mp::test::FooImplementation::callFnAsync() /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/test/mp/test/foo.h:83:40
    bitcoin-core#10 0x55bf565f1b25 in decltype(auto) mp::ProxyMethodTraits<mp::test::messages::FooInterface::CallFnAsyncParams, void>::invoke<mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>>(mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>&) /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/include/mp/proxy.h:288:16
    bitcoin-core#11 0x55bf565f1b25 in decltype(auto) mp::ServerCall::invoke<mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>>(mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>&, mp::TypeList<>) const /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/include/mp/proxy-types.h:448:16
    bitcoin-core#12 0x55bf565f1b25 in std::__1::enable_if<std::is_same<decltype(mp::Accessor<mp::foo_fields::Context, 17>::get(fp1.call_context.getParams())), mp::Context::Reader>::value, kj::Promise<mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>::CallContext>>::type mp::PassField<mp::Accessor<mp::foo_fields::Context, 17>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>, mp::ServerCall, mp::TypeList<>>(mp::Priority<1>, mp::TypeList<>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>&, mp::ServerCall const&, mp::TypeList<>&&)::'lambda'()::operator()() /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/include/mp/type-context.h:121:24
    bitcoin-core#13 0x55bf565f12d1 in kj::Function<void ()>::Impl<std::__1::enable_if<std::is_same<decltype(mp::Accessor<mp::foo_fields::Context, 17>::get(fp1.call_context.getParams())), mp::Context::Reader>::value, kj::Promise<mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>::CallContext>>::type mp::PassField<mp::Accessor<mp::foo_fields::Context, 17>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>, mp::ServerCall, mp::TypeList<>>(mp::Priority<1>, mp::TypeList<>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>&, mp::ServerCall const&, mp::TypeList<>&&)::'lambda'()>::operator()() /ci_container_base/depends/x86_64-pc-linux-gnu/include/kj/function.h:142:14
    bitcoin-core#14 0x55bf5641125e in kj::Function<void ()>::operator()() /ci_container_base/depends/x86_64-pc-linux-gnu/include/kj/function.h:119:12
    bitcoin-core#15 0x55bf5641125e in void mp::Unlock<std::__1::unique_lock<std::__1::mutex>, kj::Function<void ()>&>(std::__1::unique_lock<std::__1::mutex>&, kj::Function<void ()>&) /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/include/mp/util.h:198:5
    bitcoin-core#16 0x55bf5667e45b in void mp::Waiter::wait<mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::'lambda'()>(std::__1::unique_lock<std::__1::mutex>&, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::'lambda'())::'lambda'()::operator()() const /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/./ipc/libmultiprocess/include/mp/proxy-io.h:294:17
    bitcoin-core#17 0x55bf5667e45b in void std::__1::condition_variable::wait<void mp::Waiter::wait<mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::'lambda'()>(std::__1::unique_lock<std::__1::mutex>&, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::'lambda'())::'lambda'()>(std::__1::unique_lock<std::__1::mutex>&, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::'lambda'()) /msan/cxx_build/include/c++/v1/__condition_variable/condition_variable.h:146:11
    bitcoin-core#18 0x55bf5667e45b in void mp::Waiter::wait<mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::'lambda'()>(std::__1::unique_lock<std::__1::mutex>&, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::'lambda'()) /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/./ipc/libmultiprocess/include/mp/proxy-io.h:285:14
    bitcoin-core#19 0x55bf5667e45b in mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/./ipc/libmultiprocess/src/mp/proxy.cpp:404:34
    bitcoin-core#20 0x55bf5667e45b in decltype(std::declval<mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0>()()) std::__1::__invoke[abi:de200100]<mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0>(mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0&&) /msan/cxx_build/include/c++/v1/__type_traits/invoke.h:179:25
    bitcoin-core#21 0x55bf5667e45b in void std::__1::__thread_execute[abi:de200100]<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0>(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0>&, std::__1::__tuple_indices<...>) /msan/cxx_build/include/c++/v1/__thread/thread.h:199:3
    bitcoin-core#22 0x55bf5667e45b in void* std::__1::__thread_proxy[abi:de200100]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0>>(void*) /msan/cxx_build/include/c++/v1/__thread/thread.h:208:3
    bitcoin-core#23 0x7f83ec69caa3  (/lib/x86_64-linux-gnu/libc.so.6+0x9caa3) (BuildId: 42c84c92e6f98126b3e2230ebfdead22c235b667)
    bitcoin-core#24 0x7f83ec729c3b  (/lib/x86_64-linux-gnu/libc.so.6+0x129c3b) (BuildId: 42c84c92e6f98126b3e2230ebfdead22c235b667)
[11:46:33.109]
  Member fields were destroyed
    #0 0x55bf56348abd in __sanitizer_dtor_callback_fields /msan/llvm-project/compiler-rt/lib/msan/msan_interceptors.cpp:1044:5
    bitcoin-core#1 0x7f83ecb196de in ~promise /msan/cxx_build/include/c++/v1/future:1341:22
    bitcoin-core#2 0x7f83ecb196de in std::__1::promise<void>::~promise() /msan/llvm-project/libcxx/src/future.cpp:151:1
    bitcoin-core#3 0x55bf563adb36 in mp::test::TestCase251::run() /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/test/mp/test/test.cpp:293:1
    bitcoin-core#4 0x55bf5669252e in kj::TestRunner::run()::'lambda'()::operator()() const /usr/src/kj/test.c++:318:11
    bitcoin-core#5 0x55bf5669252e in kj::Maybe<kj::Exception> kj::runCatchingExceptions<kj::TestRunner::run()::'lambda'()>(kj::TestRunner::run()::'lambda'()&&) /usr/src/kj/exception.h:371:5
    bitcoin-core#6 0x55bf5669071e in kj::TestRunner::run() /usr/src/kj/test.c++:318:11
    bitcoin-core#7 0x55bf5668f977 in auto kj::TestRunner::getMain()::'lambda5'(auto&, auto&&...)::operator()<kj::TestRunner>(auto&, auto&&...) /usr/src/kj/test.c++:217:27
    bitcoin-core#8 0x55bf5668f977 in auto kj::_::BoundMethod<kj::TestRunner&, kj::TestRunner::getMain()::'lambda5'(auto&, auto&&...), kj::TestRunner::getMain()::'lambda6'(auto&, auto&&...)>::operator()<>() /usr/src/kj/function.h:263:12
    bitcoin-core#9 0x55bf5668f977 in kj::Function<kj::MainBuilder::Validity ()>::Impl<kj::_::BoundMethod<kj::TestRunner&, kj::TestRunner::getMain()::'lambda5'(auto&, auto&&...), kj::TestRunner::getMain()::'lambda6'(auto&, auto&&...)>>::operator()() /usr/src/kj/function.h:142:14
    bitcoin-core#10 0x55bf56b7362c in kj::Function<kj::MainBuilder::Validity ()>::operator()() /usr/src/kj/function.h:119:12
    bitcoin-core#11 0x55bf56b7362c in kj::MainBuilder::MainImpl::operator()(kj::StringPtr, kj::ArrayPtr<kj::StringPtr const>) /usr/src/kj/main.c++:623:5
    bitcoin-core#12 0x55bf56b8865c in kj::Function<void (kj::StringPtr, kj::ArrayPtr<kj::StringPtr const>)>::Impl<kj::MainBuilder::MainImpl>::operator()(kj::StringPtr, kj::ArrayPtr<kj::StringPtr const>) /usr/src/kj/function.h:142:14
    bitcoin-core#13 0x55bf56b6a592 in kj::Function<void (kj::StringPtr, kj::ArrayPtr<kj::StringPtr const>)>::operator()(kj::StringPtr, kj::ArrayPtr<kj::StringPtr const>) /usr/src/kj/function.h:119:12
    bitcoin-core#14 0x55bf56b6a592 in kj::runMainAndExit(kj::ProcessContext&, kj::Function<void (kj::StringPtr, kj::ArrayPtr<kj::StringPtr const>)>&&, int, char**)::$_0::operator()() const /usr/src/kj/main.c++:228:5
    bitcoin-core#15 0x55bf56b6a592 in kj::Maybe<kj::Exception> kj::runCatchingExceptions<kj::runMainAndExit(kj::ProcessContext&, kj::Function<void (kj::StringPtr, kj::ArrayPtr<kj::StringPtr const>)>&&, int, char**)::$_0>(kj::runMainAndExit(kj::ProcessContext&, kj::Function<void (kj::StringPtr, kj::ArrayPtr<kj::StringPtr const>)>&&, int, char**)::$_0&&) /usr/src/kj/exception.h:371:5
    bitcoin-core#16 0x55bf56b69b5d in kj::runMainAndExit(kj::ProcessContext&, kj::Function<void (kj::StringPtr, kj::ArrayPtr<kj::StringPtr const>)>&&, int, char**) /usr/src/kj/main.c++:228:5
    bitcoin-core#17 0x55bf5668be8f in main /usr/src/kj/test.c++:381:1
    bitcoin-core#18 0x7f83ec62a1c9  (/lib/x86_64-linux-gnu/libc.so.6+0x2a1c9) (BuildId: 42c84c92e6f98126b3e2230ebfdead22c235b667)
    bitcoin-core#19 0x7f83ec62a28a in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x2a28a) (BuildId: 42c84c92e6f98126b3e2230ebfdead22c235b667)
    bitcoin-core#20 0x55bf5630b374 in _start (/ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/mptest+0x77374)
[11:46:33.109]
SUMMARY: MemorySanitizer: use-of-uninitialized-value /msan/llvm-project/libcxx/src/future.cpp:154:16 in std::__1::promise<void>::get_future()
Exiting
ryanofsky added a commit to ryanofsky/libmultiprocess that referenced this pull request Jul 1, 2025
Fix UndefinedBehaviorSanitizer: null-pointer-use error binding a null pointer
to an unused reference variable. This error should not cause problems in
practice because the reference is not used, but is technically undefined
behavior. Issue was reported:

- Sjors/bitcoin#90 (comment)
- https://github.com/Sjors/bitcoin/actions/runs/15921104847/job/44907875067?pr=90
- Sjors/bitcoin#90 (comment)
- https://github.com/Sjors/bitcoin/actions/runs/15942739184/job/44973004692?pr=90

and fixed:

- Sjors/bitcoin#90 (comment)

Error looks like:

[ TEST ] test.cpp:197: Call IPC method after client connection is closed
/home/runner/work/_temp/src/ipc/libmultiprocess/include/mp/proxy-types.h:612:40: runtime error: reference binding to null pointer of type 'Connection'
    #0 0x5647ad32fc50 in void mp::clientInvoke<mp::ProxyClient<mp::test::messages::FooInterface>, capnp::Request<mp::test::messages::FooInterface::AddParams, mp::test::messages::FooInterface::AddResults> (mp::test::messages::FooInterface::Client::*)(kj::Maybe<capnp::MessageSize>), mp::ClientParam<mp::Accessor<mp::foo_fields::A, 1>, int>, mp::ClientParam<mp::Accessor<mp::foo_fields::B, 1>, int>, mp::ClientParam<mp::Accessor<mp::foo_fields::Result, 2>, int&>>(mp::ProxyClient<mp::test::messages::FooInterface>&, capnp::Request<mp::test::messages::FooInterface::AddParams, mp::test::messages::FooInterface::AddResults> (mp::test::messages::FooInterface::Client::* const&)(kj::Maybe<capnp::MessageSize>), mp::ClientParam<mp::Accessor<mp::foo_fields::A, 1>, int>&&, mp::ClientParam<mp::Accessor<mp::foo_fields::B, 1>, int>&&, mp::ClientParam<mp::Accessor<mp::foo_fields::Result, 2>, int&>&&) /home/runner/work/_temp/build-asan/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/include/mp/proxy-types.h:612:25
    #1 0x5647ad32b62a in mp::ProxyClient<mp::test::messages::FooInterface>::add(int, int) /home/runner/work/_temp/build-asan/src/ipc/libmultiprocess/test/mp/test/foo.capnp.proxy-client.c++:20:5
    #2 0x5647ad2eb9ef in mp::test::TestCase197::run() /home/runner/work/_temp/build-asan/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/test/mp/test/test.cpp:206:14
    #3 0x7f6aaf7100e1  (/lib/x86_64-linux-gnu/libkj-test-1.0.1.so+0x50e1) (BuildId: 2ff7f524274168e50347a2d6dd423f273ae8d268)
    #4 0x7f6aaf710657  (/lib/x86_64-linux-gnu/libkj-test-1.0.1.so+0x5657) (BuildId: 2ff7f524274168e50347a2d6dd423f273ae8d268)
    #5 0x7f6aaf49f37f in kj::MainBuilder::MainImpl::operator()(kj::StringPtr, kj::ArrayPtr<kj::StringPtr const>) (/lib/x86_64-linux-gnu/libkj-1.0.1.so+0x5537f) (BuildId: 4b52c0e2756bcb53e58e705fcb10bab8f63f24fd)
    #6 0x7f6aaf49a499 in kj::runMainAndExit(kj::ProcessContext&, kj::Function<void (kj::StringPtr, kj::ArrayPtr<kj::StringPtr const>)>&&, int, char**) (/lib/x86_64-linux-gnu/libkj-1.0.1.so+0x50499) (BuildId: 4b52c0e2756bcb53e58e705fcb10bab8f63f24fd)
    #7 0x7f6aaf70efcb in main (/lib/x86_64-linux-gnu/libkj-test-1.0.1.so+0x3fcb) (BuildId: 2ff7f524274168e50347a2d6dd423f273ae8d268)
    #8 0x7f6aaeeb81c9  (/lib/x86_64-linux-gnu/libc.so.6+0x2a1c9) (BuildId: 42c84c92e6f98126b3e2230ebfdead22c235b667)
    #9 0x7f6aaeeb828a in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x2a28a) (BuildId: 42c84c92e6f98126b3e2230ebfdead22c235b667)
    #10 0x5647ad1fd614 in _start (/home/runner/work/_temp/build-asan/src/ipc/libmultiprocess/test/mptest+0x11d614) (BuildId: a172f78701ced3f93ad52c3562f181811a1c98e8)

SUMMARY: UndefinedBehaviorSanitizer: null-pointer-use /home/runner/work/_temp/src/ipc/libmultiprocess/include/mp/proxy-types.h:612:40
@Sjors
Owner Author

Sjors commented Jul 3, 2025

CI passed for 444c86f8a76f6077acd52ad6aaf91b0bb58e6ed6. I pushed again to include the rebase of bitcoin#31802.

ryanofsky added a commit to ryanofsky/libmultiprocess that referenced this pull request Jul 10, 2025
fix posted Sjors/bitcoin#90 (comment)

CI error attempting to fix (does not work because nothing prevents the ~ProxyClient<Thread> object from being destroyed during the m_disconnect_cb call if m_disconnect_cb is interrupted before it successfully acquires Waiter::m_mutex, and the mutex could be deleted while it is waiting or before it starts waiting):

WARNING: ThreadSanitizer: data race (pid=8333)
  Write of size 8 at 0x721000003e90 by thread T11 (mutexes: write M0, write M1):
    #0 operator delete(void*, unsigned long) <null> (mptest+0x12b1dc) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    bitcoin-core#1 void std::__1::__libcpp_operator_delete[abi:ne200100]<std::__1::__list_node<std::__1::function<void ()>, void*>*, unsigned long>(std::__1::__list_node<std::__1::function<void ()>, void*>*, unsigned long) /usr/lib/llvm-20/bin/../include/c++/v1/__new/allocate.h:46:3 (mptest+0x235783) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    bitcoin-core#2 void std::__1::__libcpp_deallocate[abi:ne200100]<std::__1::__list_node<std::__1::function<void ()>, void*>>(std::__1::__type_identity<std::__1::__list_node<std::__1::function<void ()>, void*>>::type*, std::__1::__element_count, unsigned long) /usr/lib/llvm-20/bin/../include/c++/v1/__new/allocate.h:86:12 (mptest+0x235783)
    bitcoin-core#3 std::__1::allocator<std::__1::__list_node<std::__1::function<void ()>, void*>>::deallocate[abi:ne200100](std::__1::__list_node<std::__1::function<void ()>, void*>*, unsigned long) /usr/lib/llvm-20/bin/../include/c++/v1/__memory/allocator.h:120:7 (mptest+0x235783)
    bitcoin-core#4 std::__1::allocator_traits<std::__1::allocator<std::__1::__list_node<std::__1::function<void ()>, void*>>>::deallocate[abi:ne200100](std::__1::allocator<std::__1::__list_node<std::__1::function<void ()>, void*>>&, std::__1::__list_node<std::__1::function<void ()>, void*>*, unsigned long) /usr/lib/llvm-20/bin/../include/c++/v1/__memory/allocator_traits.h:302:9 (mptest+0x235783)
    bitcoin-core#5 std::__1::__list_imp<std::__1::function<void ()>, std::__1::allocator<std::__1::function<void ()>>>::__delete_node[abi:ne200100](std::__1::__list_node<std::__1::function<void ()>, void*>*) /usr/lib/llvm-20/bin/../include/c++/v1/list:571:5 (mptest+0x235783)
    bitcoin-core#6 std::__1::list<std::__1::function<void ()>, std::__1::allocator<std::__1::function<void ()>>>::erase(std::__1::__list_const_iterator<std::__1::function<void ()>, void*>) /usr/lib/llvm-20/bin/../include/c++/v1/list:1342:9 (mptest+0x235783)
    bitcoin-core#7 mp::Connection::removeSyncCleanup(std::__1::__list_iterator<std::__1::function<void ()>, void*>) /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/./ipc/libmultiprocess/src/mp/proxy.cpp:160:24 (mptest+0x235783)

   157  void Connection::removeSyncCleanup(CleanupIt it)
   158  {
   159      const Lock lock(m_loop->m_mutex);
   160      m_sync_cleanup_fns.erase(it);
   161  }

    bitcoin-core#8 mp::ProxyClient<mp::Thread>::~ProxyClient() /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/./ipc/libmultiprocess/src/mp/proxy.cpp:339:31 (mptest+0x235783)

   333  ProxyClient<Thread>::~ProxyClient()
   334  {
   335      // If thread is being destroyed before connection is destroyed, remove the
   336      // cleanup callback that was registered to handle the connection being
   337      // destroyed before the thread being destroyed.
   338      if (m_disconnect_cb) {
   339          m_context.connection->removeSyncCleanup(*m_disconnect_cb);
   340      }
   341  }

    bitcoin-core#9 std::__1::pair<mp::Connection* const, mp::ProxyClient<mp::Thread>>::~pair() /usr/lib/llvm-20/bin/../include/c++/v1/__utility/pair.h:63:29 (mptest+0x1d2e8d) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    bitcoin-core#10 void std::__1::__destroy_at[abi:ne200100]<std::__1::pair<mp::Connection* const, mp::ProxyClient<mp::Thread>>, 0>(std::__1::pair<mp::Connection* const, mp::ProxyClient<mp::Thread>>*) /usr/lib/llvm-20/bin/../include/c++/v1/__memory/construct_at.h:66:11 (mptest+0x1d2e8d)
    bitcoin-core#11 void std::__1::allocator_traits<std::__1::allocator<std::__1::__tree_node<std::__1::__value_type<mp::Connection*, mp::ProxyClient<mp::Thread>>, void*>>>::destroy[abi:ne200100]<std::__1::pair<mp::Connection* const, mp::ProxyClient<mp::Thread>>, void, 0>(std::__1::allocator<std::__1::__tree_node<std::__1::__value_type<mp::Connection*, mp::ProxyClient<mp::Thread>>, void*>>&, std::__1::pair<mp::Connection* const, mp::ProxyClient<mp::Thread>>*) /usr/lib/llvm-20/bin/../include/c++/v1/__memory/allocator_traits.h:329:5 (mptest+0x1d2e8d)
    bitcoin-core#12 std::__1::__tree<std::__1::__value_type<mp::Connection*, mp::ProxyClient<mp::Thread>>, std::__1::__map_value_compare<mp::Connection*, std::__1::__value_type<mp::Connection*, mp::ProxyClient<mp::Thread>>, std::__1::less<mp::Connection*>, true>, std::__1::allocator<std::__1::__value_type<mp::Connection*, mp::ProxyClient<mp::Thread>>>>::erase(std::__1::__tree_const_iterator<std::__1::__value_type<mp::Connection*, mp::ProxyClient<mp::Thread>>, std::__1::__tree_node<std::__1::__value_type<mp::Connection*, mp::ProxyClient<mp::Thread>>, void*>*, long>) /usr/lib/llvm-20/bin/../include/c++/v1/__tree:2047:3 (mptest+0x1d2e8d)
    bitcoin-core#13 unsigned long std::__1::__tree<std::__1::__value_type<mp::Connection*, mp::ProxyClient<mp::Thread>>, std::__1::__map_value_compare<mp::Connection*, std::__1::__value_type<mp::Connection*, mp::ProxyClient<mp::Thread>>, std::__1::less<mp::Connection*>, true>, std::__1::allocator<std::__1::__value_type<mp::Connection*, mp::ProxyClient<mp::Thread>>>>::__erase_unique<mp::Connection*>(mp::Connection* const&) /usr/lib/llvm-20/bin/../include/c++/v1/__tree:2067:3 (mptest+0x1d2e8d)
    bitcoin-core#14 std::__1::map<mp::Connection*, mp::ProxyClient<mp::Thread>, std::__1::less<mp::Connection*>, std::__1::allocator<std::__1::pair<mp::Connection* const, mp::ProxyClient<mp::Thread>>>>::erase[abi:ne200100](mp::Connection* const&) /usr/lib/llvm-20/bin/../include/c++/v1/map:1320:79 (mptest+0x205841) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    bitcoin-core#15 std::__1::enable_if<std::is_same<decltype(mp::Accessor<mp::foo_fields::Context, 17>::get(fp1.call_context.getParams())), mp::Context::Reader>::value, kj::Promise<mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>::CallContext>>::type mp::PassField<mp::Accessor<mp::foo_fields::Context, 17>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>, mp::ServerCall, mp::TypeList<>>(mp::Priority<1>, mp::TypeList<>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>&, mp::ServerCall const&, mp::TypeList<>&&)::'lambda'()::operator()()::'lambda0'()::operator()() const /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/include/mp/type-context.h:103:21 (mptest+0x205841)

   103                      KJ_DEFER(if (erase_thread) {
   104                          std::unique_lock<std::mutex> lock(thread_context.waiter->m_mutex);
   105                          // Call erase here with a Connection* argument instead
   106                          // of an iterator argument, because the `request_thread`
   107                          // iterator may be invalid if the connection is closed
   108                          // during this function call. More specifically, the
   109                          // iterator may be invalid because SetThread adds a
   110                          // cleanup callback to the Connection destructor that
   111                          // erases the thread from the map, and also because the
   112                          // ProxyServer<Thread> destructor calls
   113                          // request_threads.clear().
   114                          request_threads.erase(server.m_context.connection);
   115                      });

    bitcoin-core#16 kj::_::Deferred<std::__1::enable_if<std::is_same<decltype(mp::Accessor<mp::foo_fields::Context, 17>::get(fp1.call_context.getParams())), mp::Context::Reader>::value, kj::Promise<mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>::CallContext>>::type mp::PassField<mp::Accessor<mp::foo_fields::Context, 17>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>, mp::ServerCall, mp::TypeList<>>(mp::Priority<1>, mp::TypeList<>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>&, mp::ServerCall const&, mp::TypeList<>&&)::'lambda'()::operator()()::'lambda0'()>::run() /ci_container_base/depends/x86_64-pc-linux-gnu/include/kj/common.h:2010:7 (mptest+0x205841)
    bitcoin-core#17 kj::_::Deferred<std::__1::enable_if<std::is_same<decltype(mp::Accessor<mp::foo_fields::Context, 17>::get(fp1.call_context.getParams())), mp::Context::Reader>::value, kj::Promise<mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>::CallContext>>::type mp::PassField<mp::Accessor<mp::foo_fields::Context, 17>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>, mp::ServerCall, mp::TypeList<>>(mp::Priority<1>, mp::TypeList<>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>&, mp::ServerCall const&, mp::TypeList<>&&)::'lambda'()::operator()()::'lambda0'()>::~Deferred() /ci_container_base/depends/x86_64-pc-linux-gnu/include/kj/common.h:1999:5 (mptest+0x205841)
    bitcoin-core#18 std::__1::enable_if<std::is_same<decltype(mp::Accessor<mp::foo_fields::Context, 17>::get(fp1.call_context.getParams())), mp::Context::Reader>::value, kj::Promise<mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>::CallContext>>::type mp::PassField<mp::Accessor<mp::foo_fields::Context, 17>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>, mp::ServerCall, mp::TypeList<>>(mp::Priority<1>, mp::TypeList<>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>&, mp::ServerCall const&, mp::TypeList<>&&)::'lambda'()::operator()() /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/include/mp/type-context.h:117:17 (mptest+0x205841)
    bitcoin-core#19 kj::Function<void ()>::Impl<std::__1::enable_if<std::is_same<decltype(mp::Accessor<mp::foo_fields::Context, 17>::get(fp1.call_context.getParams())), mp::Context::Reader>::value, kj::Promise<mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>::CallContext>>::type mp::PassField<mp::Accessor<mp::foo_fields::Context, 17>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>, mp::ServerCall, mp::TypeList<>>(mp::Priority<1>, mp::TypeList<>, mp::ServerInvokeContext<mp::ProxyServer<mp::test::messages::FooInterface>, capnp::CallContext<mp::test::messages::FooInterface::CallFnAsyncParams, mp::test::messages::FooInterface::CallFnAsyncResults>>&, mp::ServerCall const&, mp::TypeList<>&&)::'lambda'()>::operator()() /ci_container_base/depends/x86_64-pc-linux-gnu/include/kj/function.h:142:14 (mptest+0x20552f) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    bitcoin-core#20 kj::Function<void ()>::operator()() /ci_container_base/depends/x86_64-pc-linux-gnu/include/kj/function.h:119:12 (mptest+0x154867) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    bitcoin-core#21 void mp::Unlock<std::__1::unique_lock<std::__1::mutex>, kj::Function<void ()>&>(std::__1::unique_lock<std::__1::mutex>&, kj::Function<void ()>&) /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/include/mp/util.h:198:5 (mptest+0x154867)

   194  template <typename Lock, typename Callback>
   195  void Unlock(Lock& lock, Callback&& callback)
   196  {
   197      const UnlockGuard<Lock> unlock(lock);
   198      callback();
   199  }

    bitcoin-core#22 void mp::Waiter::wait<mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::'lambda'()>(std::__1::unique_lock<std::__1::mutex>&, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::'lambda'())::'lambda'()::operator()() const /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/./ipc/libmultiprocess/include/mp/proxy-io.h:296:17 (mptest+0x237d69) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)

   284      template <class Predicate>
   285      void wait(std::unique_lock<std::mutex>& lock, Predicate pred)
   286      {
   287          m_cv.wait(lock, [&] {
   288              // Important for this to be "while (m_fn)", not "if (m_fn)" to avoid
   289              // a lost-wakeup bug. A new m_fn and m_cv notification might be sent
   290              // after the fn() call and before the lock.lock() call in this loop
   291              // in the case where a capnp response is sent and a brand new
   292              // request is immediately received.
   293              while (m_fn) {
   294                  auto fn = std::move(*m_fn);
   295                  m_fn.reset();
   296                  Unlock(lock, fn);
   297              }
   298              const bool done = pred();
   299              return done;
   300          });
   301      }

    bitcoin-core#23 void std::__1::condition_variable::wait<void mp::Waiter::wait<mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::'lambda'()>(std::__1::unique_lock<std::__1::mutex>&, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::'lambda'())::'lambda'()>(std::__1::unique_lock<std::__1::mutex>&, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::'lambda'()) /usr/lib/llvm-20/bin/../include/c++/v1/__condition_variable/condition_variable.h:146:11 (mptest+0x237d69)
    bitcoin-core#24 void mp::Waiter::wait<mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::'lambda'()>(std::__1::unique_lock<std::__1::mutex>&, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::'lambda'()) /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/./ipc/libmultiprocess/include/mp/proxy-io.h:287:14 (mptest+0x237d69)
    bitcoin-core#25 mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/./ipc/libmultiprocess/src/mp/proxy.cpp:404:34 (mptest+0x237d69)

   404          g_thread_context.waiter->wait(lock, [] { return !g_thread_context.waiter; });

    bitcoin-core#26 decltype(std::declval<mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0>()()) std::__1::__invoke[abi:ne200100]<mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0>(mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0&&) /usr/lib/llvm-20/bin/../include/c++/v1/__type_traits/invoke.h:179:25 (mptest+0x237d69)
    bitcoin-core#27 void std::__1::__thread_execute[abi:ne200100]<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0>(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0>&, std::__1::__tuple_indices<...>) /usr/lib/llvm-20/bin/../include/c++/v1/__thread/thread.h:199:3 (mptest+0x237d69)
    bitcoin-core#28 void* std::__1::__thread_proxy[abi:ne200100]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0>>(void*) /usr/lib/llvm-20/bin/../include/c++/v1/__thread/thread.h:208:3 (mptest+0x237d69)

  Previous read of size 8 at 0x721000003e90 by thread T10:
    #0 std::__1::__function::__value_func<void ()>::operator()[abi:ne200100]() const /usr/lib/llvm-20/bin/../include/c++/v1/__functional/function.h:436:12 (mptest+0x238638) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    bitcoin-core#1 std::__1::function<void ()>::operator()() const /usr/lib/llvm-20/bin/../include/c++/v1/__functional/function.h:995:10 (mptest+0x238638)
    bitcoin-core#2 void mp::Unlock<mp::Lock, std::__1::function<void ()>&>(mp::Lock&, std::__1::function<void ()>&) /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/./ipc/libmultiprocess/include/mp/util.h:198:5 (mptest+0x238638)

   194  template <typename Lock, typename Callback>
   195  void Unlock(Lock& lock, Callback&& callback)
   196  {
   197      const UnlockGuard<Lock> unlock(lock);
   198      callback();
   199  }

    bitcoin-core#3 mp::Connection::~Connection() /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/./ipc/libmultiprocess/src/mp/proxy.cpp:139:9 (mptest+0x232bcb) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)

   135      Lock lock{m_loop->m_mutex};
   136      while (!m_sync_cleanup_fns.empty()) {
   137          CleanupList fn;
   138          fn.splice(fn.begin(), m_sync_cleanup_fns, m_sync_cleanup_fns.begin());
   139          Unlock(lock, fn.front());
   140      }

    bitcoin-core#4 std::__1::default_delete<mp::Connection>::operator()[abi:ne200100](mp::Connection*) const /usr/lib/llvm-20/bin/../include/c++/v1/__memory/unique_ptr.h:78:5 (mptest+0x13742b) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    bitcoin-core#5 std::__1::unique_ptr<mp::Connection, std::__1::default_delete<mp::Connection>>::reset[abi:ne200100](mp::Connection*) /usr/lib/llvm-20/bin/../include/c++/v1/__memory/unique_ptr.h:300:7 (mptest+0x13742b)
    bitcoin-core#6 mp::test::TestSetup::TestSetup(bool)::'lambda'()::operator()() const::'lambda0'()::operator()() const /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/test/mp/test/test.cpp:79:71 (mptest+0x13742b)

    79                server_connection->onDisconnect([&] { server_connection.reset(); });

    bitcoin-core#7 kj::_::Void kj::_::MaybeVoidCaller<kj::_::Void, kj::_::Void>::apply<mp::test::TestSetup::TestSetup(bool)::'lambda'()::operator()() const::'lambda0'()>(mp::test::TestSetup::TestSetup(bool)::'lambda'()::operator()() const::'lambda0'()&, kj::_::Void&&) /ci_container_base/depends/x86_64-pc-linux-gnu/include/kj/async-prelude.h:195:5 (mptest+0x13742b)
    bitcoin-core#8 kj::_::TransformPromiseNode<kj::_::Void, kj::_::Void, mp::test::TestSetup::TestSetup(bool)::'lambda'()::operator()() const::'lambda0'(), kj::_::PropagateException>::getImpl(kj::_::ExceptionOrValue&) /ci_container_base/depends/x86_64-pc-linux-gnu/include/kj/async-inl.h:739:31 (mptest+0x13742b)
    bitcoin-core#9 kj::_::TransformPromiseNodeBase::get(kj::_::ExceptionOrValue&) <null> (mptest+0x2df64c) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    bitcoin-core#10 mp::EventLoop::loop() /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/./ipc/libmultiprocess/src/mp/proxy.cpp:231:68 (mptest+0x234699) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)

   231          const size_t read_bytes = wait_stream->read(&buffer, 0, 1).wait(m_io_context.waitScope);

    bitcoin-core#11 mp::test::TestSetup::TestSetup(bool)::'lambda'()::operator()() const /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/test/mp/test/test.cpp:92:20 (mptest+0x133819) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    bitcoin-core#12 decltype(std::declval<mp::test::TestSetup::TestSetup(bool)::'lambda'()>()()) std::__1::__invoke[abi:ne200100]<mp::test::TestSetup::TestSetup(bool)::'lambda'()>(mp::test::TestSetup::TestSetup(bool)::'lambda'()&&) /usr/lib/llvm-20/bin/../include/c++/v1/__type_traits/invoke.h:179:25 (mptest+0x133119) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    bitcoin-core#13 void std::__1::__thread_execute[abi:ne200100]<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::test::TestSetup::TestSetup(bool)::'lambda'()>(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::test::TestSetup::TestSetup(bool)::'lambda'()>&, std::__1::__tuple_indices<...>) /usr/lib/llvm-20/bin/../include/c++/v1/__thread/thread.h:199:3 (mptest+0x133119)
    bitcoin-core#14 void* std::__1::__thread_proxy[abi:ne200100]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::test::TestSetup::TestSetup(bool)::'lambda'()>>(void*) /usr/lib/llvm-20/bin/../include/c++/v1/__thread/thread.h:208:3 (mptest+0x133119)

  Mutex M0 (0x721c00003790) created at:
    #0 pthread_mutex_lock <null> (mptest+0xa844b) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    bitcoin-core#1 std::__1::mutex::lock() <null> (libc++.so.1.0.20+0x713cc) (BuildId: 30ef7da36db6fb0c014ee96603f7649f755cb793)

  Mutex M1 (0x7fbbe45fd4e0) created at:
    #0 pthread_mutex_lock <null> (mptest+0xa844b) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    bitcoin-core#1 std::__1::mutex::lock() <null> (libc++.so.1.0.20+0x713cc) (BuildId: 30ef7da36db6fb0c014ee96603f7649f755cb793)
    bitcoin-core#2 mp::Connection::Connection(mp::EventLoop&, kj::Own<kj::AsyncIoStream, std::nullptr_t>&&, std::__1::function<capnp::Capability::Client (mp::Connection&)> const&) /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/include/mp/proxy-io.h:330:11 (mptest+0x134cfb) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    bitcoin-core#3 std::__1::unique_ptr<mp::Connection, std::__1::default_delete<mp::Connection>> std::__1::make_unique[abi:ne200100]<mp::Connection, mp::EventLoop&, kj::Own<kj::AsyncIoStream, std::nullptr_t>, mp::test::TestSetup::TestSetup(bool)::'lambda'()::operator()() const::'lambda'(mp::Connection&), 0>(mp::EventLoop&, kj::Own<kj::AsyncIoStream, std::nullptr_t>&&, mp::test::TestSetup::TestSetup(bool)::'lambda'()::operator()() const::'lambda'(mp::Connection&)&&) /usr/lib/llvm-20/bin/../include/c++/v1/__memory/unique_ptr.h:767:30 (mptest+0x1333f9) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    bitcoin-core#4 mp::test::TestSetup::TestSetup(bool)::'lambda'()::operator()() const /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/test/mp/test/test.cpp:70:19 (mptest+0x1333f9)
    bitcoin-core#5 decltype(std::declval<mp::test::TestSetup::TestSetup(bool)::'lambda'()>()()) std::__1::__invoke[abi:ne200100]<mp::test::TestSetup::TestSetup(bool)::'lambda'()>(mp::test::TestSetup::TestSetup(bool)::'lambda'()&&) /usr/lib/llvm-20/bin/../include/c++/v1/__type_traits/invoke.h:179:25 (mptest+0x133119) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    bitcoin-core#6 void std::__1::__thread_execute[abi:ne200100]<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::test::TestSetup::TestSetup(bool)::'lambda'()>(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::test::TestSetup::TestSetup(bool)::'lambda'()>&, std::__1::__tuple_indices<...>) /usr/lib/llvm-20/bin/../include/c++/v1/__thread/thread.h:199:3 (mptest+0x133119)
    bitcoin-core#7 void* std::__1::__thread_proxy[abi:ne200100]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::test::TestSetup::TestSetup(bool)::'lambda'()>>(void*) /usr/lib/llvm-20/bin/../include/c++/v1/__thread/thread.h:208:3 (mptest+0x133119)

  Thread T11 (tid=8378, running) created by thread T10 at:
    #0 pthread_create <null> (mptest+0xa673e) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    bitcoin-core#1 std::__1::__libcpp_thread_create[abi:ne200100](unsigned long*, void* (*)(void*), void*) /usr/lib/llvm-20/bin/../include/c++/v1/__thread/support/pthread.h:182:10 (mptest+0x236338) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    bitcoin-core#2 std::__1::thread::thread<mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0, 0>(mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0&&) /usr/lib/llvm-20/bin/../include/c++/v1/__thread/thread.h:218:14 (mptest+0x236338)
    bitcoin-core#3 mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>) /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/./ipc/libmultiprocess/src/mp/proxy.cpp:397:17 (mptest+0x236338)
    bitcoin-core#4 mp::ThreadMap::Server::dispatchCallInternal(unsigned short, capnp::CallContext<capnp::AnyPointer, capnp::AnyPointer>) /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/include/mp/proxy.capnp.c++:602:9 (mptest+0x231af8) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    bitcoin-core#5 mp::ThreadMap::Server::dispatchCall(unsigned long, unsigned short, capnp::CallContext<capnp::AnyPointer, capnp::AnyPointer>) /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/include/mp/proxy.capnp.c++:591:14 (mptest+0x231af8)
    bitcoin-core#6 virtual thunk to mp::ThreadMap::Server::dispatchCall(unsigned long, unsigned short, capnp::CallContext<capnp::AnyPointer, capnp::AnyPointer>) /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/include/mp/proxy.capnp.c++ (mptest+0x231af8)
    bitcoin-core#7 capnp::LocalClient::callInternal(unsigned long, unsigned short, capnp::CallContextHook&) <null> (mptest+0x25031c) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    bitcoin-core#8 mp::EventLoop::loop() /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/./ipc/libmultiprocess/src/mp/proxy.cpp:231:68 (mptest+0x234699) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    bitcoin-core#9 mp::test::TestSetup::TestSetup(bool)::'lambda'()::operator()() const /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/test/mp/test/test.cpp:92:20 (mptest+0x133819) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    bitcoin-core#10 decltype(std::declval<mp::test::TestSetup::TestSetup(bool)::'lambda'()>()()) std::__1::__invoke[abi:ne200100]<mp::test::TestSetup::TestSetup(bool)::'lambda'()>(mp::test::TestSetup::TestSetup(bool)::'lambda'()&&) /usr/lib/llvm-20/bin/../include/c++/v1/__type_traits/invoke.h:179:25 (mptest+0x133119) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    bitcoin-core#11 void std::__1::__thread_execute[abi:ne200100]<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::test::TestSetup::TestSetup(bool)::'lambda'()>(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::test::TestSetup::TestSetup(bool)::'lambda'()>&, std::__1::__tuple_indices<...>) /usr/lib/llvm-20/bin/../include/c++/v1/__thread/thread.h:199:3 (mptest+0x133119)
    bitcoin-core#12 void* std::__1::__thread_proxy[abi:ne200100]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mp::test::TestSetup::TestSetup(bool)::'lambda'()>>(void*) /usr/lib/llvm-20/bin/../include/c++/v1/__thread/thread.h:208:3 (mptest+0x133119)

  Thread T10 (tid=8377, running) created by main thread at:
    #0 pthread_create <null> (mptest+0xa673e) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    bitcoin-core#1 std::__1::__libcpp_thread_create[abi:ne200100](unsigned long*, void* (*)(void*), void*) /usr/lib/llvm-20/bin/../include/c++/v1/__thread/support/pthread.h:182:10 (mptest+0x132c30) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    bitcoin-core#2 std::__1::thread::thread<mp::test::TestSetup::TestSetup(bool)::'lambda'(), 0>(mp::test::TestSetup::TestSetup(bool)::'lambda'()&&) /usr/lib/llvm-20/bin/../include/c++/v1/__thread/thread.h:218:14 (mptest+0x132c30)
    bitcoin-core#3 mp::test::TestSetup::TestSetup(bool) /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/test/mp/test/test.cpp:62:11 (mptest+0x12f8dd) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    bitcoin-core#4 mp::test::TestCase251::run() /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/./ipc/libmultiprocess/test/mp/test/test.cpp:272:15 (mptest+0x12e95e) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)
    bitcoin-core#5 kj::Maybe<kj::Exception> kj::runCatchingExceptions<kj::TestRunner::run()::'lambda'()>(kj::TestRunner::run()::'lambda'()&&) <null> (mptest+0x23e7c0) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417)

SUMMARY: ThreadSanitizer: data race (/ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/ipc/libmultiprocess/test/mptest+0x12b1dc) (BuildId: bcd28b84fed9c9590c0eb7740b07101619532417) in operator delete(void*, unsigned long)
Sjors force-pushed the 2025/02/ipc-yea branch 3 times, most recently from 939d6f8 to 082b416 on July 24, 2025 at 14:26
Sjors force-pushed the 2025/02/ipc-yea branch 9 times, most recently from a12733e to ce7d94a on August 14, 2025 at 19:00
Sjors closed this on Aug 25, 2025
ryanofsky added a commit to ryanofsky/libmultiprocess that referenced this pull request Sep 2, 2025
…returning

This bug is currently causing mptest "disconnecting and blocking" test to
occasionally hang as reported by maflcko in
bitcoin/bitcoin#33244.

The bug was actually first reported by Sjors in
Sjors/bitcoin#90 (comment) and there are
more details about it in
bitcoin-core#189.

The bug is caused by the "disconnecting and blocking" test triggering a
disconnect right before a server IPC call returns. This results in a race
between the IPC server thread and the onDisconnect handler in the event loop thread both
trying to destroy the server's request_threads ProxyClient<Thread> object when
the IPC call is done.

There was a lack of synchronization in this case, fixed here by adding
loop->sync() in various places. There were also lock order issues where
Waiter::m_mutex could be incorrectly locked before EventLoop::m_mutex resulting
in a deadlock.
ryanofsky added a commit to ryanofsky/libmultiprocess that referenced this pull request Sep 3, 2025
ryanofsky added a commit to ryanofsky/libmultiprocess that referenced this pull request Sep 5, 2025
ryanofsky added a commit to ryanofsky/libmultiprocess that referenced this pull request Sep 5, 2025
Sjors pushed a commit to Sjors/libmultiprocess that referenced this pull request Sep 10, 2025
Sjors pushed a commit to Sjors/libmultiprocess that referenced this pull request Sep 10, 2025
ryanofsky added a commit to ryanofsky/libmultiprocess that referenced this pull request Sep 10, 2025
…returning

This bug is currently causing mptest "disconnecting and blocking" test to
occasionally hang as reported by maflcko in
bitcoin/bitcoin#33244.

The bug was actually first reported by Sjors in
Sjors/bitcoin#90 (comment) and there are
more details about it in
bitcoin-core#189.

The bug is caused by the "disconnecting and blocking" test triggering a
disconnect right before a server IPC call returns. This results in a race
between the IPC server thread and the onDisconnect handler in the event loop
thread both trying to destroy the server's request_threads ProxyClient<Thread>
object when the IPC call is done.

There was a lack of synchronization in this case, fixed here by adding
loop->sync() in a few places. Specifically the fixes were to:

- Always call SetThread on the event loop thread using the loop->sync() method,
  to prevent a race between the ProxyClient<Thread> creation code and the
  connection shutdown code if there was an ill-timed disconnect.

- Similarly in ~ProxyClient<Thread> and thread-context.h PassField(), use
  loop->sync() when destroying the thread object, in case a disconnect happens
  at that time.

A few other changes were made in this commit to make the resulting code safer
and simpler, even though they are not technically necessary for the fix:

- In thread-context.h PassField(), Waiter::m_mutex is now unlocked while
  destroying ProxyClient<Thread> just to respect EventLoop::m_mutex and
  Waiter::m_mutex lock order and never lock the Waiter first. This is just for
  consistency. There is no actual possibility of a deadlock here due to the
  new sync() call.

- This adds asserts to make sure functions expected to run on the event
  loop thread are only called on that thread.

- This inlines the ProxyClient<Thread>::setDisconnectCallback function, just
  because it was a short function only called in a single place.
ryanofsky added a commit to ryanofsky/libmultiprocess that referenced this pull request Sep 10, 2025
ryanofsky added a commit to ryanofsky/libmultiprocess that referenced this pull request Sep 11, 2025
ryanofsky added a commit to ryanofsky/libmultiprocess that referenced this pull request Sep 17, 2025
…returning
ryanofsky added a commit to bitcoin-core/libmultiprocess that referenced this pull request Sep 17, 2025
…connect handler

4a269b2 bug: fix ProxyClient<Thread> deadlock if disconnected as IPC call is returning (Ryan Ofsky)
85df964 Use try_emplace in SetThread instead of threads.find (Sjors Provoost)
ca9b380 Use std::optional in ConnThreads to allow shortening locks (Sjors Provoost)
9b07991 doc: describe ThreadContext struct and synchronization requirements (Ryan Ofsky)
d60db60 proxy-io.h: add Waiter::m_mutex thread safety annotations (Ryan Ofsky)
4e365b0 ci: Use -Wthread-safety not -Wthread-safety-analysis (Ryan Ofsky)

Pull request description:

  This bug is currently causing the mptest "disconnecting and blocking" test to occasionally hang, as reported by maflcko in bitcoin/bitcoin#33244.

  The bug was actually first reported by Sjors in Sjors/bitcoin#90 (comment) and there are more details about it in #189.

  The bug is caused by the "disconnecting and blocking" test triggering a disconnect right before a server IPC call returns. This results in a race between the IPC server thread and the `onDisconnect` handler in the event loop thread both trying to destroy the server's `request_threads` `ProxyClient<Thread>` object when the IPC call is done.

  There was a lack of synchronization in this case, fixed here by adding `loop->sync()` calls in various places. There were also lock order issues where `Waiter::m_mutex` could be incorrectly locked before `EventLoop::m_mutex`, resulting in a deadlock.

ACKs for top commit:
  Sjors:
    ACK 4a269b2
  Eunovo:
    ACK 4a269b2

Tree-SHA512: 1894c33f9847ef755816e232cfc18843435e25ad5b400cd5a04eead732a738accc1da401fd13b5b11ade04fb3145060640760f53b431132cae022bd7d004e73b
theuni pushed a commit to theuni/libmultiprocess that referenced this pull request Sep 26, 2025
…returning
Sjors pushed a commit that referenced this pull request Dec 29, 2025
Consensus Cleanup preparation: backport "miner: timelock the coinbase to the mined block's height"