
Conversation

@tobias93
Contributor

When indexing large point clouds, we always had problems with PotreeConverter consuming enormous amounts of memory. PotreeConverter would only run if we provided 100 GiB of swap space, if not more.

Long and thorough investigation revealed the issue: the Poisson sampling sorts the points by their distance to the center of the node. This is done using std::sort with the relatively new (C++17) std::execution::par_unseq policy, so the sorting happens in parallel. On Linux, the C++ standard library relies on Intel TBB for the implementation of the parallel sort, and the TBB library is where the memory leaks.

There is not much information available online about this, but I think this is the issue that is causing the leak: https://community.intel.com/t5/Intel-oneAPI-Threading-Building/std-sort-std-execution-par-unseq-has-a-memory-leak-on-Linux/m-p/1582773
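For context, the affected pattern looks roughly like the following. This is a minimal, hypothetical sketch (the actual types and names in PotreeConverter differ); on Linux with libstdc++, the parallel overload dispatches to Intel TBB:

```cpp
#include <algorithm>
#include <execution>
#include <vector>

// Hypothetical sketch of the leaky pattern (not PotreeConverter's actual code):
// sort a node's points by squared distance to the node center using the C++17
// parallel-unsequenced execution policy. On Linux, libstdc++ delegates this to
// Intel TBB, which is where the memory leak occurs. Compile with -ltbb on GCC.
struct Point { double x, y, z; };

void sortByDistanceToCenter(std::vector<Point>& points, const Point& center) {
    auto sqDist = [&center](const Point& p) {
        double dx = p.x - center.x;
        double dy = p.y - center.y;
        double dz = p.z - center.z;
        return dx * dx + dy * dy + dz * dz;
    };

    std::sort(std::execution::par_unseq, points.begin(), points.end(),
        [&sqDist](const Point& a, const Point& b) { return sqDist(a) < sqDist(b); });
}
```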

The fix for PotreeConverter is to simply not use par_unseq when sorting the points. This means that the sorting no longer happens in parallel. However, PotreeConverter already parallelizes over the chunks, so this should not be an issue.
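Concretely, the change amounts to calling the plain single-threaded overload of std::sort, i.e. dropping the execution policy. A sketch reusing the same hypothetical types and names as above:

```cpp
#include <algorithm>
#include <vector>

// Fixed version of the hypothetical sketch above: a plain sequential
// std::sort, i.e. the execution policy is simply dropped. Each node is now
// sorted on a single thread; parallelism still comes from PotreeConverter
// processing many chunks concurrently.
void sortByDistanceToCenter(std::vector<Point>& points, const Point& center) {
    auto sqDist = [&center](const Point& p) {
        double dx = p.x - center.x;
        double dy = p.y - center.y;
        double dz = p.z - center.z;
        return dx * dx + dy * dy + dz * dz;
    };

    std::sort(points.begin(), points.end(),
        [&sqDist](const Point& a, const Point& b) { return sqDist(a) < sqDist(b); });
}
```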

We tested the fix with a point cloud consisting of approximately 100 GB of uncompressed LAS files. With the current version of PotreeConverter, memory climbs up to ~60 GB during the indexing phase, while it stays below 12 GB with the fix:

Old:
[Figure 1: memory usage over time during indexing, peaking around 60 GB]

New (fixed):
[Figure 2: memory usage over time during indexing, staying below 12 GB]

A positive side effect of this fix is that it is also faster, possibly due to the reduced parallelization overhead. In the example above, the indexing step was faster by a factor of 2.2, which saved us almost an hour of processing time.

I believe this PR will fix issue #528.

@tobias93 tobias93 changed the title from "Parallel sort has a memory leak in linux. Use sequential sort." to "Fix memory leak." on Aug 28, 2024
@m-schuetz
Collaborator

m-schuetz commented Aug 28, 2024

Interesting, I've heard from others that they've also experienced massive memory issues on some systems. Thanks for the PR! I'll just check whether this affects performance on Windows before merging.

@bmmeijers

Thanks for this!

I can confirm that on our Ubuntu system this now allows processing the Dutch AHN3 dataset in its entirety 🚀. Previously, the process tried to use all available memory on the system until it was killed by the OOM killer.

```
[100%, ... [RAM: 20.1GB (highest 44.8GB), ...]
=======================================
=== STATS
=======================================
#points:               557'925'797'136
#input files:          1'374
sampling method:       poisson
```

If it does not regress on Windows, I'd recommend merging!

@pierreleripoll

Hi @m-schuetz, do you plan to merge this feature in the upcoming weeks? This is really beneficial for my project 👼
