This repository was archived by the owner on Feb 26, 2025. It is now read-only.
Optimize loading ?fferent_edges #298
Merged
joni-herttuainen merged 2 commits into master, Nov 6, 2023
Conversation
matz-e
reviewed
Oct 23, 2023
2cad097 to be4cb91
be4cb91 to fd8adc1
Collaborator
Author
When reading edge IDs for a large file we get a 10'000x speedup. The benchmark computes the edge IDs used by each MPI rank for analysis purposes. With the optimization it takes about 2-4 s; without it, the first 10 ranks alone take 77 s (or 8.5 h total).
Collaborator
Author
The PR uses templates to hide the difference between
Collaborator
Author
The solution only works for
sergiorg-hpc
previously approved these changes
Nov 1, 2023
Reading from parallel filesystems, e.g. GPFS, requires reading few but
large chunks. Reading multiple times from the same block/page comes with
a hefty performance penalty.
The commit implements the functionality for merging nearby reads by
adding or modifying:
* `sortAndMerge` to allow merging ranges across gaps up to a certain
size.
* `bulkRead` to read block-by-block and extract the requested slices
in memory.
* `_readSelection` to always combine reads.
* `?fferent_edges` to optimize reading of edge IDs.
It requires a compile-time constant `SONATA_PAGESIZE` to specify the
block/page size to be targeted.
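The range-merging idea behind `sortAndMerge` can be illustrated with a minimal Python sketch. This is not the actual C++ implementation; the function name and signature here are illustrative only. Requested `[start, stop)` ranges are sorted, and neighbours whose gap is at most `max_gap` (playing the role of the configured block/page size) are fused, so the filesystem sees a few large reads instead of many small ones.

```python
def sort_and_merge(ranges, max_gap=0):
    """Sort [start, stop) ranges and merge those separated by <= max_gap."""
    if not ranges:
        return []
    ranges = sorted(ranges)
    merged = [list(ranges[0])]
    for start, stop in ranges[1:]:
        if start - merged[-1][1] <= max_gap:
            # Gap is small enough: extend the previous range across it.
            merged[-1][1] = max(merged[-1][1], stop)
        else:
            merged.append([start, stop])
    return [tuple(r) for r in merged]
```

With `max_gap=0` this degenerates to ordinary merging of overlapping/adjacent ranges; a larger `max_gap` deliberately reads a little extra data between nearby ranges to avoid issuing separate small reads.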
fd8adc1 to d2e189e
matz-e
previously approved these changes
Nov 3, 2023
joni-herttuainen
approved these changes
Nov 6, 2023
WeinaJi
pushed a commit
to BlueBrain/neurodamus
that referenced
this pull request
Jan 29, 2024
## Context

When using `WholeCell` load-balancing, the access pattern when reading parameters during synapse creation is extremely poor and is the main reason why we see long (10+ minute) periods of severe performance degradation of our parallel filesystem when running slightly larger simulations on BB5. Using Darshan and several PoCs, we established that the time required to read these parameters can be reduced by more than 8x, and IOps can be reduced by over 1000x, when using collective MPI-IO. Moreover, the "waiters" were reduced substantially as well. See BBPBGLIB-1070.

Following those findings, we concluded that neurodamus would need to use collective MPI-IO in the future. We've implemented most of the required changes directly in libsonata, allowing others to benefit from the same optimizations should the need arise. See BlueBrain/libsonata#309, BlueBrain/libsonata#307, and the preparatory work: BlueBrain/libsonata#315, BlueBrain/libsonata#314, BlueBrain/libsonata#298.

By instrumenting two simulations (SSCX and reduced MMB), we concluded that neurodamus was almost collective. However, certain attributes were read in a different order on different MPI ranks, possibly due to hashes being salted differently on different MPI ranks.

## Scope

This PR enables neurodamus to use collective IO for the simulations described above.

## Testing

We successfully ran the reduced MMB simulation, but since SSCX hasn't been converted to SONATA, we can't run that simulation.

## Review

* [x] PR description is complete
* [x] Coding style (imports, function length, new functions, classes or files) is good
* [ ] Unit/scientific test added
* [ ] Updated README, in-code, developer documentation

Co-authored-by: Luc Grosheintz <luc.grosheintz@gmail.ch>
WeinaJi
pushed a commit
to BlueBrain/neurodamus
that referenced
this pull request
Oct 14, 2024
(Same commit message as the Jan 29, 2024 commit above.)
Optimized reading of edge IDs by aggregating ranges into larger (GPFS-friendly) ranges before creating the appropriate HDF5 selection, reducing the number of individual reads. Any unneeded data is then filtered out in memory. This is very similar to the work done in #183.
This PR introduces the following:
* `libsonata.Selection.?fferent_edge` in bulk.
* `SONATA_PAGESIZE`, which controls how large the merged regions need to be.
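The "read big, filter in memory" step can be sketched as follows. This is a minimal, hypothetical Python illustration, not the libsonata API: `data` stands in for an HDF5 dataset, and the function reads one merged range as a single contiguous read, then cuts the originally requested slices out of the in-memory buffer.

```python
def bulk_read(data, merged_range, wanted_slices):
    """Read one merged [lo, hi) range, then extract the requested slices."""
    lo, hi = merged_range
    block = data[lo:hi]  # one large contiguous read instead of many small ones
    # Slice the requested sub-ranges out of the in-memory buffer,
    # discarding the extra data read across the merged gaps.
    return [block[a - lo:b - lo] for a, b in wanted_slices]
```

The trade-off is deliberate: a little extra data is read and then discarded, in exchange for far fewer (and larger) I/O operations against the parallel filesystem.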