
Parallel Benchmark: Cleanup #398

Merged
ax3l merged 1 commit into openPMD:dev from ax3l:doc-cleanupParallelBenchmark
Dec 12, 2018

Conversation


@ax3l (Member) commented Nov 30, 2018

Clean leftovers from new MPI benchmark in #346.

  • properly add auxiliary benchmark sources
  • properly name for build with MPI only
  • fix BlockSlicer API: no MPI needed
  • includes
  • add changelog

@ax3l ax3l requested a review from franzpoeschel November 30, 2018 12:23
@ax3l ax3l force-pushed the doc-cleanupParallelBenchmark branch from 06c7e0f to ba31f55 Compare November 30, 2018 12:24
@ax3l ax3l added the MPI label Nov 30, 2018
@ax3l ax3l force-pushed the doc-cleanupParallelBenchmark branch from ba31f55 to 6c98ccd Compare November 30, 2018 12:28
src/backend/PatchRecord.cpp
src/backend/PatchRecordComponent.cpp
src/backend/Writable.cpp
src/benchmark/mpi/OneDimensionalBlockSlicer.cpp)
@ax3l (Member, Author) Nov 30, 2018

hm, unless this shall be auxiliary as the documentation you wrote suggests (part of the library, not only part of the benchmark example...).

Maybe it's well-placed in auxiliary then. But then it should be included in our facade headers as well.

@franzpoeschel did you pass the whole communicator in case you want to determine e.g. an MPI grid position from it?

Maybe an openPMD::benchmark CMake target makes sense, outside of examples.

This comment was marked as resolved.

Contributor

Passing the rank directly is probably the more sensible solution, yes.

Contributor

part of the library, not only part of the benchmark example

I was intending for it to become part of the library, to allow users to easily dissect a dataset into pieces. If we go with this, OneDimensionalBlockSlicer.cpp should probably be put back into the sources list (possibly after moving it to auxiliary). If not, the complete class (i.e. OneDimensionalBlockSlicer) should probably go into the example file and be removed from the API altogether. But that would force users to needlessly copy-paste the class if they want to use it…?
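The helper under discussion can be sketched as follows. This is an illustrative stand-alone function, not the actual OneDimensionalBlockSlicer implementation; the name `sliceBlock` is hypothetical. It takes a plain rank index rather than an MPI communicator, matching the "no MPI needed" API fix in this PR:

```cpp
#include <cstddef>
#include <utility>

// Hypothetical 1D block slicer: divide `total` elements among `nRanks`
// consumers, giving each rank one contiguous chunk. The first
// `total % nRanks` ranks receive one extra element so the split stays
// balanced. Returns {offset, count} for `rank`. Takes a plain rank
// index, not an MPI communicator, so no MPI dependency is needed.
std::pair<std::size_t, std::size_t>
sliceBlock(std::size_t total, std::size_t nRanks, std::size_t rank)
{
    std::size_t const base = total / nRanks;
    std::size_t const rest = total % nRanks;
    // Ranks below `rest` get base+1 elements; the others get base.
    std::size_t const count = base + (rank < rest ? 1 : 0);
    // Offset: all preceding ranks' base chunks, plus one extra element
    // for each preceding rank that received the +1.
    std::size_t const offset = rank * base + (rank < rest ? rank : rest);
    return {offset, count};
}
```

A caller that does want an MPI grid position can obtain the rank via `MPI_Comm_rank` and pass it in, which keeps the slicer itself free of any MPI dependency.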

Maybe it's well-placed in auxiliary then.

So you suggest placing the whole parallel benchmark into auxiliary (headers as well as sources)? Sounds good to me.

@ax3l (Member, Author) Dec 12, 2018

Yes, prospectively that would make a good place for that helper. I'll leave it in the dir for now. We also need to think about the auxiliary headers for this workflow: they are currently not part of the public API, but we still might want to add facade headers... dunno yet :)

@ax3l ax3l force-pushed the doc-cleanupParallelBenchmark branch 4 times, most recently from 7fd95d5 to a2e816a Compare November 30, 2018 12:47
Clean leftovers from new MPI benchmark.

- properly add auxiliary benchmark sources
- properly name for build with MPI only
- fix BlockSlicer API: no MPI needed
- includes
- some formatting
- add changelog
@ax3l ax3l force-pushed the doc-cleanupParallelBenchmark branch from a2e816a to cc514f5 Compare December 12, 2018 09:28
@ax3l ax3l merged commit 8980b71 into openPMD:dev Dec 12, 2018
@ax3l ax3l deleted the doc-cleanupParallelBenchmark branch December 12, 2018 13:07