Use CUDA for validation of the test pipeline #2
Merged
Conversation
lih pushed a commit that referenced this pull request on Nov 29, 2024
glesur added a commit that referenced this pull request on Dec 15, 2025
This PR fixes a long-standing issue causing the face-centered magnetic field component B (referred to as BXs) to drift at MPI subdomain boundaries during long integrations. The drift occurs because neighbouring subdomains may compute slightly different edge EMFs (likely due to roundoff differences), resulting in inconsistent BXs values across shared boundaries.

This bug is subtle and can manifest in several ways:
- A sudden increase in div(B) after restarting from a dump file. Because the dump stores only one of the two possible BXs values at each subdomain boundary, any accumulated mismatch becomes visible upon restart.
- Unexpected or non-physical BXs values at subdomain edges when using vector potentials.

Historically, these edge zones were not exchanged, under the assumption that each subdomain's computed values were sufficient. In practice, this allowed roundoff-level discrepancies to accumulate over time.

This PR introduces a consistent synchronization rule: the BXs value from the left side (=start) of each subdomain is treated as the authoritative value and overwrites the corresponding right-side value of the left-side neighbour. This ensures deterministic and stable edge values (a minimal sketch of this rule follows the commit list below). The change increases the communication cost by roughly 5% due to the larger exchange buffer.

Finally, note that a similar issue can also arise in serial runs with periodic boundary conditions, and this PR fixes that case as well.

Because of these changes, this PR represents a major refactoring of the MPI exchange routines and boundary-condition logic, which now all rely on pre-defined bounding boxes. The new edge zones are handled only for internal and strictly periodic (i.e., non-shearing-box) boundary conditions, simplifying and unifying the overall approach.

List of commits:
* fix non-blocking MPI comms
* avoid overwriting data when there is no neighbour
* fix pack/unpack loop name for debug
* fix non-sending X1 direction
* back to mpi_persistent
* fix linter
* use the left domain as the reference domain for EMFs (instead of averaging) to be coherent with the BXs exchange routine; check that the vector potential follows the same boundary logic as the EMF (left domain is the reference).
  Note: BCs on the vector potential are only applied after boundary conditions, since the EMF boundary conditions ensure that the vector potential will always be consistent later.
  Todo:
  - enforce these boundary conditions on the vector potential with periodic boundary conditions and without MPI (as is done for EMFs)
  - check that BXs normal is also consistent when MPI is off with periodic BCs
* fix file spelling
* check shearing box with MPI. As expected, shearing box + MPI is broken, as we should not reset the surface field in this case.
* refactor MPI exchange routine. In the future, this will allow exchange routines not to exchange the normal field's last active zone when the shearing box is enabled. For this, the refactoring includes an optional parameter "overwriteBXn" in each exchange direction
* fix shearing boxes
* boundaryFor implemented with variable boundingBoxes, now allowing overwriting of BXs normal only with periodic boundary conditions, to preserve the shearing box
* final fix to the vector potential consistency in serial, to do the same as with MPI
* fix bounding-box implementation for serial BCs; fix gauge for AmbipolarCShock to be compatible with periodic boundary conditions along x2
* clean up exchange routine
* fix nghost bug in MPI exchanger
* use a single exchanger for fargo with domain decomposition
* different message tag for each MPI exchanger
* ensure that MPI is using the datablock's boundary conditions; fix missing fences with shearing-box BCs
* fix linter
* fix linter #2
* Update src/fluid/constrainedTransport/enforceEMFBoundary.hpp (Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>)
* add missing pushRegion
* Update src/fluid/boundary/axis.hpp (Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>)
* fix typo
---------
Co-authored-by: glesur <glesur@login6.head.adastra.cines.fr>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
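To make the left-authoritative rule concrete, here is a minimal sketch, assuming a 1D domain decomposition along x1, a flat `std::vector` field, and plain `MPI_Sendrecv`. The function name `syncSharedBXs` is hypothetical; the actual exchange routines operate on pre-defined bounding boxes rather than whole arrays.

```cpp
// Hypothetical sketch of the left-authoritative BXs synchronization.
// bxs holds one subdomain's face-centered field (nx cells, nx + 1 faces).
#include <mpi.h>
#include <vector>

void syncSharedBXs(std::vector<double> &bxs, MPI_Comm comm) {
  int rank, size;
  MPI_Comm_rank(comm, &rank);
  MPI_Comm_size(comm, &size);
  const int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
  const int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

  // Each subdomain's first (start) face is authoritative: send it to the
  // left neighbour, and overwrite our own last face with the start face
  // received from the right neighbour. Overwriting instead of averaging
  // keeps the shared face bit-identical on both sides of the boundary.
  double recvFace = 0.0;
  MPI_Sendrecv(&bxs.front(), 1, MPI_DOUBLE, left,  0,
               &recvFace,    1, MPI_DOUBLE, right, 0,
               comm, MPI_STATUS_IGNORE);
  if (right != MPI_PROC_NULL) {
    bxs.back() = recvFace;
  }
}
```

The serial periodic case mentioned above follows the same convention without any communication: the wrap-around face takes the start value (`bxs.back() = bxs.front();`). An optional flag such as the "overwriteBXn" parameter from the commit list would then let shearing-box runs skip the overwrite, since resetting the surface field is incorrect in that configuration.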
This will give a proper report on the status of the pipeline, similar to GitLab's.