Fix IO for LHA bot #238
Conversation
@alecandido the trigger for #227 is still not optimal, I think ... runs 14 and 15 have been triggered simultaneously, and I suspect it was me asking for 2 reviewers - but of course we need only one run, regardless of how many people I ask
@felixhekhorn true, but I believe it should do so if you select them together. You can select multiple reviewers in a single shot, without exiting the dropdown menu, but if you exit, you're filing two requests in a row (even if they are later grouped in the PR board).
yes! 🎉 |
(force-pushed from 882487b to 51d66ac)
I didn't even have to drop it manually: they were the same diff, and Git figured out on its own that it didn't have to reapply it twice.
@felixhekhorn as soon as the workflow passes again, I believe we can merge!
but I cannot reproduce this locally. Note that the obs hash is not the same ... I seem to remember having seen this before ... @giacomomagni? or @t7phy?
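One common reason a stored hash matches in one environment but not another is non-deterministic serialization before hashing. A minimal sketch of a deterministic approach (the `obs_hash` helper here is hypothetical, not the actual implementation used by the benchmarks):

```python
import hashlib
import json


def obs_hash(obs: dict) -> str:
    """Hash an observables dict deterministically.

    Serializing with sorted keys makes the hash independent of
    insertion order, which can otherwise differ between runs/machines.
    """
    payload = json.dumps(obs, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()


# Same content, different insertion order -> same hash.
a = {"Q2grid": [10.0, 100.0], "pids": [21, 1]}
b = {"pids": [21, 1], "Q2grid": [10.0, 100.0]}
assert obs_hash(a) == obs_hash(b)
```

If the stored hash was instead computed from a non-canonical serialization (e.g. an unsorted dump, or floats printed with different precision), two environments can legitimately disagree on the hash of the "same" observables.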
@felixhekhorn I acknowledge that to be an issue, but it is not #237. With the idea of keeping PRs short and atomic, this PR has already solved its related issue, and it already provides a useful result, even though not a perfect one.
Closes #237
This PR fixes:

- `mu2grid` -> `mugrid` in LHA
- `flavored_mugrid` to be serializable, and adds the relevant test
- `Q2grid` in LHA benchmark
- `Q2grid` in navigator
- `Q2grid` in docs

further comments:

- `QrefQED`: since there is no such thing any longer - what should we do? should we translate its existence to `couplings.em_running`? should we change the theory again? (meaning `banana`)
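On the serializability point: a flavored mu grid pairs each scale with a number of active flavors, and tuples do not round-trip through plain text serialization. A minimal sketch of one way to handle this (the helper names and the `(mu, nf)` layout are assumptions for illustration, not the actual interface):

```python
import json


def serialize_flavored_mugrid(mugrid):
    """Turn [(mu, nf), ...] into the JSON/YAML-friendly [[mu, nf], ...]."""
    return [[float(mu), int(nf)] for mu, nf in mugrid]


def deserialize_flavored_mugrid(raw):
    """Invert serialize_flavored_mugrid, restoring tuples."""
    return [(float(mu), int(nf)) for mu, nf in raw]


grid = [(1.65, 4), (4.92, 5), (100.0, 5)]
dumped = json.dumps(serialize_flavored_mugrid(grid))
restored = deserialize_flavored_mugrid(json.loads(dumped))
assert restored == grid
```

The explicit round-trip is exactly what "adds the relevant test" would cover: dump, reload, and check equality with the original grid.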