(branch force-pushed from cf3fb2f to 081ba24)
@jan-janssen For some reason, I'm seeing a behavior where, when I run my API integration test for the first time, it fails - it keeps pinging the /check endpoint until I kill the test (see also CI). There are no errors in the API logs. When I restart the API locally and run the test again, it passes immediately (i.e. the simulation worked the first time, and I can see it produces the cache files; they are just not picked up on the first run). I'm a bit lost at the moment - if you have any ideas or spot anything, let me know. Relevant code should be in workflow: how it is submitted by the API: The code is likely overly cautious/complex in some parts (these were attempts to get it to work that didn't pan out and can be removed again later).
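The symptom described above - the test pinging /check forever - suggests the polling loop has no upper bound. A minimal sketch of a capped polling helper (the names `poll_until`, `check`, `timeout`, and `interval` are illustrative, not the actual code in this PR):

```python
import time


def poll_until(check, timeout=30.0, interval=0.5):
    """Poll check() until it returns True or timeout seconds elapse.

    Returns True if the condition was met, False on timeout - a capped
    loop like this fails the test cleanly instead of pinging the
    endpoint forever when results are never picked up.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False
```

In the integration test, `check` would be something like a lambda that hits the /check endpoint and inspects the response; `time.monotonic()` is used rather than `time.time()` so the deadline is unaffected by clock adjustments.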
@ltalirz You were a bit too fast. From my perspective there are currently two conflicting tests: the
There are a few things to be cleaned up - I will likely not have time today to finish it, but should be able to have another look tomorrow. @jan-janssen One question: in 669a33c we introduced workers with separate subprocesses in order to work around the signal handling of pyiron. We no longer need this here, correct?
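For context, the subprocess-worker workaround mentioned above boils down to running the task in a child process so that signal handlers installed there (e.g. by pyiron) stay isolated from the main process. A minimal sketch of that pattern, not the actual 669a33c implementation (`run_in_subprocess` and `_worker` are hypothetical names):

```python
import multiprocessing as mp


def _worker(fn, args, queue):
    # Runs in the child process: signal handlers installed here
    # (e.g. by libraries such as pyiron) do not affect the parent.
    queue.put(fn(*args))


def run_in_subprocess(fn, *args):
    """Execute fn(*args) in a separate process and return its result."""
    # "fork" keeps the example simple (no pickling of fn); POSIX-only.
    ctx = mp.get_context("fork")
    queue = ctx.Queue()
    proc = ctx.Process(target=_worker, args=(fn, args, queue))
    proc.start()
    result = queue.get()
    proc.join()
    return result
```

If the flux backend handles signal isolation itself, a wrapper like this indeed becomes unnecessary.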
Yes, that works fine. As suggested in #124, it seems to be an issue with orphan processes being killed by the testing framework, so the transition to flux as the backend should solve this issue.
(branch force-pushed from ff90e90 to 4bac361)
@jan-janssen I fixed the basic logic for submission and /check; I also switched to the With that, I now see I can reproduce this locally as well. Maybe some of the logic in /check is still not correct...
This is fixed in pyiron/executorlib#913 |
(branch force-pushed from e664f86 to 0dc6ac0)