Add jampy target to workflows #6
Conversation
Walkthrough: Adds a new GitHub Actions workflow "Performance: jampy" (scheduled and manual) that reuses an existing reusable workflow; updates the Deploy Dashboard workflow to trigger after this workflow; the README gains a jampy entry in the Minifuzz + Performance table.
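The wiring described in the walkthrough can be pictured roughly as follows. This is a hedged sketch using standard GitHub Actions syntax, not the actual workflow file: the file paths, cron schedule, input name, and reusable-workflow filename are all hypothetical.

```yaml
# .github/workflows/performance-jampy.yml (hypothetical path)
name: "Performance: jampy"

on:
  schedule:
    - cron: "0 3 * * *"   # example schedule, not the real one
  workflow_dispatch: {}    # allow manual runs

jobs:
  jampy:
    # Reuse the existing reusable performance workflow;
    # the file name and input are assumed for illustration.
    uses: ./.github/workflows/reusable-performance.yml
    with:
      target: jampy
```

The Deploy Dashboard workflow would then list "Performance: jampy" under its `workflow_run` trigger so the dashboard redeploys after each run completes.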
@dakk the image seems to be private / non-existent, can you check?

Fixed now
https://github.com/FluffyLabs/jam-testing/actions/runs/23113614274/job/67135247558#step:5:33 |
I updated the Docker image with a newer version of Debian, and it seems to work now
@tomusdrw I don't get why at some point the CI hangs and then times out: https://github.com/FluffyLabs/jam-testing/actions/runs/23136052605/job/67200886358 for instance, in picofuzz - fallback it processed all the blocks and then hung
After the tests are finished we attempt to terminate the target, first by sending SIGTERM, and when that doesn't work we send SIGKILL. It seems to me that neither of these works for jampy after the 10x picofuzz runs. Since it works fine for minifuzz, my guess is that the issue is caused by repeated runs. Each run closes and re-opens the socket. I'm running a test now to confirm my suspicion, see #10
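The terminate-then-kill sequence described above can be sketched like this. It is a minimal Python illustration, not the actual harness code; the `terminate_target` helper and the timeout value are hypothetical.

```python
import signal
import subprocess

def terminate_target(proc: subprocess.Popen, term_timeout: float = 10.0) -> int:
    """Ask the target to exit with SIGTERM; escalate to SIGKILL if it hangs."""
    proc.send_signal(signal.SIGTERM)
    try:
        # Give the target a grace period to shut down cleanly.
        return proc.wait(timeout=term_timeout)
    except subprocess.TimeoutExpired:
        # Target ignored SIGTERM (e.g. blocked on a half-closed socket),
        # so force-kill it; SIGKILL cannot be caught or ignored.
        proc.kill()
        return proc.wait()

# Usage: spawn a long-running process and terminate it.
proc = subprocess.Popen(["sleep", "60"])
code = terminate_target(proc, term_timeout=2.0)
# On POSIX, wait() reports death-by-signal as a negative signal number.
```

If the target blocks SIGTERM in a signal handler (or is stuck in uninterruptible I/O), only the SIGKILL branch fires, which matches the hang-then-timeout behavior seen in the jampy runs.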