[api] add done event to signify when a single job is done so that we can hook into that #7

jcrugzz wants to merge 1 commit into dominictarr:master

sounds reasonable - but can you give me a description of what you are using this for?

@dominictarr a little wrapper around this, essentially: https://github.com/jcrugzz/atomicize/blob/master/index.js [edit]: and I thought code was better than a description 🎿

hmm, are you sure that is how it should work?
maybe this does make sense in your case, though; what does the data actually represent?

@dominictarr in my case there are ephemeral messages that can be sent in rapid fire to do work as soon as possible. The work being done is fairly well defined, so in my situation queueing pending messages, even if they have new data, can cause unnecessary work. I'd rather drop all messages while the job is executing, because there is a high probability messages will stop being received once the job is complete. The work being done in my initial use case is sshing into particular machines to execute well defined commands. What I'm reducing here is the number of times the box gets sshed into, since the data that actually changes is fetched externally, outside of the "job" message. The only data in this specific use case that ends up changing is the IP address, which would be allowed to act concurrently.

@dominictarr if there is no real issue here, I'd love a merge, as I'd like to get rid of that dirty git dependency on my module ;). I'm currently using it in production with success. Let me know if you want any more details, though.

Well, I'm not really against merging, but the use case you are describing doesn't have the intended semantics. I ask because issues end up being documentation, so other people with similar problems might end up here and see this. So even if we end up merging this, we need a discussion about what the best approach to this class of problem is.

The thing here is that you are triggering state in an external system from changes in level. If you are just writing transformed data to another database, then just overwriting the old data will probably be fine - but if you are altering the state of an external system, like, say, starting a new server process or spinning up a new machine, then you could make that idempotent by first checking whether the job has already run, and if so exiting with a success. You'll still ssh in twice, but you won't perform the actual work. If you can make your script work like that, I think it should make your system overall more predictable.
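The idempotency check suggested here could look something like the following sketch. The `hasRun`/`markRun` hooks are hypothetical names, standing in for whatever marker the real script would consult (a lock file on the box, a row in a database, etc.):

```javascript
// Hypothetical sketch of an idempotent job runner: before doing any work,
// check a marker for this job; if it is already set, exit with success.
function idempotent(hasRun, markRun, work) {
  return function (jobId, cb) {
    if (hasRun(jobId)) return cb(null, 'skipped'); // already done: succeed without re-running
    work(jobId, function (err) {
      if (err) return cb(err);
      markRun(jobId);                              // record completion for next time
      cb(null, 'ran');
    });
  };
}
```

With this shape you still ssh in for every trigger, but the expensive work happens at most once per job id.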

@dominictarr I have no problem with the back and forth :). I find it a valuable exercise. Technically I am using […]; I could have done the same by just using […]. What I wouldn't get in that case is the ability to return the messages I want when managing my own state. This change is only really necessary because I want to be able to do this (which could be considered unnecessary), but it is kind of nice to have this introspection regardless, even if my specific use of this added event is to satisfy my own OCD.

@dominictarr this is rebased if you want to consider merging ;).

I'm not yet persuaded that this is the best solution to your problem. To merge this, we will have to satisfy both your OCD and mine ;) It would help to have a more concrete idea of what your ssh scripts do and why an update is likely to happen many times in quick succession. My gut feeling here is that we may arrive at a better solution by reframing the problem.

@dominictarr main use case currently is the following: […]

During this process we are holding the initial request and doing retries on a backoff. If this app is being accessed by a different person, or by multiple people hitting different balancers, many of these messages could be coming in at once until that first command completes (which brings the app back up). This is basically a hack around people who aren't in the space to have certain awareness around their code, until we can provide that awareness. This was the solution I came up with that seemed simplest at the time, and it seems to be working well.
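The hold-the-request-and-retry-on-a-backoff flow might be sketched like so. This is a rough illustration only; the `retry` helper, the delay formula, and the attempt cap are made-up values, not the real configuration:

```javascript
// Rough sketch of "hold the request and retry on a backoff".
// Delays and attempt cap are illustrative, not real values.
function retry(attempt, opts, cb) {
  var tries = 0;
  function go() {
    attempt(function (err, result) {
      if (!err) return cb(null, result);        // success: release the held request
      if (++tries >= opts.max) return cb(err);  // give up after opts.max attempts
      setTimeout(go, opts.base * Math.pow(2, tries - 1)); // exponential backoff
    });
  }
  go();
}
```

The caller's response stays pending until either an attempt succeeds or the attempt cap is exhausted.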

I'm open to a different event name if you want, but I want to be able to hook into this to keep track of when individual jobs have been executed.