Switch the Supervisor/task process from line-based to length-prefixed #51699
Conversation
@gopidesupavan I took your example test dags. I'm not 100% sure I've removed the deadlock (such is the issue with multi-threading), but I can no longer reproduce it, whereas previously it would lock after a few seconds. In short this appears to help the problem, but all of the discussion in the issue about this makes me think that it shouldn't fix it.
I think the same way. If …
Ahhh, I know why it helps then. The main thread/outer is the async event loop. It calls into …
kaxil left a comment:
🎉
amoghrajesh left a comment:
LGTM +1.
…51885) This should have been handled in apache#51699 but was missed there as it is very infrequently hit. (cherry picked from commit b8b7f4a) Co-authored-by: Ash Berlin-Taylor <ash@apache.org>
```diff
         except Exception:
-            log.exception("Unable to decode message", line=line)
+            log.exception("Unable to decode message", body=request.body)
             continue
```
Why are we continuing? I think if we are unable to decode, we should respond?
Thanks for the rework, and for updating the triggers.
Yes, exactly.
#protm
Relates to #46426
The existing JSON Lines based approach had three major drawbacks:

1. In the case of really large lines (in the region of 10 or 20MB) the Python line buffering could _sometimes_ result in a partial read.
2. The JSON-based approach didn't have the ability to add any metadata (such as errors).
3. Not every message type/call-site waited for a response, which meant those client functions could never get told about an error.
One of the ways this line-based approach fell down was that if you suddenly tried
to run 100s of triggers at the same time, you would get an error like this:
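```
Traceback (most recent call last):
  File "/Users/ash/.local/share/uv/python/cpython-3.12.7-macos-aarch64-none/lib/python3.12/asyncio/streams.py", line 568, in readline
    line = await self.readuntil(sep)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ash/.local/share/uv/python/cpython-3.12.7-macos-aarch64-none/lib/python3.12/asyncio/streams.py", line 663, in readuntil
    raise exceptions.LimitOverrunError(
asyncio.exceptions.LimitOverrunError: Separator is found, but chunk is longer than limit
```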
The other way this caused problems was when parsing a large dag (as in one
with 20k tasks or more): the DagFileProcessor could end up getting a partial
read, which would be invalid JSON.
This changes the communications protocol in a few ways.
First off, at the Python level, the separate send and receive methods on the
client/task side have been removed and replaced with a single `send()` that
sends the request, reads the response, and raises an error if one is returned.
(But note: right now almost nothing on the supervisor side sets the error;
that will be a future PR.)
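A minimal sketch of that request/response shape, assuming JSON payloads and an `error` field in the reply (the class, method, and field names here are illustrative, not the PR's actual API; the framing is described in the next paragraph):

```python
import json
import socket
import struct


class ServerError(RuntimeError):
    """Raised when the response carries an error payload."""


class CommsSketch:
    # Illustrative sketch only; the PR's real class may differ.
    def __init__(self, sock: socket.socket) -> None:
        self.sock = sock

    def send(self, request: dict) -> dict:
        self._write(json.dumps(request).encode())
        response = json.loads(self._read())
        # One central place to surface supervisor-side failures,
        # instead of each call site checking (or forgetting to).
        if "error" in response:
            raise ServerError(response["error"])
        return response

    def _write(self, data: bytes) -> None:
        # Length-prefixed frame: 4-byte big-endian length, then payload.
        self.sock.sendall(struct.pack(">I", len(data)) + data)

    def _read(self) -> bytes:
        (n,) = struct.unpack(">I", self._read_exact(4))
        return self._read_exact(n)

    def _read_exact(self, n: int) -> bytes:
        # Loop until exactly n bytes arrive; one recv() may return less.
        buf = bytearray()
        while len(buf) < n:
            chunk = self.sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("socket closed mid-frame")
            buf.extend(chunk)
        return bytes(buf)
```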
Secondly, the JSON Lines approach has been changed from a line-based protocol
to a binary "frame" one. The protocol (which is the same for whichever side is
sending) is length-prefixed, i.e. we first send the length of the data as a
4-byte big-endian integer, followed by the data itself. This should remove the
possibility of JSON parse errors due to reading incomplete lines.
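As a concrete illustration of the wire format (the message content below is made up):

```python
import struct

# A frame is a 4-byte big-endian length prefix followed by the payload.
payload = b'{"type": "GetConnection", "id": "db"}'
frame = struct.pack(">I", len(payload)) + payload

assert frame[:4] == b"\x00\x00\x00%"            # 0x25 == 37 == len(payload)
assert struct.unpack(">I", frame[:4])[0] == 37  # receiver then reads exactly 37 bytes
```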
Finally, the last change made in this PR is to remove the "extra" requests
socket/channel. Upon closer examination of this comms path I realised that
this socket is unnecessary: since we are in 100% control of the client side, we
can make use of the bi-directional nature of `socketpair` and save file
handles. This also happens to help the `run_as_user` feature, which is
currently broken: without extra config in the `sudoers` file, `sudo` will close
all file handles other than stdin, stdout, and stderr -- so by introducing this
change we make it easier to re-add run_as_user support.
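A minimal sketch of the bi-directional `socketpair` idea, using `os.fork()` directly (the real supervisor setup is more involved; this only shows that one pair carries traffic both ways, so no second channel or extra file descriptor is needed):

```python
import os
import socket

parent_sock, child_sock = socket.socketpair()

pid = os.fork()
if pid == 0:
    # Child ("task") side: the same socket sends the request and
    # reads the reply.
    parent_sock.close()
    child_sock.sendall(b"ping")
    print("child got:", child_sock.recv(4))  # -> b'pong'
    child_sock.close()
    os._exit(0)
else:
    # Parent ("supervisor") side.
    child_sock.close()
    assert parent_sock.recv(4) == b"ping"
    parent_sock.sendall(b"pong")
    parent_sock.close()
    os.wait()
```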
In order to support this in the DagFileProcessor (given that the proc
manager uses a single selector for multiple processes), I have moved the
`on_close` callback to be part of the object we store in the `selector` object
in the supervisors. Previously we stored the "on_read" callback there; now we
store a tuple of `(on_read, on_close)`, and `on_close` is called once universally.
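A rough sketch of that registration pattern with the stdlib `selectors` module (the callback names are illustrative, not the PR's exact structures):

```python
import selectors

sel = selectors.DefaultSelector()

def register(sock, on_read, on_close):
    # Store both callbacks as the selector key's data, so the event
    # loop can call on_close exactly once when the peer hangs up.
    sel.register(sock, selectors.EVENT_READ, data=(on_read, on_close))

def run_once(timeout=1.0):
    for key, _events in sel.select(timeout):
        on_read, on_close = key.data
        buf = key.fileobj.recv(4096)
        if buf:
            on_read(buf)
        else:
            # EOF: unregister first so on_close fires only once.
            sel.unregister(key.fileobj)
            on_close()
```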
This also changes the way comms are handled from the (async) TriggerRunner
process. Previously we had a sync+async lock, but that made it possible to end
up deadlocking things. The change now is to have `send` on
`TriggerCommsDecoder` "go back" to the async event loop via `async_to_sync`, so
that only async code deals with the socket, and we can use an async lock
(rather than the hybrid sync and async lock we tried before). This seems to
help the deadlock issue; I'm not 100% sure it will remove it entirely, but
it makes it much, much harder to hit -- I've not been able to reproduce it with
this change.
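A minimal sketch of that pattern, assuming asgiref's `async_to_sync` and illustrative class/attribute names (not the PR's actual `TriggerCommsDecoder` internals):

```python
import asyncio

from asgiref.sync import async_to_sync


class TriggerCommsSketch:
    def __init__(self, reader: asyncio.StreamReader, writer: asyncio.StreamWriter):
        self.reader = reader
        self.writer = writer
        # A plain asyncio.Lock suffices, since only async code now
        # touches the socket.
        self.lock = asyncio.Lock()

    def send(self, data: bytes) -> bytes:
        # Called from sync code; async_to_sync schedules the coroutine
        # on the event loop and blocks until it completes.
        return async_to_sync(self._asend)(data)

    async def _asend(self, data: bytes) -> bytes:
        async with self.lock:  # one in-flight request at a time
            self.writer.write(data)
            await self.writer.drain()
            return await self.reader.read(4096)
```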
Fixes #50185, fixes #51213, closes #51279