diff --git a/docs/source/history.rst b/docs/source/history.rst index 0579d4420d..3936df867e 100644 --- a/docs/source/history.rst +++ b/docs/source/history.rst @@ -213,7 +213,7 @@ Features worry: you can now pass a custom ``deliver_cancel=`` argument to define your own process killing policy. (`#1104 `__) - It turns out that creating a subprocess can block the parent process - for a surprisingly long time. So `trio.open_process` now uses a worker + for a surprisingly long time. So ``trio.open_process`` now uses a worker thread to avoid blocking the event loop. (`#1109 `__) - We've added FreeBSD to the list of platforms we support and test on. (`#1118 `__) - On Linux kernels v5.3 or newer, `trio.Process.wait` now uses `the @@ -267,7 +267,7 @@ Deprecations and Removals alternatives or make a case for why some particular class should be designed to support subclassing. (`#1044 `__) - If you want to create a `trio.Process` object, you now have to call - `trio.open_process`; calling ``trio.Process()`` directly was + ``trio.open_process``; calling ``trio.Process()`` directly was deprecated in v0.12.0 and has now been removed. (`#1109 `__) - Remove ``clear`` method on `trio.Event`: it was deprecated in 0.12.0. (`#1498 `__) @@ -495,7 +495,7 @@ Deprecations and Removals deprecated. (`#878 `__) - It turns out that it's better to treat subprocess spawning as an async operation. Therefore, direct construction of `Process` objects has - been deprecated. Use `trio.open_process` instead. (`#1109 `__) + been deprecated. Use ``trio.open_process`` instead. (`#1109 `__) Miscellaneous internal changes diff --git a/docs/source/reference-io.rst b/docs/source/reference-io.rst index ef971f6ea8..81a615e039 100644 --- a/docs/source/reference-io.rst +++ b/docs/source/reference-io.rst @@ -664,22 +664,43 @@ Spawning subprocesses Trio provides support for spawning other programs as subprocesses, communicating with them via pipes, sending them signals, and waiting -for them to exit. 
The interface for doing so consists of two layers: +for them to exit. -* :func:`trio.run_process` runs a process from start to - finish and returns a :class:`~subprocess.CompletedProcess` object describing - its outputs and return value. This is what you should reach for if you - want to run a process to completion before continuing, while possibly - sending it some input or capturing its output. It is modelled after - the standard :func:`subprocess.run` with some additional features - and safer defaults. +Most of the time, this is done through our high-level interface, +`trio.run_process`. It lets you either run a process to completion +while optionally capturing the output, or else run it in a background +task and interact with it while it's running: -* `trio.open_process` starts a process in the background and returns a - `Process` object to let you interact with it. Using it requires a - bit more code than `run_process`, but exposes additional - capabilities: back-and-forth communication, processing output as - soon as it is generated, and so forth. It is modelled after the - standard library :class:`subprocess.Popen`. +.. autofunction:: trio.run_process + +.. autoclass:: trio.Process + + .. autoattribute:: returncode + + .. automethod:: wait + + .. automethod:: poll + + .. automethod:: kill + + .. automethod:: terminate + + .. automethod:: send_signal + + .. note:: :meth:`~subprocess.Popen.communicate` is not provided as a + method on :class:`~trio.Process` objects; call :func:`~trio.run_process` + normally for simple capturing, or write the loop yourself if you + have unusual needs. :meth:`~subprocess.Popen.communicate` has + quite unusual cancellation behavior in the standard library (on + some platforms it spawns a background thread which continues to + read from the child process even after the timeout has expired) + and we wanted to provide an interface with fewer surprises. 
+ +If `trio.run_process` is too limiting, we also offer a low-level API, +`trio.lowlevel.open_process`. For example, if you want to spawn a +child process that will outlive the parent process and be +orphaned, then `~trio.run_process` can't do that, but +`~trio.lowlevel.open_process` can. .. _subprocess-options: @@ -705,58 +726,6 @@ with a process, so it does not support the ``encoding``, ``errors``, options. -Running a process and waiting for it to finish -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The basic interface for running a subprocess start-to-finish is -:func:`trio.run_process`. It always waits for the subprocess to exit -before returning, so there's no need to worry about leaving a process -running by mistake after you've gone on to do other things. -:func:`~trio.run_process` is similar to the standard library -:func:`subprocess.run` function, but tries to have safer defaults: -with no options, the subprocess's input is empty rather than coming -from the user's terminal, and a failure in the subprocess will be -propagated as a :exc:`subprocess.CalledProcessError` exception. Of -course, these defaults can be changed where necessary. - -.. autofunction:: trio.run_process - - -Interacting with a process as it runs -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -If you want more control than :func:`~trio.run_process` affords, you -can use `trio.open_process` to spawn a subprocess, and then interact -with it using the `Process` interface. - -.. autofunction:: trio.open_process - -.. autoclass:: trio.Process - - .. autoattribute:: returncode - - .. automethod:: aclose - - .. automethod:: wait - - .. automethod:: poll - - .. automethod:: kill - - .. automethod:: terminate - - .. automethod:: send_signal - - .. note:: :meth:`~subprocess.Popen.communicate` is not provided as a - method on :class:`~trio.Process` objects; use :func:`~trio.run_process` - instead, or write the loop yourself if you have unusual - needs. 
:meth:`~subprocess.Popen.communicate` has quite unusual - cancellation behavior in the standard library (on some platforms it - spawns a background thread which continues to read from the child - process even after the timeout has expired) and we wanted to - provide an interface with fewer surprises. - - .. _subprocess-quoting: Quoting: more than you wanted to know diff --git a/docs/source/reference-lowlevel.rst b/docs/source/reference-lowlevel.rst index 2fe773cbb7..e0a604bebc 100644 --- a/docs/source/reference-lowlevel.rst +++ b/docs/source/reference-lowlevel.rst @@ -106,6 +106,12 @@ The tutorial has a :ref:`fully-worked example Trio's internal scheduling decisions. +Low-level process spawning +========================== + +.. autofunction:: trio.lowlevel.open_process + + Low-level I/O primitives ======================== diff --git a/newsfragments/1104.feature.rst b/newsfragments/1104.feature.rst new file mode 100644 index 0000000000..b2d4ceaa11 --- /dev/null +++ b/newsfragments/1104.feature.rst @@ -0,0 +1,3 @@ +You can now conveniently spawn a child process in a background task +and interact with it on the fly using ``process = await +nursery.start(run_process, ...)``. See `run_process` for more details. diff --git a/newsfragments/1104.removal.rst b/newsfragments/1104.removal.rst new file mode 100644 index 0000000000..f2a5a0a997 --- /dev/null +++ b/newsfragments/1104.removal.rst @@ -0,0 +1,5 @@ +``trio.open_process`` has been renamed to +`trio.lowlevel.open_process`, and the ``aclose`` method on `Process` +has been deprecated, along with ``async with process_obj``. We +recommend most users switch to the new +``nursery.start(trio.run_process, ...)`` API instead. 
diff --git a/trio/__init__.py b/trio/__init__.py index d66ffceea9..a50ec33310 100644 --- a/trio/__init__.py +++ b/trio/__init__.py @@ -70,7 +70,7 @@ from ._path import Path -from ._subprocess import Process, open_process, run_process +from ._subprocess import Process, run_process from ._ssl import SSLStream, SSLListener, NeedHandshakeError @@ -106,6 +106,15 @@ _deprecate.enable_attribute_deprecations(__name__) +__deprecated_attributes__ = { + "open_process": _deprecate.DeprecatedAttribute( + value=lowlevel.open_process, + version="0.20.0", + issue=1104, + instead="trio.lowlevel.open_process", + ), +} + # Having the public path in .__module__ attributes is important for: # - exception names in printed tracebacks # - sphinx :show-inheritance: diff --git a/trio/_subprocess.py b/trio/_subprocess.py index 876cc0d7c9..f836e45e95 100644 --- a/trio/_subprocess.py +++ b/trio/_subprocess.py @@ -16,6 +16,7 @@ create_pipe_to_child_stdin, create_pipe_from_child_output, ) +from ._deprecate import deprecated from ._util import NoPublicConstructor import trio @@ -66,22 +67,20 @@ def pidfd_open(fd: int, flags: int) -> int: class Process(AsyncResource, metaclass=NoPublicConstructor): r"""A child process. Like :class:`subprocess.Popen`, but async. - This class has no public constructor. To create a child process, use - `open_process`:: + This class has no public constructor. The most common way to get a + `Process` object is to combine `Nursery.start` with `run_process`:: - process = await trio.open_process(...) + process_object = await nursery.start(run_process, ...) - `Process` implements the `~trio.abc.AsyncResource` interface. In order to - make sure your process doesn't end up getting abandoned by mistake or - after an exception, you can use ``async with``:: + This way, `run_process` supervises the process and makes sure that it is + cleaned up properly, while optionally checking the return value, feeding + it input, and so on. - async with await trio.open_process(...) 
as process: - ... + If you need more control – for example, because you want to spawn a child + process that outlives your program – then another option is to use + `trio.lowlevel.open_process`:: - "Closing" a :class:`Process` will close any pipes to the child and wait - for it to exit; if cancelled, the child will be forcibly killed and we - will ensure it has finished exiting before allowing the cancellation to - propagate. + process_object = await trio.lowlevel.open_process(...) Attributes: args (str or list): The ``command`` passed at construction time, @@ -180,6 +179,18 @@ def returncode(self): self._close_pidfd() return result + @deprecated( + "0.20.0", + thing="using trio.Process as an async context manager", + issue=1104, + instead="run_process or nursery.start(run_process, ...)", + ) + async def __aenter__(self): + return self + + @deprecated( + "0.20.0", issue=1104, instead="run_process or nursery.start(run_process, ...)" + ) async def aclose(self): """Close any pipes we have to the process (both input and output) and wait for it to exit. @@ -281,17 +292,20 @@ async def open_process( ) -> Process: r"""Execute a child program in a new process. - After construction, you can interact with the child process by writing - data to its `~Process.stdin` stream (a `~trio.abc.SendStream`), reading - data from its `~Process.stdout` and/or `~Process.stderr` streams (both - `~trio.abc.ReceiveStream`\s), sending it signals using - `~Process.terminate`, `~Process.kill`, or `~Process.send_signal`, and - waiting for it to exit using `~Process.wait`. See `Process` for details. 
+ After construction, you can interact with the child process by writing data to its + `~trio.Process.stdin` stream (a `~trio.abc.SendStream`), reading data from its + `~trio.Process.stdout` and/or `~trio.Process.stderr` streams (both + `~trio.abc.ReceiveStream`\s), sending it signals using `~trio.Process.terminate`, + `~trio.Process.kill`, or `~trio.Process.send_signal`, and waiting for it to exit + using `~trio.Process.wait`. See `trio.Process` for details. - Each standard stream is only available if you specify that a pipe should - be created for it. For example, if you pass ``stdin=subprocess.PIPE``, you - can write to the `~Process.stdin` stream, else `~Process.stdin` will be - ``None``. + Each standard stream is only available if you specify that a pipe should be created + for it. For example, if you pass ``stdin=subprocess.PIPE``, you can write to the + `~trio.Process.stdin` stream, else `~trio.Process.stdin` will be ``None``. + + Unlike `trio.run_process`, this function doesn't do any kind of automatic + management of the child process. It's up to you to implement whatever semantics you + want. Args: command (list or str): The command to run. Typically this is a @@ -319,7 +333,7 @@ async def open_process( are also accepted. Returns: - A new `Process` object. + A new `trio.Process` object. Raises: OSError: if the process spawning fails, for example because the @@ -424,35 +438,55 @@ async def run_process( capture_stderr=False, check=True, deliver_cancel=None, + task_status=trio.TASK_STATUS_IGNORED, **options, ): - """Run ``command`` in a subprocess, wait for it to complete, and - return a :class:`subprocess.CompletedProcess` instance describing - the results. - - If cancelled, :func:`run_process` terminates the subprocess and - waits for it to exit before propagating the cancellation, like - :meth:`Process.aclose`. - - **Input:** The subprocess's standard input stream is set up to - receive the bytes provided as ``stdin``. 
Once the given input has - been fully delivered, or if none is provided, the subprocess will - receive end-of-file when reading from its standard input. - Alternatively, if you want the subprocess to read its - standard input from the same place as the parent Trio process, you - can pass ``stdin=None``. + """Run ``command`` in a subprocess and wait for it to complete. + + This function can be called in two different ways. + + One option is a direct call, like:: + + completed_process_info = await trio.run_process(...) + + In this case, it returns a :class:`subprocess.CompletedProcess` instance + describing the results. Use this if you want to treat a process like a + function call. + + The other option is to run it as a task using `Nursery.start` – the enhanced version + of `~Nursery.start_soon` that lets a task pass back a value during startup:: + + process = await nursery.start(trio.run_process, ...) + + In this case, `~Nursery.start` returns a `Process` object that you can use + to interact with the process while it's running. Use this if you want to + treat a process like a background task. + + Either way, `run_process` makes sure that the process has exited before + returning, handles cancellation, optionally checks for errors, and + provides some convenient shorthands for dealing with the child's + input/output. + + **Input:** `run_process` supports all the same ``stdin=`` arguments as + `subprocess.Popen`. In addition, if you simply want to pass in some fixed + data, you can pass a plain `bytes` object, and `run_process` will take + care of setting up a pipe, feeding in the data you gave, and then sending + end-of-file. The default is ``b""``, which means that the child will receive + an empty stdin. If you want the child to instead read from the parent's + stdin, use ``stdin=None``. **Output:** By default, any output produced by the subprocess is passed through to the standard output and error streams of the - parent Trio process. 
If you would like to capture this output and - do something with it, you can pass ``capture_stdout=True`` to - capture the subprocess's standard output, and/or - ``capture_stderr=True`` to capture its standard error. Captured - data is provided as the + parent Trio process. + + When calling `run_process` directly, you can capture the subprocess's output by + passing ``capture_stdout=True`` to capture the subprocess's standard output, and/or + ``capture_stderr=True`` to capture its standard error. Captured data is collected up + by Trio into an in-memory buffer, and then provided as the :attr:`~subprocess.CompletedProcess.stdout` and/or - :attr:`~subprocess.CompletedProcess.stderr` attributes of the - returned :class:`~subprocess.CompletedProcess` object. The value - for any stream that was not captured will be ``None``. + :attr:`~subprocess.CompletedProcess.stderr` attributes of the returned + :class:`~subprocess.CompletedProcess` object. The value for any stream that was not + captured will be ``None``. If you want to capture both stdout and stderr while keeping them separate, pass ``capture_stdout=True, capture_stderr=True``. @@ -463,6 +497,13 @@ async def run_process( output will be available in the `~subprocess.CompletedProcess.stdout` attribute. + If you're using ``await nursery.start(trio.run_process, ...)`` and want to capture + the subprocess's output for further processing, then use ``stdout=subprocess.PIPE`` + and then make sure to read the data out of the `Process.stdout` stream. If you want + to capture stderr separately, use ``stderr=subprocess.PIPE``. If you want to capture + both, but mixed together in the correct order, use ``stdout=subprocess.PIPE, + stderr=subprocess.STDOUT``. 
+ **Error checking:** If the subprocess exits with a nonzero status code, indicating failure, :func:`run_process` raises a :exc:`subprocess.CalledProcessError` exception rather than @@ -470,8 +511,28 @@ async def run_process( the :attr:`~subprocess.CalledProcessError.stdout` and :attr:`~subprocess.CalledProcessError.stderr` attributes of that exception. To disable this behavior, so that :func:`run_process` - returns normally even if the subprocess exits abnormally, pass - ``check=False``. + returns normally even if the subprocess exits abnormally, pass ``check=False``. + + Note that this can make the ``capture_stdout`` and ``capture_stderr`` + arguments useful even when starting `run_process` as a task: if you only + care about the output if the process fails, then you can enable capturing + and then read the output off of the `~subprocess.CalledProcessError`. + + **Cancellation:** If cancelled, `run_process` sends a termination + request to the subprocess, then waits for it to fully exit. The + ``deliver_cancel`` argument lets you control how the process is terminated. + + .. note:: `run_process` is intentionally similar to the standard library + `subprocess.run`, but some of the defaults are different. Specifically, we + default to: + + - ``check=True``, because `"errors should never pass silently / unless + explicitly silenced" `__. + + - ``stdin=b""``, because it produces less-confusing results if a subprocess + unexpectedly tries to read from stdin. + + To get the `subprocess.run` semantics, use ``check=False, stdin=None``. Args: command (list or str): The command to run. Typically this is a @@ -482,24 +543,27 @@ async def run_process( be a string, which will be parsed following platform-dependent :ref:`quoting rules `. 
- stdin (:obj:`bytes`, file descriptor, or None): The bytes to provide to - the subprocess on its standard input stream, or ``None`` if the - subprocess's standard input should come from the same place as - the parent Trio process's standard input. As is the case with - the :mod:`subprocess` module, you can also pass a - file descriptor or an object with a ``fileno()`` method, - in which case the subprocess's standard input will come from - that file. + stdin (:obj:`bytes`, subprocess.PIPE, file descriptor, or None): The + bytes to provide to the subprocess on its standard input stream, or + ``None`` if the subprocess's standard input should come from the + same place as the parent Trio process's standard input. As is the + case with the :mod:`subprocess` module, you can also pass a file + descriptor or an object with a ``fileno()`` method, in which case + the subprocess's standard input will come from that file. + + When starting `run_process` as a background task, you can also use + ``stdin=subprocess.PIPE``, in which case `Process.stdin` will be a + `~trio.abc.SendStream` that you can use to send data to the child. capture_stdout (bool): If true, capture the bytes that the subprocess writes to its standard output stream and return them in the - :attr:`~subprocess.CompletedProcess.stdout` attribute - of the returned :class:`~subprocess.CompletedProcess` object. + `~subprocess.CompletedProcess.stdout` attribute of the returned + `subprocess.CompletedProcess` or `subprocess.CalledProcessError`. capture_stderr (bool): If true, capture the bytes that the subprocess writes to its standard error stream and return them in the - :attr:`~subprocess.CompletedProcess.stderr` attribute - of the returned :class:`~subprocess.CompletedProcess` object. + `~subprocess.CompletedProcess.stderr` attribute of the returned + `~subprocess.CompletedProcess` or `subprocess.CalledProcessError`. check (bool): If false, don't validate that the subprocess exits successfully. 
You should be sure to check the @@ -544,8 +608,11 @@ async def my_deliver_cancel(process): ``stdout=subprocess.DEVNULL``, or file descriptors. Returns: - A :class:`subprocess.CompletedProcess` instance describing the - return code and outputs. + + When called normally – a `subprocess.CompletedProcess` instance + describing the return code and outputs. + + When called via `Nursery.start` – a `trio.Process` instance. Raises: UnicodeError: if ``stdin`` is specified as a Unicode string, rather @@ -568,12 +635,23 @@ async def my_deliver_cancel(process): if isinstance(stdin, str): raise UnicodeError("process stdin must be bytes, not str") - if stdin == subprocess.PIPE: - raise ValueError( - "stdin=subprocess.PIPE doesn't make sense since the pipe " - "is internal to run_process(); pass the actual data you " - "want to send over that pipe instead" - ) + if task_status is trio.TASK_STATUS_IGNORED: + if stdin is subprocess.PIPE: + raise ValueError( + "stdin=subprocess.PIPE is only valid with nursery.start, " + "since that's the only way to access the pipe; use nursery.start " + "or pass the data you want to write directly" + ) + if options.get("stdout") is subprocess.PIPE: + raise ValueError( + "stdout=subprocess.PIPE is only valid with nursery.start, " + "since that's the only way to access the pipe" + ) + if options.get("stderr") is subprocess.PIPE: + raise ValueError( + "stderr=subprocess.PIPE is only valid with nursery.start, " + "since that's the only way to access the pipe" + ) if isinstance(stdin, (bytes, bytearray, memoryview)): input = stdin options["stdin"] = subprocess.PIPE @@ -603,44 +681,49 @@ async def my_deliver_cancel(process): stdout_chunks = [] stderr_chunks = [] - async with await open_process(command, **options) as proc: - - async def feed_input(): - async with proc.stdin: - try: - await proc.stdin.send_all(input) - except trio.BrokenResourceError: - pass + async def feed_input(stream): + async with stream: + try: + await stream.send_all(input) + except
trio.BrokenResourceError: + pass - async def read_output(stream, chunks): - async with stream: - async for chunk in stream: - chunks.append(chunk) + async def read_output(stream, chunks): + async with stream: + async for chunk in stream: + chunks.append(chunk) - async with trio.open_nursery() as nursery: - if proc.stdin is not None: - nursery.start_soon(feed_input) - if proc.stdout is not None: + async with trio.open_nursery() as nursery: + proc = await open_process(command, **options) + try: + if input is not None: + nursery.start_soon(feed_input, proc.stdin) + proc.stdin = None + proc.stdio = None + if capture_stdout: nursery.start_soon(read_output, proc.stdout, stdout_chunks) - if proc.stderr is not None: + proc.stdout = None + proc.stdio = None + if capture_stderr: nursery.start_soon(read_output, proc.stderr, stderr_chunks) - try: + proc.stderr = None + task_status.started(proc) + await proc.wait() + except BaseException: + with trio.CancelScope(shield=True): + killer_cscope = trio.CancelScope(shield=True) + + async def killer(): + with killer_cscope: + await deliver_cancel(proc) + + nursery.start_soon(killer) await proc.wait() - except trio.Cancelled: - with trio.CancelScope(shield=True): - killer_cscope = trio.CancelScope(shield=True) - - async def killer(): - with killer_cscope: - await deliver_cancel(proc) - - nursery.start_soon(killer) - await proc.wait() - killer_cscope.cancel() - raise + killer_cscope.cancel() + raise - stdout = b"".join(stdout_chunks) if proc.stdout is not None else None - stderr = b"".join(stderr_chunks) if proc.stderr is not None else None + stdout = b"".join(stdout_chunks) if capture_stdout else None + stderr = b"".join(stderr_chunks) if capture_stderr else None if proc.returncode and check: raise subprocess.CalledProcessError( diff --git a/trio/_subprocess_platform/__init__.py b/trio/_subprocess_platform/__init__.py index 37797424ab..418808a8ac 100644 --- a/trio/_subprocess_platform/__init__.py +++ 
b/trio/_subprocess_platform/__init__.py @@ -4,6 +4,7 @@ import sys from typing import Optional, Tuple, TYPE_CHECKING +import trio from .. import _core, _subprocess from .._abc import SendStream, ReceiveStream @@ -72,15 +73,14 @@ def create_pipe_from_child_output() -> Tuple[ReceiveStream, int]: pass elif os.name == "posix": - from ..lowlevel import FdStream def create_pipe_to_child_stdin(): # noqa: F811 rfd, wfd = os.pipe() - return FdStream(wfd), rfd + return trio.lowlevel.FdStream(wfd), rfd def create_pipe_from_child_output(): # noqa: F811 rfd, wfd = os.pipe() - return FdStream(rfd), wfd + return trio.lowlevel.FdStream(rfd), wfd elif os.name == "nt": from .._windows_pipes import PipeSendStream, PipeReceiveStream diff --git a/trio/lowlevel.py b/trio/lowlevel.py index 8e6dfc5ee4..f30fdcc4bd 100644 --- a/trio/lowlevel.py +++ b/trio/lowlevel.py @@ -46,6 +46,8 @@ start_guest_run, ) +from ._subprocess import open_process + if sys.platform == "win32": # Windows symbols from ._core import ( diff --git a/trio/tests/test_subprocess.py b/trio/tests/test_subprocess.py index 7ba794a428..28783303a9 100644 --- a/trio/tests/test_subprocess.py +++ b/trio/tests/test_subprocess.py @@ -5,6 +5,7 @@ import pytest import random from functools import partial +from async_generator import asynccontextmanager from .. 
import ( _core, @@ -13,10 +14,11 @@ sleep, sleep_forever, Process, - open_process, run_process, TrioDeprecationWarning, + ClosedResourceError, ) +from ..lowlevel import open_process from .._core.tests.tutil import slow, skip_if_fbsd_pipes_broken from ..testing import wait_all_tasks_blocked @@ -47,36 +49,65 @@ def got_signal(proc, sig): return proc.returncode != 0 -async def test_basic(): - async with await open_process(EXIT_TRUE) as proc: - pass +@asynccontextmanager +async def open_process_then_kill(*args, **kwargs): + proc = await open_process(*args, **kwargs) + try: + yield proc + finally: + proc.kill() + await proc.wait() + + +@asynccontextmanager +async def run_process_in_nursery(*args, **kwargs): + async with _core.open_nursery() as nursery: + kwargs.setdefault("check", False) + proc = await nursery.start(partial(run_process, *args, **kwargs)) + yield proc + nursery.cancel_scope.cancel() + + +background_process_param = pytest.mark.parametrize( + "background_process", + [open_process_then_kill, run_process_in_nursery], + ids=["open_process", "run_process in nursery"], +) + + +@background_process_param +async def test_basic(background_process): + async with background_process(EXIT_TRUE) as proc: + await proc.wait() assert isinstance(proc, Process) assert proc._pidfd is None assert proc.returncode == 0 assert repr(proc) == f"" - async with await open_process(EXIT_FALSE) as proc: - pass + async with background_process(EXIT_FALSE) as proc: + await proc.wait() assert proc.returncode == 1 assert repr(proc) == "".format( EXIT_FALSE, "exited with status 1" ) -async def test_auto_update_returncode(): - p = await open_process(SLEEP(9999)) - assert p.returncode is None - assert "running" in repr(p) - p.kill() - p._proc.wait() - assert p.returncode is not None - assert "exited" in repr(p) - assert p._pidfd is None - assert p.returncode is not None +@background_process_param +async def test_auto_update_returncode(background_process): + async with 
background_process(SLEEP(9999)) as p: + assert p.returncode is None + assert "running" in repr(p) + p.kill() + p._proc.wait() + assert p.returncode is not None + assert "exited" in repr(p) + assert p._pidfd is None + assert p.returncode is not None -async def test_multi_wait(): - async with await open_process(SLEEP(10)) as proc: +@background_process_param +async def test_multi_wait(background_process): + async with background_process(SLEEP(10)) as proc: # Check that wait (including multi-wait) tolerates being cancelled async with _core.open_nursery() as nursery: nursery.start_soon(proc.wait) @@ -94,7 +125,21 @@ async def test_multi_wait(): proc.kill() -async def test_kill_when_context_cancelled(): +# Test for deprecated 'async with process:' semantics +async def test_async_with_basics_deprecated(recwarn): + async with await open_process( + CAT, stdin=subprocess.PIPE, stdout=subprocess.PIPE + ) as proc: + pass + assert proc.returncode is not None + with pytest.raises(ClosedResourceError): + await proc.stdin.send_all(b"x") + with pytest.raises(ClosedResourceError): + await proc.stdout.receive_some() + + +# Test for deprecated 'async with process:' semantics +async def test_kill_when_context_cancelled(recwarn): with move_on_after(100) as scope: async with await open_process(SLEEP(10)) as proc: assert proc.poll() is None @@ -114,8 +159,9 @@ async def test_kill_when_context_cancelled(): ) -async def test_pipes(): - async with await open_process( +@background_process_param +async def test_pipes(background_process): + async with background_process( COPY_STDIN_TO_STDOUT_AND_BACKWARD_TO_STDERR, stdin=subprocess.PIPE, stdout=subprocess.PIPE, @@ -144,7 +190,8 @@ async def check_output(stream, expected): assert 0 == await proc.wait() -async def test_interactive(): +@background_process_param +async def test_interactive(background_process): # Test some back-and-forth with a subprocess. 
This one works like so: # in: 32\n # out: 0000...0000\n (32 zeroes) @@ -156,7 +203,7 @@ async def test_interactive(): # out: EOF # err: EOF - async with await open_process( + async with background_process( python( "idx = 0\n" "while True:\n" @@ -209,6 +256,8 @@ async def drain_one(stream, count, digit): await proc.stdin.aclose() assert await proc.stdout.receive_some(1) == b"" assert await proc.stderr.receive_some(1) == b"" + await proc.wait() + assert proc.returncode == 0 @@ -245,6 +294,10 @@ async def test_run(): await run_process(CAT, stdin="oh no, it's text") with pytest.raises(ValueError): await run_process(CAT, stdin=subprocess.PIPE) + with pytest.raises(ValueError): + await run_process(CAT, stdout=subprocess.PIPE) + with pytest.raises(ValueError): + await run_process(CAT, stderr=subprocess.PIPE) with pytest.raises(ValueError): await run_process(CAT, capture_stdout=True, stdout=subprocess.DEVNULL) with pytest.raises(ValueError): @@ -278,8 +331,9 @@ async def test_run_with_broken_pipe(): assert result.stdout is result.stderr is None -async def test_stderr_stdout(): - async with await open_process( +@background_process_param +async def test_stderr_stdout(background_process): + async with background_process( COPY_STDIN_TO_STDOUT_AND_BACKWARD_TO_STDERR, stdin=subprocess.PIPE, stdout=subprocess.PIPE, @@ -312,19 +366,20 @@ async def test_stderr_stdout(): # this one hits the branch where stderr=STDOUT but stdout # is not redirected - async with await open_process( + async with background_process( CAT, stdin=subprocess.PIPE, stderr=subprocess.STDOUT ) as proc: assert proc.stdout is None assert proc.stderr is None await proc.stdin.aclose() + await proc.wait() assert proc.returncode == 0 if posix: try: r, w = os.pipe() - async with await open_process( + async with background_process( COPY_STDIN_TO_STDOUT_AND_BACKWARD_TO_STDERR, stdin=subprocess.PIPE, stdout=w, @@ -356,11 +411,13 @@ async def test_errors(): await open_process("ls", shell=False) -async def test_signals(): 
+@background_process_param +async def test_signals(background_process): async def test_one_signal(send_it, signum): with move_on_after(1.0) as scope: - async with await open_process(SLEEP(3600)) as proc: + async with background_process(SLEEP(3600)) as proc: send_it(proc) + await proc.wait() assert not scope.cancelled_caught if posix: assert proc.returncode == -signum @@ -381,13 +438,14 @@ async def test_one_signal(send_it, signum): @pytest.mark.skipif(not posix, reason="POSIX specific") -async def test_wait_reapable_fails(): +@background_process_param +async def test_wait_reapable_fails(background_process): old_sigchld = signal.signal(signal.SIGCHLD, signal.SIG_IGN) try: # With SIGCHLD disabled, the wait() syscall will wait for the # process to exit but then fail with ECHILD. Make sure we # support this case as the stdlib subprocess module does. - async with await open_process(SLEEP(3600)) as proc: + async with background_process(SLEEP(3600)) as proc: async with _core.open_nursery() as nursery: nursery.start_soon(proc.wait) await wait_all_tasks_blocked() @@ -480,3 +538,12 @@ async def test_warn_on_cancel_SIGKILL_escalation(autojump_clock, monkeypatch): nursery.start_soon(run_process, SLEEP(9999)) await wait_all_tasks_blocked() nursery.cancel_scope.cancel() + + +# the background_process_param exercises a lot of run_process cases, but it uses +# check=False, so let's have a test that uses check=True as well +async def test_run_process_background_fail(): + with pytest.raises(subprocess.CalledProcessError): + async with _core.open_nursery() as nursery: + proc = await nursery.start(run_process, EXIT_FALSE) + assert proc.returncode == 1