From f1c66f93192e71695c665b396201bc2b0c6ec07d Mon Sep 17 00:00:00 2001
From: Qiang Li
Date: Tue, 21 Jan 2025 18:10:47 +0100
Subject: [PATCH] update text in parallel-computing.rst

---
 content/parallel-computing.rst | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/content/parallel-computing.rst b/content/parallel-computing.rst
index 879fb81..88eab0d 100644
--- a/content/parallel-computing.rst
+++ b/content/parallel-computing.rst
@@ -429,10 +429,6 @@ Examples
       recvbuf = comm.scatter(sendbuf, root=0)
       print(f"rank {rank} received message: {recvbuf}")
 
-   MPI excels for problems which can be divided up into some sort of subdomains and
-   communication is required between the subdomains between e.g. timesteps or iterations.
-   The word-count problem is simpler than that and MPI is somewhat overkill, but in an exercise
-   below you will learn to use point-to-point communication to parallelize it.
 
    In addition to the lower-case methods :meth:`send`, :meth:`recv`, :meth:`broadcast`
    etc., there are also *upper-case* methods :meth:`Send`, :meth:`Recv`, :meth:`Broadcast`. These work with