[RFC] ARROW-15282: [C++][FlightRPC] Support non-grpc data planes #12196
NOTE: The main purpose of this PR is to collect responses and find the best approaches for supporting optimized data transfer methods other than gRPC.
This patch decouples the Flight RPC data plane from gRPC so we can leverage
optimized data transfer libraries.
The basic idea is to replace the gRPC stream with a data plane stream for
FlightData transmission in DoGet/DoPut/DoExchange. There is no big change
to the current Flight client and server implementations. A wrapper is added
to support both the gRPC stream and the data plane stream. By default, the
gRPC stream is used, which follows the current gRPC-based code path. If a
data plane is enabled (currently through an environment variable), the Flight
payload goes through the data plane stream instead. See client.cc and
server.cc to review the changes.
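For illustration, the selection between the two stream types could look roughly like the sketch below. All names here (`PayloadWriter`, `GrpcPayloadWriter`, `ShmPayloadWriter`, `MakePayloadWriter`) are hypothetical stand-ins rather than the classes in this patch; only the `FLIGHT_DATAPLANE` environment variable toggle is taken from the PR description.
```
// Minimal, self-contained sketch of the "wrapper" idea: pick a gRPC-backed
// writer or a data plane writer once, based on an environment variable.
// Class and function names are hypothetical; only the FLIGHT_DATAPLANE
// toggle mirrors this patch.
#include <cstdlib>
#include <iostream>
#include <memory>
#include <string>

struct FlightPayload {        // stand-in for arrow::flight::FlightPayload
  std::string body;
};

class PayloadWriter {         // interface the DoGet/DoPut/DoExchange code writes to
 public:
  virtual ~PayloadWriter() = default;
  virtual void WritePayload(const FlightPayload& payload) = 0;
};

class GrpcPayloadWriter : public PayloadWriter {   // current gRPC code path
 public:
  void WritePayload(const FlightPayload& payload) override {
    std::cout << "gRPC stream: " << payload.body.size() << " bytes\n";
  }
};

class ShmPayloadWriter : public PayloadWriter {    // data plane stream (e.g. shared memory)
 public:
  void WritePayload(const FlightPayload& payload) override {
    std::cout << "shm data plane: " << payload.body.size() << " bytes\n";
  }
};

// The data plane is enabled through an environment variable in this patch.
std::unique_ptr<PayloadWriter> MakePayloadWriter() {
  const char* dp = std::getenv("FLIGHT_DATAPLANE");
  if (dp != nullptr && std::string(dp) == "shm") {
    return std::make_unique<ShmPayloadWriter>();
  }
  return std::make_unique<GrpcPayloadWriter>();    // default: unchanged gRPC path
}

int main() {
  MakePayloadWriter()->WritePayload(FlightPayload{"record batch bytes"});
  return 0;
}
```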
**About data plane implementation**
- data_plane/types.{h,cc}
  Defines the client/server data plane and data plane stream interfaces.
  It is the only API exported to the other components ({client,server}.cc);
  a hedged sketch of what these interfaces might look like follows this list.
- data_plane/serialize.{h,cc}
  Serializes and deserializes FlightData manually since we bypass gRPC.
  Luckily, we already implemented the related functions to support payload
  zero-copy.
- shm.cc
  A shared memory driver to verify the data plane approach. The code may be
  a bit hard to read; it's better to focus on the data plane interface
  implementations first, before diving into details like the shared memory,
  IPC, and buffer management code.

Please note there are still many caveats in the current code; see the TODO
and XXX comments in shm.cc for details.
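As a rough illustration of the interface split described above, the interfaces exported by data_plane/types.{h,cc} might be shaped like the following. This is a sketch based only on the description in this PR; the class names, method names, and signatures are assumptions, not the actual declarations in the patch.
```
// Hypothetical, header-style sketch of the data plane interfaces; not the real types.h.
#include <memory>
#include <string>

struct FlightPayload { /* stand-in for arrow::flight::FlightPayload */ };
struct Status { bool ok = true; };   // stand-in for arrow::Status

// One logical stream carrying FlightData for a DoGet/DoPut/DoExchange call.
class DataPlaneStream {
 public:
  virtual ~DataPlaneStream() = default;
  virtual Status WritePayload(const FlightPayload& payload) = 0;  // serialize and send
  virtual Status ReadPayload(FlightPayload* out) = 0;             // receive and deserialize
  virtual Status Close() = 0;
};

// Per-process drivers; a shared memory driver (shm.cc) would implement these.
class ClientDataPlane {
 public:
  virtual ~ClientDataPlane() = default;
  virtual Status Connect(const std::string& uri) = 0;
  virtual Status OpenStream(std::unique_ptr<DataPlaneStream>* out) = 0;
};

class ServerDataPlane {
 public:
  virtual ~ServerDataPlane() = default;
  virtual Status Listen(const std::string& uri) = 0;
  virtual Status AcceptStream(std::unique_ptr<DataPlaneStream>* out) = 0;
};
```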
**To evaluate this patch**
I tested the shared memory data plane on Linux (x86, Arm) and macOS (Arm).
Build with `-DARROW_FLIGHT_DP_SHM=ON` to enable the shared memory data
plane. Set the `FLIGHT_DATAPLANE=shm` environment variable to run the unit
tests and benchmarks with the shared memory data plane enabled.
```
Build: cmake -DARROW_FLIGHT_DP_SHM=ON -DARROW_FLIGHT=ON ....
Test: FLIGHT_DATAPLANE=shm release/arrow-flight-test
Bench: FLIGHT_DATAPLANE=shm release/arrow-flight-benchmark \
-num_streams=1|2|4 -num_threads=1|2|4
```
Benchmark results (throughput, latency) on a Xeon Gold 5218.
Test case: DoGet, batch size = 128 KiB
| streams | gRPC over Unix socket | shared memory data plane |
| ------- | --------------------- | ------------------------ |
| 1 | 3324 MB/s, 35 us | 7045 MB/s, 16 us |
| 2 | 6289 MB/s, 38 us | 13311 MB/s, 17 us |
| 4 | 10037 MB/s, 44 us | 25012 MB/s, 17 us |
lidavidm left a comment:
I like the core idea here, to separate the data stream behind an interface and mostly decouple it from gRPC. I guess the difference between here and the UCX prototype is the degree of decoupling. For instance see https://github.com/lidavidm/arrow/blob/flight-ucx/cpp/src/arrow/flight/client.cc#L1587-L1597 where the "stream" gets no reference to gRPC at all and gRPC is on equal footing with the other backends, whereas here we still always make a gRPC call.
If we have a "simple" or "data only" backend, the approach here makes more sense. For something like UCX that can entirely replace gRPC, I'm not sure if it makes as much sense. Reconciling the two might be hard, unless we want to just have two entirely different extension points (with perhaps some shared interfaces for the data side of things) - that might be ok.
On the following TODO in the diff:
// TODO(yibo): getting data plane uri from env var is bad, shall we extend
// location to support two uri (control, data)? or any better approach to
grpc+tcp://localhost:1337/?data=shm or something? Or maybe something like grpc+tcp+shm://... not sure what is semantically correct.
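Purely as an illustration of the first suggestion (a data plane hint carried as a query parameter on the location URI), a client could pull the hint out of the URI with something like the sketch below. The helper name and the parsing approach are assumptions for this example, not an existing Flight API.
```
// Hypothetical helper: extract an optional data plane hint from a location URI
// such as "grpc+tcp://localhost:1337/?data=shm". Not an existing Flight API.
#include <iostream>
#include <string>

std::string ExtractDataPlane(const std::string& uri) {
  const std::string key = "data=";
  const auto query = uri.find('?');
  if (query == std::string::npos) return "";   // no query string: default to gRPC
  const auto pos = uri.find(key, query + 1);
  if (pos == std::string::npos) return "";     // no data plane hint present
  const auto start = pos + key.size();
  const auto end = uri.find('&', start);
  return uri.substr(start,
                    end == std::string::npos ? std::string::npos : end - start);
}

int main() {
  std::cout << ExtractDataPlane("grpc+tcp://localhost:1337/?data=shm") << "\n";  // shm
  std::cout << ExtractDataPlane("grpc+tcp://localhost:1337") << "\n";            // (empty)
  return 0;
}
```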
I suppose the key thing is to consider the types of backends we want to support. If they look more like UCX or WebSockets and can entirely replace gRPC, that's one thing; if they look more like shared memory (or libfabric? DPDK?) then this approach is probably easier. Or maybe there's an approach where we allow swapping gRPC out entirely, but only for the data plane methods, since it doesn't really add value to reimplement GetFlightInfo in UCX. (That argument breaks down somewhat for WebSockets, where I think using gRPC at all requires proxying or some other deployment configuration.)
Thanks @lidavidm! It now looks to me like the UCX transport is the better way. My main concern with the data plane approach is that we have to build the data transmission over raw data plane libraries ourselves. A robust, high-performance communication system is hard enough, and we'd better adopt mature frameworks like gRPC or UCX. 80% of the code in this PR is the shared memory driver, and it's still far from production quality (we need to handle cache management, flow control, and many other tricky things like race conditions). I think we can leave this PR open to see if there are other comments.
How were the tests run? I assume the Flight benchmark described at https://arrow.apache.org/blog/2019/10/13/introducing-arrow-flight/ was used.
To be fair, even with UCX there is still a fair amount of work to get Flight working on top. But I see your point - there is a lot of code that can be reused in Flight, but something very low level like shared memory without a helper library still requires a lot of work.
Do you mean how to run the Flight RPC benchmark?
Closing this PR. Prefer #12442.