Comparing approaches to User Defined Functions in Apache DataFusion using Python

timsaucer

Writing User Defined Functions in Apache DataFusion using Python

Personal Context

For a few months now I’ve been working with Apache DataFusion, a fast query engine written in Rust. In my experience, the language nearly all data scientists work in is Python. In general, data scientists use Pandas for in-memory tasks and PySpark for larger tasks that require distributed processing.

In addition to DataFusion, there is another Rust based newcomer to the DataFrame world, Polars. The latter is growing extremely fast, and it serves many of the same use cases as DataFusion. For my use cases, I’m interested in DataFusion because I want to be able to build small scale tests rapidly and then scale them up to larger distributed systems with ease. I do recommend evaluating Polars for in-memory work.

Personally, I would love a single query approach that is fast for in-memory usage and can also extend to large batch processing to exploit parallelization. I think DataFusion, coupled with Ballista or DataFusion-Ray, may provide this solution.

As I’m testing, I’m primarily limiting my work to the datafusion-python project, a wrapper around the Rust DataFusion library. This wrapper gives you the speed advantages of keeping all of the data in the Rust implementation and the ergonomics of working in Python. Personally, I would prefer to work purely in Rust, but I also recognize that since the industry works in Python we should meet people where they are.

User-Defined Functions

The focus of this post is User-Defined Functions (UDFs). The DataFusion library already provides many useful functions for DataFrame manipulation, similar to those you will find in other DataFrame libraries. You can do simple arithmetic, create substrings of columns, or find the average value across a group of rows. These cover most of the use cases you’ll need in a DataFrame.

However, there will always be times when you want a custom function. With UDFs you open up a world of possibilities in your code. Sometimes there simply isn’t an easy way to achieve your goals with the built-in functions.

In the following, I’m going to demonstrate two example use cases, based on real world problems I’ve encountered. I also want to demonstrate the “make it work, make it work well, make it work fast” approach, a motto I’ve seen thrown around in data science.

I will demonstrate three approaches to writing UDFs. In order of increasing performance, they are:

1. Writing a pure Python function to do your computation
2. Using the PyArrow libraries in Python to accelerate your processing
3. Writing a UDF in Rust and exposing it to Python

Additionally, I will demonstrate two variants of the Rust approach. The first will be nearly identical to the PyArrow library approach, to make it easier to understand how to connect the Rust code to Python. In the second version we will iterate through the input arrays ourselves, which gives even greater flexibility to the user.

Here are the two example use cases, taken from my own work but generalized.

Use Case 1: Scalar Function

I have a DataFrame and a list of tuples that I’m interested in. I want to filter the DataFrame down to only the rows whose values in certain columns match those tuples.

To give a concrete example, we will use data generated for the TPC-H benchmarks. Suppose I have a table of sales line items. There are many columns, but I am interested in three: the part key (l_partkey), supplier key (l_suppkey), and return status (l_returnflag). I only want to return a DataFrame containing specific combinations of these three values. For example, I want to know if part number 1530 from supplier 4031 was sold (not returned), so I want the specific combination of l_partkey = 1530, l_suppkey = 4031, and l_returnflag = 'N'. I have a small handful of these combinations I want to return.

Probably the most ergonomic way to do this without a UDF is to turn that list of tuples into a DataFrame itself, perform a join, and select the columns from the original DataFrame. If we were working in PySpark we would probably broadcast join the DataFrame created from the tuple list, since it is tiny. In practice, I have found that with some DataFrame libraries performing a filter rather than a join can be significantly faster. This is worth profiling for your specific use case; a sketch of the join approach follows.
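
To make that concrete, here is a minimal sketch of the join alternative. The parquet file path, the values_of_interest list, and the DataFrame variable names are illustrative assumptions, and the exact DataFrame.join signature has varied across datafusion-python releases (older versions take a join_keys tuple instead of on):

```python
import pyarrow as pa
from datafusion import SessionContext

ctx = SessionContext()
df_lineitem = ctx.read_parquet("lineitem.parquet")  # assumed TPC-H data file

values_of_interest = [(1530, 4031, "N"), (6530, 1531, "N")]

# Build a tiny DataFrame holding only the tuples we want to keep
df_keys = ctx.from_arrow(
    pa.table(
        {
            "l_partkey": pa.array([v[0] for v in values_of_interest], type=pa.int64()),
            "l_suppkey": pa.array([v[1] for v in values_of_interest], type=pa.int64()),
            "l_returnflag": pa.array([v[2] for v in values_of_interest]),
        }
    )
)

# An inner join keeps only the rows whose tuple appears in df_keys
df_of_interest = df_lineitem.join(
    df_keys, on=["l_partkey", "l_suppkey", "l_returnflag"], how="inner"
)
```
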
Use Case 2: Aggregate Function

I have a DataFrame with many values that I want to aggregate. I have already analyzed the data and determined there is a noise level below which values should not be included in my analysis. I want to compute a sum of only the values that are above my noise threshold.

This can be done fairly easily without leaning on a User Defined Aggregate Function (UDAF). You can simply filter the DataFrame and then aggregate using the built-in sum function. Here, we demonstrate doing this as a UDAF primarily as an example of how to write UDAFs. We will use the PyArrow compute approach.

Pure Python approach

The fastest way (in developer time, not run time) for me to implement the scalar problem solution was to do something along the lines of “for each row, check whether the values of interest contain that tuple”. I’ve published this as an example in the datafusion-python repository. Here is an example of how this can be done:

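Below is a condensed sketch along the lines of that example; the parquet file path and the values_of_interest list are illustrative assumptions:

```python
import pyarrow as pa
from datafusion import SessionContext, col, udf

# Illustrative (l_partkey, l_suppkey, l_returnflag) tuples we want to match
values_of_interest = [(1530, 4031, "N"), (6530, 1531, "N")]

def is_of_interest_impl(
    partkey_arr: pa.Array,
    suppkey_arr: pa.Array,
    returnflag_arr: pa.Array,
) -> pa.Array:
    results = []
    for idx, partkey in enumerate(partkey_arr):
        # Converting each Arrow value into a Python object is the expensive part
        partkey = partkey.as_py()
        suppkey = suppkey_arr[idx].as_py()
        returnflag = returnflag_arr[idx].as_py()
        results.append((partkey, suppkey, returnflag) in values_of_interest)
    return pa.array(results)

is_of_interest = udf(
    is_of_interest_impl,
    [pa.int64(), pa.int64(), pa.utf8()],
    pa.bool_(),
    "stable",
)

ctx = SessionContext()
df_lineitem = ctx.read_parquet("lineitem.parquet")  # assumed TPC-H data file
df_of_interest = df_lineitem.filter(
    is_of_interest(col("l_partkey"), col("l_suppkey"), col("l_returnflag"))
)
```
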
When working with a DataFusion UDF in Python, you define your function to take in some number of expressions. During evaluation, these will be computed into their corresponding values and passed to your UDF as PyArrow Arrays. Your UDF must return an Array with the same number of elements (rows). So the UDF example simply iterates through all of the arrays and checks whether the tuple created from these columns matches any of those we’re looking for.

I’ll repeat this because it is something that tripped me up the first time I wrote a UDF for DataFusion: DataFusion UDFs, even scalar UDFs, process an array of values at a time, not a single row. This is different from some other DataFrame libraries and may require a slight change in mentality.

Some important lines here are the ones like partkey = partkey.as_py(). When we do this, we pay a heavy cost. Instead of keeping the analysis in the Rust code, we have to take the values in the array and convert them into Python objects. In this case we end up getting two numbers and a string as real Python objects, complete with reference counting and all. We are also iterating through the array in Python rather than in native Rust. These will significantly slow down your code. Any time you cross the barrier where you convert values inside the Rust arrays into Python objects, or vice versa, you pay a heavy cost in that transformation. You will want to design your UDFs to avoid this as much as possible.

Python approach using PyArrow compute

DataFusion uses Apache Arrow as its in-memory data format. This can be seen in the way that Arrow Arrays are passed into the UDFs. We can take advantage of the fact that PyArrow, the canonical Python Arrow implementation, provides a variety of useful functions. In the example below, we only use a few of the boolean functions and the equality function. Each of these functions takes two inputs and analyzes them row by row. We shift the logic around a little since we are now operating on an entire array of values instead of checking a single row ourselves.

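Here is a sketch of this version of the UDF, under the same assumptions as the pure Python example:

```python
import pyarrow as pa
import pyarrow.compute as pc
from datafusion import udf

# Same illustrative tuples as before
values_of_interest = [(1530, 4031, "N"), (6530, 1531, "N")]

def is_of_interest_impl(
    partkey_arr: pa.Array,
    suppkey_arr: pa.Array,
    returnflag_arr: pa.Array,
) -> pa.Array:
    results = None
    for partkey, suppkey, returnflag in values_of_interest:
        # Compare each column against one value of interest, row by row
        partkey_match = pc.equal(partkey_arr, partkey)
        suppkey_match = pc.equal(suppkey_arr, suppkey)
        returnflag_match = pc.equal(returnflag_arr, returnflag)

        # A row matches this tuple only if all three columns match
        tuple_match = pc.and_(pc.and_(partkey_match, suppkey_match), returnflag_match)

        # A row belongs in the output if it matches any of the tuples
        results = tuple_match if results is None else pc.or_(results, tuple_match)
    return results

is_of_interest = udf(
    is_of_interest_impl,
    [pa.int64(), pa.int64(), pa.utf8()],
    pa.bool_(),
    "stable",
)
```
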
The idea in the code above is that we iterate through each of the values of interest, which we expect to be a small list. For each of the columns, we compare the value of interest to its corresponding array using pyarrow.compute.equal. This gives us three boolean arrays. We have a match to the tuple only if a row is true in all three arrays, so we combine them with pyarrow.compute.and_. The return value from the UDF needs to mark a row as true if it matches any of the tuples in the values of interest list, so we take the result from the current loop iteration and combine it with the running result using pyarrow.compute.or_.

From my benchmarking, switching from the approach of converting values into Python objects to this approach of using the PyArrow built-in functions leads to about a 10x speed improvement on this simple problem.

It’s worth noting that almost all of the PyArrow compute functions expect to take one or two arrays as their arguments. If you need to write a UDF that evaluates three or more columns, you’ll need to do something akin to what we’ve shown here.

Rust UDF with Python wrapper

This is the most complicated approach, but it has the potential to be the most performant. What we will do here is write a Rust function to perform our computation and then expose that function to Python. I know of two use cases where I would recommend this approach. The first is when the PyArrow compute functions are insufficient for your needs; perhaps your code is too complex, or it could be greatly simplified by pulling in some outside dependency. The second is when you have written a UDF that you’re sharing across multiple projects and have hardened the approach. You may be able to implement your function in Rust to give a speed improvement, and then every project that uses this shared UDF will benefit from those updates.

When deciding whether to use this approach, it’s worth considering how much you will actually benefit from the Rust implementation, to decide if it’s worth the additional effort to maintain and deploy the Python wheels you generate. It is certainly not necessary for every use case.

Due to the excellent work by the Apache Arrow teams, we can limit our work to only two dependencies on the Rust side, arrow-rs and pyo3. I have posted a minimal example. You’ll need maturin to build the project, and you must use release mode when building to get the expected performance.

```shell
maturin develop --release
```

When you write your UDF in Rust, you generally need to take these steps:

1. Write a function definition that takes in some number of generic Python objects.
2. Convert these objects to Arrow Arrays of the appropriate type(s).
3. Perform your computation and create a resultant Array.
4. Convert the array back into a generic Python object.

For the conversion to and from Python objects, we can take advantage of the ArrayData::from_pyarrow_bound and ArrayData::to_pyarrow functions. All that remains is to perform the computation.

We are going to demonstrate doing this computation in two ways. The first is to mimic what we’ve done in the above approach using PyArrow. In the second we demonstrate iterating through the three arrays ourselves.

In our first approach, we can expect the performance to be nearly identical to when we used the PyArrow compute functions. On the Rust side we will have slightly less overhead, but the heavy lifting portions of the code are essentially the same between this Rust implementation and the PyArrow approach above.

The reason for demonstrating this approach, even though it doesn’t provide a significant speedup over the PyArrow version, is primarily to show how to make the transition from Python to Rust with a Python wrapper. In the second implementation, you can see how we can iterate through all of the arrays ourselves.

In this first example we are hard coding the values of interest, but in the following section we demonstrate passing these in during initialization.

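Below is a minimal sketch of what the lib.rs of such a project might look like. It assumes a recent arrow-rs with its pyarrow feature enabled and a recent pyo3 with the Bound API; the crate, module, and function names are illustrative:

```rust
use arrow::array::{make_array, Array, ArrayData, BooleanArray, Int64Array, StringArray};
use arrow::compute::kernels::cmp::eq;
use arrow::compute::{and, or};
use arrow::error::ArrowError;
use arrow::pyarrow::{FromPyArrow, ToPyArrow};
use pyo3::exceptions::PyValueError;
use pyo3::prelude::*;

/// Hard coded (partkey, suppkey, returnflag) tuples, matching the Python examples
const VALUES_OF_INTEREST: [(i64, i64, &str); 2] = [(1530, 4031, "N"), (6530, 1531, "N")];

fn to_py_err(err: ArrowError) -> PyErr {
    PyValueError::new_err(err.to_string())
}

#[pyfunction]
fn is_of_interest(
    py: Python,
    partkey: &Bound<'_, PyAny>,
    suppkey: &Bound<'_, PyAny>,
    returnflag: &Bound<'_, PyAny>,
) -> PyResult<PyObject> {
    // Convert the generic Python objects into Arrow arrays
    let partkey = make_array(ArrayData::from_pyarrow_bound(partkey)?);
    let suppkey = make_array(ArrayData::from_pyarrow_bound(suppkey)?);
    let returnflag = make_array(ArrayData::from_pyarrow_bound(returnflag)?);

    // Downcast to the concrete array types we expect
    let partkey = partkey
        .as_any()
        .downcast_ref::<Int64Array>()
        .ok_or_else(|| PyValueError::new_err("expected an int64 array"))?;
    let suppkey = suppkey
        .as_any()
        .downcast_ref::<Int64Array>()
        .ok_or_else(|| PyValueError::new_err("expected an int64 array"))?;
    let returnflag = returnflag
        .as_any()
        .downcast_ref::<StringArray>()
        .ok_or_else(|| PyValueError::new_err("expected a string array"))?;

    // Mirror the PyArrow compute approach: equality per column, then and/or
    let mut results = BooleanArray::from(vec![false; partkey.len()]);
    for (pk, sk, rf) in VALUES_OF_INTEREST {
        let pk_match = eq(partkey, &Int64Array::new_scalar(pk)).map_err(to_py_err)?;
        let sk_match = eq(suppkey, &Int64Array::new_scalar(sk)).map_err(to_py_err)?;
        let rf_match = eq(returnflag, &StringArray::new_scalar(rf)).map_err(to_py_err)?;
        let tuple_match = and(&pk_match, &sk_match).map_err(to_py_err)?;
        let tuple_match = and(&tuple_match, &rf_match).map_err(to_py_err)?;
        results = or(&results, &tuple_match).map_err(to_py_err)?;
    }

    // Convert the result back into a Python object
    results.into_data().to_pyarrow(py)
}

#[pymodule]
fn tuple_filter_example(m: &Bound<'_, PyModule>) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(is_of_interest, m)?)?;
    Ok(())
}
```
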
That’s it! We’ve now got a third party Rust UDF with Python wrappers working with DataFusion’s Python bindings!

Rust UDF with initialization

Looking at the code above, you can see that it hard codes the values we’re interested in. There are many types of UDFs that don’t require any additional data before they start their computation, but ours does. Hard coding the values is sloppy, so let’s clean it up.

We want to write the function to take some additional data. A limitation of the UDFs we create is that they are expected to operate on entire arrays of data at a time, so we cannot simply pass the values of interest in as extra arguments. We can get around this by creating an initializer for our UDF. We do this by defining a Rust struct that contains the data we need and implementing two methods on this struct, new and __call__. By doing this we create a Python object that is callable, so it can be the function we provide to udf.

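Here is a sketch of such a callable struct, under the same arrow-rs and pyo3 assumptions as before; the class would be registered with m.add_class::<TupleFilter>() in the module definition:

```rust
use arrow::array::{make_array, Array, ArrayData, BooleanArray, Int64Array, StringArray};
use arrow::compute::kernels::cmp::eq;
use arrow::compute::{and, or};
use arrow::error::ArrowError;
use arrow::pyarrow::{FromPyArrow, ToPyArrow};
use pyo3::exceptions::PyValueError;
use pyo3::prelude::*;

fn to_py_err(err: ArrowError) -> PyErr {
    PyValueError::new_err(err.to_string())
}

/// A callable Python object that carries the values of interest as state
#[pyclass]
struct TupleFilter {
    values_of_interest: Vec<(i64, i64, String)>,
}

#[pymethods]
impl TupleFilter {
    #[new]
    fn new(values_of_interest: Vec<(i64, i64, String)>) -> Self {
        // pyo3 converts a Python list of tuples into this Vec for us
        Self { values_of_interest }
    }

    fn __call__(
        &self,
        py: Python,
        partkey: &Bound<'_, PyAny>,
        suppkey: &Bound<'_, PyAny>,
        returnflag: &Bound<'_, PyAny>,
    ) -> PyResult<PyObject> {
        let partkey = make_array(ArrayData::from_pyarrow_bound(partkey)?);
        let suppkey = make_array(ArrayData::from_pyarrow_bound(suppkey)?);
        let returnflag = make_array(ArrayData::from_pyarrow_bound(returnflag)?);

        let partkey = partkey
            .as_any()
            .downcast_ref::<Int64Array>()
            .ok_or_else(|| PyValueError::new_err("expected an int64 array"))?;
        let suppkey = suppkey
            .as_any()
            .downcast_ref::<Int64Array>()
            .ok_or_else(|| PyValueError::new_err("expected an int64 array"))?;
        let returnflag = returnflag
            .as_any()
            .downcast_ref::<StringArray>()
            .ok_or_else(|| PyValueError::new_err("expected a string array"))?;

        // Same computation as the hard coded version, now driven by self.values_of_interest
        let mut results = BooleanArray::from(vec![false; partkey.len()]);
        for (pk, sk, rf) in &self.values_of_interest {
            let pk_match = eq(partkey, &Int64Array::new_scalar(*pk)).map_err(to_py_err)?;
            let sk_match = eq(suppkey, &Int64Array::new_scalar(*sk)).map_err(to_py_err)?;
            let rf_match = eq(returnflag, &StringArray::new_scalar(rf)).map_err(to_py_err)?;
            let tuple_match = and(&pk_match, &sk_match).map_err(to_py_err)?;
            let tuple_match = and(&tuple_match, &rf_match).map_err(to_py_err)?;
            results = or(&results, &tuple_match).map_err(to_py_err)?;
        }
        results.into_data().to_pyarrow(py)
    }
}
```
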
When you write this, you don’t have to name your constructor new; the important part is that it is annotated with #[new]. With this approach you can provide any kind of data you need during processing. Using this initializer in Python is fairly straightforward.

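Usage might look like the following, where df_lineitem is the DataFrame from the earlier examples and the module name is again hypothetical:

```python
import pyarrow as pa
from datafusion import col, udf

# Hypothetical module name for the crate built with maturin above
from tuple_filter_example import TupleFilter

values_of_interest = [(1530, 4031, "N"), (6530, 1531, "N")]

is_of_interest = udf(
    TupleFilter(values_of_interest),      # a callable object instead of a function
    [pa.int64(), pa.int64(), pa.utf8()],
    pa.bool_(),
    "stable",
    name="is_of_interest",                # required, as described below
)

df_of_interest = df_lineitem.filter(
    is_of_interest(col("l_partkey"), col("l_suppkey"), col("l_returnflag"))
)
```
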
When you use this approach, you will need to provide a name argument to udf. This is because our class/struct does not get the __qualname__ attribute that the udf function is looking for. You can give this UDF any name you choose.

Rust UDF with direct iteration

The final version of our scalar UDF is one where we implement it in Rust and iterate through all of the arrays ourselves. If you are iterating through more than three arrays at a time, I recommend looking at izip in the itertools crate. For ease of understanding, and since we only have three arrays here, I will just explicitly create the tuples myself.

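A sketch of this direct iteration variant, with the same caveats as the earlier Rust examples:

```rust
use arrow::array::{make_array, Array, ArrayData, BooleanArray, Int64Array, StringArray};
use arrow::pyarrow::{FromPyArrow, ToPyArrow};
use pyo3::exceptions::PyValueError;
use pyo3::prelude::*;

#[pyclass]
struct TupleFilterDirect {
    values_of_interest: Vec<(i64, i64, String)>,
}

#[pymethods]
impl TupleFilterDirect {
    #[new]
    fn new(values_of_interest: Vec<(i64, i64, String)>) -> Self {
        Self { values_of_interest }
    }

    fn __call__(
        &self,
        py: Python,
        partkey: &Bound<'_, PyAny>,
        suppkey: &Bound<'_, PyAny>,
        returnflag: &Bound<'_, PyAny>,
    ) -> PyResult<PyObject> {
        let partkey = make_array(ArrayData::from_pyarrow_bound(partkey)?);
        let suppkey = make_array(ArrayData::from_pyarrow_bound(suppkey)?);
        let returnflag = make_array(ArrayData::from_pyarrow_bound(returnflag)?);

        let partkey = partkey
            .as_any()
            .downcast_ref::<Int64Array>()
            .ok_or_else(|| PyValueError::new_err("expected an int64 array"))?;
        let suppkey = suppkey
            .as_any()
            .downcast_ref::<Int64Array>()
            .ok_or_else(|| PyValueError::new_err("expected an int64 array"))?;
        let returnflag = returnflag
            .as_any()
            .downcast_ref::<StringArray>()
            .ok_or_else(|| PyValueError::new_err("expected a string array"))?;

        // Borrow each String as a &str once, so the search below allocates nothing
        let values_of_interest: Vec<(i64, i64, &str)> = self
            .values_of_interest
            .iter()
            .map(|(pk, sk, rf)| (*pk, *sk, rf.as_str()))
            .collect();

        // Two zips give ((pk, sk), rf); the map flattens it into one tuple
        let results: BooleanArray = partkey
            .iter()
            .zip(suppkey.iter())
            .zip(returnflag.iter())
            .map(|((pk, sk), rf)| match (pk, sk, rf) {
                (Some(pk), Some(sk), Some(rf)) => {
                    Some(values_of_interest.contains(&(pk, sk, rf)))
                }
                _ => Some(false),
            })
            .collect();

        results.into_data().to_pyarrow(py)
    }
}
```
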
We convert the values_of_interest into a vector of borrowed types so that we can do a fast search without allocating additional memory. The other option is to convert the returnflag values into Strings, but that memory allocation is unnecessary. After that, we use two zip operations so that we can iterate over all three columns in a single pass. Since each zip returns a tuple of two elements, a quick map turns them into the tuple format we need. Also, StringArray uses a slightly different buffer layout, so it is treated slightly differently from the other arrays.

User Defined Aggregate Function

Writing a user defined aggregate function or user defined window function is slightly more complex than a scalar function. This is because we must accumulate values, and there is no guarantee that one batch will contain all the values we are aggregating over. For this we need to define an Accumulator, which must do a few things:

1. Process a batch and compute an internal state
2. Share the state so that we can combine multiple batches
3. Merge the results across multiple batches
4. Return the final result

In the example below, we’re going to look at customer orders, and we want to know, per customer ID, how much they have ordered in total. We want to ignore small orders, which we define as anything under 5000.

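Here is a sketch of such an accumulator, modeled on the UDAF example in the datafusion-python repository; the df_orders DataFrame and the TPC-H style column names are assumptions:

```python
import pyarrow as pa
import pyarrow.compute as pc
from datafusion import Accumulator, col, udaf

NOISE_THRESHOLD = 5000.0

class ThresholdSum(Accumulator):
    """Sums only the values that are above the noise threshold."""

    def __init__(self) -> None:
        self._sum = pa.scalar(0.0)

    def update(self, values: pa.Array) -> None:
        # Drop values at or below the threshold before adding to our state
        above = values.filter(pc.greater(values, NOISE_THRESHOLD))
        total = pc.sum(above).as_py()
        if total is not None:
            self._sum = pa.scalar(self._sum.as_py() + total)

    def state(self) -> pa.Array:
        return pa.array([self._sum.as_py()])

    def merge(self, states: pa.Array) -> None:
        # states is an array of the values returned by state() on other batches
        self._sum = pa.scalar(self._sum.as_py() + pc.sum(states).as_py())

    def evaluate(self) -> pa.Scalar:
        return self._sum

threshold_sum = udaf(
    ThresholdSum,
    pa.float64(),    # input type
    pa.float64(),    # return type
    [pa.float64()],  # state type(s)
    "stable",
)

df_totals = df_orders.aggregate(
    [col("o_custkey")], [threshold_sum(col("o_totalprice"))]
)
```
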
Since we are doing a sum, we can keep a single value as our internal state. When update() is called we process a single array and update the internal state, which we share via the state() function. Across multiple batches we merge() these states. It is important to note that the states passed to the merge() function are an array of the values returned from state(). It is entirely possible for the merge function to be significantly different from the update function, though in our example they are very similar.

One example of a user defined aggregate function where the update() and merge() operations differ is computing an average. In update() we would maintain a state containing both a sum and a count, and state() would return a list of these two values. merge() would add up the sums and counts from the other states, and evaluate() would produce the final result by dividing the total sum by the total count.

User Defined Window Functions

Writing a user defined window function is slightly more complex than an aggregate function due to the variety of ways that window functions are called. I recommend reviewing the online documentation for a description of which functions need to be implemented. The details of how to implement these generally follow the same patterns as described above for aggregate functions.

Performance Comparison

For the scalar functions above, we performed a timing evaluation, repeating the operation 100 times. For this simple example, the relative results were as follows.

As expected, the conversion to Python objects is by far the worst performer. As soon as we switch to functions that keep the data entirely on the native (Rust or C/C++) side, we see a nearly 10x speed improvement. Then, as we increase our complexity from using the PyArrow compute functions to implementing the UDF in Rust, we see incremental improvements. Our fastest approach, iterating through the arrays ourselves in Rust, operates nearly 10% faster than the PyArrow compute approach.

Final Thoughts and Recommendations

For anyone who is curious about DataFusion, I highly recommend giving it a try. This post was designed to make it easier for new users of the Python bindings to work with User Defined Functions by giving a few examples of how one might implement them.

When it comes to designing UDFs, I strongly recommend seeing if you can write your UDF using PyArrow compute functions rather than pure Python objects. As shown in the scalar example above, you can achieve a 10x speedup by using the PyArrow functions. If you must do something that isn’t well represented by the PyArrow compute functions, then I would consider writing a Rust based UDF in the manner shown above.

Lastly, the Apache Arrow and DataFusion community is an active group of very helpful people working to make a great tool. If you want to get involved, please take a look at the online documentation and jump in to help with one of the open issues.