Currently we use `read_record_batch_from_apache_parquet_file()` and `write_record_batch_to_apache_parquet_file()` to handle spilled data in the uncompressed data manager. Since we now have the `DataFolder` type which is responsible for I/O, we should use this type for spilled data as well.
One option is to create a separate database schema for uncompressed data and, within that schema, a Delta Lake table for each time series table that can be used to handle its spilled uncompressed data.
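As a rough illustration of the proposed direction, the sketch below models routing spilled batches through a `DataFolder`-style interface keyed by time series table, with one backing store per table. Everything here is hypothetical: the trait, the `InMemoryDataFolder` stand-in, and the simplified `RecordBatch` type are assumptions for this sketch, not the actual `DataFolder` API; a real implementation would write Apache Arrow record batches to a Delta Lake table in a dedicated schema for uncompressed data.

```rust
use std::collections::HashMap;

/// Simplified stand-in for Apache Arrow's RecordBatch (the real type
/// comes from the arrow crate); used here only to keep the sketch
/// self-contained.
#[derive(Clone, Debug, PartialEq)]
struct RecordBatch(Vec<u64>);

/// Hypothetical interface sketching how spilled data could go through a
/// DataFolder-style type instead of ad hoc Apache Parquet file I/O.
trait SpillStore {
    /// Persist a spilled batch for the given time series table.
    fn write_spilled(&mut self, table: &str, batch: RecordBatch);
    /// Read back and drain all spilled batches for the given table.
    fn read_spilled(&mut self, table: &str) -> Vec<RecordBatch>;
}

/// In-memory sketch; a real implementation would target one Delta Lake
/// table per time series table in a separate uncompressed-data schema.
#[derive(Default)]
struct InMemoryDataFolder {
    spilled: HashMap<String, Vec<RecordBatch>>,
}

impl SpillStore for InMemoryDataFolder {
    fn write_spilled(&mut self, table: &str, batch: RecordBatch) {
        self.spilled.entry(table.to_owned()).or_default().push(batch);
    }

    fn read_spilled(&mut self, table: &str) -> Vec<RecordBatch> {
        self.spilled.remove(table).unwrap_or_default()
    }
}

fn main() {
    let mut folder = InMemoryDataFolder::default();

    // Spill two batches for one table and one batch for another.
    folder.write_spilled("wind_turbine", RecordBatch(vec![1, 2, 3]));
    folder.write_spilled("wind_turbine", RecordBatch(vec![4, 5]));
    folder.write_spilled("solar_panel", RecordBatch(vec![9]));

    // Spilled batches are recovered per time series table.
    let batches = folder.read_spilled("wind_turbine");
    assert_eq!(batches.len(), 2);
    assert_eq!(batches[0], RecordBatch(vec![1, 2, 3]));

    // Reading drains the spill store for that table.
    assert!(folder.read_spilled("wind_turbine").is_empty());
    assert_eq!(folder.read_spilled("solar_panel").len(), 1);

    println!("spill round-trip ok");
}
```

The per-table keying mirrors the idea of one Delta Lake table per time series table, so the spill path and the compressed path would share the same I/O abstraction.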