Currently, Leopard-RS supports both 8-bit and 16-bit finite fields. The leo_encode() and leo_decode() functions choose which field to use based on the parameters original_count and recovery_count. (Essentially, the 8-bit field is used only if original_count + recovery_count < 256, where each value is first rounded up to a power of two.)
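For reference, here is one reading of that selection rule as a standalone sketch; next_pow2 and uses_ff8 are illustrative names, not library code:

```c
#include <stdbool.h>

// Round x up to the next power of two (illustrative helper).
static unsigned next_pow2(unsigned x) {
    unsigned p = 1;
    while (p < x) p <<= 1;
    return p;
}

// One reading of the current rule: round each count up to a power
// of two, then use the 8-bit field only if the sum stays below 256.
static bool uses_ff8(unsigned original_count, unsigned recovery_count) {
    return next_pow2(original_count) + next_pow2(recovery_count) < 256;
}
```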
I noticed however that the 16-bit algorithm is around 20–30% faster than the 8-bit algorithm even for small buffer counts. This makes sense, since the algorithms are fundamentally the same, but the 16-bit algorithm processes two bytes at a time, so it could potentially be up to twice as fast. The downside of the 16-bit algorithm is that it uses larger lookup tables, which is less cache friendly, but apparently it's still faster on net, which makes me question whether it's useful to use the 8-bit algorithm at all.
Based on this, I'd like to propose two changes:
Change 1: Allow the library user to explicitly choose between the 8-bit and 16-bit fields by adding a parameter to leo_encode() and leo_decode(). Something like:

```c
enum LeopardFiniteFieldT { LEO_FF_AUTO, LEO_FF8, LEO_FF16 };
```

...where LEO_FF_AUTO selects the field size based on the number of buffers, as is done today; LEO_FF8 forces the use of the 8-bit field (returning an error if the total number of buffers exceeds 256); and LEO_FF16 forces the use of the 16-bit field (even when the total number of buffers is below 256).
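To make the intended semantics concrete, here is a minimal sketch of how that parameter could drive field selection. select_field, LEO_OK, and LEO_ERR_TOO_MANY_BUFFERS are hypothetical names, and the AUTO branch simplifies away the power-of-two rounding:

```c
enum LeopardFiniteFieldT { LEO_FF_AUTO, LEO_FF8, LEO_FF16 };

// Hypothetical result codes for the sketch below.
enum { LEO_OK = 0, LEO_ERR_TOO_MANY_BUFFERS = -1 };

// Resolve the requested field against the total buffer count.
static int select_field(enum LeopardFiniteFieldT requested,
                        unsigned total_buffers,
                        enum LeopardFiniteFieldT *chosen) {
    switch (requested) {
    case LEO_FF8:
        // Forcing FF8 fails if the buffer count does not fit the field.
        if (total_buffers > 256) return LEO_ERR_TOO_MANY_BUFFERS;
        *chosen = LEO_FF8;
        return LEO_OK;
    case LEO_FF16:
        // FF16 is always allowed, even for small counts.
        *chosen = LEO_FF16;
        return LEO_OK;
    case LEO_FF_AUTO:
    default:
        // Today's behavior (simplified; the real rule rounds counts
        // up to powers of two first).
        *chosen = (total_buffers < 256) ? LEO_FF8 : LEO_FF16;
        return LEO_OK;
    }
}
```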
Change 2: Allow disabling one or both of the fields at compile-time. This allows applications that only ever intend to use one field or the other to omit the unused part of the library.
This might seem pedantic, but it actually makes a pretty significant difference in library size. For example, using the bench_leopard binary:

```
456K bench_leopard8
508K bench_leopard16
708K bench_leopard
```
This change would be really simple. Currently, LeopardCommon.h contains:

```c
// Enable 8-bit or 16-bit fields
#define LEO_HAS_FF8
#define LEO_HAS_FF16
```

I would propose changing this to:
```c
// Enable 8-bit or 16-bit fields
#ifndef NO_LEO_HAS_FF8
#define LEO_HAS_FF8
#endif
#ifndef NO_LEO_HAS_FF16
#define LEO_HAS_FF16
#endif
```

That way, the library defaults to including both fields, but either one can be excluded by passing -DNO_LEO_HAS_FF8 or -DNO_LEO_HAS_FF16 as compiler arguments.
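As a standalone illustration of the guard pattern (the leo_has_* flags below are demonstration-only, not part of the proposal):

```c
// The proposed guards, reproduced in isolation: each field defaults
// to enabled unless the corresponding NO_ macro is defined.
#ifndef NO_LEO_HAS_FF8
#define LEO_HAS_FF8
#endif
#ifndef NO_LEO_HAS_FF16
#define LEO_HAS_FF16
#endif

// Demonstration-only flags reflecting which fields were compiled in.
#ifdef LEO_HAS_FF8
static const int leo_has_ff8 = 1;
#else
static const int leo_has_ff8 = 0;
#endif
#ifdef LEO_HAS_FF16
static const int leo_has_ff16 = 1;
#else
static const int leo_has_ff16 = 0;
#endif
```

Building with, e.g., cc -DNO_LEO_HAS_FF8 would then compile out the 8-bit field entirely.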
I'd be happy to propose patches for both of these changes, if maintainers are willing to merge them.