Benchmark results #89

@axic

I have run the benchmarks (including #88) and the results could be better. I am not sure whether the suite is representative at all of commonly used methods, though.

Where bn.js failed (percentages are relative to the fastest library in each benchmark; see the sketch after this list):

  • toString(10)
  • toString(16)
  • mul 35% slower
  • mul-jumbo 96% slower (this might be a broken test, but two others were at least twice as fast as bn.js)
  • sqr 26% slower
  • div 63% slower
  • mod 57% slower
  • pow 80% slower
  • gcd 27% slower
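For reference, each percentage is simply the gap between bn.js and the fastest competitor in that benchmark. A minimal sketch of the arithmetic, with figures taken from the output below:

```js
// Relative slowdown of bn.js versus the fastest competitor in a benchmark,
// computed from the ops/sec figures in the output below.
function slowdown(bnOpsPerSec, fastestOpsPerSec) {
  return Math.round((1 - bnOpsPerSec / fastestOpsPerSec) * 100);
}

console.log(slowdown(1492842, 2287125)); // mul vs sjcl#mul   -> 35
console.log(slowdown(253720, 677293));   // div vs yaffle#div -> 63
console.log(slowdown(3235, 15680));      // pow vs bignum#pow -> 79 (~80)
```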

Regarding toString(10): a dedicated base-10 conversion path might speed things up (sketched below).
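To make the idea concrete, here is a minimal sketch (not bn.js code; BigInt is used only to keep it self-contained): instead of peeling off one decimal digit per division, divide by a large power of 10 and emit several digits per step. A limb-based library would apply the same chunking on top of its own single-limb division routine.

```js
// Sketch of a dedicated base-10 path: divide by 10^9 so each division
// step yields nine decimal digits instead of one.
const CHUNK = 1000000000n; // 10^9

function toStringBase10(n) { // n: non-negative BigInt
  if (n === 0n) return '0';
  const parts = [];
  while (n > 0n) {
    const rem = n % CHUNK;
    n /= CHUNK;
    // Inner chunks need zero padding; the most significant one does not.
    parts.push(n > 0n ? rem.toString().padStart(9, '0') : rem.toString());
  }
  return parts.reverse().join('');
}

console.log(toStringBase10(12345678901234567890n)); // 12345678901234567890
```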

Re: toString(16): because 26 bits are stored per word, hex digits are not aligned to word boundaries, which makes conversion a lengthier process. Storing 24 or 32 bits per word would make this significantly faster. What is the actual reason behind going for 26 bits?
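To illustrate the alignment issue (a sketch of the general technique only, not bn.js's internals): with 24-bit words every word maps to exactly six hex digits, so serialization is just padding and concatenation; with 26-bit words the hex digits straddle word boundaries, so each word first has to be merged with leftover bits from its neighbour.

```js
// Sketch only: serializing limbs to hex with nibble-aligned 24-bit words
// versus 26-bit words. Limbs are least-significant first, plain numbers.

// 24-bit limbs: every limb is exactly six hex digits, so serialization is
// padding, reversing and joining.
function hex24(words) {
  const s = words
    .map((w) => w.toString(16).padStart(6, '0'))
    .reverse()
    .join('')
    .replace(/^0+/, '');
  return s === '' ? '0' : s;
}

// 26-bit limbs: hex digits straddle limb boundaries, so every limb has to
// be merged with the leftover bits of the previous one before nibbles can
// be emitted.
function hex26(words) {
  const digits = [];
  let acc = 0;  // leftover bits, always fewer than 4, so << stays safe
  let bits = 0;
  for (const w of words) {
    acc |= w << bits; // place the new limb above the leftover bits
    bits += 26;
    while (bits >= 4) {
      digits.push((acc & 0xf).toString(16));
      acc >>>= 4;
      bits -= 4;
    }
  }
  if (acc !== 0) digits.push(acc.toString(16));
  const s = digits.reverse().join('').replace(/^0+/, '');
  return s === '' ? '0' : s;
}

// Same value split into 24-bit and 26-bit limbs gives the same string,
// but hex26 does strictly more bit shuffling per limb.
const split = (v, width) => {
  const out = [];
  const mask = (1n << BigInt(width)) - 1n;
  while (v > 0n) { out.push(Number(v & mask)); v >>= BigInt(width); }
  return out.length ? out : [0];
};
const n = 0xfedcba987654321n;
console.log(hex24(split(n, 24))); // fedcba987654321
console.log(hex26(split(n, 26))); // fedcba987654321
```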

The full output:

bash-3.2$ node index.js 
Benchmarking: create-10
bn.js#create-10 x 917,636 ops/sec ±2.74% (8 runs sampled)
bignum#create-10 x 224,568 ops/sec ±2.60% (9 runs sampled)
bigi#create-10 x 571,307 ops/sec ±11.43% (8 runs sampled)
yaffle#create-10 x 912,360 ops/sec ±4.76% (9 runs sampled)
silentmatt-biginteger#create-10 x 73,199 ops/sec ±1.61% (8 runs sampled)
bignumber#create-10 x 479,789 ops/sec ±1.88% (9 runs sampled)
------------------------
Fastest is bn.js#create-10,yaffle#create-10
========================
Benchmarking: create-hex
bn.js#create-hex x 1,178,547 ops/sec ±11.80% (9 runs sampled)
bignum#create-hex x 189,415 ops/sec ±13.69% (8 runs sampled)
bigi#create-hex x 548,509 ops/sec ±6.54% (9 runs sampled)
sjcl#create-hex x 743,918 ops/sec ±3.52% (8 runs sampled)
yaffle#create-hex x 874,148 ops/sec ±5.63% (9 runs sampled)
silentmatt-biginteger#create-hex x 19,237 ops/sec ±4.85% (8 runs sampled)
bignumber#create-hex x 17,456 ops/sec ±1.67% (8 runs sampled)
------------------------
Fastest is bn.js#create-hex
========================
Benchmarking: toString-10
bn.js#toString-10 x 493,331 ops/sec ±2.14% (9 runs sampled)
bignum#toString-10 x 377,052 ops/sec ±3.66% (9 runs sampled)
bigi#toString-10 x 56,522 ops/sec ±1.85% (8 runs sampled)
yaffle#toString-10 x 1,160,115 ops/sec ±5.15% (9 runs sampled)
silentmatt-biginteger#toString-10 x 3,089,024 ops/sec ±4.15% (9 runs sampled)
bignumber#toString-10 x 22,371 ops/sec ±4.83% (9 runs sampled)
------------------------
Fastest is silentmatt-biginteger#toString-10
========================
Benchmarking: toString-hex
bn.js#toString-hex x 373,833 ops/sec ±7.83% (9 runs sampled)
bignum#toString-hex x 2,053,779 ops/sec ±3.73% (9 runs sampled)
bigi#toString-hex x 597,184 ops/sec ±2.52% (9 runs sampled)
sjcl#toString-hex x 380,253 ops/sec ±6.30% (9 runs sampled)
yaffle#toString-hex x 336,258 ops/sec ±4.74% (9 runs sampled)
silentmatt-biginteger#toString-hex x 9,564 ops/sec ±5.16% (9 runs sampled)
bignumber#toString-hex x 24,026 ops/sec ±5.71% (9 runs sampled)
------------------------
Fastest is bignum#toString-hex
========================
Benchmarking: add
bn.js#add x 7,486,633 ops/sec ±2.01% (8 runs sampled)
bignum#add x 113,935 ops/sec ±2.02% (8 runs sampled)
bigi#add x 1,994,830 ops/sec ±7.26% (9 runs sampled)
sjcl#add x 3,863,970 ops/sec ±4.71% (9 runs sampled)
yaffle#add x 5,122,197 ops/sec ±2.97% (9 runs sampled)
silentmatt-biginteger#add x 1,328,770 ops/sec ±5.26% (9 runs sampled)
bignumber#add x 1,475,561 ops/sec ±5.36% (8 runs sampled)
------------------------
Fastest is bn.js#add
========================
Benchmarking: sub
bn.js#sub x 5,438,894 ops/sec ±4.31% (9 runs sampled)
bignum#sub x 112,649 ops/sec ±4.04% (9 runs sampled)
bigi#sub x 1,612,451 ops/sec ±16.89% (8 runs sampled)
sjcl#sub x 3,326,135 ops/sec ±17.04% (8 runs sampled)
yaffle#sub x 4,654,711 ops/sec ±4.51% (8 runs sampled)
silentmatt-biginteger#sub x 3,256,610 ops/sec ±6.52% (9 runs sampled)
bignumber#sub: 
------------------------
Fastest is bn.js#sub
========================
Benchmarking: mul
bn.js#mul x 1,492,842 ops/sec ±3.50% (9 runs sampled)
bn.js[FFT]#mul x 114,362 ops/sec ±4.76% (9 runs sampled)
bignum#mul x 58,624 ops/sec ±26.16% (7 runs sampled)
bigi#mul x 390,251 ops/sec ±7.24% (8 runs sampled)
sjcl#mul x 2,287,125 ops/sec ±3.93% (9 runs sampled)
yaffle#mul x 1,409,295 ops/sec ±6.91% (9 runs sampled)
silentmatt-biginteger#mul x 514,773 ops/sec ±4.03% (8 runs sampled)
bignumber#mul x 556,803 ops/sec ±1.41% (9 runs sampled)
------------------------
Fastest is sjcl#mul
========================
Benchmarking: mul-jumbo
bn.js#mul-jumbo x 1,190 ops/sec ±4.87% (9 runs sampled)
bn.js[FFT]#mul-jumbo x 3,026 ops/sec ±3.93% (9 runs sampled)
bignum#mul-jumbo x 28,532 ops/sec ±7.91% (9 runs sampled)
bigi#mul-jumbo x 1,157 ops/sec ±5.46% (8 runs sampled)
sjcl#mul-jumbo x 2,841 ops/sec ±3.68% (9 runs sampled)
yaffle#mul-jumbo x 1,552 ops/sec ±2.28% (9 runs sampled)
silentmatt-biginteger#mul-jumbo x 599 ops/sec ±6.48% (9 runs sampled)
bignumber#mul-jumbo x 593 ops/sec ±7.00% (8 runs sampled)
------------------------
Fastest is bignum#mul-jumbo
========================
Benchmarking: sqr
bn.js#sqr x 1,328,815 ops/sec ±5.63% (8 runs sampled)
bignum#sqr x 59,338 ops/sec ±28.25% (7 runs sampled)
bigi#sqr x 249,727 ops/sec ±11.39% (8 runs sampled)
sjcl#sqr x 1,810,258 ops/sec ±3.90% (8 runs sampled)
yaffle#sqr x 1,364,301 ops/sec ±5.63% (8 runs sampled)
silentmatt-biginteger#sqr x 318,959 ops/sec ±5.51% (9 runs sampled)
bignumber#sqr x 522,413 ops/sec ±3.23% (8 runs sampled)
------------------------
Fastest is sjcl#sqr
========================
Benchmarking: div
bn.js#div x 253,720 ops/sec ±6.84% (8 runs sampled)
bignum#div x 35,872 ops/sec ±64.38% (6 runs sampled)
bigi#div x 121,087 ops/sec ±6.19% (7 runs sampled)
yaffle#div x 677,293 ops/sec ±3.85% (9 runs sampled)
silentmatt-biginteger#div x 23,874 ops/sec ±3.83% (9 runs sampled)
bignumber#div x 38,726 ops/sec ±3.88% (8 runs sampled)
------------------------
Fastest is yaffle#div
========================
Benchmarking: mod
bn.js#mod x 213,368 ops/sec ±18.17% (9 runs sampled)
bignum#mod x 52,249 ops/sec ±22.77% (7 runs sampled)
bigi#mod x 97,774 ops/sec ±6.44% (9 runs sampled)
yaffle#mod x 500,369 ops/sec ±6.86% (8 runs sampled)
silentmatt-biginteger#mod x 17,479 ops/sec ±6.18% (8 runs sampled)
------------------------
Fastest is yaffle#mod
========================
Benchmarking: mul-mod k256
bn.js#mul-mod k256 x 821,985 ops/sec ±3.56% (8 runs sampled)
sjcl#mul-mod k256 x 349,720 ops/sec ±5.63% (7 runs sampled)
------------------------
Fastest is bn.js#mul-mod k256
========================
Benchmarking: pow k256
bn.js#pow k256 x 3,235 ops/sec ±9.99% (9 runs sampled)
bignum#pow k256 x 15,680 ops/sec ±39.71% (9 runs sampled)
------------------------
Fastest is bignum#pow k256
========================
Benchmarking: invm k256
bn.js#invm k256 x 5,544 ops/sec ±3.72% (8 runs sampled)
sjcl#invm k256 x 3,930 ops/sec ±7.69% (8 runs sampled)
------------------------
Fastest is bn.js#invm k256
========================
Benchmarking: gcd
bn.js#gcd x 21,713 ops/sec ±5.84% (9 runs sampled)
bigi#gcd x 29,660 ops/sec ±4.42% (8 runs sampled)
------------------------
Fastest is bigi#gcd
========================
Benchmarking: egcd
bn.js#egcd x 5,435 ops/sec ±23.47% (8 runs sampled)
------------------------
Fastest is bn.js#egcd
========================
Benchmarking: bitLength
bn.js#bitLength x 19,521,537 ops/sec ±44.08% (8 runs sampled)
------------------------
Fastest is bn.js#bitLength
========================
