Add k-way merge and heapsort benchmarks #65
A couple results so far:
Force-pushed from bb02e01 to 48e146c
@konsumlamm Do you have any objections to adding these few benchmarks, or can I just go ahead?
Wow, I just realized I did the k-way merge benchmark all wrong. I'll get that fixed!
konsumlamm left a comment
I have absolutely no objections to adding a few benchmarks, but I do have some style nitpicks.
```haskell
heapSortRandoms n gen = heapSort $ take n (randoms gen)

heapSort :: Ord a => [a] -> [a]
heapSort xs = [b | (b, ~()) <- P.toAscList . P.fromList . map (\a -> (a, ())) $ xs]
```
Why use a list comprehension here instead of another map?
I guess it shouldn't matter, because an outer map should fuse with toAscList (I think the latter has a rewrite rule) and avoid producing thunks to select the first components. But if that doesn't happen, things won't look great because rnf for lists doesn't fuse (maybe the assumption is that if you rnf something then you'll use it again, but that's not a great assumption).
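For concreteness, the map-based spelling under discussion might look like the sketch below. This assumes `P` is a qualified import of pqueue's `Data.PQueue.Prio.Min`; whether `map fst` actually fuses with `toAscList` depends on the rewrite rules in the installed version.

```haskell
import qualified Data.PQueue.Prio.Min as P

-- Heapsort written with map fst instead of a list comprehension:
-- toAscList yields (key, value) pairs in ascending key order,
-- and fst selects the key back out of each (a, ()) pair.
heapSort :: Ord a => [a] -> [a]
heapSort = map fst . P.toAscList . P.fromList . map (\a -> (a, ()))
```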
OK, it should make a lot more sense now. Previously I was trying to merge unsorted streams, which was totally wrong and may well explain why the bare queues were performing worse than the augmented ones. Now the streams are sorted.
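A k-way merge over sorted streams, in the corrected style, can be sketched roughly as follows. This is illustrative rather than the PR's actual benchmark code, and it assumes pqueue's `Data.PQueue.Prio.Min` API (`fromList`, `minViewWithKey`, `insert`).

```haskell
import Data.Maybe (mapMaybe)
import qualified Data.PQueue.Prio.Min as P

-- Merge k sorted lists by keeping one "head" element per stream in a
-- min-priority queue, keyed on that element, with the rest of the
-- stream stored as the associated value.
kWayMerge :: Ord a => [[a]] -> [a]
kWayMerge = go . P.fromList . mapMaybe uncons'
  where
    uncons' []       = Nothing
    uncons' (x : xs) = Just (x, xs)

    go q = case P.minViewWithKey q of
      Nothing -> []
      Just ((x, rest), q') ->
        -- Emit the globally smallest head, then push the next element
        -- of its stream (if any) back into the queue.
        x : go (maybe q' (\(y, ys) -> P.insert y ys q') (uncons' rest))
```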
I'm struggling to get the time it takes to run the benchmarks down to something reasonable. I tried setting a time limit, but Gauge blew right past it by an order of magnitude. Clearly I'm doing something wrong. Is there some number of evaluations per sample I need to tweak somehow? I don't think we should delay merging to fix this, but we should get to it at some point.
The time limit seems to work for me. Was there a specific benchmark that "blew right past it by an order of magnitude"?
We should consider using tasty-bench instead of gauge: it's a lot faster (and much lighter on dependencies), and it wouldn't even require a time limit. That said, `kWay (10^3) 1000000` sometimes seems to get stuck; maybe we should just remove that bench. Also, tasty-bench is much more actively maintained.
Better to have some benchmarks than none.
I ran into trouble when I added heapsort benchmarks for longer lists. 10^6 elements was unbearably slow. I don't think I ever got 10^7 to finish benchmarking. I don't really care what benchmarking framework we use, as long as it's reliable. Should we merge this, and then let you replace it with tasty? I don't know how to set that up.
konsumlamm left a comment
> I don't really care what benchmarking framework we use, as long as it's reliable. Should we merge this, and then let you replace it with tasty? I don't know how to set that up.
Ok (it's really straightforward).
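For the record, a minimal tasty-bench entry point looks something like the sketch below. The group name, benchmark names, and sizes are purely illustrative, not this PR's suite; it assumes the `tasty-bench` package is a build dependency of the benchmark target.

```haskell
import Test.Tasty.Bench

-- defaultMain, bgroup, bench, and nf come from Test.Tasty.Bench and
-- mirror the criterion/gauge API, so existing suites port over with
-- little more than an import change.
main :: IO ()
main = defaultMain
  [ bgroup "example"
      [ bench "sum 10^4" $ nf sum [1 .. 10 ^ 4 :: Int]
      , bench "sum 10^5" $ nf sum [1 .. 10 ^ 5 :: Int]
      ]
  ]
```

tasty-bench stops each benchmark once the measurements converge, which is why no explicit time limit is needed.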
Better to have some benchmarks than none.