Bernard Lambeau edited this page Feb 13, 2014 · 8 revisions

There are two kinds of benchmarks:

  • counting iterations: for a fast single operation
  • time for a given n: for an operation whose cost depends on the size of the given input
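The two kinds above can be sketched as two measurement strategies. This is a minimal illustration using only the Ruby standard library, not Perfer's actual implementation; the method names are assumptions:

```ruby
require 'tmpdir'

# Counting iterations: run the block repeatedly for a fixed time slice
# and report how many iterations completed. Suited to fast single
# operations whose individual duration is near the clock's resolution.
def count_iterations(duration = 0.05)
  iterations = 0
  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  while Process.clock_gettime(Process::CLOCK_MONOTONIC) - start < duration
    yield
    iterations += 1
  end
  iterations
end

# Time for a given n: time one execution of the block for a given input
# size. Suited to operations whose cost depends on the input size.
def time_for(n)
  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  yield n
  Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
end

ips = count_iterations { File.stat(Dir.tmpdir) }
t   = time_for(100_000) { |n| Array.new(n) { rand(n) }.sort }
```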

Here are some ideas for the API:

Perfer.session do |s|
  s.iterations 'File.stat' do
    File.stat(Dir.tmpdir)
  end

  s.bench 'Array#sort' do |n|
    s.description "Sort an Array of n random integers"
    ary = Array.new(n) { rand(n) }

    s.measure do
      ary.sort
    end
  end
end

(Short and sweet, but against POLS because the metadata calls are re-evaluated on every run of the block, so even just reading the tags would require one full iteration. Also, the session object has to respond to everything, and the implicit notion of a "current job" is not a clean approach.)
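To make the POLS problem concrete, here is a hypothetical harness sketch (class and method names are assumptions, not Perfer's code): because the metadata calls live inside the bench block, the harness must execute the whole block once merely to collect them, and every subsequent run re-evaluates them again.

```ruby
class FlatSession
  def initialize
    @metadata = {}
  end

  def description(text)
    @metadata[:description] = text
  end

  def bench(name, &block)
    @metadata.clear
    block.call(0)     # dry run with n = 0, only to read the metadata
    desc = @metadata[:description]
    block.call(1000)  # the real run re-evaluates the metadata calls again
    desc
  end

  def measure
    yield
  end
end

s = FlatSession.new
desc = s.bench('Array#sort') do |n|
  s.description "Sort an Array of n random integers"
  ary = Array.new(n) { rand(n) }
  s.measure { ary.sort }
end
```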

With another level of nesting, to separate benchmark code and metadata:

Perfer.session "Array#sort" do |s|
  s.bench "Array#sort" do |b|
    b.description "Sort an Array of n random integers"
    b.bench_code do |n|
      ary = Array.new(n) { rand(n) }

      b.measure do # should not be b here ideally
        ary.sort
      end
    end
  end
end

(looks quite complicated at first sight: four levels of nesting)
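What the extra nesting buys can be sketched as follows (a hypothetical harness, with assumed names): the outer block is evaluated once to collect metadata, while the code registered via bench_code is merely stored and can be re-run as often as needed without touching the metadata again.

```ruby
class NestedJob
  attr_reader :metadata, :code

  def initialize
    @metadata = {}
  end

  def description(text)
    @metadata[:description] = text
  end

  def bench_code(&block)
    @code = block  # stored, not executed: metadata is read only once
  end

  def measure
    yield
  end
end

job = NestedJob.new
job.description "Sort an Array of n random integers"
job.bench_code do |n|
  ary = Array.new(n) { rand(n) }
  job.measure { ary.sort }
end

job.code.call(10)  # only the benchmark body runs; metadata is untouched
```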

With the metadata before the #bench block:

Perfer.session "Array#sort" do |s|
  s.description "Sort an Array of n random integers"
  s.tags Array, :sorting
  s.bench "Array#sort" do |b, n|
    ary = Array.new(n) { rand(n) }

    b.measure do
      ary.sort
    end
  end
end

(fine if there is not much metadata; job grouping now happens by consecutive lines rather than by nesting)
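The grouping-by-consecutive-lines behavior could be implemented roughly like this (a sketch with assumed names): the session buffers pending metadata calls and binds them to the next #bench call.

```ruby
class Session
  Job = Struct.new(:name, :metadata, :block)
  attr_reader :jobs

  def initialize
    @pending = {}
    @jobs = []
  end

  def description(text)
    @pending[:description] = text
  end

  def tags(*values)
    @pending[:tags] = values
  end

  def bench(name, &block)
    @jobs << Job.new(name, @pending, block)
    @pending = {}  # grouping is positional: metadata binds to the nearest following bench
  end
end

s = Session.new
s.description "Sort an Array of n random integers"
s.tags Array, :sorting
s.bench("Array#sort") { |b, n| }
```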

And the metadata grouped in an instance-eval configuration block:

Perfer.session "Array#sort" do |s|
  s.metadata do
    description "Sort an Array of n random integers"
    tags Array, :sorting
    n "the Array size"
  end
  s.bench "Array#sort" do |b, n|
    ary = Array.new(n) { rand(n) }

    b.measure do # b could be replaced by s, so b would not be needed anymore
      ary.sort
    end
  end
end

(good when there are many metadata items, but again job grouping is less clear)
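One way the grouped metadata block could be implemented (an assumption, not Perfer's code) is to instance_eval the block against a collector whose method_missing turns every call into a metadata entry, so arbitrary keys like n work without being declared:

```ruby
class MetadataCollector
  def initialize
    @entries = {}
  end

  # Every undefined call becomes a metadata entry; a single argument is
  # stored as-is, multiple arguments as an array.
  def method_missing(key, *values)
    @entries[key] = values.size == 1 ? values.first : values
  end

  def respond_to_missing?(*)
    true
  end

  def to_h
    @entries
  end
end

def metadata(&block)
  collector = MetadataCollector.new
  collector.instance_eval(&block)
  collector.to_h
end

meta = metadata do
  description "Sort an Array of n random integers"
  tags Array, :sorting
  n "the Array size"
end
```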
