[Autoloop: perf-comparison] Experiment Log 2026-04 #130

@github-actions

Iteration 274 — 2026-04-21 07:36 UTC — Run

  • Status: ✅ Accepted
  • Change: Rewrote run_benchmarks.sh (parallel tsx, 8 workers, 30s timeout); added bench_numeric_extended_fn and bench_window_extended_fn
  • Metric: 372 (previous best: 368, delta: +4)
  • Commit: ebc7df3

Generated by Autoloop · ● 17.9M
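The parallel scheme described above (one subprocess per benchmark, 8 workers, a 30 s timeout) can be sketched as follows. The real harness is run_benchmarks.sh driving tsx, so the Python worker-pool helper below and all of its names are illustrative assumptions, not the actual script.

```python
# Sketch of the parallel-runner scheme from this iteration: run each
# benchmark command in its own subprocess, 8 at a time, and kill any
# run that exceeds 30 seconds. The actual harness is run_benchmarks.sh
# driving tsx; every name here is illustrative.
import subprocess
from concurrent.futures import ThreadPoolExecutor

WORKERS = 8      # mirrors the 8-worker pool
TIMEOUT_S = 30   # mirrors the 30 s per-benchmark timeout

def run_one(cmd):
    """Run one benchmark command; return (cmd, status)."""
    try:
        proc = subprocess.run(cmd, capture_output=True, timeout=TIMEOUT_S)
        return cmd, "ok" if proc.returncode == 0 else "fail"
    except subprocess.TimeoutExpired:
        return cmd, "timeout"

def run_all(commands):
    """Run every command across the worker pool; return [(cmd, status), ...]."""
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        return list(pool.map(run_one, commands))
```

A run that hangs surfaces as "timeout" rather than stalling the whole suite, which is the point of the per-benchmark limit.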


Iteration 150 — 2026-04-17 10:51 UTC — Run

  • Status: ✅ Accepted
  • Change: Added 5 benchmark pairs: replace_series, isnull_notnull, to_numeric_scalar, dataframe_assign_fn, dataframe_isin_fn
  • Metric: 473 (previous best: 468, delta: +5)
  • Commit: 114fbab

Generated by Autoloop · ● 6.2M
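For context, a "benchmark pair" times the same operation on both the tsb and pandas sides. The pandas half of pairs such as replace_series and isnull_notnull might look like the sketch below; the `bench` timing helper and the data sizes are assumptions, since the log does not show the harness itself.

```python
# Illustrative pandas half of benchmark pairs like replace_series and
# isnull_notnull. The timing helper and data are assumptions; the real
# harness layout is not shown in the log.
import time
import pandas as pd

def bench(fn, reps=20):
    """Return the best wall-clock time in seconds over `reps` calls."""
    best = float("inf")
    for _ in range(reps):
        t0 = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - t0)
    return best

s = pd.Series([1, 2, None, 2, 3] * 10_000)
t_replace = bench(lambda: s.replace(2, 0))          # replace_series
t_null = bench(lambda: (s.isnull(), s.notnull()))   # isnull_notnull
```

Taking the best of several repetitions is a common way to reduce scheduling noise in micro-benchmarks like these.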


🤖 Autoloop — an iterative optimization agent for this repository.

  • Branch: autoloop/perf-comparison
  • Pull Request: #128
  • State File: perf-comparison.md

Program

Goal: Systematically benchmark every tsb function against its pandas equivalent, one function per iteration.
Target files: benchmarks/**, playground/benchmarks.html, playground/index.html
Metric: benchmarked_functions (higher is better)
Current best: 48 (established in iteration 12)
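One plausible way to compute the benchmarked_functions metric is to count only functions whose TypeScript and Python benchmark halves both exist. The flat `benchmarks/` file layout below is an assumption; the state file does not describe how the count is actually taken.

```python
# Hypothetical metric computation: a function counts toward
# benchmarked_functions only when both the tsb (TypeScript) and pandas
# (Python) halves of its benchmark are present. The flat benchmarks/
# layout is an assumption.
from pathlib import Path

def benchmarked_functions(root="benchmarks"):
    ts = {p.stem for p in Path(root).glob("*.ts")}
    py = {p.stem for p in Path(root).glob("*.py")}
    return len(ts & py)   # only matched TS+Python pairs count
```

Counting the intersection rather than either side alone keeps the metric honest: an unpaired `.ts` or `.py` file adds nothing.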

Iteration History

Iteration 12 — 2026-04-12 17:15 UTC — Run

  • Status: ✅ Accepted
  • Change: Add 10 new benchmark pairs: rank, clip, series_abs, where, isin, duplicated, drop_duplicates, interpolate, rolling_std, unstack. Re-add all 37 prior pairs. Total 48 matched TS+Python pairs.
  • Metric: 48 (previous best: 38, delta: +10)
  • Commit: 7b639cc

Iterations 1–11 — 2026-04-12 11:44–17:10 UTC

  • Progressively built up benchmark coverage from 1 to 38 pairs across 11 iterations.

Generated by Autoloop · ● 7.2M
