🤖 Autoloop program issue for perf-comparison. The program definition below is mirrored from .autoloop/programs/perf-comparison/program.md. Edit the file to update the definition; comment on this issue to steer the agent.
`schedule: every 6h`
# Performance Comparison: tsb (TypeScript) vs pandas (Python)

## Goal
Systematically benchmark every tsb function against its pandas equivalent, one function per iteration. Each iteration picks a function that has not yet been benchmarked, writes a matching performance test for both tsb (TypeScript/Bun) and pandas (Python), runs both, and records the timing results. The benchmark results are displayed on the playground pages of the doc site.
This is an open-ended program — it runs continuously, always adding the next benchmark comparison.
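The per-iteration timing step above can be sketched with a minimal stdlib-only harness. In a real run the callable passed to `bench()` would be the pandas operation under test (for example `df["a"].sum()`); a plain Python `sum` stands in here so the sketch runs anywhere, and the names `bench` and `repeats` are illustrative, not part of tsb or pandas.

```python
# Minimal timing-harness sketch (assumption: best-of-N wall-clock timing;
# the program's actual measurement method may differ).
import time

def bench(fn, repeats=5):
    """Run fn `repeats` times and return the best wall-clock time in seconds."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best

data = list(range(100_000))
elapsed = bench(lambda: sum(data))  # stand-in for the pandas call under test
print(f"sum over 100k ints: {elapsed:.6f}s")
```

Taking the best of several repeats rather than the mean reduces noise from GC pauses and OS scheduling, which matters when comparing two runtimes (Bun vs CPython).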
## Target
Only modify these files:

- `benchmarks/**` — benchmark scripts and results
- `playground/benchmarks.html` — performance comparison playground page
- `playground/index.html` — add or update the link to the benchmarks page
## Evaluation
The metric is `benchmarked_functions`. Higher is better.
Generated by Autoloop