Problem
rtk grep caps results at 10 matches per file (src/grep_cmd.rs:113 — .take(10)), with a global max of 50.
This produces a "+N more" truncation that tells Claude signal exists but withholds it. Claude then re-runs the search with different args to recover the missing results — burning more tokens than a raw grep would have.
Root cause
src/grep_cmd.rs:113:
.take(10) // hardcoded per-file cap
src/main.rs:309: global max = 50.
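The truncation mechanics described above can be sketched as follows. This is a minimal illustration, not the actual rtk code: `cap_matches` and its shape are hypothetical stand-ins for the per-file iterator chain around src/grep_cmd.rs:113.

```rust
// Hypothetical sketch of the per-file cap. `matches_in_file` stands in
// for whatever per-file match list rtk builds internally.
fn cap_matches(matches_in_file: Vec<String>) -> (Vec<String>, usize) {
    let total = matches_in_file.len();
    // The hardcoded .take(10) drops everything past the 10th match.
    let kept: Vec<String> = matches_in_file.into_iter().take(10).collect();
    // The dropped count is what surfaces as the "+N more" suffix.
    let hidden = total - kept.len();
    (kept, hidden)
}

fn main() {
    let matches: Vec<String> = (0..25).map(|i| format!("line {i}")).collect();
    let (kept, hidden) = cap_matches(matches);
    // 25 matches in the file → 10 shown, "+15 more" withheld.
    println!("{} shown, +{} more", kept.len(), hidden);
}
```

The "+N more" count is exactly the signal that triggers the retry loop: the model knows `hidden` results exist but cannot see them.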
Impact
- LLM sees "+N more" and loops to find the missing matches
- Each retry costs more tokens than the original search
- Net result: RTK makes token consumption worse than raw grep
Proposed fix
- Remove per-file cap (or raise to 50)
- Raise global max from 50 → 100
- Add an optional --max-per-file <N> flag for users who want stricter limits
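A sketch of the proposed behavior, assuming the new flag parses into an `Option<usize>` (names and structure are illustrative, not the actual rtk internals):

```rust
// Proposed global max, raised from 50.
const GLOBAL_MAX: usize = 100;

// Per-file capping becomes opt-in: None means no per-file limit,
// so only GLOBAL_MAX applies downstream.
fn cap_matches(matches_in_file: Vec<String>, max_per_file: Option<usize>) -> Vec<String> {
    match max_per_file {
        // User passed --max-per-file <N>: keep at most N matches.
        Some(n) => matches_in_file.into_iter().take(n).collect(),
        // Default: return everything; no "+N more" is ever emitted per file.
        None => matches_in_file,
    }
}

fn main() {
    let matches: Vec<String> = (0..60).map(|i| format!("line {i}")).collect();
    // No flag: all 60 matches survive the per-file stage.
    assert_eq!(cap_matches(matches.clone(), None).len(), 60);
    // Strict user: --max-per-file 5 keeps only the first 5.
    assert_eq!(cap_matches(matches, Some(5)).len(), 5);
    println!("ok");
}
```

With this shape, the default path never discards per-file matches, so the only truncation point left is the single, predictable global max.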
Acceptance criteria
- rtk grep "pattern" returns all matches per file up to the global max
- No "+N more" truncation that signals hidden results
- Token savings still ≥50% vs raw grep (from deduplication and grouping, not caps)