A command-line tool that uses AI to suggest descriptive filenames based on file contents. Features an animated ASCII Hat that "thinks" while streaming the LLM's reasoning tokens, then reveals the suggested name.
```
┌──────────────────────────────────────────────────┐
│ Hmm, this appears to be a sunset photograph      │
│ taken over a mountain range...                   │
└──┐───────────────────────────────────────────────┘
   │
   ╰─┐
     |
       /\
      / '.
     / .-'
    | o o |
   / ~~~~ \
 __/````````\__
   IMG_3847.jpg
```
Works with any OpenAI-compatible API: local servers (llama.cpp, Ollama, vLLM, LM Studio) or cloud providers (OpenAI, Together, etc).
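Because every supported backend speaks the same chat-completions schema, the request shape is identical regardless of provider. A minimal sketch of such a payload — the prompt wording and function name here are illustrative, not the tool's actual prompt:

```python
def build_request(filename: str, snippet: str, model: str = "my-model") -> dict:
    """Build an OpenAI-style chat-completions payload asking for a filename.

    Illustrative only: the real tool's prompt and message layout may differ.
    """
    return {
        "model": model,
        "stream": True,  # stream tokens so reasoning can be shown live
        "messages": [
            {"role": "system",
             "content": "Suggest a short, descriptive, kebab-case filename."},
            {"role": "user",
             "content": f"Current name: {filename}\nContent:\n{snippet}"},
        ],
    }
```

Any server that accepts this shape at `/v1/chat/completions` should work.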
- Auto-enumerate duplicate filenames in batch mode (#9)
- Isolate extension from model output for reliable results with small models (#1)
- `--context`/`-c` flag for guided naming (#2)
- File metadata (size, mtime, MIME, EXIF) in LLM context (#3), `--no-metadata` to opt out
- LLM guard clause skips already-descriptive filenames (#4), `--force` to override
- `--nothink`/`--fullthink` to control reasoning for guard clause and naming
- Animation shows original filename until rename is confirmed (#11)
- Bats test suite with mock LLM server (#10)
- Content-based binary/image detection via `file(1)` and magic bytes
- Proper error handling for LLM connection failures
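The duplicate-enumeration behavior (#9) can be sketched as appending an incrementing suffix before the extension whenever the suggested name already exists; the helper name is hypothetical:

```python
import os

def dedupe_name(directory: str, name: str) -> str:
    """Return `name`, or `stem-2.ext`, `stem-3.ext`, ... if it already exists."""
    stem, ext = os.path.splitext(name)
    candidate, n = name, 1
    while os.path.exists(os.path.join(directory, candidate)):
        n += 1
        candidate = f"{stem}-{n}{ext}"
    return candidate
```

So a second `sunset-over-mountains.jpg` in the same batch would become `sunset-over-mountains-2.jpg`.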
- Animated Sorting Hat with drop animation, blinking eyes, and streaming thought bubble
- Supports text files and images including JPEG, PNG, GIF, BMP, TIFF, WebP, and SVG (via vision/multimodal models)
- Auto-detects image files by extension
- Handles reasoning/thinking tokens from models like Qwen, DeepSeek, etc.
- Quiet mode for scripting (`--quiet`/`-q`)
- Configurable reasoning: guard clause defaults to no thinking, naming uses thinking; `--nothink` disables both, `--fullthink` enables both
- Batch processing for entire directories (processes files sequentially)
- LLM-powered guard clause skips files that already have descriptive names using a two-turn conversation (`--force` to override)
- Additional context for guided naming (`--context`/`-c`)
- File metadata (EXIF, timestamps, MIME type) included in LLM context (`--no-metadata` to disable)
- Interactive rename with confirmation (`--yes`/`-y` to auto-rename without prompting)
- Robust extension handling: isolates the name stem from the extension for reliable results with smaller models
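Handling reasoning tokens amounts to removing `<think>...</think>` spans from the reply before the name is used; a minimal sketch (the real tool also handles a separate `reasoning_content` field, which this illustration omits):

```python
import re

THINK_RE = re.compile(r"<think>.*?</think>", re.DOTALL)

def strip_thinking(reply: str) -> str:
    """Drop <think>...</think> blocks emitted by reasoning models."""
    return THINK_RE.sub("", reply).strip()

print(strip_thinking("<think>The photo shows a sunset...</think>sunset-over-mountains"))
# → sunset-over-mountains
```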
- Bash 4+
- Python 3.6+
- An OpenAI-compatible LLM API endpoint
- For image naming: a vision-capable model (e.g., GPT-4o, LLaVA, Qwen-VL)
- Optional: `Pillow` (`pip install Pillow`) for EXIF metadata extraction from images
```bash
# Clone the repo
git clone https://github.com/marksverdhei/sorting-hat.git
cd sorting-hat

# Option 1: Symlink to your PATH
ln -s "$(pwd)/hat" ~/.local/bin/hat

# Option 2: Copy directly
cp hat ~/.local/bin/hat

# Option 3: Add the repo to PATH
echo 'export PATH="$PATH:'"$(pwd)"'"' >> ~/.bashrc
```

Set these environment variables (or export them in your shell profile):
| Variable | Default | Description |
|---|---|---|
| `LLM_BASE_URL` | `http://localhost:8080` | Base URL of your OpenAI-compatible API |
| `HAT_MODEL` | `Qwen3.5-9b` | Model name to use |
| `HAT_API_KEY` | (empty) | API key (optional, for cloud providers) |
| `HAT_REASONING_BUDGET` | `1024` | Reasoning token budget for naming (`-1` for unlimited) |
llama.cpp (local, default port):

```bash
export LLM_BASE_URL=http://localhost:8080
export HAT_MODEL=Qwen/Qwen3.5-9b
```

Ollama:

```bash
export LLM_BASE_URL=http://localhost:11434
export HAT_MODEL=Qwen/Qwen3.5-9b
```

vLLM:

```bash
export LLM_BASE_URL=http://localhost:8000
export HAT_MODEL=Qwen/Qwen3.5-9b
```

OpenAI:

```bash
export LLM_BASE_URL=https://api.openai.com
export HAT_MODEL=Qwen3.5-9b
export HAT_API_KEY=sk-...
```

Hugging Face Inference:

```bash
export LLM_BASE_URL=https://router.huggingface.co/hf-inference
export HAT_MODEL=Qwen/Qwen3.5-9b
export HAT_API_KEY=hf_...
```

```bash
# Suggest a name for a file (animated)
hat photo.jpg

# Suggest and prompt to rename
hat --rename IMG_20240301_143022.jpg

# Auto-rename without confirmation
hat -y IMG_20240301_143022.jpg

# Process all files in a directory
hat --batch ~/Downloads/

# Force image mode for a file
hat --image screenshot.png

# Quiet mode (no animation, for scripting)
hat --quiet document.pdf

# Dry run (show suggestion, don't ask to rename)
hat --dry-run report.txt

# Let the model choose the extension
hat --no-ext mystery-file

# Disable reasoning/thinking tokens for both guard clause and naming
hat --nothink photo.jpg

# Enable thinking for both guard clause and naming
hat --fullthink photo.jpg

# Provide context to guide naming
hat -c "quarterly finance report" document.pdf

# Process all files, even those with good names
hat --batch --force ~/Downloads/

# Skip metadata collection
hat --no-metadata photo.jpg
```

The suggested filename is printed to stdout, while the animation goes to stderr. This means you can capture just the name:
```bash
# Capture the suggested name
new_name=$(hat --quiet photo.jpg)
echo "Suggested: $new_name"

# Rename all files in a directory
for f in ~/unsorted/*; do
  suggested=$(hat --quiet "$f")
  [ -n "$suggested" ] && mv "$f" "$(dirname "$f")/$suggested"
done
```

- Guard clause: Asks the LLM whether the current filename is already descriptive. If yes, skips the file. If no, the check conversation becomes context for the naming request (a two-turn conversation). Use `--force` to skip the check entirely.
- Metadata collection: Gathers file metadata (size, modification date, MIME type, EXIF for images) to give the LLM more context. Use `--no-metadata` to skip.
- File analysis: For text files, reads the first 4KB of content. For images, base64-encodes and sends via the OpenAI multimodal format.
- LLM query: Sends the content, metadata, and any user context (`--context`) to your configured LLM with a prompt asking for a descriptive kebab-case filename. When the guard clause ran first, this becomes a multi-turn conversation with richer context.
- Streaming display: Shows the model's reasoning tokens in a speech bubble above the animated hat (supports both the `reasoning_content` field and `<think>` tags).
- Name sanitization: Cleans the response into a valid filename. When preserving extensions (default), the model only generates the name stem and the original extension is appended automatically.
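The sanitization step might look something like this sketch: keep only the model's name stem, normalize it to kebab-case, and re-attach the original file's extension (the function name and exact rules are illustrative, not the tool's implementation):

```python
import os
import re

def sanitize(reply: str, original: str) -> str:
    """Turn a raw model reply into a safe kebab-case filename,
    preserving the original file's extension."""
    stem = reply.strip().strip("\"'`")
    stem = os.path.splitext(stem)[0]  # drop any extension the model added
    stem = re.sub(r"[^A-Za-z0-9]+", "-", stem).strip("-").lower()
    ext = os.path.splitext(original)[1]  # keep the original extension
    return (stem or "unnamed") + ext

print(sanitize("Sunset Over Mountains!", "IMG_3847.jpg"))
# → sunset-over-mountains.jpg
```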
The Sorting Hat drops onto the filename, thinks with animated eye blinks and a streaming thought bubble, then reveals the new name with a happy face:
- Drop phase: Hat falls from above onto the filename
- Thinking phase: Eyes blink, mouth animates, thought bubble streams reasoning tokens
- Reveal phase: Happy face, green result in bubble and below the hat
Use `--quiet` to skip the animation entirely.
MIT
