A thin terminal chat client that wraps any Large Language Model (LLM) API endpoint.
Built for people who want to stay in control of the code: neither pure hand-coding nor pure vibe-coding, but something in between.
It features:
- cross-platform compatibility
- strict file context management
- proxy support (to bypass geopolitical walls without a VPN)
- session logging
Requirements: `curl` or `wget`, `unzip` (usually pre-installed).

```sh
curl -fsSL https://raw.githubusercontent.com/thunderbyte-labs/thin-wrap/main/install.sh | sh
```

Or with `wget`:

```sh
wget -qO- https://raw.githubusercontent.com/thunderbyte-labs/thin-wrap/main/install.sh | sh
```

What it does:
- Installs to `~/.local/bin` and `~/.local/lib/thin-wrap/` (no root required; root is blocked)
- Asks where to store config: portable (with the binary) or `~/.config/thin-wrap/`
- Adds the install directory to your login shell PATH (`.profile` on Linux, `.bash_profile` on macOS)
- Detects your architecture automatically (x86_64, ARM64, Apple Silicon)
macOS Security Note: If you see "developer cannot be verified" warnings after a manual install, run:

```sh
xattr -d com.apple.quarantine ~/.local/lib/thin-wrap/thin-wrap
```

Or install via Homebrew (when available) to avoid this: `brew install thunderbyte-labs/tap/thin-wrap`
- Download `thin-wrap-Windows-x86_64.zip` from the Releases page
- Extract to a directory of your choice (e.g., `C:\Program Files\thin-wrap\` or `C:\Users\YourName\bin\`)
- Add the directory to your PATH manually:
  - Search "Environment Variables" in the Start Menu
  - Edit "Path" under User variables
  - Add your extraction directory
- Run `thin-wrap.exe` from Command Prompt or PowerShell
```sh
curl -fsSL https://raw.githubusercontent.com/thunderbyte-labs/thin-wrap/main/uninstall.sh | sh
```

Removes the binary and wrapper; preserves config by default (asks before deleting API keys).
| OS | Architecture | Download |
|---|---|---|
| Linux | x86_64 | thin-wrap-Linux-x86_64.zip |
| Linux | ARM64 | thin-wrap-Linux-aarch64.zip |
| macOS | Intel | thin-wrap-Darwin-x86_64.zip |
| macOS | Apple Silicon | thin-wrap-Darwin-arm64.zip |
| Windows | x86_64 | thin-wrap-Windows-x86_64.zip |
Note: If the installer reports "unsupported architecture," download manually and extract to `~/.local/bin/` (Linux/macOS) or `C:\Windows\` (Windows, not recommended).
- No root execution: The application refuses to run as root (Linux/macOS) to prevent accidental file permission changes
- Config precedence: `--config /path/to/config.json` overrides any auto-detected configuration
- XDG compliance: Follows the XDG Base Directory Specification for config storage when available
The application uses a `config.json` file located in the same directory as the executable.
A sample `config.json` is provided with the release. It contains two main sections:
A dictionary defining available LLM models. Each entry uses a unique model identifier as the key, with:
- `api_key`: The name of the environment variable that holds the actual API key (recommended for security). Direct API key strings may also be used, though this is discouraged.
- `api_base_url`: The base URL for the model's API endpoint.
- `proxy` (optional): Boolean flag indicating whether this model recommends using a proxy. If `true` and no proxy is configured via the command line, the application prompts for proxy selection when this model is chosen. Defaults to `false`.
Example entries (from the sample):

```json
"gemini-2.5-flash": {
  "api_key": "GOOGLE_API_KEY",
  "api_base_url": "https://generativelanguage.googleapis.com/v1beta/openai/"
},
"deepseek-chat": {
  "api_key": "DEEPSEEK_API_KEY",
  "api_base_url": "https://api.deepseek.com/v1"
},
"anthropic/claude-sonnet-4.5": {
  "api_key": "OPENROUTER_API_KEY",
  "api_base_url": "https://openrouter.ai/api/v1",
  "proxy": true
}
```

Set the corresponding environment variables before running the application (e.g., `export GOOGLE_API_KEY=your_actual_key`).
Configuration for file backup behavior during intelligent code editing:
- `timestamp_format`: strftime format used for timestamps in backup filenames (default: `"%Y%m%d%H%M%S"`).
- `extra_string`: Additional string appended to backup filenames (default: `"thin-wrap"`).
- `backup_old_file`: Boolean controlling whether the original file is backed up before changes are applied (default: `false`).
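Assuming the settings above, the backup filename construction can be sketched as follows. The `backup_name` helper and the exact naming scheme are illustrative assumptions, not the released code:

```python
from datetime import datetime
from pathlib import Path
from typing import Optional

# Defaults mirroring the documented backup section (assumed shape).
BACKUP_SETTINGS = {
    "timestamp_format": "%Y%m%d%H%M%S",
    "extra_string": "thin-wrap",
    "backup_old_file": False,
}

def backup_name(path: str, settings: dict = BACKUP_SETTINGS,
                now: Optional[datetime] = None) -> str:
    """Build a backup filename like name.<timestamp>.<extra_string>.ext."""
    now = now or datetime.now()
    stamp = now.strftime(settings["timestamp_format"])
    p = Path(path)
    return f"{p.stem}.{stamp}.{settings['extra_string']}{p.suffix}"
```

With `backup_old_file` set to `true`, a name like this would be generated before the original file is overwritten.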
Users may edit config.json to add, remove, or modify models and backup settings as needed.
If config.json is missing or invalid, the application will raise an error with guidance.
- Multi-LLM Support: Seamlessly switch between providers like Claude, DeepSeek, Grok, Gemini, and others via the `/model` command.
- File Context Management: Interactive three-column file browser (activated with Ctrl+B) for selecting editable and readable files, with a new-file insertion flow and a Ctrl+D shortcut to clear selected files.
- Proxy Support: Configure SOCKS5 or HTTP proxies to bypass geographic restrictions (e.g., for Anthropic or Gemini in restricted regions). Recommended providers: Webshare (tested), IPRoyal (untested), Proxy-Seller (untested). Use the `--proxy` flag (e.g., `--proxy socks5://127.0.0.1:1080`). Models can be configured with `"proxy": true` to automatically prompt for proxy selection when chosen.
- Intelligent Code Editing:
  - Automatic file versioning with timestamped backups (e.g., `file.py` becomes `file.202601301511.py`).
  - Git-style diff reporting for changes using `git diff`.
  - Preservation of file permissions and formatting.
- Project Root Selection: Interactive selection of the project root directory with history, Tab autocompletion, and support for `~` (home directory). Change via the `/rootdir` command.
- Multi-line Input: Compose messages across multiple lines; send with Alt+Enter.
- Message History Navigation: Navigate through previously sent messages and temporary drafts with the Page Up/Down keys.
- Session Logging: Automatic saving of chat sessions as timestamped text files (e.g., `llm_session_20260130_151145.txt`) in the project root or user data directory.
- Token Estimation: Built-in token estimator for input and output messages to monitor usage.
- Colorized UI Elements: Enhanced help menu and outputs with colorization for better readability.
- Improved Reloading: Debugged `/reload` command for loading previous conversations from the project root.
- Cross-Platform Compatibility: Fully functional on Windows, macOS, and Linux, with platform-specific editors (Notepad on Windows, vim/nano on Unix) and path handling.
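The token estimation feature above presumably uses a lightweight heuristic rather than a full tokenizer. A common rough rule of thumb (about four characters per token for English text) can be sketched like this; it is a generic approximation, not necessarily the estimator implemented in `text_utils.py`:

```python
import math

def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate: ~4 characters per token for English text.

    Generic heuristic for usage monitoring; real tokenizers
    (BPE-based) will differ, especially for code and non-English text.
    """
    if not text:
        return 0
    return max(1, math.ceil(len(text) / chars_per_token))
```

Such an estimate is good enough for watching context-window usage, which is all this feature needs.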
- Launch the application as described in the Installation section.
- Select a project root directory (if not specified via `--root-dir`). You can choose "No root directory - Free chatting without file context" to enable free chat mode without file context.
- Choose an LLM model from the available options.
- Enter your message and press Alt+Enter to send.
- Use commands starting with `/` for additional functionality (see Commands below).
- Manage file contexts with Ctrl+B to open the file browser menu. In free chat mode, Ctrl+B lets you select a root directory and switch to file context mode.

Sessions are automatically saved upon exit or after each exchange.

Navigation: Use Page Up and Page Down to navigate through message history (sent messages and temporary drafts).
- `--root-dir <path>`: Specify the project root directory.
- `--read <files>`: List of readable files (space-separated).
- `--edit <files>`: List of editable files (space-separated).
- `--message <text>`: Initial message to send to the LLM.
- `--proxy <url>`: Proxy URL (e.g., `socks5://127.0.0.1:1080`).
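These flags map naturally onto Python's `argparse`; a minimal sketch mirroring the documented interface (the actual option handling in `thin_wrap.py` may differ) could look like:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Sketch of the documented thin-wrap command-line flags."""
    parser = argparse.ArgumentParser(prog="thin-wrap")
    parser.add_argument("--root-dir", metavar="<path>",
                        help="project root directory")
    parser.add_argument("--read", metavar="<files>", nargs="*", default=[],
                        help="readable files (space-separated)")
    parser.add_argument("--edit", metavar="<files>", nargs="*", default=[],
                        help="editable files (space-separated)")
    parser.add_argument("--message", metavar="<text>",
                        help="initial message to send to the LLM")
    parser.add_argument("--proxy", metavar="<url>",
                        help="proxy URL, e.g. socks5://127.0.0.1:1080")
    return parser
```

For example, `thin-wrap --root-dir ~/proj --read a.py b.py --proxy socks5://127.0.0.1:1080` would populate `root_dir`, `read`, and `proxy`.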
Available in-chat commands:
- `/clear`: Clear the current session context.
- `/bye`: Exit the application (auto-saves the session).
- `/help` or `/?`: Display help for commands.
- `/model`: Switch to a different LLM model.
- `/reload`: Load a previous conversation from available sessions in the project root.
- `/rootdir`: Show or change the current project root directory.
- `/files`: Open the file context management menu (equivalent to Ctrl+B).
- `/proxy`: Manage the proxy (`off` to disable, a number for a previous proxy, or a new URL like `socks5://127.0.0.1:1080`).
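A slash-command interface like this is typically implemented as a small dispatch table. The sketch below shows the general pattern; the handler names and alias handling are assumptions, not the actual contents of `command_handler.py`:

```python
def handle_command(line: str, handlers: dict) -> str:
    """Dispatch a '/command arg' line to a handler from a lookup table.

    Unknown commands get a hint instead of an exception, and '?' is
    treated as an alias for 'help'.
    """
    parts = line.strip().split(maxsplit=1)
    name = parts[0].lstrip("/")
    arg = parts[1] if len(parts) > 1 else ""
    aliases = {"?": "help"}
    name = aliases.get(name, name)
    handler = handlers.get(name)
    if handler is None:
        return f"Unknown command: /{name} (try /help)"
    return handler(arg)
```

Commands that take arguments, such as `/proxy socks5://127.0.0.1:1080`, receive everything after the command name as a single string.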
The application is modular, with key components:
- `thin_wrap.py`: Main entry point and chat loop.
- `config.py`: Configuration settings and model loading.
- `llm_client.py`: Unified LLM API wrapper using the OpenAI library.
- `file_processor.py`: Handles file queries, versioning, and diff generation.
- `input_handler.py`: Manages user input with editing capabilities.
- `menu.py`: Interactive menus for file browsing and selections.
- `proxy_wrapper.py`: Proxy configuration and validation.
- `session_logger.py`: Session saving and reloading.
- `text_utils.py`: Text cleaning and token estimation.
- `ui.py`: User interface elements like banners and colorized outputs.
- Other utilities: `command_handler.py`, `tags.py`.
- Verify environment variables for API keys.
- Ensure proxy format is correct if used.
- Check file permissions for editing.
- Adjust `config.json` backup settings if needed.
The project includes comprehensive tests to protect key functionalities:
```sh
# Install test dependencies
pip install -r requirements.txt

# Run all tests
python -m pytest tests/ -v

# Run specific test suites
python -m pytest tests/test_config_validation.py -v
python -m pytest tests/test_proxy_suggestion.py -v
python -m pytest tests/test_session_metadata.py -v
```

Tests protect critical functionality including:
- Configuration validation and loading
- Proxy suggestion for models requiring proxies
- Session metadata and preview generation
- Command handling and error management
- File context and free chat modes
- Input handling and draft navigation
GitHub Actions runs tests automatically on:
- Push to main, develop, and feature branches
- Pull requests targeting the main branch
See TESTING.md for detailed testing guidelines and CI/CD setup.
Contributions are welcome! Please follow these guidelines:
- Maintain cross-platform compatibility.
- Adhere to Python style (PEP 8).
- Update documentation for new features.
- Test on multiple platforms.
- Submit pull requests to the `main` branch.
This project is licensed under the AGPL-3.0 License. See the LICENSE file for details.