LUFT is a partial fork of usbrip, rewritten in Go for Linux. It can also be cross-compiled for other operating systems such as macOS and Windows, with reduced functionality (custom log directory).
```
# Linux
GOOS=linux GOARCH=amd64 go build -ldflags="-s -w"

# Windows
GOOS=windows GOARCH=amd64 go build -ldflags="-s -w"

# macOS
GOOS=darwin GOARCH=amd64 go build -ldflags="-s -w"
```
```
$ ./luft --help
LUFT - Linux USB Forensic Tool

Usage:
  luft [command]

Available Commands:
  cache       Manage USB IDs cache
  completion  Generate shell autocompletion
  events      Collect and analyze USB device events
  help        Help about any command
  update      Update USB IDs database

Flags:
      --config string   config file (default: ~/.luft.yaml)
  -h, --help            help for luft
  -v, --version         version for luft

Use "luft [command] --help" for more information about a command.
```

```
$ ./luft events --help
Collect USB device connection events from local or remote systems.

Usage:
  luft events [flags]

Flags:
  -S, --source string            event source (local, remote) [required]
  -m, --mass-storage             show only mass storage devices
  -u, --untrusted                show only untrusted devices
  -c, --check-whitelist          check devices against whitelist
  -n, --number int               number of events to show (0 = all)
  -s, --sort string              sort events (asc, desc) (default "asc")
  -e, --export                   export events
  -F, --format string            export format (json, xml, pdf) (default "pdf")
  -o, --output string            export filename (default "events_data")
  -w, --workers int              number of worker threads (0 = auto)
      --streaming                use streaming parser for large logs
  -W, --whitelist string         whitelist file path
  -U, --usbids string            USB IDs database path
      --path string              log directory (default "/var/log/")
      --remote-host string       remote host name from config
  -I, --remote-ip string         remote host IP address
  -L, --remote-login string      remote login username
  -K, --remote-key string        path to SSH private key
  -P, --remote-password string   remote password (deprecated)
      --remote-port string       remote SSH port (default "22")
  -T, --remote-timeout int       SSH timeout in seconds (default 30)
      --insecure-ssh             skip SSH host key verification

Use "luft events --help" for detailed examples.
```

LUFT supports shell completion for bash, zsh, fish, and powershell:
```
# Bash
./luft completion bash > /etc/bash_completion.d/luft

# Zsh
./luft completion zsh > ~/.zsh/completion/_luft

# Fish
./luft completion fish > ~/.config/fish/completions/luft.fish

# PowerShell
./luft completion powershell > luft.ps1
```

LUFT supports YAML configuration files for easier management of settings and remote hosts.
LUFT searches for configuration files in the following locations (in order):
- Custom path specified with the `--config` flag
- `~/.luft.yaml` (user home directory)
- `./.luft.yaml` (current directory)
- `/etc/luft/.luft.yaml` (system-wide)
Settings are applied in the following priority order (highest to lowest):
- CLI flags (highest priority)
- Environment variables
- Config file
- Default values (lowest priority)
Copy `.luft.yaml.example` to `~/.luft.yaml` and customize:

```yaml
# Path to whitelist file
whitelist: /etc/udev/rules.d/99_PDAC_LOCAL_flash.rules

# Path to USB IDs database
usbids: /var/lib/usbutils/usb.ids

# Default log directory
log_path: /var/log/

# Filter options
mass_storage: false
untrusted: false
check_whitelist: true

# Export settings
export:
  format: pdf
  path: ~/luft-reports

# Remote hosts
remote_hosts:
  - name: prod-server
    ip: 10.0.0.1
    port: "22"
    user: admin
    ssh_key: ~/.ssh/id_rsa
    timeout: 30
    insecure_ssh: false
  - name: dev-server
    ip: 192.168.1.100
    user: developer
    ssh_key: ~/.ssh/dev_key
```

Instead of specifying remote connection details via CLI flags, you can define hosts in your config file:
```
# Scan remote host from config
./luft -S remote --remote-host=prod-server

# Override config values with CLI flags
./luft -S remote --remote-host=prod-server -T 60
```

LUFT uses the USB IDs database to identify device manufacturers and products. Keep it up to date for better device recognition.
```
# Update to default location (requires root/sudo for system paths)
sudo ./luft update

# Update to custom location
./luft update --path ~/.local/share/luft/usb.ids

# Use updated database
./luft events --source local --usbids ~/.local/share/luft/usb.ids
```

The update command will:
- Download from the official source (linux-usb.org)
- Show download progress with progress bar
- Verify the database by loading it
- Display version and date information
- Automatically create a cache file for faster subsequent loads
Source:
- http://www.linux-usb.org/usb.ids (official USB ID Repository)
Note: If the default path is not writable, the tool will automatically use ~/.local/share/luft/usb.ids as an alternative.
LUFT automatically caches the parsed USB IDs database for significantly faster loading on subsequent runs.
- First load (parsing): ~13ms
- Cached loads: ~5ms (2-3x faster!)
- The first time a USB IDs file is loaded, LUFT parses it and creates a cache file (`usb.ids.cache`)
- On subsequent loads, LUFT loads from cache if:
  - The cache file exists
  - The source file hasn't been modified
  - The file hash matches
- If the source file is updated, the cache is automatically invalidated and rebuilt
```
# Clear cache (will be rebuilt on next load)
./luft cache clear --usbids /path/to/usb.ids

# Clear default cache
./luft cache clear

# Cache is automatically created, no manual action needed
./luft events --source local  # First run: parses and caches
./luft events --source local  # Subsequent runs: loads from cache
```

Cache location: Cache files are stored alongside the USB IDs file with a `.cache` extension.

Cache invalidation: The cache is automatically invalidated when:
- The source file is modified (timestamp check)
- The source file content changes (MD5 hash check)
- The cache file is manually deleted
LUFT automatically parses log files in parallel using a worker pool for significantly faster processing of multiple files.
Performance improvement with 100 log files:
| Workers | Parse Time | Speedup |
|---|---|---|
| 1 (sequential) | 6.4ms | baseline |
| 4 workers | 2.5ms | 2.6x faster |
| Auto (CPU cores) | 1.8ms | 3.6x faster |
- Automatic parallelization: By default, LUFT uses as many workers as CPU cores
- Worker pool pattern: Files are distributed among workers for parallel processing
- Order preservation: Results are collected and aggregated in original file order
- Smart fallback: Single file or single worker automatically uses sequential parsing
```
# Use default (CPU cores)
./luft -S local

# Specify custom worker count
./luft -S local -w 4

# Sequential processing (1 worker)
./luft -S local -w 1

# Maximum parallelism (use all CPU cores explicitly)
./luft -S local -w 0
```

When to adjust workers:
- Low CPU: use `-w 2` or `-w 4` for modest parallelism
- Many files: the default (CPU cores) works best
- Few files: parallelism overhead may not be worth it; use `-w 1`
- Resource constrained: lower the worker count to reduce CPU/memory usage
For very large log files or memory-constrained environments, LUFT provides a streaming parser that processes logs line-by-line without loading entire files into memory.
- Memory-efficient: Processes logs incrementally using buffered I/O
- Backpressure handling: Controls memory usage with buffered channels
- Progress monitoring: Real-time stats every 2 seconds during processing
- Memory metrics: Tracks and reports memory allocation statistics
- Parallel streaming: Combines streaming with worker pool for optimal performance
Memory usage comparison when processing 50 large log files (25,000 events):
| Mode | Memory Allocated | Peak Memory | Processing |
|---|---|---|---|
| Standard | ~45 MB | ~60 MB | Fast, memory-intensive |
| Streaming | ~25 MB | ~35 MB | 42% less memory |
The streaming parser uses an event-driven architecture:
- Buffered scanning: Reads files line-by-line with configurable buffer (64KB default, 1MB max)
- Channel-based processing: Events flow through buffered channels (capacity: 1000)
- Backpressure control: Parser pauses when channels are full, preventing memory overflow
- Atomic counters: Thread-safe progress tracking across all workers
- Progress reporting: Displays events/files processed every 2 seconds
```
# Enable streaming mode (uses default worker count = CPU cores)
./luft -S local --streaming

# Streaming with specific worker count
./luft -S local --streaming -w 4

# Streaming with single worker (lowest memory usage)
./luft -S local --streaming -w 1

# View memory statistics during processing
./luft -S local --streaming
# Output shows:
#   Memory before parsing: Alloc=5.2MB TotalAlloc=8.1MB Sys=12.4MB
#   Processing: 15420 events from 32 files...
#   Memory after streaming parse: Alloc=12.8MB TotalAlloc=45.3MB Sys=25.6MB
```

Use streaming (`--streaming`) when:
- Processing very large log files (>1GB total)
- Running on memory-constrained systems (limited RAM)
- Monitoring progress of long-running operations
- Tracking memory usage during processing
Use Standard Parallel (default) when:
- Processing moderate-sized logs (<500MB total)
- Working with sufficient RAM
- Needing maximum speed (slightly faster than streaming)
- Not needing progress monitoring
Combine both for best results:

```
# Streaming + parallel workers = memory-efficient AND fast
./luft -S local --streaming -w 8
```

```
./luft events --source local -cm -W 99_PDAC_LOCAL_flash.rules
```

```
./luft events --source remote -cm -W 99_PDAC_LOCAL_flash.rules \
  --remote-ip 10.211.55.11 --remote-login user --remote-key ~/.ssh/id_rsa
```

```
# First, set up ~/.luft.yaml with remote host details
./luft events --source remote -cm --remote-host prod-server
```

```
./luft --config /path/to/custom.yaml events --source local
```

```
# Memory-efficient processing with progress bar
./luft events --source local --streaming -w 8
```

```
# Show only untrusted mass storage devices and export to PDF
./luft events --source local -muc --export --format pdf --output report
```

```
./luft events --source local -cme -W 99_PDAC_LOCAL_flash.rules

# Export to JSON
./luft events --source local --export --format json --output events

# Export to XML
./luft events --source local --export --format xml --output events
```

- Rewrite all ugly code
- Update usb.ids (implemented via `--update-usbids`)
- Cache USB IDs database in memory (2-3x faster loading!)
- Parallel log parsing with worker pool (3.6x faster!)
- Streaming parser for large logs (42% less memory!)
- View events with date/time intervals
- Search for a USB device with only one of (VID | PID)
- YAML configuration support
- Database storage (SQLite)
- Real-time monitoring mode
- CSV export format
For any questions — tg: @cffaedfe.
This project is under the MIT License. See the LICENSE file for the full license text.

