Transform your computer into an AI inference hub. Connect from any device via peer-to-peer networking to run AI models locally on your desktop.
MyDeviceAI Desktop is a cross-platform Electron application that enables remote devices (iOS and Android, via the MyDeviceAI app) to leverage your desktop's computing power for AI inference. Using WebRTC peer-to-peer connections, devices can send prompts and receive AI-generated responses without relying on cloud services.
- Local AI Inference: Run GGUF format models locally using llama.cpp
- Peer-to-Peer Networking: Direct WebRTC connections with devices via Cloudflare Workers signaling
- Model Management: Search, download, and switch between AI models from Hugging Face
- Cross-Platform: Supports macOS, Linux, and Windows
- Privacy-Focused: All inference happens locally on your machine
- Streaming Responses: Real-time token streaming for responsive AI interactions
- Modern UI: Clean, dark-themed interface with live connection monitoring
- Node.js v20 or higher
- 10+ GB free disk space (for AI models)
- Stable internet connection (for initial model download)
Download the latest installer for your platform:
- macOS: `.zip` archive (ARM/M-series Macs only)
  - Important: After extracting, run `xattr -c mydeviceai-desktop.app` to remove quarantine attributes
  - Without this step, macOS will report the app as damaged and prevent it from opening
  - The app will be code-signed in future releases
- Linux: `.deb` package (Ubuntu/Debian)
- Windows: `.exe` installer
```bash
# Clone the repository
git clone https://github.com/navedmerchant/MyDeviceAI-Desktop.git
cd MyDeviceAI-Desktop

# Install dependencies
npm install

# Create environment configuration
cp src/Env.example.ts src/Env.ts
# Edit src/Env.ts with your P2P signaling server URL

# Start development server
npm start
```

- Launch the application
- The app will automatically download llama.cpp for your platform
- Default AI model (Qwen3-4B, ~2.5GB) will be downloaded (see the download sketch after this list)
- Once setup completes, you'll see your Room ID
- Use this Room ID to connect from other devices
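Under the hood, the first-run setup boils down to streaming large files (the llama.cpp build and a GGUF model) to disk. Here is a minimal sketch of such a download step, assuming Node 18+ global `fetch`; the URL and destination are illustrative placeholders, not the app's actual `modelManager.ts` code:

```typescript
import { createWriteStream } from 'node:fs';
import { Readable } from 'node:stream';
import { pipeline } from 'node:stream/promises';
import type { ReadableStream } from 'node:stream/web';

// Illustrative download helper; the app's real modelManager.ts may differ.
async function downloadFile(url: string, dest: string): Promise<void> {
  const res = await fetch(url); // Node 18+ global fetch
  if (!res.ok || !res.body) {
    throw new Error(`Download failed with HTTP ${res.status}`);
  }
  // Bridge the web stream from fetch into a Node stream and write to disk.
  await pipeline(Readable.fromWeb(res.body as ReadableStream), createWriteStream(dest));
}

// Hypothetical usage; the repo path and filename are placeholders.
downloadFile(
  'https://huggingface.co/some-org/some-model-GGUF/resolve/main/model.gguf',
  'models/model.gguf',
).catch(console.error);
```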
- Note your Room ID displayed in the app
- On your mobile device or other computer, use the companion app
- Enter the Room ID to establish a peer-to-peer connection
- Send prompts and receive AI-generated responses
- Active Model: Displayed in the status bar
- Download Models: Search and download from Hugging Face
- Configure Parameters: Adjust temperature, top-p, max tokens, etc.
- Switch Models: Stop current model and load a different one
- Current Room ID: Shown at the top of the interface
- Regenerate Room: Click to create a new Room ID (disconnects current peers)
- Frontend: TypeScript, Electron, HTML/CSS
- Backend: Node.js, Electron main process
- AI Runtime: llama.cpp (bundled)
- Networking: WebRTC, Cloudflare Workers (signaling)
- Build System: Webpack, Electron Forge
```
src/
├── index.ts        # Main process entry point
├── renderer.ts     # UI logic and P2P client
├── preload.ts      # IPC bridge (security layer)
├── llamaSetup.ts   # llama.cpp management
├── modelManager.ts # Model download and lifecycle
├── p2pcf/          # P2P networking library
├── index.html      # Main window template
└── index.css       # Application styling
```
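The `preload.ts` layer noted above is what keeps the IPC surface minimal: the renderer only ever sees a small, whitelisted API rather than raw Node or IPC access. A sketch of that standard Electron pattern follows; the exposed method and channel names are hypothetical, not this app's real bridge:

```typescript
import { contextBridge, ipcRenderer } from 'electron';

// Expose a narrow, explicit API to the renderer instead of raw Node/IPC access.
// All names below are illustrative placeholders.
contextBridge.exposeInMainWorld('mydeviceai', {
  downloadModel: (repo: string) => ipcRenderer.invoke('model:download', repo),
  startModel: (modelPath: string) => ipcRenderer.invoke('model:start', modelPath),
  onLog: (callback: (line: string) => void) => {
    ipcRenderer.on('log', (_event, line: string) => callback(line));
  },
});
```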
Communication uses WebRTC data channels with JSON messages:
- `hello`: Initial peer handshake
- `version_negotiate`: Protocol version exchange
- `prompt`: AI completion request
- `tokens`: Streaming response chunks
- `model_info`: Current model metadata
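As a rough sketch of how a peer might speak this protocol over an `RTCDataChannel`: only the `type` tags come from the list above; the other field names are assumptions for illustration, not the project's actual wire format.

```typescript
// Illustrative message shapes; only the "type" values are documented above.
type P2PMessage =
  | { type: 'hello'; peerId: string }
  | { type: 'version_negotiate'; version: number }
  | { type: 'prompt'; id: string; text: string }
  | { type: 'tokens'; id: string; chunk: string; done: boolean }
  | { type: 'model_info'; name: string; contextSize: number };

function sendPrompt(channel: RTCDataChannel, text: string): void {
  const msg: P2PMessage = { type: 'prompt', id: crypto.randomUUID(), text };
  channel.send(JSON.stringify(msg)); // all messages are JSON-encoded
}

function listen(channel: RTCDataChannel): void {
  channel.onmessage = (event) => {
    const msg = JSON.parse(event.data as string) as P2PMessage;
    if (msg.type === 'tokens') {
      // Chunks stream in as the model generates; append them as they arrive.
      console.log(msg.chunk);
    }
  };
}
```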
```bash
# Development mode with hot reload
npm start

# Run linter
npm run lint

# Package application
npm run package

# Create platform installers
npm run make

# Publish release
npm run publish
```

The CI/CD pipeline automatically builds for:
- Linux: DEB package (Ubuntu/Debian) - uses Ubuntu-compiled llama.cpp
- macOS: ZIP distribution
- Windows: Squirrel installer
Builds are triggered on git tags matching the `v*` pattern (for example, pushing a tag such as `v1.0.0`).
Environment Configuration:
The application requires a P2P signaling server URL to be configured in `src/Env.ts`.
P2P Signaling Server Deployment Options:
You can deploy the P2PCF signaling server using one of the following methods:
Option 1: Deploy on Cloudflare Workers
- Use the official P2PCF worker implementation: p2pcf/worker.js
- Deploy to Cloudflare Workers following their deployment guide
- Update `src/Env.ts` with your Cloudflare Worker URL
Option 2: Deploy on Railway
- Use the standalone signaling server: p2pcf-signalling
- Follow the deployment steps in the repository to deploy on Railway
- Update `src/Env.ts` with your Railway deployment URL
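Whichever option you choose, the resulting URL goes into `src/Env.ts`. A minimal sketch of what the file might look like after copying `src/Env.example.ts`; the exported constant name and URL are placeholders, so match whatever the example file actually defines:

```typescript
// src/Env.ts — sketch only; mirror the shape of src/Env.example.ts.
export const P2P_SIGNALING_URL = 'https://my-signaling.example.workers.dev';
```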
Model Parameters (configurable per model):
- Temperature
- Top-p, Top-k
- Maximum tokens
- Context window size
- GPU layers (for acceleration)
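For a concrete picture of these knobs, here is a hedged sketch of a per-model settings object; the field names mirror common llama.cpp sampling options rather than this app's exact configuration schema:

```typescript
// Illustrative parameter set; names and defaults are assumptions.
interface ModelParams {
  temperature: number; // sampling randomness (lower = more deterministic)
  topP: number;        // nucleus sampling: keep tokens within this cumulative probability
  topK: number;        // keep only the K most likely tokens at each step
  maxTokens: number;   // upper bound on generated tokens per response
  contextSize: number; // context window size in tokens
  gpuLayers: number;   // model layers offloaded to the GPU (0 = CPU-only)
}

const exampleDefaults: ModelParams = {
  temperature: 0.7,
  topP: 0.9,
  topK: 40,
  maxTokens: 1024,
  contextSize: 4096,
  gpuLayers: 99, // offload as many layers as fit on the GPU
};
```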
The application implements several security measures:
- Electron Fuses: Code integrity validation
- Preload Sandboxing: Minimal IPC surface
- Content Security Policy: Restricted resource loading
- No Remote Code: All code loaded from ASAR bundle
- Local Inference: No data sent to external servers
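For reference, Electron fuses are typically baked in at package time via the Electron Forge fuses plugin. A sketch of how such a `forge.config.ts` entry commonly looks; the exact fuse set used by this project is an assumption:

```typescript
import { FusesPlugin } from '@electron-forge/plugin-fuses';
import { FuseV1Options, FuseVersion } from '@electron/fuses';

// Sketch of a typical fuses configuration; this project's real
// forge.config.ts may enable a different set.
export default {
  plugins: [
    new FusesPlugin({
      version: FuseVersion.V1,
      [FuseV1Options.RunAsNode]: false, // disable the ELECTRON_RUN_AS_NODE escape hatch
      [FuseV1Options.EnableNodeCliInspectArguments]: false, // block --inspect flags
      [FuseV1Options.OnlyLoadAppFromAsar]: true, // load app code from the ASAR bundle only
    }),
  ],
};
```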
On macOS, you may see an error stating the app is damaged. This occurs due to Gatekeeper quarantine attributes on unsigned apps.
Solution:
```bash
xattr -c mydeviceai-desktop.app
```

This removes the quarantine attribute and allows the app to run. Future releases will include proper code signing to eliminate this step.
Note: The macOS build is compiled for ARM architecture (Apple Silicon/M-series chips) only. Intel Macs are not currently supported.
Model fails to load:
- Check logs in the collapsible "Logs" panel
- Ensure no other process is using the assigned port
- Verify model file integrity in the models directory
Peers can't connect:
- Verify the Room ID is correct
- Check firewall settings (WebRTC requires UDP)
- Ensure STUN/TURN servers are accessible
Model download fails:
- Check available disk space (models are 2-10 GB)
- Verify internet connection
- Try a different Hugging Face model
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Make your changes
- Submit a pull request
MIT License
Naved Merchant (naved.merchant@gmail.com)
- llama.cpp - High-performance LLM inference
- Electron - Cross-platform desktop framework
- P2PCF - Peer-to-peer communication library
- Hugging Face - Model hosting and distribution
Note: This is an early-stage project. Features and APIs may change in future releases.