ZETIC - On-Device AI for Everything

Select. Benchmark. Deploy.
for any model, on any device, in any framework

We eliminate the need for costly GPU cloud servers by transforming your existing AI models into NPU-optimized, on-device runtimes in hours, not weeks, on any mobile device and any OS.

📡 Connect with Us

Website Discord LinkedIn Email


🚀 About ZETIC

AI services shouldn't be tied to the cloud.

Melange is our flagship end-to-end on-device AI deployment platform. We help mobile developers run AI models locally, from flagship smartphones to budget devices, making AI Faster, Cheaper, Safer, and Independent.

We provide:

  • Automated Model Conversion: PyTorch, ONNX, or TFLite → device-specific NPU libraries.
  • Peak Performance: Up to 60× faster than mobile CPU inference, with massive energy savings.
  • Cross-Platform SDKs: Swift, Kotlin, Flutter, React Native for any app stack.
  • Benchmarking: Test your models across 200+ global devices with real-world hardware metrics.
  • Full Privacy by Design: All inference happens locally; no data leaves the device.
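The "one cross-platform SDK, many NPU backends" idea above can be sketched as a thin inference interface that hides the per-device runtime. Everything here (`InferenceBackend`, `OnDeviceModel`, `EchoBackend`) is a hypothetical illustration, not the actual Melange API:

```java
// Hypothetical sketch of a cross-platform inference abstraction; not the real Melange API.
interface InferenceBackend {
    String name();
    float[] run(float[] input);
}

// Stand-in backend. Real implementations would dispatch to the Qualcomm,
// MediaTek, or Apple NPU runtime selected for the current device.
class EchoBackend implements InferenceBackend {
    public String name() { return "echo"; }
    public float[] run(float[] input) { return input.clone(); }
}

// The app-facing model object: one API, whatever backend the device supports.
class OnDeviceModel {
    private final InferenceBackend backend;

    OnDeviceModel(InferenceBackend backend) { this.backend = backend; }

    float[] run(float[] input) { return backend.run(input); }
}
```

The design point is that app code only ever touches `OnDeviceModel`, so swapping the NPU backend never changes the integration.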

🧠 Why We're Different

While other frameworks focus on model quantization or partial device deployment, we handle the entire lifecycle:

  1. Analyze model architecture and runtime requirements.
  2. Convert & Optimize for heterogeneous NPUs (Qualcomm, MediaTek, Apple, etc.).
  3. Benchmark on real devices for latency, accuracy, and memory.
  4. Deliver drop-in SDKs ready for mobile integration.
  5. Support continuous updates at scale.

No guesswork. No vendor lock-in. Just working on-device AI in hours, not weeks.


📊 Real Benchmark Highlights

YOLO26 — Inference Latency

Device                      SoC Vendor   CPU (ms)   GPU (ms)   NPU (ms)
Apple iPhone 16 Pro         Apple           91.44       6.64       1.88
Apple iPhone 15 Pro         Apple           86.80       9.22       2.57
Samsung Galaxy S25 Ultra    Qualcomm        52.10     153.27      11.05
Samsung Galaxy Tab S9       Qualcomm        64.75     200.13      13.61
Xiaomi 13 Pro               Qualcomm        58.97     118.20      12.79

Whisper-tiny-encoder — Inference Latency

Device                      SoC Vendor   CPU (ms)   GPU (ms)   NPU (ms)
Apple iPhone 16             Apple          553.78      42.56      18.82
Apple iPhone 15 Pro         Apple          521.65      40.89      19.67
Samsung Galaxy S25 Ultra    Qualcomm       246.08     102.36     128.94
Samsung Galaxy S24 Ultra    Qualcomm       270.61     120.29     147.12
Xiaomi 12                   Qualcomm       302.33     280.13     151.77

Note: Lower is better. Full dataset available on the Melange Dashboard.
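As a quick sanity check on the tables, CPU-to-NPU speedup is just the ratio of the two latencies. The code below is illustrative; the figures are copied from the tables above:

```java
// Derive CPU -> NPU speedup factors from the latency tables (lower latency is better).
public class Speedup {
    static double speedup(double cpuMs, double npuMs) {
        return cpuMs / npuMs;
    }

    public static void main(String[] args) {
        // iPhone 16 Pro, YOLO26: 91.44 ms CPU vs 1.88 ms NPU
        System.out.printf("YOLO26 on iPhone 16 Pro: %.1fx faster%n",
                speedup(91.44, 1.88));
        // iPhone 16, Whisper-tiny-encoder: 553.78 ms CPU vs 18.82 ms NPU
        System.out.printf("Whisper encoder on iPhone 16: %.1fx faster%n",
                speedup(553.78, 18.82));
    }
}
```

Note that speedups vary widely by model and chipset, which is why per-device benchmarking matters before committing to a deployment target.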


👨🏻‍💻 Plug-and-play Integration

Deploying NPU-accelerated models takes just a few lines of code with the Melange SDK.

iOS Integration (Swift)

// (1) Load Melange model
let model = try ZeticMLangeModel(tokenKey: "MLANGE_PERSONAL_KEY", "MODEL_REPO_NAME")

// (2) Prepare model inputs
let inputs: [Tensor] = [] // Prepare your inputs

// (3) Run and get output tensors of the model
let outputs = try model.run(inputs)

Android Integration (Kotlin, Java)
// (1) Load Melange model
val model = ZeticMLangeModel(context, "MLANGE_PERSONAL_KEY", "MODEL_REPO_NAME")

// (2) Prepare model inputs
val inputs: Array<Tensor> = arrayOf() // Prepare your inputs

// (3) Run and get output tensors of the model
val outputs = model.run(inputs)
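Preparing inputs is the step both snippets leave open. For a vision model such as YOLO, a typical pipeline flattens an RGB image into a normalized NCHW float array before wrapping it in the SDK's Tensor type. The helper below is a hypothetical illustration of that step, not part of the Melange SDK:

```java
// Hypothetical preprocessing helper (not part of the Melange SDK):
// flatten an H x W x 3 RGB image (0-255 per channel) into a normalized
// NCHW float array, the layout vision models typically expect.
public class Preprocess {
    static float[] toNchw(int[][][] rgb) {
        int h = rgb.length, w = rgb[0].length, c = 3;
        float[] out = new float[c * h * w];
        for (int ch = 0; ch < c; ch++)
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++)
                    // Channel-major layout: all R values, then G, then B.
                    out[ch * h * w + y * w + x] = rgb[y][x][ch] / 255.0f;
        return out;
    }
}
```

A 640×640 YOLO-style input would then be a `float[3 * 640 * 640]` produced the same way; check your model's expected layout and normalization on the Melange Dashboard before wiring this up.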

🛠️ Ready to Build?

Don't start from scratch. We have created a repository of production-ready, open-source, on-device AI apps that you can clone, run, and modify in minutes.



📚 Resources

Official Links

  • Website: zetic.ai
  • Melange Dashboard: mlange.zetic.ai — Get NPU-optimized SDKs, view benchmarks, and upload custom models.
  • Documentation: docs.zetic.ai — Full API reference and implementation guides.
  • Discord: Join our Community — Get support, share your projects, and meet other developers.

Check Out Our Official App

See Melange performance in action on your own device: ZeticApp: Android | iOS

Pinned

  • ZETIC_Melange_apps (Public): NPU-powered on-device AI mobile applications using Melange. Written in Swift.
