Node.js WebNN-flavor polyfill API backed by rustnn and ONNX Runtime via a Rust napi-rs addon.
This is a Node polyfill (not a browser implementation). It keeps WebNN naming close to the W3C WebNN API, with Node-only helpers for model loading.
- `packages/webnn-node/` - TypeScript WebNN-flavor API
- `packages/webnn-node/native/` - Rust napi-rs addon (rustnn + ONNX Runtime)
- `demo/` - Demo app that downloads and runs a WebNN model from Hugging Face Hub
- Node.js >= 20
- Rust toolchain (`cargo`, `rustc`)
- Build toolchain for native addons (C/C++ compiler, linker, Python if required by your platform)
The native addon depends on upstream rustnn from GitHub (configured in `packages/webnn-node/native/Cargo.toml`):

`https://github.com/rustnn/rustnn` (branch = `main`)

If you want to test with a local rustnn checkout, change that dependency to a local `path` in `Cargo.toml`.
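For reference, a local override could look like the fragment below in `packages/webnn-node/native/Cargo.toml`. The relative path is a placeholder; point it at your own clone.

```toml
[dependencies]
# Replace the git dependency with a local path for development.
# The path below is a placeholder; adjust it to your rustnn checkout.
rustnn = { path = "../../../rustnn" }
```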
rustnn uses ONNX Runtime dynamic loading (`ort` with the `load-dynamic` feature). You need the ONNX Runtime shared libraries installed and discoverable.

Preferred setup:

- Install the ONNX Runtime shared library for your platform.
- Set one of:
  - `ORT_DYLIB_PATH` to the exact ORT library file.
  - `ORT_LIB_DIR` to a directory containing ORT libs.
Examples:

```bash
# macOS
export ORT_DYLIB_PATH=/path/to/libonnxruntime.dylib

# Linux
export ORT_DYLIB_PATH=/path/to/libonnxruntime.so

# Windows (PowerShell)
$env:ORT_DYLIB_PATH="C:\path\to\onnxruntime.dll"
```

The demo also tries common install locations automatically; explicit env vars remain the most reliable option.
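To illustrate the lookup order, a small helper in the spirit of these variables might resolve the library path as sketched below. This is not part of the repo's API, and the precedence (`ORT_DYLIB_PATH` winning over `ORT_LIB_DIR`) is an assumption for illustration.

```typescript
import * as path from "node:path";

// Platform-specific ONNX Runtime shared-library filename.
function ortLibraryFilename(platform: NodeJS.Platform = process.platform): string {
  switch (platform) {
    case "darwin":
      return "libonnxruntime.dylib";
    case "win32":
      return "onnxruntime.dll";
    default:
      return "libonnxruntime.so";
  }
}

// Resolve the ORT library path: an exact file via ORT_DYLIB_PATH first,
// then a directory via ORT_LIB_DIR, otherwise null (fall back to defaults).
export function resolveOrtLibrary(env: NodeJS.ProcessEnv = process.env): string | null {
  if (env.ORT_DYLIB_PATH) return env.ORT_DYLIB_PATH;
  if (env.ORT_LIB_DIR) return path.join(env.ORT_LIB_DIR, ortLibraryFilename());
  return null;
}
```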
Build:

```bash
npm install
npm run build
```

Equivalent Makefile flow:

```bash
make install
make build
```

Run the demo:

```bash
npm run demo
```

Makefile flow (downloads ORT the same way rustnn does, then runs the demo with the env vars set):

```bash
make demo
```

Optional demo overrides:

```bash
DEMO_PROMPT="The future of AI is" DEMO_MAX_NEW_TOKENS=32 npm run demo
```

With make:

```bash
DEMO_PROMPT="The future of AI is" DEMO_MAX_NEW_TOKENS=32 make demo
```

Demo behavior:
- Downloads/caches `tarekziade/SmolLM-135M-webnn` via `@huggingface/hub` `snapshotDownload`.
- Loads WebNN graph files from the snapshot directory.
- Uses `rustnn` to parse, validate, and lower to ONNX.
- Executes with the ONNX Runtime backend.
- Performs autoregressive generation and prints the generated text.
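The generation step in the last bullet is a standard greedy loop: run the model on the current token ids, take the argmax of the logits, append the winner, and repeat. A self-contained sketch of that loop follows; the `runModel` callback stands in for the real graph dispatch, and none of this is the demo's actual code.

```typescript
// One model step: token ids in, one logit per vocabulary entry out.
type StepFn = (tokens: number[]) => Float32Array;

// Index of the largest logit (greedy decoding).
function argmax(logits: Float32Array): number {
  let best = 0;
  for (let i = 1; i < logits.length; i++) {
    if (logits[i] > logits[best]) best = i;
  }
  return best;
}

// Greedy autoregressive generation: repeatedly run the model on the
// growing sequence and append the most likely next token.
export function generate(
  runModel: StepFn,
  promptTokens: number[],
  maxNewTokens: number,
  eosToken?: number,
): number[] {
  const tokens = [...promptTokens];
  for (let i = 0; i < maxNewTokens; i++) {
    const next = argmax(runModel(tokens));
    tokens.push(next);
    if (next === eosToken) break; // stop early on end-of-sequence
  }
  return tokens;
}
```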
This repo now includes a Makefile patterned after rustnn's ORT setup logic:
- `make onnxruntime-download` downloads the platform-specific ONNX Runtime package into `target/onnxruntime`.
- `make demo` injects `ORT_DYLIB_PATH` (and `LD_LIBRARY_PATH` on Linux) before running the demo.
- `make demo-only` runs the already-built demo with the same ORT env setup.
- `make clean` removes build artifacts and downloaded ORT files.
API:

- `installWebNNPolyfill()` attaches `navigator.ml` on `globalThis`
- `ml.createContext(options)`
- `new MLGraphBuilder(context)`
- `builder.input(name, descriptor)`
- `builder.constant(descriptor, data)`
- `builder.add(a, b)`
- `builder.mul(a, b)`
- `builder.build(outputs)`
- `context.createTensor(descriptor)`
- `context.writeTensor(tensor, data)`
- `context.dispatch(graph, inputs, outputs)`
- `context.readTensor(tensor)`
- `graph.destroy()`
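To show the WebNN-style calling order (`input` / `constant` -> `add` / `mul` -> `build`) without requiring the native addon, here is a runnable toy stand-in that mirrors the builder part of the surface above. It is only an illustration of the shape of the API; with the real polyfill you would call `installWebNNPolyfill()` and then use the tensor and `dispatch` methods on the context.

```typescript
// Toy types mirroring the builder API, so the calling order is runnable
// standalone. Shapes and semantics are deliberately simplified.
type Descriptor = { dataType: "float32"; shape: number[] };
type Operand = { compute: (feeds: Map<string, Float32Array>) => Float32Array };

class ToyGraphBuilder {
  input(name: string, _desc: Descriptor): Operand {
    return { compute: (feeds) => feeds.get(name)! };
  }
  constant(_desc: Descriptor, data: Float32Array): Operand {
    return { compute: () => data };
  }
  add(a: Operand, b: Operand): Operand {
    return {
      compute: (f) => {
        const av = a.compute(f);
        const bv = b.compute(f);
        return av.map((v, i) => v + bv[i]); // elementwise add
      },
    };
  }
  mul(a: Operand, b: Operand): Operand {
    return {
      compute: (f) => {
        const av = a.compute(f);
        const bv = b.compute(f);
        return av.map((v, i) => v * bv[i]); // elementwise mul
      },
    };
  }
  build(outputs: Record<string, Operand>) {
    return { outputs };
  }
}

// Mirrors the WebNN calling order for computing y = (x + 2) * 2.
export function runToyExample(): Float32Array {
  const builder = new ToyGraphBuilder();
  const desc: Descriptor = { dataType: "float32", shape: [2] };
  const x = builder.input("x", desc);
  const two = builder.constant(desc, new Float32Array([2, 2]));
  const y = builder.mul(builder.add(x, two), two);
  const graph = builder.build({ y });
  const feeds = new Map([["x", new Float32Array([1, 3])]]);
  return graph.outputs.y.compute(feeds);
}
```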
Node-only helper:
- `ml.loadModelFromHub(repoId, options)`