# Provably Correct Machine Learning
Machine learning where bugs are caught at compile time, not in production.
Axiom.jl is a next-generation ML framework that combines:

- **Compile-time verification** - Shape errors caught before runtime
- **Formal guarantees** - Mathematical proofs about model behavior
- **High performance** - Rust backend for production speed
- **Julia elegance** - Express models as mathematical specifications
```julia
using Axiom

@axiom ImageClassifier begin
    input  :: Image(224, 224, 3)
    output :: Probabilities(1000)

    features = input |> ResNet50(pretrained=true) |> GlobalAvgPool()
    output   = features |> Dense(2048, 1000) |> Softmax

    # GUARANTEED at compile time
    @ensure sum(output) ≈ 1.0
    @ensure all(output .>= 0)
end

model = ImageClassifier()
predictions = model(images)  # Type-safe, verified, fast
```

```julia
# PyTorch: runtime error after hours of training
# Axiom.jl: compile error in milliseconds
@axiom BrokenModel begin
    input :: Tensor{Float32, (224, 224, 3)}

    features = input |> Conv(64, (3,3))
    output = features |> Dense(10)  # COMPILE ERROR!
    # "Shape mismatch: Conv output is (222,222,64), Dense expects vector"
end
```
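The fix the error message points to is collapsing the spatial dimensions before the dense layer. A hypothetical corrected version (`FixedModel` is illustrative, not from the Axiom docs; `Flatten` is the layer used in the MNIST example below):

```julia
# Hypothetical fix: flatten the (222, 222, 64) conv output into a vector
# so the Dense layer's shape check passes at compile time.
@axiom FixedModel begin
    input :: Tensor{Float32, (224, 224, 3)}

    features = input |> Conv(64, (3,3)) |> Flatten
    output = features |> Dense(10)  # now receives the vector it expects
end
```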
```julia
@axiom SafeClassifier begin
    # ...
    @ensure valid_probabilities(output)  # Runtime assertion
    @prove ∀x. sum(softmax(x)) == 1.0    # Mathematical proof
end
```
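For intuition, `valid_probabilities` behaves like an ordinary Julia predicate. A minimal sketch of such a predicate (the built-in definition ships with Axiom and may differ, e.g. in tolerance handling):

```julia
# Hypothetical stand-in for Axiom's valid_probabilities predicate:
# non-negative entries that sum to 1 within a small tolerance.
valid_probabilities(p; atol=1e-6) =
    all(p .>= 0) && isapprox(sum(p), 1.0; atol=atol)
```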
```julia
# Generate verification certificates
cert = verify(model) |> generate_certificate
save_certificate(cert, "fda_submission.cert")
```

```julia
# Load any PyTorch model
model = from_pytorch("resnet50.pth")
# Get a 2-3x speedup immediately
output = model(input)  # Uses the Rust backend

# Add verification
result = verify(model, properties=[ValidProbabilities()])
```
```julia
using Axiom

# Define a simple classifier
model = Sequential(
    Dense(784, 256, relu),
    Dense(256, 10),
    Softmax()
)
# Generate sample data
x = randn(Float32, 32, 784)
# Inference
predictions = model(x)
# Verify properties
@ensure all(sum(predictions, dims=2) .≈ 1.0)
```
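The `@ensure` above is the Axiom-flavored version of a plain shape-and-normalization check. An equivalent check in base Julia, assuming the batch-major convention implied by `dims=2` (32 samples in, 10 class probabilities out):

```julia
# Base-Julia sanity check: predictions is 32×10 and each row sums to 1.
@assert size(predictions) == (32, 10)
@assert all(isapprox.(sum(predictions, dims=2), 1.0; atol=1e-5))
```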
```julia
using Axiom

@axiom MNISTClassifier begin
    input  :: Tensor{Float32, (:batch, 28, 28, 1)}
    output :: Probabilities(10)

    features = input |> Conv(32, (3,3)) |> ReLU |> MaxPool((2,2))
    features = features |> Conv(64, (3,3)) |> ReLU |> MaxPool((2,2))
    flat = features |> GlobalAvgPool() |> Flatten

    output = flat |> Dense(64, 10) |> Softmax

    @ensure valid_probabilities(output)
end
model = MNISTClassifier()
```
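A usage sketch, assuming the batch-major NHWC layout declared by the `(:batch, 28, 28, 1)` input type; the batch size of 32 is arbitrary:

```julia
# Hypothetical inference call: 32 single-channel 28×28 images,
# laid out per the declared input type.
x = randn(Float32, 32, 28, 28, 1)
probs = model(x)  # guaranteed by @ensure to be valid probabilities
```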
ML models are deployed in critical applications:

- Medical diagnosis
- Autonomous vehicles
- Financial systems
- Criminal justice
Yet our tools allow bugs to slip through to production.
Documentation:

- **Home** - Start here
- **Vision** - Why we built this
- **@axiom DSL** - Model definition guide
- **Verification** - @ensure and @prove
- **Migration Guide** - From PyTorch
- **FAQ** - Common questions
```
Axiom.jl/
├── src/                  # Julia source
│   ├── Axiom.jl          # Main module
│   ├── types/            # Tensor type system
│   ├── layers/           # Neural network layers
│   ├── dsl/              # @axiom macro system
│   ├── verification/     # @ensure, @prove
│   ├── training/         # Optimizers, loss functions
│   └── backends/         # Julia/Rust backends
├── rust/                 # Rust performance backend
│   ├── src/
│   │   ├── ops/          # Matrix, conv, activation ops
│   │   └── ffi.rs        # Julia FFI bindings
│   └── Cargo.toml
├── test/                 # Test suite
├── examples/             # Example models
└── docs/                 # Documentation & wiki
```
- ✓ v0.1 - Core framework, DSL, verification basics
- ❏ v0.2 - Full Rust backend, GPU support
- ❏ v0.3 - Hugging Face integration, model zoo
- ❏ v0.4 - Advanced proofs, SMT integration
- ❏ v1.0 - Production ready, industry certifications
We welcome contributions! See CONTRIBUTING.md. Areas where help is especially welcome:

- Bug reports and feature requests
- Documentation improvements
- New layers and operations
- Performance optimizations
- Verification methods
Axiom’s proof system is Julia-native by default. SMT solving runs through
`packages/SMTLib.jl` with no Rust dependency. The Rust SMT runner is an optional
backend you can enable for hardened subprocess control.
Julia-native example:

```julia
@prove ∃x. x > 0
```

Optional Rust runner:

```bash
export AXIOM_SMT_RUNNER=rust
export AXIOM_SMT_LIB=/path/to/libaxiom_core.so
export AXIOM_SMT_SOLVER=z3
```

```julia
@prove ∃x. x > 0
```

Palimpsest-MPL-1.0 License - see LICENSE for details.
Axiom.jl builds on the shoulders of giants.
The future of ML is verified.