- Improving Neural Network Calibration with Radius-Based Regularization
- Bachelor Thesis at Sapienza University of Rome
- Research Area: Model Calibration, Hyperbolic Geometry, Uncertainty Estimation
This project introduces a novel radius-based regularization technique for improving the calibration of neural networks. Inspired by hyperbolic geometry, the approach aligns embedding radii with predicted confidence to make probability estimates more reliable. The method significantly reduces Expected Calibration Error (ECE) and improves uncertainty estimation without requiring post-training calibration steps.
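For reference, ECE is the standard binned calibration metric: predictions are grouped into confidence bins, and the metric is the sample-weighted average gap between each bin's accuracy and its mean confidence. A minimal sketch (function name and binning choice are ours, not from the thesis):

```python
def expected_calibration_error(confidences, correct, n_bins=15):
    """Binned ECE: weighted average |accuracy - confidence| over confidence bins.

    confidences: predicted probabilities (e.g. max softmax) in [0, 1]
    correct: 1 if the corresponding prediction was right, else 0
    """
    bin_totals = [0] * n_bins    # samples falling in each bin
    bin_conf = [0.0] * n_bins    # summed confidence per bin
    bin_acc = [0.0] * n_bins     # summed correctness per bin
    for c, y in zip(confidences, correct):
        # bin index in [0, n_bins - 1]; confidence 1.0 goes to the last bin
        i = min(int(c * n_bins), n_bins - 1)
        bin_totals[i] += 1
        bin_conf[i] += c
        bin_acc[i] += y
    n = len(confidences)
    ece = 0.0
    for t, s_conf, s_acc in zip(bin_totals, bin_conf, bin_acc):
        if t:
            ece += (t / n) * abs(s_acc / t - s_conf / t)
    return ece
```

A perfectly calibrated model (e.g. 80% accuracy at 80% confidence) scores 0; an overconfident one scores higher.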
- Developed for both Euclidean & Hyperbolic neural networks
- Reduces calibration errors (Expected, Maximum, and Root-Mean-Square Calibration Error: ECE, MCE, RMSCE) by up to 50%
- Outperforms traditional regularization methods (Label Smoothing, Focal Loss)
- Title: Calibrating Neural Networks via Radius Regularization
- Status: Under Review
- Radius-Based Confidence Calibration: Predictive confidence is regularized using hyperbolic radii, ensuring better alignment with probability estimates.
- Hyperbolic & Euclidean Support: Works on both hyperbolic and standard deep learning architectures.
- No Post-Processing Required: Unlike traditional calibration techniques, our method integrates into training without additional tuning.
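To make the idea concrete, here is a hypothetical sketch of what a radius-based regularizer could look like: a penalty encouraging each embedding's norm (its radius, rescaled to [0, 1]) to agree with the model's predicted confidence, added to the task loss with some weight. The function name, the squared-error form, and `max_radius` are our illustrative assumptions; the actual formulation is defined in the thesis.

```python
import math

def radius_alignment_penalty(embeddings, confidences, max_radius=1.0):
    """Illustrative regularizer (NOT the thesis's exact loss): penalize the
    squared gap between each embedding's rescaled radius and the model's
    predicted confidence for that sample."""
    penalty = 0.0
    for emb, conf in zip(embeddings, confidences):
        radius = math.sqrt(sum(x * x for x in emb))   # Euclidean norm
        penalty += (radius / max_radius - conf) ** 2  # radius-confidence gap
    return penalty / len(embeddings)
```

Because such a term is differentiable in the embeddings, it can be minimized jointly with the classification loss during training, which is what lets the approach avoid any post-hoc calibration step.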
- Extending radius-based regularization to generative models
