51 changes: 51 additions & 0 deletions projects/interpretable-ml.yml
@@ -0,0 +1,51 @@
---
name: Towards Interpretable Machine Learning in High-Energy Physics
postdate: 2026-03-03
categories:
- ML/AI
durations:
- Any
experiments:
- Any
skillset:
- Python
- ML
status:
- Available
project:
- Any
location:
- Any
commitment:
- Any
program:
- Any
shortdescription: Survey interpretability techniques for ML models used in HEP, and propose practical guidelines for the field.
description: >
Machine learning is now everywhere in particle physics — from identifying what kind of particle
created a jet, to filtering interesting collisions in real time, to generating simulated data.
These models work impressively well, but we often have little idea why they make the decisions
they do.

A growing toolbox of interpretability methods exists: techniques like SHAP values and attention
maps can highlight which inputs matter most, and feature importance rankings from decision trees
can reveal what a model has learned. A [Nature Reviews Physics commentary](https://doi.org/10.1038/s42254-022-00456-0) argued
that interpretability is essential for ML in physics, yet there is no agreed-upon standard for
what "interpretable" even means in this context, let alone best practices for achieving it.
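To make the idea concrete, here is a minimal, self-contained sketch of one such method: permutation feature importance, a model-agnostic baseline against which SHAP-style attributions can be sanity-checked (this is not SHAP itself, and the toy `model`, data, and feature layout below are illustrative assumptions, not any real jet classifier):

```python
import random

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Shuffle one feature column at a time and measure the drop in
    accuracy; a large drop means the model relied on that feature."""
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(model(x) == t for x, t in zip(rows, y)) / len(y)

    base = accuracy(X)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)  # break the association between feature j and the label
            X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            drops.append(base - accuracy(X_perm))
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy "classifier" that only ever looks at feature 0
random.seed(1)
model = lambda x: int(x[0] > 0.5)
X = [[random.random(), random.random()] for _ in range(200)]
y = [model(x) for x in X]
imp = permutation_importance(model, X, y)
```

Shuffling feature 0 destroys the decision rule while feature 1 is ignored entirely, so `imp[0]` comes out large and `imp[1]` exactly zero; disagreement between scores like these and SHAP values on real jet classifiers is the kind of inconsistency this project would investigate.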

In this project, you will survey interpretability methods and compare them hands-on across
different ML tasks in high-energy physics. Starting from existing trained models, e.g. jet
classifiers, you will apply post-hoc explanation tools (such as SHAP and attention
visualisation), compare them against one another, and ask: do these methods agree? Do they
reveal real physics? Can we reverse-engineer what a model has learned and express it in terms
a physicist would recognise?

The main deliverable will be twofold: first, a practical set of guidelines covering when HEP
physicists should use which interpretability approach, what the pitfalls are, and where the open
problems lie; second, an open-source repository of tools for understanding ML models.

contacts:
- name: Liv Våge
email: liv.helen.vage@cern.ch

mentees:
55 changes: 55 additions & 0 deletions projects/logic-gate-nn.yml
@@ -0,0 +1,55 @@
---
name: Logic Gate Neural Networks for Jet Substructure Classification
postdate: 2026-03-03
categories:
- ML/AI
durations:
- Any
experiments:
- Any
skillset:
- Python
- ML
status:
- Available
project:
- IRIS-HEP
location:
- Any
commitment:
- Any
program:
- IRIS-HEP fellow
shortdescription: Build ultra-fast jet classifiers with differentiable logic gate networks
description: >
At the Large Hadron Collider (LHC), protons smash together billions of times per second,
producing sprays of particles called "jets." Figuring out what created each jet is one of the most
important classification problems in particle physics. Today's best classifiers use large neural networks,
but the LHC's trigger system needs to make decisions in under a microsecond on specialised hardware chips called FPGAs. These chips
are built from simple logic gates — tiny circuits that compute basic Boolean operations like AND,
OR, and XOR.

So what if we built a neural network entirely out of those same logic gates? That's exactly what
differentiable logic gate networks do. Instead of multiplying numbers through layers of neurons,
they wire together simple Boolean operations and learn which gate to use at each position. A
[NeurIPS 2022 paper](https://arxiv.org/abs/2210.08277) showed how to train these networks with
standard gradient descent — and the results are staggering: they are among the fastest machine
learning models, capable of classifying over a million images per second on a single
CPU core. This idea has already been applied to jet classification as a benchmark task, alongside
related approaches like [LogicNets and PolyLUT](https://arxiv.org/abs/2506.07367), and real-time
jet classification on trigger hardware has been [demonstrated with latencies around 100 ns](https://doi.org/10.1088/2632-2153/ad5f10).
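The core trick can be sketched in a few lines of plain Python (a conceptual illustration of the technique, not the torchlogix API): each two-input Boolean gate is replaced by a real-valued function that matches its truth table on inputs in {0, 1} but is differentiable in between, and each network node learns a softmax mixture over candidate gates.

```python
import math

# Real-valued relaxations: each reproduces the gate's truth table on
# {0, 1} inputs but is differentiable on [0, 1]^2, so gradients flow.
GATES = {
    "AND":  lambda a, b: a * b,
    "OR":   lambda a, b: a + b - a * b,
    "XOR":  lambda a, b: a + b - 2 * a * b,
    "NAND": lambda a, b: 1 - a * b,
}

def softmax(logits):
    m = max(logits)
    exps = [math.exp(w - m) for w in logits]
    total = sum(exps)
    return [e / total for e in exps]

class SoftGate:
    """One node of a differentiable logic gate network: a softmax-weighted
    mixture over candidate gates (one logit per gate). After training, the
    node is 'hardened' to its most probable gate for hardware deployment."""

    def __init__(self, logits):
        self.logits = logits

    def __call__(self, a, b):
        probs = softmax(self.logits)
        return sum(p * g(a, b) for p, g in zip(probs, GATES.values()))

    def harden(self):
        # Pick the most probable gate: this discrete circuit is what
        # would actually be synthesised onto an FPGA.
        probs = softmax(self.logits)
        best = max(zip(probs, GATES), key=lambda t: t[0])[1]
        return GATES[best]

# A node whose logits strongly favour XOR behaves almost exactly like XOR
node = SoftGate([0.0, 0.0, 10.0, 0.0])
hard_xor = node.harden()
```

Training nudges the logits toward one-hot, so the soft network and its hardened Boolean counterpart converge to the same function; the real library handles wiring many such nodes into layers and backpropagating through them.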

In this project you will work with the [HLS4ML jet tagging dataset](https://zenodo.org/records/3602260), which contains simulated LHC jets across five classes
(gluon, light quark, W, Z, and top). Using our library [torchlogix](https://github.com/ligerlac/torchlogix), you'll build a logic gate network
classifier, benchmark it against conventional neural network baselines, and explore how accuracy
trades off against network size and depth. Stretch goals include comparing against other
logic-gate-based approaches or exploring what it takes to get these models running on real
hardware.
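As a starting point for the benchmarking step, accuracy and throughput can be measured with a harness as simple as the following (illustrative only: the function name, the dummy data, and the trivial baseline model are assumptions, and the harness works with any callable classifier regardless of library):

```python
import time

def benchmark(model, X, y, name):
    """Report accuracy and raw single-core throughput of any classifier
    exposed as a callable mapping one feature vector to a class index."""
    t0 = time.perf_counter()
    preds = [model(x) for x in X]
    dt = max(time.perf_counter() - t0, 1e-9)  # guard against zero elapsed time
    acc = sum(p == t for p, t in zip(preds, y)) / len(y)
    print(f"{name}: accuracy={acc:.3f}, throughput={len(X) / dt:.0f} jets/s")
    return acc, dt

# Example with a trivial majority-class "model" on dummy 16-feature jets
X = [[0.0] * 16 for _ in range(1000)]
y = [0] * 1000
acc, dt = benchmark(lambda x: 0, X, y, "majority-class baseline")
```

Sweeping the same harness over logic gate networks of different sizes and depths, alongside a conventional neural network baseline, yields the accuracy-versus-cost trade-off curves the project asks for.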

contacts:
- name: Liv Våge
email: liv.helen.vage@cern.ch
- name: Lino Gerlach
email: lino.oscar.gerlach@cern.ch

mentees: