From d4abce9791df553e432eb2213d50d9bd8f3b5b62 Mon Sep 17 00:00:00 2001 From: liv Date: Tue, 3 Mar 2026 11:10:15 +0100 Subject: [PATCH 1/6] add logic gate project --- projects/logic-gate-nn.yml | 59 ++++++++++++++++++++++++++++++++++++++ 1 file changed, 59 insertions(+) create mode 100644 projects/logic-gate-nn.yml diff --git a/projects/logic-gate-nn.yml b/projects/logic-gate-nn.yml new file mode 100644 index 0000000..0dc39e0 --- /dev/null +++ b/projects/logic-gate-nn.yml @@ -0,0 +1,59 @@ + +--- +name: Logic Gate Neural Networks for Jet Substructure Classification +postdate: 2026-03-03 +categories: + - ML/AI +durations: + - Any +experiments: + - Any +skillset: + - Python + - ML +status: + - Available +project: + IRIS-HEP +location: + - Any +commitment: + - Any +program: + - IRIS-HEP fellow +shortdescription: Build ultra-fast jet classifiers with differentiable logic gate networks +description: + At the Large Hadron Collider (LHC), protons smash together billions of times per second, + producing sprays of particles called "jets." Figuring out what created each jet is one of the most + important classification problems in particle physics. Today's best classifiers use large neural networks, + but the LHC's trigger system needs to make decisions in under a microsecond on specialised hardware chips called FPGAs. These chips + are built from simple logic gates — tiny circuits that compute basic Boolean operations like AND, + OR, and XOR. + + So what if we built a neural network entirely out of those same logic gates? That's exactly what + differentiable logic gate networks do. Instead of multiplying numbers through layers of neurons, + they wire together simple Boolean operations and learn which gate to use at each position. 
A + NeurIPS 2022 paper (https://arxiv.org/abs/2210.08277) showed how to train these networks with + standard gradient descent — and the results are staggering: they are among the fastest machine + learning models, capable of classifying over a million images per second on a single + CPU core. This idea has already been applied to jet classification as a benchmark task, alongside + related approaches like LogicNets and PolyLUT (https://arxiv.org/abs/2506.07367), and real-time + jet classification on trigger hardware has been demonstrated with latencies around 100 nanoseconds + (https://doi.org/10.1088/2632-2153/ad5f10). + + In this project you will work with the HLS4ML jet tagging dataset + (https://zenodo.org/records/3602260), which contains simulated LHC jets across five classes + (gluon, light quark, W, Z, and top). You will build and train a logic gate network classifier in + PyTorch, benchmark it against conventional neural network baselines, and explore how accuracy + trades off against network size and depth. Stretch goals include comparing against other + logic-gate-based approaches or exploring what it takes to get these models running on real + hardware. + + +contacts: + - name: Liv Våge + email: liv.helen.vage@cern.ch + - name: Lino Gerlach + email: lino.gerlach@cern.ch + +mentees: \ No newline at end of file From d79881183b330f5c42aad57801c57fe1dbde26f5 Mon Sep 17 00:00:00 2001 From: liv Date: Tue, 3 Mar 2026 14:44:19 +0100 Subject: [PATCH 2/6] add interpretable ml project --- projects/interpretable-ml.yml | 58 +++++++++++++++++++++++++++++++++++ 1 file changed, 58 insertions(+) create mode 100644 projects/interpretable-ml.yml diff --git a/projects/interpretable-ml.yml b/projects/interpretable-ml.yml new file mode 100644 index 0000000..bee5509 --- /dev/null +++ b/projects/interpretable-ml.yml @@ -0,0 +1,58 @@ +#remove commented text (after "#") in your project yml, including this line.. 
+#See the project_metadata.yml file in this repository for expected responses to each attribute. If you need
+#to add additional responses, please modify project_metadata.yml accordingly
+---
+name: Towards Interpretable Machine Learning in High-Energy Physics
+postdate: 2026-03-03
+categories:
+  - ML/AI
+durations:
+  - Any
+experiments:
+  - Any
+skillset:
+  - Python
+  - ML
+status:
+  - Available
+project:
+  - Any # o if associated to a community project, add it here (from those listed in project_metadata.yml)
+location:
+  - Any # otherwise "Remote" or "In person"
+commitment:
+  - Any # otherwise "Part time" or "Full time"
+program:
+  - Any
+shortdescription: Survey interpretability techniques for ML models used in HEP, and propose practical guidelines for the field.
+description:
+  Machine learning is now everywhere in particle physics — from identifying what kind of particle
+  created a jet, to filtering interesting collisions in real time, to generating simulated data.
+  These models work impressively well, but we often have little idea why they make the decisions
+  they do.
+
+  A growing toolbox of interpretability methods exists — techniques like SHAP values and attention
+  maps can highlight which inputs matter most, and feature importance rankings from decision trees can
+  reveal what a model has learned. A Nature Reviews Physics commentary (https://doi.org/10.1038/s42254-022-00456-0) argued
+  that interpretability is essential for ML in physics, yet there is no agreed-upon standard for
+  what "interpretable" even means in this context, let alone best practices for achieving it.
+
+  In this project, you will survey interpretability methods and compare them hands-on across different
+  ML tasks in high-energy physics. Starting from existing trained models, e.g. jet
+  classifiers, you will apply post-hoc explanation tools (such as
+  SHAP and attention visualisation), compare them against other approaches, and ask: do these methods agree? Do they
+  reveal real physics?
Can we reverse-engineer what a model has + learned and express it in terms a physicist would recognise? + + The main deliverable will be twofold - firsly a practical set of guidelines: when should HEP physicists use which + interpretability approach, what are the pitfalls, and where are the open problems? Secondly: an open source repository of + tools that can be used to understand ML models. + +contacts: + - name: Liv Våge + email: liv.helen.vage@cern.ch + + +mentees: # keep an empty list until the project has started or a student is identified +# when that happens add a list with name: and link: attributes for each students +# - name: Students name +# - link: #url for project page From fbc745d903d99e9a89d28dcb19f89f91168a6f59 Mon Sep 17 00:00:00 2001 From: liv Date: Tue, 3 Mar 2026 14:52:26 +0100 Subject: [PATCH 3/6] fix: small formatting issue --- projects/interpretable-ml.yml | 15 +++++---------- 1 file changed, 5 insertions(+), 10 deletions(-) diff --git a/projects/interpretable-ml.yml b/projects/interpretable-ml.yml index bee5509..0c143b2 100644 --- a/projects/interpretable-ml.yml +++ b/projects/interpretable-ml.yml @@ -1,6 +1,4 @@ -#remove commented text (after "#") in your project yml, including this line.. -#See the project_metadata.yml file in this repository for expected responses to each attribute. If you need -#to add additional responses, please modify project_metadata.yml accordingly + --- name: Towards Interpretable Machine Learning in High-Energy Physics postdate: 2026-03-03 @@ -16,11 +14,11 @@ skillset: status: - Available project: - - Any # o if associated to a community project, add it here (from those listed in project_metadata.yml) + - Any location: - - Any # otherwise "Remote" or "In person" + - Any commitment: - - Any # otherwise "Part time" or "Full time" + - Any program: - Any shortdescription: Survey interpretability techniques for ML models used in HEP, and propose practical guidelines for the field. 
@@ -52,7 +50,4 @@ contacts:
     email: liv.helen.vage@cern.ch
 
 
-mentees: # keep an empty list until the project has started or a student is identified
-# when that happens add a list with name: and link: attributes for each students
-# - name: Students name
-# - link: #url for project page
+mentees:
\ No newline at end of file
From 7045238b38dbdbf70295ffcb6552bd1e6771c5fc Mon Sep 17 00:00:00 2001
From: liv
Date: Tue, 3 Mar 2026 15:30:17 +0100
Subject: [PATCH 4/6] fix: minor yaml issues

---
 projects/interpretable-ml.yml |  6 +++---
 projects/logic-gate-nn.yml    | 16 +++++++---------
 2 files changed, 10 insertions(+), 12 deletions(-)

diff --git a/projects/interpretable-ml.yml b/projects/interpretable-ml.yml
index 0c143b2..5694bff 100644
--- a/projects/interpretable-ml.yml
+++ b/projects/interpretable-ml.yml
@@ -22,7 +22,7 @@ commitment:
 program:
   - Any
 shortdescription: Survey interpretability techniques for ML models used in HEP, and propose practical guidelines for the field.
-description:
+description: >
   Machine learning is now everywhere in particle physics — from identifying what kind of particle
   created a jet, to filtering interesting collisions in real time, to generating simulated data.
   These models work impressively well, but we often have little idea why they make the decisions
   they do.
@@ -30,7 +30,7 @@ description:
   A growing toolbox of interpretability methods exists — techniques like SHAP values and attention
   maps can highlight which inputs matter most, and feature importance rankings from decision trees can
-  reveal what a model has learned. A Nature Reviews Physics commentary (https://doi.org/10.1038/s42254-022-00456-0) argued
+  reveal what a model has learned. A [Nature Reviews Physics commentary](https://doi.org/10.1038/s42254-022-00456-0) argued
   that interpretability is essential for ML in physics, yet there is no agreed-upon standard for
   what "interpretable" even means in this context, let alone best practices for achieving it.
@@ -41,7 +41,7 @@ description: reveal real physics? Can we reverse-engineer what a model has learned and express it in terms a physicist would recognise? - The main deliverable will be twofold - firsly a practical set of guidelines: when should HEP physicists use which + The main deliverable will be twofold — firstly a practical set of guidelines: when should HEP physicists use which interpretability approach, what are the pitfalls, and where are the open problems? Secondly: an open source repository of tools that can be used to understand ML models. diff --git a/projects/logic-gate-nn.yml b/projects/logic-gate-nn.yml index 0dc39e0..b4fdd7f 100644 --- a/projects/logic-gate-nn.yml +++ b/projects/logic-gate-nn.yml @@ -14,7 +14,7 @@ skillset: status: - Available project: - IRIS-HEP + - IRIS-HEP location: - Any commitment: @@ -22,7 +22,7 @@ commitment: program: - IRIS-HEP fellow shortdescription: Build ultra-fast jet classifiers with differentiable logic gate networks -description: +description: > At the Large Hadron Collider (LHC), protons smash together billions of times per second, producing sprays of particles called "jets." Figuring out what created each jet is one of the most important classification problems in particle physics. Today's best classifiers use large neural networks, @@ -33,16 +33,14 @@ description: So what if we built a neural network entirely out of those same logic gates? That's exactly what differentiable logic gate networks do. Instead of multiplying numbers through layers of neurons, they wire together simple Boolean operations and learn which gate to use at each position. 
A - NeurIPS 2022 paper (https://arxiv.org/abs/2210.08277) showed how to train these networks with + [NeurIPS 2022 paper](https://arxiv.org/abs/2210.08277) showed how to train these networks with standard gradient descent — and the results are staggering: they are among the fastest machine learning models, capable of classifying over a million images per second on a single CPU core. This idea has already been applied to jet classification as a benchmark task, alongside - related approaches like LogicNets and PolyLUT (https://arxiv.org/abs/2506.07367), and real-time - jet classification on trigger hardware has been demonstrated with latencies around 100 nanoseconds - (https://doi.org/10.1088/2632-2153/ad5f10). + related approaches like [LogicNets and PolyLUT](https://arxiv.org/abs/2506.07367), and real-time + jet classification on trigger hardware has been [demonstrated with latencies around 100 ns](https://doi.org/10.1088/2632-2153/ad5f10). - In this project you will work with the HLS4ML jet tagging dataset - (https://zenodo.org/records/3602260), which contains simulated LHC jets across five classes + In this project you will work with the [HLS4ML jet tagging dataset](https://zenodo.org/records/3602260), which contains simulated LHC jets across five classes (gluon, light quark, W, Z, and top). You will build and train a logic gate network classifier in PyTorch, benchmark it against conventional neural network baselines, and explore how accuracy trades off against network size and depth. 
Stretch goals include comparing against other
   logic-gate-based approaches or exploring what it takes to get these models running on real
   hardware.
 
 
 contacts:
   - name: Liv Våge
     email: liv.helen.vage@cern.ch
   - name: Lino Gerlach
-    email: lino.gerlach@cern.ch
+    email: lino.oscar.gerlach@cern.ch
 
 mentees:
\ No newline at end of file
From ab3895f80f81e9ee06110e908061abcd87d39d6e Mon Sep 17 00:00:00 2001
From: liv
Date: Tue, 3 Mar 2026 15:47:48 +0100
Subject: [PATCH 5/6] fix: add torchlogix to description

---
 projects/logic-gate-nn.yml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/projects/logic-gate-nn.yml b/projects/logic-gate-nn.yml
index b4fdd7f..cc220b3 100644
--- a/projects/logic-gate-nn.yml
+++ b/projects/logic-gate-nn.yml
@@ -41,8 +41,8 @@ description: >
   jet classification on trigger hardware has been [demonstrated with latencies around 100 ns](https://doi.org/10.1088/2632-2153/ad5f10).
 
   In this project you will work with the [HLS4ML jet tagging dataset](https://zenodo.org/records/3602260), which contains simulated LHC jets across five classes
-  (gluon, light quark, W, Z, and top). You will build and train a logic gate network classifier in
-  PyTorch, benchmark it against conventional neural network baselines, and explore how accuracy
+  (gluon, light quark, W, Z, and top). Using our library [torchlogix](https://github.com/ligerlac/torchlogix) you'll build a logic gate network classifier,
+  benchmark it against conventional neural network baselines, and explore how accuracy
   trades off against network size and depth. Stretch goals include comparing against other
   logic-gate-based approaches or exploring what it takes to get these models running on real
   hardware.
From 99e173ff667c955cba530c4c7b9df9306f56c6c6 Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]" <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Tue, 3 Mar 2026 14:49:55 +0000
Subject: [PATCH 6/6] [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci
---
 projects/interpretable-ml.yml | 26 ++++++++++++--------------
 projects/logic-gate-nn.yml    | 22 ++++++++++------------
 2 files changed, 22 insertions(+), 26 deletions(-)

diff --git a/projects/interpretable-ml.yml b/projects/interpretable-ml.yml
index 5694bff..a8d20c7 100644
--- a/projects/interpretable-ml.yml
+++ b/projects/interpretable-ml.yml
@@ -1,24 +1,23 @@
-
 ---
 name: Towards Interpretable Machine Learning in High-Energy Physics
 postdate: 2026-03-03
 categories:
   - ML/AI
 durations:
-  - Any
+  - Any
 experiments:
-  - Any
+  - Any
 skillset:
   - Python
   - ML
 status:
   - Available
 project:
-  - Any
+  - Any
 location:
-  - Any
+  - Any
 commitment:
-  - Any
+  - Any
 program:
   - Any
 shortdescription: Survey interpretability techniques for ML models used in HEP, and propose practical guidelines for the field.
@@ -26,28 +25,27 @@ description: >
   Machine learning is now everywhere in particle physics — from identifying what kind of particle
   created a jet, to filtering interesting collisions in real time, to generating simulated data.
   These models work impressively well, but we often have little idea why they make the decisions
-  they do.
+  they do.
 
   A growing toolbox of interpretability methods exists — techniques like SHAP values and attention
   maps can highlight which inputs matter most, and feature importance rankings from decision trees can
   reveal what a model has learned. A [Nature Reviews Physics commentary](https://doi.org/10.1038/s42254-022-00456-0) argued
   that interpretability is essential for ML in physics, yet there is no agreed-upon standard for
   what "interpretable" even means in this context, let alone best practices for achieving it.
-
+
   In this project, you will survey interpretability methods and compare them hands-on across different
   ML tasks in high-energy physics. Starting from existing trained models, e.g. jet
   classifiers, you will apply post-hoc explanation tools (such as
   SHAP and attention visualisation), compare them against other approaches, and ask: do these methods agree? Do they
   reveal real physics? Can we reverse-engineer what a model has
   learned and express it in terms a physicist would recognise?
-
+
   The main deliverable will be twofold — firstly a practical set of guidelines: when should HEP physicists use which
-  interpretability approach, what are the pitfalls, and where are the open problems? Secondly: an open source repository of
-  tools that can be used to understand ML models.
-
+  interpretability approach, what are the pitfalls, and where are the open problems? Secondly: an open source repository of
+  tools that can be used to understand ML models.
+
 contacts:
   - name: Liv Våge
     email: liv.helen.vage@cern.ch
-
-mentees: \ No newline at end of file
+mentees:
diff --git a/projects/logic-gate-nn.yml b/projects/logic-gate-nn.yml
index cc220b3..0956f41 100644
--- a/projects/logic-gate-nn.yml
+++ b/projects/logic-gate-nn.yml
@@ -1,13 +1,12 @@
-
 ---
 name: Logic Gate Neural Networks for Jet Substructure Classification
 postdate: 2026-03-03
 categories:
   - ML/AI
 durations:
-  - Any
+  - Any
 experiments:
-  - Any
+  - Any
 skillset:
   - Python
   - ML
 status:
   - Available
 project:
   - IRIS-HEP
 location:
-  - Any
+  - Any
 commitment:
-  - Any
+  - Any
 program:
   - IRIS-HEP fellow
 shortdescription: Build ultra-fast jet classifiers with differentiable logic gate networks
 description: >
   At the Large Hadron Collider (LHC), protons smash together billions of times per second,
-  producing sprays of particles called "jets." Figuring out what created each jet is one of the most
-  important classification problems in particle physics. Today's best classifiers use large neural networks,
+  producing sprays of particles called "jets." Figuring out what created each jet is one of the most
+  important classification problems in particle physics. Today's best classifiers use large neural networks,
   but the LHC's trigger system needs to make decisions in under a microsecond on specialised hardware chips called FPGAs. These chips
   are built from simple logic gates — tiny circuits that compute basic Boolean operations like AND,
   OR, and XOR.
-
+
   So what if we built a neural network entirely out of those same logic gates? That's exactly what
   differentiable logic gate networks do. Instead of multiplying numbers through layers of neurons,
   they wire together simple Boolean operations and learn which gate to use at each position. A
   [NeurIPS 2022 paper](https://arxiv.org/abs/2210.08277) showed how to train these networks with
   standard gradient descent — and the results are staggering: they are among the fastest machine
   learning models, capable of classifying over a million images per second on a single
   CPU core. This idea has already been applied to jet classification as a benchmark task, alongside
   related approaches like [LogicNets and PolyLUT](https://arxiv.org/abs/2506.07367), and real-time
   jet classification on trigger hardware has been [demonstrated with latencies around 100 ns](https://doi.org/10.1088/2632-2153/ad5f10).
-
+
   In this project you will work with the [HLS4ML jet tagging dataset](https://zenodo.org/records/3602260), which contains simulated LHC jets across five classes
   (gluon, light quark, W, Z, and top). Using our library [torchlogix](https://github.com/ligerlac/torchlogix) you'll build a logic gate network classifier,
   benchmark it against conventional neural network baselines, and explore how accuracy
   trades off against network size and depth. Stretch goals include comparing against other
   logic-gate-based approaches or exploring what it takes to get these models running on real
   hardware.
-
 contacts:
   - name: Liv Våge
     email: liv.helen.vage@cern.ch
-  - name: Lino Gerlach
+  - name: Lino Gerlach
     email: lino.oscar.gerlach@cern.ch

-mentees: \ No newline at end of file
+mentees:
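As background for reviewers of the logic-gate project above: the "soft gate" relaxation that makes these networks trainable with gradient descent can be sketched in a few lines. This is an illustrative sketch only, not torchlogix's actual API; the gate ordering and function names are assumptions, and NumPy stands in for the PyTorch implementation.

```python
import numpy as np

# The 16 two-input Boolean gates, each replaced by a differentiable "soft"
# relaxation on inputs a, b in [0, 1]: AND -> a*b, OR -> a + b - a*b, etc.
# On binary inputs each expression reproduces its gate's truth table.
SOFT_GATES = [
    lambda a, b: np.zeros_like(a),         # FALSE
    lambda a, b: a * b,                    # AND
    lambda a, b: a - a * b,                # a AND NOT b
    lambda a, b: a,                        # a
    lambda a, b: b - a * b,                # NOT a AND b
    lambda a, b: b,                        # b
    lambda a, b: a + b - 2 * a * b,        # XOR
    lambda a, b: a + b - a * b,            # OR
    lambda a, b: 1 - (a + b - a * b),      # NOR
    lambda a, b: 1 - (a + b - 2 * a * b),  # XNOR
    lambda a, b: 1 - b,                    # NOT b
    lambda a, b: 1 - b + a * b,            # a OR NOT b
    lambda a, b: 1 - a,                    # NOT a
    lambda a, b: 1 - a + a * b,            # NOT a OR b
    lambda a, b: 1 - a * b,                # NAND
    lambda a, b: np.ones_like(a),          # TRUE
]

def soft_gate_neuron(a, b, logits):
    """One logic-gate 'neuron': a softmax over 16 logits chooses which gate
    this position computes, and the output is the expectation under that
    distribution, so gradients can flow back into the gate choice."""
    w = np.exp(logits - logits.max())  # numerically stable softmax
    w /= w.sum()
    return sum(wi * g(a, b) for wi, g in zip(w, SOFT_GATES))
```

After training, each neuron is discretised by keeping only its argmax gate, which is what lets the whole network map directly onto FPGA logic.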