Merged
71 commits
9c49c57
Add multiobjective solver and regularized training (#783)
gonlairo May 30, 2024
cbcb229
Fix tests
stes Aug 24, 2024
97ad03f
Apply fixes to pass ruff tests
stes Jan 25, 2025
36282d0
Fix typos
stes Jan 25, 2025
5fa5b42
Update license headers, fix additional ruff errors
stes Jan 25, 2025
9ef548d
remove unused comment
stes Jan 25, 2025
a9e7027
Merge branch 'main' into aistats2025
MMathisLab Feb 18, 2025
a2f7117
rename regcl in codebase
stes Feb 18, 2025
761f373
change regcl name in dockerfile
stes Feb 18, 2025
2052ec9
Improve attribution module
stes Feb 18, 2025
5e483d0
Fix imports name naming
stes Feb 19, 2025
4254173
add basic integration test
stes Feb 19, 2025
1d69957
temp disable of binary check
stes Feb 19, 2025
1d10668
Add legacy multiobjective model for backward compat
stes Feb 19, 2025
5e30829
add synth import back in
stes Feb 19, 2025
458958f
Fix docstrings and type annot in cebra/models/jacobian_regularizer.py
stes Feb 19, 2025
d906ad5
add xcebra to tests
stes Feb 19, 2025
6f91018
add missing cvxpy dep
stes Feb 19, 2025
df4f661
fix docstrings
stes Feb 19, 2025
d81b93d
more docstrings to fix attr error
stes Feb 19, 2025
e54c0d1
Merge branch 'main' into aistats2025
MMathisLab Mar 2, 2025
1f57b30
Merge branch 'main' into aistats2025
MMathisLab Mar 15, 2025
ea7ae46
Merge branch 'main' into aistats2025
MMathisLab Apr 14, 2025
6e6915a
Merge branch 'main' into aistats2025
stes Apr 17, 2025
73238ba
Improve build setup for docs
stes Apr 17, 2025
7fb7393
update pydata theme options
stes Apr 17, 2025
34836ee
Add README for docs folder
stes Apr 17, 2025
cc5f3ef
Fix demo notebook build
stes Apr 17, 2025
f4a08b5
Finish build setup
stes Apr 17, 2025
92c5e9e
update git workflow
stes Apr 17, 2025
dc82226
Merge remote-tracking branch 'origin/stes/upgrade-docs-rebased' into …
stes Apr 17, 2025
6b6af82
Move demo notebooks to CEBRA-demos repo
stes Apr 17, 2025
cf5f5a2
revert unneeded changes in solver
stes Apr 17, 2025
ff12a3d
formatting in solver
stes Apr 17, 2025
2db1d22
further minimize solver diff
stes Apr 17, 2025
9d982be
Revert unneeded updates to the solver
stes Apr 17, 2025
e20fda1
fix citation
stes Apr 17, 2025
61cb9b7
fix docs build, missing refs
stes Apr 17, 2025
74988ac
remove file dependency from xcebra int test
stes Apr 17, 2025
1a8fd96
remove unneeded change in registry
stes Apr 17, 2025
d382c4c
update gitignore
stes Apr 17, 2025
446cc67
update docs
stes Apr 17, 2025
9123923
exclude some assets
stes Apr 17, 2025
e4faad6
include binary file check again
stes Apr 17, 2025
8e3c83e
add timeout to workflow
stes Apr 17, 2025
e794ee1
add timeout also to docs build
stes Apr 17, 2025
8f236d1
switch build back to sphinx for gh actions
stes Apr 17, 2025
24d6402
pin sphinx version in setup.cfg
stes Apr 17, 2025
1f64a12
attempt workflow fix
stes Apr 17, 2025
69e22d7
attempt to fix build workflow
stes Apr 17, 2025
1b8e1d7
update to sphinx-build
stes Apr 17, 2025
8f903c8
fix build workflow
stes Apr 17, 2025
1f924b2
fix indent error
stes Apr 17, 2025
f4cd549
fix build system
stes Apr 17, 2025
f2dd965
revert demos to main
stes Apr 17, 2025
6e7104a
Merge remote-tracking branch 'origin/stes/upgrade-docs-rebased' into …
stes Apr 17, 2025
691bb12
adapt workflow for testing
stes Apr 17, 2025
49c7b10
bump version to 0.6.0rc1
stes Apr 17, 2025
9462caf
format imports
stes Apr 17, 2025
e4d717d
docs writing
stes Apr 17, 2025
a7c9562
enable build on dev branch
stes Apr 17, 2025
df6679d
fix some review comments
stes Apr 17, 2025
f5dc743
extend multiobjective docs
stes Apr 17, 2025
7435d2f
Set version to alpha
stes Apr 17, 2025
ea37d02
make tempdir platform independent
stes Apr 17, 2025
90e9bbf
Merge branch 'main' into aistats2025
stes Apr 18, 2025
cadd612
Remove ratinabox and ephysiopy as deps
stes Apr 18, 2025
7f278b1
Apply review comments
stes Apr 20, 2025
3978687
Merge branch 'main' into aistats2025
MMathisLab Apr 20, 2025
e311a14
Merge branch 'main' into aistats2025
MMathisLab Apr 23, 2025
ec95857
Update Makefile
MMathisLab Apr 23, 2025
6 changes: 5 additions & 1 deletion .github/workflows/docs.yml
@@ -47,7 +47,11 @@ jobs:
with:
repository: AdaptiveMotorControlLab/cebra-demos
path: docs/source/demo_notebooks
ref: main
# NOTE(stes): This is a temporary branch to add the xCEBRA demo notebooks
# to the docs. Once the notebooks are merged into main, we can remove this
# branch and change the ref to main.
# ref: main
ref: stes/add-xcebra

- name: Set up Python 3.10
uses: actions/setup-python@v5
10 changes: 10 additions & 0 deletions .gitignore
@@ -7,6 +7,16 @@ experiments/sweeps
exports/
demo_notebooks/
assets/
.remove

# demo run
.vscode/
auxiliary_behavior_data.h5
cebra_model.pt
data.npz
grid_search_models/
neural_data.npz
saved_models/

# demo run
.vscode/
2 changes: 1 addition & 1 deletion Dockerfile
@@ -40,7 +40,7 @@ RUN make dist
FROM cebra-base

# install the cebra wheel
ENV WHEEL=cebra-0.5.0-py3-none-any.whl
ENV WHEEL=cebra-0.6.0a1-py3-none-any.whl
WORKDIR /build
COPY --from=wheel /build/dist/${WHEEL} .
RUN pip install --no-cache-dir ${WHEEL}'[dev,integrations,datasets]'
4 changes: 2 additions & 2 deletions Makefile
@@ -1,4 +1,4 @@
CEBRA_VERSION := 0.5.0
CEBRA_VERSION := 0.6.0a1

dist:
python3 -m pip install virtualenv
@@ -55,7 +55,7 @@ interrogate:
--ignore-private \
--ignore-magic \
--omit-covered-files \
-f 90 \
-f 80 \
cebra

# Build documentation using sphinx
80 changes: 80 additions & 0 deletions NOTICE.yml
@@ -35,3 +35,83 @@
- 'tests/**/*.py'
- 'docs/**/*.py'
- 'conda/**/*.yml'

- header: |
CEBRA: Consistent EmBeddings of high-dimensional Recordings using Auxiliary variables
© Mackenzie W. Mathis & Steffen Schneider (v0.4.0+)
Source code:
https://github.com/AdaptiveMotorControlLab/CEBRA
Please see LICENSE.md for the full license document:
https://github.com/AdaptiveMotorControlLab/CEBRA/blob/main/LICENSE.md
Adapted from https://github.com/rpatrik96/nl-causal-representations/blob/master/care_nl_ica/dep_mat.py,
licensed under the following MIT License:
MIT License
Copyright (c) 2022 Patrik Reizinger
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
include:
- 'cebra/attribution/jacobian.py'


- header: |
CEBRA: Consistent EmBeddings of high-dimensional Recordings using Auxiliary variables
© Mackenzie W. Mathis & Steffen Schneider (v0.4.0+)
Source code:
https://github.com/AdaptiveMotorControlLab/CEBRA
Please see LICENSE.md for the full license document:
https://github.com/AdaptiveMotorControlLab/CEBRA/blob/main/LICENSE.md
This file contains the PyTorch implementation of Jacobian regularization described in [1].
Judy Hoffman, Daniel A. Roberts, and Sho Yaida,
"Robust Learning with Jacobian Regularization," 2019.
[arxiv:1908.02729](https://arxiv.org/abs/1908.02729)
Adapted from https://github.com/facebookresearch/jacobian_regularizer/blob/main/jacobian/jacobian.py
licensed under the following MIT License:
MIT License
Copyright (c) Facebook, Inc. and its affiliates.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
include:
- 'cebra/models/jacobian_regularizer.py'
2 changes: 1 addition & 1 deletion PKGBUILD
@@ -1,7 +1,7 @@
# Maintainer: Steffen Schneider <stes@hey.com>
pkgname=python-cebra
_pkgname=cebra
pkgver=0.5.0
pkgver=0.6.0a1
pkgrel=1
pkgdesc="Consistent Embeddings of high-dimensional Recordings using Auxiliary variables"
url="https://cebra.ai"
2 changes: 1 addition & 1 deletion cebra/__init__.py
@@ -66,7 +66,7 @@

import cebra.integrations.sklearn as sklearn

__version__ = "0.5.0"
__version__ = "0.6.0a1"
__all__ = ["CEBRA"]
__allow_lazy_imports = False
__lazy_imports = {}
38 changes: 38 additions & 0 deletions cebra/attribution/__init__.py
@@ -0,0 +1,38 @@
#
# CEBRA: Consistent EmBeddings of high-dimensional Recordings using Auxiliary variables
# © Mackenzie W. Mathis & Steffen Schneider (v0.4.0+)
# Source code:
# https://github.com/AdaptiveMotorControlLab/CEBRA
#
# Please see LICENSE.md for the full license document:
# https://github.com/AdaptiveMotorControlLab/CEBRA/blob/main/LICENSE.md
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""Attribution methods for CEBRA.

This module was added in v0.6.0 and contains attribution methods described and benchmarked
in [Schneider2025]_.


.. [Schneider2025] Schneider, S., González Laiz, R., Filippova, A., Frey, M., & Mathis, M. W. (2025).
Time-series attribution maps with regularized contrastive learning.
The 28th International Conference on Artificial Intelligence and Statistics.
https://openreview.net/forum?id=aGrCXoTB4P
"""
import cebra.registry

cebra.registry.add_helper_functions(__name__)

from cebra.attribution.attribution_models import *
from cebra.attribution.jacobian_attribution import *
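
Because `cebra.registry.add_helper_functions(__name__)` wires this module into CEBRA's registry, the attribution methods pulled in from `attribution_models` and `jacobian_attribution` should be discoverable through the usual registry helpers. A minimal sketch, assuming the conventional `get_options`/`init` helpers are added by the registry and using a placeholder method name that is not taken from this PR:

    import cebra.attribution

    # List the attribution methods registered by attribution_models / jacobian_attribution.
    print(cebra.attribution.get_options())

    # Instantiate one by its registry name (the name below is purely illustrative).
    # method = cebra.attribution.init("jacobian-based", model=trained_model, input_data=neural_data)
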
142 changes: 142 additions & 0 deletions cebra/attribution/_jacobian.py
@@ -0,0 +1,142 @@
#
# CEBRA: Consistent EmBeddings of high-dimensional Recordings using Auxiliary variables
# © Mackenzie W. Mathis & Steffen Schneider (v0.4.0+)
# Source code:
# https://github.com/AdaptiveMotorControlLab/CEBRA
#
# Please see LICENSE.md for the full license document:
# https://github.com/AdaptiveMotorControlLab/CEBRA/blob/main/LICENSE.md
#
# Adapted from https://github.com/rpatrik96/nl-causal-representations/blob/master/care_nl_ica/dep_mat.py,
# licensed under the following MIT License:
#
# MIT License
#
# Copyright (c) 2022 Patrik Reizinger
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
#

from typing import Union

import numpy as np
import torch


def tensors_to_cpu_and_double(vars_: list[torch.Tensor]) -> list[torch.Tensor]:
"""Convert a list of tensors to CPU and double precision.

Args:
vars_: List of PyTorch tensors to convert

Returns:
List of tensors converted to CPU and double precision
"""
cpu_vars = []
for v in vars_:
if v.is_cuda:
v = v.to("cpu")
cpu_vars.append(v.double())
return cpu_vars


def tensors_to_cuda(vars_: list[torch.Tensor],
cuda_device: str) -> list[torch.Tensor]:
"""Convert a list of tensors to CUDA device.

Args:
vars_: List of PyTorch tensors to convert
cuda_device: CUDA device to move tensors to

Returns:
List of tensors moved to specified CUDA device
"""
cpu_vars = []
for v in vars_:
if not v.is_cuda:
v = v.to(cuda_device)
cpu_vars.append(v)
return cpu_vars


def compute_jacobian(
model: torch.nn.Module,
input_vars: list[torch.Tensor],
mode: str = "autograd",
cuda_device: str = "cuda",
double_precision: bool = False,
convert_to_numpy: bool = True,
hybrid_solver: bool = False,
) -> Union[torch.Tensor, np.ndarray]:
"""Compute the Jacobian matrix for a given model and input.

This function computes the Jacobian matrix using PyTorch's autograd functionality.
It supports both CPU and CUDA computation, as well as single and double precision.

Args:
model: PyTorch model to compute Jacobian for
input_vars: List of input tensors
mode: Computation mode, currently only "autograd" is supported
cuda_device: Device to use for CUDA computation
double_precision: If True, use double precision
convert_to_numpy: If True, convert output to numpy array
hybrid_solver: If True, concatenate multiple outputs along dimension 1

Returns:
Jacobian matrix as either PyTorch tensor or numpy array
"""
if double_precision:
model = model.to("cpu").double()
input_vars = tensors_to_cpu_and_double(input_vars)
if hybrid_solver:
output = model(*input_vars)
output_vars = torch.cat(output, dim=1).to("cpu").double()
else:
output_vars = model(*input_vars).to("cpu").double()
else:
model = model.to(cuda_device).float()
input_vars = tensors_to_cuda(input_vars, cuda_device=cuda_device)

if hybrid_solver:
output = model(*input_vars)
output_vars = torch.cat(output, dim=1)
else:
output_vars = model(*input_vars)

if mode == "autograd":
jacob = []
for i in range(output_vars.shape[1]):
grads = torch.autograd.grad(
output_vars[:, i:i + 1],
input_vars,
retain_graph=True,
create_graph=False,
grad_outputs=torch.ones(output_vars[:, i:i + 1].shape).to(
output_vars.device),
)
jacob.append(torch.cat(grads, dim=1))

jacobian = torch.stack(jacob, dim=1)

jacobian = jacobian.detach().cpu()

if convert_to_numpy:
jacobian = jacobian.numpy()

return jacobian
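
A minimal usage sketch for `compute_jacobian`, based only on the signature above. `_jacobian.py` is a private helper, and the toy linear model, input shapes, and expected output shape below are assumptions for illustration, not part of this PR. Setting `double_precision=True` keeps model and inputs on CPU, so no GPU is required:

    import torch

    from cebra.attribution._jacobian import compute_jacobian

    # Toy model (illustrative only): 10 input features -> 3 embedding dimensions.
    model = torch.nn.Linear(10, 3)
    x = torch.randn(5, 10, requires_grad=True)  # batch of 5 samples

    # double_precision=True moves model and inputs to CPU/double, avoiding the default CUDA path.
    J = compute_jacobian(model, input_vars=[x], double_precision=True)
    print(J.shape)  # expected: (5, 3, 10) = (batch, output_dim, input_dim)
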