
Conversation

@larsoner
Member

@larsoner larsoner commented Aug 29, 2018

We currently do covariance regularization on the full rank data. Thus rank-deficient covariance matrices become full rank. This PR changes it so that regularization is done in the lower-dimensional space, then projected back out, which should preserve the original rank.

This probably has some small impact on full-rank MEG data (e.g., 306 channels with a handful of projs) but the impact is magnified by Maxwell filtering due to the larger rank reduction.
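
For example, something like this (a sketch; the filename is a placeholder):

    import mne

    # Placeholder filename; assume epochs from Maxwell-filtered data, where
    # the true rank (~80) is far below the 306 MEG channels
    epochs = mne.read_epochs('sample_sss-epo.fif')

    # Regularization now happens in the rank-deficient subspace, so the
    # returned covariance keeps the reduced rank instead of becoming full rank
    cov = mne.compute_covariance(epochs, method='shrunk', rank=None)

    # The same applies when regularizing an existing covariance directly
    cov_reg = mne.cov.regularize(cov, epochs.info, mag=0.1, grad=0.1, rank=None)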

  • Something is fishy: the loglik values are pretty uniformly -39 in the previous tests across all methods, because they all provide the same quality estimates when using the low-rank computation. This score was the best score before, so it's reasonable that they all achieve it now (assuming the differences before were driven by how well each method captured the low-rank-ness).
  • Need to check examples
  • Try on SSS'ed data
  • Add check that mne.cov.regularize(..., proj=True) gives same result as diagonal_fixed now
  • Fix mne.cov.regularize to respect rank with new rank argument
  • Fix defaults for diagonal_fixed method to match those of mne.cov.regularize
  • Fix some missing FIFF constants, and fix check for missing ones MRG: Fix constants #5691
  • Fix job output in info for maxwell_filtering to reflect what we actually do MRG: Fix constants #5691
  • Make CIs happy
  • Check diagonal_fixed result -- amplitude decreases in plot_white example?
  • Merge MRG: Fix constants #5691 and rebase this
  • Update API to reflect beamformer changes

Updated examples (930bee9):

It looks like most did not change. I did change bst_phantom_elekta to use method='shrunk', which no longer increases the cov rank to 306 (hooray!).

@larsoner larsoner mentioned this pull request Aug 29, 2018
@larsoner larsoner added this to the 0.18 milestone Oct 4, 2018
@wmvanvliet
Contributor

Can you explain what the benefits are, exactly? It sounds like you've thought this through and come to some insight.

@larsoner
Member Author

When computing an inverse, if you regularize the full (306, 306) covariance, you will end up with something that is full rank even if the original rank was, say, 80 after maxwell_filter. And when you pinv (or really C ** -0.5) this matrix, you will detect 306 components when there really should only be 80.

So instead the approach is to take the (306, 306) matrix, left- and right-multiply by a rank-reducing matrix that takes it to (80, 80), do the regularization there, then multiply back out to the (306, 306) shape. Make sense?
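
Here is a minimal NumPy sketch of that idea (not the actual MNE implementation; plain diagonal loading stands in for whatever regularizer is actually used):

    import numpy as np

    def regularize_low_rank(C, rank, reg=0.1):
        """Regularize C within its rank-`rank` subspace, preserving the rank."""
        # Eigendecomposition of the symmetric covariance
        eigvals, eigvecs = np.linalg.eigh(C)
        # Rank-reducing matrix: the leading eigenvectors (eigh sorts ascending)
        U = eigvecs[:, -rank:]                 # (n_channels, rank)
        C_small = U.T @ C @ U                  # project down to (rank, rank)
        # Toy regularization: diagonal loading scaled by the mean variance
        C_small += reg * np.mean(np.diag(C_small)) * np.eye(rank)
        return U @ C_small @ U.T               # back to (n_channels, n_channels)

    # A rank-80 "covariance" built from 80 random components
    rng = np.random.RandomState(0)
    A = rng.randn(306, 80)
    C = A @ A.T
    C_reg = regularize_low_rank(C, rank=80)
    assert np.linalg.matrix_rank(C_reg) == 80  # rank preserved, not inflated to 306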

@wmvanvliet
Contributor

Yes, it makes sense. Do you see a big difference in the inverse results?

@larsoner
Member Author

larsoner commented Oct 19, 2018

For SSS'ed data with a lot of trials on master, using method='empirical' seems to always look better than any regularized covariance, which should not be the case -- at worst they should look roughly the same. I'm assuming this PR will fix it, but I haven't gotten it totally working yet, so I haven't checked the outputs to ensure they are better.
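
To check, something like this comparison should work (a sketch; `epochs` from the SSS'ed data as above):

    import mne

    # Compare cross-validated log-likelihoods across estimators; with this PR
    # the regularized methods should no longer lose to 'empirical' on SSS'ed data
    covs = mne.compute_covariance(
        epochs, method=['empirical', 'shrunk', 'diagonal_fixed'],
        return_estimators=True, rank=None)
    for cov in covs:  # returned sorted from best to worst log-likelihood
        print(cov['method'], cov['loglik'])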

@agramfort
Member

agramfort commented Oct 19, 2018 via email

@codecov

codecov bot commented Nov 1, 2018

Codecov Report

Merging #5481 into master will decrease coverage by 0.05%.
The diff coverage is 98.02%.

@@            Coverage Diff             @@
##           master    #5481      +/-   ##
==========================================
- Coverage   88.51%   88.46%   -0.06%     
==========================================
  Files         369      369              
  Lines       69151    69307     +156     
  Branches    11651    11662      +11     
==========================================
+ Hits        61206    61309     +103     
- Misses       5090     5118      +28     
- Partials     2855     2880      +25

@larsoner larsoner changed the title WIP: Do regularization in PCA space MRG: Do regularization in PCA space Nov 2, 2018
@larsoner
Member Author

larsoner commented Nov 2, 2018

Okay, I think this is ready for review/merge. We should probably wait for 0.18 just to be safe / have time to test it, though, so I'm going to leave this set as WIP until we release.

@larsoner larsoner changed the title MRG: Do regularization in PCA space WIP: Do regularization in PCA space Nov 2, 2018
@larsoner larsoner changed the title WIP: Do regularization in PCA space MRG: Do regularization in PCA space Nov 2, 2018
@larsoner
Member Author

larsoner commented Nov 2, 2018

On second thought, I think the tests are pretty convincing, so I'd be happy putting this in 0.17. Let's see if you are convinced too, @agramfort @dengemann :)

@larsoner
Member Author

larsoner commented Nov 2, 2018

EDIT: Moved updated examples text to top issue

@larsoner
Member Author

larsoner commented Nov 2, 2018

Okay, I also fixed the diagonal_fixed estimator defaults in compute_covariance to match those of mne.cov.regularize (I think it was a bug that they did not -- for example, eeg=0. by default; see the sketch below). I also fixed the FIFF constant checks (and added some constants) because I realized we were missing some for Maxwell filtering.

EDIT: Also updated the list of examples above to match commit 1283a0a.
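
The loadings can of course still be set explicitly per channel type (a sketch, continuing the snippet from the description; values arbitrary):

    # These now default to the same values as mne.cov.regularize
    # (e.g., eeg=0.1 rather than eeg=0.)
    cov = mne.compute_covariance(
        epochs, method='diagonal_fixed',
        method_params=dict(diagonal_fixed=dict(grad=0.1, mag=0.1, eeg=0.1)))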

@larsoner
Member Author

larsoner commented Nov 2, 2018

Talking to @dengemann, to be safe we should probably wait for 0.18 to merge this, so I'll set it to WIP for now. (Also, apparently Travis is unhappy, so I have some work to do anyway.) I'm also not 100% sure the diagonal_fixed result is correct. Need to check.

@larsoner larsoner modified the milestones: 0.17, 0.18 Nov 2, 2018
@larsoner larsoner changed the title MRG: Do regularization in PCA space WIP: Do regularization in PCA space Nov 2, 2018
@larsoner larsoner mentioned this pull request Nov 2, 2018
@larsoner
Member Author

larsoner commented Nov 3, 2018

Fixed the "problem" with the plot_evoked_whitening.py example. The differences were caused in the changes in defaults, which seems okay. URLs above will have been updated to reflect commit a30e48b.

"""Compute covariance auto mode."""
# rescale to improve numerical stability
_apply_scaling_array(data.T, picks_list=picks_list, scalings=scalings)
if rank != 'full':
Member

This should have a better test. If len(data) == 306 and rank == 306, this block should not be entered.

Member Author

A couple of things:

  1. The check for this would actually be a bit annoying, as you would then need to triage based on the various incantations of the rank input, instead of just letting _smart_eigh be the one place this is handled / estimated.
  2. In principle it should not matter: as long as we do the rotation properly, it should gracefully fall back to identical behavior.

I have now added a test that point (2) holds (see the sketch below). It turns out that it did not hold before (whoops), but it does now!
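
Sketch of the idea behind that test, reusing regularize_low_rank from my earlier comment: with rank == n_channels the rotation is a full orthonormal basis, so projecting down and back up must be a no-op:

    import numpy as np

    rng = np.random.RandomState(0)
    n, reg = 306, 0.1
    X = rng.randn(n, n)
    C_full = X @ X.T + n * np.eye(n)       # full-rank covariance

    # Diagonal loading applied directly to the full matrix...
    direct = C_full + reg * np.mean(np.diag(C_full)) * np.eye(n)
    # ...must match the same loading applied via the rotation
    via_rotation = regularize_low_rank(C_full, rank=n, reg=reg)
    np.testing.assert_allclose(direct, via_rotation, rtol=1e-10, atol=1e-8)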

@larsoner
Member Author

larsoner commented Nov 7, 2018

Hmm, it looks like something bad happened to the EEG scaling here:

https://10134-1301584-gh.circle-artifacts.com/0/html/auto_examples/visualization/plot_evoked_whitening.html

It was not that way before the recent rotation PR:

https://10075-1301584-gh.circle-artifacts.com/0/html/auto_examples/visualization/plot_evoked_whitening.html

I'll look into it.

@larsoner
Member Author

larsoner commented Nov 7, 2018

@larsoner larsoner modified the milestones: 0.18, 0.17 Nov 8, 2018
@larsoner larsoner changed the title WIP: Do regularization in PCA space MRG: Do regularization in PCA space Nov 8, 2018
- Add :func:`mne.io.read_raw_fieldtrip`, :func:`mne.read_epochs_fieldtrip` and :func:`mne.read_evoked_fieldtrip` to import FieldTrip data. By `Thomas Hartmann`_ and `Dirk Gütlin`_.

- Add ``rank`` parameter to :func:`mne.compute_covariance`, :func:`mne.cov.regularize` and related functions to preserve data rank during regularization by `Eric Larson`_ and `Denis Engemann`_

Member

Can you also add that we now support faster low-rank (PCA-space) computation?

Member Author

Pushed a commit; if it's not what you want, feel free to give me a more complete suggestion or push a change.

I avoided the word "PCA" because it's more about dealing with rank deficiency than about projecting onto (orthogonal) directions of highest variance.

Member

One could still highlight that the option can make things significantly faster for SSS'd data.

Member Author

"and speed up computation" is already in there

@dengemann
Member

After playing with several datasets, my impression is that this PR does not make things worse than they are. Instead, for certain SSS'd data the defaults now look correct, hence things are improved. As noted before, the speedup is practically meaningful. I'd suggest merging this and then working on the outstanding issues in #4676, which are not magically addressed by this PR -- just for expectation management. I'll approve and suggest @agramfort take his tour.

@agramfort
Member

pushed a cosmit.

now with the inverse methods, is the inverse problem automagically solved in low-rank space? or do we still have a square whitener even if the rank is low?

@larsoner
Member Author

This does not change the behavior of inverse operators / LCMV. So it's still stored square, IIRC, though at some point it gets reduced internally (at least for minimum norm).
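
To illustrate the distinction (a sketch, not MNE's code): a square whitener is (n_channels, n_channels), whereas a reduced one keeps only the rows for nonzero eigenvalues:

    import numpy as np

    def low_rank_whitener(C, rank):
        # Keep only the `rank` leading eigenpairs so that W @ C @ W.T ~= I
        eigvals, eigvecs = np.linalg.eigh(C)
        lam, U = eigvals[-rank:], eigvecs[:, -rank:]
        return (U / np.sqrt(lam)).T        # (rank, n_channels), not square

    rng = np.random.RandomState(0)
    A = rng.randn(306, 80)
    C = A @ A.T                            # rank-80 covariance
    W = low_rank_whitener(C, rank=80)
    assert W.shape == (80, 306)
    np.testing.assert_allclose(W @ C @ W.T, np.eye(80), atol=1e-10)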

@larsoner
Member Author

Your cosmit broke a test

@larsoner
Member Author

... and somehow that cosmit also had a non-ASCII char that broke Python 2. Quite impressive for a +3/-3 commit, @agramfort :)

@agramfort
Member

to get your forgiveness (if I may ask), let me merge this one...

@agramfort agramfort merged commit c80e57c into mne-tools:master Nov 13, 2018
@larsoner larsoner deleted the lowrank branch November 13, 2018 16:10