
Conversation

@larsoner (Member) commented Feb 15, 2019

When looking into unification of beamformer and minimum norm, I realized we should first clean up some minimum norm stuff:

  • _prepare_forward should not also compute the whitener. Separate these steps, and fix minimum norm and inverse sparse accordingly.
  • _get_whitener should not be called in our codebase; compute_whitener should be used instead where possible. This means compute_whitener needs a pca option, which is added in MRG: add new compute_rank function #5876, and it probably also needs a backproject option (see the sketch after this list). _get_whitener is used in Fix: simulate evoked with different sensor types + take into account… #5940, so that PR should be merged before this item can be completed.
  • The depth prior and orientation prior computations should be moved into _prepare_forward. There is code duplication in minimum_norm, mxne_inverse, and _compute_beamformer that can be merged here, but beamformers shouldn't be modified until MRG+1: ENH: refactor beamformer channel picks with utils #5872 is in.
  • Deal with the pca weirdness in inverse_sparse (is it even necessary?).
  • Simplify and vectorize the depth prior calculation based on the spectral norm.
  • Unify _check_comps and _check_compensation_grade.
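
For reference, a minimal sketch of what calling compute_whitener could look like once the pca option from #5876 lands (the file names are hypothetical placeholders, and the exact pca semantics are an assumption):

    import mne

    # Hypothetical file names -- substitute real covariance/measurement files
    noise_cov = mne.read_cov('sample-cov.fif')
    info = mne.io.read_info('sample-ave.fif')

    # Assumed semantics: pca=True returns a (rank, n_channels) whitener with
    # the all-zero rows dropped; pca=False keeps a square (n_channels,
    # n_channels) whitener with zero rows in the rank-deficient dimensions.
    W, ch_names = mne.cov.compute_whitener(noise_cov, info, pca=True)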

This PR is essentially stuck until #5876 and #5940 are merged. EDIT: And later, #5872.

The next PR(s) should:

  1. Deal with inconsistencies between MxNE and MNE (MRG: Unify _prepare_gain #5984)
  2. Make compute_depth_prior operate on forward rather than on gain, gain_info, and is_fixed_orient
  3. Implement for beamformers (Leadfield normalization for LCMV beamformer #5881)

@larsoner larsoner marked this pull request as ready for review February 15, 2019 16:21
@larsoner larsoner force-pushed the refactor-inv branch 2 times, most recently from 26922c5 to 0d63719 on February 15, 2019 16:23
@larsoner (Member, Author)

This pull request introduces 1 alert when merging 0d63719 into 12a8549 - view on LGTM.com

new alerts:

  • 1 for Unused local variable

Comment posted by LGTM.com

@larsoner (Member, Author)

This pull request introduces 1 alert when merging 0d63719 into 44c2ba4 - view on LGTM.com

new alerts:

  • 1 for Unused local variable

Comment posted by LGTM.com

codecov bot commented Feb 15, 2019

Codecov Report

Merging #5947 into master will increase coverage by 0.03%.
The diff coverage is 94.44%.

@@            Coverage Diff             @@
##           master    #5947      +/-   ##
==========================================
+ Coverage    88.8%   88.83%   +0.03%     
==========================================
  Files         401      401              
  Lines       72783    72866      +83     
  Branches    12167    12178      +11     
==========================================
+ Hits        64635    64733      +98     
+ Misses       5219     5215       -4     
+ Partials     2929     2918      -11

@larsoner (Member, Author)

This pull request introduces 1 alert when merging 22cb6b1 into 44c2ba4 - view on LGTM.com

new alerts:

  • 1 for Unused local variable

Comment posted by LGTM.com

@larsoner larsoner force-pushed the refactor-inv branch 2 times, most recently from 94b6bb5 to b84e2ea on February 18, 2019 17:53
@larsoner (Member, Author)

@agramfort is there any reason to keep the pca argument in inverse_sparse solvers? Right now pca=False just makes the first n_zero rows in the whitener equal to zero, which seems a bit useless.

@agramfort (Member)

No, I don't see the point of keeping the zeros (i.e., pca=False). I would suggest removing this parameter. It should not change the results (in theory), but that needs to be tested.
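
For concreteness, a toy numpy sketch of the two conventions (not the actual MNE code; a rank-deficient 3-channel covariance is assumed):

    import numpy as np

    cov = np.diag([2., 1., 0.])            # toy rank-2 covariance, 3 channels
    eigval, eigvec = np.linalg.eigh(cov)   # eigenvalues sorted ascending
    nonzero = eigval > 1e-10

    inv_sqrt = np.zeros_like(eigval)
    inv_sqrt[nonzero] = 1. / np.sqrt(eigval[nonzero])

    W_square = np.diag(inv_sqrt) @ eigvec.T  # pca=False style: one all-zero row
    W_pca = W_square[nonzero]                # pca=True style: zero row dropped

    # Both whiten identically within the non-degenerate subspace:
    assert np.allclose(W_pca @ cov @ W_pca.T, np.eye(2))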

@larsoner larsoner changed the title from "WIP: Refactor forward and cov prep" to "MRG: Refactor forward and cov prep" on Feb 21, 2019
@larsoner (Member, Author)

Okay @agramfort, ready for review/merge from my end. There are a few meaningful differences between the inverse_sparse code and the minimum_norm code, now made clear by the differing kwargs of _prepare_forward:

MNE:

    _prepare_forward(...,
        rank=rank, pca='white', limit_depth_chs=limit_depth_chs,
        loose_method='svd', limit=10., allow_fixed_depth=False,
        whiten='after')

MxNE:

    _prepare_forward(...,
        rank=None, pca=True, limit_depth_chs=False,
        loose_method='sum', limit=None, allow_fixed_depth=True,
        whiten='before')

Arguments with fixed values here are not exposed to the user; the others are.

  • pca, as noted above, probably does not matter.
  • rank and limit_depth_chs were (probably?) expected to differ, and could be exposed to the user (they are in MNE but not in MxNE). At least limit_depth_chs=True breaks existing MxNE tests. (Are those tests fragile, or are these real failures? Open question.)
  • loose_method -- MNE uses an SVD-based approach, MxNE just uses the sum of norms. After changing MxNE to use the SVD-based approach, the tests still passed.
  • limit -- setting this to the MNE default of 10. instead of None did not break the MxNE tests.
  • allow_fixed_depth -- setting this to the MNE default of False caused some failures because 1) it required a free-orientation forward to be passed for fixed-orientation use with depth weighting, and 2) some other tests failed because the orientation was wrong (probably some later orientation computation/adjustment is wrong here).
  • whiten -- this one was tough to find, but it matters: MNE whitens after all priors are computed, MxNE whitens before they are computed (see the sketch after this list). Two tests break if MxNE uses whiten='after' like MNE; four tests break if MNE uses whiten='before', including test_localization_bias (for MNE the proportion localized properly drops from 83%+ to 0%).
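
To make the whiten difference concrete, here is a toy numpy sketch of the two orderings (depth_weights is a simplified column-norm stand-in, not the actual MNE computation):

    import numpy as np

    rng = np.random.default_rng(0)
    n_ch, n_src = 10, 6
    G = rng.standard_normal((n_ch, n_src))    # stand-in gain matrix
    W = np.diag(1. / (1. + np.arange(n_ch)))  # stand-in whitener

    def depth_weights(gain, exp=0.8):
        # simplified depth weighting from column norms
        return np.sum(gain ** 2, axis=0) ** -exp

    # whiten='after' (MNE-style): prior computed from the unwhitened gain
    w = depth_weights(G)
    G_mne = (W @ G) * np.sqrt(w)

    # whiten='before' (MxNE-style): prior computed from the whitened gain
    G_white = W @ G
    G_mxne = G_white * np.sqrt(depth_weights(G_white))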

Not sure how best to move forward with unifying these approaches, but at least the differences are now clearly isolated for someone to easily play with.

Ready for review/merge from my end.

@larsoner (Member, Author)

Okay, docstrings added, functions added to python_reference.rst, and I removed the redundant _check_comps in favor of _check_compensation_grade.

"""
# XXX this perhaps should just take ``forward`` instead of ``G`` and
# ``gain_info``. However, it's not easy to do this given that the
# mixed norm code requires that ``G`` is whitened before this chunk
Copy link
Member Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

New note added -- if we can get mixed_norm not to do the whitening before depth weighting, then we can simplify our code a bit and make this take forward, just like compute_orient_prior does.
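
For reference, a minimal numpy sketch of the vectorized spectral-norm depth prior discussed above (shapes and the handling of limit are simplified assumptions, not the exact MNE implementation):

    import numpy as np

    rng = np.random.default_rng(0)
    n_ch, n_src = 20, 5
    G = rng.standard_normal((n_ch, 3 * n_src))  # free-orientation gain

    # One 3x3 Gram matrix per source location, built with a single einsum
    G3 = G.reshape(n_ch, n_src, 3)
    A = np.einsum('nsk,nsl->skl', G3, G3)       # shape (n_src, 3, 3)

    # Squared spectral norm of each 3-column block = largest eigenvalue
    d = np.linalg.eigvalsh(A)[:, -1]            # eigvalsh sorts ascending

    exp = 0.8
    w = d ** -exp  # depth weights; MNE additionally applies ``limit`` clamping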

@agramfort agramfort merged commit 8dc4634 into mne-tools:master Feb 22, 2019
@agramfort (Member)

thx heaps @larsoner !

@larsoner larsoner deleted the refactor-inv branch February 22, 2019 12:23
jeythekey pushed a commit to jeythekey/mne-python that referenced this pull request Apr 27, 2019
* MAINT: Refactor forward and cov prep

* FIX: Spelling

* FIX: Fixes after rebase

* FIX: Eradicate _get_whitener

* FIX: Missed one

* WIP: Make _prepare_forward more inclusive

* FIX: Unify computations

* ENH: Vectorize depth prior calculation

* MAINT: Docstrings and unification

* DOC: Comments for einsum