
Conversation

@larsoner (Member) commented Feb 21, 2019

Follow-up to #5947. This first step modifies LCMV to make it easier to eventually use _prepare_forward, specifically by:

  • Always using a noise_cov. This simplifies later calculations and branching (hence more - lines than + lines here). In the case of a single channel type, we can just use make_ad_hoc_cov, which makes a diagonal matrix; this means we don't need to do anything extra with projs manually, since compute_whitener handles them for us (see the sketch after this list). In principle make_ad_hoc_cov could also be a solution for the multiple-ch-types-no-cov problem, but we can stick with the error and let users call make_ad_hoc_cov themselves in that case.
  • Applying the same projectors / whitener to G and Cm, as noted here.
  • Applying the projectors/whitener after lead-field normalization. This is how it's done in the minimum norm code (though not mixed-norm), and at least does not seem to break things.
  • Leave for later: adding a check for subspace equivalence rather than just a rank match (see below).
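
A minimal sketch of the single-channel-type pattern from the first bullet (assuming the standard sample-dataset paths; this is an illustration, not the PR's actual diff):

    import os.path as op
    import mne

    data_path = str(mne.datasets.sample.data_path())
    raw_fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif')
    info = mne.io.read_info(raw_fname)
    # Restrict to a single channel type so make_ad_hoc_cov applies cleanly
    info = mne.pick_info(info, mne.pick_types(info, meg='grad'))
    noise_cov = mne.make_ad_hoc_cov(info)  # diagonal ad-hoc covariance
    # compute_whitener takes care of the projs for us
    whitener, ch_names = mne.cov.compute_whitener(noise_cov, info)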

Let's see if tests still pass. They do locally.

One potential problem I foresee is that make_ad_hoc_cov should be fine with projections, but will create problems with data that has been rank-reduced with SSS when noise_cov=None is used. However, this should be exceedingly rare, since most SSS'ed data is Neuromag, which has two sensor types. To cover this, we'd need some way to project our make_ad_hoc_cov into the same data subspace as data_cov, which actually would be pretty straightforward given that _smart_eigh already gives us that projection and it's used in the cov code. We can wait to deal with this until we hit a use case; YAGNI for now.
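
If we do hit that use case, a hypothetical sketch of the projection idea (using a plain eigendecomposition rather than the internal _smart_eigh helper):

    import numpy as np

    def project_cov(noise, data, rank):
        """Project a noise cov into the rank-r principal subspace of data."""
        _, eigvec = np.linalg.eigh(data)  # eigenvalues in ascending order
        u = eigvec[:, -rank:]             # top-r eigenvectors span the subspace
        proj = u @ u.T                    # orthogonal projector onto it
        return proj @ noise @ proj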

At some point it might be useful not just to check that data_cov and noise_cov match in rank, but that they actually occupy the same vector subspace (have the same null space). Matching effective rank is a necessary but not sufficient condition for this. In most cases (projection, and even SSS when using a baseline cov) it should be okay. The one place I see this being useful is if someone uses SSS and takes a noise cov from the empty room instead of the baseline: these could have the same effective rank but occupy different subspaces (unless the empty room was processed identically to the data, which is probably not what most people do, at least not properly). But in this case, you'll just amplify some near-zero components a bit (so they should stay near zero) and drop a few components from data_cov that you otherwise shouldn't, so this is perhaps not a deal-breaker either.
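
For reference, a hypothetical version of such a check (not part of this PR): compare the principal angles between the ranges of the two covariances, since matching rank alone says nothing about whether the subspaces coincide.

    import numpy as np

    def same_subspace(cov_a, cov_b, rank, tol=1e-6):
        """True if two covariances span the same rank-r subspace."""
        u_a = np.linalg.svd(cov_a)[0][:, :rank]  # orthonormal basis of range(A)
        u_b = np.linalg.svd(cov_b)[0][:, :rank]  # orthonormal basis of range(B)
        # The singular values of U_a^T U_b are the cosines of the principal
        # angles between the subspaces; all ~1 iff the subspaces coincide
        cosines = np.linalg.svd(u_a.T @ u_b, compute_uv=False)
        return bool(np.all(cosines > 1 - tol))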

cc @britta-wstnr to see if this makes some sense.

Closes #5882.

@codecov bot commented Feb 21, 2019

Codecov Report

Merging #5979 into master will increase coverage by <.01%.
The diff coverage is 95.23%.

@@            Coverage Diff             @@
##           master    #5979      +/-   ##
==========================================
+ Coverage   88.94%   88.94%   +<.01%     
==========================================
  Files         401      401              
  Lines       72977    73011      +34     
  Branches    12132    12135       +3     
==========================================
+ Hits        64912    64943      +31     
- Misses       5183     5188       +5     
+ Partials     2882     2880       -2

@agramfort (Member)

@wmvanvliet or @britta-wstnr can you do a first pass of review?

@larsoner changed the title from "MAINT: Simplify LCMV whitening" to "MRG: Simplify LCMV whitening" on Feb 22, 2019
@larsoner added this to the 0.18 milestone on Feb 22, 2019
@britta-wstnr (Member) left a comment

Do we know whether lcmv without a noise cov matrix gives the same answer here as it does in master? If not, it might be good to ensure that; I can look at it tomorrow, for example.

@larsoner (Member, Author)

> Do we know whether lcmv without a noise cov matrix gives the same answer here as it does in master?

In principle it should and the tests still pass. But yes it would be good if you could test it and make sure.

@larsoner force-pushed the refactor-beamformer branch from 6e74a52 to d7d50b4 on February 22, 2019 20:05
@britta-wstnr (Member)

Hi @larsoner I did a quick check of your branch against master.

  1. The differences in the filter weights are small; that looked okay (by small I mean around 10 orders of magnitude smaller than the weights themselves).
  2. The actual output of the beamformer after apply_lcmv looks less promising. I get big differences. If I look at filters['whitener'] I have a diagonal matrix with 2.0e+12 on the diagonal. filters['noise_cov'] is diagonal with 2.5e-25. Note that the whitener gets multiplied with the data in apply_lcmv.

I haven't had time to track down exactly what is happening; I have only looked at the outputs for the sample data so far. Can you reproduce this? I might have more time tomorrow for a more thorough look.

@larsoner (Member, Author) commented Feb 23, 2019

> 1. The differences in the filter weights are small; that looked okay (by small I mean around 10 orders of magnitude smaller than the weights themselves).

That's a bit surprising. The weights and whitener scalings should offset one another. On master each should be scaled by ~1e-12 or ~1e12, respectively, when passing noise_cov=make_ad_hoc_cov(info) compared to noise_cov=None, but the net effect should be that the STC output is the same. This PR should then just make those offsetting scale factors always present.

@larsoner (Member, Author)

... in other words / to express it numerically, on master I can add these lines to test_lcmv_reg_proj:

    # Imports needed to run this snippet standalone (inside MNE's test
    # module these already exist at module level):
    import numpy as np
    from numpy.testing import assert_allclose
    import pytest
    import mne
    from mne.beamformer import make_lcmv, apply_lcmv

    # Multiple channel types without a noise cov should still raise
    with pytest.raises(ValueError, match='several sensor types'):
        make_lcmv(epochs.info, forward, data_cov, reg=0.05,
                  noise_cov=None)
    epochs.pick_types('grad')  # single channel type from here on
    kwargs = dict(reg=0.05, pick_ori=None, weight_norm=None)
    filters_cov = make_lcmv(epochs.info, forward, data_cov,
                            noise_cov=noise_cov, **kwargs)
    filters_nocov = make_lcmv(epochs.info, forward, data_cov,
                              noise_cov=None, **kwargs)
    ad_hoc = mne.make_ad_hoc_cov(epochs.info)
    filters_adhoc = make_lcmv(epochs.info, forward, data_cov,
                              noise_cov=ad_hoc, **kwargs)
    evoked = epochs.average()
    stc_cov = apply_lcmv(evoked, filters_cov)
    stc_nocov = apply_lcmv(evoked, filters_nocov)
    stc_adhoc = apply_lcmv(evoked, filters_adhoc)
    # The ad-hoc and no-cov variants give identical source estimates ...
    assert_allclose(stc_adhoc.data, stc_nocov.data)
    # ... because their weights differ only by the whitener's scale factor
    assert_allclose(
        filters_nocov['weights'],
        filters_adhoc['weights'] * 2e12)
    assert_allclose(
        filters_nocov['weights'],
        np.dot(filters_adhoc['weights'], filters_adhoc['whitener']))

And it passes.

@britta-wstnr (Member)

I checked with noise_cov=None against master; I hope to have another look tomorrow!

@larsoner changed the title from "MRG: Simplify LCMV whitening" to "WIP: Simplify LCMV whitening" on Feb 24, 2019
@larsoner (Member, Author) commented Feb 24, 2019

Setting to WIP because we should get #5979 in first, since it contains improved regression tests for LCMV.

@britta-wstnr (Member)

@larsoner do you mean #5984 ? (you are linking back here above)

@larsoner (Member, Author)

Whoops, yes

@larsoner force-pushed the refactor-beamformer branch from eafb8e1 to 71af0f3 on February 27, 2019 16:36
@larsoner changed the title from "WIP: Simplify LCMV whitening" to "MRG: Simplify LCMV whitening" on Feb 27, 2019
@larsoner (Member, Author)

Okay @britta-wstnr, it still passes with the enhanced tests from #5984, so this is ready for review/merge from my end. This brings it very close to being able to use _prepare_forward for easier incorporation of depth and orientation priors.

@britta-wstnr (Member)

@larsoner awesome - I would like to do another check beyond the implemented tests to examine what I found before, if that's okay. I can get to that tomorrow.

@larsoner (Member, Author) commented Mar 4, 2019

Ping @britta-wstnr

@britta-wstnr (Member)

Reporting back: the order of magnitude in the output has definitely changed:

[screenshot: apply_lcmv output showing the changed amplitude scale]

This is with noise_cov=None and weight_norm='unit-noise-gain'.
The spatial filter seems to be the same; I guess the problems arise when applying the whitening to the data during apply_lcmv?

@larsoner (Member, Author) commented Mar 5, 2019

> the order of magnitude in the output has definitely changed:

But is this a bad thing? It seems like the output is more correct now, along the lines of what you get if you use an empirical noise cov (rather than make_ad_hoc_cov), thus making things more consistent.

@larsoner (Member, Author) commented Mar 5, 2019

... and at least now there is some reference for what the "unit-noise-gain" is relative to -- before it was "nothing" and now it's "an ad-hoc diagonal noise covariance". I think this is a usability/understandability gain, and a note in the whats_new BUG section should be enough here.

If it's not, I can set make_ad_hoc_cov to just give np.eye with no scale factors. But I think the output ends up less understandable this way.
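
For concreteness, that alternative would look something like this (make_unit_cov is a hypothetical helper, not an existing MNE function):

    import numpy as np
    import mne

    def make_unit_cov(info):
        """Identity 'covariance': std=1 for every data channel, no scaling."""
        picks = mne.pick_types(info, meg=True, eeg=True)
        names = [info['ch_names'][p] for p in picks]
        return mne.Covariance(np.eye(len(names)), names, bads=info['bads'],
                              projs=info['projs'], nfree=1)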

@britta-wstnr (Member)

Mh, I am hesitant about doing this implicitly, especially with regard to comparability with other toolboxes and expectations about signal magnitude.
Thoughts, @sarangnemo ?

@larsoner force-pushed the refactor-beamformer branch from 71af0f3 to 40a6755 on March 5, 2019 16:30
@sarangnemo commented Mar 5, 2019

I agree with @britta-wstnr -- the NAI variant already serves the purpose of scaling according to noise covariance. The plain unit-noise-gain should therefore remain unscaled, to preserve a noise gain of... unity. ;-) This would also preserve direct comparability of results with, e.g., FieldTrip and other UNG implementations.

    stc_nocov = apply_lcmv(evoked, filters_nocov)
    stc_adhoc = apply_lcmv(evoked, filters_adhoc)

    # Compare adhoc and nocov: scale difference is necessitated by using std=1.
@larsoner (Member, Author) left a comment

@britta-wstnr let's figure it out later. For now I've added tests that the amplitudes are as expected for None, NAI, and unit-noise-gain (the first two not depending on noise_cov, the last being normalized or not depending on whether noise_cov is None). The amplitudes you get now should be like the ones you get in master; can you check?

If we think it makes sense, we can remove the std=1. from the _check_one_ch_type function. I think it would since it makes things more consistent (as shown now by the tests, I think), but let's do it later so this PR isn't held up.

@larsoner (Member, Author) commented Mar 5, 2019

> The plain unit-noise-gain should therefore remain unscaled, to preserve a noise gain of... unity.

My point is that:

  • With weight_norm='unit-noise-gain', whether the output is normalized depends on whether noise_cov is None. If a noise cov is provided, the output amplitudes are on the order of 1; if noise_cov=None, they are on the order of 1e-11 (at least for gradiometer data; see the sketch below).
  • If weight_norm='nai', it is always on the order of 1.
  • If weight_norm=None, it is always on the order of 1e-11 (again for grads at least).

If this is the intended behavior, it's fine with me.
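
A rough way to check those magnitudes, reusing evoked, epochs, forward, data_cov, and noise_cov from the earlier snippet (gradiometer data; the expected orders are from the list above):

    import numpy as np

    for weight_norm, noise in [('unit-noise-gain', noise_cov),  # ~1
                               ('unit-noise-gain', None),       # ~1e-11
                               ('nai', noise_cov),              # ~1
                               (None, noise_cov)]:              # ~1e-11
        filters = make_lcmv(epochs.info, forward, data_cov, reg=0.05,
                            noise_cov=noise, weight_norm=weight_norm)
        stc = apply_lcmv(evoked, filters)
        print(weight_norm, noise is None, np.abs(stc.data).mean())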

@larsoner (Member, Author) commented Mar 6, 2019

@britta-wstnr can you check one more time that the amplitudes are now the same as in master? They should be reverted to what they were before.

@britta-wstnr (Member)

Hi @larsoner, I think there might be a misunderstanding concerning "weight normalizing". The unit-noise-gain beamformer does not rely on a noise covariance matrix to normalize the weights; this normalization refers to constraints that the beamformer implements (cf. the unit-gain constraint of the "vanilla" LCMV beamformer).
Whitening with a noise covariance matrix is a separate, additional effect that changes the output values (their order of magnitude, in this case).
The NAI, on the other hand, is a unit-noise-gain beamformer whose output gets additionally scaled by a noise constant (derived from the eigenvalues of the cov matrix).
I hope this makes sense?
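
In standard notation (filter weights w, lead field l, data covariance C, as in e.g. Sekihara & Nagarajan's textbook treatment), the two constraints read:

    % vanilla LCMV: minimize output power subject to the unit-gain
    % constraint w^T l = 1
    \mathbf{w} = \frac{\mathbf{C}^{-1}\mathbf{l}}
                      {\mathbf{l}^{\top}\mathbf{C}^{-1}\mathbf{l}},
    \qquad \mathbf{w}^{\top}\mathbf{l} = 1

    % unit noise gain: additionally normalize the weights so that w^T w = 1
    \mathbf{w}_{\mathrm{ung}} = \frac{\mathbf{w}}{\lVert\mathbf{w}\rVert},
    \qquad \mathbf{w}_{\mathrm{ung}}^{\top}\mathbf{w}_{\mathrm{ung}} = 1

Whitening enters separately, by replacing C and l with their whitened versions; the NAI then further divides the unit-noise-gain output by a noise power estimate.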

I will look at the output and report back!

@larsoner (Member, Author) commented Mar 7, 2019

Makes sense! These relationships should already be tested (though probably not commented 100% correctly) in the PR now. Feel free to suggest better wordings for the comments around the tests if they're not precise enough. But hopefully the outputs are now the same as they were before.

If so, once this is in, I can try using _prepare_forward to do some of the stuff the beamformer code does now -- and we can get lead field normalization from it. I know the DICS code already has some lead field normalization, so I'll incorporate that, too (since it has a non-blockwise normalization for free ori that isn't used in the minimum norm or sparse inverse code).
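
For reference, a generic sketch of that kind of lead-field normalization (a hypothetical helper; the exact scheme in the DICS code may differ): one scale factor per source location, computed across all free orientations at once rather than per column.

    import numpy as np

    def normalize_leadfield(gain, n_orient=3):
        """Normalize gain of shape (n_ch, n_src * n_orient), one norm/source."""
        n_ch, n_tot = gain.shape
        g = gain.reshape(n_ch, n_tot // n_orient, n_orient)
        # Frobenius norm of each (n_ch, n_orient) block: a single scalar
        # per source location, shared across its orientations
        norms = np.linalg.norm(g, axis=(0, 2), keepdims=True)
        return (g / norms).reshape(n_ch, n_tot)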

@britta-wstnr (Member)

Awesome, I will have a review pass on the PR then!

@larsoner force-pushed the refactor-beamformer branch from 42216b1 to 3291265 on March 7, 2019 21:07
@britta-wstnr (Member) left a comment

I tested against master and the unit-noise-gain beamformer now gives the expected output (regarding order of magnitude).

@larsoner (Member, Author) commented Mar 8, 2019

@britta-wstnr okay to merge once I make the comment updates (and ensure CIs are happy)?

@britta-wstnr (Member)

@larsoner, yes, LGTM except for some clearer comments in the tests!

@larsoner (Member, Author) commented Mar 8, 2019

Azure error is unrelated, merging. Thanks for the reviews, @britta-wstnr!

@larsoner merged commit 5633254 into mne-tools:master on Mar 8, 2019
@larsoner deleted the refactor-beamformer branch on March 8, 2019 13:19
@wmvanvliet (Contributor)

Cool! Hopefully this will bring us closer to implementing whitening for DICS. Thanks @larsoner and @britta-wstnr!

jeythekey pushed a commit to jeythekey/mne-python that referenced this pull request Apr 27, 2019
* MAINT: Simplify LCMV whitening

* FIX: std=1.

* FIX: Revert errant example change

* DOC: Better comments
