
Conversation

@britta-wstnr (Member) commented Jan 25, 2019

Moved some of the channel picking facility to utils:
_check_info_inv (formerly _compute_beamformer._setup_picks) picks channels based on the data/noise covariance matrices and the forward model, taking into account channels marked as bad in info as well as reference channels.
Next step for this PR is to write tests for this function.
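
As a rough illustration of the picking logic (the helper below is a sketch with made-up names, not the actual _check_info_inv implementation):

import mne

def pick_channels_for_inverse(info, forward, data_cov, noise_cov=None):
    # Sketch: keep only channels present in info, the forward model and the
    # covariance(s), then drop bad and reference channels.
    ch_set = set(info['ch_names']) & set(forward['info']['ch_names'])
    ch_set &= set(data_cov['names'])
    if noise_cov is not None:
        ch_set &= set(noise_cov['names'])
    ch_set -= set(info['bads'])
    ref_picks = mne.pick_types(info, meg=False, ref_meg=True)
    ch_set -= {info['ch_names'][p] for p in ref_picks}
    # Return picks into info, preserving the original channel order.
    return [idx for idx, ch in enumerate(info['ch_names']) if ch in ch_set]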

Two additional functionalities have been added: a check for compensation consistency and the dropping of reference channels during computation of the spatial filter (I will comment in the code to mark those).

ping @agramfort

To Do:

  • add tests

if not get_current_comp(info) == get_current_comp(forward['info']):
    raise ValueError('Data and forward model do not have same '
                     'compensation applied.')

@britta-wstnr (Member Author):

@agramfort: this is new, checking whether compensation is identical between data and forward model.

# test whether different compensations throw error
info_comp = evoked.info.copy()
set_current_comp(info_comp, 1)
pytest.raises(ValueError, make_lcmv, info_comp, fwd, data_cov)
@britta-wstnr (Member Author):

@agramfort: this is the test for the compensation comparison.

@larsoner (Member):

modern style is:

with pytest.raises(ValueError, match='does not match'):
    make_lcmv(...)

It's clearer what's being caught, and more precise in catching it

ref_chs = pick_types(info, meg=False, ref_meg=True)
ref_chs = [info['ch_names'][ch] for ch in ref_chs]
ch_names = [ch for ch in ch_names if ch not in ref_chs]

@britta-wstnr (Member Author):

@agramfort: this code block is new as well, throwing out any reference channels that might still be in info for the beamformer. Now that we have this function in utils, we probably need to reconsider whether this behavior is wanted by any inverse model?

Member:

IIRC the forward does not contain the reference channels (only implicitly via compensation if it has been applied when computing the forward) so I don't think it should matter.

@britta-wstnr (Member Author):

Yes, that is true, I actually checked that. Talking to @agramfort, we thought about getting rid of them to ensure further computation doesn't go wrong in any case for the beamformer.

@agramfort (Member):

+1 on this. For others: the idea is to have common check functions that will be used across inverse methods that take data, a forward or filters or an inverse operator, and eventually some covariances.

All inverse methods end up doing similar checks. Let's unify this.

Sounds good?
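
To make the idea concrete, here is a hedged sketch of how each solver could delegate its setup checks to the shared helper (the import path follows this PR's move to mne/utils/check.py; the exact signature of _check_info_inv is assumed for illustration):

from mne.utils.check import _check_info_inv  # location as of this PR

def _solver_setup(info, forward, data_cov, noise_cov=None):
    # Assumed signature: returns picks common to info, forward and the
    # covariance(s), with bads and reference channels dropped.
    picks = _check_info_inv(info, forward, data_cov=data_cov,
                            noise_cov=noise_cov)
    ch_names = [info['ch_names'][p] for p in picks]
    # ...restrict the gain matrix and covariances to ch_names here,
    # then continue with the solver-specific computation...
    return picks, ch_names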

@larsoner (Member):

Yes! +1 for as much unification as possible across minimum_norm, beamformer, inverse_sparse, and cov.py for how this stuff is handled.

@britta-wstnr (Member Author):

For test writing: test_check.py does not load any data (yet). _check_info_inv operates on info, forward, noise_cov, and data_cov. Is it worth loading real data for that, or should I rather build some fake dictionaries?

For the use of the new function anywhere but in the beamformer module, I would be very glad about some help, as I don't know my way around those functions well enough yet to catch where it would be needed.

@agramfort (Member):

No strong feeling about loading real data or not. We tend to use real data everywhere, but maybe it would be interesting to consider having a way to mock more objects to speed up the tests by avoiding I/O of complex FIF files.
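
As a hedged sketch of that mocking idea (channel names, types, and values below are arbitrary placeholders), a minimal in-memory setup could look like:

import numpy as np
import mne

ch_names = ['MEG 0111', 'MEG 0112', 'MEG 0113', 'EEG 001']
ch_types = ['grad', 'grad', 'grad', 'eeg']
info = mne.create_info(ch_names, sfreq=1000., ch_types=ch_types)
info['bads'] = ['MEG 0113']  # mark one channel as bad
rng = np.random.RandomState(0)
raw = mne.io.RawArray(rng.randn(len(ch_names), 2000), info)
data_cov = mne.compute_raw_covariance(raw)  # covariance without any FIF I/O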

@larsoner (Member):

I/O is generally not our testing bottleneck, it's computation time.

It's usually easiest and fast enough to load the necessary files from the testing dataset

@britta-wstnr (Member Author):

Based on test_lcmv.py, the data covariance matrix would need to be estimated, but I guess that is no bottleneck either? If okay, I would add all this to test_check.py.
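
For reference, a hedged sketch of that setup following the pattern in test_lcmv.py (the testing-dataset file names below are assumptions and should be copied from that test):

import os.path as op
import mne
from mne.datasets import testing

data_path = testing.data_path()
raw_fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis_trunc_raw.fif')
fwd_fname = op.join(data_path, 'MEG', 'sample',
                    'sample_audvis_trunc-meg-eeg-oct-4-fwd.fif')

raw = mne.io.read_raw_fif(raw_fname, preload=True)
forward = mne.read_forward_solution(fwd_fname)
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id=1, tmin=-0.2, tmax=0.2,
                    baseline=(None, 0), preload=True)
# estimating the data covariance is quick enough for a unit test
data_cov = mne.compute_covariance(epochs, tmin=0.01, tmax=0.2)
noise_cov = mne.compute_covariance(epochs, tmin=None, tmax=0.0)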

@agramfort (Member) commented Jan 30, 2019 via email

@codecov (bot) commented Feb 4, 2019

Codecov Report

Merging #5872 into master will decrease coverage by 0.06%.
The diff coverage is 95.19%.

@@            Coverage Diff             @@
##           master    #5872      +/-   ##
==========================================
- Coverage   88.76%   88.69%   -0.07%     
==========================================
  Files         401      401              
  Lines       72425    72631     +206     
  Branches    12122    12146      +24     
==========================================
+ Hits        64288    64423     +135     
- Misses       5212     5284      +72     
+ Partials     2925     2924       -1

@britta-wstnr (Member Author):

ping @larsoner @agramfort
I implemented some tests now; I think the failing CI is unrelated to my commits?

@larsoner (Member) commented Feb 5, 2019

If you rebase it will probably fix CircleCI

@larsoner (Member) commented Feb 5, 2019

The Travis flake errors are legitimate. They suggest you have some un-covered lines in utils/check.py:

https://travis-ci.org/mne-tools/mne-python/jobs/488493098#L3176

@britta-wstnr (Member Author) commented Feb 5, 2019

Thanks @larsoner, misread the Travis output. Will fix and rebase!

@britta-wstnr (Member Author):

@larsoner I think Travis timed out ...

@larsoner (Member) commented Feb 6, 2019

All better, ready for review/merge from your end?

@larsoner (Member) left a comment

Minor nitpicks here


@britta-wstnr (Member Author):

Addressed @agramfort's comments, will get to @larsoner's a bit later. Thanks guys!

@britta-wstnr (Member Author):

ping @larsoner : I need help with CIs again ... I don't think it is related to what I did? Should I rebase?

@larsoner (Member):

Ignore errors related to physionet download. This will require checking Travis logs. Or wait an hour until #5932 is in, and I can restart Travis (which builds a merged version of the PR, so will get the fixes automatically on build restart).

@massich (Contributor) commented Feb 13, 2019

@larsoner the failing download is actually this one: https://travis-ci.org/mne-tools/mne-python/jobs/490227429#L2490 -- it does it sometimes.

But yes, right now the CIs are broken. My fault.

@britta-wstnr (Member Author):

Okay, thanks -- then let's see what Travis says later!

@larsoner (Member) left a comment

LGTM +1 for merge

@larsoner changed the title from "Beamformer: refactor channel picks with utils" to "MRG+1: ENH: refactor beamformer channel picks with utils" on Feb 13, 2019
@larsoner (Member):

Travis isn't building a merged version (I forgot it still uses the old .travis.yml, and merges the rest of the code), so you can rebase, or just ignore the 3.7 build for now.

@agramfort (Member):

We could merge this, but I would suggest seeing if the new _check_info_inv can be shared among all inverse solvers. @larsoner can you have a look?

@larsoner (Member):

@agramfort @britta-wstnr I started looking into it. There is a lot of overlap between:

  1. mne.minimum_norm.inverse._prepare_forward plus the depth-prior-computing parts of make_inverse_operator, and
  2. mne.beamformer._compute_beamformer._prepare_beamformer_input plus _check_info_inv (which could actually be refactored to live in _prepare_beamformer_input).

These two code paths seem to do in principle the same things:

  1. check for channel naming consistency
  2. do channel picking of the gain matrix and cov matrix
  3. check comps
  4. compute a depth prior

The best thing to do is probably to merge these code paths, likely by modifying _prepare_forward as necessary. Let me know if you think it makes sense to do that here, which could delay the merge for some days or weeks, or if you'd rather see a separate PR.
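
To make the overlap concrete, a unified preparation step might cover the four points above roughly like this (purely an illustrative sketch with assumed names, not the actual _prepare_forward API; the depth weighting shown is a simplified per-column version, not the exact MNE formula):

import numpy as np
import mne

def _prepare_inverse_input(info, forward, noise_cov, depth=0.8):
    # 1. channel naming consistency: channels present in info, forward and
    #    the noise covariance, minus bads
    ch_names = [ch for ch in forward['info']['ch_names']
                if ch in info['ch_names'] and ch in noise_cov['names']
                and ch not in info['bads']]
    # 2. pick the gain matrix and covariance down to those channels
    fwd = mne.pick_channels_forward(forward, include=ch_names)
    cov = mne.pick_channels_cov(noise_cov, include=ch_names)
    # 3. check compensation grades match (see the check added in this PR)
    # 4. compute a (simplified) depth prior from the picked gain matrix
    gain = fwd['sol']['data']
    depth_prior = np.sum(gain ** 2, axis=0) ** -depth
    return fwd, cov, depth_prior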

@agramfort (Member) commented Feb 14, 2019 via email

@larsoner (Member):

I didn't think rank would take very long but it did. Hopefully only an hour or so, but who knows. Regardless, we should do it.

I'll try to add it here (won't push until it's close) and if it takes too long we can just merge.

@larsoner (Member):

I think we should merge this as-is, and I'll tackle unification separately. It's going to need to be a multi-step process, and there's no need to hold this up in the meantime.

@larsoner mentioned this pull request on Feb 15, 2019
@massich (Contributor) commented Feb 15, 2019

ok to merge as is, and refactor in #5947

@larsoner merged commit 44c2ba4 into mne-tools:master on Feb 15, 2019