MRG+1: ENH: refactor beamformer channel picks with utils #5872
Conversation
    if not get_current_comp(info) == get_current_comp(forward['info']):
        raise ValueError('Data and forward model do not have same '
                         'compensation applied.')
@agramfort : this is new, checking whether compensation is identical between data and forward model
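In plain Python the guard reduces to comparing one compensation grade against another. A minimal sketch, with dicts standing in for MNE `Info` objects (the dict layout and the `comp` key are illustrative, not the real API):

```python
def get_current_comp(info):
    # Hypothetical stand-in for MNE's get_current_comp: here "info" is a
    # plain dict carrying a single CTF compensation grade (0 = none).
    return info.get('comp', 0)

def check_comp_match(info, forward):
    """Raise if data and forward model differ in applied compensation."""
    if not get_current_comp(info) == get_current_comp(forward['info']):
        raise ValueError('Data and forward model do not have same '
                         'compensation applied.')

# A matching pair passes silently; a mismatch raises.
check_comp_match({'comp': 3}, {'info': {'comp': 3}})
try:
    check_comp_match({'comp': 1}, {'info': {'comp': 3}})
except ValueError as err:
    print(err)
```

The point of putting this in a shared helper is that every inverse method can fail early with the same message instead of producing silently wrong source estimates.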
mne/beamformer/tests/test_lcmv.py
Outdated
    # test whether different compensations throw error
    info_comp = evoked.info.copy()
    set_current_comp(info_comp, 1)
    pytest.raises(ValueError, make_lcmv, info_comp, fwd, data_cov)
@agramfort : this is the test for the compensation comparison
modern style is:

    with pytest.raises(ValueError, match='does not match'):
        make_lcmv(...)

It's clearer what's being caught, and more precise in catching it.
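To make the difference between the two styles concrete, here is a self-contained toy example (the `make_filter` function is invented for the demo; it only mimics the shape of `make_lcmv`'s error):

```python
import pytest

def make_filter(comp_grade):
    # Toy stand-in for make_lcmv: reject non-zero compensation for the demo.
    if comp_grade != 0:
        raise ValueError('data and forward model does not match')
    return 'spatial filter'

# Legacy call style: only the exception type is checked.
pytest.raises(ValueError, make_filter, 1)

# Modern context-manager style: the message must also match the regex,
# so an unrelated ValueError raised for some other reason would fail
# the test instead of silently passing.
with pytest.raises(ValueError, match='does not match'):
    make_filter(1)
```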
    ref_chs = pick_types(info, meg=False, ref_meg=True)
    ref_chs = [info['ch_names'][ch] for ch in ref_chs]
    ch_names = [ch for ch in ch_names if ch not in ref_chs]
@agramfort : this code block is new as well, throwing out any reference channels that might still be in info for the beamformer. Now that we have this function in utils we probably need to reconsider whether this behavior is wanted by any inverse model?
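Stripped of the MNE specifics, that block is a set difference on channel names. A standalone sketch (the dict-based `info` and its `ref_ch_names` key are illustrative stand-ins, not the real `Info`/`pick_types` API):

```python
def drop_ref_channels(ch_names, info):
    """Return ch_names with any reference channels from info removed."""
    # In the real code, pick_types(info, meg=False, ref_meg=True) yields the
    # indices of reference channels, which are then mapped to names.
    ref_chs = set(info['ref_ch_names'])
    return [ch for ch in ch_names if ch not in ref_chs]

info = {'ref_ch_names': ['MEG REF1', 'MEG REF2']}
picks = drop_ref_channels(['MEG 001', 'MEG REF1', 'MEG 002'], info)
# picks -> ['MEG 001', 'MEG 002']
```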
IIRC the forward does not contain the reference channels (only implicitly via compensation if it has been applied when computing the forward) so I don't think it should matter.
Yes, that is true, I actually checked that. Talking to @agramfort, we thought about getting rid of them to ensure further computation doesn't go wrong in any case for the beamformer.

+1 on this. For others, the idea is to have common check functions that will be used across inverse methods that take data, a forward or filters or an inverse operator, and eventually some covariances. All inverse methods end up doing similar checks. Let's unify this. Sounds good?
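As a rough illustration of what such a shared check could look like, here is an entirely hypothetical sketch operating on lists of channel names (the real `_check_info_inv` works on MNE `Info`, `Forward` and `Covariance` objects):

```python
def check_channels_inv(data_chs, fwd_chs, cov_chs, bads=()):
    """Pick channels common to data, forward and covariance, excluding bads."""
    usable = set(fwd_chs) & set(cov_chs)
    picks = [ch for ch in data_chs if ch in usable and ch not in set(bads)]
    if not picks:
        raise ValueError('No common good channels between data, forward '
                         'model and covariance.')
    return picks

picks = check_channels_inv(
    data_chs=['MEG 001', 'MEG 002', 'MEG 003'],
    fwd_chs=['MEG 001', 'MEG 002', 'MEG 003'],
    cov_chs=['MEG 001', 'MEG 002'],
    bads=['MEG 002'])
# picks -> ['MEG 001']
```

Iterating over `data_chs` keeps the data's channel order, which matters when the resulting picks index into the data array.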
Yes! +1 for as much unification as possible across
For test writing: for the use of the new function anywhere but in the beamformer module I would be very glad about some help, as I don't know my way around those functions well enough yet to catch where it would be needed.
no strong feeling about loading real data or not. We tend to use real data everywhere, but maybe it would be interesting to consider having a way to mock more objects to speed up the tests by avoiding IO of complex fif files.
I/O is generally not our testing bottleneck, it's computation time. It's usually easiest and fast enough to load the necessary files from the testing dataset.
+1
Codecov Report

    @@            Coverage Diff             @@
    ##           master    #5872      +/-   ##
    ==========================================
    - Coverage   88.76%   88.69%    -0.07%
    ==========================================
      Files         401      401
      Lines       72425    72631      +206
      Branches    12122    12146       +24
    ==========================================
    + Hits        64288    64423      +135
    - Misses       5212     5284       +72
    + Partials     2925     2924        -1
ping @larsoner @agramfort
If you rebase it will probably fix CircleCI.
The Travis https://travis-ci.org/mne-tools/mne-python/jobs/488493098#L3176
Thanks @larsoner, misread the Travis output. Will fix and rebase!
Force-pushed 953927f to 4f506e7.
@larsoner I think Travis timed out ...
All better, ready for review/merge from your end?
larsoner
left a comment
Minor nitpicks here
Addressed @agramfort's comments, will get to @larsoner's a bit later. Thanks guys!
ping @larsoner: I need help with CIs again ... I don't think it is related to what I did? Should I rebase?
Ignore errors related to the physionet download. This will require checking Travis logs. Or wait an hour until #5932 is in, and I can restart Travis (which builds a merged version of the PR, so it will get the fixes automatically on build restart).
@larsoner the failing download is actually this one https://travis-ci.org/mne-tools/mne-python/jobs/490227429#L2490; it does this sometimes. But yes, right now the CIs are broken. My fault.
Okay, thanks -- then let's see what Travis says later!
larsoner
left a comment
LGTM +1 for merge
Travis isn't building a merged version (I forgot it still uses the old
Force-pushed 19cdb47 to e4ba0d3.
we could merge this but I would suggest seeing if the new _check_info_inv can be shared among all inverse solvers. @larsoner can you have a look?
@agramfort @britta-wstnr I started looking into it. There is a lot of overlap between:
These two code paths seem to do in principle the same things:
The best thing to do is probably to merge these things, by modifying
that's what I feared... no strong feeling. Do you think it would take time to fix?
I didn't think so. I'll try to add it here (won't push until it's close), and if it takes too long we can just merge.
I think we should merge this as-is, and I'll tackle unification separately. It's going to need to be a multi-step process, and there's no need to hold this up in the meantime.
ok to merge as is, and refactor in #5947
Moved some of the channel picking facility to utils: _check_info_inv (formerly _compute_beamformer._setup_picks) picks channels based on the data / noise cov matrix and forward model, taking into account channels marked as bad in info and reference channels. Next step for this PR is to write tests for this function.
Two additional functionalities have been added: a check for compensation and dropping ref channels during computation of the spatial filter (will comment in code to mark those).
ping @agramfort
To Do: