
Conversation

larsoner (Member) commented Feb 22, 2019

Okay @agramfort, I had to change a tolerance for one gamma_map test. All other tests should be passing except mne/inverse_sparse/tests/test_mxne_inverse.py::test_mxne_inverse_standard:

>       assert stc_prox.vertices[1][0] in label.vertices
E       IndexError: index 0 is out of bounds for axis 0 with size 0

Then loose_method='sum' should change to loose_method='svd' (see the sketch below for the difference between the two combination rules), but this will break the sphere test: the orientation only matches to 0.89 rather than 0.99, and the dipole fit is 30 mm away rather than 1.7 mm :(.

I'm out of my depth working on the MxNE stuff, so hopefully you find some hacking time soon.
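
For context, a rough sketch of what the two combination rules could do for a free-orientation source (my reading of the idea, not the PR's code; `g_xyz` stands for a hypothetical (n_channels, 3) gain block for one source):

    import numpy as np

    def combine_xyz_sum(g_xyz):
        # 'sum': total power across all three orientation columns
        return np.sum(g_xyz ** 2)

    def combine_xyz_svd(g_xyz):
        # 'svd': power along the dominant orientation only
        # (square of the largest singular value of the 3-column block)
        return np.linalg.svd(g_xyz, compute_uv=False)[0] ** 2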

Todo:

  • add limit_depth_chs='whiten' to MxNE and MNE (see the sketch below)
  • fix test_mxne_inverse_standard
  • set loose_method='svd' and fix the failing sphere model test
  • add bias tests for LCMV

Next PR:

  • rework compute_depth_prior to operate on forward
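
To make the whiten-based depth weighting item concrete, here is a loose sketch of the idea (an illustration under assumptions, not MNE's implementation; the function name, signature, and capping rule are all hypothetical):

    import numpy as np

    def depth_prior_whiten(gain, whitener, exp=0.8, limit=10.0):
        # Measure source depth on the *whitened* gain matrix, so that all
        # channel types are combined via the noise covariance rather than
        # depth weighting being restricted to a single channel type.
        g = whitener @ gain                      # whitened gain (n_channels, n_sources)
        d = np.sum(g ** 2, axis=0)               # squared column norm per source
        w = 1.0 / d                              # deeper source -> weaker column -> larger weight
        w = np.minimum(w, w.min() * limit ** 2)  # cap the dynamic range
        return w ** exp                          # exponent controls weighting strength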

larsoner mentioned this pull request Feb 22, 2019
codecov bot commented Feb 22, 2019

Codecov Report

❗ No coverage uploaded for pull request base (master@73ea718).
The diff coverage is 93.85%.

@@            Coverage Diff            @@
##             master    #5984   +/-   ##
=========================================
  Coverage          ?   88.83%           
=========================================
  Files             ?      401           
  Lines             ?    72880           
  Branches          ?    12180           
=========================================
  Hits              ?    64742           
  Misses            ?     5218           
  Partials          ?     2920

larsoner force-pushed the unify branch 2 times, most recently from 2a18786 to fed164b on February 23, 2019 at 21:51
larsoner (Member, Author) commented:

@agramfort are you okay with a backward-compatibility break to refactor compute_depth_prior to take forward instead of gain, gain_info, is_fixed_orient, and patch_areas? Given that this function wasn't in the doc and was buried in mne.forward.forward.compute_depth_prior, I think it's probably fine, especially if we note it well in the doc. Or should we do a proper deprecation cycle? We could allow gain to be an instance of Forward, have the other options default to None, and remove them in 0.19. Let me know which you prefer.

If you want this refactoring/deprecation to appear in this PR, let me know. Otherwise this can be merged and the compute_depth_prior cleanups can be the next PR.
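
For illustration, the deprecation path could look something like the sketch below (a hypothetical shim, not the PR's code; it assumes the Forward dict layout and mne.forward.is_fixed_orient, and `_depth_prior_from_gain` is an assumed private helper):

    import warnings
    import mne
    from mne.forward import is_fixed_orient as _fwd_is_fixed

    def compute_depth_prior(gain, gain_info=None, is_fixed_orient=None,
                            patch_areas=None, **kwargs):
        # Hypothetical shim: allow `gain` to be a Forward, while the old
        # array-based arguments keep working for one release cycle.
        if isinstance(gain, mne.Forward):
            fwd = gain
            gain = fwd['sol']['data']          # gain matrix stored in the Forward
            gain_info = fwd['info']
            is_fixed_orient = _fwd_is_fixed(fwd)
        else:
            warnings.warn('Passing gain, gain_info, is_fixed_orient, and '
                          'patch_areas directly is deprecated and will be '
                          'removed in 0.19; pass a Forward instead.',
                          DeprecationWarning)
        # _depth_prior_from_gain is an assumed (hypothetical) private helper
        return _depth_prior_from_gain(gain, gain_info, is_fixed_orient,
                                      patch_areas, **kwargs)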

larsoner changed the title from "WIP: Unify _prepare_gain" to "MRG(?): Unify _prepare_gain" on Feb 24, 2019
@pytest.mark.parametrize('method, lower, upper, kwargs, depth', [
('MNE', 35, 40, {}, dict(limit_depth_chs=False)),
('MNE', 45, 55, {}, 0.8),
('MNE', 65, 70, {}, dict(limit_depth_chs='whiten')),
larsoner (Member, Author) commented on this diff:

FYI, this shows the performance improvement for MNE when using whiten-based depth weighting (at least in this case, with this measure of localization bias).
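
For reference, the lower/upper bounds above could correspond to a localization-bias measure of roughly this shape (a hypothetical sketch, not the actual test code; `apply_inv` is an assumed callable):

    import numpy as np

    def percent_correctly_localized(gain, apply_inv):
        # gain: (n_channels, n_sources) forward gain; apply_inv: hypothetical
        # callable mapping sensor data to source estimates. Activate each
        # source in turn (the columns of `gain`) and count how often the
        # peak of the estimate lands on the true source.
        estimates = np.abs(apply_inv(gain))    # (n_sources, n_sources)
        hits = np.argmax(estimates, axis=0) == np.arange(gain.shape[1])
        return 100.0 * hits.mean()             # compared against lower/upper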

larsoner changed the title from "MRG(?): Unify _prepare_gain" to "MRG: Unify _prepare_gain" on Feb 24, 2019
britta-wstnr mentioned this pull request Feb 25, 2019
agramfort (Member) commented Feb 27, 2019 via email

larsoner merged commit f7bb2ef into mne-tools:master on Feb 27, 2019
larsoner deleted the unify branch on February 27, 2019 at 15:30
jeythekey pushed a commit to jeythekey/mne-python that referenced this pull request Apr 27, 2019