Conversation

@alexrockhill (Contributor) commented Jul 9, 2021

I thought the checks that the image warp worked properly, and the handling of the coordinate frames, would be better handled by a function than left to copy-paste. After writing it, the function seems rather cumbersome, so it would be nice to get a review; maybe this isn't the best way to go about it. I tried splitting it into one piece for making an image and one piece for warping an image, but that was cumbersome as well.

Also, I didn't write a test because it is just too cumbersome, but the example did work...

@larsoner (Member)

> Also, I didn't write a test because it is just too cumbersome, but the example did work...

Everything in MNE should be regression tested -- it's an important part of the work and worth the effort! It's also a useful skill to learn in terms of how to take a complicated problem and distill it to something that proves things work properly, without taking too long to do so.

One idea would be to fake sample's T1 into being CT-like. For example, eyeball two or three 3x3 blocks of voxels (in some known small volume regions) in freeview to call "electrodes". In the test, nib.load the T1, "color in" those hard-coded 3x3 blocks (using numpy slicing to assign values), save the image back out. Use the function in this PR to warp this "sample CT" to fsaverage, assert that you get your 2 or 3 electrodes out, and that they live in the correct fsaverage volumetric ROIs. It sounds like a lot, but I think it will end up being < 30 lines, and should make sure that this function works.

This also lends itself nicely to testing other cases, for example if you color in a single voxel, you might not get this point back, and the function should warn -- and you can assert via pytest.warns(...) that it does, etc.
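
A minimal sketch of that idea (the path and the voxel indices below are placeholders for illustration, not real contact locations):

```python
import numpy as np
import nibabel as nib

# Load sample's T1 and pretend it is a CT (hypothetical path).
t1 = nib.load('subjects/sample/mri/T1.mgz')
data = np.asarray(t1.dataobj).astype(np.float32)

# "Color in" hard-coded 3x3x3 blocks of voxels to act as electrodes,
# using numpy slicing to assign bright, CT-like values.
for i, j, k in [(120, 110, 90), (126, 110, 90), (132, 110, 90)]:
    data[i - 1:i + 2, j - 1:j + 2, k - 1:k + 2] = 1000.

fake_ct = nib.MGHImage(data, t1.affine)
nib.save(fake_ct, 'sample_fake_CT.mgz')
```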

@alexrockhill (Contributor, Author)

> Also, I didn't write a test because it is just too cumbersome, but the example did work...
>
> Everything in MNE should be regression tested -- it's an important part of the work and worth the effort! It's also a useful skill to learn in terms of how to take a complicated problem and distill it to something that proves things work properly, without taking too long to do so.
>
> One idea would be to fake sample's T1 into being CT-like. For example, eyeball two or three 3x3 blocks of voxels (in some known small volume regions) in freeview to call "electrodes". In the test, nib.load the T1, "color in" those hard-coded 3x3 blocks (using numpy slicing to assign values), save the image back out. Use the function in this PR to warp this "sample CT" to fsaverage, assert that you get your 2 or 3 electrodes out, and that they live in the correct fsaverage volumetric ROIs. It sounds like a lot, but I think it will end up being < 30 lines, and should make sure that this function works.
>
> This also lends itself nicely to testing other cases, for example if you color in a single voxel, you might not get this point back, and the function should warn -- and you can assert via pytest.warns(...) that it does, etc.

Sounds good. I wrote that after a long day of working on too many PRs at once, but after thinking about it, I don't think it's that cumbersome. Four required arguments is a lot, but I think that's the best way to make it modular and make sure you can check step by step. I'm almost done with the test; I will push soon.

@alexrockhill (Contributor, Author)

Ok, this should pass all the tests, but it does take 50 seconds locally to run. @larsoner, if you have any ideas to make it faster, that would be nice, but I'm not sure I have any. Other than that, it should be good to go.

@alexrockhill alexrockhill changed the title [ENH, WIP] Encapsulate warp elec image in function [ENH, MRG] Encapsulate warp elec image in function Jul 13, 2021
@larsoner (Member) left a comment

Just a couple of high-level comments; I can look a bit deeper once you address those. Also, 50 sec will be too long; I can see if there are ways to speed it up later this week! If you want to try in the meantime, I would add @profile to the top of the test function:

```python
...
@pytest.mark.slowtest
@testing.requires_testing_data
@profile
def test_warp_montage_volume():
    ...
```

and do:

```
kernprof -lbv ~/.local/bin/pytest mne/tests/test_surface.py -k warp_montage
```

It should tell you which lines of the test are slow.

@alexrockhill (Contributor, Author)

Hmmm so

```python
reg_affine, sdr_morph = compute_volume_registration(
    subject_brain, template_brain, zooms=5, niter=[10, 10, 5],
    pipeline=['affine', 'sdr'])
```

raises

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<decorator-gen-22>", line 24, in compute_volume_registration
  File "/Users/alexrockhill/projects/mne-python/mne/transforms.py", line 1635, in compute_volume_registration
    return _compute_volume_registration(
  File "/Users/alexrockhill/projects/mne-python/mne/transforms.py", line 1653, in _compute_volume_registration
    pipeline = _validate_pipeline(pipeline)
  File "/Users/alexrockhill/projects/mne-python/mne/transforms.py", line 1578, in _validate_pipeline
    raise ValueError(
ValueError: Steps in pipeline are out of order, expected ('affine', 'sdr') but got ['affine', 'sdr'] instead
```

but works with a tuple.

I might fix that in this PR if that's okay.
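
For context, a minimal sketch of the kind of fix, assuming the check compares the user's input directly against a tuple of expected steps (the real `_validate_pipeline` internals may differ); normalizing the input to a tuple makes lists and tuples both work:

```python
def _validate_pipeline(pipeline):
    """Check that pipeline steps are known and in order (sketch)."""
    all_steps = ('translation', 'rigid', 'affine', 'sdr')
    pipeline = tuple(pipeline)  # accept a list or a tuple alike
    for step in pipeline:
        if step not in all_steps:
            raise ValueError(f'Unknown step {step}, expected one of {all_steps}')
    expected = tuple(step for step in all_steps if step in pipeline)
    if pipeline != expected:  # tuple == tuple, so the order check works now
        raise ValueError(f'Steps in pipeline are out of order, expected '
                         f'{expected} but got {pipeline} instead')
    return pipeline
```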

@alexrockhill (Contributor, Author)

Ok, now the test is down to 20 seconds :)

@alexrockhill (Contributor, Author)

Ok, ready for review @larsoner, whenever you get a chance.

@alexrockhill (Contributor, Author)

So I changed the tests so that the zooms=5, full-iterations version is the ground truth, but the euclidean distance is still ~0.11, which I think is in meters, so that's 11 cm, which is really far off...

"Ground truth" (with zooms=5)

```python
[[-0.27778788,  0.24251515, -0.35693939],
 [-0.30033333,  0.24785714, -0.35014286],
 [-0.32261947,  0.25295575, -0.34614159]]
```

Extra quick version (affine + SDR with low iterations)

```python
[[-0.31253333,  0.352     , -0.3247    ],
 [-0.33082353,  0.35317647, -0.31488235],
 [-0.34751786,  0.34966071, -0.30701786]]
```
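
For reference, the distances being compared can be computed per electrode like this (a quick sketch using the arrays above):

```python
import numpy as np

ground_truth = np.array([[-0.27778788, 0.24251515, -0.35693939],
                         [-0.30033333, 0.24785714, -0.35014286],
                         [-0.32261947, 0.25295575, -0.34614159]])
quick = np.array([[-0.31253333, 0.352, -0.3247],
                  [-0.33082353, 0.35317647, -0.31488235],
                  [-0.34751786, 0.34966071, -0.30701786]])
dists = np.linalg.norm(ground_truth - quick, axis=1)  # per-electrode distance
print(dists)  # each ~0.11-0.12, i.e. ~11 cm if the units are meters
```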

@alexrockhill (Contributor, Author)

I'll try adding back translation and rigid and see how much of a time hit that is for the precision.

@larsoner (Member)

Yes, it's possible that the affine step with low zooms breaks things, so I would try translation+rigid+SDR in the test (skipping affine), or even just translation+rigid (no affine or SDR) with zooms=5, if that gets us speed plus good enough accuracy (< 1 cm would be nice).
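
That suggestion would look something like this with the registration function used above (a sketch; variable names follow the earlier snippet):

```python
# translation + rigid + SDR at zooms=5, skipping the affine step;
# dropping 'sdr' from the tuple gives the translation + rigid-only variant
reg_affine, sdr_morph = compute_volume_registration(
    subject_brain, template_brain, zooms=5,
    pipeline=('translation', 'rigid', 'sdr'))
```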

@alexrockhill (Contributor, Author)

> Yes, it's possible that the affine step with low zooms breaks things, so I would try translation+rigid+SDR in the test (skipping affine), or even just translation+rigid (no affine or SDR) with zooms=5, if that gets us speed plus good enough accuracy (< 1 cm would be nice).

Great, I'll try that!

@alexrockhill (Contributor, Author)

Hmmm, with zooms=5, even without the affine, we're still at about 10 cm.

@alexrockhill (Contributor, Author)

I'll try messing with the parameters a bit more; it's possible that a few more translation iterations will fix things.

@larsoner (Member)

I would use nibabel's orthoview() on the before and after images to see what's going wrong. I wouldn't really expect even an identity "registration" for sample-fsaverage to be off by 10 cm for most points...
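
A quick way to do that visual check (the paths are hypothetical; orthoview() is a method on loaded nibabel images):

```python
import nibabel as nib

# hypothetical paths to the image before and after warping
before = nib.load('sample_fake_CT.mgz')
after = nib.load('sample_fake_CT_warped.mgz')
before.orthoview()  # interactive viewer with orthogonal slices
after.orthoview()
# in a non-interactive session, follow with matplotlib.pyplot.show()
```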

@larsoner (Member)

(but maybe I underestimate how far off they are)

@alexrockhill (Contributor, Author)

Wait, I'm thinking in mm; they are off by ~1 cm 🤦

```
0.010855916835345538
0.008643073755143938
0.007148324234890798
```

@larsoner (Member)

Oh good, I think atol=0.01 / decimal=2, i.e. 1 cm, is good enough for the 5 mm-zooms version!
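
For reference, that tolerance would look something like this in the test (the array names here are hypothetical):

```python
import numpy as np

# warped and expected electrode positions, shape (n_electrodes, 3), in meters
np.testing.assert_allclose(warped_pos, expected_pos, atol=0.01)  # within 1 cm
# equivalently, with the decimal-based API:
np.testing.assert_array_almost_equal(warped_pos, expected_pos, decimal=2)
```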

@alexrockhill (Contributor, Author)

Ok, good to go, but it was 28 seconds locally. I tried turning down the iterations and skipping steps, but everything I did lost precision, so I think this is about the best we can do to be ~1 cm precise.

@alexrockhill (Contributor, Author)

Ok, this is good to go by me if it's okay with you.

Commits pushed:

- to show Eric
- revert all Brain API changes, for another PR
- in progress
- in progress
- working version of montage warping function
- remove off-target
@larsoner (Member)

Will merge once CIs come back happy, thanks @alexrockhill!

@larsoner (Member)

Pushed a commit to set coord_frame='mri' and will merge once CIs come back happy!

(After doing that I realized that it doesn't matter much for this PR which coordinate frame things are plotted in, but it'll be useful when we combine with Brain in #9545, so no harm in having it here.)

@alexrockhill (Contributor, Author)

Ok, I think good to go!

@larsoner larsoner merged commit 4875e04 into mne-tools:main Jul 16, 2021
@larsoner (Member)

Indeed, thanks @alexrockhill ! Time for some fun rebasing elsewhere :)

@alexrockhill alexrockhill deleted the elec_img branch July 16, 2021 22:16
@alexrockhill (Contributor, Author)

> Indeed, thanks @alexrockhill ! Time for some fun rebasing elsewhere :)

The plan is that the dominoes fall perfectly!

larsoner added a commit to alexrockhill/mne-python that referenced this pull request Jul 19, 2021
```
* upstream/main:
  [MRG, ENH] Find aseg labels near montage (mne-tools#9545)
  Add label to colorbar in GAT plot [skip actions] (mne-tools#9582)
  [ENH, MRG] Encapsulate warp elec image in function (mne-tools#9544)
  [DOC, MRG] Add "info" to `docdict` (mne-tools#9574)
  [MRG] Add `units` parameter to get_data for Evoked (mne-tools#9578)
  [MRG, ENH] Get annotation description from snirf stim name (mne-tools#9575)
  [MRG] ENH, FIX: Add tmin/tmax parameters to get_data methods, fix bug in find_bads_ecg (mne-tools#9556)
```