MRG+1: Elekta averager #3205
Conversation
mne/event.py (outdated)
| 'eog':self.eogmax, 'ecg':self.ecgmax}
| def get_epochs(self, raw, catname, picks=None, reject=None, stim_channel=None, mask=0):
| """ Get mne.Epochs instance corresponding to the given category. """
Nest the import here. In other words, put this just below this line:
from .epochs import Epochs
Then delete the import from the top. This will solve the circular import problem you're having.
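For illustration, a minimal sketch of the nested-import pattern (the class name and the _get_condition helper are placeholders, not the actual code in this PR):

    class _ElektaAverager(object):  # placeholder class name

        def get_epochs(self, raw, catname, picks=None, reject=None,
                       stim_channel=None, mask=0):
            """Get mne.Epochs instance corresponding to the given category."""
            # Importing here instead of at module level breaks the circular
            # dependency between mne/event.py and mne/epochs.py.
            from .epochs import Epochs
            # hypothetical helper returning the averaging parameters
            events, event_id, tmin, tmax = self._get_condition(catname)
            return Epochs(raw, events, event_id, tmin, tmax,
                          picks=picks, reject=reject)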
Can you run the flake8 tool on your code and follow the PEP 8 guidelines?
I dug a little into the TRIUX/Vectorview issue.

I created a fiff file (95 MB) with various event/category definitions and some environmental interference acting as "signal". With that file, the code above now seems to give results identical to the MaxFilter averager (except for some baseline effect; I haven't yet figured out where it comes from). I guess this fiff file could be used for the unit tests eventually.

Sure. Maybe to keep the file size down you could do
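something like the following (an illustrative sketch with placeholder file and channel names, not necessarily what was originally suggested), i.e. crop the recording and keep only a few channels before saving:

    # Illustration only: crop the raw file and keep a few MEG channels to
    # shrink the test file (file and channel names are placeholders).
    import mne

    raw = mne.io.read_raw_fif('elekta_test_raw.fif', preload=True)
    raw.crop(tmin=0., tmax=60.)  # keep the first minute
    raw.pick_channels(['MEG0111', 'MEG0112', 'MEG0113'])  # keep 3 channels
    raw.save('elekta_test_cropped_raw.fif', overwrite=True)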
I still get a DC shift compared to the Elekta averager, so my unit tests don't pass at the moment. I'm waiting for a response from Elekta to confirm that their code really doesn't do any baseline or filtering operations.

Have you tried putting in a baseline to see if it gets closer? You could also compute the PSD of the two datasets to get some evidence of whether or not filtering has been applied.
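For example, a quick spectral comparison could look something like this (an illustration using scipy directly; evoked_mne and evoked_elekta are placeholder Evoked instances assumed to exist):

    # Compare the spectra of the two averaged datasets; a low-pass filter
    # would show up as a faster roll-off at high frequencies.
    import numpy as np
    from scipy import signal

    sfreq = evoked_mne.info['sfreq']
    freqs, psd_mne = signal.welch(evoked_mne.data, fs=sfreq, nperseg=128)
    freqs, psd_elekta = signal.welch(evoked_elekta.data, fs=sfreq, nperseg=128)
    print(np.mean(psd_mne, axis=0) / np.mean(psd_elekta, axis=0))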
@Eric89GXL If I apply a pre-stimulus baseline to the epochs before averaging, I get an almost 100% match with the Elekta data, but Elekta claims that they don't use any baseline in their averager. BTW, do you know why the audvis_raw example data does not have info['acq_pars'] at all? I guess it was somehow modified and rewritten? My goal would be to also implement support for pre-3.4 DACQ versions (older Vectorview systems), so I guess more testing data would be needed for that. I can try to find some old data from our lab.

Our precedent in cases like this is to put in some indignant comment like "# XXX this baseline is necessary to get a good match even though the Elekta averager isn't supposed to baseline correct???" and do what is necessary to get a good match :) That way it won't hold up the PR while we wait for clarification. The sample data are old, and from MGH. Maybe they have a special setup...? I don't think anyone manually removed it. The best thing is probably to find data from your lab.
OK. Just to be sure, can you confirm my understanding that the following sequence of operations should perform no baselining, detrending or filtering whatsoever:
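For instance (a reconstruction with assumed parameter values, not necessarily the exact snippet in question):

    # Sketch of an averaging sequence intended to apply no baseline
    # correction, detrending, or filtering; raw, events, and event_id are
    # assumed to be defined already.
    import mne

    epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.5,
                        baseline=None, detrend=None, proj=False,
                        preload=True)
    evoked = epochs.average()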
Yep, that should do it. You can confirm by doing
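something along these lines, for example (an illustration, not necessarily the check originally meant):

    # The Evoked data should equal a plain mean over the epochs, with no
    # extra processing applied in between.
    import numpy as np

    assert epochs.baseline is None
    np.testing.assert_allclose(evoked.data, epochs.get_data().mean(axis=0))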
Yep. The funny thing is that the MaxAve data is also shifted, but by a different amount. Applying a baseline to both makes them identical. Anyway, evoked responses are usually baselined before analysis, so I guess it's not a big deal. I'll just write the unit tests for the baselined data as you suggested.
Don't forget the angry/confused comment, too :) But seriously put in a comment with
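a pointer to the mismatch, for example (the wording below is only a suggestion, echoing the earlier comment):

    # XXX a pre-stimulus baseline is applied here to match the Elekta
    # averager output, even though the Elekta averager supposedly does not
    # baseline correct (see the discussion in this PR).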
Will do.
There's now a unit test that averages all categories from the raw data and compares to the Elekta-averaged data to within 1 fT(/cm). I truncated the raw file to 3 MEG channels.
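The core of such a comparison might look roughly like this (file names, condition handling, and the single tolerance value are simplifications for illustration):

    # Compare the mne-python average against the Elekta/MaxFilter average;
    # epochs is assumed to have been built from the DACQ category definitions.
    import numpy as np
    import mne

    evoked_mne = epochs.average()
    evoked_elekta = mne.read_evokeds('elekta_ave.fif', condition=0)
    # 1 fT is 1e-15 T (magnetometers); 1 fT/cm is 1e-13 T/m (gradiometers).
    # A single loose tolerance is used here for brevity.
    np.testing.assert_allclose(evoked_mne.data, evoked_elekta.data,
                               atol=1e-13)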
Can you open a PR to add the data file to the mne-testing-data repository?

I guess you'll probably add a raw and an evoked file?
Sure, but I'm a bit confused about where the files should go. The test files under mne/io/tests/data/ don't seem to be in the mne-testing-data repository.
Don't add more files in mne/io/tests/data/. We now add files in https://github.com/mne-tools/mne-testing-data.
@agramfort OK. Should I put it under /MEG/sample then?

No, that is only for "sample" subject data (from the MNE-Python sample dataset). Put it in "misc" if we have it; if not, then create the folder and put it there.
Created PR at mne-tools/mne-testing-data#11
mne/event.py (outdated)
| return events_out
| class Elekta_event(object):
ElektaEvent
Actually, this class is pretty thin. Can you change it just to use a dict?
I can think about it. I would prefer classes, since there is basically no cost and they are easier to expand later. Also, if the user wants to create their own categories/events, sanity checking is much easier inside a class. (I would like to make some API expansions later so that one can easily change averaging parameters or create new events.)
I think all these classes should be private if possible -- people should never need to see them if we can avoid it. But maybe I'm missing some future API design plans.
Found this in an old tweet by @agramfort: https://www.youtube.com/watch?v=o9pEzgHorH0 :) We shouldn't overfit for future design plans...
Thanks @jasmainak, I understood that
Not sure what the coverage error is about - should there be more tests?
Take a look at the report and see which lines you missed. You might need some assert_raises to hit exceptions, for example. Try to get over 90% coverage. Sometimes it seems not to update the comment or status properly, so be sure to check the site. And failing builds, if you have them, do not contribute coverage, so those need to pass first (on mobile so can't see directly).
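For instance, a small test along these lines would exercise an exception path such as the KeyError raised for an unknown category (the averager object here is a placeholder for whatever this PR ends up exposing):

    # Hypothetical test hitting an exception branch to improve coverage.
    from nose.tools import assert_raises

    def test_unknown_category():
        """Indexing with an unknown category name should raise KeyError."""
        assert_raises(KeyError, averager.__getitem__, 'no_such_category')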
mne/event.py (outdated)
| s += '>'
| return s
| def __getitem__(self, items):
Please document this method. Take a look at how we do it with Epochs.
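For example, something in this numpydoc style (the wording below is only a suggestion):

    def __getitem__(self, items):
        """Return the category or categories matching the given name(s).

        Parameters
        ----------
        items : str | list of str
            Name(s) of the category or categories to return.

        Returns
        -------
        cats : category | list of categories
            The matching category definition(s).
        """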
| else:
| raise KeyError('No such category')
| def __len__(self):
Same here
Example looks good. Can you add your new dataset to _download_all_datasets in datasets/utils.py? It will prevent the progress bar junk in example rendering in master. If you want to venture into bonus territory you can also modify circle.yml to also download it for PRs if necessary, based on a regex like we do for sample and some BS datasets. But otherwise LGTM, thanks for sticking with so many revisions. The next PR should be easier :₩
That was supposed to be a smiley but apparently I mistyped on my phone.
Looking at coverage, the problems are

Thanks to everybody (and especially Eric) for devoting a lot of time and effort to this.
Now it doesn't seem to run coverage checks, so I'm not sure what the situation is.
It reported on your last commit -- it just waits until all Travis runs are done. You have 87% coverage on changed lines, but the misses are mostly in
Ok. I'm not sure if
Yeah definitely not using the multimodal dataset, but wouldn't our
Yep. Tested in the latest commit.
LGTM +1 for merge. @agramfort feel free to merge if you're happy
thx @jjnurminen
Thanks!
Discussed in #3097
Summary: Elekta/Neuromag DACQ (data acquisition) supports rather flexible event and averaging logic that is currently not implemented in mne-python. It also stores all averaging parameters in the fiff file, so raw data can easily be re-averaged in postprocessing.
The purpose of this PR is to add support for this event and averaging logic to mne-python.
Implementation: a class that takes all the relevant info from raw.info['acq_pars'].
API: get_condition() gives averaging info for a given condition, which can be fed to mne.Epochs.
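A hypothetical usage sketch (the class name, the get_condition() signature, and the returned fields are placeholders for illustration, not the final API of this PR):

    # Hypothetical end-to-end usage; names and return format are assumed.
    import mne

    raw = mne.io.read_raw_fif('elekta_raw.fif')
    averager = ElektaAverager(raw.info)  # parses raw.info['acq_pars']
    cond = averager.get_condition(raw, 'Auditory left')
    epochs = mne.Epochs(raw, cond['events'], cond['event_id'],
                        tmin=cond['tmin'], tmax=cond['tmax'], baseline=None)
    evoked = epochs.average()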
Technical details: DACQ supports defining 32 different events which correspond to trigger transitions. Events support pre- and post-transition bit masking. Based on the events, 32 averaging categories can be defined. Each category defines a reference event that is the zero-time point for collecting the corresponding epochs. Epoch length is defined by the start and end times (given relative to the reference event). A conditional ('required') event is also supported; if defined, it must appear in a specified time window (before or after the reference event) for the epoch to be valid.
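As a rough illustration of the kind of information involved, one event and one category could be represented like this (the field names are made up for clarity; they are not the actual parser output):

    # Illustrative representation of one DACQ event and one category.
    event_1 = {
        'channel': 'STI101',  # trigger channel (placeholder name)
        'old_bits': 0,        # pre-transition value, after masking
        'new_bits': 1,        # post-transition value, after masking
    }
    category_example = {
        'ref_event': 1,       # reference event: zero-time point for epochs
        'req_event': None,    # optional conditional ('required') event
        'req_window': None,   # window in which the required event must occur
        'start': -0.2,        # epoch start, seconds relative to ref_event
        'end': 0.5,           # epoch end, seconds relative to ref_event
    }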