
Conversation

@honzaseda

I was using MNE to parse BrainVision data files for a paper in which we needed all the information from the segmentation and impedance sections of a header file. As I did not find this information in the currently parsed raw data, I implemented an enhancement of the parser that adds these sections to the raw info.

Since these changes may have a wider use case, we decided this enhancement could be included in the tool directly.


Enhancements:

The function parse_segmentation reads the key-value pairs from the segmentation subsections of the BrainVision header (the sections whose names start with Segmentation / Averaging), as well as some of the added key-value pairs in the Common Infos section. The result is stored in a new info item, segmentation.

The function parse_impedance creates a dict of channel impedances from the header file. It iterates over all channels in the raw info and, if an impedance record exists for a channel, adds it to the dict.
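For illustration, a minimal sketch of the impedance-parsing idea (the helper name and the exact line format are assumptions here, not the PR's actual code):

import re

def _parse_impedance_sketch(header_lines, ch_names):
    """Collect per-channel impedances (kOhm) from BrainVision header comment lines."""
    impedances = dict()
    for line in header_lines:
        # match lines such as "Fp1:          5" that follow an "Impedance [kOhm] ..." header
        match = re.match(r'^(\S+):\s+([\d.]+)\s*$', line)
        if match and match.group(1) in ch_names:
            impedances[match.group(1)] = float(match.group(2))
    return impedances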

The header file these were tested on is mne/io/brainvision/tests/data/test_segmentation_impedance.vhdr (also used for the unit tests).


Problems:

However, even though these changes work well in our local environment and the parsing itself works correctly, implementing them directly in MNE causes some of the existing unit tests to fail, and I'm not sure how to solve this. The failures occur whenever I try to add any new attribute to the info structure, even an empty list.

These are the failing tests:

FAILED mne/io/brainvision/tests/test_brainvision.py::test_brainvision_data_highpass_filters - RuntimeWarning: Could not get size for self.info
FAILED mne/io/brainvision/tests/test_brainvision.py::test_brainvision_data_lowpass_filters - RuntimeWarning: Could not get size for self.info
FAILED mne/io/brainvision/tests/test_brainvision.py::test_brainvision_data_partially_disabled_hw_filters - AssertionError: assert {'acq_pars', ... 'comps', ...} == {'acq_pars', ... 'comps', ...}
FAILED mne/io/brainvision/tests/test_brainvision.py::test_brainvision_data_software_filters_latin1_global_units - AssertionError: assert {'acq_pars', ... 'comps', ...} == {'acq_pars', ... 'comps', ...}
FAILED mne/io/brainvision/tests/test_brainvision.py::test_brainvision_data - RuntimeWarning: Could not get size for self.info

Any ideas where the problem could be for the existing tests and how this could be improved?

@larsoner
Member

As I did not find this information in the currently parsed raw data, I implemented an enhancement of the parser that adds these sections to the raw info.

The set of values / things that can be added to Info is restricted by the FIF format spec, so we can't add things there arbitrarily. Instead it might be worth adding these to the RawBrainvision class as attributes or methods, such as raw_bv.get_impedances() or so. They'll be lost on save-read round-trip (to FIF) but that shouldn't be too much of a problem for getting things like impedances.
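A rough, self-contained sketch of that suggestion (the class below is a stand-in for illustration, not MNE's actual RawBrainVision):

class RawBrainVisionSketch:
    """Stand-in showing reader-specific extras kept on the instance, not in Info."""

    def __init__(self, impedances):
        # e.g. {'Fp1': 5.0, 'Fp2': 4.0}, parsed from the .vhdr header
        self._impedances = impedances

    def get_impedances(self):
        """Return a copy so callers cannot mutate the stored values."""
        return dict(self._impedances)

raw_bv = RawBrainVisionSketch({'Fp1': 5.0, 'Fp2': 4.0})
print(raw_bv.get_impedances())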

@honzaseda
Author

Thanks for the suggestion, I will look into this.

@honzaseda
Author

Moved the parsed data into attributes of the RawBrainvision class

@honzaseda honzaseda marked this pull request as ready for review May 12, 2020 18:11
Member

@agramfort agramfort left a comment

Apparently there are two features here: the ability to read general metadata on channels, and reading averaged data, i.e., Evoked objects, in BrainVision format.

return len(markers_basename - bv_markers) == 0


def parse_impedance(settings):
Member

this should be a private function

Suggested change
def parse_impedance(settings):
def _parse_impedance(settings):

Member

@honzaseda you forgot to make this method private as well

SegmentationType=MARKERBASED
SegmentDataPoints=1100
Averaged=YES
AveragedSegments=80
Member

does it mean it should be read as an Evoked dataset?

Author

Yes, if Averaged is set to YES, it means that the data file contains data that have been averaged across segments.

Member

This tells me that we should add a read_evoked_brainvision, no?
I would rather not have read_raw_brainvision read such a header without complaining.

Author

Probably? I'm not really sure, as I do not know the inner workings of MNE that well.
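As a hypothetical illustration of "complaining" about such headers (not code from this PR; it assumes the Common Infos section has already been parsed into a dict):

def _check_not_averaged(common_infos):
    """Raise if the header describes averaged (Evoked-like) data."""
    if common_infos.get('Averaged', 'NO').upper() == 'YES':
        raise RuntimeError('This BrainVision header describes averaged data; '
                           'a dedicated evoked reader would be needed.')

_check_not_averaged({'SegmentationType': 'MARKERBASED', 'Averaged': 'NO'})  # passes silently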

@agramfort
Member

@sappelhoff or @cbrnr can you pitch in?

Member

@sappelhoff sappelhoff left a comment

Hi @honzaseda, thanks for contributing these enhancements. It might have been easier to include one at a time, though :-)

I only had a look at the impedance parsing so far and that looks okay to me as it is. I am just not sure whether it's a good idea to make the impedances an attribute of raw, because BrainVision raw files will be the only files to have this attribute?

The alternative would be to optionally return the impedances as a data table at read time, e.g., raw, impedances = mne.io.read_raw_brainvision(path, return_impedances=True).

Re: segmentation (not having checked the code yet): I have never met anybody who used this feature of the BrainVision Recorder software :-) ... first time, yay.

@cbrnr
Contributor

cbrnr commented May 14, 2020

There is raw._raw_extras to store this kind of stuff. For example, almost all of the EDF header goes into that attribute. Not sure if putting it in that dict is a better option than creating a raw.impedances attribute or a getter method.

@larsoner
Member

BrainVision raw files will be the only files to have this attribute?

I think this is okay. It will also be lost on FIF round-trip but that's also okay. If there are other readers that support impedances we can think about a more global way of storing them or something. As long as we pick a sensible attribute here we can always extend it to other readers later via a mixin or something.

The alternative would be to optionally return the impedances as a data table at read time

This would also be okay, but it's not all that extensible if there are other attributes people will want from other readers. You end up with a bunch of return_... kwargs or otherwise changing the API signature. Just using an attribute seems cleaner.

There is raw._raw_extras to store this kind of stuff.

Users should not expect private attributes to be stable; they can be changed in a backward-incompatible way at any time during development with no warning. So we should not advertise that people store information they want in there, or make use of private attributes in general.

BTW, ideally / by design, the only things that should actually live in _raw_extras are the things needed by _read_segment_file.

@cbrnr
Contributor

cbrnr commented May 14, 2020

Yes, if this is meant to be used by users, it's better not to rely on that private attribute. I'd vote for a simple impedances attribute then (instead of a getter method).

@agramfort
Member

agramfort commented May 14, 2020 via email

@sappelhoff
Member

one strategy could be to use the chs['loc'][6] entry for impedance.

can we have unlimited space in the chs['loc'][X] entries? Or is this somehow bounded by FIFF rules?

If it's bounded, using the limited space for impedance values would seem like a waste to me.

If it's not bounded +1

Otherwise, with Eric's description I now also vote for a simple attribute.

And I also vote for @honzaseda to split the two enhancements into 2 PRs =)
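For context on the loc question above: in MNE, each info['chs'][i]['loc'] entry is a fixed-length array of 12 floats, so the space there is bounded. A quick check (the file name below is a placeholder):

import mne

raw = mne.io.read_raw_brainvision('example.vhdr', preload=False)  # placeholder path
print(raw.info['chs'][0]['loc'].shape)  # (12,) -- fixed size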

@agramfort
Member

agramfort commented May 14, 2020 via email

@honzaseda
Author

Removed the segmentation parsing; I will create a separate PR for it and leave only the impedance parsing here.

Left the impedances as an attribute of the RawBrainvision class for now.
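Assuming the attribute keeps the name discussed above and maps channel names to impedance values, usage would look roughly like this (the file name is a placeholder):

import mne

raw = mne.io.read_raw_brainvision('example_with_impedances.vhdr')
print(raw.impedances)  # e.g. a dict keyed by channel name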

@honzaseda honzaseda changed the title WIP, ENH: BrainVision impedance and segmentation parsing WIP, ENH: BrainVision impedance parsing May 15, 2020

Channels
--------
# Name Phys. Chn. Resolution / Unit Low Cutoff [s] High Cutoff [Hz] Notch [Hz] Series Res. [kOhm] Gradient Offset
Member

See comment in other PR, this should eventually be added to mne-testing-data

Member

for the sake of this PR you could add the impedances to one of the existing VHDR files in https://github.com/mne-tools/mne-python/tree/master/mne/io/brainvision/tests/data and then use that file for testing.
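For reference, the impedance block in a BrainVision header typically lives in the free-text comment section and looks roughly like this (timestamp and values are made up):

Impedance [kOhm] at 10:15:20 :
Fp1:          5
Fp2:          4
Ref:          2
Gnd:          3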

@sappelhoff
Member

just a note, this PR could also come in handy for MNE-BIDS to automatically populate the impedance field in electrodes.tsv BIDS files.

We're having a short "unconference" at my institute soon, and maybe I'll find time there to finish this. Unless you have other plans @honzaseda ?

@sappelhoff
Member

We're having a short "unconference" at my institute soon, and maybe I'll find time there to finish this. Unless you have other plans @honzaseda ?

I just went ahead and finished it, and included you in the credits. Thanks for making a start with this PR @honzaseda!

See #7974
