[MRG] Move files in config to a python file #9762
Conversation
drammock left a comment
looking good. Mostly I'm suggesting clarifying the code comments, but one important improvement to make is to avoid writing the temporary file altogether.
Co-authored-by: Daniel McCloy <dan@mccloy.info>
Most of the datasets work with the default values of the pooch processors, so for the generic function I suggest [code snippet not preserved]. For this PR, it would probably be good to pull out the processors into the individual dicts that define each dataset. I can push a commit to do that if you're not sure how best to do it.
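The code snippets in the comment above did not survive extraction, but the idea might be sketched as follows. This is a hypothetical illustration, not the actual PR code: the `processor` key, the `get_processor` helper, and the dataset field values are all made up for the example; only the general pattern (per-dataset processor overriding a shared default) comes from the discussion.

```python
# Hypothetical sketch: most datasets use the default pooch processor, so each
# dataset dict only stores a "processor" entry when it needs to override it.
DEFAULT_PROCESSOR = "untar"  # placeholder default

DATASETS = {
    # uses the default processor (no "processor" key needed)
    "fnirs_motor": {"archive_name": "fnirs_motor.tgz"},
    # overrides the default for a zip archive
    "misc": {"archive_name": "misc.zip", "processor": "unzip"},
}


def get_processor(name):
    """Return the processor for a dataset, falling back to the default."""
    return DATASETS[name].get("processor", DEFAULT_PROCESSOR)
```

In real code the strings would presumably be replaced by actual `pooch` processor objects (e.g. `pooch.Untar()` / `pooch.Unzip()`); strings are used here only to keep the sketch self-contained.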
Sounds good. That works for me.
How would we do this actually, since the […]
Oops, you're right. I was thinking that when the […]
I think we need to then separate out […]. Probably best to save that for the next PR, since that depends on creation of the generic public API function? If so, this is good to go by me.
Looks like the Azure pipeline failures are timeouts. I don't think they're due to this PR, at least?
that's been happening for several days, I haven't had a chance to look into it yet |
```python
# of the downloaded dataset (ex: "MNE_DATASETS_EEGBCI_PATH").
# - (optional) token : An authorization token, if one is needed to access the
#   download URL.
fnirs_motor = dict(
```
@adam2392 sorry I was slow to review here. I would suggest you write it as:
```python
MNE_DATASETS = dict()
MNE_DATASETS['fnirs_motor'] = dict(
```
this way you don't need to replicate the list of datasets below, so you won't forget one.
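Fleshed out, the suggested registry pattern might look something like this. All field values below are placeholders invented for illustration (the real archive name, URL, and hash live in the PR itself); only the `MNE_DATASETS['name'] = dict(...)` shape comes from the suggestion above.

```python
# Sketch of the suggested pattern: one registry dict keyed by dataset name,
# so downstream code iterates over MNE_DATASETS instead of maintaining a
# second, easily-forgotten list of datasets.
MNE_DATASETS = dict()
MNE_DATASETS['fnirs_motor'] = dict(
    archive_name='MNE-fNIRS-motor-data.tgz',  # placeholder value
    folder_name='MNE-fNIRS-motor-data',       # placeholder value
    url='https://example.com/MNE-fNIRS-motor-data.tgz',  # placeholder URL
    hash='md5:0123456789abcdef0123456789abcdef',         # placeholder hash
    config_key='MNE_DATASETS_FNIRS_MOTOR_PATH',
)

# Any consumer can now enumerate every dataset without a separate list:
all_dataset_names = sorted(MNE_DATASETS)
```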
can you open a new PR?
Reference issue
First part of #9736
What does this implement/fix?
Moves the `dataset_checksums` file into a dictionary of dictionaries that define each dataset uniquely.
Additional information
The next phase would be to:
- create `mne.datasets.fetch_dataset()`, which takes in a dictionary of parameters that must have `archive_name`, `folder_name`, `url`, `hash` defined and optionally can pass in `config_key` and `token`
- refactor `_data_path` to accept these parameters instead
- refactor `_data_path` internally to leverage `MNE_DATASETS` to pass in the necessary parameters to `_data_path`
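The required/optional key split described in that plan could be sketched like this. This is not the actual `mne.datasets.fetch_dataset` implementation: it only illustrates the parameter-dict validation (no downloading), and the function body and error messages are assumptions for the sake of the example.

```python
# Hypothetical sketch of the planned public fetcher's parameter contract.
# The real implementation would delegate the actual download to _data_path.
REQUIRED_KEYS = {"archive_name", "folder_name", "url", "hash"}
OPTIONAL_KEYS = {"config_key", "token"}


def fetch_dataset(params):
    """Validate a dataset-parameter dict, then (in real code) fetch it."""
    missing = REQUIRED_KEYS - params.keys()
    if missing:
        raise ValueError(f"missing required keys: {sorted(missing)}")
    unknown = params.keys() - REQUIRED_KEYS - OPTIONAL_KEYS
    if unknown:
        raise ValueError(f"unknown keys: {sorted(unknown)}")
    return True  # stand-in for delegating to the downloader
```

With this contract, each entry of `MNE_DATASETS` can be passed straight through, which is exactly what makes the dict-of-dicts refactor pay off.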