Here's a high-level overview of what these files do and how we use them.
Any questions, comments, or suggestions can be sent to Scott Burns (sburns@nmr.mgh.harvard.edu).
This is a Python program that we use to analyze all of our MRI data. It's run from the command line with the following usage:
semprmm_pipeline.py subject [more subjects] option [more options]
It's sort of a meta-pipeline in that it exposes functionality from many different neuroimaging packages. It contains functions for nearly everything we'd like to do with our MR data. The options of interest for each function are listed below (see the example invocation after the list).
- Copying DICOM images from Bourget
--copy_dicom
- Converting the DICOMs to useful imaging formats
--scan_only --scan2cfg --cfg2info --unpack --unpack_all
- Setting up and running SPM batches for preprocessing and 1st-level analysis
--setup_preproc --setup_outliers --run_preproc --run_outliers
- Setting up and running preprocessing and 1st-level stats with FSFast
--setup_fs_analysis --setup_fs_stats --run_fs_stats
- Setting up and running the cortical reconstruction using FreeSurfer
--setup_recon --run_recon
- Setting up and running 2nd-level (group) analyses
--setup_second --run_second
- Running FreeSurfer in "batch mode" to output JPEGs of the 2nd-level results overlaid onto fsaverage, and packaging them into a directory of HTML files
--surf_second --package_second
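For instance, a hypothetical run that copies and unpacks the DICOMs for two subjects (SUBJ01 and SUBJ02 are placeholder IDs, not real subjects) might look like:

```
semprmm_pipeline.py SUBJ01 SUBJ02 --copy_dicom --unpack
```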
There's a lot of study-dependent code in semprmm_pipeline.py right now, but take a look at it for ideas.
This is a Python library that contains study-independent code. Using an import statement in Python, you can readily incorporate these functions into your own pipeline if you so desire.
Here are the functions and a brief overview of what they do (a short sketch of a few of them follows the list):
- find_session pass in a list of options for findsession and it returns a string path to the DICOM directory on Bourget.
- mirror runs an rsync process from src to dest. It can optionally run in the background.
- scan_only pass in the DICOM directory as src, the unpacking target directory as targ, and a path to the scan.log file as scan, and this function runs unpacksdcmdir in its scan-only mode.
- scan_to_cfg This function helps in converting the scan.log file to the cfg.txt that can be used for the full unpacking. It's very flexible, but because of this it's somewhat difficult to explain. Ask me if you have questions (and see semprmm_pipeline:scan2cfg for how I use it).
- unpack This runs the full unpacksdcmdir command, using src and targ as in the scan_only function and cfg as the path to the cfg.txt file that may have been made with the help of scan_to_cfg.
- save_data Python provides a simple data persistence (saving objects for later use) module called pickle. This function wraps pickle into an easy-to-use function in which all you have to do is supply the object to save as data and a path to a file.
- load_data The inverse of save_data. Pass in a file path that contains "pickled data" and it returns the object.
- f2f_replace This stands for "file to file with replace". Think of it as copying a file from one location to another, but with an added twist: you can pass in a dictionary (the replace arg) of keywords and their values, and anywhere $keyword appears in the incoming file, it will be replaced with the corresponding value in the outgoing file. I use this function heavily in semprmm_pipeline to copy template SPM batches to subject-specific batches.
- run_process This function takes a command-line command as a list of strings. For instance, if you've written a shell script and made the file executable, passing the path to the file into this function will actually run the script. You can optionally pass in streams to which stdout and stderr will be printed. By design, this function returns the process object to the caller, and it's up to the caller to either wait for the process to exit or let it run in the background.
- And many other generalized helper functions...
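To make a few of these descriptions concrete, here is a minimal sketch of how save_data, load_data, f2f_replace, and run_process might look when built on the standard library. The function names match the list above, but the exact signatures are assumptions inferred from the descriptions, not the library's actual API, and the usage at the bottom (SUBJ01, the template filename, the MATLAB invocation) is purely hypothetical.

```python
import pickle
import subprocess
from string import Template


def save_data(data, path):
    # Persist any Python object to disk with pickle.
    with open(path, "wb") as f:
        pickle.dump(data, f)


def load_data(path):
    # Inverse of save_data: read a pickled file and return the object.
    with open(path, "rb") as f:
        return pickle.load(f)


def f2f_replace(src, dest, replace):
    # "File to file with replace": copy src to dest, substituting every
    # $keyword in the incoming file with replace["keyword"].
    with open(src) as f:
        template = Template(f.read())
    with open(dest, "w") as f:
        f.write(template.safe_substitute(replace))


def run_process(cmd, stdout=None, stderr=None):
    # Launch a command (a list of strings) and hand the Popen object back
    # to the caller, who decides whether to wait() or run it in the background.
    return subprocess.Popen(cmd, stdout=stdout, stderr=stderr)


# Hypothetical usage: turn a template SPM batch into a subject-specific
# batch, then run it through MATLAB (filenames and subject ID are made up).
f2f_replace("template_batch.m", "SUBJ01_batch.m", {"subject": "SUBJ01"})
proc = run_process(["matlab", "-nodesktop", "-r", "run('SUBJ01_batch.m')"])
proc.wait()  # skip this call to leave the process running in the background
```

Note that returning the Popen object directly, rather than waiting inside run_process, is what gives the caller the wait-or-background choice described above.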
These scripts define preprocessing batches for SPM for our different paradigms. They all do basically the same thing: realign, correct for slice timing, coregister, segment, normalize, and smooth (with both 6mm and 8mm kernels). The little bit of code at the end of each file tells MATLAB to actually run these SPM modules.
These scripts define 1st-level statistical processing for our different paradigms. Nothing too fancy.
This script defines a one-sample t-test for 2nd-level group analysis. Again, nothing too fancy.
This shell script wraps two calls to TkSurfer (for the lh and rh) and the code to convert the TIFFs that TkSurfer outputs into more readily usable JPEGs.
These are TCL scripts that command TkSurfer to make six images (anterior, posterior, superior, inferior, medial, and lateral views) for each hemisphere. FYI, there's a coupling between the images these scripts make and the images make_images.sh expects to convert.
Any other files depend on our stimulus presentation program and are probably not applicable to you, but you can take a look if you'd like.