Running the HCP pipeline
For an overview on how to prepare data and run the HCP preprocessing steps, see Overview of steps for running the HCP pipeline.
The HCP minimal preprocessing pipelines implement the processing steps prepared by the Human Connectome Project. The pipelines are described in detail in Glasser et al. (2013), whereas the general approach is presented in Glasser et al. (2016). QuNex includes a modified branch of the HCP pipelines that enables additional flexibility and application of the pipelines to a wider array of datasets that would not meet the HCP minimum acquisition requirements.
Overall, the preprocessing is organized as a series of steps that are run consecutively. For full details, please consult Glasser et al. (2013). Here is a brief list of the pipelines and the corresponding QuNex commands:
Information on general settings, folder structure, running commands, and logging.
Processing of structural images
Processes the T1w and T2w images and aligns them to the MNI atlas space.
Runs optimized FreeSurfer processing for tissue segmentation, reconstruction of the cortical mantle, and identification of major cortical and subcortical anatomical structures.
Computes additional surface files, converts them to CIFTI format, and generates myelin maps.
Runs the longitudinal HCP FreeSurfer pipeline.
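As an illustration, the structural steps might be invoked in sequence on a prepared study as sketched below. The command names and the `--sessionsfolder`/`--batchfile` parameters shown are assumptions to be verified against the respective command pages, and the paths are placeholders for your own study layout.

```shell
# Sketch: run the structural preprocessing steps in order.
# Command names and parameters are assumptions; verify them against the
# respective command pages before use.
qunex hcp_pre_freesurfer \
    --sessionsfolder="/data/study/sessions" \
    --batchfile="/data/study/processing/batch.txt"

qunex hcp_freesurfer \
    --sessionsfolder="/data/study/sessions" \
    --batchfile="/data/study/processing/batch.txt"

qunex hcp_post_freesurfer \
    --sessionsfolder="/data/study/sessions" \
    --batchfile="/data/study/processing/batch.txt"
```

Each step depends on the outputs of the previous one, so the order of the calls matters.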
Processing of BOLD images
Processing of diffusion weighted images (DWI)
Performs motion correction, distortion correction, eddy current correction, and registration to the native T1w space. Assumes the presence of phase-encoding-reversed DWI pairs. After this step the data are ready for FSL's dtifit, bedpostx and probtrackx.
Performs motion correction, distortion correction, eddy current correction, and registration to the native T1w space. Works with 'legacy' data that lacks phase-encoding-reversed pairs. It makes use of standard field maps if present, and runs without them otherwise (at the cost of poorer results). After this step the legacy data are ready for FSL's dtifit, bedpostx and probtrackx.
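For illustration, once diffusion preprocessing completes, the outputs can be passed to FSL's tools as described above. In the sketch below, the command name `hcp_diffusion`, its parameters, and the HCP-style output paths are assumptions; verify them against the respective command page and your own study layout.

```shell
# Sketch: diffusion preprocessing followed by a tensor fit with FSL's dtifit.
# The qunex command name, its parameters, and the output paths below are
# assumptions to be checked against the documentation.
qunex hcp_diffusion \
    --sessionsfolder="/data/study/sessions" \
    --batchfile="/data/study/processing/batch.txt"

# HCP-style diffusion outputs (data, bvals, bvecs, brain mask) are then
# ready for FSL's dtifit / bedpostx / probtrackx:
DIFF="/data/study/sessions/s001/hcp/s001/T1w/Diffusion"
dtifit \
    --data="${DIFF}/data.nii.gz" \
    --mask="${DIFF}/nodif_brain_mask.nii.gz" \
    --bvecs="${DIFF}/bvecs" \
    --bvals="${DIFF}/bvals" \
    --out="${DIFF}/dti"
```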
Each of the steps is run using the specified qunex command. The processing is optimized so that multiple processes can be run in parallel, and it can be distributed over multiple nodes of a supercomputer cluster (please refer to General information on running the HCP pipeline). How to run each of the steps is described in detail on the respective pages, along with the list of relevant parameters that can or need to be specified.
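As a sketch of what parallel and distributed execution might look like, the call below adds parameters for running several sessions at once and submitting the job to a cluster scheduler. Both the `--parsessions` and `--scheduler` parameters, and the scheduler option string, are assumptions; consult the general information page for the exact options supported.

```shell
# Sketch: run several sessions in parallel and submit to a SLURM scheduler.
# The --parsessions and --scheduler parameters and their values are
# assumptions to be verified against the documentation.
qunex hcp_pre_freesurfer \
    --sessionsfolder="/data/study/sessions" \
    --batchfile="/data/study/processing/batch.txt" \
    --parsessions=4 \
    --scheduler="SLURM,time=24:00:00,mem-per-cpu=8000,jobname=hcp_prefs"
```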
In addition to the HCP minimal preprocessing pipeline, the QuNex suite also implements the following HCP pipelines:
Prepares and runs ICA-based classifier for identifying signal vs. noise components of fMRI data.
Runs multi-modal surface-based functional alignment of individual sessions' cortical data to a group template.
Runs GLM analyses on HCP preprocessed data.
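These additional pipelines are invoked with the same calling convention as the preprocessing steps. In the sketch below the command name `hcp_icafix` and its parameters are assumptions to be checked on the respective command page.

```shell
# Sketch: ICA-based denoising of preprocessed fMRI data via QuNex.
# The command name and parameters are assumptions; consult the respective
# command page for the authoritative options.
qunex hcp_icafix \
    --sessionsfolder="/data/study/sessions" \
    --batchfile="/data/study/processing/batch.txt"
```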