qx_utilities.bash.dwi_bedpostx_gpu(sessionsfolder, sessions, gradnonlin, scheduler, fibers='3', weight='1', burnin='1000', jumps='1250', sample='25', model='2', rician='yes', overwrite='no', nogpu='no')


This function runs the FSL bedpostx command; by default it uses GPUs to speed up processing.

It explicitly assumes the Human Connectome Project folder structure and completed diffusion preprocessing; DWI data is expected to be in each session's HCP diffusion folder.

Parameters:
--sessionsfolder (str):

Path to study folder that contains sessions.

--sessions (str):

Comma separated list of sessions to run.

--fibers (str, default '3'):

Number of fibers per voxel.

--weight (str, default '1'):

ARD weight; a larger weight means fewer secondary fibers per voxel.

--burnin (str, default '1000'):

Burn-in period (number of initial MCMC iterations discarded before sampling).

--jumps (str, default '1250'):

Number of jumps.

--sample (str, default '25'):

Sample every Nth jump (thinning interval for the MCMC samples).

--model (str, default '2'):

Deconvolution model:

  • '1' ... with sticks,

  • '2' ... with sticks with a range of diffusivities,

  • '3' ... with zeppelins.

--rician (str, default 'yes'):

Replace the default Gaussian noise assumption with Rician noise ('yes'/'no').

--gradnonlin (str, default detailed below):

Consider gradient nonlinearities ('yes'/'no'). By default this is set automatically: 'yes' if the file grad_dev.nii.gz is present in the session's diffusion data, 'no' otherwise.
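The automatic default can be illustrated with a short sketch; the path and variable names below are illustrative assumptions, not the actual QuNex implementation:

```shell
# Illustrative sketch of how the gradnonlin default is chosen
# (hypothetical paths; not the actual QuNex code).
diffusion_dir="$(mktemp -d)/T1w/Diffusion"   # stand-in for a session's diffusion folder
mkdir -p "$diffusion_dir"

if [ -f "$diffusion_dir/grad_dev.nii.gz" ]; then
    gradnonlin="yes"    # gradient nonlinearity file present
else
    gradnonlin="no"     # no grad_dev.nii.gz found
fi
echo "gradnonlin=$gradnonlin"
```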

--overwrite (str, default 'no'):

Delete prior run for a given session.

--scheduler (str):

A string for the cluster scheduler (PBS or SLURM) followed by the relevant options. For example, for SLURM the string could look like this: --scheduler='SLURM,jobname=<name_of_job>,time=<job_duration>,cpus-per-task=<cpu_number>,mem-per-cpu=<memory>,partition=<queue_to_send_job_to>'

--nogpu (flag, default 'no'):

If set to 'yes', this command will run on a CPU instead of a GPU.


Apptainer (Singularity) and GPU support:

If nogpu is not set, this command will use GPUs to speed up processing. Since the command uses CUDA binaries, an NVIDIA GPU is required. To expose the CUDA drivers to the system inside the Apptainer (Singularity) container, you need to use the --nv flag of the qunex_container script.
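A container invocation could then look like the sketch below; the paths are placeholders, the --container option is an assumption for illustration, and the exact qunex_container options may differ between QuNex versions:

qunex_container dwi_bedpostx_gpu \
    --sessionsfolder='<path_to_study_sessions_folder>' \
    --sessions='<comma_separated_list_of_cases>' \
    --container='<path_to_qunex_container_image>' \
    --nv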


Example with a scheduler and GPU processing:

qunex dwi_bedpostx_gpu \
    --sessionsfolder='<path_to_study_sessions_folder>' \
    --sessions='<comma_separated_list_of_cases>' \
    --fibers='3' \
    --burnin='3000' \
    --model='3' \
    --scheduler='<name_of_scheduler_and_options>'

Example without GPU processing:

qunex dwi_bedpostx_gpu \
    --sessionsfolder='<path_to_study_sessions_folder>' \
    --sessions='<comma_separated_list_of_cases>' \
    --fibers='3' \
    --burnin='3000' \
    --model='3' \
    --overwrite='yes' \
    --nogpu='yes'