Commit 6d6055bd authored by Carolyn McNabb's avatar Carolyn McNabb

adding files for SWI analysis - initial commit

parent d399e01d
#!/bin/bash
#SBATCH --mail-user=mcnabbc1@cardiff.ac.uk
#SBATCH --mail-type=END
#SBATCH --job-name=SW2_merge_echoes
#SBATCH --partition=cubric-rocky8
#SBATCH -o /cubric/collab/599_hub/sbatch_reports/SW2_merge_echoes_%j.out
#SBATCH -e /cubric/collab/599_hub/sbatch_reports/SW2_merge_echoes_%j.err
#SBATCH --array=1-3
#change gitpath to the location of this repository
gitpath=/home/sapcm15/gitlab/brainandgenomicshub/SWI_SEPIA
export PATH=$PATH:${gitpath}
# use the SLURM_ARRAY_TASK_ID to define the subject ID (sub) from the subject_list.txt file
sub=$(sed -n ${SLURM_ARRAY_TASK_ID}p ${gitpath}/subject_list.txt)
echo "Merging SWI echoes for ${sub}"
batch_merge_echoes.sh ${sub}
#!/bin/bash
#SBATCH --mail-user=mcnabbc1@cardiff.ac.uk
#SBATCH --mail-type=END
#SBATCH --job-name=SW2_SWIhdbet
#SBATCH --partition=cubric-v100
#SBATCH --gpus=2
#SBATCH -o /cubric/collab/599_hub/sbatch_reports/SW2_SWIhdbet_%j.out
#SBATCH -e /cubric/collab/599_hub/sbatch_reports/SW2_SWIhdbet_%j.err
#change gitpath to the location of this repository
gitpath=/home/sapcm15/gitlab/brainandgenomicshub/SWI_SEPIA
export PATH=$PATH:${gitpath}
#Define the lines of the subject list that you want to work between - in this example, I read lines 1-3 of the subject list and assigned them to subs. Do not remove the 'p' after the second number.
subs=$( sed -n '1,3p' ${gitpath}/subject_list.txt )
#set paths
bids_dir=/cubric/collab/599_hub/hub_data
swi_dir=${bids_dir}/derivatives/SWI_SEPIA/preprocessed
hdbet_dir=${bids_dir}/derivatives/hdbet/preprocessed
#make a directory for the hdbet folder and delete anything that already exists in that folder (if the folder had been used for a previous command)
echo "Creating and/or deleting hd-bet folder in derivatives dir"
mkdir -p ${hdbet_dir}/head
# the directory always exists after mkdir -p, so just clear out any old data
echo "deleting old head data"
rm -f ${hdbet_dir}/head/*
echo "copying data into brain extraction folder"
for sub in ${subs}; do
cp ${bids_dir}/${sub}/ses-01/swi/${sub}_ses-01_echo-1_part-mag_GRE.nii.gz ${hdbet_dir}/head/${sub}_ses-01_part-mag_GRE.nii.gz
done
echo "Performing brain extraction using HD-BET"
hd-bet -i ${hdbet_dir}/head -o ${hdbet_dir}/brain
echo "copying brain extracted data back into SWI folder"
for sub in ${subs}; do
cp ${hdbet_dir}/brain/${sub}_ses-01_part-mag_GRE_mask.nii.gz ${swi_dir}/${sub}/${sub}_ses-01_mask.nii.gz
done
# Susceptibility weighted imaging pipeline for BSPRINT study
## Carolyn McNabb, 2025
### This pipeline uses the [SEPIA](https://github.com/kschan0214/sepia/releases) Toolbox. Documentation for this toolbox can be found [here](https://sepia-documentation.readthedocs.io/en/latest/getting_started/Installation.html)
Note, this toolbox relies on multiple other toolboxes being installed first. For the Brain and Genomics Hub project, this has already been done and these are stored in `/cubric/collab/599_hub/software` along with the SEPIA toolbox.
## File mapping for reference
The UK Biobank sequence was used to acquire these data and produces five image series (two echoes each): three magnitude and two phase. Data include files with coils combined and uncombined, as well as a normalised magnitude image, which we will use for our analysis.
**Note this is not needed for the analysis but is here for reference if anyone ever needs it**
```
014_scans_SWI_3mm_Updated_v1.1_20250217124620_c24_e1.nii.gz --> Magnitude images per coil (echo 1) --> MAG_TE1
014_scans_SWI_3mm_Updated_v1.1_20250217124620_c29_e2.nii.gz --> Magnitude images per coil (echo 2) --> MAG_TE2
015_scans_SWI_3mm_Updated_v1.1_20250217124620_e1.nii.gz --> Total magnitude image (echo 1) --> SWI_TOTAL_MAG_notNorm
015_scans_SWI_3mm_Updated_v1.1_20250217124620_e2.nii.gz --> Total magnitude image (echo 2) --> SWI_TOTAL_MAG_notNorm_TE2
016_scans_SWI_3mm_Updated_v1.1_20250217124620_e1.nii.gz --> Total magnitude image (normalised) (echo 1) --> SWI_TOTAL_MAG
016_scans_SWI_3mm_Updated_v1.1_20250217124620_e2.nii.gz --> Total magnitude image (normalised) (echo 2) --> SWI_TOTAL_MAG_TE2
017_scans_SWI_3mm_Updated_v1.1_20250217124620_c23_e1_ph.nii.gz --> Phase images per coil (echo 1) --> PHA_TE1
017_scans_SWI_3mm_Updated_v1.1_20250217124620_c1_e2_ph.nii.gz --> Phase images per coil (echo 2) --> PHA_TE2
018_scans_SWI_3mm_Updated_v1.1_20250217124620_e1_ph.nii.gz --> Total phase image (echo 1) --> SWI_TOTAL_PHA
018_scans_SWI_3mm_Updated_v1.1_20250217124620_e2_ph.nii.gz --> Total phase image (echo 2) --> SWI_TOTAL_PHA_TE2
```
For the analysis in SEPIA, we will use the normalised total magnitude data and the total phase data.
## Before you start
To execute the batch scripts in the gitlab repository, you will need to change the permissions. To do this, open a linux terminal and type:
```
cd /path/to/this/repository
export PATH=$PATH:.
chmod u+x batch*.sh
```
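As a quick sanity check, you can confirm that `chmod u+x` worked by testing for the execute bit. Here is a minimal stand-alone sketch using a throwaway file rather than the real batch scripts:

```shell
# Hypothetical sanity check: create a throwaway script, make it
# executable for the owner, and verify with the shell's -x test.
tmpscript=$(mktemp /tmp/batch_example_XXXX.sh)
chmod u+x "${tmpscript}"
if [ -x "${tmpscript}" ]; then
    echo "executable"
fi
rm -f "${tmpscript}"
```

The same `[ -x ... ]` test can be pointed at any of the `batch*.sh` scripts if a job fails with "Permission denied".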
## Subject list
Next, create a subject list, including a separate line for each subject in the BSPRINT study. In the linux terminal, type:
```
cd /cubric/collab/599_hub/hub_data
ls -d sub* > /path/to/this/repository/subject_list.txt
```
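The SLURM scripts later pick one subject per array task by printing a single line of this list with `sed`. A minimal stand-alone sketch of that trick, using a temporary list and a hard-coded task ID in place of the real `${SLURM_ARRAY_TASK_ID}`:

```shell
# Hypothetical demo of the sed line-selection used in the SLURM scripts:
# -n suppresses automatic printing, and "Np" prints only line N.
list=$(mktemp)
printf 'sub-01\nsub-02\nsub-03\n' > "${list}"
SLURM_ARRAY_TASK_ID=2                       # stand-in for the real array index
sub=$(sed -n "${SLURM_ARRAY_TASK_ID}p" "${list}")
echo "${sub}"                               # → sub-02
rm -f "${list}"
```

This is why `#SBATCH --array=1-3` in `0.0_merge_echoes_SLURM.sh` must match the number of lines in `subject_list.txt`.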
## Data preparation
SEPIA likes the data to be in 4D format, with the two echoes included in the same file. We will create these manually and store them in the `/cubric/collab/599_hub/hub_data/derivatives/SWI_SEPIA/preprocessed` directory. Each subject will have their own folder within this directory.
Before running this step, open `0.0_merge_echoes_SLURM.sh` and make sure all paths are correct and the report is being sent to your own email address. You should also check the paths in `batch_merge_echoes.sh`. **Note that this is a necessary step for ALL scripts in this repository but I will only mention it here.**
To run this step, in the linux terminal, type:
```
cd /path/to/this/repository
sbatch 0.0_merge_echoes_SLURM.sh
```
To check on the progress of your SLURM script, in the terminal, type:
```
squeue -u yourusername
```
## Brain extraction
Brain extraction is performed using [HD-BET](https://github.com/MIC-DKFZ/HD-BET) because it produces robust results and generally outperforms the built-in brain extraction (FSL's `bet`) available through the SEPIA toolbox.
To run the brain extraction, in the terminal, type:
```
sbatch 0.1_hd-bet_SLURM.sh
```
## Running SEPIA
SEPIA runs in MATLAB. In order to get it to run, I had to add the paths to all folders and subfolders in the software folder.
In the MATLAB terminal, type:
```
addpath(genpath('/cubric/collab/599_hub/software/'))
sepia
```
I have copied the config file to the gitlab repo for batch scripting, but it needs more refinement first: the maps look okay but not wonderful. I suspect the phase unwrapping, or another part of the preprocessing, is responsible, so some experimentation may be needed.
MEDI_HOME = '/cubric/collab/599_hub/software/MEDI_toolbox-2024.11.26/';
STISuite_HOME = '/cubric/collab/599_hub/software/STISuite_V3.0/';
FANSI_HOME = '/cubric/collab/599_hub/software/FANSI-toolbox/';
SEGUE_HOME = '/cubric/collab/599_hub/software/SEGUE_28012021/';
MRISC_HOME = '/cubric/collab/599_hub/software/MRI_susceptibility_calculation_12072021/';
MRITOOLS_HOME = '/cubric/collab/599_hub/software/mritools_Linux_3.5.5/';
#!/bin/bash
#Carolyn McNabb 2025
#This script will merge the different echoes from the SWI data for the BSPRINT project
#get the subject id from the SLURM script
sub=$1
#Set paths and variables
gitpath=/home/sapcm15/gitlab/brainandgenomicshub/SWI_SEPIA
data_dir=/cubric/collab/599_hub/hub_data/${sub}/ses-01/swi
out_dir=/cubric/collab/599_hub/hub_data/derivatives/SWI_SEPIA/preprocessed/${sub}
#make a directory for the files to go into
mkdir -p ${out_dir}
#merge echoes for magnitude and phase data and store in the output directory
for part in mag phase; do
fslmerge -t ${out_dir}/${sub}_ses-01_part-${part}_GRE.nii.gz ${data_dir}/${sub}_ses-01_echo-1_part-${part}_GRE.nii.gz ${data_dir}/${sub}_ses-01_echo-2_part-${part}_GRE.nii.gz
#copy the json files for the first echo over - this seems to be fine for sepia
cp ${data_dir}/${sub}_ses-01_echo-1_part-${part}_GRE.json ${out_dir}/${sub}_ses-01_part-${part}_GRE.json
done
#lastly, copy the SEPIA header file into the folder so it can be picked up by the toolbox
cp ${gitpath}/SEPIA_header.mat ${out_dir}
% This file is generated by SEPIA version: v1.2.2.6
% add general Path
sepia_addpath;
% Input/Output filenames
input = struct();
input = '/cubric/collab/599_hub/hub_data/derivatives/SWI_SEPIA/preprocessed/sub-59947961' ;
output_basename = '/cubric/collab/599_hub/hub_data/derivatives/SWI_SEPIA/output/sub-59947961/' ;
mask_filename = ['/cubric/collab/599_hub/hub_data/derivatives/SWI_SEPIA/preprocessed/sub-59947961/sub-59947961_ses-01_mask.nii.gz'] ;
% General algorithm parameters
algorParam = struct();
algorParam.general.isBET = 0 ;
algorParam.general.isInvert = 0 ;
algorParam.general.isRefineBrainMask = 0 ;
% Total field recovery algorithm parameters
algorParam.unwrap.echoCombMethod = 'Optimum weights' ;
algorParam.unwrap.unwrapMethod = '3D best path' ;
algorParam.unwrap.isEddyCorrect = 0 ;
algorParam.unwrap.isSaveUnwrappedEcho = 0 ;
% Background field removal algorithm parameters
algorParam.bfr.refine_method = '3D Polynomial' ;
algorParam.bfr.refine_order = 4 ;
algorParam.bfr.erode_radius = 0 ;
algorParam.bfr.erode_before_radius = 0 ;
algorParam.bfr.method = 'VSHARP (STI suite)' ;
algorParam.bfr.radius = 12 ;
% QSM algorithm parameters
algorParam.qsm.reference_tissue = 'None' ;
algorParam.qsm.method = 'TKD' ;
algorParam.qsm.threshold = 0.15 ;
sepiaIO(input,output_basename,mask_filename,algorParam);
sub-59947961
sub-59956832
sub-59963174
test