This repository focuses on the preprocessing of MEGRE data (acquired with a 2-channel cryo-probe coil using a bi-polar readout gradient scheme), which mostly consists of coil and repetition combination.
The complete workflow is pieced together using
pydra, the 2nd generation of the widely
known nipype project.
The images were reconstructed from the raw k-space data using an in-house developed MATLAB routine. Channel and repetition combination was handled separately as described below.
TODO: ask KH for details, how he handled the bipolar readout gradients and flips.
Before the two channels and the many repetitions could be combined, all images needed to be corrected for two types of misalignments:
- The bi-polar readout acquisition scheme caused the odd and even echoes to be shifted along the readout direction by 0.5–2 voxels.
- Additionally, a frequency drift, unaccounted for during acquisition and caused by the temperature rising early in the session, produced a nearly linear shift along the slice encoding direction from one echo to the next within the first repetition, tapering off in subsequent repetitions.
Both of these shifts are compensated for by registering each echo to the last one using the Fourier shift theorem, as done in skimage.registration.phase_cross_correlation (or, more precisely, its port to JAX). TODO: add the citation.
Each repetition is processed independently. All moving echoes are registered independently but concurrently. The registration is performed in two steps:
- the necessary shift is identified in 3D using the magnitude data after the channels are combined using the naive sum-of-squares (SoS);
- each complex echo is shifted by multiplying its Fourier transform with the appropriate phase ramp; all coil channels are shifted by the same amount.
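The two steps above can be sketched as follows, using skimage for the shift estimation and scipy for the phase-ramp multiplication (function and variable names are ours, not the repository's API):

```python
import numpy as np
from scipy.ndimage import fourier_shift
from skimage.registration import phase_cross_correlation

def register_echo(moving_mag, reference_mag, moving_channels):
    """Estimate one 3D shift from the magnitudes and apply it to all channels.

    moving_mag, reference_mag: SoS magnitude volumes of the moving/reference echo.
    moving_channels: complex per-channel data of the moving echo, (ncoil, *vol).
    """
    # Subvoxel shift via the Fourier shift theorem (cross-correlation peak).
    shift, _, _ = phase_cross_correlation(
        reference_mag, moving_mag, upsample_factor=10
    )
    # Apply the same shift to every coil channel by multiplying the Fourier
    # transform with the corresponding phase ramp.
    shifted = np.stack(
        [np.fft.ifftn(fourier_shift(np.fft.fftn(ch), shift))
         for ch in moving_channels]
    )
    return shift, shifted
```

The returned shift is the correction that aligns the moving echo with the reference, so applying it once per channel keeps the channels mutually consistent.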
The image below shows the estimated shifts for the readout (blue), phase encoding (orange), and slice encoding (green) directions.
In order to preserve the correct phase information, MCPC-3D-S is employed, as implemented in MriResearchTools.jl. The implementation is written in Julia, is reasonably fast, conveniently distributed as binaries, and, importantly, includes a correction for the phase gradients caused by the bipolar readout. This phase difference between the odd and even echoes is expected to be the same in all channels and is thus removed by the algorithm after the channels are combined (see Korbinian's PhD thesis, page 53, section 3.1.3 Bipolar Corrections).
The implementation is able to output the estimated phase offsets for each individual
channel; however, those will not capture the even-vs-odd echo difference.
In order to access it explicitly (if needed), one would need to run the channel
combination twice, with and without the -b flag, and compare the stored phase data.
Additionally, the implementation requires the individual echoes to be stacked along the last axis, while the coils — along the second to last.
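For illustration, the required stacking could look like this (shapes and names are ours):

```python
import numpy as np

# Hypothetical per-echo, per-coil complex volumes in a nested list
# data[echo][coil], each of shape (nx, ny, nz).
nx = ny = nz = 4
n_coils, n_echoes = 2, 3
data = [[np.ones((nx, ny, nz), dtype=complex) for _ in range(n_coils)]
        for _ in range(n_echoes)]

# Coils go on the second-to-last axis, echoes on the last one.
stacked = np.stack(
    [np.stack(coils, axis=-1) for coils in data],  # (nx, ny, nz, n_coils) each
    axis=-1,                                       # -> (nx, ny, nz, n_coils, n_echoes)
)
```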
Due to the high resolution of the acquisition and the resulting long duration
of each repetition (TODO: mention the TA), the phase of each individual
repetition appears to carry an arbitrary additive phase term, consisting of a
scalar phase offset (TODO: should be discussed).
The two images below illustrate the observed offsets between a few representative repetitions (1 and 9, then 26 and 9). Each odd row shows the computed phase difference between the two repetitions. Each even row shows the same difference after the fitted polynomial phase was removed. Columns show different echo times evolving from left to right.
TODO: fix the explanation below. As can be seen from the figure above, in the case of the first repetition the frequency distribution over the FOV does not appear linear. When such an offset is fitted with a linear model, unconstrained optimisation may fail while attempting to explain the nonlinear frequency inhomogeneities with very steep frequency gradients. This situation can be overcome either by simply omitting the troubled repetition (which in most samples was only the first), by modelling the frequency distribution with a polynomial of a higher order, or, as we chose to do in this work, by regularising the simpler fit using prior information. This approach was implemented as finding the maximum-a-posteriori (MAP) estimate of the parameters by means of stochastic variational inference (SVI). TODO: add references.

While in most samples unconstrained optimisation only failed when fitting the phase offset between the reference and the first repetition, which would, arguably, not require a probabilistic treatment, in one of the samples fitting the phase offsets with polynomials using unconstrained optimisation with an L-BFGS solver failed consistently in a high proportion of the repetitions. We found empirically that the probabilistic model with the priors shown below successfully regularised the fitting.
The exact probabilistic model was specified as follows:
φ0 ~ Normal[-π,π](0, 1) # truncated normal
ω0 ~ Normal(0, 1)
dω ~ Normal([0, 0, 0], [1, 1, 1])
concentration ~ Normal+(10, 3) # Normal truncated to nonnegatives only
phase offset ~ VonMises(φ0 + TE × (ω0 + dω · r), concentration × magn) # likelihood
The VonMises distribution is used for the likelihood to account for the observed wrapping of the phase offset. The likelihood is centred on the predicted phase offset, with its concentration parameter proportional to the magnitude of the signal: the lower the signal in a given voxel, the flatter the likelihood there, and the smaller that voxel's contribution.
The image below illustrates the mean phase evolution within the framed ROI in a few axial slices (columns). Grey lines represent samples from the prior predictive distribution (in this case indicating that the priors do not coerce the fitting in any specific direction); the black line shows the observed (wrapped in the middle row, unwrapped in the bottom row) phase offset evolution over echo times; the orange line shows the predicted phase offset evolution based on the MAP estimate of the parameters.
The model was implemented using numpyro,
a probabilistic programming language implemented in python and leveraging jax
for efficient graph processing and automatic differentiation. TODO: cite both.
The figure below shows the distribution of the coefficients over the repetition number for one of the samples. TODO: Combine all coefficients in one image and add a little discussion to it, just for ourselves.
After all repetitions had their phases referenced to a single selected repetition, it was safe to perform complex averaging of the signals without the possibility of destructive signal interference. This step at last yielded a single MEGRE dataset per sample.
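The referencing and complex averaging amount to the following (a minimal sketch, assuming the fitted per-repetition phase offsets from the step above; names are ours):

```python
import numpy as np

def combine_repetitions(reps, fitted_offsets):
    """Remove each repetition's fitted phase offset, then average.

    reps: (n_rep, ...) complex data; fitted_offsets: (n_rep, ...) in radians.
    """
    referenced = reps * np.exp(-1j * fitted_offsets)
    return referenced.mean(axis=0)
```

Without removing the offsets first, signals with opposing phases would partially cancel; with them removed, the average preserves the full magnitude.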
The susceptibility analysis was performed using QSMxT (cite), an open-source suite for convenient and reproducible susceptibility analysis. As described in the paper on the guidelines, a conventional QSM pipeline involves the following steps:
- Masking
- Background field removal
- Dipole inversion
All of these steps are conveniently implemented in a workflow within QSMxT, and offer both sane predefined defaults and a possibility to configure desirable parameters.
Each echo was unwrapped using ROMEO, a 4D unwrapping algorithm implemented in Julia. The phase in all unwrapped echoes was then scaled by the corresponding TEs and weighted using the magnitude of the corresponding echoes to maximise the SNR of the resulting frequency map, as suggested in cite.
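The TE scaling and magnitude weighting can be sketched as follows (the exact weighting used in the pipeline may differ; names are ours):

```python
import numpy as np

def frequency_map(unwrapped_phase, magnitude, tes):
    """TE-scaled, magnitude-weighted frequency map from multi-echo phase.

    unwrapped_phase, magnitude: (n_echo, ...) arrays; tes: (n_echo,) in seconds.
    Returns the off-resonance frequency in Hz.
    """
    tes = np.asarray(tes).reshape(-1, *([1] * (unwrapped_phase.ndim - 1)))
    freqs = unwrapped_phase / (2 * np.pi * tes)  # per-echo frequency estimate
    weights = magnitude * tes                    # later echoes accrue more phase
    return (weights * freqs).sum(axis=0) / weights.sum(axis=0)
```

Scaling the weights by TE as well as magnitude reflects that later echoes accumulate more phase per Hz and thus estimate the frequency with less noise (up to signal decay).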
Besides performing robust exact phase unwrapping (as opposed to the class of
optimisation-based unwrapping algorithms, which yield an approximation of the true
phase), ROMEO defines a number of quality maps based on the processed phase.
These maps are combined (TODO: how exactly, and which ones) and then thresholded.
The optimal threshold was selected using the Otsu method and then
additionally multiplied by 1.3 to generate a more aggressive mask and by 1.75
for a more coherent mask. The latter was subsequently filled using a Gaussian filter
and morphological hole filling. At last, the two masks were eroded
(TODO: with what kernel?): 2 times for the aggressive mask, which attempts to exclude
the strong sources, and 4 times for the second, filled mask.
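The masking steps could be sketched as below, assuming higher quality values indicate more reliable phase (the smoothing sigma, the re-threshold at 0.5, the erosion kernel, and all names are our assumptions, not the pipeline's exact settings):

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_fill_holes, gaussian_filter
from skimage.filters import threshold_otsu

def make_masks(quality):
    """Two masks from a combined quality map (a sketch)."""
    t = threshold_otsu(quality)
    aggressive = quality > 1.3 * t   # more aggressive: excludes strong sources
    coherent = quality > 1.75 * t    # seed for the more coherent mask
    # Gaussian smoothing plus morphological hole filling yields a filled mask.
    filled = binary_fill_holes(gaussian_filter(coherent.astype(float), 1) > 0.5)
    # Final erosions: 2 iterations for the aggressive mask, 4 for the filled one.
    return (binary_erosion(aggressive, iterations=2),
            binary_erosion(filled, iterations=4))
```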
The off-resonance frequency map, obtained in the echo combination step, was then filtered twice using the two masks described above, in order to remove the field components originating from sources outside of each mask. Since the phase (and thus the frequency map) is not particularly reliable around the strong sources (where the signal is often dephased and voided), processing the phase from the areas without such voids generally yields a more reliable local field (and subsequently QSM). However, for completeness, the fields from the strong sources are still reconstructed thanks to the derived mask with the holes filled. In both cases we used Projection onto Dipole Fields (PDF) (cite) to iteratively identify the field components that would originate only from sources within the mask.
Both sets of local fields, once scaled from Hz to ppm, are concurrently inverted by means of the rapid two-step inversion (RTS). This approach belongs to the family of splitting methods, where the entire k-space is separated into well- and ill-conditioned regions based on the absolute value of the dipole kernel in Fourier space. The coefficients in the well-conditioned part of the Fourier space are taken from an iterative dipole inversion using an LSMR solver. The coefficients in the ill-conditioned part of k-space are then recovered using ADMM, owing to the presence of an overall total variation regularisation.
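The k-space splitting at the heart of RTS can be illustrated with the unit dipole kernel (the threshold value and names below are our assumptions, not the exact RTS settings):

```python
import numpy as np

def dipole_kernel(shape, voxel_size=(1.0, 1.0, 1.0), b0_dir=(0.0, 0.0, 1.0)):
    """Unit dipole kernel D = 1/3 - k_z^2 / |k|^2 in Fourier space."""
    ks = np.meshgrid(
        *[np.fft.fftfreq(n, d) for n, d in zip(shape, voxel_size)],
        indexing="ij")
    k2 = sum(k ** 2 for k in ks)
    kz = sum(k * b for k, b in zip(ks, b0_dir))
    with np.errstate(divide="ignore", invalid="ignore"):
        d = 1.0 / 3.0 - kz ** 2 / k2
    d[k2 == 0] = 0.0  # the k-space origin is undefined; zero it out
    return d

# Split: well-conditioned where |D| is large (solved iteratively with LSMR);
# the remainder near the magic-angle cone is recovered with ADMM + TV.
D = dipole_kernel((32, 32, 32))
well_conditioned = np.abs(D) > 0.1
```

D vanishes on the cone where k_z^2 = |k|^2 / 3, which is exactly where the inversion is ill-conditioned and regularisation takes over.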
The two susceptibility maps produced from the two local fields, which in turn were derived from the two masks, are then combined using the same masks. Most of the susceptibility map is taken from the inversion of the more accurate local field, obtained without the strong sources. The rest is taken from the secondary susceptibility map. The maps are then additionally referenced to the mean of the sample within the masks.
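The final combination and referencing can be sketched as follows (mask and variable names are ours):

```python
import numpy as np

def combine_chi(chi_primary, chi_secondary, mask_primary, mask_filled):
    """Merge the two susceptibility maps and reference to the sample mean.

    chi_primary: inversion of the local field without the strong sources;
    chi_secondary: inversion of the field from the filled mask (a sketch).
    """
    chi = np.where(mask_primary, chi_primary, chi_secondary)
    # Reference to the mean within the filled (whole-sample) mask.
    return np.where(mask_filled, chi - chi[mask_filled].mean(), 0.0)
```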




