Article

The Rician distribution of noisy MRI data

Wiley
Magnetic Resonance in Medicine
Authors: Hákon Gudbjartsson, Samuel Patz

Abstract

The image intensity in magnetic resonance magnitude images in the presence of noise is shown to be governed by a Rician distribution. Low signal intensities (SNR < 2) are therefore biased due to the noise. It is shown how the underlying noise can be estimated from the images and a simple correction scheme is provided to reduce the bias. The noise characteristics in phase images are also studied and shown to be very different from those of the magnitude images. Common to both, however, is that the noise distributions are nearly Gaussian for SNR larger than two.
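
The two central points above (Rician-distributed magnitude data, and a simple correction that reduces the low-SNR bias) can be illustrated numerically. A minimal sketch, assuming the commonly cited form of the correction, Â = sqrt(|M² − σ²|), and illustrative signal and noise values:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0              # Gaussian noise std in each channel (illustrative)
A = 1.5                  # true underlying signal, i.e. SNR = 1.5 < 2 (illustrative)
n = 100_000

# Magnitude of a complex signal with i.i.d. Gaussian noise in both channels is Rician
M = np.abs(A + sigma * rng.standard_normal(n) + 1j * sigma * rng.standard_normal(n))

print("true signal            :", A)
print("mean magnitude (biased):", M.mean())      # noticeably larger than A at low SNR

# Simple bias-reduction scheme: remove the noise power in quadrature
A_hat = np.sqrt(np.abs(M**2 - sigma**2))
print("corrected estimate     :", A_hat.mean())  # closer to A, bias reduced
```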

... Specification of the parameter mean is most naturally performed as a function of the modulus of the Fourier space coordinates. But if the distribution in image space is Gaussian then the distribution over real and imaginary components in Fourier space is Gaussian, leading to the distribution over the modulus being Rician [11]. In most cases a Gaussian likelihood is an appropriate choice, but the Rician is not conjugate to the Gaussian. ...
... As a result, it is assumed that the large number of coefficient values that are less than a predetermined threshold value represent noise and can be zeroed, and the few coefficients that are greater than the threshold represent signal but should be "shrunk" by some amount, also specified by the threshold [5]. Choice of an effective threshold is the primary goal of wavelet shrinkage algorithms [11]. For this paper, thresholds proposed in [6] and [9] are used. ...
... and the prior mean and variance functions defined in Equations (10) and (11) below and illustrated in Figure (2) are applied on scales finer than the critical scale: ...
Preprint
Full-text available
Bayesian image restoration has had a long history of successful application but one of the limitations that has prevented more widespread use is that the methods are generally computationally intensive. The authors recently addressed this issue by developing a method that performs the image enhancement in an orthogonal space (Fourier space in that case) which effectively transforms the problem from a large multivariate optimization problem to a set of smaller independent univariate optimization problems. The current paper extends these methods to analysis in another orthogonal basis, wavelets. While still providing the computational efficiency obtained with the original method in Fourier space, this extension allows more flexibility in adapting to local properties of the images, as well as capitalizing on the long history of developments for wavelet shrinkage methods. In addition, wavelet methods, including empirical Bayes specific methods, have recently been developed to effectively capture multifractal properties of images. An extension of these methods is utilized to enhance the recovery of textural characteristics of the underlying image. These enhancements should be beneficial in characterizing textural differences such as those occurring in medical images of diseased and healthy tissues. The Bayesian framework defined in the space of wavelets provides a flexible model that is easily extended to a variety of imaging contexts.
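
The wavelet-shrinkage idea summarized in the excerpt above (zero coefficients below a threshold, shrink those above it) can be sketched as follows. The db4 wavelet and the universal (VisuShrink) threshold are illustrative assumptions here, not the thresholds proposed in the cited references [6] and [9]:

```python
import numpy as np
import pywt

def soft_shrink_denoise(img, wavelet="db4", levels=3):
    """Wavelet shrinkage sketch: soft-threshold the detail coefficients.

    The universal threshold sigma*sqrt(2*log(N)) is used purely for
    illustration; other threshold choices can be substituted.
    """
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    # Robust noise estimate from the finest-scale diagonal details (MAD / 0.6745)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(img.size))
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(d, thresh, mode="soft") for d in level)
        for level in coeffs[1:]
    ]
    return pywt.waverec2(new_coeffs, wavelet)

rng = np.random.default_rng(0)
clean = np.outer(np.hanning(128), np.hanning(128))       # smooth toy image
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = soft_shrink_denoise(noisy)
```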
... Thermal noise is an inherent property in any object populated by electrons, such as the human body or electronic systems, and hence unavoidable in any Magnetic Resonance Imaging (MRI) experiment (Redpath, 1998). Its statistical properties are well characterised in the complex MRI domain (zero-mean Gaussian with equal variance in real and imaginary channels), as well as in the magnitude image domain for single-channel receiver coils (Rician distribution) (Andersen, 1996; Gudbjartsson and Patz, 1995; Henkelman, 1985). However, noise characterisation is less straightforward with modern MRI acquisitions, as it depends on various factors, including the type of reconstruction, filtering, number of receiver coils and type of acceleration used (Dietrich et al., 2008; Sotiropoulos et al., 2013c). ...
... Whereas noise in the complex domain is zero-mean Gaussian, dMRI signal produced by modern protocols is noncentral-Chi distributed in the general case (Aja-Fernández et al., 2011; Aja-Fernández and Vegas-Sánchez-Ferrero, 2016). This can lead to an elevated noise floor in the magnitude domain (the minimum measurable signal given the noise level) compared to classical MRI experiments where magnitude signal follows a Rician distribution (Gudbjartsson and Patz, 1995; Salvador et al., 2005). As dMRI information is encoded in the signal attenuation, intensities can be as low as the noise floor in a) regions where the signal attenuation is sufficiently high (e.g. ...
Article
Full-text available
Development of diffusion MRI (dMRI) denoising approaches has experienced considerable growth over the last years. As noise can inherently reduce accuracy and precision in measurements, its effects have been well characterised both in terms of uncertainty increase in dMRI-derived features and in terms of biases caused by the noise floor, the smallest measurable signal given the noise level. However, gaps in our knowledge still exist in objectively characterising dMRI denoising approaches in terms of both of these effects and assessing their efficacy. In this work, we reconsider what a denoising method should and should not do and we accordingly define criteria to characterise the performance. We propose a comprehensive set of evaluations, including i) benefits in improving signal quality and reducing noise variance, ii) gains in reducing biases and the noise floor, iii) preservation of spatial resolution, iv) agreement of denoised data against a gold standard, v) gains in downstream parameter estimation (precision and accuracy), vi) efficacy in enabling noise-prone applications, such as ultra-high-resolution imaging. We further provide newly acquired complex datasets (magnitude and phase) with multiple repeats that sample different SNR regimes to highlight performance differences under different scenarios. Without loss of generality, we subsequently apply a number of exemplar patch-based denoising algorithms to these datasets, including Non-Local Means, Marchenko-Pastur PCA (MPPCA) in the magnitude and complex domain and NORDIC, and compare them with respect to the above criteria and against a gold standard complex average of multiple repeats. We demonstrate that all tested denoising approaches reduce noise-related variance, but not always biases from the elevated noise floor. They all induce a spatial resolution penalty, but its extent can vary depending on the method and the implementation. Some denoising approaches agree with the gold standard more than others and we demonstrate challenges in even defining such a standard. Overall, we show that dMRI denoising performed in the complex domain is advantageous to magnitude domain denoising with respect to all the above criteria.
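
As a rough illustration of the noise-floor argument referenced above, the following sketch (with assumed signal and noise levels) contrasts averaging magnitude images with averaging complex data:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, n_rep = 1.0, 64
A = 0.2 * sigma          # heavily attenuated dMRI-like signal, well below the noise level

noise = sigma * (rng.standard_normal(n_rep) + 1j * rng.standard_normal(n_rep))
complex_meas = A + noise

mag_avg = np.abs(complex_meas).mean()    # magnitude average: stuck near the noise floor
cplx_avg = np.abs(complex_meas.mean())   # complex average: noise cancels before the magnitude

print(f"true signal      : {A:.3f}")
print(f"magnitude average: {mag_avg:.3f}")   # ~ sigma*sqrt(pi/2) ~ 1.25, the noise floor
print(f"complex average  : {cplx_avg:.3f}")  # much closer to the true signal
```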
... Hence, inappropriate results may be produced by the clustering methods because spatial contextual information is not considered to handle the noise present. Different types of noise which can possibly corrupt brain MRI are Rician [32,33], Gaussian [34,35], salt & pepper [36,37], uniform [37], and periodic [15] noise. ...
... Figure 11(c) to (h) are the respective segmented results obtained. From Table 3, it is found that the technique KISS outperformed the counterpart techniques, as the segmented result generated by the technique has the least noise present. The proposed technique is also examined considering "salt & pepper" and "Rician" noises [32,33] in the same datasets previously used. The level of "salt & pepper" noise considered is 6%, and that of "Rician" noise 5%. ...
Article
Full-text available
Segmentation of different brain tissues such as white matter (WM), cerebrospinal fluid (CSF), and gray matter (GM) from magnetic resonance images (MRI) is a challenging task as the various tissue regions are complex because of their overlapping, vague, indiscernible, and nonlinearly separable nature. Additionally, MRI often suffers from noise and outliers. Clustering based segmentation performance can be improved significantly by providing some amount of labeled pixels in the clustering process by employing semi-supervision. To cope with the above mentioned challenges, a novel semi-supervised technique called Kernel Induced Semi-supervised Spatial clustering (KISS) is proposed for the segmentation of brain MRI. The method is a judicious integration of (i) the kernel trick to enhance the chances of linear separability of different regions of brain boundaries, (ii) spatial contextual information to handle the noise and outliers, (iii) fuzzy sets to deal with overlapping and uncertainty of different tissue boundaries, (iv) rough sets to cope with the indiscernibility and vagueness of various tissue boundaries and (v) semi-supervision to direct the process of clustering in a better direction by supplying some labeled pixels with a constraint seeded policy. Several benchmark true and synthetic MRI datasets with and without added noise are examined. The efficacy of the proposed technique is validated against three unsupervised and two semi-supervised techniques and evaluated using different validity indices. Improvements achieved by the proposed technique in accuracy against the closest competitive techniques are 1.125%, 0.864%, 0.729% and 1.130% for normal BrainWeb datasets and 3.53%, 2.57%, 2.90% and 4.62% for the same datasets considering 6% added "salt & pepper" noise, whereas they are 0.475%, 0.553%, 0.585% and 0.496% for normal IBSR datasets. Improvements considering "Rician" noise are 1.62% and 1.42% for IBSR datasets 144 and 150 respectively, and 3.73% and 0.61% for BrainWeb datasets 85 and 100 respectively. The paired t-test results statistically signify the better results in support of the proposed technique. The confidence interval test also confirms the superiority of the proposed method over the other counterpart techniques. The proposed technique turned out to be effective for brain MRI segmentation. Therefore, it may be utilized for other medical image segmentations in future.
... It is difficult to denoise an MRI because magnitude images, which consist of real and imaginary parts, are commonly used [3,29]. The noise in the magnitude MR image follows a Rician noise distribution [29], which is significantly more complicated than traditional additive Gaussian noise. ...
... Similarly, output from magnetic resonance imaging (MRI) is corrupted by Gaussian noise at the sensor level; consequently, noise in the magnitude (radial) MRI image is modelled using the Rician distribution [5]. By contrast, images captured in poor illumination or low light suffer from signal-dependent shot noise due to low photon count, which is modelled using the Poisson distribution [6,7]. ...
... Similarly, in synthetic aperture radar (SAR) imaging, noise due to scattering of the reflected electromagnetic waves is modelled as multiplicative speckle noise, which is known to follow a Rayleigh distribution [11,13]. Other examples include MRI imaging, where noise in radial or magnitude MRI is Rician distributed [5]. Consequently, denoising methods are specifically designed for these applications which take into account the dynamics of the noise being suppressed. ...
Thesis
Full-text available
Denoising refers to the process of suppressing noise or artefacts in data. It has become an important preprocessing step in many engineering and scientific applications owing to the fact that noise distorts and obscures useful information within data of interest and therefore affects the accuracy of related signal processing systems. Among existing fully parametric approaches to data denoising, a main obstacle is the requirement to specify the true signal model in addition to the noise model. This information about the true signal model may not be readily available in many practical applications of interest. To that end, we propose a novel denoising framework for multivariate (multiple-channel), multidimensional (2D) and univariate (single-channel) data sets that is fully data-driven and only requires a noise model for its operation. The proposed denoising framework operates in the transform domain and employs empirical distribution function (EDF) based statistics under the umbrella of the statistical goodness of fit (GoF) test. More specifically, a local GoF test using an EDF statistic is employed at multiple data scales to check whether the data coefficients belong to noise or true signal. The coefficients corresponding to noise are discarded while those belonging to signal are retained to obtain a denoised signal. The proposed methods are shown to effectively suppress noise belonging to the additive white Gaussian noise model in multivariate data, images and univariate or single-channel datasets. Moreover, extensions of the proposed denoising framework to cater for nonconventional yet practically important noise models, such as signal-dependent Poisson noise and multiplicative speckle noise, are also presented. One of the main contributions of this thesis is a multivariate denoising method using the GoF test at multiple scales. The GoF test, as explained above, is not directly applicable to multichannel data without compromising its inherent inter-channel correlation structure. In this thesis, we propose to use the Mahalanobis distance (MD) statistic to transform input data residing in the multidimensional space $R^M$ to the single-dimensional space of positive real numbers $R_+$, while preserving the original correlations within the multiple channels of the input data. In addition, due to a one-to-one correspondence between the distributions of the original and MD-transformed data, we propose to perform the GoF test on MD-transformed data at multiple scales to perform the multivariate signal denoising. We give extensive experimental results to demonstrate the superiority of the proposed method against existing multivariate denoising methods on a large class of input signals. We also propose denoising methods for 2D signals (images) and 1D univariate signals using a multiscale GoF test based on an EDF statistic. We demonstrate the effectiveness of the methods on additive white Gaussian noise models in both cases. For the image denoising method, we present applications of the proposed approach in two nonconventional applications involving non-Gaussian noise and/or multiplicative noise models. Those include denoising of images from CMOS/CCD sensors and synthetic aperture radar (SAR) imaging. In the former case, noise is typically modelled as signal-dependent additive Poisson-Gaussian noise, whereas in the latter case, a multiplicative speckle noise model is employed. We show the superiority of the proposed framework over conventional approaches on real data obtained from both these applications.
The outline of the proposed work includes a novel multivariate GoF test based on the Mahalanobis distance to perform multivariate signal denoising, along with the use of the existing GoF test on multiple data scales for signal and image denoising. Additionally, the proposed denoising strategies are fully data-driven and only require the noise model, which is generally known in most applications. Furthermore, the proposed framework is also used to address cases of nonconventional (non-Gaussian) noise in practice.
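
A highly simplified sketch of the multiscale EDF-based goodness-of-fit idea described above, using a Kolmogorov-Smirnov test (one possible EDF statistic) on local windows of 1D wavelet detail coefficients; the window size, wavelet, and significance level are illustrative assumptions, not the thesis' actual settings:

```python
import numpy as np
from scipy.stats import kstest
import pywt

def gof_denoise_1d(x, sigma, wavelet="db4", levels=4, win=32, alpha=0.05):
    """EDF / goodness-of-fit denoising sketch (1D).

    At each scale, local windows of detail coefficients are tested against the
    assumed N(0, sigma^2) noise model (for an orthonormal wavelet, white noise
    keeps the same std at every scale); windows consistent with noise are zeroed.
    """
    coeffs = pywt.wavedec(x, wavelet, level=levels)
    out = [coeffs[0]]
    for d in coeffs[1:]:
        d = d.copy()
        for start in range(0, len(d), win):
            seg = d[start:start + win]
            # Large p-value: segment is indistinguishable from pure noise -> discard
            if kstest(seg / sigma, "norm").pvalue > alpha:
                d[start:start + win] = 0.0
        out.append(d)
    return pywt.waverec(out, wavelet)[: len(x)]

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t) * (t > 0.5)
noisy = clean + 0.3 * rng.standard_normal(t.size)
denoised = gof_denoise_1d(noisy, sigma=0.3)
```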
... For the phantom images in Figure 3, the SNR is calculated as the maximum value in the central 10 × 10 pixel region of the image, divided by the noise determined from the standard deviation in the same region in an empty bore image. A Rician correction scaling factor of 2π²/3 is applied to account for the non-zero mean of the magnitude images [12]. ...
... Sample dilutions ranged from 24 µg Fe (undiluted tracer) to 3.92 ng, decreasing first to 2.0 µg, and then by factors of 2. The reported SNR is the single 5-sec image's mean SNR determined from 60 measurements of each phantom. A Rician to Gaussian correction [12] is applied to account for the non-zero mean of the magnitude images. The signal was linear with concentration as seen by the linear fit with R 2 = 0.99988 and slope of 0.86 SNR/ng. ...
Article
Full-text available
Objective. Non-invasive functional brain imaging modalities are limited in number, each with its own complex trade-offs between sensitivity, spatial and temporal resolution, and the directness with which the measured signals reflect neuronal activation. Magnetic Particle Imaging (MPI) directly maps the cerebral blood volume (CBV), and its high sensitivity derives from the nonlinear magnetization of the superparamagnetic iron oxide nanoparticle (SPION) tracer confined to the blood pool. Our work evaluates functional MPI (fMPI) as a new hemodynamic functional imaging modality by mapping the CBV response in a rodent model where CBV is modulated by hypercapnic breathing manipulation. Approach. The rodent fMPI time-series data were acquired with a mechanically rotating field-free line MPI scanner capable of 5 sec temporal resolution and 3 mm spatial resolution. The rat's CBV was modulated for 30 minutes with alternating 5 min hyper-/hypocapnic states, and processed using conventional fMRI tools. We compare our results to fMRI responses undergoing similar hypercapnia protocols found in the literature, and reinforce this comparison in a study of one rat with 9.4T BOLD fMRI using the identical protocol. Main results. The initial image in the time-series showed mean resting brain voxel SNR values, averaged across rats, of 99.9 following the first 10 mg/kg SPION injection and 134 following the second. The time-series fit a conventional General Linear Model (GLM) with a 15-40% CBV change and a peak pixel CNR between 12 and 29, 2-6x higher than found in fMRI. Significance. This work introduces a functional modality with high sensitivity, although currently limited spatial and temporal resolution. With future clinical-scale development, a large increase in sensitivity could supplement other modalities and help transition functional brain imaging from a neuroscience tool focusing on population averages to a clinically relevant modality capable of detecting differences in individual patients.
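
For context on the background-noise correction mentioned in the excerpts above, here is a small sketch of estimating the underlying Gaussian noise level from a signal-free (Rayleigh-distributed) magnitude ROI using the standard Rayleigh moment relations; the exact scaling factor used in the cited work is not reproduced here, and the numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma_true = 2.0

# Simulated background (air) ROI of a magnitude image: pure Rayleigh noise
bg = np.abs(sigma_true * (rng.standard_normal((10, 10))
                          + 1j * rng.standard_normal((10, 10))))

# Rayleigh relations for signal-free magnitude data:
#   mean = sigma*sqrt(pi/2),   std = sigma*sqrt(2 - pi/2)
sigma_from_mean = bg.mean() / np.sqrt(np.pi / 2)
sigma_from_std = bg.std(ddof=1) / np.sqrt(2 - np.pi / 2)

signal_roi_max = 50.0   # e.g. maximum value in a central 10x10 signal ROI (hypothetical)
print("sigma estimates:", sigma_from_mean, sigma_from_std)
print("SNR:", signal_roi_max / sigma_from_std)
```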
... Thermal noise is an inherent property in any object populated by electrons, such as the human body or electronic systems, and hence unavoidable in any Magnetic Resonance Imaging (MRI) experiment (Redpath, 1998). Its statistical properties are well characterised in the complex MRI domain (zero-mean Gaussian with equal variance in real and imaginary channels), as well as in the magnitude image domain for single-channel receiver coils (Rician distribution) (Gudbjartsson and Patz, 1995; Henkelman, 1985). However, noise characterisation is less straightforward with modern MRI acquisitions, as it depends on various factors, including the type of reconstruction, filtering, number of receiver coils and type of acceleration used (Dietrich et al., 2008a; Sotiropoulos et al., 2013c). ...
... This can lead to an elevated noise floor in the magnitude domain (the minimum measurable signal given the noise level) compared to classical MRI experiments where magnitude signal follows a Rician distribution (Gudbjartsson and Patz, 1995; Salvador et al., 2005). As dMRI information is encoded in the signal attenuation, intensities can be as low as the noise floor in a) regions where the signal attenuation is sufficiently high (e.g. ...
Preprint
Full-text available
Development of diffusion MRI (dMRI) denoising approaches has experienced considerable growth over the last years. As noise can inherently reduce accuracy and precision in measurements, its effects have been well characterised both in terms of uncertainty increase in dMRI-derived features and in terms of biases caused by the noise floor, the smallest measurable signal given the noise level. However, gaps in our knowledge still exist in objectively characterising dMRI denoising approaches in terms of both of these effects and assessing their efficacy. In this work, we reconsider what a denoising method should and should not do and we accordingly define criteria to characterise the performance. We propose a comprehensive set of evaluations, including i) benefits in improving signal quality and reducing uncertainty, ii) gains in reducing biases and the noise floor, iii) preservation of spatial resolution, iv) agreement of denoised data against a gold standard, v) efficacy in enabling noise-prone applications, such as ultra-high resolution imaging. We further provide newly acquired complex datasets (magnitude and phase) with multiple repeats that sample different SNR regimes to highlight performance differences under different scenarios. Without loss of generality, we subsequently apply a number of exemplar patch-based denoising algorithms to these datasets, including Non-Local Means, Marchenko-Pastur PCA (MPPCA) in the magnitude and complex domain and NORDIC, and compare them with respect to the above criteria and against a gold standard complex average of multiple repeats. We demonstrate that all tested denoising approaches reduce noise-related variance, but not always biases from the elevated noise floor. They all induce a spatial resolution penalty, but its extent can vary depending on the method and the implementation. Some denoising approaches agree with the gold standard more than others and we demonstrate challenges in even defining such a standard. Overall, we show that dMRI denoising performed in the complex domain is advantageous to magnitude domain denoising with respect to all the above criteria.
... Gudbjartsson and Patz 16 showed that the phase distribution can be approximated by a normal distribution for a sufficiently large SNR (SNR > 3). Because of the linear transformation from the phase PDF to the temperature PDF given by the PRFS, this normal character is maintained. ...
... Therefore, this study does not evaluate the model's robustness for a varying MRI measurement error realistically. Still, due to the MRI measurement error for the real and imaginary image being Gaussian in nature, 16 it can be expected that the results of this study closely resemble those obtained by a more realistic evaluation. ...
Article
Full-text available
Background Monitoring minimally invasive thermal ablation procedures using magnetic resonance (MR) thermometry allows therapy of tumors even close to critical anatomical structures. Unfortunately, intraoperative monitoring remains challenging due to the necessary accuracy and real‐time capability. One reason for this is the statistical error introduced by MR measurement, which causes the prediction of ablation zones to become inaccurate. Purpose In this work, we derive a probabilistic model for the prediction of ablation zones during thermal ablation procedures based on the thermal damage model CEM43. By integrating the statistical error caused by MR measurement into the conventional prediction, we hope to reduce the amount of falsely classified voxels. Methods The probabilistic CEM43 model is empirically evaluated using a polyacrylamide gel phantom and three in‐vivo pig livers. Results The results show a higher accuracy in three out of four data sets, with a relative difference in Sørensen–Dice coefficient from −3.04% to 3.97% compared to the conventional model. Furthermore, the ablation zones predicted by the probabilistic model show a false positive rate with a relative decrease of 11.89%–30.04% compared to the conventional model. Conclusion The presented probabilistic thermal dose model might help to prevent false classification of voxels within ablation zones. This could potentially result in an increased success rate for MR‐guided thermal ablation procedures. Future work may address additional error sources and a follow‐up study in a more realistic clinical context.
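
A short sketch of the variance propagation implied by the excerpts above: at sufficiently high SNR the phase noise is approximately Gaussian with standard deviation of roughly 1/SNR, and the PRFS relation maps phase linearly to temperature, so the Gaussian character and the scaled standard deviation carry over. The PRFS constants below are typical textbook values assumed purely for illustration:

```python
import numpy as np

# Typical PRFS constants, assumed here for illustration only
gamma = 2 * np.pi * 42.58e6   # proton gyromagnetic ratio [rad/s/T]
alpha = -0.01e-6              # PRF thermal coefficient [1/degC]
B0, TE = 3.0, 0.01            # field strength [T], echo time [s]

# High-SNR regime discussed above: phase noise std ~ 1/SNR [rad]
snr = 10.0
sigma_phase = 1.0 / snr

# PRFS is a linear map from phase to temperature, so the standard deviation simply scales
scale = 1.0 / (gamma * alpha * B0 * TE)        # degC per radian
sigma_temp = abs(scale) * sigma_phase
print(f"phase std {sigma_phase:.3f} rad  ->  temperature std {sigma_temp:.2f} degC")
```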
... The thermal noise that is produced by both the receiver coil and the sample itself is the most significant source of noise in MRI. Rician distributions may be used to provide an accurate description of this noise [21]. ...
Chapter
This book series on Medical Science gives students and doctors from all over the world the opportunity to publish their research work across Preclinical Sciences, Internal Medicine, Surgery and Public Health. The series aims to inspire innovation and promote academic quality through outstanding publications of scientists and doctors. It also provides a premier interdisciplinary platform for researchers, practitioners, and educators to publish the most recent innovations, trends, and concerns as well as practical challenges encountered and solutions adopted in the fields of Medical Science, and a remarkable opportunity for the academic, research and clinical communities to address new challenges, share solutions and discuss future research directions.
... ν1 is randomly selected in [0.5, 0.85] with uniform probability. 4. y is contaminated by noise following a Rice distribution (Rice, 1944; Gudbjartsson and Patz, 1995; Alexander, 2009) with the given SNR. ...
Article
Full-text available
Diffusion-weighted magnetic resonance imaging provides invaluable insights into in-vivo neurological pathways. However, accurate and robust characterization of white matter fibers microstructure remains challenging. Widely used spherical deconvolution algorithms retrieve the fiber Orientation Distribution Function (ODF) by using an estimation of a response function, i.e., the signal arising from individual fascicles within a voxel. In this paper, an algorithm of blind spherical deconvolution is proposed, which only assumes the axial symmetry of the response function instead of its exact knowledge. This algorithm provides a method for estimating the peaks of the ODF in a voxel without any explicit response function, as well as a method for estimating signals associated with the peaks of the ODF, regardless of how those peaks were obtained. The two stages of the algorithm are tested on Monte Carlo simulations, as well as compared to state-of-the-art methods on real in-vivo data for the orientation retrieval task. Although the proposed algorithm was shown to attain lower angular errors than the state-of-the-art constrained spherical deconvolution algorithm on synthetic data, it was outperformed by state-of-the-art spherical deconvolution algorithms on in-vivo data. In conjunction with state-of-the art methods for axon bundles direction estimation, the proposed method showed its potential for the derivation of per-voxel per-direction metrics on synthetic as well as in-vivo data.
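
The Rice-distributed corruption step quoted in the excerpt above can be sketched as follows. Setting the noise level from a b = 0 reference amplitude and the target SNR is an assumed (though common) convention, and the toy attenuation curve is illustrative:

```python
import numpy as np

def add_rician_noise(y, snr, s0=1.0, rng=None):
    """Corrupt a noise-free (real-valued) signal with Rice-distributed noise.

    sigma is derived from the reference amplitude s0 and the target SNR,
    then i.i.d. Gaussian noise is added to real and imaginary channels
    before taking the magnitude.
    """
    rng = rng or np.random.default_rng()
    sigma = s0 / snr
    real = y + sigma * rng.standard_normal(y.shape)
    imag = sigma * rng.standard_normal(y.shape)
    return np.sqrt(real**2 + imag**2)

y = np.exp(-np.linspace(0, 3, 64))          # toy attenuation curve
y_noisy = add_rician_noise(y, snr=20, rng=np.random.default_rng(0))
```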
... which is common in the literature and provides a reasonable approximation as long as the signal-to-noise ratio is not too low (Gudbjartsson and Patz, 1995). Denoting s_l = B_A(s_{l,1}, ..., s_{l,M})^T, S = (s_1, ..., s_L), and parameters ξ^(n) = (ξ_1, ..., ξ_n), m^(n) = (m_1, ..., m_n), combining the forward model (5) and measurement model (6) results in the observed data likelihood: ...
Preprint
Full-text available
Diffusion MRI (dMRI) is the primary imaging modality used to study brain microstructure in vivo. Reliable and computationally efficient parameter inference for common dMRI biophysical models is a challenging inverse problem, due to factors such as variable dimensionalities (reflecting the unknown number of distinct white matter fiber populations in a voxel), low signal-to-noise ratios, and non-linear forward models. These challenges have led many existing methods to use biologically implausible simplified models to stabilize estimation, for instance, assuming shared microstructure across all fiber populations within a voxel. In this work, we introduce a novel sequential method for multi-fiber parameter inference that decomposes the task into a series of manageable subproblems. These subproblems are solved using deep neural networks tailored to problem-specific structure and symmetry, and trained via simulation. The resulting inference procedure is largely amortized, enabling scalable parameter estimation and uncertainty quantification across all model parameters. Simulation studies and real imaging data analysis using the Human Connectome Project (HCP) demonstrate the advantages of our method over standard alternatives. In the case of the standard model of diffusion, our results show that under HCP-like acquisition schemes, estimates for extra-cellular parallel diffusivity are highly uncertain, while those for the intra-cellular volume fraction can be estimated with relatively high precision.
... However, the random noise that magnetic resonance imaging generates during the acquisition process, together with abnormal signals in the raw data, affects the quality of magnetic resonance imaging and medical diagnosis. Noise in the image generally follows the Rician distribution: in both the real and imaginary channels of the k-space data [1] it appears as independent Gaussian noise with equal variance and zero mean, which becomes signal-dependent, Rician-distributed noise in the spatial domain [2]. Hence noise removal is essential for understanding and evaluating MR images. ...
Article
Full-text available
Nuclear magnetic resonance (NMR) signals, a key focus of denoising research, are mainly affected by Rician noise. Restoring clean medical images from signal-dependent Rician noise is a challenging task with great practical significance. In this paper, an energy function based on MAP estimation is proposed, mainly for conditions of low noise, where the noise distribution is approximately Gaussian. The mathematical model is then embedded into the network structure to combine knowledge-driven modelling with network learning. Experiments show that the proposed model uses only simple and lightweight network modules and a small amount of training data, yet achieves a better denoising effect and satisfactory results under the two evaluation indexes.
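
For reference, a sketch of the Rician data-fidelity term and its low-noise Gaussian approximation discussed above; the grid-search "fit" at the end is purely illustrative and not the paper's optimization scheme:

```python
import numpy as np
from scipy.special import i0e

def rician_neg_log_likelihood(u, f, sigma):
    """Rician data term -log p(f | u, sigma), up to constants independent of u.

    Uses log I0(z) = z + log(i0e(z)) (for z >= 0) for numerical stability.
    """
    z = f * u / sigma**2
    return np.sum(u**2 / (2 * sigma**2) - (z + np.log(i0e(z))))

def gaussian_neg_log_likelihood(u, f, sigma):
    """Low-noise / high-SNR Gaussian approximation of the data term."""
    return np.sum((f - u) ** 2) / (2 * sigma**2)

rng = np.random.default_rng(0)
u_true, sigma = 5.0, 1.0
f = np.abs(u_true + sigma * rng.standard_normal(1000)
           + 1j * sigma * rng.standard_normal(1000))

grid = np.linspace(3, 7, 401)
u_rice = grid[np.argmin([rician_neg_log_likelihood(g, f, sigma) for g in grid])]
u_gauss = grid[np.argmin([gaussian_neg_log_likelihood(g, f, sigma) for g in grid])]
print(u_rice, u_gauss)   # Rician ML stays near 5; the Gaussian fit is biased slightly high
```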
... The formulation of the blind deconvolution problem used in previous neural blind deconvolution studies, y = x * k, accounts for bleeding of voxel signal into neighbouring voxels, but it does not account for background noise. As MRI signals contain Rician noise [43], and IVIM MRI using EPI is particularly prone to noise, we modified the formulation of the problem to include a noise term: ...
Article
Full-text available
Objective. To improve intravoxel incoherent motion (IVIM) magnetic resonance imaging quality using a new image denoising technique and model-independent parameterization of the signal versus b-value curve. Approach. IVIM images were acquired for 13 head-and-neck patients prior to radiotherapy. Post-radiotherapy scans were also acquired for five of these patients. Images were denoised prior to parameter fitting using neural blind deconvolution, a method of solving the ill-posed mathematical problem of blind deconvolution using neural networks. The signal decay curve was then quantified in terms of several area under the curve (AUC) parameters. Improvements in image quality were assessed using blind image quality metrics, total variation (TV), and the correlations between parameter changes in parotid glands with radiotherapy dose levels. The validity of blur kernel predictions was assessed by testing the method's ability to recover artificial 'pseudokernels'. AUC parameters were compared with monoexponential, biexponential, and triexponential model parameters in terms of their correlations with dose, contrast-to-noise (CNR) around parotid glands, and relative importance via principal component analysis. Main results. Image denoising improved blind image quality metrics, smoothed the signal versus b-value curve, and strengthened correlations between IVIM parameters and dose levels. Image TV was reduced and parameter CNRs generally increased following denoising. AUC parameters were more correlated with dose and had higher relative importance than exponential model parameters. Significance. IVIM parameters have high variability in the literature and perfusion-related parameters are difficult to interpret. Describing the signal versus b-value curve with model-independent parameters like the AUC and preprocessing images with denoising techniques could potentially benefit IVIM image parameterization in terms of reproducibility and functional utility.
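
A toy sketch of quantifying the signal-versus-b-value curve by areas under the curve, as described above. The b-values, the toy biexponential decay, and the split into a low-b sub-range are illustrative assumptions, not the AUC parameters defined in the paper:

```python
import numpy as np

b = np.array([0., 20., 50., 100., 200., 400., 800.])       # s/mm^2 (illustrative)
s = 0.9 * np.exp(-0.0015 * b) + 0.1 * np.exp(-0.05 * b)    # toy IVIM-like decay
s = s / s[0]                                                # normalise to the b = 0 signal

def trapz_auc(x, y):
    """Plain trapezoidal area under the curve."""
    return float(np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2.0))

auc_total = trapz_auc(b, s)
low = b <= 200
auc_low_b = trapz_auc(b[low], s[low])   # e.g. a perfusion-sensitive low-b sub-range
print(auc_total, auc_low_b)
```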
... Unlike other modalities where Gaussian noise is common, MRI intensities at low signal-to-noise ratios align more closely with a Rician distribution. This normalization method helps achieve a realistic representation of tissue contrast, ensuring consistent and comparable intensity values across scans [17]. ...
Article
Full-text available
Introduction Accurate diagnosis and treatment of kidney tumors greatly benefit from automated solutions for detection and classification on MRI. In this study, we explore the application of a deep learning algorithm, YOLOv7, for detecting kidney tumors on contrast-enhanced MRI. Material and methods We assessed the performance of YOLOv7 tumor detection on excretory phase MRIs in a large institutional cohort of patients with RCC. Tumors were segmented on MRI using ITK-SNAP and converted to bounding boxes. The cohort was randomly divided into ten benchmarks for training and testing the YOLOv7 algorithm. The model was evaluated using both 2-dimensional and a novel in-house developed 2.5-dimensional approach. Performance measures included F1, Positive Predictive Value (PPV), Sensitivity, F1 curve, PPV-Sensitivity curve, Intersection over Union (IoU), and mean average PPV (mAP). Results A total of 326 patients with 1034 tumors with 7 different pathologies were analyzed across ten benchmarks. The average 2D evaluation results were as follows: Positive Predictive Value (PPV) of 0.69 ± 0.05, sensitivity of 0.39 ± 0.02, and F1 score of 0.43 ± 0.03. For the 2.5D evaluation, the average results included a PPV of 0.72 ± 0.06, sensitivity of 0.61 ± 0.06, and F1 score of 0.66 ± 0.04. The best model performance demonstrated a 2.5D PPV of 0.75, sensitivity of 0.69, and F1 score of 0.72. Conclusion Using computer vision for tumor identification is a cutting-edge and rapidly expanding subject. In this work, we showed that YOLOv7 can be utilized in the detection of kidney cancers.
... Thirdly, background noise measurements in vivo are very challenging and might be the source of the large variances in the SNR and CNR estimates in this study. Additional issues include the Rician noise distribution in modulus data [18]. Given this, we selected three consecutive levels without any obvious motion artifacts, drew ROIs in the pharyngeal cavity as large as possible, and used the average. ...
Article
Full-text available
Background To investigate the feasibility of the simultaneous multi-slice (SMS) accelerated acquisition technique in 2D RARE/turbo spin echo (TSE) sequences with special purpose coils in head and neck imaging. Methods Thirty-six healthy volunteers and eight patients with head and neck squamous cell carcinoma were recruited in this prospective study. The MR protocols included 2D T1/T2-weighted TSE sequences, both with the SMS (T1/T2w-SMS-TSE) technique and without it (T1/T2w-TSE). A pair of special purpose coils was additionally used in a sub-group of participants. Subjective image quality scores and quantitative values of the images, including signal-to-noise ratios (SNRs) and contrast-to-noise ratios (CNRs), were evaluated. Results The acquisition time was 1:15 min and 1:09 min for T1w and T2w-SMS-TSE, with a respective 39% and 47% reduction compared to the conventional sequences. The image quality of all the images was scored as equivalent. Although the SMS technique seemed to slightly reduce the SNRs of T1/T2w images and the CNRs of T1w images, the additional use of special purpose coils further increased the SNRs and CNRs in most cases. Focal lesions of the eight HNSCC patients were finely delineated in all the protocols with similar and good diagnostic value. Conclusions Combining SMS-TSE imaging with special purpose coils is a feasible and effective approach to reduce the total acquisition time and obtain equivalent image quality and better SNRs and CNRs in most cases, as compared to the conventional T1/T2w-TSE sequence in head and neck imaging.
... Diffusion MRI preprocessing included the following steps: (i) Signal debiasing utilizing the acquired noise map (Gudbjartsson and Patz, 1995;St-Jean et al., 2020). (ii) MP PCA Denoising (Cordero-Grande et al., 2019). ...
Article
Full-text available
To decipher the evolution of the hominoid brain and its functions, it is essential to conduct comparative studies in primates, including our closest living relatives. However, strong ethical concerns preclude in vivo neuroimaging of great apes. We propose a responsible and multidisciplinary alternative approach that links behavior to brain anatomy in non-human primates from diverse ecological backgrounds. The brains of primates observed in the wild or in captivity are extracted and fixed shortly after natural death, and then studied using advanced MRI neuroimaging and histology to reveal macro- and microstructures. By linking detailed neuroanatomy with observed behavior within and across primate species, our approach provides new perspectives on brain evolution. Combined with endocranial brain imprints extracted from computed tomographic scans of the skulls, these data provide a framework for decoding evolutionary changes in hominin fossils. This approach is poised to become a key resource for investigating the evolution and functional differentiation of hominoid brains.
... The literature reports many state-of-the-art methods for the removal of spurious noise information from the MR data for correct diagnosis of the disease. 7,8 An underlying Rician model was considered by Nowak and Pizurica 9 for removing the spurious Rician noise using the wavelet-based technique. The diffusion coefficients of the Perona Malik filter were combined with Rician data attachment by Basu et al. for the removal of Rician Noise. 10 Scalar Rician Noise-Reducing Anisotropic Diffusion (SRNRAD) and Oriented Rician Noise-Reducing Anisotropic Diffusion (ORNRAD) were proposed by Krissian and Aja-Fernandez for reducing the contents of Rician distributed noise. ...
Article
Full-text available
Magnetic Resonance Imaging (MRI) provides detailed information about soft tissues, which is essential for disease analysis. However, the presence of Rician noise in MR images introduces uncertainties that challenge medical practitioners during analysis. The objective of our research paper is to introduce an innovative dual-channel deep learning (DL) model designed to effectively denoize MR images. The methodology of this model integrates two distinct pathways, each equipped with unique normalization and activation techniques, facilitating the creation of a wide range of image features. Specifically, we employ Group Normalization in combination with Parametric Rectified Linear Units (PRELU) and Local Response Normalizations (LRN) alongside Scaled Exponential Linear Units (SELU) within both channels of our denoizing network. The outcomes of our proposed network exhibit clinical relevance, empowering medical professionals to conduct more efficient disease analysis. When evaluated by experienced radiologists, our results were deemed satisfactory. The network achieved a noteworthy improvement in performance metrics without requiring retraining. Specifically, there was a (6.65 ± 0.03)% enhancement in Peak Signal-to-Noise Ratio (PSNR) values and a (3.9 ± 0.02)% improvement in Structural Similarity Index (SSIM) values. Furthermore, when evaluated on the dataset on which the network was initially trained, the increase in PSNR and SSIM values was even more pronounced, with a (7.5 ± 0.02)% improvement in PSNR and a (4.3 ± 0.01)% enhancement in SSIM. Evaluation metrics, such as SSIM and PSNR, demonstrated a notable enhancement in the results obtained using our network. The statistical significance of our findings is evident, with p-values consistently less than 0.05 (p < 0.05). Importantly, our network demonstrates exceptional generalizability, as it performs remarkably well on different datasets without the need for retraining.
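
A minimal PyTorch sketch of the dual-channel idea described above (GroupNorm with PReLU in one path, Local Response Normalization with SELU in the other); the channel counts, kernel sizes, depth, and fusion layer are illustrative assumptions, not the published architecture:

```python
import torch
import torch.nn as nn

class DualPathBlock(nn.Module):
    """Two parallel paths with different normalization/activation pairs, then fused."""

    def __init__(self, ch=32):
        super().__init__()
        self.path_a = nn.Sequential(                       # GroupNorm + PReLU path
            nn.Conv2d(1, ch, 3, padding=1), nn.GroupNorm(4, ch), nn.PReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.GroupNorm(4, ch), nn.PReLU(),
        )
        self.path_b = nn.Sequential(                       # LRN + SELU path
            nn.Conv2d(1, ch, 3, padding=1), nn.LocalResponseNorm(5), nn.SELU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.LocalResponseNorm(5), nn.SELU(),
        )
        self.fuse = nn.Conv2d(2 * ch, 1, 1)                # fuse both feature sets

    def forward(self, x):
        return self.fuse(torch.cat([self.path_a(x), self.path_b(x)], dim=1))

net = DualPathBlock()
out = net(torch.randn(1, 1, 64, 64))   # toy single-channel magnitude patch
```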
... Fig. 6 shows two representative patients randomly selected from low and high-risk groups as determined using the WHOG entropy feature. Robustness of features to noise: We tested the robustness of the extracted features to noise and the impact of added noise in measuring the association with OS by adding zero-centered Gaussian noise as well as Rician noise [40], [41]. WHOG features showed less variability measured using coefficient of variation for the different noise levels. ...
Article
Full-text available
Directionally sensitive radiomic features including the histogram of oriented gradient (HOG) have been shown to provide objective and quantitative measures for predicting disease outcomes in multiple cancers. However, radiomic features are sensitive to imaging variabilities including acquisition differences, imaging artifacts and noise, making them impractical for use in the clinic to inform patient care. We treat the problem of extracting robust local directionality features by mapping via optimal transport a given local image patch to an iso-intense patch of its mean. We decompose the transport map into sub-work costs each transporting in different directions. To test our approach, we evaluated the ability of the proposed approach to quantify tumor heterogeneity from magnetic resonance imaging (MRI) scans of brain glioblastoma multiforme, computed tomography (CT) scans of head and neck squamous cell carcinoma as well as longitudinal CT scans in lung cancer patients treated with immunotherapy. By considering the entropy difference of the extracted local directionality within tumor regions, we found that patients with higher entropy in their images had significantly worse overall survival for all three datasets, which indicates that tumors that have images exhibiting flows in many directions may be more malignant. This may reflect high tumor histologic grade or disorganization. Furthermore, by comparing the changes in entropy longitudinally using two imaging time points, we found patients with a reduction in entropy from baseline CT are associated with longer overall survival (hazard ratio = 1.95, 95% confidence interval of 1.4-2.8, p = 1.65e-5). The proposed method provides a robust, training-free approach to quantify the local directionality contained in images.
... This straightforward approach may be biased by the rectified noise floor at low SNR values. 29,30 We further note that the TR was kept the same for both the MAGNUS DWI and whole-body DWI in this study. Future studies may boost SNR in MAGNUS DWI by using more image averaging and a shorter TR without lengthening scan time. ...
Article
Full-text available
Purpose To demonstrate the technical feasibility and the value of ultrahigh‐performance gradient in imaging the prostate in a 3T MRI system. Methods In this local institutional review board–approved study, prostate MRI was performed on 4 healthy men. Each subject was scanned in a prototype 3T MRI system with a 42‐cm inner‐diameter gradient coil that achieves a maximum gradient amplitude of 200 mT/m and slew rate of 500 T/m/s. PI‐RADS V2.1–compliant axial T2‐weighted anatomical imaging and single‐shot echo planar DWI at standard gradient of 70 mT/m and 150 T/m/s were obtained, followed by DWI at maximum performance (i.e., 200 mT/m and 500 T/m/s). In comparison to state‐of‐the‐art clinical whole‐body MRI systems, the high slew rate improved echo spacing from 1020 to 596 μs and, together with a high gradient amplitude for diffusion encoding, TE was reduced from 55 to 36 ms. Results In all 4 subjects (waist circumference = 81–91 cm, age = 45–65 years), no peripheral nerve stimulation sensation was reported during DWI. Reduced image distortion in the posterior peripheral zone prostate gland and higher signal intensity, such as in the surrounding muscle of high‐gradient DWI, were noted. Conclusion Human prostate MRI at simultaneously high gradient amplitude of 200 mT/m and slew rate of 500 T/m/s is feasible, demonstrating that improved gradient performance can address image distortion and T2 decay–induced SNR issues for in vivo prostate imaging.
... The relevant entities such as lung, bones, blood pool, myocardium, etc. were segmented on a typical cardiac cine 4CH MRI dataset (case 4 in Table 1, full MRI properties in Table 3). The intensity distribution of each entity was analyzed with both Gaussian and Rician distribution fitting ( Fig. 3) [30]. Given the minimal differences between the fitted distributions (mean square distances 0.05 and 0.21 for blood pool and myocardium, respectively), we decided to use the Gaussian for the simulation. ...
Article
Full-text available
Purpose Numerical phantom methods are widely used in the development of medical imaging methods. They enable quantitative evaluation and direct comparison with controlled and known ground truth information. Cardiac magnetic resonance has the potential for a comprehensive evaluation of the mitral valve (MV). The goal of this work is the development of a numerical simulation framework that supports the investigation of MRI imaging strategies for the mitral valve. Methods We present a pipeline for synthetic image generation based on the combination of individual anatomical 3D models with a position-based dynamics simulation of the mitral valve closure. The corresponding images are generated using modality-specific intensity models and spatiotemporal sampling concepts. We test the applicability in the context of MRI imaging strategies for the assessment of the mitral valve. Synthetic images are generated with different strategies regarding image orientation (SAX and rLAX) and spatial sampling density. Results The suitability of the imaging strategy is evaluated by comparing MV segmentations against ground truth annotations. The generated synthetic images were compared to ones acquired with similar parameters, and the result is promising. The quantitative analysis of annotation results suggests that the rLAX sampling strategy is preferable for MV assessment, reaching accuracy values that are comparable to or even outperform literature values. Conclusion The proposed approach provides a valuable tool for the evaluation and optimization of cardiac valve image acquisition. Its application to the use case identifies the radial image sampling strategy as the most suitable for MV assessment through MRI.
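
The Gaussian-versus-Rician intensity fitting mentioned in the excerpt above can be sketched with SciPy as follows; the synthetic "blood-pool-like" intensities and the grid-based comparison of the fitted densities are assumptions for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
nu, sigma = 40.0, 10.0   # illustrative blood-pool-like intensity and noise level
samples = np.abs(nu + sigma * rng.standard_normal(5000)
                 + 1j * sigma * rng.standard_normal(5000))

# Fit both candidate models (location fixed at 0 for the Rician)
b, loc_r, scale_r = stats.rice.fit(samples, floc=0)
mu_g, std_g = stats.norm.fit(samples)

# Compare the fitted densities on a common grid (a rough analogue of a mean-square distance)
x = np.linspace(samples.min(), samples.max(), 200)
msd = np.mean((stats.rice.pdf(x, b, loc_r, scale_r) - stats.norm.pdf(x, mu_g, std_g)) ** 2)
print(f"Rician: nu~{b * scale_r:.1f}, sigma~{scale_r:.1f};  Gaussian: mu={mu_g:.1f}, sd={std_g:.1f}")
print(f"mean-square distance between fitted PDFs: {msd:.2e}")
```

At this signal-to-noise level the two fits are close, consistent with the minimal differences reported in the excerpt.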
... Thus, three additional testing datasets are used along with the traditional testing data (Site A - Clean). Site A Noisy - MRI noise may be approximated as Gaussian for a signal-to-noise ratio (SNR) greater than 2 [9]. Therefore, this work simulates noisy MRI images using Gaussian noise of 0 mean with a variance of 0.01. ...
Preprint
Full-text available
Out-of-distribution (OOD) generalization poses a serious challenge for modern deep learning (DL). OOD data consists of test data that is significantly different from the model's training data. DL models that perform well on in-domain test data could struggle on OOD data. Overcoming this discrepancy is essential to the reliable deployment of DL. Proper model calibration decreases the number of spurious connections that are made between model features and class outputs. Hence, calibrated DL can improve OOD generalization by only learning features that are truly indicative of the respective classes. Previous work proposed domain-aware model calibration (DOMINO) to improve DL calibration, but it lacks designs for model generalizability to OOD data. In this work, we propose DOMINO++, a dual-guidance and dynamic domain-aware loss regularization focused on OOD generalizability. DOMINO++ integrates expert-guided and data-guided knowledge in its regularization. Unlike DOMINO which imposed a fixed scaling and regularization rate, DOMINO++ designs a dynamic scaling factor and an adaptive regularization rate. Comprehensive evaluations compare DOMINO++ with DOMINO and the baseline model for head tissue segmentation from magnetic resonance images (MRIs) on OOD data. The OOD data consists of synthetic noisy and rotated datasets, as well as real data using a different MRI scanner from a separate site. DOMINO++'s superior performance demonstrates its potential to improve the trustworthy deployment of DL on real clinical data.
... The nine diffusion-T 1 -T 2 4D volumes with different TEs and TIs, and the four diffusion-T 2 4D volumes with different TEs were preprocessed separately in the following order: (1) noise level estimation and removal using the MP-PCA method (Veraart et al., 2016) by using the matrix centering and patch-based aggregation options (Manjon et al., 2013), as implemented in dipy (Garyfallidis et al., 2014) 1 ; (2) attenuation of the Rician-noise dependent bias in the signal by implementing the postprocessing correction scheme proposed by Gudbjartsson and Patz (1995) and (3) motion, geometric distortions, and eddy current corrections using the "topup" and "eddy" tools included in FSL (Andersson et al., 2003;Andersson and Sotiropoulos, 2016). ...
Article
Full-text available
Axon radius is a potential biomarker for brain diseases and a crucial tissue microstructure parameter that determines the speed of action potentials. Diffusion MRI (dMRI) allows non-invasive estimation of axon radius, but accurately estimating the radius of axons in the human brain is challenging. Most axons in the brain have a radius below one micrometer, which falls below the sensitivity limit of dMRI signals even when using the most advanced human MRI scanners. Therefore, new MRI methods that are sensitive to small axon radii are needed. In this proof-of-concept investigation, we examine whether a surface-based axonal relaxation process could mediate a relationship between intra-axonal T2 and T1 times and inner axon radius, as measured using postmortem histology. A unique in vivo human diffusion-T1-T2 relaxation dataset was acquired on a 3T MRI scanner with ultra-strong diffusion gradients, using a strong diffusion-weighting (i.e., b = 6,000 s/mm²) and multiple inversion and echo times. A second reduced diffusion-T2 dataset was collected at various echo times to evaluate the model further. The intra-axonal relaxation times were estimated by fitting a diffusion-relaxation model to the orientation-averaged spherical mean signals. Our analysis revealed that the proposed surface-based relaxation model effectively explains the relationship between the estimated relaxation times and the histological axon radius measured in various corpus callosum regions. Using these histological values, we developed a novel calibration approach to predict axon radius in other areas of the corpus callosum. Notably, the predicted radii and those determined from histological measurements were in close agreement.
... Background noise: DW images are affected by background noise following either a Rician (Gudbjartsson and Patz, 1995) or a chi-squared distribution (Luisier et al., 2012) and more often than not present quite poor signal-to-noise ratio. Some patch-based algorithms (Manjón et al., 2008) enable the suppression of such types of noise but require long execution times and vast amounts of memory space, which can be cumbersome given the large size of diffusion volumes. ...
Article
Full-text available
The lack of “gold standards” in Diffusion Weighted Imaging (DWI) makes validation cumbersome. To tackle this task, studies use translational analysis where results in humans are benchmarked against findings in other species. Non-Human Primates (NHP) are particularly interesting for this, as their cytoarchitecture is closely related to humans. However, tools used for processing and analysis must be adapted and finely tuned to work well on NHP images. Here, we propose versaFlow, a modular pipeline implemented in Nextflow, designed for robustness and scalability. The pipeline is tailored to in vivo NHP DWI at any spatial resolution; it allows for maintainability and customization. Processes and workflows are implemented using cutting-edge and state-of-the-art Magnetic Resonance Imaging (MRI) processing technologies and diffusion modeling algorithms, namely Diffusion Tensor Imaging (DTI), Constrained Spherical Deconvolution (CSD), and DIstribution of Anisotropic MicrOstructural eNvironments in Diffusion-compartment imaging (DIAMOND). Using versaFlow, we provide an in-depth study of the variability of diffusion metrics computed on 32 subjects from 3 sites of the Primate Data Exchange (PRIME-DE), which contains anatomical T1-weighted (T1w) and T2-weighted (T2w) images, functional MRI (fMRI), and DWI of NHP brains. This dataset includes images acquired over a range of resolutions, using single and multi-shell gradient samplings, on multiple scanner vendors. We perform a reproducibility study of the processing of versaFlow using the Aix-Marseilles site's data, to ensure that our implementation has minimal impact on the variability observed in subsequent analyses. We report very high reproducibility for the majority of metrics; only gamma distribution parameters of DIAMOND display less reproducible behaviors, due to the absence of a mechanism to enforce a random number seed in the software we used. This should be taken into consideration when future applications are performed. We show that the PRIME-DE diffusion data exhibits a great level of variability, similar or greater than results obtained in human studies. Its usage should be done carefully to prevent instilling uncertainty in statistical analyses. This hints at a need for sufficient harmonization in acquisition protocols and for the development of robust algorithms capable of managing the variability induced in imaging due to differences in scanner models and/or vendors.
... where u is the clean image of size m × n, η_R and η_I are Gaussian noise with zero mean and standard deviation σ, and f is the observed noisy image. According to Gudbjartsson and Patz [1], the probability density function (PDF) of f is ...
Article
Restoring images corrupted by Rician noise is a challenging issue in medical image processing. Among existing methods, the model-driven methods cannot recover the images well, and the learning-based methods lack good interpretability. In this paper, we propose a plug-and-play (PnP) method to remove Rician noise. Due to the statistical properties of the Rician distribution and the implicit deep image priors, the problem is non-convex. We present a convergent PnP method to address these issues by an adaptively relaxed alternating direction method of multipliers. Theoretically, we give some useful mathematical properties and the global linear convergence of the proposed method under an adaptive relaxation strategy. Experimental results show that the proposed method outperforms existing state-of-the-art traditional and learning-based methods.
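
For reference, the standard Rician probability density referred to in the excerpt above, for observed magnitude f, clean intensity u, and noise level σ:

```latex
p(f \mid u, \sigma) = \frac{f}{\sigma^{2}}
\exp\!\left(-\frac{f^{2}+u^{2}}{2\sigma^{2}}\right)
I_{0}\!\left(\frac{f\,u}{\sigma^{2}}\right), \qquad f \ge 0,
```

where I_0 is the modified Bessel function of the first kind of order zero; for u = 0 it reduces to the Rayleigh distribution, and for large u/σ it approaches a Gaussian, consistent with the low-noise approximation used above.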
... Data augmentation was performed by adding Gaussian noise to the velocity data while keeping corresponding pressure intact. Velocity noise in MRI was considered to follow a Gaussian random noise distribution 27 ; therefore, random Gaussian noise with SD starting from 0.01 cm/s to 0.3 cm/s with a 0.01 cm/s increment was added to create augmented training samples. Three inputs to the network consisted of training matrices of size 20 × 16 for 20 axial slices and 16 timepoints. ...
Article
Full-text available
Purpose To estimate relative transvalvular pressure gradient (TVPG) noninvasively from 4D flow MRI. Methods A novel deep learning–based approach is proposed to estimate pressure gradient across stenosis from four‐dimensional flow MRI (4D flow MRI) velocities. A deep neural network, the 4D flow Velocity‐to‐Pressure Network (4Dflow‐VP‐Net), was trained to learn the spatiotemporal relationship between velocities and pressure in stenotic vessels. Training data were simulated by computational fluid dynamics (CFD) for different pulsatile flow conditions under an aortic flow waveform. The network was tested to predict pressure from CFD‐simulated velocity data, in vitro 4D flow MRI data, and in vivo 4D flow MRI data of patients with both moderate and severe aortic stenosis. TVPG derived from 4Dflow‐VP‐Net was compared to catheter‐based pressure measurements for available flow rates, in vitro, and Doppler echocardiography–based pressure measurement, in vivo. Results Relative pressures calculated by 4Dflow‐VP‐Net and in vitro pressure catheterization revealed strong correlation (r² = 0.91). Correlation analysis of TVPG from reference CFD and 4Dflow‐VP‐Net for 450 simulated flow conditions showed strong correlation (r² = 0.99). TVPG from in vitro MRI had a correlation coefficient of r² = 0.98 with reference CFD. 4Dflow‐VP‐Net, applied to 4D flow MRI in 16 patients, showed comparable TVPG measurement with Doppler echocardiography (r² = 0.85). Bland–Altman analysis of TVPG measurements showed mean bias and limits of agreement of −0.20 ± 2.07 mmHg and 0.19 ± 0.45 mmHg for CFD‐simulated velocities and in vitro 4D flow velocities. In patients, overestimation of Doppler echocardiography relative to TVPG from 4Dflow‐VP‐Net (10.99 ± 6.77 mmHg) was observed. Conclusion The proposed approach can predict relative pressure in both in vitro and in vivo 4D flow MRI of aortic stenotic patients with high fidelity.
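
The augmentation sweep described in the excerpt above (Gaussian velocity noise with standard deviations from 0.01 to 0.3 cm/s in 0.01 cm/s steps, with the paired pressure target left untouched) amounts to something like the following sketch; the 20 x 16 array size is taken from the excerpt, everything else is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
velocity = rng.standard_normal((20, 16)).astype(np.float32)   # toy 20 slices x 16 timepoints [cm/s]

# One augmented copy per noise level; the corresponding pressure labels are reused unchanged
noise_sds = np.linspace(0.01, 0.30, 30)                       # cm/s
augmented = [velocity + sd * rng.standard_normal(velocity.shape).astype(np.float32)
             for sd in noise_sds]
print(len(augmented), augmented[0].shape)                     # 30 augmented copies per sample
```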
... MRI noise. Voxel intensities in noisy magnitude MR images follow the Rician distribution (Gudbjartsson and Patz, 1995;Rice, 1944;Cárdenas-Blanco et al., 2008). This phenomenon is the result of magnitude image formation from complex data with independent and identically distributed Gaussian noise in each channel. ...
Article
Full-text available
Deep artificial neural networks (DNNs) have moved to the forefront of medical image analysis due to their success in classification, segmentation, and detection challenges. A principal challenge in large-scale deployment of DNNs in neuroimage analysis is the potential for shifts in signal-to-noise ratio, contrast, resolution, and presence of artifacts from site to site due to variances in scanners and acquisition protocols. DNNs are famously susceptible to these distribution shifts in computer vision. Currently, there are no benchmarking platforms or frameworks to assess the robustness of new and existing models to specific distribution shifts in MRI, and accessible multi-site benchmarking datasets are still scarce or task-specific. To address these limitations, we propose ROOD-MRI: a novel platform for benchmarking the Robustness of DNNs to Out-Of-Distribution (OOD) data, corruptions, and artifacts in MRI. This flexible platform provides modules for generating benchmarking datasets using transforms that model distribution shifts in MRI, implementations of newly derived benchmarking metrics for image segmentation, and examples for using the methodology with new models and tasks. We apply our methodology to hippocampus, ventricle, and white matter hyperintensity segmentation in several large studies, providing the hippocampus dataset as a publicly available benchmark. By evaluating modern DNNs on these datasets, we demonstrate that they are highly susceptible to distribution shifts and corruptions in MRI. We show that while data augmentation strategies can substantially improve robustness to OOD data for anatomical segmentation tasks, modern DNNs using augmentation still lack robustness in more challenging lesion-based segmentation tasks. We finally benchmark U-Nets and vision transformers, finding robustness susceptibility to particular classes of transforms across architectures. The presented open-source platform enables generating new benchmarking datasets and comparing across models to study model design that results in improved robustness to OOD data and corruptions in MRI.
... Another important avenue for future research involves investigating the effect of the distribution of the measurement errors. While the additive normal model (12) for the noise is widely used in the statistical analysis of diffusion MRI and has been found to be quite reliable when the SNR > 2 (Gudbjartsson and Patz, 1995), it is often only an approximation to the true Rician/non-central χ² noise distribution. It has been shown that some least-squares based estimates result in an SNR-dependent, non-vanishing bias of parameter estimates (Polzehl and Tabelow, 2016). ...
Preprint
Inferring brain connectivity and structure in vivo requires accurate estimation of the orientation distribution function (ODF), which encodes key local tissue properties. However, estimating the ODF from diffusion MRI (dMRI) signals is a challenging inverse problem due to obstacles such as significant noise, high-dimensional parameter spaces, and sparse angular measurements. In this paper, we address these challenges by proposing a novel deep-learning based methodology for continuous estimation and uncertainty quantification of the spatially varying ODF field. We use a neural field (NF) to parameterize a random series representation of the latent ODFs, implicitly modeling the often ignored but valuable spatial correlation structures in the data, and thereby improving efficiency in sparse and noisy regimes. An analytic approximation to the posterior predictive distribution is derived which can be used to quantify the uncertainty in the ODF estimate at any spatial location, avoiding the need for expensive resampling-based approaches that are typically employed for this purpose. We present empirical evaluations on both synthetic and real in-vivo diffusion data, demonstrating the advantages of our method over existing approaches.
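The citation excerpt preceding this preprint notes that the additive Gaussian model is reliable for SNR > 2 but is only an approximation to the true Rician distribution. The short SciPy sketch below compares the exact Rician density with the Gaussian approximation of mean √(A² + σ²) described in the reviewed article at two SNR levels; the specific numbers are illustrative only.

```python
import numpy as np
from scipy.stats import rice, norm

sigma = 1.0
for snr in (1.0, 3.0):                 # SNR = A / sigma
    A = snr * sigma
    x = np.linspace(0.0, A + 5.0 * sigma, 1000)
    rician = rice.pdf(x, b=A / sigma, scale=sigma)
    # Gaussian approximation: mean ~ sqrt(A^2 + sigma^2), std ~ sigma.
    gauss = norm.pdf(x, loc=np.sqrt(A**2 + sigma**2), scale=sigma)
    print(f"SNR={snr}: max |Rician - Gaussian| = {np.max(np.abs(rician - gauss)):.3f}")
# The discrepancy between the two densities shrinks markedly once SNR exceeds ~2,
# consistent with the excerpt above.
```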
... Gaussian noise is the most common type of noise in MRI images and is considered an inseparable part of these images [3]. Many methods based on pre- and post-processing have tried to reduce the noise of MRA images. ...
Article
Full-text available
Magnetic resonance imaging (MRI), notwithstanding the vital information it provides, introduces issues in diagnostic work due to inherent noise. The poor signal-to-noise ratio (SNR) in MRI images demonstrates the importance of post-processing procedures. There are numerous strategies for reducing noise, but there is still a need for a solution that has both high accuracy and a fast convergence speed. In this study, we present a total variation (TV) method for noise reduction in MRI images utilizing a modified Barzilai–Borwein method (MBBM) algorithm. The proposed method is tested on three noisy and blurred images (IXI data set, Section 3.1). The findings reveal that the reconstructed images have less noise and sharper edges. In comparison to similar existing iterative methods, the gradient projection method (GP) and the Barzilai–Borwein method (BBM), the proposed method achieves higher accuracy in a substantially smaller number of iterations. The MBBM technique enhanced the quality metrics peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) for the first data set (data 2a). For instance, the PSNR and SSIM for noise level 0.06 improved from 2.62 and 0.22 to 33.12 and 0.90, respectively. This improvement in the rating criteria was also seen in the next two experiments (data 2b and data 2c). When compared to similar methods, experimental results reveal that the suggested method is faster, more precise, more successful at preserving edges, and less computationally demanding. Consequently, the presented method is effective and reliable both quantitatively and qualitatively.
... Normally distributed noise was added, rather than Rician noise, since the Rice distribution approaches a Gaussian at sufficiently high signal-to-noise ratio (SNR > 2). 42 Since the level of added noise affected the performance of the network, this parameter was treated as a hyperparameter. ...
Article
Full-text available
Objectives: Artificial intelligence (AI) methods can be applied to enhance contrast in diagnostic images beyond that attainable with the standard doses of contrast agents (CAs) normally used in the clinic, thus potentially increasing diagnostic power and sensitivity. Deep learning-based AI relies on training data sets, which should be sufficiently large and diverse to effectively adjust network parameters, avoid biases, and enable generalization of the outcome. However, large sets of diagnostic images acquired at doses of CA outside the standard-of-care are not commonly available. Here, we propose a method to generate synthetic data sets to train an "AI agent" designed to amplify the effects of CAs in magnetic resonance (MR) images. The method was fine-tuned and validated in a preclinical study in a murine model of brain glioma, and extended to a large, retrospective clinical human data set. Materials and methods: A physical model was applied to simulate different levels of MR contrast from a gadolinium-based CA. The simulated data were used to train a neural network that predicts image contrast at higher doses. A preclinical MR study at multiple CA doses in a rat model of glioma was performed to tune model parameters and to assess fidelity of the virtual contrast images against ground-truth MR and histological data. Two different scanners (3 T and 7 T, respectively) were used to assess the effects of field strength. The approach was then applied to a retrospective clinical study comprising 1990 examinations in patients affected by a variety of brain diseases, including glioma, multiple sclerosis, and metastatic cancer. Images were evaluated in terms of contrast-to-noise ratio and lesion-to-brain ratio, and qualitative scores. Results: In the preclinical study, virtual double-dose images showed high degrees of similarity to experimental double-dose images for both peak signal-to-noise ratio and structural similarity index (29.49 dB and 0.914 at 7 T, respectively, and 31.32 dB and 0.942 at 3 T) and significant improvement over standard contrast dose (i.e., 0.1 mmol Gd/kg) images at both field strengths. In the clinical study, contrast-to-noise ratio and lesion-to-brain ratio increased by an average of 155% and 34% in virtual contrast images compared with standard-dose images. Blind scoring of AI-enhanced images by 2 neuroradiologists showed significantly better sensitivity to small brain lesions compared with standard-dose images (4.46/5 vs 3.51/5). Conclusions: Synthetic data generated by a physical model of contrast enhancement provided effective training for a deep learning model for contrast amplification. Contrast above that attainable at standard doses of gadolinium-based CA can be generated through this approach, with significant advantages in the detection of small low-enhancing brain lesions.
... Thus, a tradeoff was made between scan time and expected accuracy in high T1 measurements, resulting in a higher coefficient of variation for the CSF T1 measurements compared to WM and GM. A limitation of this study is that the T1 fitting protocol did not account for Rician noise in the IR magnitude images, although doing so can improve fitting outcomes [22,23]. ...
Article
Full-text available
Objective To measure healthy brain T1 and T2 relaxation times at 0.064 T. Materials and methods T1 and T2 relaxation times were measured in vivo for 10 healthy volunteers using a 0.064 T magnetic resonance imaging (MRI) system and for 10 test samples on both the MRI and a separate 0.064 T nuclear magnetic resonance (NMR) system. In vivo T1 and T2 values are reported for white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) for automatic segmentation regions and manual regions of interest (ROIs). Results T1 sample measurements on the MRI system were within 10% of the NMR measurement for 9 samples, and one sample was within 11%. Eight T2 sample MRI measurements were within 25% of the NMR measurement, and the two longest T2 samples had more than 25% variation. Automatic segmentations generally resulted in larger T1 and T2 estimates than manual ROIs. Discussion T1 and T2 times for brain tissue were measured at 0.064 T. Test samples demonstrated accuracy in WM and GM ranges of values but underestimated long T2 in the CSF range. This work contributes to measuring quantitative MRI properties of the human body at a range of field strengths.
... Lastly, it is also possible that differences in general image quality and noise distribution between the datasets may result in FA differences. It is known that the noise of MRI data with a lower signal-to-noise ratio follows a non-Gaussian distribution [84]. Such a noise distribution may affect the estimation of diffusivity measurements [85]. ...
Article
Full-text available
Diffusion-weighted magnetic resonance imaging (dMRI) is the only available method to measure the tissue properties of white matter tracts in living human brains and has opened avenues for neuroscientific and clinical studies on human white matter. However, dMRI using conventional simultaneous multi-slice (SMS) single-shot echo planar imaging (ssEPI) still presents challenges in the analyses of some specific white matter tracts, such as the optic nerve, which are heavily affected by susceptibility-induced artifacts. In this study, we evaluated dMRI data acquired by using SMS readout-segmented EPI (rsEPI), which aims to reduce susceptibility-induced artifacts by dividing the acquisition space into multiple segments along the readout direction to reduce echo spacing. To this end, we acquired dMRI data from 11 healthy volunteers by using SMS ssEPI and SMS rsEPI, and then compared the dMRI data of the human optic nerve between the SMS ssEPI and SMS rsEPI datasets by visual inspection of the datasets and statistical comparisons of fractional anisotropy (FA) values. In comparison with the SMS ssEPI data, the SMS rsEPI data showed smaller susceptibility-induced distortion and exhibited a significantly higher FA along the optic nerve. In summary, this study demonstrates that despite its prolonged acquisition time, SMS rsEPI is a promising approach for measuring the tissue properties of the optic nerve in living humans and will be useful for future neuroscientific and clinical investigations of this pathway.
... MR images acquire noise from the scanned object, electrical fluctuations during acquisition, the RF coils and conductors, and other hardware components [3]. Noise in MR images is often modeled as Rician at low signal levels (SNR < 2) [4] and as Gaussian at SNR > 2 [5]. For denoising, many filters such as spatial-domain filters, transform-domain filters, anisotropic filters, non-local means, and wavelet-transform approaches are used [6][7][8][9]. ...
Article
Full-text available
Image enhancement for natural images is a vast field in which image quality degrades depending on the capture and processing methods employed by the acquisition device. A filter must be chosen based on the noise type and an estimate of the noise to enhance image quality. The medical field likewise needs filtering mechanisms to reduce noise and to support disease detection from the acquired images; accordingly, preprocessing steps play a vital role in reducing the burden on the radiologist when deciding whether disease is present. Based on the estimated noise and its type, filters are selected to remove the unwanted signal from the image. Hence, identifying noise types and denoising play an important role in image analysis. The proposed framework addresses noise estimation and filtering to obtain enhanced images. This paper estimates and detects the noise type, namely Gaussian, motion artifact, Poisson, salt-and-pepper, and speckle noise. Noise is estimated using the discrete wavelet transform (DWT), which separates the image into four sub-bands. Both the noise and the HH sub-band are high-frequency components, and the HH sub-band also contains vertical edges. These vertical edges are removed by performing a Hadamard (element-wise) operation between a downsampled Sobel edge-detected image and the HH sub-band. The HH sub-band with vertical edges removed is then used to estimate the noise via the Rician energy equation. This estimate is passed to an artificial neural network to refine the estimated noise level. For identifying the noise type, a CNN is used: after removing vertical edges, the HH sub-band is given to the CNN model for classification. The classification accuracy for identifying the noise type is 100% on natural images and 96.3% on medical images.
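The abstract above estimates the noise level from the HH sub-band of a wavelet decomposition after suppressing vertical edges. The sketch below illustrates that general idea with PyWavelets and a Sobel edge mask; because the paper's "Rician energy equation" is not reproduced here, the standard median-absolute-deviation estimator is used as a stand-in, and all thresholds and names are illustrative.

```python
import numpy as np
import pywt
from scipy.ndimage import sobel

def estimate_noise_sigma(image):
    """Rough noise estimate in the spirit of the abstract above: take the HH
    (diagonal detail) sub-band of a single-level DWT, suppress edge-dominated
    coefficients using a downsampled Sobel edge map, and estimate sigma from
    the remaining coefficients. The MAD estimator below is a common stand-in;
    the cited paper uses its own 'Rician energy equation' instead."""
    _, (_, _, hh) = pywt.dwt2(image, "haar")
    # Vertical-edge map, downsampled to the sub-band grid (element-wise mask).
    edges = np.abs(sobel(image, axis=1))[::2, ::2][: hh.shape[0], : hh.shape[1]]
    mask = edges < np.percentile(edges, 75)      # keep non-edge coefficients
    coeffs = hh[mask]
    return np.median(np.abs(coeffs)) / 0.6745    # Donoho-style MAD estimate

rng = np.random.default_rng(0)
noisy = rng.normal(0.0, 5.0, size=(128, 128))    # pure-noise test image
print(estimate_noise_sigma(noisy))               # rough estimate of the true sigma (5.0)
```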
... To evaluate the performance of axonal diameter mapping with and without noise, we fit the single-compartment model in Eq. (7) to the noiseless signal S_a(b) in Section 2.5.1 and to its magnitude signal with Rician noise added, where the noise levels in the real and imaginary parts of the signal are both σ = S_0/SNR (Gudbjartsson and Patz, 1995; Koay and Basser, 2006) with non-diffusion-weighted signal S_0 ≡ 1 and SNR = ∞ (no noise) or 100, respectively. ...
Preprint
Full-text available
We consider the effect of non-cylindrical axonal shape on axonal diameter mapping with diffusion MRI. Practical sensitivity to axon diameter is attained at strong diffusion weightings b, where the deviation from the 1/√b scaling yields the finite transverse diffusivity, which is then translated into axon diameter. While axons are usually modeled as perfectly straight, impermeable cylinders, local variations in diameter (caliber variation or beading) and direction (undulation) have been observed in microscopy data of human axons. Here we quantify the influence of cellular-level features such as caliber variation and undulation on axon diameter estimation. For that, we simulate the diffusion MRI signal in realistic axons segmented from 3-dimensional electron microscopy of a human brain sample. We then create artificial fibers with the same features and tune the amplitude of their caliber variations and undulations. Numerical simulations of diffusion in fibers with such tunable features show that caliber variations and undulations result in under- and over-estimation of axon diameters, respectively; this bias can be as large as 100%. Given that increased axonal beading and undulations have been observed in pathological tissues, such as traumatic brain injury and ischemia, the interpretation of axon diameter alterations in pathology may be significantly confounded.
... volumes with different TEs were preprocessed separately in the following order: (1) noise level estimation and removal using the MP-PCA method (Veraart et al., 2016) with the matrix centring and patch-based aggregation options (Manjon et al., 2013), as implemented in dipy (Garyfallidis et al., 2014) (https://dipy.org/); (2) attenuation of the Rician-noise-dependent bias in the signal by implementing the postprocessing correction scheme proposed by Gudbjartsson and Patz (1995) and ...
Preprint
Full-text available
Axon radius is a potential biomarker for brain diseases and a crucial tissue microstructure parameter that determines the speed of action potentials. Diffusion MRI (dMRI) allows non-invasive estimation of axon radius, but accurately estimating the radius of axons in the human brain is challenging. Most axons in the brain have a radius below one micrometre, which falls below the sensitivity limit of dMRI signals even when using the most advanced human MRI scanners. Therefore, new MRI methods that are sensitive to small axon radii are needed. In this proof-of-concept investigation, we examine whether a surface-based axonal relaxation process could mediate a relationship between intra-axonal T2 and T1 times and inner axon radius, as measured using postmortem histology. A unique in vivo human diffusion-T1-T2 relaxation dataset was acquired on a 3T MRI scanner with ultra-strong diffusion gradients, using a strong diffusion-weighting (i.e., b=6000 s/mm2) and multiple inversion and echo times. A second reduced diffusion-T2 dataset was collected at various echo times to evaluate the model further. The intra-axonal relaxation times were estimated by fitting a diffusion-relaxation model to the orientation-averaged spherical mean signals. Our analysis revealed that the proposed surface-based relaxation model effectively explains the relationship between the estimated relaxation times and the histological axon radius measured in various corpus callosum regions. Using these histological values, we developed a novel calibration approach to predict axon radius in other areas of the corpus callosum. Notably, the predicted radii and those determined from histological measurements were in close agreement.
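The preprocessing excerpt preceding this abstract mentions attenuating the Rician-noise-dependent bias with the simple correction scheme of Gudbjartsson and Patz, which in its commonly used form replaces each magnitude value M by √|M² − σ²|. The NumPy sketch below is illustrative only; it assumes the noise level σ is already known (for example from an MP-PCA noise map) and does not reproduce the cited pipeline exactly.

```python
import numpy as np

def correct_rician_bias(magnitude, sigma):
    """Post-hoc attenuation of the Rician noise floor, in the spirit of the
    simple correction scheme of Gudbjartsson and Patz referenced in the
    preprocessing excerpt above: each magnitude value M is replaced by
    sqrt(|M^2 - sigma^2|), where sigma is the estimated noise level."""
    return np.sqrt(np.abs(magnitude.astype(float) ** 2 - sigma ** 2))

# Toy example: a high-b-value diffusion signal near the noise floor.
rng = np.random.default_rng(1)
true, sigma = 2.0, 1.0
noisy = np.sqrt((true + rng.normal(0, sigma, 100_000)) ** 2
                + rng.normal(0, sigma, 100_000) ** 2)
print(noisy.mean())                              # biased upward (> 2.0)
print(correct_rician_bias(noisy, sigma).mean())  # closer to the true value of 2.0
```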
Article
Full-text available
Purpose The performance of modern image reconstruction methods is commonly judged using quantitative error metrics like root mean squared error and the structural similarity index, which are calculated by comparing reconstructed images against fully sampled reference data. In practice, the reference data will contain noise and is not a true gold standard. In this work, we demonstrate that the “hidden noise” present in reference data can substantially confound standard approaches for ranking different image reconstruction results. Methods Using both experimental and simulated k‐space data and several different image reconstruction techniques, we examined whether there was a correlation between performance metrics obtained with typical noisy reference data versus those obtained with higher‐quality reference data. Results For conventional performance metrics, the reconstructions that matched best with the higher‐quality reference data were substantially different from the reconstructions that matched best with typical noisy reference data. This leads to suboptimal reconstruction results if the performance with respect to noisy reference data is used to select which reconstruction methods/parameters to employ. These issues were reduced when employing alternative error metrics that better account for noise. Conclusion Reference data containing hidden noise can substantially mislead the ranking of image reconstruction methods when using conventional error metrics, but this issue can be mitigated with alternative error metrics.
Preprint
Full-text available
Orbitrap mass spectrometry is widely used in the life-sciences. However, like all mass spectrometers, non-uniform (heteroscedastic) noise introduces bias in multivariate analysis complicating data interpretation. Here, we study the noise structure of a high-field Orbitrap mass analyzer integrated into a secondary ion mass spectrometer (OrbiSIMS). Using a stable primary ion beam to provide a well-controlled source of secondary ions from a silver sample, we find that noise has three characteristic regimes (1) at low signals the ion trap detector noise and a censoring algorithm dominate, (2) at intermediate signals counting noise specific to the SIMS emission process is most significant and has Poisson-like statistical properties, and (3) at high signal levels other sources of measurement variation become important and the data are overdispersed relative to Poisson. We developed a generative model for Orbitrap-based mass spectrometry data that directly incorporates the number of ions and accounts for the noise distribution over the entire intensity range. We find, for silver ions, a detection limit of 3.7 ions independent of ion generation rate. Using this understanding, we introduce a new scaling method, termed WSoR, to reduce the effects of noise bias in multivariate analysis and show it is more effective than the most common data preprocessing methods (root mean scaling, Pareto scaling and log transform) for the simple silver data. For more complex biological images with lower signal intensities the WSoR, Pareto and root mean scaling methods have similar performance and are significantly better than no scaling or, especially, log transform.
Article
Full-text available
Purpose This study aims to evaluate two distinct approaches for fiber radius estimation using diffusion‐relaxation MRI data acquired in biomimetic microfiber phantoms that mimic hollow axons. The methods considered are the spherical mean power‐law approach and a T2‐based pore size estimation technique. Theory and Methods A general diffusion‐relaxation theoretical model for the spherical mean signal from water molecules within a distribution of cylinders with varying radii was introduced, encompassing the evaluated models as particular cases. Additionally, a new numerical approach was presented for estimating effective radii (i.e., MRI‐visible mean radii) from the ground truth radii distributions, not reliant on previous theoretical approximations and adaptable to various acquisition sequences. The ground truth radii were obtained from scanning electron microscope images. Results Both methods show a linear relationship between effective radii estimated from MRI data and ground‐truth radii distributions, although some discrepancies were observed. The spherical mean power‐law method overestimated fiber radii. Conversely, the T2‐based method exhibited higher sensitivity to smaller fiber radii, but faced limitations in accurately estimating the radius in one particular phantom, possibly because of material‐specific relaxation changes. Conclusion The study demonstrates the feasibility of both techniques to predict pore sizes of hollow microfibers. The T2‐based technique, unlike the spherical mean power‐law method, does not demand ultra‐high diffusion gradients, but requires calibration with known radius distributions. This research contributes to the ongoing development and evaluation of neuroimaging techniques for fiber radius estimation, highlights the advantages and limitations of both methods, and provides datasets for reproducible research.
Article
Full-text available
Purpose To develop a tissue field‐filtering algorithm, called maximum spherical mean value (mSMV), for reducing shadow artifacts in QSM of the brain without requiring brain‐tissue erosion. Theory and Methods Residual background field is a major source of shadow artifacts in QSM. The mSMV algorithm filters large field‐magnitude values near the border, where the maximum value of the harmonic background field is located. The effectiveness of mSMV for artifact removal was evaluated by comparing existing QSM algorithms in numerical brain simulation as well as using in vivo human data acquired from 11 healthy volunteers and 93 patients. Results Numerical simulation showed that mSMV reduces shadow artifacts and improves QSM accuracy. Better shadow reduction, as demonstrated by lower QSM variation in the gray matter and higher QSM image quality score, was also observed in healthy subjects and in patients with hemorrhages, stroke, and multiple sclerosis. Conclusion The mSMV algorithm allows QSM maps that are substantially equivalent to those obtained using SMV‐filtered dipole inversion without eroding the volume of interest.
Article
Full-text available
Purpose To determine the sensitivity profiles of probabilistic and deterministic DTI tractography methods in estimating geometric properties in arm muscle anatomy. Methods Spin‐echo diffusion‐weighted MR images were acquired in the dominant arm of 10 participants. Both deterministic and probabilistic tractography were performed in two different muscle architectures: the parallel‐structured biceps brachii and the pennate‐structured flexor carpi ulnaris. Muscle fascicle geometry estimates and the number of fascicles were evaluated with respect to tractography turning angle, polynomial fitting order, and SNR. The DTI tractography estimated fascicle lengths were compared with measurements obtained from conventional cadaveric dissection and ultrasound modalities. Results The probabilistic method generally estimated fascicle lengths closer to ranges reported by conventional methods than the deterministic method, most evident in the biceps brachii (p > 0.05), consisting of longer, arc‐like fascicles. For both methods, a wide turning angle (50°–90°) generated fascicle lengths that were in close agreement with conventional methods, most evident in the flexor carpi ulnaris (p > 0.05), consisting of shorter, feather‐like fascicles. The probabilistic approach produced at least two times more fascicles than the deterministic approach. For both approaches, second‐order fitting yielded about double the complete tracts as third‐order fitting. In both muscles, as SNR decreased, deterministic tractography produced fewer fascicles but consistent geometry (p > 0.05), whereas probabilistic tractography produced a consistent number but altered geometry of fascicles (p < 0.001). Conclusion Findings from this study provide best practice recommendations for implementing DTI tractography in skeletal muscle and will inform future in vivo studies of healthy and pathological muscle structure.
Article
Full-text available
A key parameter of interest recovered from hyperpolarized (HP) MRI measurements is the apparent pyruvate-to-lactate exchange rate, k_PL, for measuring tumor metabolism. This manuscript presents an information-theory-based optimal experimental design approach that minimizes the uncertainty in the rate parameter, k_PL, recovered from HP-MRI measurements. Mutual information is employed to measure the information content of the HP measurements with respect to the first-order exchange kinetics of the pyruvate conversion to lactate. Flip angles of the pulse sequence acquisition are optimized with respect to the mutual information. A time-varying flip angle scheme leads to a higher parameter optimization that can further improve the quantitative value of mutual information over a constant flip angle scheme. However, the constant flip angle scheme, 35 and 28 degrees for pyruvate and lactate measurements, leads to an accuracy and precision comparable to the variable flip angle schemes obtained from our method. Combining the comparable performance and practical implementation, optimized pyruvate and lactate flip angles of 35 and 28 degrees, respectively, are recommended.
Article
Full-text available
Purpose To assess the feasibility of CEST‐based creatine (Cr) mapping in brain at 3T using the guanidino (Guan) proton resonance. Methods Wild type and knockout mice with guanidinoacetate N‐methyltransferase deficiency and low Cr and phosphocreatine (PCr) concentrations in the brain were used to assign the Cr and protein‐based arginine contributions to the GuanCEST signal at 2.0 ppm. To quantify the Cr proton exchange rate, two‐step Bloch–McConnell fitting was used to fit the extracted CrCEST line‐shape and multi‐B1 Z‐spectral data. The pH response of GuanCEST was simulated to demonstrate its potential for pH mapping. Results Brain Z‐spectra of wild type and guanidinoacetate N‐methyltransferase deficiency mice show a clear Guan proton peak at 2.0 ppm at 3T. The CrCEST signal contributes ∼23% to the GuanCEST signal at B1 = 0.8 μT, where a maximum CrCEST effect of 0.007 was detected. An exchange rate range of 200–300 s⁻¹ was estimated for the Cr Guan protons. As revealed by the simulation, an elevated GuanCEST in the brain is observed when B1 is less than 0.4 μT at 3T, when intracellular pH reduces by 0.2. Conversely, the GuanCEST decreases when B1 is greater than 0.4 μT with the same pH drop. Conclusions CrCEST mapping is possible at 3T, which has potential for detecting intracellular pH and Cr concentration in brain.
Article
Full-text available
We propose a general method for combining multiple models to predict tissue microstructure, with an exemplar using in vivo diffusion-relaxation MRI data. The proposed method obviates the need to select a single ’optimum’ structure model for data analysis in heterogeneous tissues where the best model varies according to local environment. We break signal interpretation into a three-stage sequence: (1) application of multiple semi-phenomenological models to predict the physical properties of tissue water pools contributing to the observed signal; (2) from each Stage-1 semi-phenomenological model, application of a tissue microstructure model to predict the relative volumes of tissue structure components that make up each water pool; and (3) aggregation of the predictions of tissue structure, with weightings based on model likelihood and fractional volumes of the water pools from Stage-1. The multiple model approach is expected to reduce prediction variance in tissue regions where a complex model is overparameterised, and bias where a model is underparameterised. The separation of signal characterisation (Stage-1) from biological assignment (Stage-2) enables alternative biological interpretations of the observed physical properties of the system, by application of different tissue structure models. The proposed method is exemplified with human prostate diffusion-relaxation MRI data, but has potential application to a wide range of analyses where a single model may not be optimal throughout the sampled domain.
Article
Full-text available
Purpose To mitigate the problem of noisy parameter maps with high uncertainties by casting parameter mapping as a denoising task based on Deep Image Priors. Methods We extend the concept of denoising with Deep Image Prior (DIP) into parameter mapping by treating the output of an image‐generating network as a parametrization of tissue parameter maps. The method implicitly denoises the parameter mapping process by filtering low‐level image features with an untrained convolutional neural network (CNN). Our implementation includes uncertainty estimation from Bernoulli approximate variational inference, implemented with MC dropout, which provides model uncertainty in each voxel of the denoised parameter maps. The method is modular, so the specifics of different applications (e.g., T1 mapping) separate into application‐specific signal equation blocks. We evaluate the method on variable flip angle T1 mapping, multi‐echo T2 mapping, and apparent diffusion coefficient mapping. Results We found that the deep image prior adapts successfully to several applications in parameter mapping. In all evaluations, the method produces noise‐reduced parameter maps with decreased uncertainty compared to conventional methods. The downsides of the proposed method are the long computational time and the introduction of some bias from the denoising prior. Conclusion DIP successfully denoises the parameter mapping process and applies to several applications with limited hyperparameter tuning. Further, it is easy to implement since DIP methods do not use network training data. Although time‐consuming, uncertainty information from MC dropout makes the method more robust and provides useful information when properly calibrated.
Article
Full-text available
Background The use of a gradient echo spin echo (GESE) method to obtain rapid T2 and T2* estimation in the heart has been proposed. The effect of acquisition parameter settings on T2 and T2* bias and precision have not been investigated in depth. Purpose To understand factors impacting the quantification of T2 and T2* values with a gradient echo spin echo (GESE) method using echo planar imaging (EPI) readouts in a reduced field of view acquisition. Methods The GESE method is implemented with a reduced field‐of‐view using an outer volume suppression (OVS) technique to minimize the time for multi‐echo EPI readouts. The number of EPI readouts (images) for the GESE is optimized using Cramer‐Rao Lower Bound (CRLB) and Monte Carlo simulations with a nonlinear least‐square (NLLS) estimator. The SNR requirements were studied using the latter simulation method for a selected range of T2 and T2* values and T2/T2* ratios. Two healthy control subjects were imaged with the proposed GESE sequence and evaluated with the NLLS estimation method. In addition, the proposed OVS method was compared with a saturation bands OVS method in one subject. Clinical T2 and T2* mappings were used as the reference. Results The optimal number of EPI readouts is five and the performance is slightly better when the refocusing pulse is placed between the 2nd and 3rd readouts. The SNR requirement for achieving a target bias < 1 ms and standard deviation (SD) < 5 ms is more demanding when T2/T2* ratio increases. The minimum SNR requirement in the GESE acquisition should vary from 6 to 20 depending on specific myocardial T2 and T2* values at 3T. The T2 and T2* estimates using the proposed OVS method and the saturation bands OVS method are both similar to the reference. Conclusion The GESE sequence with five EPI readouts is a feasible and efficient technique that can estimate T2 and T2* values in the septal myocardium within a heartbeat when the SNR requirement can be satisfied.
Article
Full-text available
Purpose Water saturation shift referencing (WASSR) Z‐spectra are used commonly for field referencing in chemical exchange saturation transfer (CEST) MRI. However, their analysis using least‐squares (LS) Lorentzian fitting is time‐consuming and prone to errors because of the unavoidable noise in vivo. A deep learning–based single Lorentzian Fitting Network (sLoFNet) is proposed to overcome these shortcomings. Methods A neural network architecture was constructed and its hyperparameters optimized. Training was conducted on simulated and in vivo paired data sets of discrete signal values and their corresponding Lorentzian shape parameters. The sLoFNet performance was compared with LS on several WASSR data sets (both simulated and in vivo 3T brain scans). Prediction errors, robustness against noise, effects of sampling density, and time consumption were compared. Results LS and sLoFNet performed comparably in terms of RMS error and mean absolute error on all in vivo data with no statistically significant difference. Although the LS method fitted well on samples with low noise, its error increased rapidly when sample noise increased up to 4.5%, whereas the error of sLoFNet increased only marginally. With the reduction of Z‐spectral sampling density, prediction errors increased for both methods, but the increase occurred earlier (at 25 vs. 15 frequency points) and was more pronounced for LS. Furthermore, sLoFNet performed, on average, 70 times faster than the LS method. Conclusion Comparisons between LS and sLoFNet on simulated and in vivo WASSR MRI Z‐spectra in terms of robustness against noise and decreased sample resolution, as well as time consumption, showed significant advantages for sLoFNet.
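The least-squares baseline that sLoFNet is compared against amounts to fitting a single Lorentzian line shape to each WASSR Z-spectrum to recover the local B0 offset. A minimal SciPy sketch of such a fit follows; the parameterization, initial guesses, and simulated values are illustrative and are not the paper's implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian_z(offset_ppm, amplitude, width, center, baseline):
    """Single-Lorentzian model of a WASSR Z-spectrum: a dip of the given
    amplitude and full width at half maximum, centred at the B0 shift."""
    return baseline - amplitude * (width / 2) ** 2 / (
        (width / 2) ** 2 + (offset_ppm - center) ** 2)

# Simulated noisy Z-spectrum with a 0.1 ppm B0 shift (values illustrative).
offsets = np.linspace(-1.0, 1.0, 33)
rng = np.random.default_rng(2)
z = lorentzian_z(offsets, 0.9, 0.3, 0.1, 1.0) + rng.normal(0, 0.02, offsets.size)

p0 = (0.8, 0.4, 0.0, 1.0)                        # initial guess
popt, _ = curve_fit(lorentzian_z, offsets, z, p0=p0)
print(f"estimated B0 shift: {popt[2]:.3f} ppm")  # ~0.1 ppm
```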
Article
An attempt is made to remove some of the uncertainty surrounding the sensitivity of an NMR experiment involving human samples. It is shown that noise may be associated not only with the receiving coil resistance, but also with dielectric and inductive losses in the sample. Although steps may be taken to minimize the dielectric losses, this is not the case for the magnetic losses, and an estimate is made of their effects upon the signal-to-noise ratio. Approximate values of the latter are calculated for the head and torso and some experimental constraints briefly discussed.
Conference Paper
An algorithm is presented for fully automated detection of brain contours from single-echo 3-D coronal MRI data. The technique detects structures in a head data volume in a hierarchical fashion. Detection consists of a histogram-based thresholding operation, followed by a morphological cleanup procedure applied to the binary threshold mask images. Anatomic knowledge, essential for the discrimination between desired and undesired structures, is implemented through a sequence of conventional and new morphological operations. Innovative use of 3-D distance transformations allows implicit evaluation of anatomic relationships for structure recognition. Overlap tests between neighbouring slice images are used to propagate coherent 2-D brain masks through the third dimension. A summary of results of testing the algorithm on 23 test data sets is presented, with a discussion of the potential for clinical application and generalization to other problems, and of the limitations of the technique.
Article
We show that for magnetic resonance (MR) images with signal-to-noise ratio (SNR) less than 2 it is advantageous to use a phase-corrected real reconstruction, rather than the more usual magnitude reconstruction. We discuss the results of the phase correction algorithm used to experimentally verify the result. We supplement the existing literature by presenting closed form expressions (in an MR context) for the probability distribution and first moments of the signal resulting from a magnitude reconstruction.
Article
The fundamental limit for NMR imaging is set by an intrinsic signal-to-noise ratio (SNR) for a particular combination of rf antenna and imaging subjects. The intrinsic SNR is the signal from a small volume of material in the sample competing with electrical noise from thermally generated, random noise currents in the sample. The intrinsic SNR has been measured for a number of antenna-body section combinations at several different values of the static magnetic field and is proportional to B0. We have applied the intrinsic and system SNR to predict image SNR and have found satisfactory agreement with measurements on images. The relationship between SNR and pixel size is quite different in NMR than it is with imaging modalities using ionizing radiation, and indicates that the initial choice of pixel size is crucial in NMR. The analog of "contrast-detail-dose" plots for ionizing radiation imaging modalities is the "contrast-detail-time" plot in NMR, which should prove useful in choosing a suitable pixel array to visualize a particular anatomical detail for a given NMR receiving antenna.
Article
Power spectrum or magnitude images are frequently presented in magnetic resonance imaging. In such images, measurement of signal intensity at low signal levels is compounded with the noise. This report describes how to extract true intensity measurements in the presence of noise.
Article
A nuclear magnetic resonance (NMR) imaging system signal-to-noise calibration technique based on an NMR projection of distilled water in a cylindrical bottle is proposed. This measurement can characterize any arrangement of rf coils in any magnetic field as signal to noise per ml times √Hz. Inductive losses in a typical patient must be included in the calibration, and such losses can be simulated in a particular system by an externally attached resistor(s) appropriate to that system. Alternatively, an rf inductive damping phantom consisting of a conducting loop of wire containing an appropriate resistor is suggested that can be inserted into any NMR imaging coil to simulate subject Q damping. The same resistor can be used, independent of the details of the coil construction. Furthermore, if the loop inductance is tuned out at each frequency with a series capacitor, then the same loop resistance will serve for all frequencies as a good approximation to human subject damping. This "projection method" signal-to-noise ratio is related to the conventional signal-to-noise ratio measured from a Lorentzian-shaped spectral line as ψ_P = ψ_L (2/T2)^(1/2), where ψ stands for signal-to-noise ratio, the subscripts P and L stand, respectively, for the projection and "Lorentzian" methods, and T2 is the transverse relaxation time of the spectral line used in the Lorentzian method.
Article
Zero-mean noise introduced into quadrature-detected MRI signals is generally rectified by the reconstruction algorithm to give a nonzero background intensity in the displayed image. In low signal-to-noise ratio (SNR) images, this background will inflate region of interest (ROI) signal measurements, leading to improper T2 and diffusion fits. A method is described here which separates signal from noise by computing power images from traditional magnitude data. Parameters measured from such power images show closer agreement with true values than do those derived from magnitude images. Because the correction algorithm is the same for all pixel intensities, it can be used with regions of interest with heterogeneous values.
Article
Estimating the true signal-to-noise ratio (SNR) of magnetic resonance (MR) images with low signal is confounded by the magnitude presentation of the data. This paper suggests a simple solution to this problem. A common method of measuring SNR compares the mean signal to the standard deviation of the noise. This SNR measure was found to be satisfactory for high but not low signal-to-noise image regions because of noise bias. These inconsistencies are removed by introducing unbiased definitions of the signal and noise levels in terms of their root-mean-square values. The approaches are compared by evaluating the SNR values for MR medical images.
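One common way to implement RMS-based unbiased signal and noise estimates for magnitude data, consistent with the description in the abstract above, is sketched below; the exact definitions in the cited paper may differ in detail, and the ROI names and simulated values are illustrative only.

```python
import numpy as np

def unbiased_snr(signal_roi, background_roi):
    """RMS-based SNR estimate for magnitude MR data, in the spirit of the
    abstract above. In a signal-free background region the mean squared
    magnitude of Rician (Rayleigh) noise is 2*sigma^2, so sigma is estimated
    from the background RMS, and the noise power 2*sigma^2 is subtracted
    from the mean squared signal before taking the square root."""
    sigma2 = np.mean(background_roi.astype(float) ** 2) / 2.0
    signal_rms = np.sqrt(max(np.mean(signal_roi.astype(float) ** 2) - 2.0 * sigma2, 0.0))
    return signal_rms / np.sqrt(sigma2)

# Toy check with simulated magnitude data (true SNR = 3).
rng = np.random.default_rng(3)
a, sigma, n = 3.0, 1.0, 100_000
roi = np.abs(a + rng.normal(0, sigma, n) + 1j * rng.normal(0, sigma, n))
bg = np.abs(rng.normal(0, sigma, n) + 1j * rng.normal(0, sigma, n))
print(unbiased_snr(roi, bg))  # close to 3
```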
Article
A software procedure is presented for fully automated detection of brain contours from single-echo 3-D MRI data, developed initially for scans with coronal orientation. The procedure detects structures in a head data volume in a hierarchical fashion. Automatic detection starts with a histogram-based thresholding step, whenever necessary preceded by an image intensity correction procedure. This step is followed by a morphological procedure which refines the binary threshold mask images. Anatomical knowledge, essential for the discrimination between desired and undesired structures, is implemented in this step through a sequence of conventional and novel morphological operations, using 2-D and 3-D operations. A final step of the procedure performs overlap tests on candidate brain regions of interest in neighboring slice images to propagate coherent 2-D brain masks through the third dimension. Results are presented for test runs of the procedure on 23 coronal whole-brain data sets, and one sagittal whole-brain data set. Finally, the potential of the technique for generalization to other problems is discussed, as well as limitations of the technique.
A. J. Miller, P. M. Joseph, The use of power images to perform quantitative analysis on low SNR MR images. Magn. Reson. Imaging 11, 1051–1056 (1993).