An iterative multi‐path fully convolutional neural network for automatic cardiac segmentation in cine MR images

2019 ◽  
Vol 46 (12) ◽  
pp. 5652-5665 ◽  
Author(s):  
Zongqing Ma ◽  
Xi Wu ◽  
Xin Wang ◽  
Qi Song ◽  
Youbing Yin ◽  
...  
Diagnostics ◽  
2021 ◽  
Vol 11 (9) ◽  
pp. 1554
Author(s):  
Philippe Germain ◽  
Armine Vardazaryan ◽  
Nicolas Padoy ◽  
Aissam Labani ◽  
Catherine Roy ◽  
...  

The automatic classification of various types of cardiomyopathies is desirable but has never been performed using a convolutional neural network (CNN). The purpose of this study was to evaluate currently available CNN models to classify cine magnetic resonance (cine-MR) images of cardiomyopathies. Method: Diastolic and systolic frames of 1200 cine-MR sequences from three categories of subjects (395 normal, 411 hypertrophic cardiomyopathy, and 394 dilated cardiomyopathy) were selected, preprocessed, and labeled. Pretrained, fine-tuned deep learning models (VGG) were used for image classification (sixfold cross-validation and double split testing with hold-out data). The heat activation map algorithm (Grad-CAM) was applied to reveal the salient pixel areas driving the classification. Results: The diastolic–systolic dual-input concatenated VGG model achieved a cross-validation accuracy of 0.982 ± 0.009. Summed confusion matrices showed that, for the 1200 inputs, the VGG model made 22 errors. Classification of a 227-input validation group, carried out by an experienced radiologist and cardiologist, produced a similar number of discrepancies. The image preparation process yielded a 5% accuracy improvement over nonprepared images. Grad-CAM heat activation maps showed that most misclassifications occurred when an extracardiac location caught the attention of the network. Conclusions: CNNs are very well suited to the classification of cardiomyopathies and are 98% accurate, regardless of the imaging plane, when both diastolic and systolic frames are incorporated. Misclassification rates are in the same range as inter-observer discrepancies among experienced human readers.
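The Grad-CAM step described above weights a convolutional layer's feature maps by the pooled gradients of the class score and keeps only the positive contributions. A minimal numpy sketch of that computation follows; `grad_cam` and its inputs are illustrative names, not the authors' code, and a real pipeline would obtain the activations and gradients from the trained VGG model.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heatmap from one conv layer.

    feature_maps: (C, H, W) activations at that layer.
    gradients:    (C, H, W) gradients of the class score w.r.t. those maps.
    Returns an (H, W) saliency map normalized to [0, 1].
    """
    # Channel weights: global average pooling of the gradients.
    weights = gradients.mean(axis=(1, 2))                         # (C,)
    # Weighted sum over channels, then ReLU to keep positive evidence.
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0)
    if cam.max() > 0:
        cam /= cam.max()
    return cam
```

Upsampling the resulting map to the input resolution and overlaying it on the cine-MR frame is what reveals cases where extracardiac regions drive the prediction.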


2020 ◽  
Vol 10 (5) ◽  
pp. 1023-1032
Author(s):  
Lin Qi ◽  
Haoran Zhang ◽  
Xuehao Cao ◽  
Xuyang Lyu ◽  
Lisheng Xu ◽  
...  

Accurate segmentation of the blood pool of the left ventricle (LV) and the myocardium (or left ventricular epicardium, MYO) from cardiac magnetic resonance (MR) images can help doctors quantify LV ejection fraction and myocardial deformation. To reduce doctors' burden of manual segmentation, in this study we propose an automated, concurrent segmentation method for the LV and MYO. First, we employ a convolutional neural network (CNN) architecture to extract the region of interest (ROI) from short-axis cardiac cine MR images as a preprocessing step. Next, we present a multi-scale feature fusion (MSFF) CNN with a new weighted Dice index (WDI) loss function to obtain concurrent segmentation of the LV and MYO. We use MSFF modules at three scales to extract different features, then concatenate feature maps through short and long skip connections in the encoder and decoder paths to capture more complete contextual information and geometric structure for better segmentation. Finally, we compare the proposed method with Fully Convolutional Networks (FCN) and U-Net on the combined cardiac datasets from MICCAI 2009 and ACDC 2017. Experimental results demonstrate that the proposed method performs effectively on LV and MYO segmentation in the combined datasets, indicating its potential for clinical application.
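A weighted Dice loss of the kind named above scores per-class overlap and lets the class weights emphasize harder structures (e.g. the thin MYO ring relative to the LV blood pool). The paper does not give its exact WDI formula, so the sketch below is one common weighted formulation, in plain numpy with illustrative names:

```python
import numpy as np

def weighted_dice_loss(pred, target, weights, eps=1e-6):
    """Weighted multi-class Dice loss.

    pred, target: (K, H, W) per-class probability / one-hot maps.
    weights:      length-K class weights (e.g. larger for the thin MYO).
    Returns 1 - weighted mean Dice, so perfect overlap gives ~0.
    """
    dice = []
    for k in range(pred.shape[0]):
        inter = (pred[k] * target[k]).sum()
        denom = pred[k].sum() + target[k].sum() + eps
        dice.append(2.0 * inter / denom)
    w = np.asarray(weights, dtype=float)
    return 1.0 - float((w * np.asarray(dice)).sum() / w.sum())
```

In training, `pred` would be the softmax output of the MSFF network and the loss would be minimized jointly over the LV and MYO channels.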


2021 ◽  
Author(s):  
Ritu Lahoti ◽  
Sunil Kumar Vengalil ◽  
Punith B Venkategowda ◽  
Neelam Sinha ◽  
Vinod Veera Reddy

2020 ◽  
Vol 30 (11) ◽  
pp. 5923-5932
Author(s):  
M.-L. Kromrey ◽  
D. Tamada ◽  
H. Johno ◽  
S. Funayama ◽  
N. Nagata ◽  
...  

Abstract Objectives To reveal the utility of motion artifact reduction with convolutional neural network (MARC) in gadoxetate disodium–enhanced multi-arterial phase MRI of the liver. Methods This retrospective study included 192 patients (131 men, 68.7 ± 10.3 years) receiving gadoxetate disodium–enhanced liver MRI in 2017. Datasets were submitted to a newly developed filter (MARC), consisting of 7 convolutional layers and trained on 14,190 cropped images generated from abdominal MR images. Motion artifact for training was simulated by adding periodic k-space domain noise to the images. Original and filtered images of the pre-contrast and 6 arterial phases (7 image sets per patient, 1344 sets in total) were evaluated for motion artifacts on a 4-point scale. Lesion conspicuity in original and filtered images was ranked by side-by-side comparison. Results Of the 1344 original image sets, the motion artifact score was 2 in 597, 3 in 165, and 4 in 54 sets. MARC significantly improved image quality over all phases, showing an average motion artifact score of 1.97 ± 0.72 compared to 2.53 ± 0.71 in the original MR images (p < 0.001). MARC improved motion scores from 2 to 1 in 177/596 (29.65%), from 3 to 2 in 119/165 (72.12%), and from 4 to 3 in 34/54 sets (62.96%). Lesion conspicuity was significantly improved (p < 0.001) without removing anatomical details. Conclusions Motion artifacts and lesion conspicuity of gadoxetate disodium–enhanced arterial phase liver MRI were significantly improved by the MARC filter, especially in cases with substantial artifacts. This method can be of high clinical value in subjects with failed breath-holds during the scan. Key Points
• This study presents a newly developed deep learning–based filter for artifact reduction using a convolutional neural network (motion artifact reduction with convolutional neural network, MARC).
• MARC significantly improved MR image quality after gadoxetate disodium administration by reducing motion artifacts, especially in cases with severely degraded images.
• Postprocessing with MARC led to better lesion conspicuity without removing anatomical details.
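The training data described above is built by corrupting clean images with periodic k-space noise, which produces the ghosting typical of breathing motion. The paper's exact noise model is not given here, so the following is a hedged numpy sketch of one way to do it: applying random phase perturbations to every `period`-th phase-encode line (the `period`, `amplitude`, and function name are illustrative assumptions):

```python
import numpy as np

def add_periodic_kspace_noise(image, period=4, amplitude=0.5, seed=0):
    """Simulate motion artifact by perturbing periodic k-space lines.

    image: 2-D real-valued MR magnitude image.
    Every `period`-th phase-encode row gets a random phase shift,
    which produces ghosting along the phase-encode direction.
    """
    rng = np.random.default_rng(seed)
    k = np.fft.fftshift(np.fft.fft2(image))
    rows = np.arange(0, k.shape[0], period)
    # Random phase errors on periodic lines mimic periodic motion.
    phase = np.exp(1j * amplitude * rng.uniform(-np.pi, np.pi, size=rows.size))
    k[rows, :] *= phase[:, None]
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k)))
```

Pairs of (corrupted, clean) images generated this way are what a MARC-style filter would be trained to map between.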


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Ryohei Fukuma ◽  
Takufumi Yanagisawa ◽  
Manabu Kinoshita ◽  
Takashi Shinozaki ◽  
Hideyuki Arita ◽  
...  

Abstract Identification of genotypes is crucial for the treatment of glioma. Here, we developed a method to predict tumor genotypes from magnetic resonance (MR) images using a pretrained convolutional neural network (CNN) and compared its accuracy to that of a diagnosis based on conventional radiomic features and patient age. Multisite preoperative MR images of 164 patients with grade II/III glioma were grouped by IDH and TERT promoter (pTERT) mutations as follows: (1) IDH wild type, (2) IDH and pTERT co-mutations, (3) IDH mutant and pTERT wild type. We applied a CNN (AlexNet) to four types of MR sequences and obtained CNN texture features to classify the groups with a linear support vector machine. The classification was also performed using conventional radiomic features and/or patient age. Using all features, we classified patients with an accuracy of 63.1%, which was significantly higher than the accuracy obtained from either the radiomic features or patient age alone. In particular, prediction of the pTERT mutation was significantly improved by the CNN texture features. In conclusion, the pretrained CNN texture features capture the information of IDH and TERT genotypes in grade II/III gliomas better than the conventional radiomic features.
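The pipeline above extracts fixed "texture" descriptors from a frozen pretrained network and feeds them to a linear classifier. As a rough, self-contained sketch of the feature-extraction half, the toy function below convolves an image with frozen filters, applies ReLU, and global-average-pools each response map; this is a stand-in for AlexNet's learned layers, and the function and filter bank are hypothetical, not the study's implementation (which used AlexNet features with a linear SVM):

```python
import numpy as np

def cnn_texture_features(image, filters):
    """Frozen-filter texture features: conv -> ReLU -> global average pool.

    image:   2-D array (an MR slice).
    filters: list of 2-D kernels standing in for pretrained conv filters.
    Returns one pooled scalar per filter, i.e. a fixed-length feature vector.
    """
    feats = []
    for f in filters:
        fh, fw = f.shape
        h, w = image.shape[0] - fh + 1, image.shape[1] - fw + 1
        resp = np.zeros((h, w))
        for i in range(h):                      # valid cross-correlation
            for j in range(w):
                resp[i, j] = (image[i:i + fh, j:j + fw] * f).sum()
        feats.append(np.maximum(resp, 0).mean())  # ReLU + GAP
    return np.asarray(feats)
```

The resulting vectors, concatenated across the four MR sequences and optionally with radiomic features and age, would then be classified with a linear SVM.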


2021 ◽  
Vol 20 ◽  
pp. 153303382110464
Author(s):  
Jiankui Yuan ◽  
Elisha Fredman ◽  
Jian-Yue Jin ◽  
Serah Choi ◽  
David Mansur ◽  
...  

The aim of this work is to study the dosimetric effect of synthetic computed tomography (sCT) generated from magnetic resonance (MR) images using a deep learning algorithm for Gamma Knife (GK) stereotactic radiosurgery (SRS). The Monte Carlo (MC) method is used for dose calculations. Thirty patients were retrospectively selected with our institution's IRB approval. All patients were treated with GK SRS based on T1-weighted MR images and also underwent conventional external beam treatment with a CT scan. Image datasets were preprocessed with registration and normalized to obtain similar intensities for the pairs of MR and CT images. A deep convolutional neural network arranged in an encoder–decoder fashion was used to learn the direct mapping from MR to the corresponding CT. A number of metrics, including the voxel-wise mean error (ME) and mean absolute error (MAE), were used to evaluate the difference between the generated sCT and the true CT. To study the dosimetric accuracy, MC simulations were performed based on the true CT and the sCT using the same treatment parameters. The method produced an MAE of 86.6 ± 34.1 Hounsfield units (HU) and a mean squared error (MSE) of 160.9 ± 32.8. The mean Dice similarity coefficient was 0.82 ± 0.05 for HU > 200. The difference in the dose-volume parameter D95 between the ground-truth dose and the dose calculated with sCT was 1.1% when a synthetic CT-to-density table was used, and 4.9% when compared with calculations based on the water-brain phantom.
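The sCT evaluation above combines intensity metrics (ME, MAE) with a Dice overlap of the high-density regions (HU > 200, roughly bone). A minimal numpy sketch of those three metrics follows; `sct_metrics` is an illustrative name and the dosimetric (D95) comparison, which requires the MC dose engine, is out of scope here:

```python
import numpy as np

def sct_metrics(sct, ct, hu_threshold=200):
    """ME/MAE between synthetic and true CT, plus high-density Dice.

    sct, ct: same-shape arrays of HU values.
    Dice is computed on the HU > hu_threshold masks (e.g. bone).
    """
    diff = sct.astype(float) - ct.astype(float)
    me, mae = diff.mean(), np.abs(diff).mean()
    a, b = sct > hu_threshold, ct > hu_threshold
    dice = 2.0 * np.logical_and(a, b).sum() / max(a.sum() + b.sum(), 1)
    return me, mae, dice
```

ME exposes systematic HU bias (which matters for density conversion in dose calculation), while MAE and the high-density Dice capture overall fidelity and bone-geometry agreement, respectively.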

