Deep learning applied to lung ultrasound videos for scoring COVID-19 patients: A multicenter study

2021 ◽  
Vol 149 (5) ◽  
pp. 3626-3634
Author(s):  
Federico Mento ◽  
Tiziano Perrone ◽  
Anna Fiengo ◽  
Andrea Smargiassi ◽  
Riccardo Inchingolo ◽  
...  
Diagnostics ◽  
2021 ◽  
Vol 11 (11) ◽  
pp. 2049
Author(s):  
Robert Arntfield ◽  
Derek Wu ◽  
Jared Tschirhart ◽  
Blake VanBerlo ◽  
Alex Ford ◽  
...  

Lung ultrasound (LUS) is an accurate thoracic imaging technique distinguished by its handheld size, low cost, and lack of radiation. User dependence and poor access to training have limited the impact and dissemination of LUS outside of acute care hospital environments. Automated interpretation of LUS using deep learning can overcome these barriers by increasing accuracy while allowing point-of-care use by non-experts. In this multicenter study, we seek to automate the clinically vital distinction between the A line (normal parenchyma) and B line (abnormal parenchyma) patterns on LUS by training a customized neural network on 272,891 labelled LUS images. After external validation on 23,393 frames, pragmatic clinical application at the clip level was performed on 1162 videos. The trained classifier demonstrated an area under the receiver operating characteristic curve (AUC) of 0.96 (±0.02) under 10-fold cross-validation on local frames and an AUC of 0.93 on the external validation dataset. Clip-level inference yielded sensitivities and specificities of 90% and 92% (local) and 83% and 82% (external), respectively, for detecting the B line pattern. This study demonstrates accurate deep-learning-enabled discrimination between normal and abnormal lung parenchyma on ultrasound frames while delivering diagnostically important sensitivity and specificity at the video clip level.
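The clip-level inference described above aggregates per-frame classifier outputs into a single video-level call. A minimal sketch of one such aggregation rule follows; the `threshold` and `min_fraction` parameters are illustrative assumptions, not the study's published decision rule.

```python
import numpy as np

def clip_prediction(frame_probs, threshold=0.5, min_fraction=0.2):
    """Aggregate per-frame B-line probabilities into a clip-level label.

    frame_probs: per-frame P(B line) from the frame classifier.
    The clip is called B-line-positive when at least `min_fraction` of
    its frames exceed `threshold`. Both parameters are hypothetical
    placeholders for whatever rule a given deployment tunes.
    """
    frame_probs = np.asarray(frame_probs, dtype=float)
    positive = frame_probs > threshold
    return bool(positive.mean() >= min_fraction)
```

A fraction-based rule like this trades sensitivity against specificity: lowering `min_fraction` flags clips with only transient B lines, raising it suppresses spurious single-frame detections.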


2021 ◽  
Vol 11 (15) ◽  
pp. 6976
Author(s):  
Miroslav Jaščur ◽  
Marek Bundzel ◽  
Marek Malík ◽  
Anton Dzian ◽  
Norbert Ferenčík ◽  
...  

Certain post-thoracic surgery complications are monitored in a standard manner using methods that employ ionising radiation. A need to automatise the diagnostic procedure has now arisen following the clinical trial of a novel lung ultrasound examination procedure that can replace X-rays. Deep learning was used as a powerful tool for lung ultrasound analysis. We present a novel deep-learning method, automated M-mode classification, to detect the absence of lung sliding motion in lung ultrasound. Automated M-mode classification leverages semantic segmentation to select 2D slices across the temporal dimension of the video recording. These 2D slices are the input for a convolutional neural network, and the output of the neural network indicates the presence or absence of lung sliding in the given time slot. We aggregate the partial predictions over the entire video recording to determine whether the subject has developed post-surgery complications. With a 64-frame version of this architecture, we detected lung sliding on average with a balanced accuracy of 89%, sensitivity of 82%, and specificity of 92%. Automated M-mode classification is suitable for lung sliding detection from clinical lung ultrasound videos. Furthermore, in lung ultrasound videos, we recommend using time windows between 0.53 and 2.13 s for the classification of lung sliding motion followed by aggregation.
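The M-mode construction described above reduces a B-mode video to 2D time-depth slices that a standard image classifier can consume. A minimal sketch, assuming a video array of shape (frames, depth, width); the column index here is a placeholder, whereas the paper selects columns via semantic segmentation of the pleural line.

```python
import numpy as np

def extract_m_mode(video, column):
    """Build an M-mode image from a B-mode ultrasound video.

    video: array of shape (frames, depth, width). Extracting one lateral
    column across all frames yields a 2D (frames x depth) slice whose
    texture encodes motion over time; without lung sliding, the pattern
    below the pleural line stays static across frames.
    """
    video = np.asarray(video)
    return video[:, :, column]          # shape: (frames, depth)

def window_slices(m_mode, window):
    """Split the temporal axis into fixed-length windows (e.g. 64 frames),
    the per-time-slot inputs to the CNN; trailing frames are dropped."""
    n = m_mode.shape[0] // window
    return [m_mode[i * window:(i + 1) * window] for i in range(n)]
```

At a 30 fps acquisition rate, 16- and 64-frame windows correspond to roughly 0.53 s and 2.13 s, matching the recommended time-window range; per-window predictions are then aggregated over the recording.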


CHEST Journal ◽  
2021 ◽  
Author(s):  
Francesco Raimondi ◽  
Fiorella Migliaro ◽  
Iuri Corsini ◽  
Fabio Meneghin ◽  
Luca Pierri ◽  
...  

2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Endre Grøvik ◽  
Darvin Yi ◽  
Michael Iv ◽  
Elizabeth Tong ◽  
Line Brennhaug Nilsen ◽  
...  

The purpose of this study was to assess the clinical value of a deep learning (DL) model for automatic detection and segmentation of brain metastases, in which a neural network is trained on four distinct MRI sequences using an input-level dropout layer, thus simulating the scenario of missing MRI sequences by training on the full set and all possible subsets of the input data. This retrospective, multicenter study evaluated 165 patients with brain metastases. The proposed input-level dropout (ILD) model was trained on multisequence MRI from 100 patients and validated/tested on 10/55 patients, in which the test set was missing one of the four MRI sequences used for training. The segmentation results were compared with the performance of a state-of-the-art DeepLab V3 model. The MR sequences in the training set included pre-gadolinium and post-gadolinium (Gd) T1-weighted 3D fast spin echo, post-Gd T1-weighted inversion recovery (IR) prepped fast spoiled gradient echo, and 3D fluid-attenuated inversion recovery (FLAIR), whereas the test set did not include the IR prepped image series. The ground truth segmentations were established by experienced neuroradiologists. The results were evaluated using precision, recall, Intersection over Union (IoU) score, Dice score, and receiver operating characteristic (ROC) curve statistics, while the Wilcoxon rank sum test was used to compare the performance of the two neural networks. The area under the ROC curve (AUC), averaged across all test cases, was 0.989 ± 0.029 for the ILD model and 0.989 ± 0.023 for the DeepLab V3 model (p = 0.62). The ILD model showed a significantly higher Dice score (0.795 ± 0.104 vs. 0.774 ± 0.104, p = 0.017) and IoU score (0.561 ± 0.225 vs. 0.492 ± 0.186, p < 0.001) than the DeepLab V3 model, and a significantly lower average false positive rate of 3.6/patient vs. 7.0/patient (p < 0.001) using a 10 mm³ lesion-size limit.
The ILD-model, trained on all possible combinations of four MRI sequences, may facilitate accurate detection and segmentation of brain metastases on a multicenter basis, even when the test cohort is missing input MRI sequences.
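The input-level dropout idea above can be sketched as a training-time transform that zeroes out entire input sequences (channels) at random, so the network learns from the full set and all proper subsets of sequences. This is an illustrative sketch of the concept, not the authors' implementation; the `keep_at_least` safeguard is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def input_level_dropout(x, keep_at_least=1):
    """Randomly drop (zero out) whole input MRI sequences of a sample.

    x: array of shape (sequences, H, W), one channel per MRI sequence.
    A fresh Bernoulli mask is drawn per sample; at least `keep_at_least`
    sequences are always retained so the input is never entirely empty.
    """
    n = x.shape[0]
    mask = rng.integers(0, 2, size=n).astype(bool)
    if mask.sum() < keep_at_least:      # never drop every sequence
        mask[rng.integers(0, n)] = True
    return x * mask[:, None, None]
```

At test time no mask is applied; a missing sequence simply arrives as a zeroed channel, which the network has already seen during training.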

