Deep Learning Techniques for Fatty Liver Using Multi-View Ultrasound Images Scanned by Different Scanners (Preprint)

10.2196/30066 ◽  
2021 ◽  
Author(s):  
Taewoo Kim ◽  
Dong Hyun Lee ◽  
Eun-Kee Park ◽  
Sanghun Choi

BACKGROUND: Fat fraction values obtained from magnetic resonance imaging (MRI) can be used to accurately diagnose fatty liver disease. However, MRI is expensive and cannot be performed on everyone.

OBJECTIVE: In this study, we aimed to develop multi-view ultrasound image-based convolutional deep learning models to detect fatty liver disease and estimate fat fraction values.

METHODS: We extracted 90 ultrasound images of the right intercostal view and 90 of the right intercostal view containing the right renal cortex from 39 subjects with fatty liver (MRI-PDFF ≥ 5%) and 51 normal subjects (MRI-PDFF < 5%), all with MRI proton density fat fraction (MRI-PDFF) values, from Good Gang-An Hospital. We combined the liver and kidney-liver (CLKL) images to train the deep learning models, and developed classification and regression models based on VGG19 to classify fatty liver disease and estimate fat fraction values. We employed data augmentation techniques such as flipping and rotation to prevent the deep learning models from overfitting. We evaluated the models with performance metrics such as accuracy, sensitivity, specificity, and the coefficient of determination (R2).

RESULTS: Demographic characteristics such as age and sex were similar between the two groups (subjects with fatty liver disease and normal subjects). In classification, the model trained on CLKL images achieved 80.1% accuracy, 86.2% precision, and 80.5% specificity in detecting fatty liver disease. In regression, the fat fraction values predicted by the model trained on CLKL images correlated with MRI-PDFF values (R2 = 0.633), indicating that the fat fraction values were moderately well estimated.

CONCLUSIONS: With deep learning techniques and multi-view ultrasound images, it may be possible to substitute deep learning predictions for MRI-PDFF values when detecting fatty liver disease and estimating fat fraction values.
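The flip-and-rotation augmentation mentioned in the METHODS section can be sketched as follows. This is a minimal illustration using numpy array operations; the abstract does not specify which transforms or angles were used, so the particular set of variants here (horizontal/vertical flips plus 90-degree rotations) is an assumption.

```python
import numpy as np

def augment(image: np.ndarray) -> list:
    """Generate flipped and rotated variants of a 2-D ultrasound image
    (assumed transform set; the paper does not list the exact variants)."""
    variants = [image]
    variants.append(np.fliplr(image))   # horizontal flip
    variants.append(np.flipud(image))   # vertical flip
    for k in (1, 2, 3):                 # 90-, 180-, 270-degree rotations
        variants.append(np.rot90(image, k))
    return variants

img = np.arange(16, dtype=np.float32).reshape(4, 4)
aug = augment(img)
print(len(aug))  # 6 variants, including the original
```

Each training image thus yields several label-preserving variants, which is the standard way such augmentation reduces overfitting on small medical-imaging datasets.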


Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5304
Author(s):  
Se-Yeol Rhyou ◽  
Jae-Chern Yoo

Diagnosing liver steatosis is an essential precaution for detecting hepatocirrhosis and liver cancer in their early stages. However, automatic diagnosis of liver steatosis from ultrasound (US) images remains challenging due to poor visual quality from various origins, such as speckle noise and blurring. In this paper, we propose a fully automated liver steatosis prediction model using three deep learning neural networks. As a result, liver steatosis can be automatically detected with high accuracy and precision. First, transfer learning is used to semantically segment the liver and kidney (L-K) on parasagittal US images and crop the L-K area from the original US images. The second neural network also performs semantic segmentation, checking for the presence of a ring that is typically located around the kidney before cropping the L-K area from the original US images. These cropped L-K areas are fed to the final neural network, SteatosisNet, in order to grade the severity of fatty liver disease. The experimental results demonstrate that the proposed model can predict fatty liver disease with a sensitivity of 99.78%, specificity of 100%, PPV of 100%, NPV of 99.83%, and diagnostic accuracy of 99.91%, which is comparable to annotations made by medical experts.
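The three-stage pipeline described above (segment the L-K region, verify the perirenal ring, then grade the crop) can be sketched as a control flow. All three functions below are hypothetical stubs standing in for the trained networks; the fixed central-crop mask and the intensity-based grading rule are illustrative placeholders, not the paper's method.

```python
import numpy as np

def segment_liver_kidney(us_image: np.ndarray) -> np.ndarray:
    """Stub for network 1: return a boolean mask over the L-K region
    (placeholder: a fixed central crop instead of a learned segmentation)."""
    mask = np.zeros(us_image.shape, dtype=bool)
    h, w = us_image.shape
    mask[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4] = True
    return mask

def ring_present(us_image: np.ndarray, mask: np.ndarray) -> bool:
    """Stub for network 2: verify the kidney ring before trusting the crop
    (placeholder: accepts any non-empty mask)."""
    return bool(mask.any())

def grade_steatosis(crop: np.ndarray) -> int:
    """Stub for SteatosisNet: map mean echo intensity to a grade 0-3
    (placeholder rule; the real network is a trained classifier)."""
    return int(min(3, crop.mean() // 64))

def predict(us_image: np.ndarray):
    mask = segment_liver_kidney(us_image)
    if not ring_present(us_image, mask):
        return None  # reject frames without a verified kidney ring
    ys, xs = np.where(mask)
    crop = us_image[ys.min() : ys.max() + 1, xs.min() : xs.max() + 1]
    return grade_steatosis(crop)

print(predict(np.full((64, 64), 128.0)))  # grade 2 under the placeholder rule
```

The design point is that the second network acts as a gate: only crops anatomically validated by the ring check reach the severity classifier, which is one way to keep the final grading robust to segmentation failures.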


Author(s):  
Manish Balamurugan ◽  
Kathryn Chung ◽  
Venkat Kuppoor ◽  
Smruti Mahapatra ◽  
Aliaksei Pustavoitau ◽  
...  

Abstract: In this study, we present USDL, a novel model that employs deep learning algorithms to reconstruct and enhance corrupted ultrasound images. We utilize an unsupervised neural network called an autoencoder, which works by compressing its input into a latent-space representation and then reconstructing the output from this representation. We trained our model on a dataset that comprises 15,700 in vivo images of the neck, wrist, elbow, and knee vasculature, and compared the quality of the generated images using the structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR). In closely simulated conditions, the architecture exhibited an average reconstruction accuracy of 90%, as indicated by our SSIM. Our study demonstrates that USDL outperforms state-of-the-art image enhancement and reconstruction techniques in both image quality and computational complexity, while maintaining architectural efficiency.
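The two quality metrics named above, PSNR and SSIM, have standard closed forms that can be computed directly. The sketch below implements PSNR and a single-window ("global") SSIM in numpy; note that reference SSIM implementations use a sliding Gaussian window rather than the whole-image statistics used here, so this is a simplification for illustration.

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, data_range: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(x: np.ndarray, y: np.ndarray, data_range: float = 255.0) -> float:
    """Single-window SSIM (whole-image means/variances; reference
    implementations slide a local window instead)."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * data_range) ** 2  # standard stabilizing constants
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

For identical images, SSIM evaluates to 1.0 and PSNR diverges to infinity; degraded reconstructions score lower on both, which is how an autoencoder's output quality can be ranked against other enhancement techniques.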


2020 ◽  
Author(s):  
Robert Arntfield ◽  
Blake VanBerlo ◽  
Thamer Alaifan ◽  
Nathan Phelps ◽  
Matt White ◽  
...  

Abstract

Objectives: Lung ultrasound (LUS) is a portable, low-cost respiratory imaging tool, but it is challenged by user dependence and a lack of diagnostic specificity. It is unknown whether the advantages of LUS implementation could be paired with deep learning techniques to match or exceed human-level diagnostic specificity among similar-appearing, pathological LUS images.

Design: A convolutional neural network (CNN) was trained on LUS images with B lines of different etiologies. CNN diagnostic performance, validated using a 10% data holdback set, was compared with that of surveyed LUS-competent physicians.

Setting: Two tertiary Canadian hospitals.

Participants: 600 LUS videos (121,381 frames) of B lines from 243 distinct patients with either 1) COVID-19, 2) non-COVID acute respiratory distress syndrome (NCOVID), or 3) hydrostatic pulmonary edema (HPE).

Results: On the independent dataset, the trained CNN was able to discriminate between the COVID (AUC 1.0), NCOVID (AUC 0.934), and HPE (AUC 1.0) pathologies. This was significantly better than physician performance (AUCs of 0.697, 0.704, and 0.967 for the COVID, NCOVID, and HPE classes, respectively; p < 0.01).

Conclusions: A deep learning model can distinguish similar-appearing LUS pathologies, including COVID-19, that cannot be distinguished by humans. The performance gap between humans and the model suggests that subvisible biomarkers may exist within ultrasound images, and multi-center research is merited.
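The per-class AUCs reported above are one-vs-rest areas under the ROC curve. These can be computed without an ROC sweep via the Mann-Whitney U (rank-sum) identity: the AUC equals the probability that a randomly chosen positive example scores higher than a randomly chosen negative one. The sketch below assumes per-frame class scores and integer labels; the function name and data layout are illustrative, not from the paper.

```python
import numpy as np

def auc_ovr(scores: np.ndarray, labels: np.ndarray, positive: int) -> float:
    """One-vs-rest AUC via the Mann-Whitney U identity:
    the fraction of (positive, negative) pairs ranked correctly,
    counting ties as half-correct."""
    pos = scores[labels == positive]
    neg = scores[labels != positive]
    gt = (pos[:, None] > neg[None, :]).sum()   # correctly ranked pairs
    eq = (pos[:, None] == neg[None, :]).sum()  # tied pairs
    return (gt + 0.5 * eq) / (len(pos) * len(neg))

# Toy example: scores perfectly separate class 1 from class 0.
scores = np.array([0.9, 0.8, 0.2, 0.1])
labels = np.array([1, 1, 0, 0])
print(auc_ovr(scores, labels, positive=1))  # 1.0
```

An AUC of 1.0, as reported for the COVID and HPE classes, corresponds to perfect ranking: every positive frame scored above every negative frame on the holdback set.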

