Automatic Segmentation of Pancreatic Tumors Using Deep Learning on a Video Image of Contrast-Enhanced Endoscopic Ultrasound

2021 ◽  
Vol 10 (16) ◽  
pp. 3589
Author(s):  
Yuhei Iwasa ◽  
Takuji Iwashita ◽  
Yuji Takeuchi ◽  
Hironao Ichikawa ◽  
Naoki Mita ◽  
...  

Background: Contrast-enhanced endoscopic ultrasound (CE-EUS) is useful for the differentiation of pancreatic tumors. Using deep learning for the segmentation and classification of pancreatic tumors might further improve the diagnostic capability of CE-EUS. Aims: The aim of this study was to evaluate the capability of deep learning for the automatic segmentation of pancreatic tumors on CE-EUS video images and possible factors affecting the automatic segmentation. Methods: This retrospective study included 100 patients who underwent CE-EUS for pancreatic tumors. The CE-EUS video images were converted from the originals to 90-second segments at six frames per second. Manual segmentation of pancreatic tumors from B-mode images was performed as ground truth. Automatic segmentation was performed using U-Net with 100 epochs and was evaluated with 4-fold cross-validation. The degree of respiratory movement (RM) and of tumor boundary (TB) clarity were each graded on a three-degree scale per patient and evaluated as possible factors affecting the segmentation. The concordance rate was calculated using the intersection over union (IoU). Results: The median IoU of all cases was 0.77. The median IoUs in TB-1 (boundary clear all around), TB-2, and TB-3 (more than half of the boundary unclear) were 0.80, 0.76, and 0.69, respectively. The IoU for TB-1 was significantly higher than that for TB-3 (p < 0.01). However, there was no significant difference between the degrees of RM. Conclusion: Automatic segmentation of pancreatic tumors using U-Net on CE-EUS video images showed a decent concordance rate. The concordance rate was lowered by an unclear TB but was not affected by RM.
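The concordance metric reported above is the intersection over union (IoU) between the predicted and manually drawn tumor masks. A minimal sketch of that metric on small hypothetical binary masks (not study data):

```python
# IoU between a predicted mask and a ground-truth mask: the number of
# pixels both masks mark as tumor, divided by the pixels either marks.
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """IoU of two boolean masks; defined as 1.0 when both masks are empty."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0
    return np.logical_and(pred, truth).sum() / union

pred  = np.array([[1, 1, 0],
                  [1, 0, 0]])
truth = np.array([[1, 1, 0],
                  [0, 1, 0]])
print(iou(pred, truth))  # 2 overlapping pixels / 4 pixels in the union -> 0.5
```

A median IoU of 0.77, as in the study, means the typical predicted mask overlaps about three-quarters of the combined area.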

2019 ◽  
Vol 2019 ◽  
pp. 1-7 ◽  
Author(s):  
Chen Huang ◽  
Junru Tian ◽  
Chenglang Yuan ◽  
Ping Zeng ◽  
Xueping He ◽  
...  

Objective. Deep vein thrombosis (DVT) is a disease caused by abnormal blood clots in deep veins. Accurate segmentation of DVT is important to facilitate diagnosis and treatment. In the current study, we proposed a fully automatic method of DVT delineation based on deep learning (DL) and contrast-enhanced magnetic resonance imaging (CE-MRI) images. Methods. 58 patients (25 males; 28–96 years old) with newly diagnosed lower extremity DVT were recruited. CE-MRI was acquired on a 1.5 T system. The ground truth (GT) of DVT lesions was manually contoured. A DL network with an encoder-decoder architecture was designed for DVT segmentation. An 8-fold cross-validation strategy was applied for training and testing. The Dice similarity coefficient (DSC) was adopted to evaluate the network's performance. Results. It took about 1.5 s for our CNN model to perform the segmentation task on a single MRI slice. The mean DSC of the 58 patients was 0.74 ± 0.17 and the median DSC was 0.79. Compared with other DL models, our CNN model achieved better performance in DVT segmentation (0.74 ± 0.17 versus 0.66 ± 0.15, 0.55 ± 0.20, and 0.57 ± 0.22). Conclusion. Our proposed DL method was effective and fast for fully automatic segmentation of lower extremity DVT.
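This abstract scores segmentations with the Dice similarity coefficient (DSC) rather than IoU. A hedged sketch of the DSC on illustrative masks (not patient data):

```python
# DSC = 2|A∩B| / (|A| + |B|): twice the overlap divided by the total
# number of foreground pixels in both masks.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient; defined as 1.0 when both masks are empty."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / total

pred  = np.array([1, 1, 1, 0])
truth = np.array([1, 1, 0, 0])
print(dice(pred, truth))  # 2 * 2 overlap / (3 + 2) foreground -> 0.8
```

DSC weights the overlap against the two mask sizes, so for the same prediction it is always at least as large as the IoU, which is why papers report one or the other explicitly.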


2020 ◽  
Vol 11 (5) ◽  
pp. 576-586
Author(s):  
Alice Fantazzini ◽  
Mario Esposito ◽  
Alice Finotello ◽  
Ferdinando Auricchio ◽  
Bianca Pane ◽  
...  

Abstract Purpose The quantitative analysis of contrast-enhanced Computed Tomography Angiography (CTA) is essential to assess aortic anatomy, identify pathologies, and perform preoperative planning in vascular surgery. To overcome the limitations of manual and semi-automatic segmentation tools, we apply a deep learning-based pipeline to automatically segment the aortic lumen in CTA scans, from the ascending aorta to the iliac arteries, accounting for 3D spatial coherence. Methods A first convolutional neural network (CNN) is used to coarsely segment and locate the aorta in the whole sub-sampled CTA volume; three single-view CNNs are then used to segment the aortic lumen from the axial, sagittal, and coronal planes at higher resolution. Finally, the predictions of the three orthogonal networks are integrated to obtain a spatially coherent segmentation. Results The coarse segmentation performed to identify the aortic lumen achieved a Dice coefficient (DSC) of 0.92 ± 0.01. The single-view axial, sagittal, and coronal CNNs provided DSCs of 0.92 ± 0.02, 0.92 ± 0.04, and 0.91 ± 0.02, respectively. Multi-view integration provided a DSC of 0.93 ± 0.02 and an average surface distance of 0.80 ± 0.26 mm on a test set of 10 CTA scans. Generating the ground truth dataset took about 150 h, and the overall training process took 18 h. In the prediction phase, the pipeline takes around 25 ± 1 s to produce the final segmentation. Conclusion The achieved results show that the proposed pipeline can effectively localize and segment the aortic lumen in subjects with aneurysm.
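The abstract does not spell out how the three orthogonal predictions are integrated; one common and hypothetical fusion rule, shown purely as an illustration, is to average the per-voxel probabilities from the axial, sagittal, and coronal networks and then threshold:

```python
# Illustrative multi-view fusion (NOT the paper's stated rule): average
# three aligned probability volumes voxel-wise, then binarize.
import numpy as np

def fuse_views(axial, sagittal, coronal, threshold=0.5):
    """Average three probability arrays of equal shape, then threshold."""
    mean = (axial + sagittal + coronal) / 3.0
    return mean >= threshold

# Three toy probability vectors standing in for aligned 3D volumes.
ax  = np.array([0.9, 0.2, 0.6])
sag = np.array([0.8, 0.1, 0.4])
cor = np.array([0.7, 0.3, 0.2])
print(fuse_views(ax, sag, cor))  # only the first voxel is kept
```

Averaging suppresses voxels that only one view marks confidently, which is one way such integration can enforce 3D spatial coherence.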


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Jason Kugelman ◽  
David Alonso-Caneiro ◽  
Scott A. Read ◽  
Jared Hamwood ◽  
Stephen J. Vincent ◽  
...  

Abstract The analysis of the choroid in the eye is crucial for our understanding of a range of ocular diseases and physiological processes. Optical coherence tomography (OCT) imaging provides the ability to capture highly detailed cross-sectional images of the choroid, yet only a very limited number of commercial OCT instruments provide methods for automatic segmentation of choroidal tissue. Manual annotation of the choroidal boundaries is often performed, but this is impractical due to the lengthy time taken to analyse large volumes of images. Therefore, there is a pressing need for reliable and accurate methods to automatically segment choroidal tissue boundaries in OCT images. In this work, a variety of patch-based and fully-convolutional deep learning methods are proposed to accurately determine the location of the choroidal boundaries of interest. The effect of network architecture, patch size, and contrast enhancement methods was tested to better understand the optimal architecture and approach to maximize performance. The results are compared with manual boundary segmentation used as a ground truth, as well as with a standard image analysis technique. Results of total retinal layer segmentation are also presented for comparison purposes. The findings presented here demonstrate the benefit of deep learning methods for chorio-retinal boundary segmentation in OCT images.
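The patch-based methods compared above classify small fixed-size windows extracted around candidate boundary pixels. A minimal sketch of that extraction step on a hypothetical B-scan (patch size and padding choice are assumptions, not details from the paper):

```python
# Extract a size x size patch centred on a pixel, zero-padding so that
# patches at the image border keep the same shape.
import numpy as np

def extract_patch(image, row, col, size=3):
    """Return a size x size patch centred at (row, col) of a 2D image."""
    half = size // 2
    padded = np.pad(image, half, mode="constant")
    # After padding by `half`, the original (row, col) sits at
    # (row + half, col + half), so this slice is centred on it.
    return padded[row:row + size, col:col + size]

scan = np.arange(16).reshape(4, 4)   # toy 4x4 "B-scan"
patch = extract_patch(scan, 0, 0)    # corner patch, zero-padded
print(patch.shape)                   # (3, 3)
```

The trade-off the paper probes with patch size is visible here: larger patches give each classification more context but cost more computation per boundary pixel.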


Diagnostics ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. 316
Author(s):  
Tatsuya Ishii ◽  
Akio Katanuma ◽  
Haruka Toyonaga ◽  
Koki Chikugo ◽  
Hiroshi Nasuno ◽  
...  

Although pancreatic neuroendocrine neoplasms (PNENs) are relatively rare tumors, their number is increasing with advances in diagnostic imaging modalities. Even small lesions that are difficult to detect using computed tomography or magnetic resonance imaging can now be detected with endoscopic ultrasound (EUS). Contrast-enhanced EUS is useful: by evaluating the vascularity of tumors, it enables not only diagnosis but also assessment of malignancy. Pathological diagnosis using EUS with fine-needle aspiration (EUS-FNA) is useful when diagnostic imaging is difficult. EUS-FNA can also be used to evaluate the grade of malignancy. Pooling the data of the studies that compared PNEN grading between EUS-FNA samples and surgical specimens showed a concordance rate of 77.5% (κ-statistic = 0.65, 95% confidence interval = 0.59–0.71, p < 0.01). Stratified analysis for small tumor size (2 cm) showed that the concordance rate was 84.5% and the kappa correlation index was 0.59 (95% confidence interval = 0.43–0.74, p < 0.01). The evolution of ultrasound imaging technologies such as contrast enhancement and elastography, of the artificial intelligence that analyzes them, of needles, and of genetic analysis will further advance the diagnosis and treatment of PNENs in the future.
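The pooled analysis above reports both a raw concordance rate and a κ-statistic, which corrects the raw agreement for agreement expected by chance. A minimal sketch of both quantities on a hypothetical 2x2 grading confusion matrix (EUS-FNA grade vs surgical grade; the counts are invented for illustration):

```python
# Raw concordance = diagonal / total; Cohen's kappa rescales it by the
# chance agreement implied by the row and column marginals.
import numpy as np

def concordance_and_kappa(confusion: np.ndarray):
    n = confusion.sum()
    observed = np.trace(confusion) / n                    # raw concordance rate
    expected = (confusion.sum(0) * confusion.sum(1)).sum() / n**2
    kappa = (observed - expected) / (1 - expected)        # Cohen's kappa
    return observed, kappa

cm = np.array([[40, 5],    # rows: FNA grade, columns: surgical grade
               [5, 50]])
obs, kap = concordance_and_kappa(cm)
print(round(obs, 2), round(kap, 2))  # ~0.9 concordance, ~0.8 kappa
```

This illustrates why, as in the stratified analysis above, a higher concordance rate (84.5%) can coexist with a lower kappa (0.59): unbalanced marginals raise the chance-expected agreement.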


Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 245
Author(s):  
Seok Oh ◽  
Young-Jae Kim ◽  
Young-Taek Park ◽  
Kwang-Gi Kim

The automatic segmentation of the pancreatic cyst lesion (PCL) is essential for the automated diagnosis of pancreatic cyst lesions on endoscopic ultrasonography (EUS) images. In this study, we proposed a deep-learning approach for PCL segmentation on EUS images. We employed the Attention U-Net model for automatic PCL segmentation. The Attention U-Net was compared with the Basic U-Net, Residual U-Net, and U-Net++ models. The Attention U-Net showed better Dice similarity coefficient (DSC) and intersection over union (IoU) scores than the other models on the internal test. Although the Basic U-Net showed higher DSC and IoU scores on the external test than the Attention U-Net, the difference was not statistically significant. On the internal test of the cross-over study, the Attention U-Net showed the highest DSC and IoU scores. However, there was no significant difference between the Attention U-Net and Residual U-Net or between the Attention U-Net and U-Net++. On the external test of the cross-over study, no model showed a significant difference from the others. To the best of our knowledge, this is the first study implementing segmentation of PCL on EUS images using a deep-learning approach. Our experimental results show that a deep-learning approach can be applied successfully for PCL segmentation on EUS images.
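The Attention U-Net used here differs from the Basic U-Net by placing attention gates on the skip connections, so the decoder can down-weight irrelevant encoder features. A toy numpy sketch of an additive attention gate; real gates use learned 1x1 convolutions and resampling, whereas the weights here are random and the dense formulation is only illustrative:

```python
# Additive attention gate: alpha = sigmoid(psi . relu(Wx*x + Wg*g)),
# then the skip features x are rescaled position-wise by alpha.
import numpy as np

rng = np.random.default_rng(0)

def attention_gate(skip, gate, w_x, w_g, psi):
    """Return skip features rescaled by attention coefficients in (0, 1)."""
    q = np.maximum(skip @ w_x + gate @ w_g, 0.0)    # additive attention + ReLU
    alpha = 1.0 / (1.0 + np.exp(-(q @ psi)))        # one weight per position
    return skip * alpha[..., None]

skip = rng.normal(size=(4, 8))   # encoder features at one scale (4 positions)
gate = rng.normal(size=(4, 8))   # coarser decoder (gating) features
w_x = rng.normal(size=(8, 8)); w_g = rng.normal(size=(8, 8))
psi = rng.normal(size=(8,))
out = attention_gate(skip, gate, w_x, w_g, psi)
print(out.shape)                 # same shape as skip: (4, 8)
```

Because alpha lies in (0, 1), the gate can only attenuate skip features, never amplify them, which is how it suppresses background regions before the decoder merge.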


2018 ◽  
Vol 37 (6) ◽  
pp. 545-557 ◽  
Author(s):  
Xavier Roynard ◽  
Jean-Emmanuel Deschaud ◽  
François Goulette

This paper introduces a new urban point cloud dataset for automatic segmentation and classification acquired by mobile laser scanning (MLS). We describe how the dataset is obtained, from acquisition to post-processing and labeling. The dataset can be used to train pointwise classification algorithms; moreover, because great attention was paid to separating the individual objects, it can also be used to train object detection and segmentation. The dataset consists of around [Formula: see text] of MLS point cloud acquired in two cities. The number of points and the range of classes mean that it can be used to train deep-learning methods. In addition, we show some results of automatic segmentation and classification. The dataset is available at: http://caor-mines-paristech.fr/fr/paris-lille-3d-dataset/ .
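Datasets of this kind pair each 3D point with a class label, and a first step before training pointwise classifiers is inspecting the class balance. A minimal sketch on a hypothetical labelled point array (columns x, y, z, label; the labels and counts are invented):

```python
# Count how many points each class contributes in an (N, 4) array whose
# last column is the integer class label.
import numpy as np

points = np.array([
    [0.0, 0.0, 0.0, 1],   # e.g. ground
    [0.1, 0.2, 2.5, 2],   # e.g. building
    [0.3, 0.1, 2.6, 2],   # e.g. building
    [1.0, 1.0, 0.1, 1],   # e.g. ground
    [0.5, 0.5, 1.2, 3],   # e.g. car
])
labels, counts = np.unique(points[:, 3].astype(int), return_counts=True)
print(dict(zip(labels.tolist(), counts.tolist())))  # {1: 2, 2: 2, 3: 1}
```

Urban MLS classes are typically highly imbalanced (ground and facades dominate), so such counts usually inform class weighting or sampling during training.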


2008 ◽  
Vol 6 (5) ◽  
pp. 590-597.e1 ◽  
Author(s):  
Christoph F. Dietrich ◽  
Andre Ignee ◽  
Barbara Braden ◽  
Ana Paula Barreiros ◽  
Michaela Ott ◽  
...  

Stroke ◽  
2020 ◽  
Vol 51 (Suppl_1) ◽  
Author(s):  
Leonard L Yeo ◽  
Melih Engin ◽  
Robin Lange ◽  
Sethu R Boopathy ◽  
Yang Cunli ◽  
...  

Purpose: Time-of-Flight (TOF) MRA is commonly used for grading cerebrovascular diseases. Analysis of cerebral arteries in TOF MRA is a challenging and time-consuming task that would benefit from automation. Established image processing methods for automatic segmentation of cerebral arteries suffer from common artefacts such as kissing vessels (when two nearby vessels touch) and signal intensity drop (especially in the presence of pathology). Artificial intelligence models are promising candidates for resolving such artefacts. Here, we propose and assess the performance of a deep learning model for automatic segmentation of cerebral arteries in TOF MRA that is robust to common MRI artefacts. Methods: A 3D convolutional neural network (CNN) is proposed for automatic segmentation of intracranial arteries in TOF MRA. The neural network is trained with a custom loss function and residual blocks to penalize the occurrence of common artefacts such as kissing vessels. The model is trained and tested on a dataset of 82 subjects (50 healthy volunteers and 32 patients with intracranial stenosis) following a 3-fold cross-validation method, i.e. three models are trained, each blind to one-third of the data during training to avoid bias. Manual segmentations of the arteries by an expert reader were used as ground truth for training and testing the model. Results: The proposed deep learning model achieved excellent accuracy against the ground truth (Dice score 0.89). It outperformed a state-of-the-art neural network for image segmentation (3D U-Net, Dice score 0.85) and resulted in considerably fewer occurrences of artefacts such as kissing vessels (segmentation artefacts in 9% of cases for our model vs 16% for 3D U-Net). The proposed deep learning model was also fast, taking only 2 seconds to produce a 3D model of the arteries on a laptop with a dedicated GPU.
Conclusion: The proposed deep learning model accurately segments intracranial arteries in TOF MRA and, thanks to its custom loss function, is robust to common MR imaging artefacts. The model can potentially increase the accuracy and speed of grading cerebrovascular diseases.
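The 3-fold cross-validation used above trains three models, each held blind to one-third of the subjects, so every subject is evaluated exactly once by a model that never saw it. A minimal sketch of that split (the subject IDs are hypothetical):

```python
# Partition subjects into k folds; for each fold, train on the other
# k-1 folds and test on the held-out one.
def k_fold_splits(subjects, k=3):
    """Yield (train, test) lists; each subject appears in exactly one test set."""
    folds = [subjects[i::k] for i in range(k)]          # round-robin assignment
    for i in range(k):
        test = folds[i]
        train = [s for j, fold in enumerate(folds) if j != i for s in fold]
        yield train, test

subjects = list(range(9))                               # stand-in for 82 subjects
for train, test in k_fold_splits(subjects):
    print(len(train), len(test))                        # 6 3, three times
```

In practice such splits are usually stratified so that patients and healthy volunteers are balanced across folds; this sketch omits that refinement.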


PLoS ONE ◽  
2021 ◽  
Vol 16 (9) ◽  
pp. e0257371
Author(s):  
Daisuke Nishiyama ◽  
Hiroshi Iwasaki ◽  
Takaya Taniguchi ◽  
Daisuke Fukui ◽  
Manabu Yamanaka ◽  
...  

Accurate gluteus medius (GMd) volume evaluation may aid in the analysis of muscular atrophy states and help gain an improved understanding of patient recovery via rehabilitation. However, the segmentation of muscle regions in GMd images for cubic muscle volume assessment is time-consuming and labor-intensive. This study automated GMd-region segmentation from the computed tomography (CT) images of patients diagnosed with hip osteoarthritis using deep learning and evaluated the segmentation accuracy. To this end, 5250 augmented pairs of training data were obtained from five participants, and a conditional generative adversarial network was used to learn the relationships between the image pairs. Using the held-out test datasets, the results of automatic segmentation with the trained deep learning model were compared to those of manual segmentation in terms of the dice similarity coefficient (DSC), volume similarity (VS), and shape similarity (MS). The average DSC values for automatic and manual segmentations were 0.748 and 0.812, respectively, with a significant difference (p < 0.0001); the average VS values were 0.247 and 0.203, respectively, with no significant difference (p = 0.069); and the average MS values were 1.394 and 1.156, respectively, with no significant difference (p = 0.308). The GMd volumes obtained by automatic and manual segmentation were 246.2 cm3 and 282.9 cm3, respectively. The noninferiority of the DSC obtained by automatic segmentation was verified against that obtained by manual segmentation. Accordingly, the proposed GAN-based automatic GMd-segmentation technique is confirmed to be noninferior to manual segmentation. Therefore, the findings of this research confirm that the proposed method not only reduces time and effort but also facilitates accurate assessment of the cubic muscle volume.
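The muscle volumes reported above are derived from segmented CT voxels: volume is simply the foreground voxel count times the physical volume of one voxel. A hedged sketch with a hypothetical voxel spacing (the study's actual spacing is not given here):

```python
# Convert a binary segmentation mask to a physical volume in cm^3,
# given the voxel spacing in millimetres along (z, y, x).
import numpy as np

def mask_volume_cm3(mask: np.ndarray, spacing_mm=(1.0, 0.7, 0.7)):
    """Volume of the foreground of a 3D binary mask, in cm^3."""
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0   # mm^3 -> cm^3

mask = np.zeros((10, 10, 10), dtype=bool)
mask[2:8, 2:8, 2:8] = True                 # a 6x6x6 block = 216 voxels
print(mask_volume_cm3(mask))               # roughly 0.106 cm^3 at this spacing
```

This direct dependence on voxel counts is why a boundary error of a few voxels per slice can produce volume differences like the 246.2 vs 282.9 cm3 gap reported above.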


Author(s):  
Abramo Agosti ◽  
Enea Shaqiri ◽  
Matteo Paoletti ◽  
Francesca Solazzo ◽  
Niels Bergsland ◽  
...  

Abstract Objective In this study we address the automatic segmentation of selected muscles of the thigh and leg through a supervised deep learning approach. Material and methods The application of quantitative imaging in neuromuscular diseases requires regions of interest (ROIs) drawn on muscles to extract quantitative parameters. Until now, manual drawing of ROIs has been considered the gold standard in clinical studies, with no clear and universally accepted standardized procedure for segmentation. Several automatic methods, based mainly on machine learning and deep learning algorithms, have recently been proposed to discriminate between skeletal muscle, bone, and subcutaneous and intermuscular adipose tissue. We develop a supervised deep learning approach based on a unified framework for ROI segmentation. Results The proposed network generates segmentation maps with high accuracy, with Dice scores ranging from 0.89 to 0.95 with respect to manually segmented "ground truth" labelled images, and shows high average performance in both mild and severe cases of disease involvement (i.e., extent of fatty replacement). Discussion The presented results are promising and potentially translatable to different skeletal muscle groups and to other MRI sequences with different contrast and resolution.

