Comparison of different deep learning approaches for parotid gland segmentation from CT images

Author(s):  
Annika Hänsch ◽  
Michael Schwier ◽  
Tomasz Morgas ◽  
Jan Klein ◽  
Horst K. Hahn ◽  
...  
2018 ◽  
Vol 6 (01) ◽  
pp. 1 ◽  

Cancers ◽  
2021 ◽  
Vol 14 (1) ◽  
pp. 40
Author(s):  
Gyu Sang Yoo ◽  
Huan Minh Luu ◽  
Heejung Kim ◽  
Won Park ◽  
Hongryull Pyo ◽  
...  

We aimed to evaluate and compare the quality of synthetic computed tomography (sCT) images generated by various deep-learning methods for volumetric modulated arc therapy (VMAT) planning in prostate cancer. Simulation computed tomography (CT) and T2-weighted simulation magnetic resonance images from 113 patients were used for sCT generation by three deep-learning approaches: generative adversarial network (GAN), cycle-consistent GAN (CycGAN), and reference-guided CycGAN (RgGAN), a new model that further adjusts the sCTs generated by CycGAN using the available paired images. VMAT plans from the original simulation CT images were recalculated on the sCTs, and the dosimetric differences were evaluated. For soft tissue, a significant difference in mean Hounsfield units (HU) was observed between the original CT images and only the sCTs from GAN (p = 0.03). The mean relative dose differences for planning target volumes and organs at risk were within 2% among the sCTs from the three deep-learning approaches. The differences from the original CT in the dosimetric parameters D98% and D95% were lowest for the sCT from RgGAN. In conclusion, HU conservation for soft tissue was poorest for GAN, and there was a trend that the sCT generated by RgGAN showed the best dosimetric conservation of D98% and D95% among the three methods.
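As a minimal illustration of the soft-tissue HU comparison described above, one can compare the mean Hounsfield units inside a soft-tissue mask between the original CT and a synthetic CT. The values and mask below are toy numbers for illustration, not the study's data:

```python
# Toy sketch (hypothetical values, not study data): mean soft-tissue HU
# difference between an original CT and a synthetic CT.

def mean_hu(image, mask):
    """Mean Hounsfield units over voxels where the mask is True."""
    vals = [v for v, m in zip(image, mask) if m]
    return sum(vals) / len(vals)

ct  = [40, 55, 60, -1000, 35]                  # flattened HU values, original CT
sct = [45, 50, 58, -995, 38]                   # synthetic CT (hypothetical output)
soft_tissue = [True, True, True, False, True]  # mask excludes the air voxel

hu_diff = mean_hu(sct, soft_tissue) - mean_hu(ct, soft_tissue)
print(f"mean soft-tissue HU difference: {hu_diff:.2f}")
```

In the study itself this comparison was done per patient and tested for significance; the sketch only shows the masked-mean computation.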


2019 ◽  
Vol 37 (15_suppl) ◽  
pp. e14592-e14592
Author(s):  
Junshui Ma ◽  
Rongjie Liu ◽  
Gregory V. Goldmacher ◽  
Richard Baumgartner

e14592 Background: Radiomic features derived from CT scans have shown promise in predicting treatment response (Sun et al 2018, and others). We carried out a proof-of-concept study to investigate the use of CT images to predict lesion-level response. Methods: CT images from Merck studies KEYNOTE-010 (NCT01905657) and KEYNOTE-024 (NCT02142738) were used. Data from each study were evaluated separately and split into training (80%) and validation (20%) sets. A lesion was classified as “shrinking” if a ≥30% size reduction from baseline was seen on any future scan. There were 2004 (613 shrinking vs. 1391 non-shrinking) and 588 (311 vs. 277) lesions in KN10 and KN24, respectively. A total of 130 radiomic features were extracted and fed to a random forest model to predict lesion response. In addition, end-to-end deep learning was used, which predicts the response directly from ROIs of the CT images. Models were trained in two ways: (1) using the pre-treatment baseline (BL) image only or (2) using both BL and the first post-treatment image (V1) as predictors. Finally, to evaluate the predictive power without relying on initial lesion size, size information was omitted from the CT images. Results: Results from KN10 and KN24 are summarized in the Table. Conclusions: The results suggest that the BL CT images alone have little power to predict lesion response, while BL and the first post-baseline image exhibit high predictive power. Although a substantial part of the predictive power can be attributed to change in ROI size, predictive power does exist in other aspects of the CT images. Overall, the radiomic signature followed by random forest produced predictions similar to, if not better than, the deep learning approach. [Table: see text]
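The lesion-labeling rule from the Methods (“shrinking” if a ≥30% size reduction from baseline is seen on any future scan) can be written down directly. The function name and the toy measurements are illustrative, not from the study:

```python
def is_shrinking(baseline_size, followup_sizes, threshold=0.30):
    """Classify a lesion as 'shrinking' if any follow-up scan shows at
    least `threshold` fractional size reduction from baseline."""
    return any((baseline_size - s) / baseline_size >= threshold
               for s in followup_sizes)

# Toy examples: lesion diameters (mm) at baseline and on follow-up scans.
print(is_shrinking(10.0, [9.0, 6.9]))  # 31% reduction at the last scan -> True
print(is_shrinking(10.0, [8.0, 7.5]))  # best reduction is only 25% -> False
```

Note the rule looks across all future scans, so a lesion that shrinks late still counts as shrinking even if early follow-ups show little change.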


2021 ◽  
Author(s):  
Mohamed A. Naser ◽  
Kareem A. Wahid ◽  
Abdallah Sherif Radwan Mohamed ◽  
Moamen Abobakr Abdelaal ◽  
Renjie He ◽  
...  

Determining progression-free survival (PFS) for head and neck squamous cell carcinoma (HNSCC) patients is a challenging but pertinent task that could help stratify patients for improved overall outcomes. PET/CT images provide a rich source of anatomical and metabolic data for potential clinical biomarkers that would inform treatment decisions and could help improve PFS. In this study, we participate in the 2021 HECKTOR Challenge to predict PFS in a large dataset of HNSCC PET/CT images using deep learning approaches. We develop a series of deep learning models based on the DenseNet architecture using a negative log-likelihood loss function that utilizes PET/CT images and clinical data as separate input channels to predict PFS in days. Internal model validation based on 10-fold cross-validation using the training data (N=224) yielded C-index values of up to 0.622 without and 0.842 with censoring status considered in the C-index computation. We then implemented model ensembling approaches based on the training data cross-validation folds to predict the PFS of the test set patients (N=101). External validation on the test set for the best ensembling method yielded a C-index value of 0.694. Our results are a promising example of how deep learning approaches can effectively utilize imaging and clinical data for medical outcome prediction in HNSCC, but further work in optimizing these processes is needed.
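The evaluation metric used above, the concordance index with censoring status taken into account, can be sketched in plain Python. This is a minimal Harrell's C-index for illustration (variable names are ours, not from the study's code):

```python
def c_index(times, events, risks):
    """Harrell's concordance index. A pair (i, j) with times[i] < times[j]
    is comparable only if patient i had an observed event (events[i] == 1);
    a censored patient cannot anchor a pair, since their true event time is
    unknown. Concordance means the patient with the shorter survival time
    was assigned the higher predicted risk."""
    concordant, comparable = 0.0, 0
    for i in range(len(times)):
        for j in range(len(times)):
            if times[i] < times[j] and events[i] == 1:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5   # ties in predicted risk count half
    return concordant / comparable

# Three patients, all with observed events; risks perfectly ordered.
print(c_index([2, 4, 6], [1, 1, 1], [0.9, 0.5, 0.1]))  # 1.0
```

Ignoring censoring (treating all patients as having events) changes which pairs are comparable, which is why the abstract reports two different C-index values on the same folds.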


Author(s):  
K. A. Saneera Hemantha Kulathilake ◽  
Nor Aniza Abdullah ◽  
Aznul Qalid Md Sabri ◽  
Khin Wee Lai

Computed Tomography (CT) is a widely used medical imaging modality in clinical medicine because it produces excellent visualizations of fine structural details of the human body. In clinical procedures, it is desirable to acquire CT scans with minimal X-ray flux to prevent patients from being exposed to high radiation doses. However, these Low-Dose CT (LDCT) scanning protocols compromise the signal-to-noise ratio of the CT images because of noise and artifacts over the image space. Thus, various restoration methods have been published over the past three decades to produce high-quality CT images from LDCT images. More recently, as opposed to conventional LDCT restoration methods, Deep Learning (DL)-based LDCT restoration approaches have become common due to their data-driven nature, high performance, and fast execution. This study therefore aims to elaborate on the role of DL techniques in LDCT restoration and to critically review the applications of DL-based approaches for LDCT restoration. To achieve this aim, different aspects of DL-based LDCT restoration applications were analyzed, including DL architectures, performance gains, functional requirements, and the diversity of objective functions. The outcome of the study highlights existing limitations and future directions for DL-based LDCT restoration. To the best of our knowledge, there have been no previous reviews that specifically address this topic.


2019 ◽  
Vol 2019 (1) ◽  
pp. 360-368
Author(s):  
Mekides Assefa Abebe ◽  
Jon Yngve Hardeberg

Different whiteboard image degradations significantly reduce the legibility of pen-stroke content as well as the overall quality of the images. Consequently, researchers have addressed the problem with various image enhancement techniques. Most state-of-the-art approaches apply common image processing techniques such as background/foreground segmentation, text extraction, contrast and color enhancement, and white balancing. However, such conventional enhancement methods are incapable of recovering severely degraded pen-stroke content and produce artifacts in the presence of complex pen-stroke illustrations. To overcome these problems, the authors have proposed a deep learning based solution. They have contributed a new whiteboard image data set and adapted two deep convolutional neural network architectures for whiteboard image quality enhancement. Their evaluations of the trained models demonstrated superior performance over the conventional methods.


2019 ◽  
Author(s):  
Qian Wu ◽  
Weiling Zhao ◽  
Xiaobo Yang ◽  
Hua Tan ◽  
Lei You ◽  
...  
