Challenges in Building of Deep Learning Models for Glioblastoma Segmentation: Evidence from Clinical Data

Author(s):  
Anvar Kurmukov ◽  
Aleksandra Dalechina ◽  
Talgat Saparov ◽  
Mikhail Belyaev ◽  
Svetlana Zolotova ◽  
...  

In this article, we compare the performance of a state-of-the-art segmentation network (UNet) on two different glioblastoma (GB) segmentation datasets. Our experiments show that the same training procedure performs almost twice as poorly, in terms of Dice score, on retrospective clinical data as on BraTS challenge data. We discuss possible reasons for this outcome, including inter-rater variability and the high variability of magnetic resonance imaging (MRI) scanners and scanner settings. The high performance of segmentation models demonstrated on preselected imaging data does not bring the community closer to using these algorithms in clinical settings. We believe that a clinically applicable deep learning architecture requires a shift from unified datasets to heterogeneous data.
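Both datasets are scored with the Dice coefficient. As a reminder of what the metric measures, here is a minimal sketch over flattened binary masks; the toy masks are illustrative, not from either dataset:

```python
def dice(pred, truth):
    """Dice similarity coefficient between two flattened binary masks:
    2 * |intersection| / (|pred| + |truth|), in [0, 1]."""
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0  # both empty: perfect match

pred  = [1, 1, 0, 0, 1]
truth = [1, 0, 0, 1, 1]
print(dice(pred, truth))  # 2*2 / (3+3) ≈ 0.667
```

A drop from a Dice of ~0.9 on curated challenge data to ~0.45 on clinical data is the kind of gap the article reports.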

2021 ◽  
Vol 1 ◽  
Author(s):  
Shanshan Wang ◽  
Guohua Cao ◽  
Yan Wang ◽  
Shu Liao ◽  
Qian Wang ◽  
...  

Artificial intelligence (AI) is an emerging technology gaining momentum in medical imaging. Recently, deep learning-based AI techniques have been actively investigated in medical imaging, with potential applications ranging from data acquisition and image reconstruction to image analysis and understanding. In this review, we focus on the use of deep learning in image reconstruction for advanced medical imaging modalities, including magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET). In particular, recent deep learning-based methods for image reconstruction are emphasized, organized by their methodological designs and their performance in handling volumetric imaging data. We expect this review to help researchers understand how to adapt AI for medical imaging and which advantages can be achieved with its assistance.


2016 ◽  
Author(s):  
Krzysztof J. Gorgolewski ◽  
Fidel Alfaro-Almagro ◽  
Tibor Auer ◽  
Pierre Bellec ◽  
Mihai Capotă ◽  
...  

Abstract: The rate of progress in human neurosciences is limited by the inability to easily apply a wide range of analysis methods to the plethora of different datasets acquired in labs around the world. In this work, we introduce a framework for creating, testing, versioning and archiving portable applications for analyzing neuroimaging data organized and described in compliance with the Brain Imaging Data Structure (BIDS). The portability of these applications (BIDS Apps) is achieved by using container technologies that encapsulate all binary and other dependencies in one convenient package. BIDS Apps run on all three major operating systems with no need for complex setup and configuration, and thanks to the comprehensiveness of the BIDS standard they require little manual user input. Previous containerized data processing solutions were limited to single-user environments and were not compatible with most multi-tenant High Performance Computing systems. BIDS Apps overcome this limitation by taking advantage of the Singularity container technology. As a proof of concept, this work is accompanied by 22 ready-to-use BIDS Apps, packaging a diverse set of commonly used neuroimaging algorithms.

Author Summary: Magnetic Resonance Imaging (MRI) is a non-invasive way to measure human brain structure and activity that has been used for over 25 years. There are thousands of MRI studies performed every year, generating a substantial amount of data. At the same time, many new data analysis methods are being developed every year. The potential of using new analysis methods on the variety of existing and newly acquired data is hindered by difficulties in software deployment and lack of support for standardized input data. Here we propose to use container technology to make deployment of a wide range of data analysis techniques easy. In addition, we adapt the existing data analysis tools to interface with data organized in a standardized way. We hope that this approach will enable researchers to access a wider range of methods when analyzing their data, which will lead to accelerated progress in human neuroscience.
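The low manual input comes from the uniform command-line contract every BIDS App shares: positional dataset directory, output directory, and analysis level. A sketch of assembling such an invocation for Singularity; the image name, paths, and participant label below are illustrative placeholders, not from the paper:

```python
# Every BIDS App follows the same CLI contract:
#   <app> <bids_dir> <output_dir> <analysis_level> [options]
cmd = [
    "singularity", "run", "bids-example.simg",  # hypothetical container image
    "/data/bids_dataset",                       # BIDS-organized input
    "/data/out",                                # output directory
    "participant",                              # analysis level
    "--participant_label", "01",
]
print(" ".join(cmd))
# In practice one would execute it, e.g. subprocess.run(cmd, check=True)
```

Because the contract is fixed, the same wrapper script can drive any of the 22 packaged apps by swapping only the image name.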


Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7018
Author(s):  
Justin Lo ◽  
Jillian Cardinell ◽  
Alejo Costanzo ◽  
Dafna Sussman

Deep learning (DL) algorithms have become an increasingly popular choice for image classification and segmentation tasks; however, their range of applications is limited because they require ample data to achieve high performance and adequate generalizability. In the case of clinical imaging data, images are not always available in large quantities. This issue can be alleviated by using data augmentation (DA) techniques. The choice of DA is important, because a poor selection can hinder the performance of a DL algorithm. We propose a DA policy search algorithm that offers an extended set of transformations accommodating the variations in biomedical imaging datasets. The algorithm makes use of the efficient, high-dimensional optimizer Bi-Population Covariance Matrix Adaptation Evolution Strategy (BIPOP-CMA-ES) and returns an optimal DA policy for any input imaging dataset and DL algorithm. Our proposed algorithm, Medical Augmentation (Med-Aug), can be adopted by other researchers in related medical DL applications to improve their models' performance. Furthermore, we present the optimal DA policies we found for a variety of medical datasets and popular segmentation networks, for other researchers to use in related tasks.
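The paper's optimizer is BIPOP-CMA-ES; as a much simpler stand-in, the sketch below runs a basic greedy evolution strategy over a two-parameter augmentation policy (rotation angle, flip probability). The surrogate objective, its optimum, and all names are assumptions for illustration; the real evaluation would train a model with the policy and return its validation score:

```python
import random

def surrogate_score(policy):
    """Toy stand-in for 'train with this DA policy, return validation Dice'.
    Peaks at rotation=15 deg, flip_prob=0.5 (arbitrary assumption)."""
    rot, flip = policy
    return -((rot - 15.0) ** 2 + (flip - 0.5) ** 2)

def es_search(objective, x0, sigma=5.0, pop=8, gens=40, seed=0):
    """Greedy (1+lambda) evolution strategy with shrinking step size --
    a far simpler relative of BIPOP-CMA-ES, which adapts a full
    covariance matrix and restarts with alternating population sizes."""
    rng = random.Random(seed)
    best, best_f = list(x0), objective(x0)
    for _ in range(gens):
        for _ in range(pop):
            cand = [x + rng.gauss(0, sigma) for x in best]
            f = objective(cand)
            if f > best_f:
                best, best_f = cand, f
        sigma *= 0.9  # cool the search down over generations
    return best, best_f

policy, score = es_search(surrogate_score, [0.0, 0.0])
print(policy)  # converges near [15, 0.5]
```

Swapping the surrogate for a real train-and-validate call (and the toy ES for a CMA-ES library) turns this loop into the kind of policy search the abstract describes.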


10.2196/24973 ◽  
2021 ◽  
Vol 9 (1) ◽  
pp. e24973
Author(s):  
Thao Thi Ho ◽  
Jongmin Park ◽  
Taewoo Kim ◽  
Byunggeon Park ◽  
Jaehee Lee ◽  
...  

Background Many COVID-19 patients rapidly progress to respiratory failure with a broad range of severities. Identification of high-risk cases is critical for early intervention. Objective The aim of this study is to develop deep learning models that can rapidly identify high-risk COVID-19 patients based on computed tomography (CT) images and clinical data. Methods We analyzed 297 COVID-19 patients from five hospitals in Daegu, South Korea. A mixed artificial convolutional neural network (ACNN) model, combining an artificial neural network for clinical data and a convolutional neural network for 3D CT imaging data, was developed to classify these cases as either high risk of severe progression (ie, event) or low risk (ie, event-free). Results Using the mixed ACNN model, we were able to obtain high classification performance using novel coronavirus pneumonia lesion images (ie, 93.9% accuracy, 80.8% sensitivity, 96.9% specificity, and 0.916 area under the curve [AUC] score) and lung segmentation images (ie, 94.3% accuracy, 74.7% sensitivity, 95.9% specificity, and 0.928 AUC score) for event versus event-free groups. Conclusions Our study successfully differentiated high-risk cases among COVID-19 patients using imaging and clinical features. The developed model can be used as a predictive tool to guide timely intervention with aggressive therapies.
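The two-branch design (a network over tabular clinical features fused with a network over the CT volume) can be sketched as a single forward pass. This is a loose stand-in, not the authors' ACNN: the imaging branch is reduced to global average pooling plus a linear map in place of real 3D convolutions, and all layer sizes and random weights are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Clinical branch: small MLP over 10 tabular features (sizes illustrative).
W1 = rng.normal(size=(10, 16))
W2 = rng.normal(size=(16, 16))
def clinical_branch(x):                 # x: (batch, 10)
    return relu(relu(x @ W1) @ W2)      # -> (batch, 16)

# Imaging branch: global average pooling of the CT volume plus a linear
# layer, standing in for a stack of 3D convolutions.
Wc = rng.normal(size=(1, 16))
def imaging_branch(vol):                # vol: (batch, D, H, W)
    pooled = vol.mean(axis=(1, 2, 3))[:, None]   # -> (batch, 1)
    return relu(pooled @ Wc)                     # -> (batch, 16)

# Fusion head: concatenate both feature vectors, classify event/event-free.
Wh = rng.normal(size=(32, 2))
def mixed_forward(vol, clin):
    z = np.concatenate([imaging_branch(vol), clinical_branch(clin)], axis=1)
    return z @ Wh                       # logits: (batch, 2)

logits = mixed_forward(np.zeros((2, 32, 32, 32)), np.zeros((2, 10)))
print(logits.shape)  # (2, 2)
```

The key idea survives the simplification: each modality is embedded separately, and the classifier only sees the concatenated embedding.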


2020 ◽  
Author(s):  
Sanghun Choi ◽  
Jae-Kwang Lim ◽  
Thao Thi Ho ◽  
Jongmin Park ◽  
Taewoo Kim ◽  
...  

BACKGROUND Many COVID-19 patients rapidly progress to respiratory failure with a broad range of severities. Identification of high-risk cases is critical for early intervention. OBJECTIVE The aim of this study is to develop deep learning models that can rapidly diagnose high-risk COVID-19 patients based on computed tomography (CT) images and clinical data. METHODS We analyzed 297 COVID-19 patients from five hospitals in Daegu, South Korea. A mixed model (ACNN), combining an artificial neural network for clinical data and a convolutional neural network for 3D CT imaging data, was developed to distinguish high-risk cases with severe progression (event) from low-risk COVID-19 patients (event-free). RESULTS Using the mixed ACNN model, we obtained high classification performance using novel coronavirus pneumonia (NCP) lesion images (93.9% accuracy, 80.8% sensitivity, 96.9% specificity, and 0.916 AUC) and lung segmentation images (94.3% accuracy, 74.7% sensitivity, 95.9% specificity, and 0.928 AUC) for event vs. event-free groups. CONCLUSIONS Our study successfully differentiated high-risk cases among COVID-19 patients using their imaging and clinical features. The developed model could potentially be used as a prediction tool to guide active therapy.


2021 ◽  
Vol 13 (18) ◽  
pp. 3594
Author(s):  
Lang Xia ◽  
Ruirui Zhang ◽  
Liping Chen ◽  
Longlong Li ◽  
Tongchuan Yi ◽  
...  

Pine wilt disease (PWD) is a serious threat to pine forests. Combining unmanned aerial vehicle (UAV) images and deep learning (DL) techniques to identify infected pines is the most efficient method to determine the potential spread of PWD over a large area. In particular, image segmentation using DL obtains the detailed shape and size of infected pines to assess the disease’s degree of damage. However, the performance of such segmentation models has not been thoroughly studied. We used a fixed-wing UAV to collect images from a pine forest in Laoshan, Qingdao, China, and conducted a ground survey to collect samples of infected pines and construct prior knowledge to interpret the images. Then, training and test sets were annotated on selected images, and we obtained 2352 samples of infected pines annotated over different backgrounds. Finally, high-performance DL models (e.g., fully convolutional networks for semantic segmentation, DeepLabv3+, and PSPNet) were trained and evaluated. The results demonstrated that focal loss provided higher accuracy and finer boundaries than Dice loss, with the average intersection over union (IoU) across all models increasing from 0.656 to 0.701. Of the evaluated models, DeepLabv3+ achieved the highest IoU and F1 score, at 0.720 and 0.832, respectively. Its atrous spatial pyramid pooling module encoded multiscale context information, and its encoder–decoder architecture recovered spatial location information, making it the best architecture for segmenting trees infected by PWD. Furthermore, segmentation accuracy did not improve as the depth of the backbone network increased; neither ResNet34 nor ResNet50 proved the most appropriate backbone for most of the segmentation models.
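The focal-loss finding is about how the loss weights pixels: easy, confidently classified pixels are down-weighted so training concentrates on hard boundary pixels. A minimal single-pixel sketch of binary focal loss; the γ and α values are the commonly used defaults, not necessarily the paper's settings:

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for one pixel: -alpha_t * (1 - p_t)^gamma * log(p_t),
    where p is the predicted foreground probability and y the 0/1 label.
    The (1 - p_t)^gamma factor shrinks the loss of easy examples."""
    pt = p if y == 1 else 1.0 - p
    a = alpha if y == 1 else 1.0 - alpha
    return -a * (1.0 - pt) ** gamma * math.log(pt)

# A confident correct prediction contributes far less than an uncertain one,
# so gradients focus on ambiguous (often boundary) pixels.
print(focal_loss(0.9, 1))  # small
print(focal_loss(0.6, 1))  # much larger
```

With γ = 0 the loss reduces to ordinary weighted cross-entropy, which makes the "finer boundary" effect of γ > 0 easy to ablate.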


2019 ◽  
Author(s):  
Yu-Heng Lai ◽  
Wei-Ning Chen ◽  
Te-Cheng Hsu ◽  
Che Lin ◽  
Yu Tsao ◽  
...  

Abstract: Non-small cell lung cancer (NSCLC) is one of the most common lung cancers worldwide. Accurate prognostic stratification of NSCLC can serve as an important clinical reference when designing therapeutic strategies for cancer patients. With this clinical application in mind, we developed a deep neural network (DNN) combining heterogeneous data sources (gene expression and clinical data) to accurately predict the prognosis of NSCLC patients. Based on microarray data from a cohort of 614 patients, seven well-known NSCLC markers were used to group patients into marker- and marker+ subgroups. Using a systems biology approach, prognosis relevance values (PRV) were then calculated to select eight additional novel prognostic gene markers. The gene markers, along with clinical data, were then used to develop an integrative DNN via bimodal learning to predict the 5-year survival rate of NSCLC patients with high accuracy (AUC: 0.8163, accuracy: 75.44%), superior in AUC to all other existing methods. Using the capability of deep learning, we believe the predicted cancer prognosis can be a promising index that helps oncologists and physicians develop personalized therapies and build the foundation of precision medicine.


2020 ◽  
Author(s):  
Md. Kamrul Hasan ◽  
Md. Ashraful Alam ◽  
Lavsen Dahal ◽  
Md. Toufick E Elahi ◽  
Shidhartho Roy ◽  
...  

Abstract: A large number of studies in recent months have proposed deep learning-based Artificial Intelligence (AI) tools for automated detection of COVID-19, using publicly available datasets of chest X-rays (CXRs) or CT scans for training and evaluation. Most of these studies report high accuracy when classifying COVID-19 patients against normal or other commonly occurring pneumonia cases. However, these results are often obtained from cross-validation studies without an independent test set from a separate dataset, and they carry biases such as the two classes to be predicted coming from two completely different datasets. In this work, we investigate potential overfitting and biases in such studies by designing different experimental setups within the available public data constraints, and we highlight the challenges and limitations of developing deep learning models with such datasets. We propose a deep learning architecture for COVID-19 classification that combines two very popular classification networks, ResNet and Xception, and use it to carry out the experiments. The results show that deep learning models can overestimate their performance due to biases in the experimental design and overfitting to the training dataset. We compare the proposed architecture to state-of-the-art methods utilizing an independent test set for evaluation, where some of the identified bias and overfitting issues are reduced. Although our proposed architecture gives the best performance with our best possible setup, we highlight the challenges in comparing and interpreting the results of various deep learning algorithms. While deep learning-based methods using chest imaging data show promise for clinical management and triage of COVID-19 patients, our experiments suggest that a larger, more comprehensive database with less bias is necessary for developing tools applicable in real clinical settings.
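The dataset-source bias described above is easy to reproduce with toy numbers: when each class comes from a different source and the sources differ by an acquisition artifact (here modeled as a fixed intensity offset, an assumption for illustration), a trivial threshold separates the classes perfectly without learning anything about the disease:

```python
import random

rng = random.Random(0)

# Toy setup: all "COVID" images come from dataset A, all "normal" images
# from dataset B, and the two sources have different mean intensities.
covid  = [0.7 + rng.gauss(0, 0.05) for _ in range(100)]  # source A artifact
normal = [0.3 + rng.gauss(0, 0.05) for _ in range(100)]  # source B artifact

# A "classifier" that only thresholds mean intensity scores near-perfectly:
acc = (sum(x > 0.5 for x in covid) + sum(x <= 0.5 for x in normal)) / 200
print(acc)  # ≈ 1.0 -- it detects the scanner/source, not the disease
```

An independent test set drawn from a third source breaks this shortcut, which is exactly why the evaluation setups in the abstract matter.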


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jared Hamwood ◽  
Beat Schmutz ◽  
Michael J. Collins ◽  
Mark C. Allenby ◽  
David Alonso-Caneiro

Abstract: This paper proposes a fully automatic method to segment the inner boundary of the bony orbit in two different image modalities: magnetic resonance imaging (MRI) and computed tomography (CT). The method, based on a deep learning architecture, uses two fully convolutional neural networks in series followed by a graph-search method to generate a boundary for the orbit. When compared to human performance for segmentation of both CT and MRI data, the proposed method achieves high Dice coefficients on both orbit and background, with scores of 0.813 and 0.975 in CT images and 0.930 and 0.995 in MRI images, showing a high degree of agreement with a manual segmentation by a human expert. Given the volumetric characteristics of these imaging modalities and the complexity and time-consuming nature of the segmentation of the orbital region in the human skull, it is often impractical to manually segment these images. Thus, the proposed method provides a valid clinical and research tool that performs similarly to the human observer.
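The final graph-search step turns the networks' per-pixel output into a single coherent boundary. A minimal sketch of the idea, assuming a 2D cost map (e.g. inverted boundary probabilities) and a path constrained to move at most one row per column; the paper's actual graph construction is not specified here:

```python
def min_cost_boundary(cost):
    """Dynamic-programming graph search: the minimum-cost left-to-right path
    through a 2D cost map, moving at most one row between adjacent columns.
    Returns one row index per column (the boundary)."""
    rows, cols = len(cost), len(cost[0])
    acc = [row[:] for row in cost]           # accumulated path cost
    back = [[0] * cols for _ in range(rows)] # backpointers for traceback
    for c in range(1, cols):
        for r in range(rows):
            choices = [(acc[pr][c - 1], pr)
                       for pr in (r - 1, r, r + 1) if 0 <= pr < rows]
            best, back[r][c] = min(choices)
            acc[r][c] += best
    # Trace back from the cheapest endpoint in the last column.
    r = min(range(rows), key=lambda i: acc[i][cols - 1])
    path = [r]
    for c in range(cols - 1, 0, -1):
        r = back[r][c]
        path.append(r)
    return path[::-1]

cost = [
    [9, 9, 9, 9],
    [1, 9, 1, 1],
    [9, 1, 9, 9],
]
print(min_cost_boundary(cost))  # [1, 2, 1, 1]
```

Because the path is globally optimal under the smoothness constraint, it repairs isolated mistakes in the networks' pixel-wise output, which is the role the graph search plays after the two CNNs.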

