Deep Learning Application for Analyzing of Constituents and Their Correlations in the Interpretations of Medical Images

Diagnostics ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 1373
Author(s):  
Tudor Florin Ursuleanu ◽  
Andreea Roxana Luca ◽  
Liliana Gheorghe ◽  
Roxana Grigorovici ◽  
Stefan Iancu ◽  
...  

The growing volume of medical data that must be interpreted and filtered for diagnostic and therapeutic purposes competes with the time and attention doctors can devote to patients, and this has encouraged the development of deep learning (DL) models as constructive and effective support. DL has developed exponentially in recent years, with a major impact on the interpretation of medical images. This has influenced the development, diversification and quality of scientific data, the development of knowledge-construction methods and the improvement of DL models used in medical applications. Existing research papers each focus on describing, highlighting or classifying a single constituent element of the DL models used in the interpretation of medical images, and do not provide a unified picture of the importance and impact of each constituent in the performance of DL models. The novelty of our paper lies primarily in its unitary approach to the constituent elements of DL models, namely the data, the tools used by DL architectures or specifically constructed combinations of DL architectures, and in highlighting their “key” features for completing tasks in current applications in the interpretation of medical images. The use of the “key” characteristics specific to each constituent of DL models, and the correct determination of their correlations, may be the subject of future research aimed at increasing the performance of DL models in the interpretation of medical images.

2021 ◽  
Vol 22 (15) ◽  
pp. 7911
Author(s):  
Eugene Lin ◽  
Chieh-Hsin Lin ◽  
Hsien-Yuan Lane

A growing body of evidence suggests that deep learning approaches can serve as an essential cornerstone for the diagnosis and prediction of Alzheimer’s disease (AD). In light of the latest advancements in neuroimaging and genomics, numerous deep learning models are being exploited in recent studies to distinguish AD from normal controls and/or from mild cognitive impairment. In this review, we focus on the latest developments in AD prediction using deep learning techniques in combination with the principles of neuroimaging and genomics. First, we describe various investigations that use deep learning algorithms to establish AD prediction from genomics or neuroimaging data. In particular, we delineate relevant integrative neuroimaging-genomics investigations that leverage deep learning methods to forecast AD on the basis of both neuroimaging and genomics data. Moreover, we outline the limitations of recent AD investigations using deep learning with neuroimaging and genomics. Finally, we present a discussion of challenges and directions for future research. The main novelty of this work is that we summarize the major points of these investigations and scrutinize their similarities and differences.


Author(s):  
Wenjia Cai ◽  
Jie Xu ◽  
Ke Wang ◽  
Xiaohong Liu ◽  
Wenqin Xu ◽  
...  

Anterior segment eye diseases account for a significant proportion of presentations to eye clinics worldwide, including diseases associated with corneal pathologies, anterior chamber abnormalities (e.g. blood or inflammation) and lens diseases. The construction of an automatic tool for the segmentation of anterior segment eye lesions would greatly improve the efficiency of clinical care. With research on artificial intelligence progressing in recent years, deep learning models have shown their superiority in image classification and segmentation. The training and evaluation of deep learning models should be based on a large amount of data annotated with expertise; however, such data are relatively scarce in the domain of medicine. Herein, the authors developed a new medical image annotation system, called EyeHealer. It is a large-scale anterior eye segment dataset with both eye structures and lesions annotated at the pixel level. Comprehensive experiments were conducted to verify its performance in disease classification and eye lesion segmentation. The results showed that semantic segmentation models outperformed medical segmentation models. This paper describes the establishment of the system for automated classification and segmentation tasks. The dataset will be made publicly available to encourage future research in this area.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Shelly Soffer ◽  
Eyal Klang ◽  
Orit Shimon ◽  
Yiftach Barash ◽  
Noa Cahan ◽  
...  

Computed tomographic pulmonary angiography (CTPA) is the gold standard for pulmonary embolism (PE) diagnosis; however, PE on CTPA is prone to misdiagnosis. In this study, we aimed to perform a systematic review of the current literature applying deep learning to the diagnosis of PE on CTPA. MEDLINE/PubMed was searched for studies that reported on the accuracy of deep learning algorithms for PE on CTPA. The risk of bias was evaluated using the QUADAS-2 tool. Pooled sensitivity and specificity were calculated, and summary receiver operating characteristic curves were plotted. Seven studies met our inclusion criteria, covering a total of 36,847 CTPA studies; all were retrospective. Five studies provided enough data to calculate summary estimates. The pooled sensitivity and specificity for PE detection were 0.88 (95% CI 0.803–0.927) and 0.86 (95% CI 0.756–0.924), respectively. Most studies had a high risk of bias. Our study suggests that deep learning models can detect PE on CTPA with satisfactory sensitivity and an acceptable number of false positive cases. Yet these are only preliminary retrospective works, indicating the need for future research to determine the clinical impact of automated PE detection on patient care. Deep learning models are gradually being implemented in hospital systems, and it is important to understand the strengths and limitations of these algorithms.
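The pooled estimates reported above can be illustrated with a simplified sketch: aggregate per-study confusion-matrix counts and compute overall sensitivity and specificity. The counts below are made up for illustration, not taken from the reviewed studies, and the review itself likely pooled published estimates with a bivariate random-effects model, which weights studies differently than this naive aggregation.

```python
# Naive pooling of sensitivity/specificity across studies by summing
# raw confusion-matrix counts (illustrative sketch only).

def pooled_sens_spec(studies):
    """studies: list of (tp, fn, fp, tn) tuples, one per study."""
    tp = sum(s[0] for s in studies)
    fn = sum(s[1] for s in studies)
    fp = sum(s[2] for s in studies)
    tn = sum(s[3] for s in studies)
    sensitivity = tp / (tp + fn)   # true-positive rate for PE detection
    specificity = tn / (tn + fp)   # true-negative rate
    return sensitivity, specificity

# Hypothetical counts for three studies:
sens, spec = pooled_sens_spec([(88, 12, 14, 86), (45, 5, 8, 42), (90, 10, 10, 90)])
```

A random-effects meta-analysis would additionally model between-study heterogeneity, which matters here given that most included studies had a high risk of bias.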


2019 ◽  
Vol 17 (3) ◽  
pp. e0204 ◽  
Author(s):  
Krishnaswamy R. Aravind ◽  
Purushothaman Raja ◽  
Rajendran Ashiwin ◽  
Konnaiyar V. Mukesh

Aim of study: To apply the pre-trained deep learning models AlexNet and VGG16 to the classification of five diseases (Epilachna beetle infestation, little leaf, Cercospora leaf spot, two-spotted spider mite and Tobacco Mosaic Virus (TMV)) and healthy plants in Solanum melongena (brinjal in Asia, eggplant in the USA, aubergine in the UK), using images acquired with smartphones.
Area of study: Images were acquired from fields located at Alangudi (Pudukkottai district), Tirumalaisamudram and Pillayarpatti (Thanjavur district), Tamil Nadu, India.
Material and methods: Most earlier studies were carried out with images of isolated leaf samples, whereas in this work images of the whole plant, or part of it, were used for dataset creation. Augmentation techniques were applied to the manually segmented images to increase the dataset size. The classification capability of the deep learning models was analysed before and after augmentation. A fully connected layer was added to the architecture and evaluated for its performance.
Main results: The modified VGG16 architecture trained with the augmented dataset achieved an average validation accuracy of 96.7%. Beyond validation accuracy, all models were tested with sample images from the field, where the modified VGG16 achieved an accuracy of 93.33%.
Research highlights: The findings provide guidance on factors to consider in future research on dataset creation and methodology for efficient prediction using deep learning models.
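The augmentation step described above can be sketched minimally. The two transforms below (horizontal flip and 90-degree rotation) are illustrative stand-ins for the geometric augmentations such pipelines typically use, shown on a tiny nested-list "image"; a real pipeline would apply them to RGB arrays with a library such as OpenCV or torchvision.

```python
# Two simple geometric augmentations on a 2D image stored as nested lists.

def hflip(img):
    # Mirror each row left-to-right.
    return [row[::-1] for row in img]

def rot90(img):
    # Rotate 90 degrees clockwise: reverse the rows, then transpose.
    return [list(r) for r in zip(*img[::-1])]

original = [[1, 2],
            [3, 4]]
augmented = [hflip(original), rot90(original)]  # two extra samples per image
```

Each transform yields a new labeled sample at no annotation cost, which is why augmentation helps when the manually segmented dataset is small.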


Cancers ◽  
2021 ◽  
Vol 13 (7) ◽  
pp. 1590
Author(s):  
Laith Alzubaidi ◽  
Muthana Al-Amidie ◽  
Ahmed Al-Asadi ◽  
Amjad J. Humaidi ◽  
Omran Al-Shamma ◽  
...  

Deep learning requires a large amount of data to perform well. However, the field of medical image analysis suffers from a lack of sufficient data for training deep learning models. Moreover, medical images require manual labeling, usually provided by human annotators from various backgrounds, and the annotation process is time-consuming, expensive, and prone to errors. Transfer learning was introduced to reduce the need for annotation by transferring deep learning models with knowledge from a previous task and then fine-tuning them on a relatively small dataset for the current task. Most methods of medical image classification employ transfer learning from models pretrained on natural images, e.g., ImageNet, which has been shown to be ineffective due to the mismatch between features learned from natural images and those needed for medical images; it also leads to unnecessarily deep and elaborate models. In this paper, we propose a novel transfer learning approach that overcomes these drawbacks by first training the deep learning model on large unlabeled medical image datasets and then transferring that knowledge to train the model on a small amount of labeled medical images. Additionally, we propose a new deep convolutional neural network (DCNN) model that combines recent advancements in the field. We conducted several experiments on two challenging medical imaging scenarios: skin cancer and breast cancer classification. The reported results show empirically that the proposed approach can significantly improve performance in both scenarios. For skin cancer, the proposed model achieved an F1-score of 89.09% when trained from scratch and 98.53% with the proposed approach. For breast cancer, it achieved accuracies of 85.29% when trained from scratch and 97.51% with the proposed approach. Finally, we conclude that our method can be applied to many medical imaging problems in which a substantial amount of unlabeled image data is available and labeled image data is limited, and that it can be used to improve the performance of medical imaging tasks in the same domain. To demonstrate this, we used the pretrained skin cancer model to classify foot-skin images into two classes, normal or abnormal (diabetic foot ulcer, DFU). This task achieved an F1-score of 86.0% when trained from scratch, 96.25% using transfer learning, and 99.25% using double-transfer learning.
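The double-transfer flow described above can be sketched schematically: pretrain on a large unlabeled medical image pool, transfer to a labeled task, then transfer again to a related task. The Model class, method names, and counters below are illustrative placeholders, not the authors' code; the counters merely stand in for the weights each stage would learn.

```python
# Schematic sketch of the three stages of double-transfer learning.

class Model:
    def __init__(self, weights=None):
        self.weights = dict(weights or {"features": 0.0, "head": 0.0})

    def pretrain_unlabeled(self, data):
        # Stands in for self-supervised pretraining of the feature extractor.
        self.weights["features"] += len(data)
        return self

    def fine_tune(self, labeled):
        # Stands in for supervised training of the task-specific head.
        self.weights["head"] += len(labeled)
        return self

    def transfer(self):
        # Carry the learned features to a new task with a fresh head.
        return Model({"features": self.weights["features"], "head": 0.0})

base = Model().pretrain_unlabeled(range(10000))   # stage 1: unlabeled medical images
skin = base.transfer().fine_tune(range(200))      # stage 2: labeled skin-cancer set
dfu = skin.transfer().fine_tune(range(100))       # stage 3: DFU task (double transfer)
```

The point of the sketch is the flow of the "features" component: it is learned once on abundant unlabeled data and reused at every subsequent stage, while each task trains only its own small head.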


2021 ◽  
Author(s):  
Roberto Bentivoglio ◽  
Elvin Isufi ◽  
Sebastian Nicolaas Jonkman ◽  
Riccardo Taormina

Deep learning techniques have been increasingly used in flood risk management to overcome the limitations of accurate, yet slow, numerical models and to improve the results of traditional methods for flood mapping. In this paper, we review 45 recent publications to outline the state of the art of the field, identify knowledge gaps, and propose future research directions. The review focuses on the type of deep learning models used for various flood mapping applications, the flood types considered, the spatial scale of the studied events, and the data used for model development. The results show that models based on convolutional layers are usually more accurate, as they leverage inductive biases to better process the spatial characteristics of flooding events. Traditional models based on fully connected layers, instead, provide accurate results when coupled with other statistical models. Deep learning models showed increased accuracy compared to traditional approaches and increased speed compared to numerical methods. While several applications exist in flood susceptibility, inundation, and hazard mapping, more work is needed to understand how deep learning can assist real-time flood warning during an emergency and how it can be employed to estimate flood risk. A major challenge lies in developing deep learning models that can generalize to unseen case studies and sites. Furthermore, all reviewed models and their outputs are deterministic, with limited consideration of uncertainties in outcomes and probabilistic predictions. The authors argue that these identified gaps can be addressed by exploiting recent fundamental advancements in deep learning or by taking inspiration from developments in other applied areas.
Models based on graph neural networks and neural operators can work with arbitrarily structured data and thus should be capable of generalizing across different case studies and could account for complex interactions with the natural and built environment. Neural operators can also speed up numerical models while preserving the underlying physical equations and could thus be used for reliable real-time warning. Similarly, probabilistic models can be built by resorting to Deep Gaussian Processes.
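The generalization property claimed for graph neural networks above comes from message passing: each node (e.g. a mesh cell) updates its state from its neighbours, so the same learned update applies to any mesh topology. A minimal, framework-free sketch, with node values and the averaging weights chosen purely for illustration:

```python
# One message-passing step: each node mixes its own state with the
# mean of its neighbours' states. The 0.5/0.5 weights are arbitrary
# stand-ins for learned parameters.

def message_pass(state, edges):
    """state: {node: value}; edges: {node: [neighbours]}."""
    return {
        n: 0.5 * state[n] + 0.5 * (sum(state[m] for m in edges[n]) / len(edges[n]))
        for n in state
    }

# Three cells in a line; a unit of "water" diffuses from cell a.
state = {"a": 1.0, "b": 0.0, "c": 0.0}
edges = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
state = message_pass(state, edges)
```

Because the update is defined per node and per neighbourhood rather than per grid position, the same function runs unchanged on a different mesh, which is exactly the cross-case-study generalization the review calls for.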


2021 ◽  
Author(s):  
Atiq Rehman ◽  
Samir Brahim Belhaouari

The video classification task has seen significant success in recent years, and the topic has gained further attention since the emergence of deep learning models as a successful tool for automatically classifying videos. In recognition of the importance of the video classification task, and to summarize the success of deep learning models on it, this paper presents a comprehensive and concise review of the topic. A number of reviews and survey papers related to video classification exist in the scientific literature; however, they are either outdated, and therefore miss recent state-of-the-art work, or have other limitations. To provide an updated and concise review, this paper highlights the key findings of existing deep learning models and discusses them in a way that suggests future research directions. The review focuses mainly on the type of network architecture used, the evaluation criteria used to measure success, and the datasets used. To make the review self-contained, the emergence of deep learning methods for automatic video classification and the state-of-the-art deep learning methods are explained and summarized. Moreover, a clear insight into the newly developed deep learning architectures and the traditional approaches is provided, and the critical challenges posed by the benchmarks for evaluating the technical progress of these methods are highlighted. The paper also summarizes the benchmark datasets and the performance evaluation metrics for video classification. Based on this compact, complete, and concise review, the paper proposes new research directions to address the challenging video classification problem.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Guangzhou An ◽  
Masahiro Akiba ◽  
Kazuko Omodaka ◽  
Toru Nakazawa ◽  
Hideo Yokota

Deep learning is being employed in disease detection and classification based on medical images for clinical decision making. It typically requires large amounts of labelled data; however, the sample size of such medical image datasets is generally small. This study proposes a novel training framework for building deep learning models for disease detection and classification with small datasets. Our approach is based on a hierarchical classification method in which the healthy/disease information from the first model is effectively utilized to build subsequent models for classifying the disease into its sub-types, via a transfer learning method. To improve accuracy, multiple input datasets were used, and a stacking ensemble method was employed for the final classification. To demonstrate the method’s performance, a labelled dataset extracted from volumetric ophthalmic optical coherence tomography data for 156 healthy and 798 glaucoma eyes was used, in which the glaucoma eyes were further labelled into four sub-types. The average weighted accuracy and Cohen’s kappa for three randomized test datasets were 0.839 and 0.809, respectively. Our approach outperformed the flat classification method by 9.7% when using smaller training datasets. The results suggest that the framework can perform accurate classification with a small number of medical images.
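The hierarchical scheme described above routes every image through a healthy/disease model first, and only disease cases through a sub-type model. A minimal sketch with toy stand-in classifiers (the scalar "samples", thresholds, and sub-type names are illustrative, not the trained networks, and the stacking ensemble over multiple input datasets is omitted):

```python
# Hierarchical two-stage classification: stage 1 screens for disease,
# stage 2 assigns a sub-type only to the positive cases.

def hierarchical_classify(sample, is_disease, subtype_of):
    if not is_disease(sample):
        return "healthy"
    return subtype_of(sample)   # one of the four glaucoma sub-types

# Toy stand-ins: a scalar sample is "diseased" above 0.5, and its
# sub-type is picked by thresholding the remaining range into four bins.
is_disease = lambda x: x > 0.5
subtype_of = lambda x: f"subtype-{min(3, int((x - 0.5) * 8))}"

labels = [hierarchical_classify(x, is_disease, subtype_of) for x in (0.2, 0.6, 0.95)]
```

The benefit over flat five-way classification is that the second-stage model trains only on disease cases, so the scarce sub-type labels are not diluted by the (easier) healthy-vs-disease decision.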


Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7259
Author(s):  
Deevyankar Agarwal ◽  
Gonçalo Marques ◽  
Isabel de la Torre-Díez ◽  
Manuel A. Franco Martin ◽  
Begoña García Zapiraín ◽  
...  

Alzheimer’s disease (AD) is a remarkable challenge for healthcare in the 21st century. Since 2017, deep learning models with transfer learning approaches have been gaining recognition in AD detection and progression prediction using neuroimaging biomarkers. This paper presents a systematic review of the current state of early AD detection using deep learning models with transfer learning and neuroimaging biomarkers. Five databases were searched, identifying 215 studies published between 2010 and 2020 before screening; after screening, 13 studies met the inclusion criteria. We note that the maximum accuracy achieved to date for AD classification is 98.20%, using a combination of 3D convolutional networks and local transfer learning, and that for the prognostic prediction of AD it is 87.78%, using pre-trained 3D convolutional network-based architectures. The results show that transfer learning helps researchers develop more accurate systems for the early diagnosis of AD. However, future research needs to address several points, such as improving the accuracy of the prognostic prediction of AD, exploring additional biomarkers such as tau-PET and amyloid-PET to learn highly discriminative feature representations that separate similar brain patterns, and managing dataset size given the limited availability of data.


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5097 ◽  
Author(s):  
Satya P. Singh ◽  
Lipo Wang ◽  
Sukrit Gupta ◽  
Haveesh Goli ◽  
Parasuraman Padmanabhan ◽  
...  

The rapid advancements in machine learning and graphics processing technologies, together with the availability of medical imaging data, have led to a rapid increase in the use of deep learning models in the medical domain. This was amplified by the rapid advancements in convolutional neural network (CNN)-based architectures, which the medical imaging community adopted to assist clinicians in disease diagnosis. Since the grand success of AlexNet in 2012, CNNs have been increasingly used in medical image analysis to improve the efficiency of human clinicians. In recent years, three-dimensional (3D) CNNs have been employed for the analysis of medical images. In this paper, we trace the history of how the 3D CNN developed from its machine learning roots, provide a brief mathematical description of the 3D CNN, and describe the preprocessing steps required for medical images before they are fed to 3D CNNs. We review the significant research in the field of 3D medical image analysis using 3D CNNs (and their variants) in different medical areas such as classification, segmentation, detection and localization. We conclude by discussing the challenges associated with the use of 3D CNNs in the medical imaging domain (and with deep learning models in general) and possible future trends in the field.
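The core operation of the 3D CNNs reviewed above is a convolution over a volume rather than a plane. A minimal, framework-free sketch with valid padding and a single channel (real models use framework ops such as torch.nn.Conv3d on GPU; the volume and kernel values below are illustrative):

```python
# Single-channel 3D convolution with valid padding, written out
# explicitly to show the sliding-window structure over depth,
# height, and width.

def conv3d(vol, kernel):
    D, H, W = len(vol), len(vol[0]), len(vol[0][0])
    d, h, w = len(kernel), len(kernel[0]), len(kernel[0][0])
    out = []
    for z in range(D - d + 1):
        plane = []
        for y in range(H - h + 1):
            row = []
            for x in range(W - w + 1):
                row.append(sum(
                    vol[z + i][y + j][x + k] * kernel[i][j][k]
                    for i in range(d) for j in range(h) for k in range(w)))
            plane.append(row)
        out.append(plane)
    return out

# A 2x2x2 averaging kernel over a 3x3x3 volume of ones.
vol = [[[1] * 3 for _ in range(3)] for _ in range(3)]
kernel = [[[0.125] * 2 for _ in range(2)] for _ in range(2)]
res = conv3d(vol, kernel)
```

Compared with a 2D convolution applied slice by slice, the kernel here also spans the depth axis, which is what lets 3D CNNs capture inter-slice context in volumetric scans.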

