Illuminating Clues of Cancer Buried in Prostate MR Image: Deep Learning and Expert Approaches

Biomolecules ◽  
2019 ◽  
Vol 9 (11) ◽  
pp. 673 ◽  
Author(s):  
Jun Akatsuka ◽  
Yoichiro Yamamoto ◽  
Tetsuro Sekine ◽  
Yasushi Numata ◽  
Hiromu Morikawa ◽  
...  

Deep learning algorithms have achieved great success in cancer image classification. However, it is imperative to understand the differences between the deep learning and human approaches. Using an explainable model, we aimed to compare the deep learning-focused regions of magnetic resonance (MR) images with cancerous locations identified by radiologists and pathologists. First, 307 prostate MR images were classified using a well-established deep neural network without locational information of cancers. Subsequently, we assessed whether the deep learning-focused regions overlapped the radiologist-identified targets. Furthermore, pathologists provided histopathological diagnoses on 896 pathological images, and we compared the deep learning-focused regions with the genuine cancer locations through 3D reconstruction of pathological images. The area under the curve (AUC) for MR image classification was sufficiently high (AUC = 0.90, 95% confidence interval 0.87–0.94). Deep learning-focused regions overlapped radiologist-identified targets by 70.5% and pathologist-identified cancer locations by 72.1%. Lymphocyte aggregation and dilated prostatic ducts were observed in the non-cancerous regions on which deep learning focused. Deep learning algorithms can achieve highly accurate image classification without necessarily identifying radiological targets or cancer locations. Deep learning may find clues that can help a clinical diagnosis even if the cancer is not visible.
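The reported overlap between model-focused regions and expert annotations can be illustrated with a minimal sketch; the overlap definition, mask shapes, and values below are hypothetical toy data, not the study's:

```python
import numpy as np

# Hypothetical sketch: measure how much of the model's attention falls
# inside an expert-annotated region, using binary masks. Toy data only.
def overlap_fraction(model_mask: np.ndarray, expert_mask: np.ndarray) -> float:
    """Fraction of model-focused pixels that lie inside the expert region."""
    focused = model_mask.astype(bool)
    target = expert_mask.astype(bool)
    if focused.sum() == 0:
        return 0.0
    return float((focused & target).sum() / focused.sum())

# Toy 4x4 masks: the model focuses on 4 pixels, 3 of which fall in the target.
model = np.zeros((4, 4), dtype=int)
model[1:3, 1:3] = 1                 # model-focused region: 4 pixels
expert = np.zeros((4, 4), dtype=int)
expert[1:3, 1:2] = 1                # expert target covers (1,1) and (2,1)
expert[1, 2] = 1                    # ...and (1,2)
print(overlap_fraction(model, expert))  # 0.75
```

In practice the model mask would come from a saliency or attention map thresholded on the MR image, and the expert mask from the radiologist's or pathologist's annotation.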

Author(s):  
Yuejun Liu ◽  
Yifei Xu ◽  
Xiangzheng Meng ◽  
Xuguang Wang ◽  
Tianxu Bai

Background: Medical imaging plays an important role in the diagnosis of thyroid diseases. In the field of machine learning, deep learning algorithms of multiple dimensions are widely used in image classification and recognition and have achieved great success. Objective: A method based on multiple dimensional deep learning is employed for the auxiliary diagnosis of thyroid diseases from SPECT images. The performances of different deep learning models are evaluated and compared. Methods: Thyroid SPECT images of three types are collected: hyperthyroidism, normal, and hypothyroidism. In the pre-processing, the thyroid region of interest is segmented and the data sample is expanded. Four models, a standard CNN, Inception, VGG16, and an RNN, are used to evaluate the deep learning methods. Results: The deep learning based methods show good classification performance, with accuracy of 92.9%-96.2% and AUC of 97.8%-99.6%. The VGG16 model has the best performance, with an accuracy of 96.2% and an AUC of 99.6%; in particular, the VGG16 model with a changing learning rate works best. Conclusion: The four deep learning models, standard CNN, Inception, VGG16, and RNN, are efficient for the classification of thyroid diseases from SPECT images. The accuracy of the deep learning based auxiliary diagnostic method is higher than that of other methods reported in the literature.
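The pre-processing described above (ROI segmentation and sample expansion) might be sketched as follows; the ROI coordinates and augmentation choices are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

# Hypothetical sketch of the pre-processing: crop a thyroid ROI from each
# SPECT image, then expand the sample via simple geometric augmentations.
def crop_roi(image: np.ndarray, top: int, left: int, size: int) -> np.ndarray:
    """Cut a square region of interest out of the full image."""
    return image[top:top + size, left:left + size]

def augment(roi: np.ndarray) -> list:
    """Expand one ROI into several training samples."""
    return [roi,
            np.fliplr(roi),       # horizontal flip
            np.rot90(roi, 1),     # 90-degree rotation
            np.rot90(roi, 3)]     # 270-degree rotation

image = np.arange(64 * 64, dtype=float).reshape(64, 64)  # stand-in SPECT image
roi = crop_roi(image, 16, 16, 32)
samples = augment(roi)
print(len(samples), samples[0].shape)  # 4 (32, 32)
```

Real SPECT pre-processing would segment the thyroid rather than use fixed coordinates, but the expand-one-into-many structure is the same.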


2021 ◽  
Vol 10 (9) ◽  
pp. 25394-25398
Author(s):  
Chitra Desai

Deep learning models have demonstrated improved efficacy in image classification since the ImageNet Large Scale Visual Recognition Challenge started in 2010. Image classification in the field of computer vision has been further augmented by the advent of transfer learning. Training a model on a huge dataset demands huge computational resources and adds substantial cost to learning. Transfer learning reduces the cost of learning and also helps avoid reinventing the wheel. Several pretrained models, such as VGG16, VGG19, ResNet50, InceptionV3, and EfficientNet, are widely used. This paper demonstrates image classification using the pretrained deep neural network model VGG16, which is trained on images from the ImageNet dataset. After obtaining the convolutional base model, a new deep neural network model is built on top of it for image classification, based on a fully connected network. This classifier uses features extracted from the convolutional base model.
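The transfer-learning pattern described here, a frozen feature extractor with a new trainable classifier on top, can be sketched without any deep learning framework. In this sketch a fixed random projection stands in for the VGG16 convolutional base, and the data are synthetic two-class clusters; only the head is trained:

```python
import numpy as np

# Sketch of transfer learning: freeze a feature extractor, train only a new
# classifier head. A random projection stands in for the VGG16 conv base.
rng = np.random.default_rng(0)

W_base = rng.normal(size=(64, 16)) / 8.0   # "pretrained" base: never updated
def conv_base(x):
    return np.maximum(x @ W_base, 0.0)     # frozen ReLU features

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Synthetic "images": two separable clusters in 64 dimensions.
X = np.vstack([rng.normal(-1.0, 1.0, size=(50, 64)),
               rng.normal(+1.0, 1.0, size=(50, 64))])
y = np.array([0] * 50 + [1] * 50)

feats = conv_base(X)                       # extracted once; base stays frozen
Y = np.eye(2)[y]
W_head = np.zeros((16, 2))                 # the only trainable parameters
for _ in range(500):                       # gradient descent on the head
    P = softmax(feats @ W_head)
    W_head -= 0.1 * feats.T @ (P - Y) / len(y)

acc = (softmax(feats @ W_head).argmax(axis=1) == y).mean()
print(acc)
```

With a real framework the structure is identical: load the pretrained base with its classifier removed, mark it non-trainable, and fit a small fully connected head on the extracted features.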


2019 ◽  
Vol 10 (1) ◽  
Author(s):  
Daniel Pinto dos Santos ◽  
Sebastian Brodehl ◽  
Bettina Baeßler ◽  
Gordon Arnhold ◽  
Thomas Dratsch ◽  
...  

Abstract Background Data used for training of deep learning networks usually needs large amounts of accurate labels. These labels are usually extracted from reports using natural language processing or by time-consuming manual review. The aim of this study was therefore to develop and evaluate a workflow for using data from structured reports as labels to be used in a deep learning application. Materials and methods We included all plain anteroposterior radiographs of the ankle for which structured reports were available. A workflow was designed and implemented where a script was used to automatically retrieve, convert, and anonymize the respective radiographs of cases where fractures were either present or absent from the institution’s picture archiving and communication system (PACS). These images were then used to retrain a pretrained deep convolutional neural network. Finally, performance was evaluated on a set of previously unseen radiographs. Results Once implemented and configured, completion of the whole workflow took under 1 h. A total of 157 structured reports were retrieved from the reporting platform. For all structured reports, corresponding radiographs were successfully retrieved from the PACS and fed into the training process. On an unseen validation subset, the model showed a satisfactory performance with an area under the curve of 0.850 (95% CI 0.634–1.000) for detection of fractures. Conclusion We demonstrate that data obtained from structured reports written in clinical routine can be used to successfully train deep learning algorithms. This highlights the potential role of structured reporting for the future of radiology, especially in the context of deep learning.
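The labeling step at the heart of this workflow, deriving binary labels directly from structured reports instead of parsing free text, might look roughly like this; the field names and values are hypothetical, not the institution's actual report schema:

```python
# Hypothetical sketch: structured reports already encode "fracture
# present/absent", so training labels can be derived by a simple lookup
# rather than natural language processing. Field names are illustrative.
def labels_from_reports(reports):
    """Map each structured report to (accession_number, binary label)."""
    return [(r["accession"], 1 if r["fracture"] == "present" else 0)
            for r in reports]

reports = [
    {"accession": "A001", "fracture": "present"},
    {"accession": "A002", "fracture": "absent"},
    {"accession": "A003", "fracture": "present"},
]
print(labels_from_reports(reports))  # [('A001', 1), ('A002', 0), ('A003', 1)]
```

The accession numbers would then drive the PACS retrieval, conversion, and anonymization steps described in the workflow.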


Author(s):  
Ankita Singh ◽  
Pawan Singh

The classification of images is a paramount topic in artificial vision systems and has drawn a notable amount of interest over the past years. This field aims to classify an input image based on its visual content. Until recently, most approaches relied on hand-crafted features to describe an image in a particular way; learnable classifiers, such as random forests and decision trees, were then applied to the extracted features to reach a final decision. The problem arises when large numbers of photos are concerned: it becomes too difficult to engineer features for them. This is one of the reasons the deep neural network model was introduced. Owing to deep learning, it becomes feasible to represent the hierarchical nature of features using a number of layers and the weights associated with them. Existing image classification methods have gradually been applied to real-world problems, but various problems arise in their application, such as unsatisfactory results, extremely low classification accuracy, and weak adaptive ability. Models using deep learning concepts have robust learning ability; they combine feature extraction and classification into a whole that completes the image classification task, which can improve classification accuracy effectively. Convolutional neural networks are a powerful deep neural network technique. These networks preserve the spatial structure of a problem and were built for object recognition tasks such as classifying an image into its respective class. Neural networks are well known because they achieve state-of-the-art results on complex computer vision and natural language processing tasks, and convolutional neural networks have been used extensively.
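The spatial-structure property mentioned above can be illustrated with a minimal 2D convolution: the kernel slides over the image, so neighbouring pixels stay neighbours in the output. This is a from-scratch sketch, not a production implementation:

```python
import numpy as np

# Minimal 2D convolution (valid padding, single channel, stride 1).
def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Each output pixel summarizes a local patch of the input,
            # so the output keeps the input's spatial layout.
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
edge = np.array([[1.0, -1.0]])        # tiny horizontal edge-detector kernel
print(conv2d(image, edge).shape)      # (5, 4)
```

A CNN stacks many such filters (with learned weights) and pooling layers, but every layer keeps this local, spatially organized view of its input.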


2020 ◽  
Author(s):  
Song Li ◽  
Yu-Qin Deng ◽  
Hong-Li Hua ◽  
Sheng-Lan Li ◽  
Xi-Xiang Chen ◽  
...  

Abstract Background: Although several studies have reported using AI to predict the prognosis of nasopharyngeal carcinoma (NPC) from magnetic resonance (MR) images, the information surrounding the tumor was not exploited and post-treatment MR images were ignored. Herein we aimed to predict the prognosis of advanced NPC (stage III–IVa) using pre- and post-treatment MR images based on deep learning (DL). Methods: A total of 206 patients with primary NPC who were diagnosed and treated at the Renmin Hospital of Wuhan University between June 2012 and January 2018 were retrospectively selected. A rectangular region of interest (ROI), which included the tumor area and the surrounding tissues and organs, was delineated on each pre- and post-treatment MR image. Two InceptionResNetV2-based transfer learning models, named the pre-model and the post-model, were trained with the Pre-dataset and the Post-dataset, respectively. In addition, an ensemble learning model based on the pre- and post-models was trained. The three established models were evaluated by receiver operating characteristic (ROC) analysis, confusion matrices, and Harrell's concordance index (C-index) on the test set. High-risk-related heat maps were developed according to the DL models. Results: The pre-model, post-model, and ensemble model displayed a C-index of 0.717 (95% CI: 0.639–0.795), 0.811 (95% CI: 0.745–0.877), and 0.830 (95% CI: 0.767–0.893), and an AUC of 0.745 (95% CI: 0.592–0.897), 0.820 (95% CI: 0.687–0.953), and 0.841 (95% CI: 0.715–0.968) for the test cohort, respectively. The post-model performed better than the pre-model, which indicates the importance of post-treatment images for prognosis prediction. All three DL models performed better than the TNM staging system.
The captured features presented on heat maps showed that the areas around the tumor and the lymph nodes were related to the prognosis of the tumor. Conclusions: The three established DL models based on pre- and post-treatment MR images perform better than TNM staging. Post-treatment MR images are of great significance for prognosis prediction and could contribute to clinical decision-making.
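The ensemble idea, combining pre- and post-model outputs, can be sketched with toy numbers. The probabilities and labels below are illustrative, not the study's data; the ensemble here is a simple average, and AUC is computed as the probability that a positive case outranks a negative one:

```python
# Hypothetical sketch: average two models' risk probabilities, then score
# with AUC (rank-based definition). All numbers are made up for illustration.
def auc(scores, labels):
    """P(random positive outranks random negative), ties counted as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    pairs = [(p > n) + 0.5 * (p == n) for p in pos for n in neg]
    return sum(pairs) / len(pairs)

labels = [1, 1, 0, 0, 1, 0]                # 1 = poor prognosis (toy labels)
pre_p  = [0.7, 0.4, 0.5, 0.2, 0.6, 0.3]   # pre-treatment model outputs
post_p = [0.8, 0.6, 0.3, 0.1, 0.9, 0.4]   # post-treatment model outputs
ens_p  = [(a + b) / 2 for a, b in zip(pre_p, post_p)]

print(auc(pre_p, labels), auc(post_p, labels), auc(ens_p, labels))
```

On this toy data the averaged scores separate the classes at least as well as either model alone, mirroring the qualitative pattern the abstract reports.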


2020 ◽  
Vol 23 (6) ◽  
pp. 1172-1191
Author(s):  
Artem Aleksandrovich Elizarov ◽  
Evgenii Viktorovich Razinkov

Recently, reinforcement learning has been an actively developing direction of machine learning. As a consequence, attempts are being made to use reinforcement learning for solving computer vision problems, in particular the problem of image classification. Computer vision tasks are currently among the most pressing problems of artificial intelligence. The article proposes a method for image classification in the form of a deep neural network using reinforcement learning. The idea of the developed method comes down to solving the problem of a contextual multi-armed bandit using various strategies for achieving a compromise between exploration and exploitation, together with reinforcement learning algorithms. Strategies such as ε-greedy, softmax, and decay-softmax, as well as the UCB1 method, are considered, along with reinforcement learning algorithms such as DQN, REINFORCE, and A2C. The influence of various parameters on the efficiency of the method is analyzed, and options for further development of the method are proposed.
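The contextual-bandit formulation of classification with an ε-greedy strategy might be sketched as follows; the two-context toy task, reward scheme, and all parameters are illustrative assumptions rather than the article's setup:

```python
import random

# Sketch: classification as a contextual multi-armed bandit. The agent picks
# a class (arm) for a context, gets reward 1 only if correct, and balances
# exploration vs. exploitation with an epsilon-greedy strategy. Toy task only.
rng = random.Random(0)
n_classes = 2
Q = [[0.0] * n_classes for _ in range(2)]  # value estimates Q[context][arm]
N = [[0] * n_classes for _ in range(2)]    # pull counts per (context, arm)

def eps_greedy(q, eps):
    if rng.random() < eps:
        return rng.randrange(len(q))                 # explore: random class
    return max(range(len(q)), key=q.__getitem__)     # exploit: best estimate

for _ in range(2000):
    ctx = rng.randrange(2)        # toy "image feature" acting as the context
    label = ctx                   # ground-truth class for this context
    arm = eps_greedy(Q[ctx], eps=0.1)
    reward = 1.0 if arm == label else 0.0   # bandit feedback: correct or not
    N[ctx][arm] += 1
    Q[ctx][arm] += (reward - Q[ctx][arm]) / N[ctx][arm]  # incremental mean

policy = [max(range(n_classes), key=Q[c].__getitem__) for c in range(2)]
print(policy)
```

In the article's setting the context would be an image processed by a deep network rather than a table index, and DQN, REINFORCE, or A2C would replace the tabular value update, but the reward structure is the same.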


2020 ◽  
pp. 1-1
Author(s):  
William Taylor ◽  
Kia Dashtipour ◽  
Syed Aziz Shah ◽  
Muhammad A. Imran ◽  
Qammer H. Abbasi
