Skin Cancer Classification Using Convolutional Neural Networks with Integrated Patient Data: A Systematic Review (Preprint)

2020 ◽  
Author(s):  
Julia Höhn ◽  
Achim Hekler ◽  
Eva Krieghoff-Henning ◽  
Jakob Nikolas Kather ◽  
Jochen Sven Utikal ◽  
...  

BACKGROUND In the past years, the accuracy of skin cancer classification by convolutional neural networks (CNNs) has improved substantially. On classification tasks involving single images, CNNs have performed on par with or better than dermatologists. In clinical practice, however, dermatologists also use patient data beyond the visual aspects present in a digitized image, which increases their diagnostic accuracy. The effect of integrating different subtypes of patient data into CNN-based skin cancer classifiers was recently investigated in several pilot studies. OBJECTIVE This systematic review focuses on current research investigating the impact of merging image features with patient data on the performance of CNN-based skin cancer classification. The aim is to explore the potential of this field of research by evaluating the types of patient data used, the ways the non-image data are encoded and merged with the image features, and the impact of the integration on classifier performance. METHODS Google Scholar, PubMed, MEDLINE, and ScienceDirect were screened for peer-reviewed studies published in English that deal with the integration of patient data into CNN-based skin cancer classification. The search terms skin cancer classification, convolutional neural network(s), deep learning, lesions, melanoma, metadata, clinical information, and patient data were combined. RESULTS A total of 11 publications fulfilled the inclusion criteria. All of them reported an overall improvement on different skin lesion classification tasks when patient data were integrated. The most commonly used patient data were age, sex, and lesion location. Patient data were mostly one-hot encoded. The studies differed in how extensively the encoded patient data were processed with deep learning methods before and after being fused with the image features in a ‘combined classifier’.
CONCLUSIONS The present studies indicate a potential benefit of integrating patient data into CNN-based diagnostic algorithms. However, exactly how individual patient data enhance classification performance, especially for multiclass classification problems, is still unclear. Moreover, a substantial fraction of the patient data used by dermatologists remains to be analyzed in the context of CNN-based skin cancer classification. Further exploratory analyses in this promising field may optimize patient data integration into CNN-based skin cancer diagnostics for the benefit of the patient.
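The encoding-and-fusion scheme the review describes (one-hot patient data concatenated with CNN image features before a ‘combined classifier’) can be sketched as follows. The category lists, feature dimension, and function names are illustrative assumptions, not taken from any of the reviewed studies.

```python
import numpy as np

# Hypothetical categories for the most commonly used patient data
# (age, sex, lesion location); real studies use their own vocabularies.
SEXES = ["female", "male"]
LOCATIONS = ["head", "trunk", "upper_limb", "lower_limb"]

def one_hot(value, categories):
    """Return a one-hot vector marking `value` within `categories`."""
    vec = np.zeros(len(categories))
    vec[categories.index(value)] = 1.0
    return vec

def encode_metadata(age, sex, location, max_age=100.0):
    """Encode patient data: normalized age plus one-hot sex and location."""
    return np.concatenate([[age / max_age],
                           one_hot(sex, SEXES),
                           one_hot(location, LOCATIONS)])

def fuse(image_features, metadata_vector):
    """Concatenate CNN image features with the encoded patient data,
    forming the input of a 'combined classifier'."""
    return np.concatenate([image_features, metadata_vector])

# Example: a 512-dim CNN feature vector (stand-in values) fused with
# metadata for a 55-year-old male patient with a lesion on the trunk.
img_feat = np.random.rand(512)
meta = encode_metadata(55, "male", "trunk")
combined = fuse(img_feat, meta)
print(combined.shape)  # (519,)
```

The reviewed studies differ mainly in how much additional processing (e.g. further dense layers) the metadata vector receives before and after this concatenation step.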

2021 ◽  
Vol 156 ◽  
pp. 202-216 ◽  
Author(s):  
Sarah Haggenmüller ◽  
Roman C. Maron ◽  
Achim Hekler ◽  
Jochen S. Utikal ◽  
Catarina Barata ◽  
...  

10.2196/11936 ◽  
2018 ◽  
Vol 20 (10) ◽  
pp. e11936 ◽  
Author(s):  
Titus Josef Brinker ◽  
Achim Hekler ◽  
Jochen Sven Utikal ◽  
Niels Grabe ◽  
Dirk Schadendorf ◽  
...  

Author(s):  
Zinah Mohsin Arkah ◽  
Dalya S. Al-Dulaimi ◽  
Ahlam R. Khekan

Skin cancer is one of the most dangerous diseases, and early diagnosis can save many people’s lives. Manual classification methods are time-consuming and costly, so deep learning has been proposed for the automated classification of skin cancer. Although deep learning has shown impressive performance on several medical imaging tasks, it requires a large number of images to achieve good performance. The skin cancer classification task struggles to provide deep learning with sufficient data because annotation is expensive and requires experts. One of the most widely used solutions is transfer learning from models pre-trained on the ImageNet dataset. However, the features learned by these pre-trained models differ from skin cancer image features. To this end, we introduce a novel transfer learning approach: we first train ImageNet pre-trained models (VGG, GoogleNet, and ResNet50) on a large number of unlabelled skin cancer images, and then train them on a small number of labeled skin images. Our experimental results showed that the proposed method is effective: ResNet50 achieved an accuracy of 84% when trained directly on the small labeled set and 93.7% when trained with the proposed approach.
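The two-stage idea above (adapt to unlabeled in-domain data first, then fine-tune on a small labeled set) can be illustrated without a deep learning framework. The abstract fine-tunes ImageNet CNNs; as a minimal framework-free stand-in, stage 1 here estimates feature statistics from unlabeled data and stage 2 trains a logistic-regression head on a few labeled examples. All data and dimensions below are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def stage1_adapt(unlabeled_features):
    """Stage 1 stand-in: learn domain statistics from unlabeled data."""
    mu = unlabeled_features.mean(axis=0)
    sigma = unlabeled_features.std(axis=0) + 1e-8
    return mu, sigma

def stage2_finetune(features, labels, mu, sigma, lr=0.1, epochs=200):
    """Stage 2: fit a linear head on the small labeled set,
    normalizing with the statistics learned in stage 1."""
    x = (features - mu) / sigma
    w = np.zeros(x.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(x @ w + b)))       # sigmoid probabilities
        w -= lr * (x.T @ (p - labels)) / len(labels)  # gradient step on weights
        b -= lr * (p - labels).mean()                 # gradient step on bias
    return w, b

# Synthetic "CNN features": many unlabeled samples, few labeled ones.
unlabeled = rng.normal(5.0, 2.0, size=(1000, 16))
labeled = rng.normal(5.0, 2.0, size=(20, 16))
labels = (labeled[:, 0] > 5.0).astype(float)  # toy binary task

mu, sigma = stage1_adapt(unlabeled)
w, b = stage2_finetune(labeled, labels, mu, sigma)
preds = (((labeled - mu) / sigma) @ w + b > 0).astype(float)
print("train accuracy:", (preds == labels).mean())
```

The point of the sketch is the data flow, not the model: the large unlabeled pool shapes the representation before the scarce labels are ever touched, which is the mechanism the paper credits for the jump from 84% to 93.7%.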


2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Malte Seemann ◽  
Lennart Bargsten ◽  
Alexander Schlaefer

Deep learning methods produce promising results when applied to a wide range of medical imaging tasks, including segmentation of the artery lumen in computed tomography angiography (CTA) data. However, to perform sufficiently well, neural networks have to be trained on large amounts of high-quality annotated data. In the realm of medical imaging, annotations are not only quite scarce but also often not entirely reliable. To tackle both challenges, we developed a two-step approach for generating realistic synthetic CTA data for the purpose of data augmentation. In the first step, moderately realistic images are generated in a purely numerical fashion. In the second step, these images are improved by applying neural domain adaptation. We evaluated the impact of synthetic data on lumen segmentation via convolutional neural networks (CNNs) by comparing the resulting performances. Improvements of up to 5% in terms of the Dice coefficient and 20% for the Hausdorff distance represent a proof of concept that the proposed augmentation procedure can be used to enhance deep learning-based segmentation of the artery lumen in CTA images.
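The gains above are reported as Dice coefficient and Hausdorff distance, the two standard segmentation metrics. A minimal sketch of both for binary masks follows; the brute-force pairwise Hausdorff computation is an illustrative simplification suitable only for small masks.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice overlap between two binary masks (1.0 = identical)."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * intersection / (mask_a.sum() + mask_b.sum())

def hausdorff_distance(mask_a, mask_b):
    """Symmetric Hausdorff distance between the foreground pixels of
    two binary masks (0.0 = identical), computed brute-force."""
    pts_a = np.argwhere(mask_a)
    pts_b = np.argwhere(mask_b)
    # Pairwise distances between all foreground pixels of both masks.
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Toy example: a predicted lumen mask shifted down by one pixel.
truth = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool)
pred[3:7, 2:6] = True

print("Dice:", dice_coefficient(truth, pred))         # 0.75
print("Hausdorff:", hausdorff_distance(truth, pred))  # 1.0
```

Note the complementary character of the two metrics, which is why the paper reports both: Dice measures bulk overlap, while Hausdorff is sensitive to the single worst boundary error.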


Author(s):  
Moloud Abdar ◽  
Maryam Samami ◽  
Sajjad Dehghani Mahmoodabad ◽  
Thang Doan ◽  
Bogdan Mazoure ◽  
...  

2021 ◽  
Vol 20 ◽  
pp. 153303382110163
Author(s):  
Danju Huang ◽  
Han Bai ◽  
Li Wang ◽  
Yu Hou ◽  
Lan Li ◽  
...  

With the massive use of computers, the growth and explosion of data have greatly promoted the development of artificial intelligence (AI). The rise of deep learning (DL) algorithms, such as convolutional neural networks (CNNs), has provided radiation oncologists with many promising tools that can simplify the complex radiotherapy process in the clinical work of radiation oncology, improve the accuracy and objectivity of diagnosis, and reduce the workload, thus enabling clinicians to spend more time on advanced decision-making tasks. As the development of DL gets closer to clinical practice, radiation oncologists will need to be more familiar with its principles to properly evaluate and use this powerful tool. In this paper, we explain the development and basic concepts of AI and discuss its application in radiation oncology based on the different task categories of DL algorithms. This work clarifies the possibility of further development of DL in radiation oncology.


2021 ◽  
Vol 2 (01) ◽  
pp. 41-51
Author(s):  
Jwan Saeed ◽  
Subhi Zeebaree

Skin cancer is among the primary cancer types that manifest due to various dermatological disorders, which may be further classified into several types based on morphological features, color, structure, and texture. The mortality rate of skin cancer patients is contingent on early and rapid detection and diagnosis of malignant skin cancer cells. Limitations of current dermoscopic images, including shadows, artifacts, and noise, affect image quality, which may hamper detection efforts. Attempts to overcome these challenges have been made by analyzing the images with deep learning neural networks to perform skin cancer detection. In this paper, the authors review the state of the art in authoritative deep learning concepts pertinent to skin cancer detection and classification.


2021 ◽  
Vol 155 ◽  
pp. 200-215
Author(s):  
Sara Kuntz ◽  
Eva Krieghoff-Henning ◽  
Jakob N. Kather ◽  
Tanja Jutzi ◽  
Julia Höhn ◽  
...  
