Fully Leveraging Deep Learning Methods for Constructing Retinal Fundus Photomontages

2021 ◽  
Vol 11 (4) ◽  
pp. 1754
Author(s):  
Jooyoung Kim ◽  
Sojung Go ◽  
Kyoungjin Noh ◽  
Sangjun Park ◽  
Soochahn Lee

Retinal photomontages, which are constructed by aligning and integrating multiple fundus images, are useful in diagnosing retinal diseases affecting the peripheral retina. We present a novel framework for constructing retinal photomontages that fully leverages recent deep learning methods. Deep-learning-based object detection is used to define the order of image registration and blending. Deep-learning-based vessel segmentation is used to enhance image texture and improve registration performance within a two-step image registration framework comprising rigid and non-rigid registration. Experimental evaluation demonstrates the robustness of our montage construction method, with an increased number of successfully integrated images as well as a reduction in image artifacts.
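The rigid half of the two-step registration described above can be illustrated with a minimal phase-correlation sketch. This is not the paper's method, only a common stand-in for rigid (here, pure-translation) alignment; the function name and toy image sizes are illustrative.

```python
import numpy as np

def estimate_rigid_shift(fixed, moving):
    """Estimate the integer translation mapping `fixed` onto `moving`
    via phase correlation (a minimal stand-in for a rigid step)."""
    F = np.fft.fft2(fixed)
    M = np.fft.fft2(moving)
    cross = np.conj(F) * M
    cross /= np.abs(cross) + 1e-9          # keep only the phase
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = fixed.shape
    # wrap shifts larger than half the image size to negative values
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

# toy check: a patch shifted by a known offset is recovered exactly
rng = np.random.default_rng(0)
fixed = rng.random((32, 32))
moving = np.roll(fixed, (4, -6), axis=(0, 1))
shift = estimate_rigid_shift(fixed, moving)
```

In a full pipeline, the rigid estimate would seed a subsequent non-rigid (deformable) refinement, with vessel-segmentation maps supplying the texture that makes the correlation peak sharp on otherwise smooth fundus regions.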

Author(s):  
Mohammad Shorfuzzaman ◽  
M. Shamim Hossain ◽  
Abdulmotaleb El Saddik

Diabetic retinopathy (DR) is one of the most common causes of vision loss in people who have had diabetes for a prolonged period. Convolutional neural networks (CNNs) have become increasingly popular for computer-aided DR diagnosis using retinal fundus images. While these CNNs are highly reliable, their lack of sufficient explainability prevents them from being widely used in medical practice. In this article, we propose a novel explainable deep learning ensemble model in which weights from different models are fused into a single model to extract salient features from the various retinal lesions found on fundus images. The extracted features are then fed to a custom classifier for the final diagnosis of DR severity level. The model is trained on the APTOS dataset containing retinal fundus images of various DR grades, using a cyclical learning rate strategy with an automatic learning rate finder to decay the learning rate and improve model accuracy. We develop an explainability approach by leveraging gradient-weighted class activation mapping (Grad-CAM) and Shapley additive explanations (SHAP) to highlight the areas of fundus images that are most indicative of different DR stages. This allows ophthalmologists to view our model's decision in a way that they can understand. Evaluation results using three different datasets (APTOS, MESSIDOR, IDRiD) show the effectiveness of our model, achieving superior classification rates with a high degree of precision (0.970), sensitivity (0.980), and AUC (0.978). We believe that the proposed model, which jointly offers state-of-the-art diagnostic performance and explainability, will address the black-box nature of deep CNN models in the robust detection of DR grades.
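The cyclical learning rate strategy mentioned in the abstract is commonly the triangular policy of Smith (2017), in which the rate rises linearly from a base value to a maximum and back over each cycle. The sketch below shows that policy under assumed hyperparameters; the paper's exact schedule (and its interaction with the automatic learning rate finder) may differ.

```python
def triangular_clr(step, base_lr=1e-4, max_lr=1e-2, step_size=2000):
    """Triangular cyclical learning rate: climbs linearly from
    base_lr to max_lr over `step_size` steps, then descends back,
    repeating every 2 * step_size steps."""
    cycle = step // (2 * step_size)
    x = abs(step / step_size - 2 * cycle - 1)   # 1 at cycle edges, 0 at peak
    return base_lr + (max_lr - base_lr) * max(0.0, 1.0 - x)
```

An automatic learning rate finder typically picks `base_lr` and `max_lr` by sweeping the rate over a short run and noting where the loss starts to fall and where it diverges.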


2020 ◽  
Vol 217 ◽  
pp. 121-130 ◽  
Author(s):  
Jooyoung Chang ◽  
Ahryoung Ko ◽  
Sang Min Park ◽  
Seulggie Choi ◽  
Kyuwoong Kim ◽  
...  

2019 ◽  
Vol 4 (1) ◽  
pp. 18-27 ◽  
Author(s):  
Akinori Mitani ◽  
Abigail Huang ◽  
Subhashini Venugopalan ◽  
Greg S. Corrado ◽  
Lily Peng ◽  
...  

10.2196/28868 ◽  
2021 ◽  
Vol 9 (5) ◽  
pp. e28868
Author(s):  
Eugene Yu-Chuan Kang ◽  
Ling Yeung ◽  
Yi-Lun Lee ◽  
Cheng-Hsiu Wu ◽  
Shu-Yen Peng ◽  
...  

Background: Retinal vascular diseases, including diabetic macular edema (DME), neovascular age-related macular degeneration (nAMD), myopic choroidal neovascularization (mCNV), and branch and central retinal vein occlusion (BRVO/CRVO), are considered vision-threatening eye diseases. However, accurate diagnosis depends on multimodal imaging and the expertise of retinal ophthalmologists.

Objective: The aim of this study was to develop a deep learning model to detect treatment-requiring retinal vascular diseases using multimodal imaging.

Methods: This retrospective study enrolled participants with multimodal ophthalmic imaging data from 3 hospitals in Taiwan from 2013 to 2019. Eye-related images were used, including those obtained through retinal fundus photography, optical coherence tomography (OCT), and fluorescein angiography with or without indocyanine green angiography (FA/ICGA). A deep learning model was constructed for detecting DME, nAMD, mCNV, BRVO, and CRVO and identifying treatment-requiring diseases. Model performance was evaluated and is presented as the area under the curve (AUC) for each receiver operating characteristic curve.

Results: A total of 2992 eyes of 2185 patients were studied, with 239, 1209, 1008, 211, 189, and 136 eyes in the control, DME, nAMD, mCNV, BRVO, and CRVO groups, respectively. Among them, 1898 eyes required treatment. The eyes were divided into training, validation, and testing groups in a 5:1:1 ratio. In total, 5117 retinal fundus photos, 9316 OCT images, and 20,922 FA/ICGA images were used. The AUCs for detecting mCNV, DME, nAMD, BRVO, and CRVO were 0.996, 0.995, 0.990, 0.959, and 0.988, respectively. The AUC for detecting treatment-requiring diseases was 0.969. From the heat maps, we observed that the model could identify retinal vascular diseases.

Conclusions: Our study developed a deep learning model to detect retinal diseases using multimodal ophthalmic imaging. Furthermore, the model demonstrated good performance in detecting treatment-requiring retinal diseases.
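The per-disease AUCs reported above summarize how well per-eye scores separate diseased from control eyes. As a reference for what that metric computes, here is a minimal sketch of AUC via the Mann-Whitney rank formulation; the labels and scores below are made-up toy data, not the study's.

```python
def roc_auc(labels, scores):
    """AUC as the fraction of (positive, negative) pairs whose
    scores are ranked correctly (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.969, as reported for treatment-requiring disease, means roughly 97% of randomly drawn (treatment-requiring, control) pairs are ranked correctly by the model's score.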


2020 ◽  
Vol 10 (7) ◽  
pp. 1540-1546
Author(s):  
Xiaomei Xu ◽  
Xiaobo Lai ◽  
Yanli Liu

Glaucoma is a chronic and irreversible eye disease leading to blindness, and early detection is particularly important for its diagnosis and treatment. To improve the performance of automatic glaucoma diagnosis, a method based on multiple features and multiple classifiers is proposed. Firstly, an average histogram is obtained for each channel and ophthalmic condition, and 6 features are extracted from the average histogram using the average pixel count and the maximum intensity value. Secondly, the optimal feature combination is screened for each classifier with 10-fold cross-validation. Finally, the three optimal classifiers and their optimal feature combinations are fused on the principle of democratic voting. Under 10-fold cross-validation, the fusion model was evaluated on a local dataset and the HRF dataset, achieving accuracies of 91.8% and 93.3%, sensitivities of 86.9% and 86.7%, specificities of 96.7% and 100%, AUCs of 0.953 and 0.978, and time costs of 1.0 s and 3.9 s per image, respectively. Simulation results show that the proposed method achieves high accuracy and generalizes well. It can effectively classify retinal fundus images and provide technical support for the clinical diagnosis of retinal diseases.
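The "democratic voting" fusion step described above can be sketched as a simple per-sample majority vote over the three classifiers' predicted labels. This is an assumed reading of the abstract's fusion rule, not the authors' exact implementation; the function name and toy labels are illustrative.

```python
from collections import Counter

def majority_vote(per_classifier_labels):
    """Fuse several classifiers' predictions by democratic voting:
    each sample receives the label most classifiers assigned it.
    `per_classifier_labels` is one label sequence per classifier."""
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*per_classifier_labels)]
```

With three voters and two classes (glaucoma / normal), every vote has a strict majority, so no tie-breaking rule is needed.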

