Fully Convolutional Network and Visual Saliency-Based Automatic Optic Disc Detection in Retinal Fundus Images

2021, Vol 2021, pp. 1-11
Author(s): Xiaosheng Yu, Ying Wang, Siqi Wang, Nan Hu

We present a novel optic disc detection method for retinal fundus images based on a fully convolutional network and visual saliency. First, we employ a morphological-reconstruction-based object detection method to roughly locate the optic disc region. Based on this location, a 400 × 400 image patch covering the whole optic disc is cropped from the original fundus image. Second, the Simple Linear Iterative Clustering (SLIC) approach segments this patch into many small superpixels. Third, each superpixel is assigned a uniform initial saliency value according to the background prior, under the assumption that superpixels on the image boundary belong to the background. Meanwhile, we use a pretrained fully convolutional network to extract deep features from different layers of the network and design a strategy to represent each superpixel by these deep features. Finally, both the background prior and the deep features are integrated into a single-layer cellular automata framework to obtain an accurate optic disc detection result. We evaluate the method on the DRISHTI-GS and RIM-ONE r3 datasets. The experimental results demonstrate that the proposed method effectively overcomes intensity inhomogeneity, weak contrast, and the complex surroundings of the optic disc, and achieves superior accuracy and robustness.
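The single-layer cellular automata refinement described above can be sketched roughly as follows. This is a minimal illustration of the common formulation in which each superpixel's saliency becomes a coherence-weighted blend of its own value and a similarity-weighted average of its neighbors' values; the impact-factor weighting, the coherence scaling constants `a` and `b`, and the toy inputs are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def sca_saliency(features, neighbors, init_saliency, steps=20, a=0.6, b=0.2):
    """Single-layer cellular automata saliency refinement (sketch).

    features: (N, D) per-superpixel feature vectors (here, stand-ins for
              the deep FCN features described in the abstract)
    neighbors: list of neighbor index lists, one per superpixel
    init_saliency: (N,) initial saliency from the background prior
    """
    n = len(init_saliency)
    # Impact-factor matrix: neighbors with similar features influence
    # each other more strongly (Gaussian of feature distance).
    impact = np.zeros((n, n))
    for i, nbrs in enumerate(neighbors):
        for j in nbrs:
            impact[i, j] = np.exp(-np.linalg.norm(features[i] - features[j]))
    row_sum = impact.sum(axis=1, keepdims=True)
    row_sum[row_sum == 0] = 1.0
    impact /= row_sum  # row-stochastic
    # Coherence: a cell with one dominant similar neighbor trusts its
    # neighborhood more; rescale into [b, a + b].
    strongest = np.clip(impact.max(axis=1), 1e-6, None)
    c = 1.0 / strongest
    c = a * (c - c.min()) / (c.max() - c.min() + 1e-12) + b
    c = np.clip(c, 0.0, 1.0)
    # Synchronous update: blend own saliency with neighborhood average.
    s = init_saliency.astype(float).copy()
    for _ in range(steps):
        s = c * s + (1.0 - c) * (impact @ s)
    return s
```

Because each update is a convex combination of values already in [0, 1], the refined saliency stays bounded while the boundary (background) prior propagates inward.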

2020, Vol 10 (11), pp. 3833
Author(s): Haidar Almubarak, Yakoub Bazi, Naif Alajlan

In this paper, we propose a method for localizing the optic nerve head and segmenting the optic disc/cup in retinal fundus images. The approach is based on a simple two-stage Mask-RCNN, in contrast to the sophisticated methods that represent the state of the art in the literature. In the first stage, we detect and crop around the optic nerve head, then feed the cropped image to the second stage. The second-stage network is trained with a weighted loss to produce the final segmentation. To further improve first-stage detection, we propose a new fine-tuning strategy that combines the cropping output of the first stage with the original training images to train a new detection network using different scales for the region proposal network anchors. We evaluate the method on the Retinal Fundus Images for Glaucoma Analysis (REFUGE), Magrabi, and MESSIDOR datasets, using the REFUGE training subset to train the models. Our method achieves 0.0430 mean absolute error in the vertical cup-to-disc ratio (MAE vCDR) on the REFUGE test set, compared to 0.0414 obtained by complex, multi-network ensemble methods. The models trained with the proposed method transfer well to datasets outside REFUGE, achieving MAE vCDR of 0.0785 and 0.077 on the MESSIDOR and Magrabi datasets, respectively, without retraining. In terms of detection accuracy, the proposed fine-tuning strategy improves the detection rate from 96.7% to 98.04% on MESSIDOR and from 93.6% to 100% on Magrabi, compared with the detection rates reported in the literature.
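The MAE vCDR metric reported above is straightforward to compute from predicted cup and disc segmentation masks. The sketch below is an illustrative implementation, assuming vCDR is defined as the ratio of the vertical extents of the cup and disc masks (the function names are ours, not from the paper):

```python
import numpy as np

def vertical_cdr(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio from two binary masks (sketch)."""
    def vertical_extent(mask):
        # Rows that contain at least one foreground pixel.
        rows = np.flatnonzero(mask.any(axis=1))
        return 0 if rows.size == 0 else int(rows[-1] - rows[0] + 1)
    disc_h = vertical_extent(disc_mask)
    return vertical_extent(cup_mask) / disc_h if disc_h else 0.0

def mae_vcdr(pred_pairs, gt_vcdrs):
    """Mean absolute error between predicted and ground-truth vCDRs."""
    preds = np.array([vertical_cdr(c, d) for c, d in pred_pairs])
    return float(np.mean(np.abs(preds - np.asarray(gt_vcdrs))))
```

For example, a cup mask spanning 5 rows inside a disc mask spanning 10 rows yields a vCDR of 0.5.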


Genes, 2019, Vol 10 (10), pp. 817
Author(s): Lizong Zhang, Shuxin Feng, Guiduo Duan, Ying Li, Guisong Liu

Microaneurysms (MAs) are the earliest detectable diabetic retinopathy (DR) lesions, so the ability to detect MAs automatically is critical for the early diagnosis of DR. However, accurate and reliable MA detection remains a significant challenge due to the size and complexity of retinal fundus images. This paper therefore presents a novel MA detection method based on a deep neural network with a multilayer attention mechanism. First, a series of equalization operations is performed to improve the quality of the fundus images. Then, based on the attention mechanism, multiple feature layers with salient target features are fused to achieve preliminary MA detection. Finally, the spatial relationships between MAs and blood vessels are used to perform a secondary screening of the preliminary detections and obtain the final MA detection results. We evaluated the method on the IDRiD_VOC dataset, derived from the open IDRiD dataset. The results show that our method effectively improves the average accuracy and sensitivity of MA detection.
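The vessel-based secondary screening could take a form like the sketch below. This is an assumption about the filter's direction, not the paper's exact rule: here, candidate detections whose centers fall on or very near vessel pixels are discarded as likely vessel-fragment false positives (the function name, radius, and brute-force distance search are all illustrative):

```python
import numpy as np

def screen_candidates(candidates, vessel_mask, radius=2):
    """Secondary screening of preliminary MA detections (sketch).

    candidates: iterable of (row, col) candidate centers
    vessel_mask: 2D boolean array of segmented vessel pixels
    radius: candidates within this many pixels of a vessel are dropped

    Brute-force nearest-vessel search; adequate at sketch scale.
    """
    vy, vx = np.nonzero(vessel_mask)
    if vy.size == 0:
        return list(candidates)  # no vessels segmented: keep everything
    vessel_pts = np.stack([vy, vx], axis=1)
    kept = []
    for y, x in candidates:
        # Squared distance to the nearest vessel pixel.
        d2 = ((vessel_pts - np.array([y, x])) ** 2).sum(axis=1).min()
        if d2 > radius * radius:
            kept.append((y, x))
    return kept
```

For large images, the brute-force search would typically be replaced by a distance transform of the vessel mask, which gives the nearest-vessel distance for every pixel in one pass.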

