Efficient Fundus Image Gradeability Approach Based on Deep Reconstruction-Classification Network

2021 ◽  
Author(s):  
Saif Khalid ◽  
Saddam Abdulwahab ◽  
Hatem A. Rashwan ◽  
Julián Cristiano ◽  
Mohamed Abdel-Nasser ◽  
...  

The quality of a retinal image is vital for the screening of eye diseases such as glaucoma, diabetic retinopathy (DR), and age-related macular degeneration. Assessing retinal image quality prior to any diagnosis is therefore an important step in computer-aided diagnosis (CAD) applications: the reliability of the retinal image must be guaranteed for the diagnosis to be dependable. In this paper, we propose a novel retinal fundus image quality assessment (RIQA) method based on an autoencoder network that decides whether a retinal image is acceptable for screening. The autoencoder architecture is well suited to representing the key features of image quality, especially when the network can correctly reconstruct the input image. The proposed model consists of successive encoder and decoder networks. The encoder represents the features of the input image; in turn, the decoder reconstructs the input image. The features obtained from the encoder are then fed to a classifier that assigns the retinal image to one of two quality classes: gradable or ungradable. The experimental results revealed a more useful assessment, with the proposed deep model providing superior performance for RIQA. Our model can thus serve real-world clinical decision support systems in the healthcare domain.
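The reconstruction-classification idea described above can be sketched in miniature: a shared encoder feeds both a decoder (trained on reconstruction error) and a quality classifier. This is a minimal NumPy sketch with toy linear layers; the dimensions, activations, and loss weighting are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative assumptions, not the paper's architecture).
d_in, d_code = 64, 8                          # flattened patch -> latent code
W_enc = rng.normal(0, 0.1, (d_code, d_in))    # encoder weights
W_dec = rng.normal(0, 0.1, (d_in, d_code))    # decoder weights
w_cls = rng.normal(0, 0.1, d_code)            # classifier on the shared code

def forward(x):
    code = np.tanh(W_enc @ x)                 # encoder: compact representation
    recon = W_dec @ code                      # decoder: reconstruct the input
    p_gradable = 1.0 / (1.0 + np.exp(-w_cls @ code))  # binary quality score
    return code, recon, p_gradable

x = rng.normal(size=d_in)
code, recon, p = forward(x)

# Joint objective: reconstruction error keeps the code faithful to the image,
# cross-entropy trains the gradable/ungradable decision on the same code.
y = 1.0  # ground-truth label: gradable
loss = np.mean((recon - x) ** 2) - (y * np.log(p) + (1 - y) * np.log(1 - p))
print(round(float(loss), 4))
```

The point of the shared code is that features good enough to reconstruct the image tend to also carry the quality cues the classifier needs.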

2019 ◽  
Vol 19 (05) ◽  
pp. 1950030 ◽  
Author(s):  
XUEWEI WANG ◽  
SHULIN ZHANG ◽  
XIAO LIANG ◽  
CHUN ZHENG ◽  
JINJIN ZHENG ◽  
...  

Oculopathy is widespread among people of all ages around the world. Teleophthalmology can facilitate ophthalmological diagnosis in less developed countries that lack medical resources, and within it the assessment of retinal image quality is of great importance. In this paper, we propose a no-reference retinal image assessment system based on DenseNet, a convolutional neural network architecture. The system classifies fundus images into good and bad quality, or into five categories: adequate, just noticeable blur, inappropriate illumination, incomplete optic disc, and opacity. The proposed system was evaluated on different datasets and compared with applications based on two other networks, VGG-16 and GoogLeNet. For binary classification, the good-versus-bad classifier achieves an AUC of 1.000, and the degradation-specific classifiers that distinguish one specified degradation from the rest achieve AUC values of 0.972, 0.990, 0.982, and 0.982 for the four degradation categories, respectively. Multi-classification based on DenseNet achieves an overall accuracy of 0.927, significantly higher than the 0.549 and 0.757 obtained with VGG-16 and GoogLeNet, respectively. The experimental results indicate that the proposed approach produces outstanding performance in retinal image quality assessment and is worth applying in ophthalmological telemedicine; it is also robust to image noise. This study fills the gap of multi-classification in retinal image quality assessment.


Author(s):  
V. V. Starovoitov ◽  
Y. I. Golub ◽  
M. M. Lukashevich

Diabetic retinopathy (DR) is a disease caused by complications of diabetes. It starts asymptomatically and can end in blindness. To detect it, doctors use special fundus cameras that register images of the retina in the visible range of the spectrum. These images show the features that determine the presence of DR and its grade. Researchers around the world are developing systems for the automated analysis of fundus images, and at present the classification accuracy of machine-learning systems for diseases caused by DR is comparable to that of qualified medical doctors. The article shows how the retina is represented in digital images by different cameras. We define the task of developing a universal approach to the quality assessment of a retinal image obtained by an arbitrary fundus camera; this task is solved in the first block of any automated retinal image analysis system. The quality assessment procedure is carried out in several stages. At the first stage, the original image is binarized and a retinal mask is built. Such a mask is individual for each image, even among images recorded by the same camera; for this, a new universal retinal image binarization algorithm is proposed. By analyzing the binarization result, it is possible to identify and remove image outliers, which show not the retina but other objects. Next, the problem of no-reference image quality assessment is solved and images are classified into two classes: satisfactory and unsatisfactory for analysis. Contrast, sharpness, and the possibility of segmenting the vascular system on the retinal image are evaluated step by step. It is shown that the problem of no-reference quality assessment of an arbitrary fundus image can be solved. Experiments were performed on a variety of images from the available retinal image databases.
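The first stage of the pipeline above, building a retinal mask by binarization, can be illustrated with a classic thresholding step. The article proposes its own universal binarization algorithm; the sketch below instead uses Otsu's method on a synthetic bright-retina-on-dark-background image, purely to show what a per-image mask looks like.

```python
import numpy as np

def otsu_threshold(gray):
    """Pick the threshold maximizing between-class variance (Otsu's method)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    w0, sum0, best_t, best_var = 0, 0.0, 0, 0.0
    for t in range(256):
        w0 += hist[t]                     # pixels at or below threshold t
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        m0 = sum0 / w0                    # mean of the dark class
        m1 = (sum_all - sum0) / (total - w0)  # mean of the bright class
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Synthetic "fundus": bright circular retina on a dark background.
yy, xx = np.mgrid[0:64, 0:64]
img = np.where((yy - 32) ** 2 + (xx - 32) ** 2 < 28 ** 2, 180, 10).astype(np.uint8)
mask = img > otsu_threshold(img)
print(mask.sum())  # number of pixels inside the retinal mask
```

On a real fundus photograph the histogram is far less clean, which is exactly why a camera-independent binarization algorithm is needed.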


2010 ◽  
Vol 35 (8) ◽  
pp. 757-761 ◽  
Author(s):  
Carolina Ortiz ◽  
José R. Jiménez ◽  
Francisco Pérez-Ocón ◽  
José J Castro ◽  
Rosario González-Anera

Author(s):  
Bhargav Bhatkalkar ◽  
Abhishek Joshi ◽  
Srikanth Prabhu ◽  
Sulatha Bhandary

Automated fundus image analysis is used as a tool for the diagnosis of common retinal diseases. A good-quality fundus image results in a better diagnosis, so discarding degraded fundus images at screening time provides an opportunity to retake adequate fundus photographs, saving both time and resources. In this paper, we propose a novel fundus image quality assessment (IQA) model using a convolutional neural network (CNN), based on the quality of optic disc (OD) visibility. We localize the OD by transfer learning with the Inception v3 model. Precise segmentation of the OD is done using the GrabCut algorithm. Contour operations are applied to the segmented OD to approximate it by the nearest circle and find its center and diameter. For training the model, we use publicly available fundus databases and a private hospital database. We have attained excellent classification accuracy for fundus IQA on the DRIVE, CHASE-DB, and HRF databases. For OD segmentation, we evaluated our method on the DRINS-DB, DRISHTI-GS, and RIM-ONE v.3 databases and compared the results with existing state-of-the-art methods. Our proposed method outperforms existing methods for OD segmentation on the Jaccard index and F-score metrics.
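The circle-approximation step above (center and diameter from a segmented OD) can be sketched simply: take the mask centroid as the center and the equivalent-area diameter as the size. This is a minimal NumPy stand-in for the paper's contour operations, shown on a synthetic circular mask.

```python
import numpy as np

def disc_center_and_diameter(mask):
    """Approximate a segmented optic-disc mask by a circle:
    centroid as center, equivalent-area diameter as size."""
    ys, xs = np.nonzero(mask)
    center = (xs.mean(), ys.mean())               # centroid of mask pixels
    diameter = 2.0 * np.sqrt(mask.sum() / np.pi)  # circle with the same area
    return center, diameter

# Synthetic OD mask: a filled circle of radius 10 centered at (x=30, y=20).
yy, xx = np.mgrid[0:64, 0:64]
mask = (yy - 20) ** 2 + (xx - 30) ** 2 <= 10 ** 2
(cx, cy), d = disc_center_and_diameter(mask)
print(round(cx), round(cy), round(d))
```

On a real GrabCut output the mask is irregular, so contour smoothing before this step keeps the fitted circle from being biased by segmentation noise.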


2020 ◽  
Vol 2020 (9) ◽  
pp. 323-1-323-8
Author(s):  
Litao Hu ◽  
Zhenhua Hu ◽  
Peter Bauer ◽  
Todd J. Harris ◽  
Jan P. Allebach

Image quality assessment has been a very active research area in image processing, and numerous methods have been proposed. However, most existing methods focus on digital images that only or mainly contain pictures or photos taken by digital cameras. Traditional approaches evaluate an input image as a whole and estimate a quality score that gives viewers an idea of how “good” the image looks. In this paper, we focus instead on evaluating the quality of symbolic content such as text, barcodes, QR codes, lines, and handwriting in target images. A quality score for this kind of information can be based on whether it is readable by a human or recognizable by a decoder. Moreover, we mainly study the viewing quality of a scanned document of a printed image. For this purpose, we propose a novel image quality assessment algorithm that determines the readability of a scanned document or of regions within it. Experimental results on test images demonstrate the effectiveness of our method.
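One common building block for this kind of readability scoring is a local sharpness measure: blurred text loses the high-frequency edges a reader or decoder depends on. The abstract does not give the paper's algorithm, so the sketch below shows only a generic proxy, the variance of a discrete Laplacian, on synthetic patches.

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of a discrete 4-neighbor Laplacian: a common no-reference
    sharpness proxy (not the paper's method; shown only to illustrate
    how a patch can be scored). Edges wrap via np.roll in this sketch."""
    g = gray.astype(float)
    lap = (-4 * g
           + np.roll(g, 1, 0) + np.roll(g, -1, 0)
           + np.roll(g, 1, 1) + np.roll(g, -1, 1))
    return lap.var()

rng = np.random.default_rng(0)
sharp = rng.integers(0, 256, (64, 64))        # high-frequency "text" patch
blurred = np.full((64, 64), sharp.mean())     # featureless, unreadable patch
print(laplacian_variance(sharp) > laplacian_variance(blurred))
```

A readability classifier would threshold or learn from such per-region scores rather than score the scanned page as a whole.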


2021 ◽  
Vol 11 (5) ◽  
pp. 321
Author(s):  
Kyoung Min Kim ◽  
Tae-Young Heo ◽  
Aesul Kim ◽  
Joohee Kim ◽  
Kyu Jin Han ◽  
...  

Artificial intelligence (AI)-based diagnostic tools have been accepted in ophthalmology, and the use of retinal images such as fundus photographs is a promising approach for developing AI-based diagnostic platforms. Retinal pathologies occur across a broad spectrum of eye diseases, including neovascular and dry age-related macular degeneration, epiretinal membrane, rhegmatogenous retinal detachment, retinitis pigmentosa, macular hole, retinal vein occlusions, and diabetic retinopathy. Here, we report a fundus image-based AI model for the differential diagnosis of retinal diseases. We classified retinal images with three convolutional neural network models: ResNet50, VGG19, and Inception v3. Furthermore, the performance of several dense (fully connected) layer configurations was compared. The prediction accuracy over nine classes (eight retinal diseases and a normal control) was 87.42% for the ResNet50 model with an added 128-node dense layer. Our AI tool also augments ophthalmologists' performance in the diagnosis of retinal disease. These results suggest that the fundus image-based AI tool is applicable to the medical diagnosis process for retinal diseases.
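The best configuration above adds a 128-node dense layer between the CNN backbone and the 9-way output. This is a minimal NumPy sketch of such a classification head; the 2048-d input (the usual ResNet50 pooled feature size), ReLU activation, and random weights are assumptions for illustration, not the trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative head on top of a frozen CNN backbone (dimensions assumed:
# 2048-d pooled ResNet50 features -> 128-node dense layer -> 9 classes).
W1 = rng.normal(0, 0.05, (128, 2048))
W2 = rng.normal(0, 0.05, (9, 128))

def classify(features):
    h = np.maximum(0.0, W1 @ features)   # added dense layer with ReLU
    logits = W2 @ h
    e = np.exp(logits - logits.max())    # numerically stable softmax
    return e / e.sum()                   # probabilities over the 9 classes

probs = classify(rng.normal(size=2048))
print(int(probs.argmax()), round(float(probs.sum()), 6))
```

Comparing dense-layer widths, as the study does, amounts to varying the 128 in `W1` and retraining the head.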


2021 ◽  
Vol 7 (7) ◽  
pp. 112
Author(s):  
Domonkos Varga

The goal of no-reference image quality assessment (NR-IQA) is to evaluate the perceptual quality of digital images without using their distortion-free, pristine counterparts. NR-IQA is an important part of multimedia signal processing, since digital images can undergo a wide variety of distortions during storage, compression, and transmission. In this paper, we propose a novel architecture that extracts deep features from the input image at multiple scales to improve the effectiveness of feature extraction for NR-IQA using convolutional neural networks. Specifically, the proposed method extracts deep activations for local patches at multiple scales and maps them onto perceptual quality scores with the help of trained Gaussian process regressors. Extensive experiments demonstrate that the introduced algorithm performs favorably against state-of-the-art methods on three large benchmark datasets with authentic distortions (LIVE In the Wild, KonIQ-10k, and SPAQ).
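The final mapping step above, features to quality scores via Gaussian process regression, can be sketched compactly. Below is a bare-bones GP mean prediction with an RBF kernel in NumPy; the one-dimensional features and score values are synthetic stand-ins for pooled multi-scale CNN activations and mean opinion scores.

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """RBF (squared-exponential) kernel matrix between row-vector sets."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_predict(X_train, y_train, X_test, noise=1e-4):
    """Gaussian process regression: posterior mean at the test inputs."""
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train)      # K^{-1} y
    return rbf(X_test, X_train) @ alpha

# Hypothetical per-image features mapped to quality scores (synthetic values).
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([10.0, 30.0, 60.0, 80.0])
pred = gp_predict(X, y, np.array([[1.0]]))
print(round(float(pred[0]), 2))
```

A GP regressor is a natural fit here because it interpolates smoothly between the quality-annotated training images rather than imposing a fixed parametric form.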

