Application of a deep convolutional neural network in the diagnosis of neonatal ocular fundus hemorrhage

2018 ◽  
Vol 38 (6) ◽  
Author(s):  
Binbin Wang ◽  
Li Xiao ◽  
Yang Liu ◽  
Jing Wang ◽  
Beihong Liu ◽  
...  

There is a disparity between the increasing application of digital retinal imaging to neonatal ocular screening and the slowly growing number of pediatric ophthalmologists. Assistant tools that can automatically detect ocular disorders may therefore be needed. In the present study, we developed a deep convolutional neural network (DCNN) for the automated classification and grading of retinal hemorrhage. We used 48,996 digital fundus images from 3770 newborns with retinal hemorrhage of different severity (grades 1, 2 and 3) and normal controls from a large cross-sectional investigation in China. The DCNN was trained for automated grading of retinal hemorrhage (a multiclass classification problem: hemorrhage-free and grades 1, 2 and 3) and then validated for its performance level. The DCNN yielded an accuracy of 97.85 to 99.96%, and the area under the receiver operating characteristic curve was 0.989–1.000 in the binary classification of neonatal retinal hemorrhage (i.e., one class vs. the rest). The overall accuracy on the multiclass classification problem was 97.44%. This is the first study to show that a DCNN can detect and grade neonatal retinal hemorrhage at high performance levels. Artificial intelligence is likely to play an increasingly positive role in the ocular healthcare of newborns and children.
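The paper's implementation is not shown here, but the one-class-vs-the-rest evaluation it describes is straightforward to illustrate. Below is a minimal pure-Python sketch (function names and the toy data are ours, not the authors') of computing a binary one-vs-rest AUC from multiclass softmax outputs, using the Mann–Whitney rank formulation of the AUC:

```python
def roc_auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive outranks a randomly chosen negative
    (ties count as half a win)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def one_vs_rest_auc(probs, labels, k):
    """AUC for class k vs. all other classes, from per-class probabilities."""
    return roc_auc([p[k] for p in probs],
                   [1 if y == k else 0 for y in labels])

# Toy example: 4 classes (hemorrhage-free and grades 1-3), 6 images.
probs = [
    [0.90, 0.05, 0.03, 0.02],
    [0.80, 0.10, 0.05, 0.05],
    [0.10, 0.70, 0.10, 0.10],
    [0.20, 0.60, 0.10, 0.10],
    [0.10, 0.10, 0.60, 0.20],
    [0.05, 0.05, 0.10, 0.80],
]
labels = [0, 0, 1, 1, 2, 3]
```

Repeating `one_vs_rest_auc` for each of the four classes reproduces the kind of per-class AUC figures the abstract reports.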

2020 ◽  
pp. bjophthalmol-2020-316526
Author(s):  
Yo-Ping Huang ◽  
Haobijam Basanta ◽  
Eugene Yu-Chuan Kang ◽  
Kuan-Jen Chen ◽  
Yih-Shiou Hwang ◽  
...  

Background/Aim: To automatically detect and classify the early stages of retinopathy of prematurity (ROP) using a deep convolutional neural network (CNN).
Methods: This retrospective cross-sectional study was conducted in a referral medical centre in Taiwan. Only premature infants with no ROP, stage 1 ROP or stage 2 ROP were enrolled. Overall, 11 372 retinal fundus images were compiled and split into 10 235 images (90%) for training, 1137 (10%) for validation and 244 for testing. A deep CNN was implemented to classify images according to the ROP stage. Data were collected from December 17, 2013 to May 24, 2019 and analysed from December 2018 to January 2020. Sensitivity, specificity and the area under the receiver operating characteristic curve were adopted to evaluate the performance of the algorithm relative to the reference standard diagnosis.
Results: The model was trained using fivefold cross-validation, yielding an average accuracy of 99.93% ± 0.03 during training and 92.23% ± 1.39 during testing. The sensitivity and specificity of the model were 96.14% ± 0.87 and 95.95% ± 0.48 when predicting no ROP versus ROP, 91.82% ± 2.03 and 94.50% ± 0.71 when predicting stage 1 ROP versus no ROP and stage 2 ROP, and 89.81% ± 1.82 and 98.99% ± 0.40 when predicting stage 2 ROP versus no ROP and stage 1 ROP.
Conclusions: The proposed system can accurately differentiate among the early stages of ROP and has the potential to help ophthalmologists classify ROP at an early stage.
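The fivefold cross-validation scheme behind the reported mean ± SD figures can be sketched in a few lines. This is an illustrative stdlib-only sketch (the function names and seed are ours; the authors' actual splitting code is not published in the abstract):

```python
import random

def five_fold_indices(n, seed=0):
    """Shuffle sample indices and split them into five disjoint folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::5] for i in range(5)]

def cross_validation_splits(n):
    """Yield (train, test) index lists; each fold is held out exactly once,
    and the model is retrained on the remaining four folds."""
    folds = five_fold_indices(n)
    for i, test in enumerate(folds):
        train = [j for k, fold in enumerate(folds) if k != i for j in fold]
        yield train, test
```

Averaging the per-split accuracy, sensitivity and specificity over the five held-out folds gives exactly the kind of mean ± standard deviation values quoted in the Results.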


Author(s):  
Souad Khellat-Kihel ◽  
Zhenan Sun ◽  
Massimo Tistarelli

Abstract: Recent research on face analysis has demonstrated the richness of the information embedded in feature vectors extracted from a deep convolutional neural network. Even though deep learning has achieved very high performance on several challenging visual tasks, such as determining identity, age, gender and race, it still lacks a well-grounded theory that allows the processes taking place inside the network layers to be properly understood. Therefore, most of the underlying processes are unknown and not easy to control. The human visual system, on the other hand, follows a well-understood process in analyzing a scene or an object, such as a face: the eye gaze is repeatedly directed, through purposively planned saccadic movements, towards salient regions to capture several details. In this paper we propose to capitalize on the knowledge of saccadic human visual processes to design a system for predicting facial attributes, embedding a biologically inspired network architecture, the HMAX. The architecture is tailored to predict attributes with different textural information and different semantic meaning, such as attributes related and unrelated to the subject's identity. Salient points on the face are extracted from the outputs of the S2 layer of the HMAX architecture and fed to a local texture characterization module based on the Local Binary Pattern (LBP). The resulting feature vector is used to perform binary classification on a set of pre-defined visual attributes. The devised system distills a very informative, yet robust, representation of the imaged faces, achieving high performance with a much simpler architecture than a deep convolutional neural network. Several experiments performed on publicly available, challenging, large datasets demonstrate the validity of the proposed approach.
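The LBP texture module described above is a standard, easily reproduced operator. As a minimal sketch (our own function names; details such as neighbourhood radius and bit ordering are assumptions, not taken from the paper), the basic 8-neighbour LBP code and a histogram over a set of salient points look like this:

```python
def lbp_code(img, r, c):
    """8-bit LBP code: threshold each of the 8 neighbours against the
    centre pixel, reading clockwise from the top-left neighbour."""
    centre = img[r][c]
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(nbrs):
        if img[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

def lbp_histogram(img, points):
    """256-bin histogram of LBP codes at the given salient (row, col)
    points, e.g. points extracted from the HMAX S2 layer outputs."""
    hist = [0] * 256
    for r, c in points:
        hist[lbp_code(img, r, c)] += 1
    return hist
```

Concatenating such histograms over the salient regions yields the kind of compact feature vector the paper feeds to its binary attribute classifiers.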


2022 ◽  
Vol 10 (1) ◽  
pp. 0-0

A brain tumor is a severe cancer caused by uncontrolled, abnormal division of cells. Timely detection of the disease and a suitable treatment plan increase patients' life expectancy. Automated detection and classification of brain tumors is a challenging process that otherwise relies on the clinician's knowledge and experience. For this reason, deep learning is one of the most practical and important techniques. Recent progress in deep learning has helped clinicians use medical imaging for the diagnosis of brain tumors. In this paper, we present a comparison of deep convolutional neural network models for the automated binary classification of query MRI images, with the goal of providing precise tools to health professionals, based on fine-tuned recent versions of DenseNet, Xception, NASNet-A, and VGGNet. The experiments were conducted using an open MRI dataset of 3,762 images. Other performance measures used in the study are the area under the curve, precision, recall, and specificity.
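The metrics used to compare these models are defined directly from the binary confusion matrix. As a self-contained sketch (our own function name and toy labels, not the paper's code), with 1 = tumour and 0 = healthy:

```python
def binary_metrics(y_true, y_pred):
    """Precision, recall (sensitivity) and specificity from binary
    labels, computed from confusion-matrix counts."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision   = tp / (tp + fp) if tp + fp else 0.0  # of predicted tumours, how many are real
    recall      = tp / (tp + fn) if tp + fn else 0.0  # of real tumours, how many are found
    specificity = tn / (tn + fp) if tn + fp else 0.0  # of healthy cases, how many are cleared
    return precision, recall, specificity
```

Each fine-tuned network (DenseNet, Xception, NASNet-A, VGGNet) can then be scored with the same function on the held-out MRI images, making the comparison direct.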


Author(s):  
Amira Ahmad Al-Sharkawy ◽  
Gehan A. Bahgat ◽  
Elsayed E. Hemayed ◽  
Samia Abdel-Razik Mashali

The object classification problem is essential in many applications nowadays. Humans can easily classify objects in unconstrained environments, whereas classical classification techniques fell far short of human performance. Researchers therefore tried to mimic the human visual system, eventually arriving at deep neural networks. This chapter reviews and analyzes the use of deep convolutional neural networks for object classification in constrained and unconstrained environments. It briefly reviews the classical techniques of object classification and the development of bio-inspired computational models, from neuroscience to the creation of deep neural networks. A review is given of constrained-environment issues: hardware computing resources and memory, object appearance and background, and training and processing time. The datasets used to test performance are analyzed according to the environmental conditions of their images, and dataset bias is also discussed.


We have automated the classification of white blood cells using a convolutional neural network. We divided white blood cell classification into two problems: a binary classification problem and a 4-class problem. In the binary problem, a white blood cell is classified as either mononuclear or granulocyte. In the 4-class problem, cells are classified into their subtypes (monocytes, lymphocytes, neutrophils, basophils and eosinophils). In our experiments we achieved a validation accuracy of 100% in binary classification and 98.40% in multiclass classification.
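The binary task is a coarsening of the subtype task: each subtype maps deterministically onto one of the two coarse classes (monocytes and lymphocytes are mononuclear cells; neutrophils, basophils and eosinophils are granulocytes). A small sketch of that label collapse (the dictionary and function names are ours, not the authors'):

```python
# Mononuclear cells have a single round nucleus; granulocytes have a
# lobed nucleus and cytoplasmic granules.
BINARY_CLASS = {
    "monocyte":   "mononuclear",
    "lymphocyte": "mononuclear",
    "neutrophil": "granulocyte",
    "basophil":   "granulocyte",
    "eosinophil": "granulocyte",
}

def to_binary_labels(subtypes):
    """Collapse fine-grained subtype labels into the binary task."""
    return [BINARY_CLASS[s] for s in subtypes]
```

This also means a perfect subtype classifier implies a perfect binary classifier, which is consistent with the binary accuracy being the higher of the two reported figures.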


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Xin Li ◽  
Anzi Ding ◽  
Shaojie Mei ◽  
Wenjin Wu ◽  
Wenguang Hou

Fish-killing machines can effectively relieve workers of backbreaking labour. Generally, it is necessary to ensure that the fish are in a uniform posture before they are fed into the automatic fish-killing machine. How to detect the actual posture of a fish in real time is therefore a new and meaningful issue. Since in practice we only need to determine the four postures related to the head, tail, back, and belly of the fish, we cast this task as a four-class classification problem. A convolutional neural network (CNN) is introduced to perform the classification and thereby detect the fish's posture. Before training the network, all sample images are preprocessed so that the fish lies horizontally in the image, using principal component analysis. Meanwhile, histogram equalization is used to bring the grey-level distributions of different images close together. After that, two classification strategies are adopted: the first uses paired binary classification CNNs and the second a four-category CNN. In addition, three kinds of CNN architecture are adopted. By comparison, the four-class classification obtains better results, with an error rate below 1/1000.
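The PCA-based preprocessing step has a simple closed form in 2-D: the first principal axis of the fish's pixel coordinates gives its dominant orientation, and rotating by that angle makes the fish horizontal. A stdlib-only sketch under our own naming (the paper's exact preprocessing code is not given in the abstract):

```python
import math

def principal_angle(points):
    """Angle (radians) of the first principal axis of a 2-D point set,
    via the closed form 0.5 * atan2(2*cov_xy, var_x - var_y)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    return 0.5 * math.atan2(2 * sxy, sxx - syy)

def align_horizontal(points, theta):
    """Rotate points by -theta so the principal axis becomes horizontal."""
    c, s = math.cos(-theta), math.sin(-theta)
    return [(c * x - s * y, s * x + c * y) for x, y in points]
```

After this alignment only the four canonical postures (head/tail × back/belly) remain to be distinguished, which is what reduces the problem to a four-class classification.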

