Deep Learning to Distinguish ABCA4-Related Stargardt Disease from PRPH2-Related Pseudo-Stargardt Pattern Dystrophy

2021, Vol 10 (24), pp. 5742
Author(s): Alexandra Miere, Olivia Zambrowski, Arthur Kessler, Carl-Joe Mehanna, Carlotta Pallone, ...

(1) Background: Recessive Stargardt disease (STGD1) and multifocal pattern dystrophy simulating Stargardt disease ("pseudo-Stargardt pattern dystrophy", PSPD) share phenotypic similarities, making clinical diagnosis difficult. Our aim was to assess whether a deep learning classifier pretrained on fundus autofluorescence (FAF) images can assist in distinguishing ABCA4-related STGD1 from PRPH2/RDS-related PSPD, and to compare its performance with that of retinal specialists. (2) Methods: We trained a convolutional neural network (CNN) using 729 FAF images from normal patients or patients with inherited retinal diseases (IRDs). Transfer learning was then used to update the weights of a ResNet50V2, which classified 370 FAF images as STGD1 or PSPD. Retina specialists evaluated the same dataset. The performance of the CNN and that of the retina specialists were compared in terms of accuracy, sensitivity, and precision. (3) Results: On the test dataset of 111 images, the CNN achieved an accuracy of 0.882, an AUROC of 0.890, a precision of 0.883, and a sensitivity of 0.883. Accuracy averaged 0.816 for retina experts and 0.724 for retina fellows. (4) Conclusions: This proof-of-concept study demonstrates that, even with small databases, a pretrained CNN is able to distinguish between STGD1 and PSPD with good accuracy.
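As an illustration of the transfer-learning step described above, the sketch below builds a binary STGD1-vs-PSPD head on a ResNet50V2 backbone in TensorFlow/Keras. The input size, frozen-backbone strategy, and optimizer settings are assumptions; the backbone is initialized from ImageNet here for simplicity, whereas the study pretrained on its own IRD dataset.

```python
# A minimal TensorFlow/Keras sketch of the transfer-learning step: a
# ResNet50V2 backbone with a binary STGD1-vs-PSPD head. Input size,
# frozen-backbone strategy, and optimizer settings are assumptions.
import tensorflow as tf

base = tf.keras.applications.ResNet50V2(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False  # keep pretrained weights fixed while training the head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # STGD1 vs. PSPD
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auroc")])
```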

Diagnostics, 2021, Vol 11 (9), pp. 1672
Author(s): Luya Lian, Tianer Zhu, Fudong Zhu, Haihua Zhu

Objectives: Deep learning methods have achieved impressive diagnostic performance in the field of radiology. The current study aimed to use deep learning methods to detect caries lesions on panoramic films, classify their radiographic extension, and compare the classification results with those of expert dentists. Methods: A total of 1160 dental panoramic films were evaluated by three expert dentists. All caries lesions in the films were marked with circles, whose combination was defined as the reference dataset. A training and validation dataset (1071 films) and a test dataset (89 films) were then established from the reference dataset. A convolutional neural network, nnU-Net, was applied to detect caries lesions, and DenseNet121 was applied to classify the lesions according to their depth (lesions in the outer, middle, or inner third of dentin: D1, D2, or D3). The performance of the trained nnU-Net and DenseNet121 models on the test dataset was compared with the results of six expert dentists in terms of intersection over union (IoU), Dice coefficient, accuracy, precision, recall, negative predictive value (NPV), and F1-score. Results: nnU-Net yielded caries lesion segmentation IoU and Dice coefficient values of 0.785 and 0.663, respectively, and its accuracy and recall were 0.986 and 0.821, respectively. The results of the expert dentists and the neural network did not differ in terms of accuracy, precision, recall, NPV, and F1-score. For caries depth classification, DenseNet121 showed an overall accuracy of 0.957 for D1 lesions, 0.832 for D2 lesions, and 0.863 for D3 lesions; recall for D1/D2/D3 lesions was 0.765, 0.652, and 0.918, respectively. All metric values (accuracy, precision, recall, NPV, and F1-score) were no different from those of the experienced dentists. Conclusion: In detecting and classifying caries lesions on dental panoramic radiographs, the performance of deep learning methods was similar to that of expert dentists. The impact of applying these well-trained neural networks to disease diagnosis and treatment decision making should be explored.
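The segmentation metrics reported above can be made concrete with a short sketch. The functions below compute IoU and the Dice coefficient for binary masks; the inputs are hypothetical numpy arrays, not the study's data.

```python
# Sketch of the overlap metrics reported above for binary segmentation
# masks; pred and target are hypothetical boolean numpy arrays.
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over union of two boolean masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(inter / union) if union else 1.0

def dice(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice coefficient: 2*|A & B| / (|A| + |B|)."""
    inter = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    return float(2 * inter / total) if total else 1.0
```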


2021, Vol 11 (12), pp. 3199-3208
Author(s): K. Ganapriya, N. Uma Maheswari, R. Venkatesh

Predicting the occurrence of a seizure in advance would greatly help caregivers take the necessary precautions for the patient. A deep learning model, a recurrent neural network (RNN), is designed to predict upcoming values in the EEG signal. First, an in-depth analysis of the data is performed to find the parameter that best differentiates normal values from seizure values. A recurrent neural network model is then built to predict these values ahead of time. Four variants of the network, differing in the number of time steps and the number of LSTM layers, are designed, and the best model is identified and used for prediction. The performance of the model is evaluated in terms of explained variance score and R2 score. The model performs well only when the number of elements in the test dataset is small, so it can predict seizure values only a few seconds ahead.
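A minimal sketch of one such RNN variant follows: a stacked-LSTM model predicting the next EEG sample from a fixed window of past values, evaluated with the two scores named above. The window length, layer sizes, and train/test handling are illustrative assumptions.

```python
# A minimal sketch (TensorFlow/Keras) of a stacked-LSTM predictor for the
# next EEG sample; window length, layer sizes, and split are assumptions.
import numpy as np
import tensorflow as tf
from sklearn.metrics import explained_variance_score, r2_score

WINDOW = 32

def make_windows(series: np.ndarray):
    """Slice a 1-D EEG series into (past window -> next value) pairs."""
    X = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])
    y = series[WINDOW:]
    return X[..., None], y  # shape: (samples, time steps, 1 feature)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, return_sequences=True, input_shape=(WINDOW, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),  # predicted next EEG value
])
model.compile(optimizer="adam", loss="mse")

# After model.fit(X_train, y_train, ...):
# y_pred = model.predict(X_test).ravel()
# print(explained_variance_score(y_test, y_pred), r2_score(y_test, y_pred))
```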


2020, Vol 10 (1)
Author(s): Jason Charng, Di Xiao, Maryam Mehdizadeh, Mary S. Attia, Sukanya Arunachalam, ...

Abstract Stargardt disease is one of the most common forms of inherited retinal disease and leads to permanent vision loss. A diagnostic feature of the disease is retinal flecks, which appear hyperautofluorescent in fundus autofluorescence (FAF) imaging. The size and number of these flecks increase with disease progression. Manual segmentation of flecks allows monitoring of disease but is time-consuming. Herein, we have developed and validated a deep learning approach for segmenting these Stargardt flecks (1750 training and 100 validation FAF patches from 37 eyes with Stargardt disease). Testing was done on 10 separate Stargardt FAF images, and we observed good overall agreement between manual and deep learning segmentation in both fleck count and fleck area. Longitudinal data were available for both eyes of 6 patients (average total follow-up time 4.2 years), with both manual and deep learning segmentation performed on all (n = 82) images. Both methods detected a similar upward trend in fleck number and area over time. In conclusion, we demonstrated the feasibility of utilizing deep learning to segment and quantify FAF lesions, laying the foundation for future studies using fleck parameters as a trial endpoint.
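Once a binary fleck mask is available, whether from manual or deep learning segmentation, fleck count and total fleck area can be derived by connected-component labeling. A sketch, assuming a boolean numpy mask and a hypothetical pixel-to-area calibration:

```python
# Sketch: derive fleck count and total fleck area from a binary mask
# produced by either manual or deep learning segmentation. The
# mm2_per_pixel scale factor is a hypothetical calibration value.
import numpy as np
from scipy import ndimage

def fleck_stats(mask: np.ndarray, mm2_per_pixel: float = 1.0):
    """Return (number of flecks, total fleck area) for a boolean mask."""
    _, n_flecks = ndimage.label(mask)   # connected-component count
    area = mask.sum() * mm2_per_pixel   # total segmented area
    return n_flecks, float(area)
```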


Author(s): Petteri Oura, Alina Junno, Juho-Antti Junno

Abstract While the applications of deep learning are considered revolutionary within several medical specialties, forensic applications have been scarce despite the visual nature of the field. For example, a forensic pathologist may benefit from deep learning-based tools in gunshot wound interpretation. This proof-of-concept study aimed to test the hypothesis that trained neural network architectures have the potential to predict shooting distance class from a simple photograph of the gunshot wound. A dataset of 204 gunshot wound images (60 negative controls, 50 contact shots, 49 close-range shots, and 45 distant shots) was constructed from nineteen piglet carcasses shot with a .22 Long Rifle pistol. The dataset was used to train, validate, and test the ability of neural network architectures to correctly classify images by shooting distance. Deep learning was performed using the AIDeveloper open-source software. Of the explored architectures, a trained multilayer perceptron-based model (MLP_24_16_24) reached the highest testing accuracy of 98%. On the testing set, the trained model correctly classified all negative controls, contact shots, and close-range shots, whereas one distant shot was misclassified. Our study demonstrates that forensic pathologists may, in the future, benefit from deep learning-based tools in gunshot wound interpretation. With these data, we seek to provide an initial impetus for larger-scale research on deep learning approaches in forensic wound interpretation.
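The model name MLP_24_16_24 suggests a multilayer perceptron with hidden layers of 24, 16, and 24 units. A minimal Keras sketch of such a network for the four classes above; the 64x64 input size and training settings are assumptions, not AIDeveloper's exact configuration:

```python
# A minimal Keras sketch of an MLP with 24-16-24 hidden units over
# flattened wound photographs; input size and settings are assumptions.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(64, 64, 3)),
    tf.keras.layers.Dense(24, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(24, activation="relu"),
    # four classes: negative control, contact, close-range, distant
    tf.keras.layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```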


2021, pp. 016555152110181
Author(s): Alberto Nogales, Miguel-Angel Sicilia, Álvaro J García-Tejedor

The publication of large amounts of open data is an increasing trend, driven by initiatives such as Linked Open Data (LOD), which aims to publish and link datasets on the World Wide Web. Linked Data publishers should follow a set of principles, described in a 2011 document that identifies the reuse of vocabularies as a key consideration. The Linked Open Vocabularies (LOV) project attempts to collect the vocabularies and ontologies commonly used in LOD. These ontologies have been classified by domain following the criteria of LOV members, which has the disadvantage of introducing personal biases. This article presents an automatic classifier of ontologies based on the main categories appearing in Wikipedia. For this purpose, word-embedding models are used in combination with deep learning techniques. Results show that a hybrid model of regular Deep Neural Networks (DNNs), a Recurrent Neural Network (RNN), and a Convolutional Neural Network (CNN) achieves a classification accuracy of 93.57%. A further evaluation of the domain matchings between LOV and the classifier finds possible matchings in 79.8% of the cases.
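A hedged sketch of how such a hybrid DNN/RNN/CNN classifier over word embeddings could be assembled with the Keras functional API; the vocabulary size, sequence length, embedding dimension, and number of Wikipedia categories are placeholders, not the authors' values:

```python
# A hedged sketch of a hybrid DNN+RNN+CNN text classifier over word
# embeddings (Keras functional API); all sizes are placeholder assumptions.
import tensorflow as tf

inp = tf.keras.Input(shape=(100,), dtype="int32")        # token ids
emb = tf.keras.layers.Embedding(20000, 128)(inp)

rnn = tf.keras.layers.LSTM(64)(emb)                      # recurrent branch
cnn = tf.keras.layers.Conv1D(64, 5, activation="relu")(emb)
cnn = tf.keras.layers.GlobalMaxPooling1D()(cnn)          # convolutional branch
dnn = tf.keras.layers.GlobalAveragePooling1D()(emb)
dnn = tf.keras.layers.Dense(64, activation="relu")(dnn)  # dense branch

merged = tf.keras.layers.concatenate([rnn, cnn, dnn])
out = tf.keras.layers.Dense(30, activation="softmax")(merged)  # categories

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```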


2021
Author(s): Alec Fraser, Nikolai S Prokhorov, John M Miller, Ekaterina S Knyazhanskaya, Petr G Leiman

Cryo-EM has made extraordinary headway towards becoming a semi-automated, high-throughput structure determination technique. In the general workflow, high-to-medium population states are grouped into two- and three-dimensional classes, from which structures can be obtained with near-atomic resolution and subsequently analyzed to interpret function. However, low population states, which are also functionally important, are often discarded. Here, we describe a technique whereby low population states can be efficiently identified with minimal human effort via a deep convolutional neural network classifier. We use this deep learning classifier to describe a transient, low population state of bacteriophage A511 in the midst of infecting its bacterial host. This method can be used to further automate data collection and identify other functionally important low population states.
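As a rough illustration, a small binary CNN of the kind that could separate a target low population state from all other particle images might look as follows; the patch size and architecture are assumptions, not the authors' network:

```python
# Illustrative binary CNN that flags particles belonging to a target
# low population state; input size and layer sizes are assumptions.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 1)),      # grayscale particle image
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # target state vs. rest
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```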


2019, Vol 3 (2)
Author(s): Toru Hirano, Masayuki Nishide, Naoki Nonaka, Jun Seita, Kosuke Ebina, ...

Abstract Objective: The purpose of this research was to develop a deep-learning model to assess radiographic finger joint destruction in RA. Methods: The model comprises two steps: a joint-detection step and a joint-evaluation step. Among 216 radiographs of 108 patients with RA, 186 radiographs were assigned to the training/validation dataset and 30 to the test dataset. In the training/validation dataset, images of PIP joints, the IP joint of the thumb, or MCP joints were manually clipped and scored for joint space narrowing (JSN) and bone erosion by clinicians, and these images were then augmented. As a result, 11,160 images were used to train and validate a deep convolutional neural network for joint evaluation, and 3720 selected images were used to train a machine learning model for joint detection. These steps were combined into the assessment model for radiographic finger joint destruction. Performance was examined on the test dataset, which was not included in the training/validation process, by comparing the scores assigned by the model with those of clinicians. Results: The model detected PIP joints, the IP joint of the thumb, and MCP joints with a sensitivity of 95.3% and assigned scores for JSN and erosion. Accuracy (percentage of exact agreement) reached 49.3–65.4% for JSN and 70.6–74.1% for erosion. The correlation coefficient between scores assigned by the model and by clinicians per image was 0.72–0.88 for JSN and 0.54–0.75 for erosion. Conclusion: Image processing with the trained convolutional neural network model shows promise for assessing radiographs in RA.
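The two headline agreement measures above, percentage of exact agreement and the per-image correlation coefficient, can be computed in a few lines of numpy/scipy; the score arrays below are toy examples, not the study's data:

```python
# Sketch of the two agreement measures: percentage of exact agreement and
# the correlation between model and clinician scores. Arrays are toy data.
import numpy as np
from scipy.stats import pearsonr

model_scores = np.array([0, 1, 2, 3, 1])      # hypothetical JSN scores
clinician_scores = np.array([0, 1, 2, 2, 1])

exact_agreement = np.mean(model_scores == clinician_scores)  # 0.8
r, _ = pearsonr(model_scores, clinician_scores)              # correlation
```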

