Robust deep learning method for choroidal vessel segmentation on swept source optical coherence tomography images

2019 · Vol 10 (4) · pp. 1601
Author(s): Xiaoxiao Liu, Lei Bi, Yupeng Xu, Dagan Feng, Jinman Kim, ...

2021 · Vol 11 (12) · pp. 5488
Author(s): Wei Ping Hsia, Siu Lun Tse, Chia Jen Chang, Yu Len Huang

The purpose of this article is to evaluate the accuracy of optical coherence tomography (OCT) measurement of choroidal thickness in healthy eyes using a deep-learning method with the Mask R-CNN model. Thirty EDI-OCT scans of thirty patients were enrolled. A mask region-based convolutional neural network (Mask R-CNN) model, composed of a deep residual network (ResNet) and feature pyramid networks (FPNs) with standard convolution and fully connected heads for mask and box prediction, respectively, was used to automatically delineate the choroid layer. The average choroidal thickness and subfoveal choroidal thickness were measured. Two backbones were evaluated: ResNet 50 layers deep (R50) and ResNet 101 layers deep (R101). The R101 and R101 ∪ R50 (OR) models demonstrated the best accuracy, with average errors of 4.85 pixels and 4.86 pixels, respectively. The R101 ∩ R50 (AND model) took the least time, with an average execution time of 4.6 s. The Mask R-CNN models showed a good prediction rate for the choroidal layer, with accuracy rates of 90% and 89.9% for average choroidal thickness and average subfoveal choroidal thickness, respectively. In conclusion, the deep-learning method using the Mask R-CNN model provides a fast and accurate measurement of choroidal thickness. Compared with manual delineation, it is more effective, and it is feasible for clinical application and larger-scale research on the choroid.
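The OR and AND models in this abstract combine the R50 and R101 segmentations per pixel, and thickness is reported in pixels. A minimal sketch of that combination step (the function names and the per-column thickness definition are assumptions for illustration, not the authors' code):

```python
import numpy as np

def combine_masks(mask_r50, mask_r101, mode="or"):
    """Combine two binary choroid masks: union gives the OR model,
    intersection gives the AND model."""
    if mode == "or":
        return mask_r50 | mask_r101
    return mask_r50 & mask_r101

def average_thickness_px(mask):
    """Average choroidal thickness in pixels, taken here as the mean
    number of mask pixels per image column (A-scan)."""
    return mask.sum(axis=0).mean()
```

For example, on two small masks the OR model keeps every pixel either backbone marked, while the AND model keeps only pixels both backbones agree on, so its thickness estimate is never larger.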


2020
Author(s): Jonathan D. Oakley, Simrat K. Sodhi, Daniel B. Russakoff, Netan Choudhry

Purpose: To evaluate the performance of a deep learning-based, fully automated, multi-class macular fluid segmentation algorithm relative to expert annotations in a heterogeneous population of confirmed wet age-related macular degeneration (wAMD) subjects. Methods: Twenty-two swept-source optical coherence tomography (SS-OCT) volumes of the macula from 22 different individuals with wAMD were manually annotated by two expert graders. These results were compared using cross-validation (CV) to automated segmentations from a deep learning-based algorithm that encodes spatial information about retinal tissue as an additional input to the network. The algorithm detects and delineates fluid regions in the OCT data, differentiating between intra- and sub-retinal fluid (IRF, SRF), as well as fluid in serous pigment epithelial detachments (PED). Standard metrics for fluid detection and quantification were used to evaluate performance. Results: The per-slice receiver operating characteristic (ROC) areas under the curve (AUCs) were 0.90, 0.94 and 0.94 for IRF, SRF and PED, respectively. Per-volume results were 0.94 and 0.88 for IRF and PED (SRF being present in all cases). The correlations of fluid volume between the expert graders and the algorithm were 0.99 for IRF, 0.99 for SRF and 0.82 for PED. Conclusions: Automated, deep learning-based segmentation is able to accurately detect and quantify different macular fluid types in SS-OCT data on par with expert graders.
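The per-slice AUCs above summarise how well the algorithm's fluid scores rank positive slices above negative ones. A small self-contained sketch of that metric via the Mann-Whitney U statistic (the function is illustrative, not the study's evaluation code):

```python
import numpy as np

def roc_auc(labels, scores):
    """ROC AUC computed as the Mann-Whitney U statistic: the probability
    that a randomly chosen positive slice scores above a randomly chosen
    negative slice, with ties counting half."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

Perfect ranking yields 1.0; indistinguishable scores yield 0.5, which is why the reported 0.90-0.94 values indicate strong per-slice discrimination.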


BMJ Open · 2019 · Vol 9 (9) · pp. e031313
Author(s): Kazutaka Kamiya, Yuji Ayatsuka, Yudai Kato, Fusako Fujimura, Masahide Takahashi, ...

Objective: To evaluate the diagnostic accuracy of keratoconus using deep learning of the colour-coded maps measured with swept-source anterior segment optical coherence tomography (AS-OCT). Design: A diagnostic accuracy study. Setting: A single-centre study. Participants: A total of 304 keratoconic eyes (grade 1 (108 eyes), 2 (75 eyes), 3 (42 eyes) and 4 (79 eyes) according to the Amsler-Krumeich classification) and 239 age-matched healthy eyes. Main outcome measures: The diagnostic accuracy of keratoconus using deep learning of six colour-coded maps (anterior elevation, anterior curvature, posterior elevation, posterior curvature, total refractive power and pachymetry map). Results: Deep learning of the arithmetical mean output data of these six maps showed an accuracy of 0.991 in discriminating between normal and keratoconic eyes. For single-map analysis, the posterior elevation map (0.993) showed the highest accuracy, followed by the posterior curvature map (0.991), anterior elevation map (0.983), corneal pachymetry map (0.982), total refractive power map (0.978) and anterior curvature map (0.976). This deep learning also showed an accuracy of 0.874 in classifying the stage of the disease, with the posterior curvature map (0.869) showing the highest accuracy, followed by the corneal pachymetry map (0.845), anterior curvature map (0.836), total refractive power map (0.836), posterior elevation map (0.829) and anterior elevation map (0.820). Conclusions: Deep learning using the colour-coded maps obtained by AS-OCT effectively discriminates keratoconus from normal corneas and, furthermore, classifies the grade of the disease. It is suggested that this will become an aid for improving the diagnostic accuracy of keratoconus in daily practice. Clinical trial registration number: 000034587.
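The best-performing configuration above averages the per-map network outputs ("arithmetical mean output data of these six maps"). A minimal sketch of that ensemble step, assuming each map's network emits a keratoconus probability (the map names, probabilities and threshold here are illustrative):

```python
import numpy as np

def ensemble_decision(map_probs, threshold=0.5):
    """Arithmetic-mean ensemble over per-map outputs, followed by a simple
    thresholded normal-vs-keratoconus call.

    map_probs: dict mapping colour-coded map name -> network output in [0, 1].
    Returns (mean probability, boolean keratoconus call).
    """
    mean_prob = float(np.mean(list(map_probs.values())))
    return mean_prob, mean_prob >= threshold
```

Averaging the six independent map readings damps the error of any single map, which is consistent with the ensemble (0.991) matching or beating most single-map accuracies.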


2021 · pp. bjophthalmol-2020-318275
Author(s): Natalia Porporato, Tin A Tun, Mani Baskaran, Damon W K Wong, Rahat Husain, ...

Aims: To validate a deep learning (DL) algorithm (DLA) for 360° angle assessment on swept-source optical coherence tomography (SS-OCT) (CASIA SS-1000, Tomey Corporation, Nagoya, Japan). Methods: This was a reliability analysis from a cross-sectional study. An independent test set of 39 936 SS-OCT scans from 312 phakic subjects (128 SS-OCT meridional scans per eye) was analysed. Participants above 50 years with no previous history of intraocular surgery were consecutively recruited from glaucoma clinics. Indentation gonioscopy and dark-room SS-OCT were performed. Gonioscopic angle closure was defined as non-visibility of the posterior trabecular meshwork in ≥180° of the angle. For each subject, all images were analysed for gonioscopic angle-closure detection by a DL network based on the VGG-16 architecture. Areas under receiver operating characteristic curves (AUCs) and other diagnostic performance indicators were calculated for the DLA (index test) against gonioscopy (reference standard). Results: Approximately 80% of the participants were Chinese, and more than half were women (57.4%). The prevalence of gonioscopic angle closure in this hospital-based sample was 20.2%. After analysing a total of 39 936 SS-OCT scans, the AUC of the DLA for classifying gonioscopic angle closure was 0.85 (95% CI 0.80 to 0.90), with a sensitivity of 83% and a specificity of 87% at the optimal cut-off value of >35% of circumferential angle closure. Conclusions: The DLA exhibited good diagnostic performance for the detection of gonioscopic angle closure on 360° SS-OCT scans in a glaucoma clinic setting. Such an algorithm, independent of identification of the scleral spur, may be the foundation for a non-contact, fast and reproducible 'automated gonioscopy' in future.
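The eye-level call in this study aggregates 128 per-meridian network labels into a circumferential closure fraction and applies the >35% cut-off. A short sketch of that aggregation, assuming per-scan binary outputs (function names are illustrative, not the authors' code):

```python
def circumferential_closure_fraction(per_scan_closed):
    """Fraction of meridional scans (e.g. 128 per eye) the network
    labels as closed."""
    return sum(per_scan_closed) / len(per_scan_closed)

def eye_level_angle_closure(per_scan_closed, cutoff=0.35):
    """Eye-level angle-closure call at the abstract's optimal cut-off
    (>35% of the circumference closed)."""
    return circumferential_closure_fraction(per_scan_closed) > cutoff
```

So an eye with 50 of 128 meridians labelled closed (≈39%) would be called angle closure, while 30 of 128 (≈23%) would not.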


Retina · 2021 · Vol Publish Ahead of Print
Author(s): Gerardo Ledesma-Gil, Zaixing Mao, Jonathan Liu, Richard F. Spaide

EP Europace · 2020 · Vol 22 (Supplement_1)
Author(s): D Liang, A Haeberlin

Background: The immediate effect of radiofrequency catheter ablation (RFA) on the tissue is not directly visualized. Optical coherence tomography (OCT) is an imaging technique that uses light to capture histology-like images with a penetration depth of 1-3 mm in cardiac tissue. Ablation lesions have two specific features in OCT images: the disappearance of birefringence artifacts laterally and a sudden decrease of signal at the bottom (Figure, panels A and D). These features can not only be used to recognize ablation lesions in OCT images by eye, but also to train a machine learning model for automatic lesion segmentation. In recent years, deep learning methods, e.g. convolutional neural networks, have been used in medical image analysis and have greatly increased the accuracy of image segmentation. We hypothesize that a convolutional neural network, e.g. U-Net, can locate and segment ablation lesions in OCT images. Purpose: To investigate whether a deep learning method, such as a convolutional neural network optimized for biomedical image processing, could be used to segment ablation lesions in OCT images automatically. Methods: Eight OCT datasets with ablation lesions were used to train the convolutional neural network (U-Net model). After training, the model was validated on two new OCT datasets. Dice coefficients were calculated to evaluate the spatial overlap between the predictions and the ground truth segmentations, which were manually segmented by the researchers (the coefficient ranges from 0 to 1, where 1 means perfect segmentation). Results: The U-Net model predicted the central parts of lesions automatically and accurately (Dice coefficients of 0.933 and 0.934) compared with the ground truth segmentations (Figure, panels B and E). These predictions could reveal the depths and diameters of the ablation lesions correctly (Figure, panels C and F).
Conclusions: Our results showed that deep learning could facilitate ablation lesion identification and segmentation in OCT images. Deep learning methods, integrated into an OCT system, might enable automatic and precise ablation lesion visualization, which may help to assess ablation lesions during radiofrequency ablation procedures with great precision. Figure legend: Panels A and D: the central OCT images of the ablation lesions. The blue arrows indicate the lesion bottom, where the image intensity suddenly decreases. The white arrows indicate the birefringence artifacts (the black bands in the grey regions). Panels B and E: the ground truth segmentations of the lesions in panels A and D. Panels C and F: the predictions by the U-Net model of the lesions in panels A and D. A scale bar representing 500 μm is shown in each panel.
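The Dice coefficient used to validate the U-Net above has a simple closed form: twice the overlap of the two masks divided by the sum of their sizes. A minimal sketch (illustrative, not the researchers' evaluation code):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice coefficient between a predicted and a ground-truth binary
    mask: 2|P ∩ T| / (|P| + |T|), ranging from 0 (no overlap) to 1
    (perfect segmentation)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```

Values of 0.933-0.934, as reported, therefore mean the predicted lesion masks overlap the manual segmentations almost completely.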


2017 · Vol 102 (7) · pp. 991-995
Author(s): Daniela Montorio, Chiara Giuffrè, Elisabetta Miserocchi, Giulio Modorati, Riccardo Sacconi, ...

Background/Aims: To analyse the choroidal vascular density of affected and non-affected areas in active and inactive serpiginous choroiditis (SC) by means of optical coherence tomography angiography (OCT-A). Methods: In this cross-sectional, observational study, 22 eyes of 11 patients diagnosed with SC were included. All patients underwent blue-light fundus autofluorescence (Spectralis Heidelberg retinal angiography + OCT) and swept-source OCT-A (AngioPlex Elite 9000 SS-OCT, Carl Zeiss Meditec) to analyse qualitative features and choroidal vessel density of the areas considered affected, and the inner and outer borders of the lesions. Unaffected areas of otherwise healthy retina were also studied. Results: All inactive inflammatory lesions were characterised by atrophy of the choriocapillaris, with an impairment of its detectable flow and greater visibility of choroidal vessels. In contrast, all active inflammatory lesions showed an area of complete absence of decorrelation signal. The pathological border was characterised by a statistically significantly lower choroidal vessel density compared with both the outer border and the unaffected area (0.650±0.113 vs 0.698±0.112, p<0.001). Although not statistically significant, the vessel density of the outer border of inactive lesions was lower than that of unaffected areas (0.650±0.113 vs 0.698±0.112, p=0.441). Active inflammatory lesions showed an area of complete absence of decorrelation signal at the level of the choriocapillaris and the whole choroid. Conclusion: OCT-A represents a new imaging technique that provides useful information about the leading changes of the choroidal vascular network in active and inactive lesions of SC.
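Vessel density figures like 0.650 and 0.698 above are typically computed as the fraction of flow (decorrelation-signal) pixels in a binarized OCT-A slab region. A minimal sketch of that definition, assuming a pre-binarized slab (the function and inputs are illustrative, not the study's pipeline):

```python
import numpy as np

def vessel_density(binary_slab):
    """Vessel density of a binarized OCT-A slab region: the fraction of
    pixels carrying detectable flow signal (value in [0, 1])."""
    return np.asarray(binary_slab, dtype=bool).mean()
```

On this definition, a region of complete decorrelation-signal absence, as in the active lesions described, would have density 0.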

