Using convolutional neural network and long short time memory to automatically detect aneurysm on 2D DSA images (Preprint)

2021 ◽  
Author(s):  
JunHua Liao ◽  
LunXin Liu ◽  
HaiHan Duan ◽  
YunZhi Huang ◽  
LiangXue Zhou ◽  
...  

BACKGROUND It is difficult to distinguish cerebral aneurysms from overlapping vessels on 2D DSA images because these images lack spatial information. OBJECTIVE The aim of this study was to construct a deep learning diagnostic system to improve the ability to detect posterior communicating artery (PCoA) aneurysms on 2D-DSA images and to validate its efficiency in 2D-DSA aneurysm detection. METHODS We proposed a two-stage detection system. First, we established the regional localization stage (RLS) to automatically locate specific detection regions in raw 2D-DSA sequences. Then, in the intracranial aneurysm detection stage (IADS), we built three different frameworks, RetinaNet, RetinaNet+LSTM, and Bi-input+RetinaNet+LSTM, to detect the aneurysms. Each framework was evaluated with a fivefold cross-validation scheme. The area under the curve (AUC), the receiver operating characteristic (ROC) curve, and mean average precision (mAP) were used to validate the efficiency of the different frameworks. Sensitivity, specificity, and accuracy were used to assess their detection ability. RESULTS A total of 255 patients with PCoA aneurysms and 20 patients without aneurysms were included in this study. The best AUCs of RetinaNet, RetinaNet+LSTM, and Bi-input+RetinaNet+LSTM were 0.95, 0.96, and 0.97, respectively. The sensitivities of RetinaNet, RetinaNet+LSTM, and Bi-input+RetinaNet+LSTM were 81.65% (59.40% to 94.76%), 87.91% (64.24% to 98.27%), and 84.50% (69.57% to 93.97%), respectively. The specificities of RetinaNet, RetinaNet+LSTM, and Bi-input+RetinaNet+LSTM were 88.89% (66.73% to 98.41%), 88.12% (66.06% to 98.08%), and 88.50% (74.44% to 96.39%), respectively. The accuracies of RetinaNet, RetinaNet+LSTM, and Bi-input+RetinaNet+LSTM were 92.71% (71.29% to 99.54%), 89.42% (68.13% to 98.49%), and 91.00% (77.63% to 97.72%), respectively. CONCLUSIONS A two-stage aneurysm detection system can reduce the time cost and the computational load.
According to our results, additional spatial and temporal information helps improve the performance of the frameworks, so Bi-input+RetinaNet+LSTM performed best among the frameworks compared. Our study demonstrates that the system is feasible for assisting doctors in detecting intracranial aneurysms on 2D-DSA images.
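The sensitivity, specificity, and accuracy figures above are standard confusion-matrix metrics. As a minimal sketch (with hypothetical counts for illustration, not the study's data):

```python
def binary_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                 # true-positive rate
    specificity = tn / (tn + fp)                 # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)   # overall correct fraction
    return sensitivity, specificity, accuracy

# Hypothetical counts, for illustration only
sens, spec, acc = binary_metrics(tp=80, fp=2, tn=18, fn=10)
print(f"sensitivity={sens:.2%} specificity={spec:.2%} accuracy={acc:.2%}")
```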

2021 ◽  
Author(s):  
JunHua Liao ◽  
LunXin Liu ◽  
HaiHan Duan ◽  
YunZhi Huang ◽  
LiangXue Zhou ◽  
...  

BACKGROUND It is hard to distinguish cerebral aneurysms from overlapping vessels based on 2D DSA images due to their lack of spatial information. OBJECTIVE The aim of this study was to construct a deep learning diagnostic system to improve the ability to detect posterior communicating artery (PCoA) aneurysms on 2D-DSA images and validate the efficiency of the deep learning diagnostic system in 2D-DSA aneurysm detection. METHODS We proposed a two-stage detection system. First, we established the regional localization stage (RLS) to automatically locate specific detection regions of raw 2D-DSA sequences. Then, in the intracranial aneurysm detection stage (IADS), we constructed the Bi-input+RetinaNet+C-LSTM framework and compared its aneurysm detection performance with that of the three existing frameworks. Each framework was evaluated with a fivefold cross-validation scheme. The area under the curve (AUC), the receiver operating characteristic (ROC) curve, and mean average precision (mAP) were used to validate the efficiency of the different frameworks. Sensitivity, specificity, and accuracy were used to assess the abilities of the different frameworks. RESULTS A total of 255 patients with PCoA aneurysms and 20 patients without aneurysms were included in this study. The best AUC results of RetinaNet, RetinaNet+C-LSTM, Bi-input+RetinaNet, and Bi-input+RetinaNet+C-LSTM were 0.95, 0.96, 0.92, and 0.97, respectively. The sensitivities of RetinaNet, RetinaNet+C-LSTM, Bi-input+RetinaNet, Bi-input+RetinaNet+C-LSTM, and human experts were 89.00% (67.02% to 98.43%), 88.00% (65.76% to 98.06%), 87.00% (64.53% to 97.66%), 89.00% (67.02% to 98.43%), and 90% (68.30% to 98.77%), respectively. The specificities of RetinaNet, RetinaNet+C-LSTM, Bi-input+RetinaNet, Bi-input+RetinaNet+C-LSTM, and human experts were 80.00% (56.34% to 94.27%), 89.00% (67.02% to 98.43%), 86.00% (63.31% to 97.24%), 93.00% (72.30% to 99.56%), and 90% (68.30% to 98.77%), respectively.
The accuracies of RetinaNet, RetinaNet+C-LSTM, Bi-input+RetinaNet, Bi-input+RetinaNet+C-LSTM, and human experts were 84.50% (69.57% to 93.97%), 88.50% (74.44% to 96.39%), 86.50% (71.97% to 95.22%), 91.00% (77.63% to 97.72%), and 90.00% (76.34% to 97.21%), respectively. CONCLUSIONS A two-stage aneurysm detection system can reduce the time cost and the computational load. According to our results, additional spatial and temporal information can help improve the performance of the frameworks, so Bi-input+RetinaNet+C-LSTM performed best among the frameworks compared. Our study demonstrates that our system can assist doctors in detecting intracranial aneurysms on 2D-DSA images.
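Each framework above was evaluated under a fivefold cross-validation scheme. A minimal sketch of such a split in plain Python (round-robin fold assignment is one common choice; the paper's exact splitting procedure is not specified here):

```python
import random

def five_fold_splits(ids, seed=0):
    """Shuffle IDs and yield (train, test) lists for 5-fold cross-validation."""
    ids = list(ids)
    random.Random(seed).shuffle(ids)
    folds = [ids[i::5] for i in range(5)]  # round-robin assignment to 5 folds
    for k in range(5):
        test = folds[k]
        train = [x for j, fold in enumerate(folds) if j != k for x in fold]
        yield train, test

# 255 aneurysm + 20 control patients, as in the study
patients = range(275)
for train, test in five_fold_splits(patients):
    assert len(train) + len(test) == 275  # each patient used exactly once per fold
```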


Author(s):  
Zhao Shi ◽  
Chongchang Miao ◽  
Chengwei Pan ◽  
Xue Chai ◽  
Xiu Li Li ◽  
...  

Intracranial aneurysm is a common life-threatening disease. CTA is recommended as a standard diagnostic tool, but its interpretation is time-consuming and challenging. We present a novel deep-learning-based framework trained on 1,177 DSA-verified bone-removal CTA cases. The framework had excellent tolerance to occult cases (CTA-negative but DSA-positive aneurysms), image quality, and manufacturers. Simulated real-world studies were conducted in consecutive internal and external cohorts, achieving higher sensitivity and negative predictive value than radiologists. In a specific cohort of suspected acute ischemic stroke, 96.8% of predicted-negative cases could be trusted with high confidence, reducing the human burden. A prospective study is warranted to determine whether the algorithm could improve patients’ care in comparison to radiologists’ assessment.


2020 ◽  
Vol 30 (11) ◽  
pp. 5785-5793 ◽  
Author(s):  
Bio Joo ◽  
Sung Soo Ahn ◽  
Pyeong Ho Yoon ◽  
Sohi Bae ◽  
Beomseok Sohn ◽  
...  

2021 ◽  
Vol 12 ◽  
Author(s):  
Wei Xu ◽  
Ling Jin ◽  
Peng-Zhi Zhu ◽  
Kai He ◽  
Wei-Hua Yang ◽  
...  

Objective: This study aims to implement and investigate the application of a special intelligent diagnostic system based on deep learning in the diagnosis of pterygium using anterior segment photographs. Methods: A total of 1,220 anterior segment photographs of normal eyes and pterygium patients were collected for training (using 750 images) and testing (using 470 images) to develop an intelligent pterygium diagnostic model. The images were classified into three categories by the experts and the intelligent pterygium diagnosis system: (i) the normal group, (ii) the observation group of pterygium, and (iii) the operation group of pterygium. The intelligent diagnostic results were compared with those of the expert diagnosis. Indicators including accuracy, sensitivity, specificity, kappa value, the area under the receiver operating characteristic curve (AUC), as well as 95% confidence interval (CI) and F1-score were evaluated. Results: The accuracy rate of the intelligent diagnosis system on the 470 testing photographs was 94.68%; the diagnostic consistency was high; the kappa values of the three groups were all above 85%. Additionally, the AUC values approached 100% in group 1 and 95% in the other two groups. The best results generated from the proposed system for sensitivity, specificity, and F1-scores were 100, 99.64, and 99.74% in group 1; 90.06, 97.32, and 92.49% in group 2; and 92.73, 95.56, and 89.47% in group 3, respectively. Conclusion: The intelligent pterygium diagnosis system based on deep learning can not only judge the presence of pterygium but also classify the severity of pterygium. This study is expected to provide a new screening tool for pterygium and benefit patients from areas lacking medical resources.
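The kappa values above measure agreement between the system and the expert labels; Cohen's kappa corrects the observed agreement for chance. A minimal sketch for a three-class confusion matrix (toy counts for illustration, not the study's data):

```python
def cohens_kappa(cm):
    """Cohen's kappa from a square confusion matrix (rows: expert, cols: model)."""
    k = len(cm)
    n = sum(sum(row) for row in cm)
    po = sum(cm[i][i] for i in range(k)) / n                  # observed agreement
    row_tot = [sum(row) for row in cm]
    col_tot = [sum(cm[i][j] for i in range(k)) for j in range(k)]
    pe = sum(row_tot[i] * col_tot[i] for i in range(k)) / (n * n)  # chance agreement
    return (po - pe) / (1 - pe)

# Toy 3-class matrix: normal / observation / operation groups (illustrative)
cm = [[50, 2, 0],
      [3, 40, 2],
      [0, 3, 45]]
print(f"kappa = {cohens_kappa(cm):.3f}")  # about 0.896, i.e. "above 85%"
```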


Endoscopy ◽  
2020 ◽  
Author(s):  
Alanna Ebigbo ◽  
Robert Mendel ◽  
Tobias Rückert ◽  
Laurin Schuster ◽  
Andreas Probst ◽  
...  

Background and aims: The accurate differentiation between T1a and T1b Barrett’s cancer has both therapeutic and prognostic implications but is challenging even for experienced physicians. We trained an Artificial Intelligence (AI) system on the basis of deep artificial neural networks (deep learning) to differentiate between T1a and T1b Barrett’s cancer on white-light images. Methods: Endoscopic images from three tertiary care centres in Germany were collected retrospectively. A deep learning system was trained and tested using the principles of cross-validation. A total of 230 white-light endoscopic images (108 T1a and 122 T1b) were evaluated with the AI system. For comparison, the images were also classified by experts specialized in the endoscopic diagnosis and treatment of Barrett’s cancer. Results: The sensitivity, specificity, F1, and accuracy of the AI system in the differentiation between T1a and T1b cancer lesions were 0.77, 0.64, 0.73, and 0.71, respectively. There was no statistically significant difference between the performance of the AI system and that of the human experts, whose sensitivity, specificity, F1, and accuracy were 0.63, 0.78, 0.67, and 0.70, respectively. Conclusion: This pilot study demonstrates the first multicenter application of an AI-based system for the prediction of submucosal invasion in endoscopic images of Barrett’s cancer. The AI system scored on par with international experts in the field, but more work is necessary to improve the system and apply it to video sequences and in a real-life setting. Nevertheless, the correct prediction of submucosal invasion in Barrett’s cancer remains challenging for both experts and AI.
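The F1 score reported above is the harmonic mean of precision and recall (recall being the sensitivity). A minimal sketch with hypothetical counts, for illustration only:

```python
def f1_score(tp, fp, fn):
    """F1 as the harmonic mean of precision and recall."""
    precision = tp / (tp + fp)   # fraction of positive calls that are correct
    recall = tp / (tp + fn)      # fraction of true positives found (sensitivity)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts for illustration only
print(f"F1 = {f1_score(tp=70, fp=20, fn=30):.3f}")
```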


2021 ◽  
Vol 13 (8) ◽  
pp. 1602
Author(s):  
Qiaoqiao Sun ◽  
Xuefeng Liu ◽  
Salah Bourennane

Deep learning models have strong abilities in learning features and have been successfully applied to hyperspectral images (HSIs). However, training most deep learning models requires labeled samples, and collecting labeled samples is labor-intensive for HSIs. In addition, single-level features from a single layer are usually considered, which may result in the loss of some important information. Using multiple networks to obtain multi-level features is a solution, but at the cost of longer training time and greater computational complexity. To solve these problems, a novel unsupervised multi-level feature extraction framework based on a three-dimensional convolutional autoencoder (3D-CAE) is proposed in this paper. The designed 3D-CAE is stacked from fully 3D convolutional layers and 3D deconvolutional layers, which allows the spectral-spatial information of targets to be mined simultaneously. Furthermore, the 3D-CAE can be trained in an unsupervised way without involving labeled samples. Moreover, the multi-level features are obtained directly from the encoding layers at different scales and resolutions, which is more efficient than using multiple networks to obtain them. The effectiveness of the proposed multi-level features is verified on two hyperspectral data sets. The results demonstrate that the proposed method shows great promise in unsupervised feature learning and can help further improve hyperspectral classification compared with single-level features.
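A 3D-CAE convolves jointly over the spectral and spatial axes of the HSI cube, and the deconvolutional decoder must invert the encoder's shape changes so the reconstruction matches the input. A minimal sketch of that shape arithmetic in plain Python (the layer sizes are illustrative, not the paper's configuration):

```python
def conv3d_out(shape, kernel, stride=1, pad=0):
    """Output size of a 3D convolution along each of (bands, height, width)."""
    return tuple((s + 2 * pad - k) // stride + 1 for s, k in zip(shape, kernel))

def deconv3d_out(shape, kernel, stride=1, pad=0):
    """Output size of the matching 3D transposed convolution (deconvolution)."""
    return tuple((s - 1) * stride - 2 * pad + k for s, k in zip(shape, kernel))

cube = (103, 7, 7)                    # spectral bands x spatial patch (illustrative)
enc1 = conv3d_out(cube, (7, 3, 3))    # -> (97, 5, 5)  first-level feature
enc2 = conv3d_out(enc1, (7, 3, 3))    # -> (91, 3, 3)  deeper, coarser feature
dec1 = deconv3d_out(enc2, (7, 3, 3))  # -> (97, 5, 5)
dec2 = deconv3d_out(dec1, (7, 3, 3))  # -> (103, 7, 7) reconstruction matches input
assert dec2 == cube
```

Multi-level features would then be read out from `enc1` and `enc2`, which carry different scales and resolutions, rather than from separately trained networks.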


2021 ◽  
pp. 028418512098397
Author(s):  
Yang Li ◽  
Hong Qiu ◽  
Zhihui Hou ◽  
Jianfeng Zheng ◽  
Jianan Li ◽  
...  

Background Deep learning (DL) has achieved great success in medical imaging and could be utilized for the non-invasive calculation of fractional flow reserve (FFR) from coronary computed tomographic angiography (CCTA) (CT-FFR). Purpose To examine the ability of a DL-based CT-FFR to detect hemodynamic changes of stenosis. Material and Methods This study included 73 patients (85 vessels) who were suspected of coronary artery disease (CAD) and received CCTA followed by invasive FFR measurements within 90 days. The diagnostic accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and area under the receiver operating characteristic curve (AUC) were compared between CT-FFR and CCTA. Thirty-nine patients who received drug therapy instead of revascularization were followed for up to 31 months. Major adverse cardiac events (MACE), unstable angina, and rehospitalization were evaluated and compared between the study groups. Results At the patient level, CT-FFR achieved 90.4%, 93.6%, 88.1%, 85.3%, and 94.9% in accuracy, sensitivity, specificity, PPV, and NPV, respectively. At the vessel level, CT-FFR achieved 91.8%, 93.9%, 90.4%, 86.1%, and 95.9%, respectively. CT-FFR exceeded CCTA in these measurements at both levels. The vessel-level AUC for CT-FFR also outperformed that for CCTA (0.957 vs. 0.599, P < 0.0001). Patients with CT-FFR ≤0.8 had higher rates of rehospitalization (hazard ratio [HR] 4.51, 95% confidence interval [CI] 1.08–18.9) and MACE (HR 7.26, 95% CI 0.88–59.8), as well as a lower rate of unstable angina (HR 0.46, 95% CI 0.07–2.91). Conclusion CT-FFR is superior to conventional CCTA in differentiating functional myocardial ischemia. In addition, it has the potential to differentiate prognoses of patients with CAD.
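The AUC comparison above (0.957 vs. 0.599) can be computed without tracing the ROC curve, using the rank-statistic (Mann-Whitney) interpretation: AUC is the probability that a randomly chosen positive case scores higher than a randomly chosen negative case. A minimal sketch with toy scores, not the study's data:

```python
def auc(pos_scores, neg_scores):
    """AUC as the fraction of (positive, negative) pairs ranked correctly;
    ties count as half a win (Mann-Whitney formulation)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Toy scores (higher = more likely ischemic), for illustration only
positives = [0.9, 0.8, 0.75, 0.6]
negatives = [0.7, 0.5, 0.4, 0.3]
print(auc(positives, negatives))  # -> 0.9375
```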

