Algorithm of Pulmonary Vascular Segment and Centerline Extraction

2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Shi Qiu ◽  
Jie Lian ◽  
Yan Ding ◽  
Tao Zhou ◽  
Ting Liang

Because pulmonary vascular lesions are harmful to the human body and difficult to detect, computer-assisted diagnosis of the pulmonary blood vessels has become both a focus and a difficulty of current research. We propose, for the first time, an algorithm for pulmonary vessel segmentation and centerline extraction that is consistent with the physician's diagnostic process. We construct a maximum-density (maximum intensity) projection, restore the spatial information of the vessels, and modify the random walk algorithm to achieve automatic and accurate vessel segmentation. A local 3D model is constructed to constrain the Hessian matrix when extracting the centerline. To help the physician reach a correct diagnosis and to verify the effectiveness of the algorithm, we also propose a visual expansion model. On 420 high-resolution pulmonary CT datasets labeled by physicians, the segmentation accuracy (AOM) reached 93% at a processing speed of 0.05 s/frame, which meets clinical application standards.
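The two core steps named above, the maximum-density projection and the Hessian-based centerline cue, can be sketched in a few lines. The following is a minimal NumPy illustration of the general techniques (the function names are my own, not the authors'), not a reproduction of the paper's corrected random-walk pipeline.

```python
import numpy as np

def max_intensity_projection(volume, axis=0):
    # Keep the brightest voxel along each ray: contrast-bright vessels
    # dominate the projection, which is the view a radiologist inspects.
    return volume.max(axis=axis)

def hessian_eigenvalues(volume):
    # Second derivatives via repeated finite differences. At a voxel on a
    # bright tubular structure, one eigenvalue is near zero (along the
    # vessel) and the other two are strongly negative (across it) -- the
    # cue that Hessian-based centerline extractors rely on.
    grads = np.gradient(volume.astype(float))
    H = np.empty(volume.shape + (3, 3))
    for i in range(3):
        second = np.gradient(grads[i])
        for j in range(3):
            H[..., i, j] = second[j]
    return np.linalg.eigvalsh(H)  # eigenvalues sorted ascending per voxel
```

In a real pipeline the eigenvalues would feed a vesselness score and a ridge-following step; here they only demonstrate the cue itself.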

2020 ◽  
Vol 08 (03) ◽  
pp. E415-E420 ◽  
Author(s):  
Romain Leenhardt ◽  
Cynthia Li ◽  
Jean-Philippe Le Mouel ◽  
Gabriel Rahmi ◽  
Jean Christophe Saurin ◽  
...  

Abstract Background and study aims Capsule endoscopy (CE) is the preferred method for small bowel (SB) exploration. With a mean of 50,000 SB frames per video, SB-CE reading is time-consuming and tedious (30 to 60 minutes per video). We describe a large, multicenter database named CAD-CAP (Computer-Assisted Diagnosis for CAPsule Endoscopy). This database aims to serve the development of CAD tools for CE reading. Materials and methods Twelve French endoscopy centers were involved. All available third-generation SB-CE videos (PillCam, Medtronic) were retrospectively selected from these centers and deidentified. Every pathological frame was extracted and included in the database. Manual segmentation of the findings within these frames was performed by two pre-med students trained and supervised by an expert reader. All frames were then classified by type and clinical relevance by a panel of three expert readers. An automated extraction process was also developed to create a dataset of normal, proofread control images from normal, complete SB-CE videos. Results In total, 4,174 SB-CE videos were included. Of these, 1,480 videos (35 %) containing at least one pathological finding were selected. Findings from 5,184 frames (with their short video sequences) were extracted and delimited: 718 frames with fresh blood, 3,097 frames with vascular lesions, and 1,369 frames with inflammatory and ulcerative lesions. Twenty thousand normal frames were extracted from 206 normal SB-CE videos. CAD-CAP has already been used to develop automated tools for angiectasia detection and for two international challenges on computerized medical analysis.


Author(s):  
Nur Syazlin Zolkifli ◽  
Ain Nazari ◽  
Mohd Marzuki Mustafa ◽  
Wan NurShazwani Wan Zakaria ◽  
Nor Surayahani Suriani ◽  
...  

Analysis of the retinal blood vessels in fundus images is widely used in the medical community to detect disorders of the blood vessels. Automated tracing of the retinal blood vessels can provide valuable computer-assisted diagnosis of ophthalmic disorders and thus reduce the time an ophthalmologist needs to analyze and diagnose a patient's fundus image. The purpose of this research is to build an algorithm to trace the retinal blood vessels. The method consists of two parts: a pre-processing stage and feature extraction using Kirsch's templates. Combining pre-processing at the early stage with feature extraction at the next stage extracts the edges of the blood vessels. The proposed algorithm was verified on two online databases, DRIVE and HRF, to validate the performance measures. The proposed method is capable of extracting the retinal blood vessels with an accuracy of 0.7917, a sensitivity of 0.9077, and a specificity of 0.7215. In conclusion, extraction of the blood vessels is highly recommended as an early screening stage for eye diseases.
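The Kirsch-template step mentioned above convolves the image with eight compass masks (45-degree rotations of one 3x3 kernel) and keeps the maximum response at each pixel. Below is a self-contained NumPy sketch of that standard operator; the helper names are illustrative, and this is not the authors' full vessel-tracing pipeline.

```python
import numpy as np

# Standard Kirsch north mask; the other seven are 45-degree rotations.
KIRSCH_N = np.array([[ 5,  5,  5],
                     [-3,  0, -3],
                     [-3, -3, -3]])

def kirsch_kernels():
    # Generate all eight compass masks by cycling the 8 border cells.
    k = KIRSCH_N
    kernels = []
    for _ in range(8):
        kernels.append(k)
        border = [k[0, 0], k[0, 1], k[0, 2], k[1, 2],
                  k[2, 2], k[2, 1], k[2, 0], k[1, 0]]
        border = border[-1:] + border[:-1]
        k = np.array([[border[0], border[1], border[2]],
                      [border[7],         0, border[3]],
                      [border[6], border[5], border[4]]])
    return kernels

def kirsch_edges(img):
    # Maximum response over the eight masks ("valid" window, no padding).
    img = img.astype(float)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for ker in kirsch_kernels():
        resp = np.zeros_like(out)
        for i in range(3):
            for j in range(3):
                resp += ker[i, j] * img[i:i + h - 2, j:j + w - 2]
        out = np.maximum(out, resp)
    return out
```

Because each mask's weights sum to zero, flat regions respond with zero while step edges give a strong positive response in the mask aligned with them.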


Author(s):  
Aylin Akbulut ◽  
Suleyman Kalayci ◽  
Gokhan Koca ◽  
Meliha Korkmaz

Background: A supernumerary kidney is an accessory organ with its own encapsulated parenchyma, blood vessels, and ureter, either separate from the normal kidney or connected to it by fibrous tissue; an ectopic kidney is a migration abnormality of the kidney. Here, we evaluate a rare case of a supernumerary and ectopic kidney with DMSA and MAG3 scintigraphy and with CT fusion of the images. Methods: The absolute divided renal function was calculated for each kidney from the DMSA study. MAG3 scintigraphy showed no obstruction of the ureteropelvic junction. Furthermore, the renogram curve, Tmax, and time-to-half values were assessed. Two months after the conventional scintigraphies, the patient was referred for a CT scan, and a fusion of the DMSA SPECT and CT data was generated on a workstation. Results: The ectopic supernumerary kidney was functioning very well except for a small hypoactive area visible on DMSA, possibly a minimal pelvicalyceal dilatation; however, the subsequent CT scan did not show any pathology. Conclusion: It is important to evaluate particularly complicated or rare cases with multimodality systems and 3D or fusion techniques for an accurate diagnosis.


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Chih-Wei Lin ◽  
Yu Hong ◽  
Jinfu Liu

Abstract Background Glioma is a malignant brain tumor; its location is complex, and it is difficult to remove surgically. Medical images allow doctors to diagnose and localize the disease precisely. However, computer-assisted diagnosis of brain tumors remains a problem, because rough segmentation of the tumor leads to incorrect assessment of its internal grade. Methods In this paper, we propose an Aggregation-and-Attention Network for brain tumor segmentation. The proposed network takes U-Net as the backbone, aggregates multi-scale semantic information, and focuses on crucial information to perform brain tumor segmentation. To this end, we propose an enhanced down-sampling module and up-sampling layer to compensate for information loss, and a multi-scale connection module to construct multi-receptive-field semantic fusion between the encoder and decoder. Furthermore, we design a dual-attention fusion module that extracts and enhances the spatial relationships in magnetic resonance imaging, and we apply deep supervision in different parts of the proposed network. Results Experimental results show that the proposed framework performs best on the BraTS2020 dataset compared with state-of-the-art networks: it surpasses all comparison networks, with average scores on the four indexes of 0.860, 0.885, 0.932, and 1.2325, respectively. Conclusions The framework and its modules are scientific and practical; they extract and aggregate useful semantic information and enhance the ability to segment glioma.
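As a rough illustration of the attention idea the abstract builds on, the sketch below implements plain channel attention in NumPy: pool each feature map, convert the pooled descriptors to weights, and rescale the maps. This is a generic toy under my own assumptions, not the paper's dual-attention fusion module.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D vector.
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def channel_attention(features):
    # features: (C, H, W). Global-average-pool each channel, turn the
    # pooled descriptor into per-channel weights, and rescale the maps,
    # so later layers "focus on crucial information".
    pooled = features.mean(axis=(1, 2))       # (C,)
    weights = softmax(pooled)                 # (C,), sums to 1
    return features * weights[:, None, None]  # reweighted maps
```

Channels with stronger average activation receive larger weights, which is the basic mechanism attention-based segmentation networks refine with learned projections.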


Author(s):  
Ali H. Al-Timemy ◽  
Nebras H. Ghaeb ◽  
Zahraa M. Mosa ◽  
Javier Escudero

Abstract Clinical keratoconus (KCN) detection is a challenging and time-consuming task. In the diagnostic process, ophthalmologists must review demographic data and clinical ophthalmic examinations; the latter include slit-lamp examination, corneal topographic maps, and Pentacam indices (PI). We propose an Ensemble of Deep Transfer Learning (EDTL) based on corneal topographic maps. We consider four pretrained networks, SqueezeNet (SqN), AlexNet (AN), ShuffleNet (SfN), and MobileNet-v2 (MN), and fine-tune them on a dataset of KCN and normal cases, each including four topographic maps. We also consider a PI classifier. Our EDTL method then combines the output probabilities of the five classifiers to obtain a decision based on the fusion of probabilities. Individually, the classifier based on PI achieved 93.1% accuracy, whereas the deep classifiers reached classification accuracies over 90% only in isolated cases. Overall, the average accuracy of the deep networks over the four corneal maps ranged from 86% (SfN) to 89.9% (AN). The classifier ensemble increased the accuracy of the corneal-map-based deep classifiers to between 92.2% and 93.1% for SqN and between 93.1% and 94.8% for AN. Including specific combinations of the corneal-map classifiers and PI in the ensemble increased the accuracy to 98.3%. Moreover, visualization of the first-layer filters in the networks and of Grad-CAMs confirmed that the networks had learned relevant clinical features. This study shows the potential of creating ensembles of deep classifiers fine-tuned with a transfer-learning strategy: the ensemble improved accuracy while exhibiting learned filters and Grad-CAMs that agree with clinical knowledge. This is a step towards the clinical deployment of an improved computer-assisted diagnosis system for KCN detection, helping ophthalmologists to confirm the clinical decision and to deliver fast and accurate KCN treatment.
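The probability-fusion step of such an ensemble can be illustrated simply: stack the per-classifier posteriors, average them (optionally with weights), and take the argmax. A minimal NumPy sketch under that assumption follows; the function name is illustrative, and the paper's exact fusion rule may differ.

```python
import numpy as np

def fuse_probabilities(prob_list, weights=None):
    # prob_list: one (n_samples, n_classes) array of posterior
    # probabilities per classifier. The fused decision is the class
    # with the highest (weighted) average posterior.
    probs = np.stack(prob_list)                   # (k, n, c)
    if weights is None:
        weights = np.full(len(prob_list), 1.0 / len(prob_list))
    fused = np.tensordot(weights, probs, axes=1)  # (n, c)
    return fused, fused.argmax(axis=1)
```

For example, fusing one classifier that outputs [0.6, 0.4] with another that outputs [0.2, 0.8] yields [0.4, 0.6], so the ensemble votes for class 1 even though the two members disagree.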


2017 ◽  
Vol 30 (6) ◽  
pp. 796-811 ◽  
Author(s):  
Afsaneh Jalalian ◽  
Syamsiah Mashohor ◽  
Rozi Mahmud ◽  
Babak Karasfi ◽  
M. Iqbal Saripan ◽  
...  

1994 ◽  
Vol 40 (5) ◽  
pp. 621-628 ◽  
Author(s):  
Hidetoshi Ohta ◽  
Yutaka Kohgo ◽  
Yasuo Takahashi ◽  
Ryuzou Koyama ◽  
Hideo Suzuki ◽  
...  
