Deep-learning based fully automatic segmentation of the globus pallidus interna and externa using ultra-high 7 Tesla MRI

2021 ◽  
Author(s):  
Oren Solomon ◽  
Tara Palnitkar ◽  
Rémi Patriat ◽  
Henry Braun ◽  
Joshua Aman ◽  
...  

Abstract Deep brain stimulation (DBS) surgery has been shown to dramatically improve the quality of life for patients with various motor dysfunctions, such as those afflicted with Parkinson’s disease (PD), dystonia, and essential tremor (ET), by relieving motor symptoms associated with such pathologies. The success of DBS procedures is directly related to the proper placement of the electrodes, which requires the ability to accurately detect and identify relevant target structures within the subcortical basal ganglia region. In particular, accurate and reliable segmentation of the globus pallidus (GP) interna is of great interest for DBS surgery for PD and dystonia. In this work, we present a deep-learning based neural network, which we term GP-net, for the automatic segmentation of both the external and internal segments of the globus pallidus. High resolution 7 Tesla images from 101 subjects were used in this study; GP-net was trained on a cohort of 58 subjects, containing patients with movement disorders as well as healthy control subjects. GP-net performs 3D inference in a patient-specific manner, alleviating the need for atlas-based segmentation. GP-net was extensively validated, both quantitatively and qualitatively, on 43 test subjects, including patients with movement disorders and healthy controls, and is shown to consistently produce improved segmentation results compared with state-of-the-art atlas-based segmentations. We also demonstrate a post-operative lead location assessment with respect to a segmented globus pallidus obtained by GP-net.
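As a rough, hedged illustration of the patient-specific 3D inference described above (not the authors' implementation), the sketch below applies a generic trained 3D segmentation network to an MRI volume with a sliding window; the patch size, stride, and three-class labeling (background, GP externa, GP interna) are assumptions introduced for the example.

```python
# Hypothetical sketch of patch-wise 3D inference for a trained segmentation
# network; the model, patch size, and label conventions are illustrative
# assumptions, not the GP-net code described in the abstract.
import torch
import torch.nn.functional as F

def sliding_window_inference_3d(volume, model, patch=(96, 96, 96), stride=(48, 48, 48)):
    """Run a trained 3D segmentation model over an MRI volume patch by patch.

    volume: torch.Tensor of shape (1, 1, D, H, W), intensity-normalized.
    Returns a label map of shape (D, H, W).
    """
    _, _, D, H, W = volume.shape
    num_classes = 3  # assumed: background, GP externa, GP interna
    logits = torch.zeros(1, num_classes, D, H, W)
    counts = torch.zeros(1, 1, D, H, W)

    model.eval()
    with torch.no_grad():
        # Edge patches that do not align with the stride are omitted in this
        # simplified sketch.
        for z in range(0, max(D - patch[0], 0) + 1, stride[0]):
            for y in range(0, max(H - patch[1], 0) + 1, stride[1]):
                for x in range(0, max(W - patch[2], 0) + 1, stride[2]):
                    crop = volume[..., z:z + patch[0], y:y + patch[1], x:x + patch[2]]
                    out = model(crop)  # (1, num_classes, pd, ph, pw)
                    logits[..., z:z + patch[0], y:y + patch[1], x:x + patch[2]] += out
                    counts[..., z:z + patch[0], y:y + patch[1], x:x + patch[2]] += 1

    probs = F.softmax(logits / counts.clamp(min=1), dim=1)
    return probs.argmax(dim=1).squeeze(0)  # (D, H, W) label map
```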


2021 ◽  
Vol 22 (Supplement_2) ◽  
Author(s):  
S Alabed ◽  
K Karunasaagarar ◽  
F Alandejani ◽  
P Garg ◽  
J Uthoff ◽  
...  

Abstract Funding Acknowledgements Type of funding sources: Foundation. Main funding source(s): Wellcome Trust (UK), NIHR (UK). Introduction Cardiac magnetic resonance (CMR) measurements have significant diagnostic and prognostic value. Accurate and repeatable measurements are essential to assess disease severity, evaluate therapy response and monitor disease progression. Deep learning approaches have shown promise for automatic left ventricular (LV) segmentation on CMR; however, fully automatic right ventricular (RV) segmentation remains challenging. We aimed to develop a biventricular automatic contouring model and evaluate its interstudy repeatability in a prospectively recruited cohort. Methods A deep learning CMR contouring model was developed in a retrospective multi-vendor (Siemens and General Electric), multi-pathology cohort of patients, predominantly with heart failure, pulmonary hypertension and lung diseases (n = 400, ASPIRE registry). Biventricular segmentations were made on all CMR studies across cardiac phases. To test the accuracy of the automatic segmentation, 30 ASPIRE CMRs were segmented independently by two CMR experts. Each segmentation was compared to the automatic contouring, with agreement assessed using the Dice similarity coefficient (DSC). A prospective validation cohort of 46 subjects (10 healthy volunteers and 36 patients with pulmonary hypertension) was recruited to assess interstudy agreement of automatic and manual CMR assessments. Two CMR studies were performed during separate sessions on the same day. Interstudy repeatability was assessed using the intraclass correlation coefficient (ICC) and Bland-Altman plots. Results DSC showed high agreement between the automatic contours and both expert CMR readers (figure 1), with minimal bias towards either expert. Scan-scan repeatability was higher for all automatic RV measurements (ICC 0.89 to 0.98) than for manual RV measurements (ICC 0.78 to 0.98). LV automatic and manual measurements were similarly repeatable (figure 2). Bland-Altman plots showed strong agreement, with small mean differences between the scan-scan measurements (figure 2). Conclusion Fully automatic biventricular short-axis segmentations are comparable with expert manual segmentations and have shown excellent interstudy repeatability.
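The Dice similarity coefficient used here to compare automatic and expert contours is a standard overlap measure; the following minimal Python sketch (with made-up placeholder masks, not the study's data) shows how it is typically computed for binary segmentation masks.

```python
# Minimal sketch of the Dice similarity coefficient (DSC) for comparing an
# automatic contour with an expert manual contour; the mask arrays are
# assumed to be binary NumPy volumes on the same voxel grid.
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """DSC = 2 * |A ∩ B| / (|A| + |B|); 1.0 is perfect overlap, 0.0 is none."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Example with placeholder masks, e.g. an automatic RV segmentation versus
# one expert's contour on a short-axis stack.
auto_rv = np.zeros((10, 256, 256), dtype=bool)
expert_rv = np.zeros((10, 256, 256), dtype=bool)
auto_rv[4:7, 100:150, 100:150] = True
expert_rv[4:7, 105:150, 100:150] = True
print(f"DSC = {dice_coefficient(auto_rv, expert_rv):.3f}")
```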


2020 ◽  
Vol 42 (4-5) ◽  
pp. 221-230
Author(s):  
Nirvedh H. Meshram ◽  
Carol C. Mitchell ◽  
Stephanie Wilbrand ◽  
Robert J. Dempsey ◽  
Tomy Varghese

Carotid plaque segmentation in longitudinal B-mode ultrasound images using deep learning is presented in this work. We report on 101 patients with severely stenotic carotid plaques. A standard U-Net is compared with a dilated U-Net architecture in which dilated convolution layers are used in the bottleneck. Both a fully automatic approach and a semi-automatic approach with a bounding box were implemented. The performance degradation in plaque segmentation due to errors in the bounding box is quantified. We found that the bounding box significantly improved network performance, with U-Net Dice coefficients of 0.48 for automatic and 0.83 for semi-automatic plaque segmentation. Similar results were obtained for the dilated U-Net, with Dice coefficients of 0.55 for automatic and 0.84 for semi-automatic segmentation, when compared with manual segmentations of the same plaques by an experienced sonographer. A 5% error in the bounding box in both dimensions reduced the Dice coefficient to 0.79 and 0.80 for the U-Net and dilated U-Net, respectively.
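For readers unfamiliar with the "dilated convolution layers in the bottleneck" mentioned above, the following PyTorch sketch shows one plausible way such a bottleneck could be built for 2D B-mode images; the channel counts and dilation rates are assumptions for illustration, not the architecture reported in the paper.

```python
# Illustrative sketch (not the authors' implementation) of a U-Net bottleneck
# in which standard convolutions are replaced by dilated convolutions, so the
# receptive field grows without additional pooling.
import torch
import torch.nn as nn

class DilatedBottleneck(nn.Module):
    def __init__(self, in_channels=256, out_channels=512, dilations=(1, 2, 4)):
        super().__init__()
        layers = []
        channels = in_channels
        for d in dilations:
            layers += [
                # padding = dilation keeps the spatial size for 3x3 kernels
                nn.Conv2d(channels, out_channels, kernel_size=3,
                          padding=d, dilation=d),
                nn.BatchNorm2d(out_channels),
                nn.ReLU(inplace=True),
            ]
            channels = out_channels
        self.block = nn.Sequential(*layers)

    def forward(self, x):
        return self.block(x)

# Example: encoder features enter the dilated bottleneck unchanged in size.
feats = torch.randn(1, 256, 32, 32)      # (batch, channels, H, W)
print(DilatedBottleneck()(feats).shape)  # torch.Size([1, 512, 32, 32])
```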


2021 ◽  
Vol 11 (4) ◽  
pp. 1600-1612
Author(s):  
Yan Wang ◽  
Yue Zhang ◽  
Zhaoying Wen ◽  
Bing Tian ◽  
Evan Kao ◽  
...  

2021 ◽  
Vol 11 (8) ◽  
pp. 3501
Author(s):  
Jinyoung Park ◽  
JaeJoon Hwang ◽  
Jihye Ryu ◽  
Inhye Nam ◽  
Sol-A Kim ◽  
...  

The purpose of this study was to investigate the accuracy of airway volume measurement by a Regression Neural Network-based deep-learning model. A set of manually outlined airway data was used to build the algorithm for fully automatic segmentation through a deep-learning process. Manual landmarks of the airway were determined by one examiner on the mid-sagittal plane of cone-beam computed tomography (CBCT) images of 315 patients. Clinical dataset-based training with data augmentation was conducted. Based on the annotated landmarks, the airway passage was measured and segmented. The accuracy of our model was confirmed by measuring the following between the examiner and the program: (1) the difference in volume of the nasopharynx, oropharynx, and hypopharynx, and (2) the Euclidean distance. For the agreement analysis, 61 samples were extracted and compared. The correlation test showed good to excellent reliability. Differences between volumes were analyzed using regression analysis. The slope of the two measurements was close to 1 and showed a linear regression correlation (r2 = 0.975, slope = 1.02, p < 0.001). These results indicate that fully automatic segmentation of the airway is possible through deep-learning training. Additionally, a high correlation between the manual data and the deep-learning data was observed.
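The slope and r2 reported above come from a simple linear regression between manual and automatic volumes; the sketch below shows how such an agreement analysis could be run with SciPy, using made-up placeholder volumes rather than the study's measurements.

```python
# Hedged sketch of the agreement analysis described above: fit a linear
# regression between manual and deep-learning airway volumes and report the
# slope and r^2. The volume arrays are placeholder numbers, not study data.
import numpy as np
from scipy import stats

manual_volumes = np.array([12.1, 15.4, 9.8, 20.3, 17.6, 11.2])      # cm^3, manual
automatic_volumes = np.array([12.5, 15.0, 10.1, 20.9, 17.9, 11.0])  # cm^3, model

result = stats.linregress(manual_volumes, automatic_volumes)
print(f"slope = {result.slope:.3f}")        # close to 1 indicates agreement
print(f"r^2   = {result.rvalue ** 2:.3f}")  # proportion of variance explained
print(f"p     = {result.pvalue:.4g}")
```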


2018 ◽  
Vol 2 (3) ◽  
pp. 324-335 ◽  
Author(s):  
Johannes Kvam ◽  
Lars Erik Gangsei ◽  
Jørgen Kongsro ◽  
Anne H Schistad Solberg

Abstract Computed tomography (CT) scanning of pigs has been shown to produce detailed phenotypes useful in pig breeding. Due to the large number of individuals scanned and the correspondingly large data sets, there is a need for automatic tools for analysis of these data sets. In this paper, the feasibility of deep learning for fully automatic segmentation of the pig skeleton from CT volumes is explored. To maximize performance given the available training data, a series of problem simplifications are applied. The deep-learning approach can replace our currently used semiautomatic solution, with increased robustness and little or no need for manual control. Accuracy was highly affected by the training data, and expanding the training set can further increase performance, making this approach especially promising.

