An Optimized Registration Method Based on Distribution Similarity and DVF Smoothness for 3D PET and CT Images

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 1135-1145 ◽  
Author(s):  
Hongjian Kang ◽  
Huiyan Jiang ◽  
Xiangrong Zhou ◽  
Hengjian Yu ◽  
Takeshi Hara ◽  
...  

2015 ◽  
Vol 42 (9) ◽  
pp. 5559-5567 ◽  
Author(s):  
Ha Manh Luu ◽  
Wiro Niessen ◽  
Theo van Walsum ◽  
Camiel Klink ◽  
Adriaan Moelker

IRBM ◽  
2020 ◽  
Author(s):  
R. Bhattacharjee ◽  
F. Heitz ◽  
V. Noblet ◽  
S. Sharma ◽  
N. Sharma

2015 ◽  
Author(s):  
Shuyue Shi ◽  
Rong Yuan ◽  
Zhi Sun ◽  
Qingguo Xie

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 63077-63089 ◽  
Author(s):  
Hengjian Yu ◽  
Huiyan Jiang ◽  
Xiangrong Zhou ◽  
Takeshi Hara ◽  
Yu-Dong Yao ◽  
...  

2017 ◽  
Vol 1 (1) ◽  
pp. 46-55 ◽  
Author(s):  
Il Jun Ahn ◽  
Ji Hye Kim ◽  
Yongjin Chang ◽  
Woo Hyun Nam ◽  
Jong Beom Ra

Sensors ◽  
2021 ◽  
Vol 21 (18) ◽  
pp. 6254 ◽  
Author(s):  
Shaodi Yang ◽  
Yuqian Zhao ◽  
Miao Liao ◽  
Fan Zhang

Medical image registration is an essential technique for achieving spatial consistency between the geometric positions of medical images obtained from single or multiple sensors, such as computed tomography (CT), magnetic resonance (MR), and ultrasound (US) images. In this paper, an improved unsupervised learning-based framework is proposed for multi-organ registration of 3D abdominal CT images. First, coarse-to-fine recursive cascaded network (RCN) modules are embedded into a basic U-net framework to achieve more accurate multi-organ registration results on 3D abdominal CT images. Then, a topology-preserving loss is added to the total loss function to avoid distortion of the predicted transformation field. Four public databases are selected to validate the registration performance of the proposed method. The experimental results show that the proposed method outperforms several existing traditional and deep learning-based methods and is promising for meeting the real-time, high-precision clinical registration requirements of 3D abdominal CT images.
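The abstract does not spell out the form of the topology-preserving loss. One common formulation (a sketch under that assumption, not necessarily the authors' exact term) penalizes voxels where the Jacobian determinant of the predicted transformation x + u(x) becomes non-positive, i.e. where the deformation field folds. A minimal NumPy illustration for a dense 3D displacement vector field:

```python
import numpy as np

def jacobian_determinant(dvf):
    """Jacobian determinant of the mapping x + u(x) for a 3D
    displacement vector field dvf of shape (D, H, W, 3)."""
    # Spatial gradients of each displacement component, per axis.
    grads = [np.gradient(dvf[..., i]) for i in range(3)]
    # Jacobian of the full transform: identity + grad(u).
    J = np.empty(dvf.shape[:3] + (3, 3))
    for i in range(3):       # displacement component
        for j in range(3):   # spatial axis
            J[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    return np.linalg.det(J)

def topology_loss(dvf):
    """Penalize folding: mean magnitude of negative Jacobian determinants."""
    det = jacobian_determinant(dvf)
    return float(np.mean(np.maximum(0.0, -det)))
```

For the identity transform (zero displacement), the determinant is 1 everywhere and the loss is 0; any voxel with a negative determinant contributes a positive penalty.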


2021 ◽  
pp. 019459982110449
Author(s):  
Andy S. Ding ◽  
Alexander Lu ◽  
Zhaoshuo Li ◽  
Deepa Galaiya ◽  
Jeffrey H. Siewerdsen ◽  
...  

Objective This study investigates the accuracy of an automated method to rapidly segment relevant temporal bone anatomy from cone beam computed tomography (CT) images. Implementation of this segmentation pipeline has potential to improve surgical safety and decrease operative time by augmenting preoperative planning and interfacing with image-guided robotic surgical systems. Study Design Descriptive study of predicted segmentations. Setting Academic institution. Methods We have developed a computational pipeline based on the symmetric normalization registration method that predicts segmentations of anatomic structures in temporal bone CT scans using a labeled atlas. To evaluate accuracy, we created a data set by manually labeling relevant anatomic structures (eg, ossicles, labyrinth, facial nerve, external auditory canal, dura) for 16 deidentified high-resolution cone beam temporal bone CT images. Automated segmentations from this pipeline were compared against ground-truth manual segmentations by using modified Hausdorff distances and Dice scores. Runtimes were documented to determine the computational requirements of this method. Results Modified Hausdorff distances and Dice scores between predicted and ground-truth labels were as follows: malleus (0.100 ± 0.054 mm; Dice, 0.827 ± 0.068), incus (0.100 ± 0.033 mm; Dice, 0.837 ± 0.068), stapes (0.157 ± 0.048 mm; Dice, 0.358 ± 0.100), labyrinth (0.169 ± 0.100 mm; Dice, 0.838 ± 0.060), and facial nerve (0.522 ± 0.278 mm; Dice, 0.567 ± 0.130). A quad-core 16GB RAM workstation completed this segmentation pipeline in 10 minutes. Conclusions We demonstrated submillimeter accuracy for automated segmentation of temporal bone anatomy when compared against hand-segmented ground truth using our template registration pipeline. This method is not dependent on the training data volume that plagues many complex deep learning models. Favorable runtime and low computational requirements underscore this method's translational potential.
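The two evaluation metrics above have standard definitions. The sketch below, assuming binary label volumes and surface point sets as inputs (the study's exact implementation is not given), computes the Dice overlap and one common form of the modified Hausdorff distance (Dubuisson–Jain: the larger of the two mean directed nearest-neighbor distances) with NumPy:

```python
import numpy as np

def dice_score(a, b):
    """Dice overlap between two boolean label volumes of equal shape."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    # Two empty masks agree perfectly by convention.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def modified_hausdorff(pts_a, pts_b):
    """Modified Hausdorff distance between point sets of shape (N, 3)
    and (M, 3): max of the two mean directed nearest-neighbor distances."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())
```

Averaging the directed distances (rather than taking their maximum, as the classic Hausdorff distance does) makes the metric far less sensitive to single outlier surface points, which is why it is often preferred for segmentation evaluation.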


2017 ◽  
Vol 2017 ◽  
pp. 1-11 ◽  
Author(s):  
Zhiying Song ◽  
Huiyan Jiang ◽  
Qiyao Yang ◽  
Zhiguo Wang ◽  
Guoxu Zhang

The PET and CT fusion image, combining anatomical and functional information, has important clinical meaning. An effective registration of PET and CT images is the basis of image fusion. This paper presents a multithread registration method based on contour point clouds for 3D whole-body PET and CT images. First, a geometric feature-based segmentation (GFS) method and a dynamic threshold denoising (DTD) method are proposed to preprocess the CT and PET images, respectively. Next, a new automated trunk slice extraction method is presented for extracting feature point clouds. Finally, the multithread Iterative Closest Point (ICP) algorithm is adopted to derive an affine transform. We compare our method with a multiresolution registration method based on Mattes mutual information on 13 pairs (246~286 slices per pair) of 3D whole-body PET and CT data. Experimental results demonstrate the effectiveness of our method, with a lower negative normalized correlation (NC = −0.933) on feature images and a smaller Euclidean distance error (ED = 2.826) on landmark points, outperforming both the source data (NC = −0.496, ED = 25.847) and the compared method (NC = −0.614, ED = 16.085). Moreover, our method is about ten times faster than the compared one.
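The abstract does not detail the multithread ICP implementation. The single-threaded toy sketch below (brute-force nearest neighbors and a least-squares affine refit; the helper names are this sketch's own, not the paper's) illustrates the ICP-plus-affine idea on small point clouds:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform (A, t) mapping src -> dst,
    both (N, 3) point arrays matched by index."""
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # homogeneous coordinates
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return params[:3].T, params[3]               # A is (3, 3), t is (3,)

def icp_affine(src, dst, n_iter=20):
    """Toy single-threaded ICP: alternate nearest-neighbor matching
    against dst with an affine refit, starting from the identity."""
    A, t = np.eye(3), np.zeros(3)
    for _ in range(n_iter):
        moved = src @ A.T + t
        # Brute-force correspondence search; the paper parallelizes this step.
        d = np.linalg.norm(moved[:, None, :] - dst[None, :, :], axis=-1)
        matched = dst[d.argmin(axis=1)]
        A, t = fit_affine(src, matched)
    return A, t
```

The nearest-neighbor search dominates the cost (O(NM) per iteration here), which is why it is the natural target for the multithreading the paper describes.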

