Non-Rigid Multi-Modal 3D Medical Image Registration Based on Foveated Modality Independent Neighborhood Descriptor

Sensors ◽  
2019 ◽  
Vol 19 (21) ◽  
pp. 4675 ◽  
Author(s):  
Feng Yang ◽  
Mingyue Ding ◽  
Xuming Zhang

Non-rigid multi-modal three-dimensional (3D) medical image registration is highly challenging due to the difficulty of constructing a similarity measure and solving for the non-rigid transformation parameters. A novel structural-representation-based registration method is proposed to address these problems. Firstly, an improved modality independent neighborhood descriptor (MIND) based on foveated nonlocal self-similarity is designed for effective structural representation of 3D medical images, transforming multi-modal image registration into a mono-modal one. The sum of absolute differences between structural representations is computed as the similarity measure. Subsequently, a foveated MIND based spatial constraint is introduced into the Markov random field (MRF) optimization to reduce the number of transformation parameters and restrict the calculation of the energy function to the image region involving non-rigid deformation. Finally, accurate and efficient 3D medical image registration is realized by minimizing the similarity measure based MRF energy function. Extensive experiments on 3D positron emission tomography (PET), computed tomography (CT), and T1, T2, and PD weighted magnetic resonance (MR) images with synthetic deformation demonstrate that the proposed method has higher computational efficiency and registration accuracy in terms of target registration error (TRE) than registration methods based on the hybrid L-BFGS-B and cat swarm optimization (HLCSO), the sum of squared differences on entropy images, the MIND, and the self-similarity context (SSC) descriptor, except that it yields a slightly larger TRE than the HLCSO for CT-PET image registration. Experiments on real MR and ultrasound images with unknown deformation have also been done to demonstrate the practicality and superiority of the proposed method.
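The descriptor-plus-SAD pipeline described above can be sketched minimally. The following is a simplified 2D illustration, not the authors' implementation: a non-foveated, 4-neighbour MIND-style descriptor followed by the sum of absolute differences; all function names are ours.

```python
import numpy as np

def mind_descriptor(img, eps=1e-8):
    """Simplified MIND-style descriptor (2D, 4-neighbour, no foveation):
    for each pixel, the squared distance to each neighbour, normalised
    by a local variance estimate and exponentiated into (0, 1]."""
    shifts = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    dists = [(img - np.roll(img, s, axis=(0, 1))) ** 2 for s in shifts]
    d = np.stack(dists, axis=-1).astype(float)
    v = d.mean(axis=-1, keepdims=True) + eps   # local variance estimate
    desc = np.exp(-d / v)
    return desc / desc.max(axis=-1, keepdims=True)

def sad(desc_a, desc_b):
    """Sum of absolute differences between two structural representations."""
    return np.abs(desc_a - desc_b).sum()
```

Because the descriptor is built purely from intensity differences, it is unchanged by a constant intensity offset, which is what lets a simple mono-modal measure like SAD compare structurally similar multi-modal images.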

2014 ◽  
Vol 2014 ◽  
pp. 1-8
Author(s):  
Liang Hua ◽  
Kean Yu ◽  
Lijun Ding ◽  
Juping Gu ◽  
Xinsong Zhang ◽  
...  

A three-dimensional multi-modality medical image registration method using a geometric invariant based on conformal geometric algebra (CGA) theory is put forward to address the challenges posed by the many degrees of freedom and heavy computational burden of 3D medical image registration problems. The mathematical model and calculation method of the dual-vector projection invariant are established using the distribution characteristics of point cloud data and the point-to-plane distance-based measurement in CGA space. The translation operator and geometric rotation operator used during registration are built in Clifford algebra (CA) space. Conformal geometric algebra is used to realize the registration of 3D CT/MR-PD medical image data based on the dual-vector geometric invariant. The registration experiment results indicate that the methodology proposed in this paper offers stronger generality, a lighter computational burden, shorter time consumption, and intuitive geometric meaning. Both subjective evaluation and objective indicators show that the proposed methodology achieves high registration accuracy and is suitable for 3D medical image registration.
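The rotation operator and the point-to-plane measure can be illustrated in plain Euclidean terms, since a CGA rotor acting on a Euclidean point reduces to an ordinary rotation. This sketch uses Rodrigues' formula as a stand-in for the paper's Clifford-algebra operators; the names are ours.

```python
import numpy as np

def rotor_matrix(axis, angle):
    """Rotation operator (Euclidean stand-in for a CGA rotor),
    built with Rodrigues' formula."""
    axis = np.asarray(axis, float)
    axis /= np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def point_to_plane_distance(points, plane_point, plane_normal):
    """Signed distances from a point cloud to a plane; in Euclidean
    terms this is the kind of measurement the CGA invariant is
    built from."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return (points - plane_point) @ n
```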


2014 ◽  
Vol 513-517 ◽  
pp. 3020-3023
Author(s):  
Yun Feng Yang ◽  
Cheng Xin Lin ◽  
Peng Xiao Wang ◽  
Jia Li ◽  
Bo Li

Medical image registration is an important technique in the clinical medicine field. A novel hierarchical registration method for medical images based on multiscale information and contour lines is proposed in this paper. First, contour lines of the image pair are extracted from the edge features obtained by the Canny operator, and the contour lines are resampled to reduce the calculation cost of the registration process. Secondly, the principal axes method is used to accomplish rough registration based on the resampled contour lines. Thirdly, multiscale image series obtained by down-sampling are used to accomplish fine registration of the image pair. Experimental results show that the method not only achieves more accurate registration results but also greatly reduces the computational time. Accurate registration results can also be achieved in noisy environments.
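The principal-axes rough registration step can be sketched as follows, assuming 2D contour points: match centroids, then rotate one axis frame onto the other. A generic illustration, not the authors' code.

```python
import numpy as np

def principal_axes(points):
    """Centroid and principal axes (eigenvectors of the covariance
    matrix) of a set of 2D contour points."""
    c = points.mean(axis=0)
    cov = np.cov((points - c).T)
    w, v = np.linalg.eigh(cov)      # eigenvalues in ascending order
    return c, v[:, ::-1]            # axes ordered major-axis first

def rough_align(src, dst):
    """Rigidly map src contour points onto dst by matching centroids
    and principal-axis frames (sign/flip ambiguity not resolved here)."""
    cs, As = principal_axes(src)
    cd, Ad = principal_axes(dst)
    R = Ad @ As.T                   # rotation between the two frames
    return (src - cs) @ R.T + cd
```

Note the classic caveat: eigenvectors are only defined up to sign, so a full implementation must disambiguate 180-degree flips, e.g. by comparing overlap after each candidate alignment.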


2015 ◽  
Vol 27 (04) ◽  
pp. 1550032 ◽  
Author(s):  
Meisen Pan ◽  
Jianjun Jiang ◽  
Fen Zhang ◽  
Qiusheng Rong

The mutual information (MI) technique and the iterative closest point (ICP) algorithm, as intensity-based and feature-based image registration methods respectively, are commonly used in medical image registration. However, several inherent limitations restrict their further development and need to be addressed. On the one hand, they incur heavy calculation costs and low registration efficiency. On the other hand, since they depend heavily on whether the initial rotation and translation registration parameters are well chosen, they often become trapped in local optima and may even fail to register the images. In this paper, we compute the centroids of the reference and floating images by using image moments to obtain the initial translation values, and use improved fuzzy C-means clustering (IFCM) to classify the image coordinates. Before clustering, the proposed method first centralizes the medical image coordinates, creates a two-row coordinate matrix to construct the two-dimensional (2D) sample set partitioned into two classes, computes the slope of a straight line fitted to the two classes, and finally derives the rotation angle from the arc tangent of the slope to obtain the initial rotation values. The experimental results show that the proposed method has a fairly simple implementation, a low computational load, fast registration, and good registration accuracy. It also efficiently avoids trapping in local optima and is suitable for both mono-modality and multi-modality image registration.
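A minimal sketch of the initialization described above. The IFCM clustering step is omitted here: a single least-squares line fit to the foreground coordinates stands in for the two-class fit, and the names are ours.

```python
import numpy as np

def centroid_from_moments(img):
    """Image centroid (y, x) from the raw moments M00, M10, M01."""
    ys, xs = np.indices(img.shape)
    m00 = img.sum()
    return np.array([(img * ys).sum() / m00, (img * xs).sum() / m00])

def initial_parameters(ref, flt):
    """Initial translation = difference of moment centroids; initial
    rotation = difference of the arc tangents of slopes fitted to the
    foreground coordinates (IFCM clustering omitted in this sketch)."""
    def angle(img):
        ys, xs = np.nonzero(img > img.mean())
        slope = np.polyfit(xs, ys, 1)[0]    # least-squares line fit
        return np.arctan(slope)
    t = centroid_from_moments(ref) - centroid_from_moments(flt)
    return t, angle(ref) - angle(flt)
```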


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Kunpeng Cui ◽  
Panpan Fu ◽  
Yinghao Li ◽  
Yusong Lin

The purpose of medical image registration is to find geometric transformations that align two medical images so that the corresponding voxels on the two images are spatially consistent. Nonrigid medical image registration is a key step in medical image processing, such as image comparison, data fusion, target recognition, and pathological change analysis. Existing registration methods only consider registration accuracy but largely neglect the uncertainty of registration results. In this work, a method based on a Bayesian fully convolutional neural network is proposed for nonrigid medical image registration. The proposed method can generate a geometric uncertainty map to calculate the uncertainty of registration results. This uncertainty can be interpreted as a confidence interval, which is essential for judging whether the source data are abnormal. Moreover, the proposed method introduces group normalization, which is conducive to the convergence of the Bayesian neural network. Some representative learning-based image registration methods are compared with the proposed method on different image datasets. Experimental results show that the registration accuracy of the proposed method is better than that of the compared methods, and its antifolding performance is comparable to that of fast image registration and VoxelMorph. Furthermore, the proposed method can evaluate the uncertainty of registration results.
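Two ingredients of the method admit compact sketches: a per-pixel uncertainty map computed from repeated stochastic forward passes of a Bayesian network, and group normalization. Both are shown here in plain numpy as generic illustrations, not the paper's network; the shapes and names are our assumptions.

```python
import numpy as np

def uncertainty_map(sampled_fields):
    """Geometric uncertainty from T stochastic forward passes of a
    Bayesian registration network, given as T sampled displacement
    fields of shape (T, H, W, 2): per-pixel predictive mean and the
    norm of the per-component variance as a scalar uncertainty."""
    mean = sampled_fields.mean(axis=0)
    var = sampled_fields.var(axis=0)            # variance over samples
    return mean, np.linalg.norm(var, axis=-1)   # (H, W) uncertainty

def group_norm(x, num_groups, eps=1e-5):
    """Group normalization over an (N, C, H, W) tensor: channels are
    split into groups and each group is normalised to zero mean and
    unit variance (learned scale/shift omitted)."""
    n, c, h, w = x.shape
    g = x.reshape(n, num_groups, c // num_groups, h, w)
    mu = g.mean(axis=(2, 3, 4), keepdims=True)
    var = g.var(axis=(2, 3, 4), keepdims=True)
    return ((g - mu) / np.sqrt(var + eps)).reshape(n, c, h, w)
```

Unlike batch normalization, the group statistics do not depend on the batch size, which is why group normalization tends to behave well for the small batches typical of 3D medical image training.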


2013 ◽  
Vol 2013 ◽  
pp. 1-10 ◽  
Author(s):  
Peng Liu ◽  
Benjamin Eberhardt ◽  
Christian Wybranski ◽  
Jens Ricke ◽  
Lutz Lüdemann

For coregistration of medical images, rigid methods often fail to provide enough freedom, while reliable elastic methods are available clinically only for special applications. The number of degrees of freedom of elastic models must be reduced for use in the clinical setting to achieve a reliable result. We propose a novel geometry-based method of nonrigid 3D medical image registration and fusion. The proposed method uses a 3D surface-based deformable model as guidance. In our twofold approach, the deformable mesh from one of the images is first applied to the boundary of the object to be registered. Thereafter, the non-rigid volume deformation vector field needed for registration and fusion inside the region of interest (ROI) described by the active surface is inferred from the displacement of the surface mesh points. The method was validated using clinical images of a quasi-rigid organ (kidney) and of an elastic organ (liver). The reduction in the standard deviation of the image intensity difference between the reference image and the model was used as a measure of performance. Landmarks placed at vessel bifurcations in the liver were used as a gold standard for evaluating registration results for the elastic liver. Our registration method was compared with affine registration using mutual information applied to the quasi-rigid kidney. The new method achieved 15.11% better quality with a high confidence level of 99% for rigid registration. However, when applied to the elastic liver, the method has an average landmark dislocation of 4.32 mm. In contrast, affine registration of extracted livers yields a significantly smaller dislocation of 3.26 mm. In conclusion, our validation shows that the novel approach is applicable in cases where internal deformation is not crucial, but it has limitations in cases where internal displacement must also be taken into account.
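The core step, inferring a dense deformation field inside the ROI from the displacements of the surface mesh points, can be sketched with inverse-distance weighting as a simple stand-in for the paper's interpolation scheme; the function and parameter names are ours.

```python
import numpy as np

def dense_field_from_surface(grid_pts, surf_pts, surf_disp, p=2.0, eps=1e-9):
    """Interpolate a dense deformation vector field from surface mesh
    displacements by inverse-distance weighting.
    grid_pts: (M, 3) voxel positions inside the ROI,
    surf_pts: (K, 3) mesh points, surf_disp: (K, 3) their displacements."""
    d = np.linalg.norm(grid_pts[:, None, :] - surf_pts[None, :, :], axis=-1)
    w = 1.0 / (d ** p + eps)            # closer mesh points dominate
    w /= w.sum(axis=1, keepdims=True)   # convex weights per grid point
    return w @ surf_disp                # (M, 3) interpolated field
```

Because the weights are convex, the interior field can only blend the boundary displacements; this matches the paper's conclusion that purely surface-driven deformation cannot capture independent internal displacement.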


2014 ◽  
Vol 2014 ◽  
pp. 1-16 ◽  
Author(s):  
Wu Zhou ◽  
Lijuan Zhang ◽  
Yaoqin Xie ◽  
Changhong Liang

An image pair is often aligned initially by a rigid or affine transformation before a deformable registration method is applied in medical image registration. Inappropriate initial registration may compromise the registration speed or impede the convergence of the optimization algorithm. In this work, a novel technique is proposed for prealignment in both mono-modality and multi-modality image registration based on the statistical correlation of gradient information. A simple and robust algorithm is proposed to determine the rotational difference between two images by matching orientation histograms accumulated from the local orientation of each pixel, without any feature extraction. Experimental results showed that it effectively recovers the orientation angle between two unregistered images, with advantages over the existing edge-map-based method in multi-modality settings. Applying the orientation detection to the registration of CT/MR, T1/T2 MRI, and mono-modality images with rigid and nonrigid deformation improved the chances of finding the global optimum of the registration and reduced the search space of the optimization.
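Orientation-histogram matching can be sketched as follows, assuming gradient-magnitude-weighted histograms and exhaustive circular-shift matching. The angular resolution is limited to the bin width, and the names are ours, not the paper's.

```python
import numpy as np

def orientation_histogram(img, bins=64):
    """Histogram of local gradient orientations, weighted by gradient
    magnitude, accumulated per pixel with no feature extraction."""
    gy, gx = np.gradient(img.astype(float))
    ang = np.arctan2(gy, gx)                 # orientation per pixel
    mag = np.hypot(gx, gy)                   # weight per pixel
    h, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    return h

def rotation_between(img_a, img_b, bins=64):
    """Rotational difference estimated as the circular shift that
    maximises the correlation of the two orientation histograms."""
    ha = orientation_histogram(img_a, bins)
    hb = orientation_histogram(img_b, bins)
    scores = [np.dot(ha, np.roll(hb, s)) for s in range(bins)]
    return int(np.argmax(scores)) * 2 * np.pi / bins   # radians
```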

