Cylindrical affine transformation model for image registration

Author(s): Christine Tanner, Timothy Carter, David Hawkes, Gábor Székely

2013
Author(s): Feiyu Chen, Peng Zheng, Penglong Xu, Andrew D. A. Maidment, Predrag R. Bakic, ...

Author(s): X. J. Shan, P. Tang

Given the influences of illumination, imaging angle, and geometric distortion, among others, false matching points still occur in all image registration algorithms. Detecting false matching points is therefore an important step in remote sensing image registration. Random Sample Consensus (RANSAC) is typically used for this purpose; however, RANSAC cannot detect all false matching points in some remote sensing images. A robust false-matching-point detection method based on the K-nearest-neighbour (K-NN) graph, called KGD, is therefore proposed to obtain robust and highly accurate results. The KGD method starts by constructing a K-NN graph in one image, linking each matching point to its K nearest matching points. A local transformation model for each matching point is then estimated from its K nearest matching points, and the error of each matching point is computed with respect to its local model. Finally, the L matching points with the largest errors are identified as false matches and removed. This process is iterated until all errors fall below a given threshold. In addition, the KGD method can be combined with other methods such as RANSAC. Several remote sensing images with different resolutions and terrains are used in the experiments to evaluate the performance of the KGD method, the RANSAC + KGD method, RANSAC, and Graph Transformation Matching (GTM). The experimental results demonstrate the superior performance of the KGD and RANSAC + KGD methods.
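The iterative scheme described above (local model per point, drop the L worst, repeat) can be sketched as follows. This is an illustrative reading of the abstract, not the authors' implementation: the local transformation model is assumed here to be a per-point affine fit over the K nearest matches, and all names and defaults are made up for the sketch.

```python
import numpy as np

def kgd_filter(src, dst, k=5, l_remove=2, err_thresh=1.0, max_iter=50):
    """Sketch of KGD-style false-match removal: fit a local affine model
    to each matching point's K nearest matches, then iteratively remove
    the l_remove points with the largest residuals until all residuals
    fall below err_thresh. Returns the indices of the kept matches."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    keep = np.arange(len(src))
    for _ in range(max_iter):
        if len(keep) <= k + 1:
            break
        errs = []
        for i in keep:
            # K nearest matching points of i in the source image (excluding i)
            d = np.linalg.norm(src[keep] - src[i], axis=1)
            nbr = keep[np.argsort(d)[1:k + 1]]
            # local affine model: [x y 1] @ A ~= [x' y']
            A, *_ = np.linalg.lstsq(
                np.c_[src[nbr], np.ones(len(nbr))], dst[nbr], rcond=None)
            pred = np.r_[src[i], 1.0] @ A
            errs.append(np.linalg.norm(pred - dst[i]))
        errs = np.array(errs)
        if errs.max() < err_thresh:
            break
        # remove the l_remove points with the largest local-model error
        keep = np.delete(keep, np.argsort(errs)[-l_remove:])
    return keep
```

Because the model is local, a single gross outlier inflates only its own residual (and those of its immediate neighbours), which is what lets the method catch mismatches that survive a global RANSAC fit.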


2021
Author(s): Ahmed Shaker, Said M. Easa, Wai Yeung Yan

The line-based transformation model (LBTM), built upon the affine transformation, was previously proposed for image registration and image rectification. The original LBTM first uses control line features to estimate six rotation and scale parameters, and subsequently uses the control point(s) to retrieve the remaining two translation parameters. Such a mechanism may propagate the error of the six rotation and scale parameters into the two translation parameters. In this study, we propose a direct method that estimates all eight transformation parameters of the LBTM simultaneously using least-squares adjustment. The improved LBTM was compared with the original LBTM using one synthetic dataset and three experimental datasets for satellite image 2D registration and 3D rectification. The experimental results demonstrated that the improved LBTM converges to a steady solution with two to three ground control points (GCPs) and five ground control lines (GCLs), whereas the original LBTM requires at least 10 GCLs to yield a stable solution.
Keywords: image registration; image rectification; remote sensing; ground control lines; line-based transformation model
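The "direct" idea, solving all parameters in one adjustment instead of two stages, can be illustrated with a simplified 2D analogue. The sketch below stacks point equations and line-direction constraints into a single least-squares system for a six-parameter 2D affine model; the paper's LBTM has eight parameters and a different line parameterization, so this is an assumed simplification, not the authors' formulation.

```python
import numpy as np

def fit_affine_points_lines(pts_src, pts_dst, dirs_src, dirs_dst):
    """One least-squares system combining point and line constraints.

    Point (x, y) -> (x', y'):  x' = a x + b y + c,  y' = d x + e y + f
    Line direction (u, v) must map to a vector parallel to (u', v'):
        (a u + b v) v' - (d u + e v) u' = 0   (translation drops out)
    All constraints are linear in (a, b, c, d, e, f), so one lstsq call
    estimates every parameter simultaneously.
    """
    rows, rhs = [], []
    for (x, y), (xp, yp) in zip(pts_src, pts_dst):
        rows.append([x, y, 1, 0, 0, 0]); rhs.append(xp)
        rows.append([0, 0, 0, x, y, 1]); rhs.append(yp)
    for (u, v), (up, vp) in zip(dirs_src, dirs_dst):
        rows.append([u * vp, v * vp, 0, -u * up, -v * up, 0]); rhs.append(0.0)
    p, *_ = np.linalg.lstsq(np.array(rows, float), np.array(rhs, float),
                            rcond=None)
    return p  # a, b, c, d, e, f
```

Note how the line constraints are homogeneous (right-hand side zero): lines fix the rotation/scale part only up to the information the points supply, which is consistent with the abstract's finding that a few GCPs plus a few GCLs suffice.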


2019, Vol 25 (5), pp. 4-10
Author(s): Shoufeng Jin, Qiangqiang Lin, Jian Yang, Yu Bie, Mingrui Tian, ...

An improved SURF (Speeded-Up Robust Features) algorithm is proposed to address the time-consuming computation and low positioning precision of industrial robots. The Hessian matrix determinant is used to extract feature points from the target image, and a multi-scale spatial pyramid is constructed. The location and scale of each feature point are determined by neighbourhood non-maximum suppression. The orientation of each feature point is encoded in a directional descriptor based on the binary robust independent elementary features (BRIEF). Progressive sample consensus (PROSAC) is used to perform a second, precise matching pass and to remove mismatched points based on the Hamming distance. An affine transformation model is then established to describe the relationship between the template and target images, and the centroid coordinates of the target are obtained from this affine transformation. Comparative tests demonstrate that the proposed method effectively improves the recognition rate and positioning accuracy of industrial robots: the average processing time is less than 0.2 s, the matching accuracy is 96%, and the positioning error of the robot is less than 1.5 mm. The proposed method therefore has practical application value.
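The final localization step, fitting an affine model to the surviving matches and pushing the template centroid through it, is simple enough to sketch. Function and variable names below are illustrative assumptions; the feature extraction and PROSAC stages are taken as already done.

```python
import numpy as np

def locate_target(template_pts, target_pts, template_centroid):
    """Fit an affine model to matched keypoints (template -> target) and
    map the template centroid through it to get the target's centroid.
    template_pts, target_pts: (N, 2) arrays of corresponding keypoints.
    """
    X = np.c_[template_pts, np.ones(len(template_pts))]  # (N, 3) homogeneous
    A, *_ = np.linalg.lstsq(X, target_pts, rcond=None)   # (3, 2) affine params
    return np.r_[template_centroid, 1.0] @ A             # centroid in target
```

With mismatches already removed by PROSAC, a plain least-squares fit is adequate; the robustness lives in the matching stage, not here.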


2019, Vol 11 (19), pp. 2235
Author(s): Han, Kim, Yeom

A large number of evenly distributed conjugate points (CPs) in entirely overlapping regions of the images are required to achieve successful co-registration between very-high-resolution (VHR) remote sensing images. The CPs are then used to construct a non-linear transformation model that locally warps a sensed image to a reference image’s coordinates. Piecewise linear (PL) transformation is largely exploited for warping VHR images because of its superior performance as compared to the other methods. The PL transformation constructs triangular regions on a sensed image from the CPs by applying the Delaunay algorithm, after which the corresponding triangular regions in a reference image are constructed using the same CPs on the image. Each corresponding region in the sensed image is then locally warped to the regions of the reference image through an affine transformation estimated from the CPs on the triangle vertices. The warping performance of the PL transformation shows reliable results, particularly in regions inside the triangles, i.e., within the convex hulls. However, the regions outside the triangles, which are warped when the extrapolated boundary planes are extended using CPs located close to the regions, incur severe geometric distortion. In this study, we propose an effective approach that focuses on the improvement of the warping performance of the PL transformation over the external area of the triangles. Accordingly, the proposed improved piecewise linear (IPL) transformation uses additional pseudo-CPs intentionally extracted from positions on the boundary of the sensed image. The corresponding pseudo-CPs on the reference image are determined by estimating the affine transformation from CPs located close to the pseudo-CPs. The latter are simultaneously used with the former to construct the triangular regions, which are enlarged accordingly. 
Experiments on both simulated and real datasets, constructed from Worldview-3 and Kompsat-3A satellite images, were conducted to validate the effectiveness of the proposed IPL transformation. The IPL transformation was shown to outperform existing linear/non-linear transformation models such as affine, third- and fourth-order polynomial, local weighted mean, and PL transformations. Moreover, we demonstrated that the IPL transformation improved the warping performance over the PL transformation outside the triangular regions, increasing the correlation coefficient values from 0.259 to 0.304, 0.603 to 0.657, and 0.180 to 0.338 in the first, second, and third real datasets, respectively.
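The per-triangle building block of the PL transformation described above, an affine map sending one triangle to its counterpart, can be written in a few lines using barycentric coordinates, since the affine map between two triangles preserves them. A minimal sketch (single triangle pair; the full method builds a Delaunay triangulation over all CPs):

```python
import numpy as np

def warp_point_pl(p, tri_src, tri_dst):
    """Warp a point inside a source triangle to the destination triangle
    via the unique affine map between them: compute the point's
    barycentric coordinates in tri_src, then re-evaluate them at the
    tri_dst vertices."""
    a, b, c = np.asarray(tri_src, float)
    T = np.column_stack([b - a, c - a])          # 2x2 barycentric basis
    w1, w2 = np.linalg.solve(T, np.asarray(p, float) - a)
    w0 = 1.0 - w1 - w2                           # weights sum to 1
    d, e, f = np.asarray(tri_dst, float)
    return w0 * d + w1 * e + w2 * f
```

Outside the convex hull of the CPs one or more barycentric weights go negative and the map extrapolates, which is exactly the regime where the abstract reports severe distortion and where the pseudo-CPs of the IPL transformation help.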


2009, Vol 2009, pp. 1-18
Author(s): Lotta M. Ellingsen, Jerry L. Prince

Image registration is a crucial step in many medical image analysis procedures, such as image fusion, surgical planning, segmentation and labeling, and shape comparison in population or longitudinal studies. A new approach to volumetric intersubject deformable image registration is presented. The method, called Mjolnir, is an extension of the highly successful HAMMER method. New image features are introduced to better localize points of correspondence between the two images, together with a novel approach that generates a dense displacement field based upon the weighted diffusion of automatically derived feature correspondences. An extensive validation of the algorithm was performed on T1-weighted SPGR MR brain images from the NIREP evaluation database. The results were compared with those generated by HAMMER and are shown to yield significant improvements in cortical alignment as well as reduced computation time.


Author(s): Peter Rogelj, Wassim El-Hajj-Chehade

In this study, we focus on improving the efficiency and accuracy of nonrigid multi-modality registration of medical images. To this end, we analyze the potential of the point similarity measurement approach as an alternative to the global computation of mutual information (MI), which is still the most widely used multi-modality similarity measure. The improvement capabilities are illustrated using the popular B-spline transformation model. The proposed solution combines three related improvements over the most straightforward implementation: efficient computation of the voxel displacement field, local estimation of similarity, and the use of a static image intensity dependence estimate. Five image registration prototypes were implemented to show the contribution and interdependence of the proposed improvements. When all the proposed improvements are applied, a significant reduction of computational cost and increased accuracy are obtained. The concept offers further improvement opportunities by incorporating prior knowledge and machine learning techniques into the static intensity dependence estimation.
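For reference, the global MI baseline that the point-similarity approach localizes is computed from the joint intensity histogram of the two images. A minimal histogram-based sketch (bin count and names are illustrative choices, not from the paper):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Global mutual information between two images, in nats, estimated
    from their joint intensity histogram:
        MI = sum p(a, b) * log( p(a, b) / (p(a) * p(b)) )."""
    h, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = h / h.sum()                            # joint distribution
    p_a = p_ab.sum(axis=1, keepdims=True)         # marginal of img_a
    p_b = p_ab.sum(axis=0, keepdims=True)         # marginal of img_b
    nz = p_ab > 0                                 # skip empty bins
    return float((p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])).sum())
```

This global estimate needs the whole overlap region per evaluation; the appeal of point similarity measures is that, given a static intensity dependence estimate, a similarity contribution can be read off per voxel without recomputing the joint histogram.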

