Reducing Shadow Effects on the Co-Registration of Aerial Image Pairs

2020 ◽  
Vol 86 (3) ◽  
pp. 177-186
Author(s):  
Matthew Plummer ◽  
Douglas Stow ◽  
Emanuel Storey ◽  
Lloyd Coulter ◽  
Nicholas Zamora ◽  
...  

Image registration is an important preprocessing step prior to detecting changes with multi-temporal image data, and it is increasingly accomplished using automated methods. In high-spatial-resolution imagery, shadows are a major source of illumination variation that can reduce the performance of automated registration routines. This study evaluates the statistical relationship between shadow presence and image registration accuracy, and whether masking and normalizing shadows leads to improved automatic registration results. Eighty-eight bitemporal aerial image pairs were co-registered using the SIFT and RANSAC Alignment (SARA) software, which combines the Scale-Invariant Feature Transform (SIFT) with Random Sample Consensus (RANSAC). Co-registration accuracy was assessed at different levels of shadow coverage and shadow movement within the images. The primary outcomes of this study are: (1) the amount of shadow in a multi-temporal image pair is correlated with the accuracy/success of automatic co-registration; (2) masking out shadows prior to match point selection does not improve the success of image-to-image co-registration; and (3) normalizing or brightening shadows can help match point routines find more match points and thereby improve the performance of automatic co-registration. Normalizing shadows via a standard linear correction provided the most reliable co-registration results in image pairs containing substantial relative shadow movement, but had minimal effect for pairs with stationary shadows.
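The RANSAC stage of this kind of SIFT+RANSAC alignment can be sketched in a few lines. The following numpy-only illustration (a generic sketch, not the SARA software itself) fits a 2×3 affine transform to putative match points while rejecting outliers:

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2x3 affine from >= 3 point pairs (Nx2 arrays)."""
    A = np.hstack([src, np.ones((len(src), 1))])      # N x 3 design matrix
    # Solve A @ X = dst for the 3x2 parameter matrix, then transpose
    X, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return X.T                                         # 2 x 3

def ransac_affine(src, dst, n_iter=500, tol=2.0, seed=0):
    """RANSAC: repeatedly fit on 3 random pairs, keep the largest inlier set."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), size=3, replace=False)
        M = estimate_affine(src[idx], dst[idx])
        pred = src @ M[:, :2].T + M[:, 2]              # apply candidate model
        inliers = np.linalg.norm(pred - dst, axis=1) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on all inliers for the final model
    return estimate_affine(src[best_inliers], dst[best_inliers]), best_inliers
```

Given mostly correct matches plus a few gross mismatches, the recovered affine matches the true transform and the mismatches fall outside the inlier set.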

2017 ◽  
Vol 2017 ◽  
pp. 1-15 ◽  
Author(s):  
Roziana Ramli ◽  
Mohd Yamani Idna Idris ◽  
Khairunnisa Hasikin ◽  
Noor Khairiah A. Karim ◽  
Ainuddin Wahid Abdul Wahab ◽  
...  

Retinal image registration is important for assisting diagnosis and monitoring retinal diseases such as diabetic retinopathy and glaucoma. However, registering retinal images for various registration applications requires detecting and distributing feature points across low-quality regions that contain vessels of varying contrast and size. A recent feature detector known as Saddle produces feature points that are poorly distributed along vessels and densely clustered on strong-contrast vessels. Therefore, we propose a multiresolution difference-of-Gaussian pyramid with the Saddle detector (D-Saddle) to detect feature points on low-quality regions containing vessels of varying contrast and size. D-Saddle is tested on the Fundus Image Registration (FIRE) dataset, which consists of 134 retinal image pairs. Experimental results show that D-Saddle successfully registered 43% of retinal image pairs with an average registration accuracy of 2.329 pixels, while lower success rates are observed for the other four state-of-the-art retinal image registration methods: GDB-ICP (28%), Harris-PIIFD (4%), H-M (16%), and Saddle (16%). Furthermore, among all methods, the registration accuracy of D-Saddle has the weakest (Spearman) correlation with the intensity uniformity metric. Finally, a paired t-test shows that D-Saddle significantly improved the overall registration accuracy over the original Saddle.
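The multiresolution difference-of-Gaussian pyramid that D-Saddle builds on can be illustrated with plain numpy. This is a minimal sketch of the pyramid only, not the Saddle detector itself; the octave count and base sigma are illustrative values:

```python
import numpy as np

def gaussian_kernel1d(sigma):
    """Sampled, normalised 1-D Gaussian with a ~3-sigma radius."""
    radius = int(3 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur via two passes of 1-D convolution."""
    k = gaussian_kernel1d(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def dog_pyramid(img, n_octaves=3, sigma=1.6):
    """Difference-of-Gaussian images at successively halved resolutions."""
    dogs = []
    for _ in range(n_octaves):
        dogs.append(blur(img, sigma * 2) - blur(img, sigma))
        img = img[::2, ::2]              # downsample for the next octave
    return dogs
```

A detector then searches each DoG level for its keypoints, so features are found on both coarse and fine vessels.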


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1188
Author(s):  
Qingqing Li ◽  
Guangliang Han ◽  
Peixun Liu ◽  
Hang Yang ◽  
Huiyuan Luo ◽  
...  

It is difficult to find correct correspondences between infrared and visible images because of their different imaging principles. Traditional registration methods based on point features require designing a complicated feature descriptor and eliminating mismatched points, which results in unsatisfactory precision and long computation times. To tackle these problems, this paper presents a method based on constrained point features to align infrared and visible images. The proposed method consists of three main steps. First, constrained point features are extracted using an object detection algorithm, which avoids constructing a complex feature descriptor and introduces high-level semantic information to improve registration accuracy. Then, the left value rule (LV-rule) is designed to match constrained points strictly, without a separate step for deleting mismatched and redundant points. Finally, the affine transformation matrix is calculated from the matched point pairs. Moreover, this paper presents an evaluation method to automatically estimate registration accuracy. The proposed method is tested on a public dataset. Across all tested infrared-visible image pairs, the registration results demonstrate that the proposed framework outperforms five state-of-the-art registration algorithms in terms of accuracy, speed, and robustness.
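The final step above, computing the affine transformation matrix from matched point pairs, has a simple closed form: three non-collinear correspondences determine a 2×3 affine exactly. A minimal numpy sketch (generic, not the paper's implementation):

```python
import numpy as np

def affine_from_pairs(src, dst):
    """Exact 2x3 affine mapping three source points onto three target points."""
    A = np.hstack([np.asarray(src, float), np.ones((3, 1))])   # 3x3 system
    return np.linalg.solve(A, np.asarray(dst, float)).T        # 2x3 matrix

def warp(points, M):
    """Apply a 2x3 affine matrix to an Nx2 array of points."""
    return np.asarray(points, float) @ M[:, :2].T + M[:, 2]
```

With more than three pairs, a least-squares or RANSAC fit over the same linear system is used instead.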


2021 ◽  
Vol 11 (23) ◽  
pp. 11201
Author(s):  
Roziana Ramli ◽  
Khairunnisa Hasikin ◽  
Mohd Yamani Idna Idris ◽  
Noor Khairiah A. Karim ◽  
Ainuddin Wahid Abdul Wahab

Feature-based retinal fundus image registration (RIR) techniques align fundus images according to geometric transformations estimated between feature point correspondences. To ensure accurate registration, the extracted feature points must lie on the retinal vessels and be spread throughout the image. However, noise in the fundus image may resemble retinal vessels in local patches. Therefore, this paper introduces a feature extraction method based on a local feature of retinal vessels (CURVE) that incorporates retinal vessel and noise characteristics to accurately extract feature points on retinal vessels and throughout the fundus image. CURVE's performance is tested on the CHASE, DRIVE, HRF and STARE datasets and compared with six feature extraction methods used in existing feature-based RIR techniques. In the experiment, the feature extraction accuracy of CURVE (86.021%) significantly outperformed the existing feature extraction methods (p ≤ 0.001*). Then, CURVE is paired with a scale-invariant feature transform (SIFT) descriptor to test its registration capability on the fundus image registration (FIRE) dataset. Overall, CURVE-SIFT successfully registered 44.030% of the image pairs, while the existing feature-based RIR techniques (GDB-ICP, Harris-PIIFD, Ghassabi's-SIFT, H-M 16, H-M 17 and D-Saddle-HOG) registered less than 27.612% of the image pairs. A one-way ANOVA analysis showed that CURVE-SIFT significantly outperformed GDB-ICP (p = 0.007*), Harris-PIIFD, Ghassabi's-SIFT, H-M 16, H-M 17 and D-Saddle-HOG (p ≤ 0.001*).


2014 ◽  
Vol 1044-1045 ◽  
pp. 1392-1396 ◽  
Author(s):  
Shu Guang Wu ◽  
Shu He ◽  
Xia Yang

The Scale-Invariant Feature Transform (SIFT) is commonly used in object recognition and in image registration methods based on point features: SIFT features are invariant to image scale and rotation, and they provide robust matching across a substantial range of affine distortion. However, the SIFT algorithm suffers from large memory consumption and low computation speed. Experiments show that, with registration accuracy remaining stable, the proposed algorithm solves the problem of high memory requirements and greatly improves efficiency, making it applicable for registering remote sensing images of large areas.


Author(s):  
A. Moussa ◽  
N. El-Sheimy

The last few years have witnessed an increasing volume of aerial image data because of extensive improvements in Unmanned Aerial Vehicles (UAVs). These newly developed UAVs have enabled a wide variety of applications. A fast assessment of the coverage and overlap achieved by the images acquired during a UAV flight mission is of great help in saving the time and cost of subsequent steps, and fast automatic stitching of the acquired images allows this assessment to be made visually during the flight mission. This paper proposes an automatic image stitching approach that creates a single overview stitched image from the images acquired during a UAV flight mission, along with a coverage image that represents the count of overlaps between the acquired images. The main challenge of such a task is the huge number of images typically involved: a short flight mission with an image acquisition frequency of one second can capture hundreds to thousands of images. The main focus of the proposed approach is to reduce the processing time of the image stitching procedure by exploiting the initial knowledge about the image positions provided by the navigation sensors. The proposed approach also avoids solving for the transformation parameters of all the photos together, to save the long computation time expected if all the parameters were considered simultaneously. After extracting the points of interest of all the involved images using the Scale-Invariant Feature Transform (SIFT) algorithm, the proposed approach uses the initial image coordinates to build an incremental constrained Delaunay triangulation that represents the neighborhood of each image. This triangulation helps to match only neighboring images and therefore reduces the time-consuming feature matching step. The estimated relative orientation between the matched images is used to find a candidate seed image for the stitching process.
The pre-estimated transformation parameters of the images are then employed successively, in a growing fashion, to create the stitched image and the coverage image. The proposed approach is implemented and tested on images acquired during a UAV flight mission, and the achieved results are presented and discussed.
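The core idea of restricting feature matching to likely-overlapping image pairs can be illustrated without a full constrained Delaunay triangulation. The sketch below substitutes a simpler k-nearest-neighbour proximity heuristic over the navigation-derived image positions; it serves the same purpose (pruning the quadratic set of candidate pairs) but is not the paper's triangulation method:

```python
import numpy as np

def neighbor_pairs(positions, k=3):
    """Candidate image pairs to match: each image with its k nearest neighbours.

    `positions` is an Nx2 array of approximate image centres from the
    navigation sensors.  A proximity heuristic standing in for the paper's
    incremental constrained Delaunay triangulation; both restrict the
    expensive feature matching step to nearby images only.
    """
    pos = np.asarray(positions, float)
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)          # never pair an image with itself
    pairs = set()
    for i in range(len(pos)):
        for j in np.argsort(d[i])[:k]:
            pairs.add((min(i, int(j)), max(i, int(j))))
    return sorted(pairs)
```

For a strip of images along a flight line, this yields only the consecutive-overlap pairs instead of all N·(N−1)/2 combinations.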


2021 ◽  
Vol 11 (13) ◽  
pp. 5986
Author(s):  
Yinsen Zhao ◽  
Farong Gao ◽  
Jun Yu ◽  
Xing Yu ◽  
Zhangyi Yang

To obtain panoramic images in low-contrast underwater environments, an underwater panoramic image mosaic algorithm based on image enhancement and improved image registration (IIR) is proposed. First, mixed filtering and sigma filtering are used to enhance the contrast of the original image and de-noise it. Second, the scale-invariant feature transform (SIFT) is used to detect image feature points. Then, the proposed IIR algorithm is applied to image registration to improve matching accuracy and reduce matching time. Finally, a weighted smoothing method is used for image fusion to avoid visible seams. The results show that the IIR algorithm can effectively improve registration accuracy, shorten registration time, and improve the image fusion effect. In cruise research, instruments equipped with imaging systems, such as television capture and deep-tow camera systems, produce large numbers of images and video recordings. This algorithm supports fast and accurate underwater image mosaicking and has important practical significance.
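The weighted-smoothing fusion step can be sketched as a linear feathering blend across the overlap region: the weight of one image ramps down while the other ramps up, which hides the seam. A minimal numpy sketch for two horizontally adjacent tiles (a generic feathering illustration, not the paper's exact weighting):

```python
import numpy as np

def blend_overlap(left, right, overlap):
    """Feather two horizontally adjacent images across `overlap` columns.

    Inside the overlap the left-image weight ramps linearly from 1 to 0,
    so pixel values transition smoothly instead of jumping at the seam.
    """
    left = np.asarray(left, float)
    right = np.asarray(right, float)
    w = np.linspace(1.0, 0.0, overlap)                    # left-image weight
    seam = w * left[:, -overlap:] + (1 - w) * right[:, :overlap]
    return np.hstack([left[:, :-overlap], seam, right[:, overlap:]])
```

For two flat tiles of values 10 and 20 with a 3-column overlap, the seam column takes the midpoint value 15 rather than an abrupt step.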


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2695
Author(s):  
Xinan Hou ◽  
Quanxue Gao ◽  
Rong Wang ◽  
Xin Luo

As technologies in image fusion, image splicing, and target recognition have developed rapidly, image registration, the basis of many image applications, directly affects the performance of subsequent work. In this work, for the rich features of satellite-borne optical imagery such as panchromatic and multispectral images, the Harris corner algorithm is combined with the scale-invariant feature transform (SIFT) operator for feature point extraction. Our rough matching strategy uses a K-D (K-Dimensional) tree combined with the BBF (Best Bin First) method, and the similarity measure is the ratio of the nearest-neighbor to the second-nearest-neighbor distance. Finally, a triangle-area representation (TAR) algorithm is used to eliminate false matches and ensure registration accuracy. The performance of the proposed algorithm is compared with existing popular algorithms. The experimental results indicate that, for visible-light and multispectral satellite remote sensing images of different sizes and from different sources, the proposed algorithm achieves excellent accuracy and efficiency.
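The nearest/second-nearest distance-ratio test used in the rough matching step (Lowe's ratio test) is easy to sketch. The numpy version below uses brute-force distances in place of the K-D tree + BBF search, but the acceptance rule is the same:

```python
import numpy as np

def ratio_match(desc1, desc2, ratio=0.8):
    """Nearest / second-nearest distance-ratio matching.

    A match is accepted only when the best candidate is clearly closer
    than the runner-up, which filters out ambiguous correspondences.
    Brute-force distances stand in for the K-D tree + BBF search.
    Returns (index_in_desc1, index_in_desc2) pairs.
    """
    d = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    matches = []
    for i, row in enumerate(d):
        j1, j2 = np.argsort(row)[:2]          # nearest and second nearest
        if row[j1] < ratio * row[j2]:         # distinctive enough to keep
            matches.append((i, int(j1)))
    return matches
```

Surviving matches then go to the TAR-style geometric check, which removes the remaining false correspondences.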


Symmetry ◽  
2021 ◽  
Vol 13 (10) ◽  
pp. 1863
Author(s):  
Ying Chen ◽  
Qi Zhang ◽  
Wencheng Zhang ◽  
Lei Chen

Multi-temporal remote sensing image registration is a geometric symmetry process that matches a source image to a target image. To improve accuracy and enhance the robustness of the algorithm, this study proposes an end-to-end registration network: a bidirectional symmetry network based on dual-field cyclic attention for multi-temporal remote sensing image registration, which mainly improves feature extraction and feature matching. (1) We propose a feature extraction framework combining an attention module and a pre-training model, which can accurately locate important areas in images and quickly extract features. Not only is the dual receptive field module designed to enhance attention in the spatial region, but a loop structure is also used to improve the network model and the overall accuracy. (2) Matching has not only directivity but also symmetry. We design a symmetric two-way matching network to reduce the registration deviation caused by one-way matching, and use a Pearson correlation method to improve the cross-correlation matching and enhance the robustness of the matching relation. Compared with two traditional methods and three deep learning-based algorithms, the proposed approach performs well on five indicators across three public multi-temporal datasets. Notably, on the Aerial Image Dataset, the accuracy of the proposed method improves by 39.8% over the Two-stream Ensemble method at a PCK (Percentage of Correct Keypoints) threshold of 0.05; at a threshold of 0.03, accuracy increases by 46.8%, and at 0.01 by 18.7%. Additionally, adding the feature extraction innovations to the baseline network CNNGeo (Convolutional Neural Network Architecture for Geometric Matching) increases accuracy by 36.7% at 0.05 PCK, 18.2% at 0.03 PCK, and 8.4% at 0.01 PCK.
Meanwhile, adding the feature matching innovations to CNNGeo improves accuracy by 16.4% at 0.05 PCK, 9.1% at 0.03 PCK, and 5.2% at 0.01 PCK. In most cases, the proposed method achieves high registration accuracy and efficiency for multi-temporal remote sensing image registration.
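The PCK metric used throughout these comparisons counts a predicted keypoint as correct when it lies within a tolerance proportional to the image size. A minimal sketch under one common definition (tolerance = alpha times the larger image dimension; the paper may normalize differently):

```python
import numpy as np

def pck(pred, gt, image_size, alpha=0.05):
    """Percentage of Correct Keypoints.

    A prediction counts as correct when its distance to the ground-truth
    keypoint is at most alpha * max(image height, width).
    """
    tol = alpha * max(image_size)
    err = np.linalg.norm(np.asarray(pred, float) - np.asarray(gt, float), axis=1)
    return float(np.mean(err <= tol))
```

Smaller alpha values (0.03, 0.01) demand tighter localization, which is why accuracy figures drop as the threshold shrinks.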


Author(s):  
Jiayong Yu ◽  
Longchen Ma ◽  
Maoyi Tian ◽  
Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used in geomatics owing to its efficient data acquisition and convenient operation. However, due to the limited carrying capacity of a UAV, the sensors integrated in the ULS must be small and lightweight, which results in a decrease in the density of the collected scanning points and affects registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds, converting the problem of registering point cloud data with image data into one of matching feature points between two images. First, a point cloud is selected to produce an intensity image. Subsequently, the corresponding feature points of the intensity image and the optical image are matched, and exterior orientation parameters are solved using a collinearity equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical image, to generate a true-color point cloud. The experimental results show that the proposed method achieves high registration accuracy and fusion speed, demonstrating its accuracy and effectiveness.
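The first step, turning a point cloud into an intensity image so that cloud-to-image registration becomes image-to-image matching, amounts to rasterising the points onto a grid. A minimal numpy sketch (a generic binning scheme, not the paper's exact projection):

```python
import numpy as np

def intensity_image(points, intensity, cell=1.0):
    """Rasterise a point cloud into a 2-D intensity image.

    Points are binned onto a grid of `cell`-sized pixels over their XY
    extent, and each pixel takes the mean intensity of the points that
    fall into it.  The resulting image can then be feature-matched
    against an optical frame.
    """
    xy = np.asarray(points, float)[:, :2]
    ij = np.floor((xy - xy.min(axis=0)) / cell).astype(int)
    h, w = ij.max(axis=0) + 1
    img = np.zeros((h, w))
    cnt = np.zeros((h, w))
    for (i, j), v in zip(ij, intensity):
        img[i, j] += v
        cnt[i, j] += 1
    # Mean intensity per occupied cell; empty cells stay zero
    return np.divide(img, cnt, out=np.zeros_like(img), where=cnt > 0)
```

With sparse ULS returns, many cells hold only a few points, which is why the reduced point density the abstract mentions degrades the quality of this intermediate image.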

