Feature-Based Retinal Image Registration Using D-Saddle Feature

2017 ◽  
Vol 2017 ◽  
pp. 1-15 ◽  
Author(s):  
Roziana Ramli ◽  
Mohd Yamani Idna Idris ◽  
Khairunnisa Hasikin ◽  
Noor Khairiah A. Karim ◽  
Ainuddin Wahid Abdul Wahab ◽  
...  

Retinal image registration is important for assisting diagnosis and monitoring retinal diseases, such as diabetic retinopathy and glaucoma. However, registering retinal images for various registration applications requires the detection and distribution of feature points in low-quality regions that contain vessels of varying contrast and sizes. A recent feature detector known as Saddle produces feature points on vessels that are poorly distributed and densely clustered on strong-contrast vessels. Therefore, we propose a multiresolution difference-of-Gaussian pyramid with the Saddle detector (D-Saddle) to detect feature points in low-quality regions containing vessels of varying contrast and sizes. D-Saddle is tested on the Fundus Image Registration (FIRE) dataset, which consists of 134 retinal image pairs. Experimental results show that D-Saddle successfully registered 43% of the retinal image pairs with an average registration accuracy of 2.329 pixels, while lower success rates were observed for the other four state-of-the-art retinal image registration methods: GDB-ICP (28%), Harris-PIIFD (4%), H-M (16%), and Saddle (16%). Furthermore, the registration accuracy of D-Saddle has the weakest correlation (Spearman) with the intensity uniformity metric among all methods. Finally, a paired t-test shows that D-Saddle significantly improved the overall registration accuracy of the original Saddle.
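The paper's exact D-Saddle construction is not reproduced here, but the multiresolution difference-of-Gaussian idea can be sketched in a few lines of numpy. Function names and the sigma values are illustrative assumptions, not the authors' parameters:

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    # 1-D Gaussian kernel, normalized to sum to 1
    radius = radius or int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    # Separable Gaussian blur: convolve rows, then columns
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def dog_pyramid(img, sigmas=(1.0, 2.0, 4.0)):
    # Difference-of-Gaussian responses at several scales; vessel-like
    # structures of different widths respond at different sigmas, which
    # is what lets a detector reach low-contrast, thin vessels.
    blurred = [blur(img.astype(float), s) for s in sigmas]
    return [b2 - b1 for b1, b2 in zip(blurred, blurred[1:])]
```

A detector such as Saddle would then be run on each DoG level rather than on the raw image.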

2020 ◽  
Vol 37 (5) ◽  
pp. 855-864
Author(s):  
Nagendra Pratap Singh ◽  
Vibhav Prakash Singh

The registration of segmented retinal images is mainly used for the diagnosis of various diseases such as glaucoma, diabetes, and hypertension. The diagnosis of these retinal diseases depends on the retinal vessel structure, and fast, accurate registration of segmented retinal images helps to identify changes in the vessels. This paper presents a novel binary robust invariant scalable keypoint (BRISK) feature-based segmented retinal image registration approach. The BRISK framework is an efficient keypoint detection, description, and matching approach. The proposed approach contains three steps: pre-processing, segmentation using a matched filter based on the Gumbel probability density function, and the BRISK framework for registering the segmented source and target retinal images. The effectiveness of the proposed approach is demonstrated by evaluating the normalized cross-correlation of image pairs. The experimental analysis shows that the proposed approach performs better in both registration performance and computation time compared with SURF- and Harris partial intensity invariant feature descriptor (Harris-PIIFD)-based registration.
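BRISK describes each keypoint with a binary string and matches descriptors by Hamming distance, which is what makes it fast. A minimal numpy sketch of that matching step, assuming descriptors arrive as packed `uint8` byte arrays (the distance threshold is an illustrative value, not the paper's):

```python
import numpy as np

def hamming_match(desc_a, desc_b, max_dist=80):
    # Brute-force Hamming matching of binary descriptors (as BRISK uses).
    # XOR the packed bytes, unpack to bits, and count differing bits.
    popcount = np.unpackbits(desc_a[:, None, :] ^ desc_b[None, :, :],
                             axis=2).sum(axis=2)
    nearest = popcount.argmin(axis=1)
    dists = popcount[np.arange(len(desc_a)), nearest]
    # Keep only matches whose bit distance is small enough
    return [(i, int(j)) for i, (j, d) in enumerate(zip(nearest, dists))
            if d <= max_dist]
```

Because the distance is a bitwise XOR plus popcount, this is far cheaper per pair than the floating-point Euclidean distances used by SURF-style descriptors.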


2021 ◽  
Vol 11 (23) ◽  
pp. 11201
Author(s):  
Roziana Ramli ◽  
Khairunnisa Hasikin ◽  
Mohd Yamani Idna Idris ◽  
Noor Khairiah A. Karim ◽  
Ainuddin Wahid Abdul Wahab

Feature-based retinal fundus image registration (RIR) techniques align fundus images according to geometrical transformations estimated between feature point correspondences. To ensure accurate registration, the extracted feature points must lie on the retinal vessels and be spread throughout the image. However, noise in the fundus image may resemble retinal vessels in local patches. Therefore, this paper introduces a feature extraction method based on a local feature of retinal vessels (CURVE) that incorporates the characteristics of retinal vessels and noise to accurately extract feature points on retinal vessels and throughout the fundus image. CURVE's performance is tested on the CHASE, DRIVE, HRF and STARE datasets and compared with six feature extraction methods used in existing feature-based RIR techniques. In the experiments, the feature extraction accuracy of CURVE (86.021%) significantly outperformed the existing feature extraction methods (p ≤ 0.001*). CURVE is then paired with the scale-invariant feature transform (SIFT) descriptor to test its registration capability on the fundus image registration (FIRE) dataset. Overall, CURVE-SIFT successfully registered 44.030% of the image pairs, while the existing feature-based RIR techniques (GDB-ICP, Harris-PIIFD, Ghassabi's-SIFT, H-M 16, H-M 17 and D-Saddle-HOG) registered less than 27.612% of the image pairs. One-way ANOVA showed that CURVE-SIFT significantly outperformed GDB-ICP (p = 0.007*), Harris-PIIFD, Ghassabi's-SIFT, H-M 16, H-M 17 and D-Saddle-HOG (p ≤ 0.001*).
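When SIFT descriptors are matched between two images, the standard practice (Lowe's ratio test) is to accept a correspondence only if the nearest descriptor is clearly closer than the second-nearest, which rejects ambiguous matches before the transformation is estimated. A small numpy sketch, with an illustrative ratio threshold:

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.8):
    # Lowe's ratio test: accept a match only if the nearest descriptor in
    # desc_b is sufficiently closer than the second-nearest, so that
    # ambiguous (easily confused) correspondences are discarded.
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, int(j1)))
    return matches
```

The surviving correspondences are what a feature-based RIR pipeline feeds to the transformation estimation stage.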


2020 ◽  
Vol 86 (3) ◽  
pp. 177-186
Author(s):  
Matthew Plummer ◽  
Douglas Stow ◽  
Emanuel Storey ◽  
Lloyd Coulter ◽  
Nicholas Zamora ◽  
...  

Image registration is an important preprocessing step prior to detecting changes using multi-temporal image data, and is increasingly accomplished using automated methods. In high spatial resolution imagery, shadows represent a major source of illumination variation, which can reduce the performance of automated registration routines. This study evaluates the statistical relationship between shadow presence and image registration accuracy, and whether masking and normalizing shadows leads to improved automatic registration results. Eighty-eight bitemporal aerial image pairs were co-registered using software called Scale Invariant Feature Transform (SIFT) and Random Sample Consensus (RANSAC) Alignment (SARA). Co-registration accuracy was assessed at different levels of shadow coverage and shadow movement within the images. The primary outcomes of this study are (1) the amount of shadow in a multi-temporal image pair is correlated with the accuracy/success of automatic co-registration; (2) masking out shadows prior to match point selection does not improve the success of image-to-image co-registration; and (3) normalizing or brightening shadows can help match point routines find more match points and therefore improve the performance of automatic co-registration. Normalizing shadows via a standard linear correction provided the most reliable co-registration results in image pairs containing substantial amounts of relative shadow movement, but had minimal effect for pairs with stationary shadows.
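A "standard linear correction" of shadows is typically a gain/offset rescale that maps the statistics of shadow pixels onto those of the lit pixels. The sketch below is a generic version of that idea, not the study's exact procedure, and assumes a binary shadow mask is already available:

```python
import numpy as np

def normalize_shadows(img, shadow_mask):
    # Linear (gain/offset) shadow correction: rescale shadow pixels so
    # their mean and spread match the statistics of the lit pixels,
    # making both regions usable for match-point detection.
    img = img.astype(float)                     # work on a float copy
    lit, sh = img[~shadow_mask], img[shadow_mask]
    gain = lit.std() / max(sh.std(), 1e-6)      # guard against flat regions
    img[shadow_mask] = (sh - sh.mean()) * gain + lit.mean()
    return np.clip(img, 0, 255)
```

After this correction, keypoint detectors see comparable contrast inside and outside the former shadow, which is why the study observes more match points.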


2019 ◽  
Vol 7 (6) ◽  
pp. 178
Author(s):  
Armagan Elibol ◽  
Nak Young Chong

Image registration is one of the most fundamental and widely used tools in optical mapping applications. It is mostly achieved by extracting salient points (features) from images, describing them with vectors (feature descriptors), and matching the descriptors. When matching the descriptors, mismatches (outliers) inevitably appear. Probabilistic methods are then applied, in an iterative manner, to remove outliers and to find the transformation (motion) between images. In this paper, an efficient way of integrating geometric invariants into feature-based image registration is presented, aimed at improving the performance of image registration in terms of both computational time and accuracy. To do so, geometric properties that are invariant to coordinate transforms are studied. This would be beneficial to all methods that use image registration as an intermediate step. Experimental results are presented using both semi-synthetically generated data and real image pairs from underwater environments.
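One concrete example of such an invariant: under a similarity transform (rotation, translation, uniform scale), ratios of pairwise distances are preserved, so a triplet of putative correspondences can be screened cheaply before any iterative estimation. This sketch is a generic illustration of the idea, not the paper's specific invariants:

```python
import numpy as np

def invariant_consistent(p, q, tol=0.05):
    # Under a similarity transform, ratios of pairwise distances are
    # invariant; compare them for a triplet of putative correspondences
    # (p[i] <-> q[i]).  Inconsistent triplets contain an outlier.
    def ratios(pts):
        d = [np.linalg.norm(pts[i] - pts[j]) for i, j in ((0, 1), (1, 2), (0, 2))]
        return np.array(d) / d[0]
    return bool(np.allclose(ratios(p), ratios(q), rtol=tol))
```

Screening with such checks shrinks the outlier fraction before RANSAC-style estimation, which reduces the number of iterations those probabilistic methods need.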


2019 ◽  
Vol 11 (12) ◽  
pp. 1418
Author(s):  
Zhaohui Zheng ◽  
Hong Zheng ◽  
Yong Ma ◽  
Fan Fan ◽  
Jianping Ju ◽  
...  

In feature-based image matching, implementing a fast and ultra-robust feature matching technique is a challenging task. To address the problems of traditional feature matching algorithms, such as long running times and low registration accuracy, an algorithm called feedback unilateral grid-based clustering (FUGC) is presented that improves the computational efficiency, accuracy and robustness of feature-based image matching when applied to remote sensing image registration. First, the image is divided using unilateral grids, and fast coarse screening of the initial matching feature points through local grid clustering eliminates a great number of mismatches within milliseconds. To ensure that true matches are not erroneously screened out, a local linear transformation is designed for further feedback verification, performing fine screening that separates true matching points deleted erroneously from undeleted false positives in and around each area. This strategy not only extracts high-accuracy matches from low-accuracy coarse baseline matching, but also preserves the true matching points to the greatest extent. The experimental results demonstrate the strong robustness of the FUGC algorithm on various real-world remote sensing images. The FUGC algorithm outperforms current state-of-the-art methods and meets real-time requirements.
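The FUGC paper's exact grid and feedback rules are not reproduced here, but the coarse-screening idea (correct matches in a neighbourhood share a consistent motion, so they cluster in a grid over displacement vectors) can be sketched as follows. Cell size and vote threshold are illustrative assumptions:

```python
from collections import Counter

def grid_screen(matches, cell=16, min_votes=3):
    # Coarse screening in the spirit of grid-based clustering: bin each
    # match's displacement vector into a grid cell and keep only matches
    # that land in well-populated cells (locally consistent motion).
    # matches: list of ((x1, y1), (x2, y2)) point pairs.
    cells = [((x2 - x1) // cell, (y2 - y1) // cell)
             for (x1, y1), (x2, y2) in matches]
    votes = Counter(cells)
    return [m for m, c in zip(matches, cells) if votes[c] >= min_votes]
```

Because the screen is just binning and counting, it runs in milliseconds even on thousands of matches; the feedback verification stage would then re-examine matches near the kept cells.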


2011 ◽  
Vol 66-68 ◽  
pp. 1954-1959
Author(s):  
Hong Bo Zhu ◽  
Xue Jun Xu ◽  
Xue Song Chen ◽  
Shao Hua Jiang

Matching feature points is an important step in image registration. For high-dimensional feature vectors, the matching process is very time-consuming, especially when matching a vast number of points. Provided that registration accuracy is preserved, filtering the candidate vectors to reduce their number can effectively reduce the matching time. This paper presents a matching algorithm that filters feature points based on their corner-feature characteristics. The method effectively improves matching speed while guaranteeing registration accuracy.
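The filtering idea can be made concrete: rank candidate keypoints by a corner-strength score (e.g. a Harris response) and match only the strongest fraction, so the expensive high-dimensional descriptor comparisons run on far fewer points. A minimal sketch, with the kept fraction as an illustrative parameter:

```python
import numpy as np

def filter_by_corner_response(keypoints, responses, keep_frac=0.3):
    # Keep only the strongest fraction of corners before descriptor
    # matching, shrinking the candidate set that must be compared.
    k = max(1, int(len(keypoints) * keep_frac))
    order = np.argsort(responses)[::-1][:k]   # strongest first
    return [keypoints[i] for i in order]
```

Since brute-force matching is quadratic in the number of points, keeping 30% of them cuts the descriptor comparisons to roughly 9% of the original cost.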


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1188
Author(s):  
Qingqing Li ◽  
Guangliang Han ◽  
Peixun Liu ◽  
Hang Yang ◽  
Huiyuan Luo ◽  
...  

It is difficult to find correct correspondences for infrared and visible image registration because of the different imaging principles. Traditional registration methods based on point features require designing complicated feature descriptors and eliminating mismatched points, which results in unsatisfactory precision and long computation times. To tackle these problems, this paper presents a method based on constrained point features to align infrared and visible images. The proposed method comprises three steps. First, constrained point features are extracted using an object detection algorithm, which avoids constructing a complex feature descriptor and introduces high-level semantic information to improve registration accuracy. Then, the left value rule (LV-rule) is designed to match constrained points strictly, without a separate step for deleting mismatched and redundant points. Finally, the affine transformation matrix is calculated from the matched point pairs. Moreover, this paper presents an evaluation method that automatically estimates registration accuracy. The proposed method is tested on a public dataset. Across all tested infrared-visible image pairs, the registration results demonstrate that the proposed framework outperforms five state-of-the-art registration algorithms in terms of accuracy, speed, and robustness.
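The final step, recovering an affine transformation from matched point pairs, is a plain least-squares problem: with three or more correspondences, solve dst ≈ A·src + t. A minimal numpy sketch of that computation (the LV-rule matching itself is not reproduced here):

```python
import numpy as np

def affine_from_pairs(src, dst):
    # Least-squares affine fit from >= 3 correspondences:
    # solve [x y 1] @ M = [x' y'] for the 3x2 parameter matrix,
    # returned as the conventional 2x3 affine matrix [A | t].
    A = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M.T
```

With exactly three non-collinear pairs the system is determined; with more, the least-squares solution averages out small localization errors in the matched points.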


Author(s):  
N. Zhu ◽  
B. Yang ◽  
Y. Jia

Abstract. We propose using the relative orientation model (ROM) of panoramic images to register MMS LiDAR points with a panoramic image sequence, an approach with wide applicability. Feature points, extracted and matched from panoramic image pairs, are used to solve the relative position and attitude parameters in the ROM; then, combined with the absolute position and attitude parameters of the initial panoramic image, the MMS LiDAR points and the panoramic image sequence are registered. First, we propose the position/attitude ROM (PA-ROM) and the attitude ROM (A-ROM) of panoramic images, which apply to the cases where both position and attitude parameters are unknown and where only the attitude parameters are unknown, respectively. Second, we automatically extract and match feature points from panoramic image pairs using the SURF algorithm; since mismatched points affect registration accuracy, the RANSAC algorithm and the ROM are used to choose the best matching points automatically. Finally, we manually select feature points from the MMS LiDAR points and the panoramic image sequence as checkpoints, and compare the registration accuracy of continuous and discontinuous panoramic image pairs. The results show that the MMS LiDAR points and the panoramic image sequence are registered accurately based on the ROM (7.36 and 3.75 pixels in datasets I and II). Moreover, our registration method operates only on the image pairs (the LiDAR points are not involved), so it is suitable for more road scenes.
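The RANSAC step used to choose the best matching points follows a standard pattern: hypothesize a motion from a minimal sample, count the correspondences that agree with it, and keep the largest consensus set. The sketch below uses a pure-translation motion model for brevity; the paper's actual model is the panoramic ROM, which is considerably richer:

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=2.0, seed=0):
    # Minimal RANSAC with a pure-translation model: one sampled
    # correspondence hypothesizes the shift, all correspondences vote,
    # and the largest consensus set wins.
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                       # hypothesized shift
        inliers = np.linalg.norm(dst - (src + t), axis=1) <= tol
        if inliers.sum() > best.sum():
            best = inliers
    # Refit the shift on the full consensus set
    t = (dst[best] - src[best]).mean(axis=0)
    return t, best
```

Swapping the one-point translation hypothesis for a minimal ROM solution gives the outlier-rejection scheme the abstract describes.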

