Image Feature Matching Based on Semantic Fusion Description and Spatial Consistency

Symmetry ◽  
2018 ◽  
Vol 10 (12) ◽  
pp. 725
Author(s):  
Wei Zhang ◽  
Guoying Zhang

Image feature description and matching are widely used in computer vision, for example in camera pose estimation. Traditional feature descriptors lack semantic and spatial information and give rise to a large number of feature mismatches. To improve the accuracy of image feature matching, this paper proposes a feature description and matching method based on local semantic information fusion and feature spatial consistency. After object detection is performed on the images, feature points are extracted, and image patches of various sizes surrounding these points are clipped. These patches are fed into a Siamese convolutional network to obtain their semantic vectors. The semantic fusion description of a feature point is then obtained as a weighted sum of the semantic vectors, with the weights optimized by a particle swarm optimization (PSO) algorithm. When matching feature points using these descriptions, feature spatial consistency is computed based on the spatial consistency of matched objects and on the orientation and distance constraints of adjacent points within matched objects. With this description and matching method, feature points are matched accurately and effectively. Experimental results demonstrate the effectiveness of the proposed method.
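The weighted-sum fusion step can be sketched as follows. This is a minimal illustration, not the authors' implementation: the vector dimension, patch count, and weights are hypothetical, and in the paper the weights would come from PSO rather than being fixed by hand.

```python
# Sketch: fuse the semantic vectors of multi-scale patches around one
# feature point into a single descriptor by a normalized weighted sum
# (in the paper, the weights are optimized by PSO).

def fuse_semantic_vectors(vectors, weights):
    """vectors: equal-length semantic vectors, one per patch size.
    weights: one non-negative weight per vector; normalized to sum to 1."""
    total = sum(weights)
    norm = [w / total for w in weights]
    fused = [0.0] * len(vectors[0])
    for vec, w in zip(vectors, norm):
        for i, v in enumerate(vec):
            fused[i] += w * v
    return fused

# Example: three patch sizes, 4-dimensional semantic vectors.
vecs = [[1.0, 0.0, 0.0, 2.0],
        [0.0, 1.0, 0.0, 2.0],
        [0.0, 0.0, 1.0, 2.0]]
print(fuse_semantic_vectors(vecs, [2.0, 1.0, 1.0]))  # [0.5, 0.25, 0.25, 2.0]
```

Normalizing the weights keeps the fused descriptor on the same scale as the individual semantic vectors regardless of how many patch sizes are used.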

2021 ◽  
Author(s):  
Aikui Tian ◽  
Kangtao Wang ◽  
Liye Zhang ◽  
Bingcai Wei

Abstract Aiming at the problems of traditional image matching methods, namely inaccurate extraction of feature points, low robustness, and difficulty in identifying feature points in areas with poor texture, this paper proposes a new local image feature matching method that replaces the traditional sequential steps of feature detection, description, and matching. First, coarse features at 1/8 of the original resolution are extracted from the image, tiled into a one-dimensional vector, and combined with a positional encoding; they are fed to the self-attention and cross-attention layers of the Transformer module and passed through a differentiable matching layer to produce a confidence matrix. After applying a threshold and the mutual-nearest criterion, a coarse-level matching prediction is obtained. Second, the matches are refined at the fine level; once the fine-level matches are established, the overlapping image areas are aligned by a transformation matrix into a unified coordinate system, and the images are fused by a weighted fusion algorithm to achieve a seamless mosaic. This paper uses the self-attention and cross-attention layers in Transformers to obtain the feature descriptors of the image. Experiments show that, in terms of feature point extraction, the LoFTR algorithm is more accurate than the traditional SIFT algorithm in both low-texture and richly textured regions. At the same time, the image mosaic obtained by this method is more accurate than that of traditional classic algorithms.
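The coarse-level match selection described above (threshold plus mutual-nearest criterion on a confidence matrix) can be sketched as below; the matrix values and threshold are illustrative, not taken from the paper.

```python
# Sketch: LoFTR-style coarse match selection from a confidence matrix.
# A pair (i, j) is kept if it is the mutual nearest entry (row-wise and
# column-wise argmax) and its confidence exceeds the threshold.

def coarse_matches(conf, threshold):
    rows, cols = len(conf), len(conf[0])
    row_best = [max(range(cols), key=lambda j: conf[i][j]) for i in range(rows)]
    col_best = [max(range(rows), key=lambda i: conf[i][j]) for j in range(cols)]
    matches = []
    for i in range(rows):
        j = row_best[i]
        if col_best[j] == i and conf[i][j] >= threshold:
            matches.append((i, j))
    return matches

conf = [[0.9, 0.1, 0.2],
        [0.3, 0.8, 0.1],
        [0.4, 0.7, 0.2]]
print(coarse_matches(conf, 0.5))  # [(0, 0), (1, 1)]
```

Row 2's best column (0.7) is not mutually nearest, so that tentative match is rejected even though it is fairly confident; this is what suppresses many-to-one matches at the coarse level.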


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1839
Author(s):  
Yutong Zhang ◽  
Jianmei Song ◽  
Yan Ding ◽  
Yating Yuan ◽  
Hua-Liang Wei

Fisheye images with a far larger Field of View (FOV) have severe radial distortion, with the result that the associated image feature matching process cannot achieve the best performance if the traditional feature descriptors are used. To address this challenge, this paper reports a novel distorted Binary Robust Independent Elementary Feature (BRIEF) descriptor for fisheye images based on a spherical perspective model. Firstly, the 3D gray centroid of feature points is designed, and the position and direction of the feature points on the spherical image are described by a constructed feature point attitude matrix. Then, based on the attitude matrix of feature points, the coordinate mapping relationship between the BRIEF descriptor template and the fisheye image is established to realize the computation associated with the distorted BRIEF descriptor. Four experiments are provided to test and verify the invariance and matching performance of the proposed descriptor for a fisheye image. The experimental results show that the proposed descriptor works well for distortion invariance and can significantly improve the matching performance in fisheye images.
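For reference, a standard (undistorted) BRIEF-style descriptor is a string of intensity-comparison bits over fixed offset pairs around the keypoint, as sketched below. The paper's distorted descriptor additionally remaps these offsets through the spherical perspective model and the feature point attitude matrix; that remapping is omitted here, and the test-pair pattern and image are toy values.

```python
# Sketch: BRIEF-style binary descriptor. Each bit compares the intensity
# at two fixed offsets from the keypoint; descriptors are compared with
# the Hamming distance.

def brief_descriptor(image, x, y, pairs):
    """image: 2D list of intensities; pairs: list of ((dx1,dy1),(dx2,dy2))."""
    bits = 0
    for (dx1, dy1), (dx2, dy2) in pairs:
        bits <<= 1
        if image[y + dy1][x + dx1] < image[y + dy2][x + dx2]:
            bits |= 1
    return bits

def hamming(a, b):
    return bin(a ^ b).count("1")

img = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
pairs = [((-1, -1), (1, 1)), ((1, -1), (-1, 1)), ((0, -1), (0, 1))]
d = brief_descriptor(img, 1, 1, pairs)
print(d, hamming(d, 0b000))  # 7 3
```

Because the descriptor is defined purely by where the test pairs land in the image, correcting the pair coordinates for radial distortion (as the paper does via the spherical model) changes the sampling locations without changing this comparison-and-Hamming machinery.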


2021 ◽  
Vol 5 (4) ◽  
pp. 783-793
Author(s):  
Muhammad Muttabi Hudaya ◽  
Siti Saadah ◽  
Hendy Irawan

Verification of uploaded E-KTP (Indonesian electronic identity card) images needs a solid validation process covering both verification and matching. To solve this problem, this paper implements a detection model using Faster R-CNN and a matching method using ORB (Oriented FAST and Rotated BRIEF) with KNN-BFM (K-Nearest Neighbor Brute Force Matcher). The goal of the implementation is to reach the 80% accuracy mark and to show that matching with ORB alone can replace the OCR technique. The detection model reaches a mAP (mean average precision) of 94%, but the matching process only achieves an accuracy of 43.46%. Matching using image features alone underperforms the previous OCR technique, but improves the processing time from 4510 ms to 60 ms. Image matching accuracy has been shown to increase by using a high-quality and high-quantity dataset and by extracting features from the important areas of E-KTP card images.
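The brute-force matching stage can be sketched as follows. The descriptors here are toy integers standing in for 256-bit ORB descriptors; a real pipeline would obtain them from OpenCV's `ORB_create()` and match with `BFMatcher(cv2.NORM_HAMMING, crossCheck=True)`.

```python
# Sketch: brute-force matching of binary (ORB-style) descriptors with
# Hamming distance plus a cross-check, loosely mirroring KNN-BFM.

def hamming(a, b):
    return bin(a ^ b).count("1")

def cross_check_matches(desc1, desc2):
    # Forward: nearest descriptor in desc2 for each descriptor in desc1.
    fwd = [min(range(len(desc2)), key=lambda j: hamming(d, desc2[j])) for d in desc1]
    # Backward: nearest descriptor in desc1 for each descriptor in desc2.
    bwd = [min(range(len(desc1)), key=lambda i: hamming(desc1[i], d)) for d in desc2]
    # Keep only mutually nearest pairs.
    return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]

d1 = [0b10110010, 0b01011100, 0b11110000]
d2 = [0b01011110, 0b10110011, 0b00001111]
print(cross_check_matches(d1, d2))  # [(0, 1), (1, 0)]
```

The cross-check rejects descriptor 2 of `d1`, whose nearest neighbor in `d2` is already claimed by a closer descriptor, which is one simple way to cut down mismatches before any geometric verification.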


Author(s):  
Hongmin Liu ◽  
Hongya Zhang ◽  
Zhiheng Wang ◽  
Yiming Zheng

For images with distortions or repetitive patterns, existing matching methods usually work well on only one of the two kinds of images. In this paper, we present a novel triangle guidance and constraints (TGC)-based feature matching method that achieves good results on both. We first extract stably matched feature points and combine them into triangles as the initial matched triangles; triangles formed from the remaining feature points serve as candidates to be matched. Then, triangle guidance, based on the connection relationship via a shared feature point between the matched triangles and the candidates, is defined to find potential matching triangles. Triangle constraints, specifically the location of a vertex relative to the inscribed circle center of the triangle, the scale represented by the ratio of corresponding side lengths of two matching triangles, and the included angles between the sides of two triangles with a connection relationship, are subsequently used to verify the potential matches and retain the correct ones. Comparative experiments show that the proposed TGC increases the number of matched points with high accuracy under various image transformations, and is especially effective on images with distortions or repetitive patterns, since the triangular structure is not only stable under image transformations but also provides more geometric constraints.
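The scale part of the triangle constraints can be sketched as a consistency check on the ratios of corresponding side lengths; the tolerance value and the coordinates below are illustrative, not the paper's settings.

```python
# Sketch: verify the scale constraint between two candidate matching
# triangles. Under a similarity transform, the three ratios of
# corresponding side lengths should be nearly equal.

import math

def side_lengths(tri):
    a, b, c = tri
    return (math.dist(a, b), math.dist(b, c), math.dist(c, a))

def consistent_scale(tri1, tri2, tol=0.05):
    ratios = [s2 / s1 for s1, s2 in zip(side_lengths(tri1), side_lengths(tri2))]
    return max(ratios) - min(ratios) <= tol * min(ratios)

t1 = [(0, 0), (4, 0), (0, 3)]
t2 = [(10, 10), (18, 10), (10, 16)]   # t1 uniformly scaled by 2 and shifted
t3 = [(10, 10), (18, 10), (10, 20)]   # non-uniformly distorted
print(consistent_scale(t1, t2), consistent_scale(t1, t3))  # True False
```

A full verification along the lines of the paper would combine this with the vertex-to-incircle-center location and the included-angle checks between connected triangles.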


Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6235
Author(s):  
Chengyi Xu ◽  
Ying Liu ◽  
Fenglong Ding ◽  
Zilong Zhuang

Considering the difficult problem of robot recognition and grasping in the scenario of disorderly stacked wooden planks, a recognition and positioning method based on local image features and point pair geometric features is proposed here, and we define a local patch point pair feature. First, we used self-developed scanning equipment to collect images of wooden boards, and a robot-driven RGB-D camera to collect images of disorderly stacked wooden planks. Image patches cut from these images were input to a convolutional autoencoder to train a local texture feature descriptor that is robust to changes in perspective. Then, the small image patches around the point pairs of the plank model are extracted and input into the trained encoder to obtain the feature vector of each image patch; combined with the point pair geometric feature information, this forms a feature description code expressing the characteristics of the plank. After that, the robot drives the RGB-D camera to collect the local image patches of the point pairs in the area to be grasped in the scene of stacked wooden planks, likewise obtaining the feature description code of the planks to be grasped. Finally, through point pair feature matching, pose voting, and clustering, the pose of the plank to be grasped is determined. The robot grasping experiments show that both the recognition rate and the grasping success rate of planks are high, reaching 95.3% and 93.8%, respectively. Compared with the traditional point pair feature (PPF) method and other methods, the method presented here has obvious advantages and can be applied to stacked wooden plank grasping environments.
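The geometric part of a point pair feature is the classic 4-tuple F(m1, m2) = (||d||, ∠(n1, d), ∠(n2, d), ∠(n1, n2)) over two oriented points, sketched below; the paper augments this with the learned image-patch codes. The points and normals here are illustrative.

```python
# Sketch: the classic point pair feature (PPF) for two points with
# surface normals: distance between the points plus three angles.

import math

def angle(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

def ppf(p1, n1, p2, n2):
    d = [b - a for a, b in zip(p1, p2)]
    dist = math.sqrt(sum(a * a for a in d))
    return (dist, angle(n1, d), angle(n2, d), angle(n1, n2))

# Two points one unit apart, both normals along +z: the angles to the
# connecting vector are pi/2 and the normals are parallel.
f = ppf((0, 0, 0), (0, 0, 1), (1, 0, 0), (0, 0, 1))
print(f)
```

Because all four components are invariant to rigid motion, quantized PPF tuples can be used as hash keys for the voting stage that recovers the plank pose.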


2018 ◽  
Vol 10 (9) ◽  
pp. 168781401879503
Author(s):  
Haihua Cui ◽  
Wenhe Liao ◽  
Xiaosheng Cheng ◽  
Ning Dai ◽  
Changye Guo

Flexible and robust point cloud matching is important for three-dimensional surface measurement. This article proposes a new matching method based on three-dimensional image feature points. First, an intrinsic shape signature algorithm is used to detect the key shape feature points, using a weighted three-dimensional occupational histogram of the data points within the angular space, which is a view-independent representation of the three-dimensional shape. Then, the point feature histogram is used to represent the underlying surface model properties at a point, computed from a combination of geometrical relations between the point's k nearest neighbors. The two-view point clouds are robustly matched using the proposed double neighborhood constraint, which minimizes the sum of the Euclidean distances between the local neighbors of the point and the feature point. The proposed optimization method is immune to noise, reduces the search range for matching points, and improves the correct feature point matching rate for weak surface textures. The matching accuracy and stability of the proposed method are verified by experiments. The method can be used for flat surfaces with weak features and in other applications, and has a larger application range than traditional methods.
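One plausible reading of the double neighborhood constraint is sketched below: among the candidate matches for a feature point, prefer the one whose local neighborhood is also closest overall in descriptor space. This is an interpretive sketch, not the paper's algorithm, and the toy 2-D vectors stand in for point feature histogram descriptors.

```python
# Sketch: pick the candidate match that minimizes the point-to-point
# descriptor distance plus the summed distances over paired neighbors,
# so a point is matched only when its neighborhood also agrees.

import math

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def neighborhood_cost(desc, neighbors, cand_desc, cand_neighbors):
    cost = euclid(desc, cand_desc)
    for n, cn in zip(neighbors, cand_neighbors):
        cost += euclid(n, cn)
    return cost

def best_match(desc, neighbors, candidates):
    """candidates: list of (cand_desc, cand_neighbors)."""
    return min(range(len(candidates)),
               key=lambda k: neighborhood_cost(desc, neighbors, *candidates[k]))

desc = [1.0, 0.0]
neigh = [[0.9, 0.1], [1.1, 0.0]]
cands = [([1.0, 0.1], [[0.0, 0.0], [5.0, 5.0]]),   # similar point, alien neighborhood
         ([1.0, 0.1], [[0.9, 0.0], [1.1, 0.1]])]   # similar point and neighborhood
print(best_match(desc, neigh, cands))  # 1
```

Both candidates are equally close as single points; the neighborhood term is what disambiguates them, which matches the stated benefit of the constraint on weakly textured surfaces.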


2019 ◽  
Vol 79 (23-24) ◽  
pp. 16421-16439
Author(s):  
Fengquan Zhang ◽  
Yahui Gao ◽  
Liuqing Xu

2020 ◽  
Vol 57 (10) ◽  
pp. 101509
Author(s):  
吴斌 Wu Bin ◽  
王旭日 Wang Xuri

2013 ◽  
Vol 401-403 ◽  
pp. 1341-1346
Author(s):  
Guang Shuai Liu ◽  
Bai Lin Li

By considering the symmetry constraint characteristics of mechanical product contours, an automatic identification method for two-dimensional symmetrical contours based on feature matching is presented in this paper. First, feature points are extracted based on partitioning the contour point cloud data, and, using an offset method, the different distribution rules of axis-symmetrical and rotation-symmetrical images for judging the type of symmetry are studied. The feature description parameters of the symmetrical contour are calculated by a rotational inertia method and a periodic method, and serve as the parameters for solving the overall constraint optimization of the contour. Examples show that the proposed method can effectively identify symmetrical contours and their types, and accurately extract the symmetrical constraint features.
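The connection between rotational inertia and symmetry can be illustrated as follows: for an axis-symmetric contour, the symmetry axis coincides with a principal axis of the second-moment (inertia) matrix about the centroid. This is a generic illustration of that fact, not the paper's method, and the sample contour is made up.

```python
# Sketch: principal axis of a planar point set via its second moments
# about the centroid; for an axis-symmetric contour, this axis is a
# candidate symmetry axis.

import math

def principal_axis_angle(points):
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    ixx = sum((p[1] - cy) ** 2 for p in points)   # inertia about the x-axis
    iyy = sum((p[0] - cx) ** 2 for p in points)   # inertia about the y-axis
    ixy = -sum((p[0] - cx) * (p[1] - cy) for p in points)
    # Angle of the principal axis relative to the x-axis.
    return 0.5 * math.atan2(-2 * ixy, iyy - ixx)

# An axis-symmetric "arrow" contour whose symmetry axis is the x-axis.
contour = [(0, 0), (2, 1), (2, -1), (4, 2), (4, -2), (6, 0)]
print(round(principal_axis_angle(contour), 6))  # 0.0
```

For a contour symmetric about the x-axis, the cross moment `ixy` vanishes, so the recovered principal-axis angle is zero, i.e. the axis itself.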

