Techniques of Feature Points Matching in the Problem of UAV’s Visual Navigation

2011 ◽  
Vol 383-390 ◽  
pp. 5193-5199 ◽  
Author(s):  
Jian Ying Yuan ◽  
Xian Yong Liu ◽  
Zhi Qiang Qiu

In an optical measuring system with a handheld digital camera, image point matching is crucial for three-dimensional (3D) reconstruction. Traditional matching algorithms are usually based on epipolar geometry or multiple base lines: epipolar geometry alone cannot eliminate mistaken matches, while multi-baseline methods lose many valid matches. In this paper, a robust algorithm is presented to eliminate mistakenly matched feature points in the process of 3D reconstruction from multiple images. The algorithm includes three steps: (1) pre-match the feature points using epipolar-geometry and image-topology constraints; (2) eliminate mistaken matches by the principle of triangulation across multiple images; (3) refine the camera external parameters by bundle adjustment. After the external parameters of every image have been refined, steps (1) to (3) are repeated until all feature points are matched. Comparative experiments with real image data show that mistaken matches are effectively eliminated and almost no valid matches are lost, outperforming traditional matching algorithms.
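The epipolar pre-matching of step (1) can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the fundamental matrix `F`, the candidate point pairs, and the pixel tolerance `tol` are all assumed inputs, and the topology, triangulation, and bundle-adjustment steps are omitted.

```python
import numpy as np

def epipolar_distance(F, pts1, pts2):
    """Symmetric point-to-epipolar-line distance for each candidate pair.

    pts1, pts2: (N, 2) pixel coordinates; F: 3x3 fundamental matrix.
    """
    pts1_h = np.hstack([pts1, np.ones((len(pts1), 1))])   # homogeneous coords
    pts2_h = np.hstack([pts2, np.ones((len(pts2), 1))])
    lines2 = pts1_h @ F.T    # epipolar lines in image 2, one per row
    lines1 = pts2_h @ F      # epipolar lines in image 1
    d2 = np.abs(np.sum(pts2_h * lines2, axis=1)) / np.linalg.norm(lines2[:, :2], axis=1)
    d1 = np.abs(np.sum(pts1_h * lines1, axis=1)) / np.linalg.norm(lines1[:, :2], axis=1)
    return 0.5 * (d1 + d2)

def prematch_by_epipolar(F, pts1, pts2, tol=1.5):
    """Step (1) sketch: keep candidate pairs whose epipolar residual is below tol pixels."""
    return epipolar_distance(F, np.asarray(pts1, float), np.asarray(pts2, float)) < tol
```

A pair surviving this filter may still be a mistaken match lying near the epipolar line, which is exactly why the abstract's step (2) re-checks the survivors by triangulation across more than two views.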


2021 ◽  
pp. 335-344
Author(s):  
Yusong Chen ◽  
Changxing Geng ◽  
Yong Wang ◽  
Guofeng Zhu ◽  
Renyuan Shen

For extracting the paddy rice seedling row centerline, this study proposed a method based on the Fast-SCNN (Fast Segmentation Convolutional Neural Network) semantic segmentation network. By training the Fast-SCNN network, the optimal model was selected to separate the seedlings from the image. After pre-processing the original images, feature points were extracted with the FAST (Features from Accelerated Segment Test) corner detection algorithm. All outer contours of the segmentation results were extracted, and the feature points were classified according to those contours. For each class of points, a Hough transform based on known points was used to fit the seedling row centerline. Experiments verified that the algorithm is highly robust throughout each period within three weeks after transplanting. On 1280×1024-pixel PNG color images, the accuracy of the algorithm is 95.9% and the average processing time is 158 ms per frame, which meets the real-time requirement of visual navigation in paddy fields.


Author(s):  
Wenhao Wang ◽  
Mingxin Jiang ◽  
Yunyang Yan ◽  
Xiaobing Chen ◽  
Wendong Zhao

2018 ◽  
Vol 8 (11) ◽  
pp. 2268 ◽  
Author(s):  
Jianfeng Li ◽  
Xiaowei Wang ◽  
Shigang Li

SLAM (Simultaneous Localization and Mapping) depends on the surrounding scene, and a full-view image provides more information to SLAM than a limited-view image. In this paper, we present a spherical-model-based SLAM on full-view images for indoor environments. Unlike traditional limited-view images, a full-view image follows its own specific, nonlinear imaging principle and is accompanied by distortion, so specific techniques are needed to process it. In the proposed method, we first use a spherical model to represent the full-view image. The algorithms are then implemented on the spherical model, including feature point extraction, feature point matching, 2D–3D association, and projection and back-projection of scene points. Thanks to the full field of view, experiments show that the proposed method effectively handles sparse-feature or partially featureless environments and achieves high accuracy in localization and mapping. A further experiment demonstrates that accuracy is affected by the size of the field of view.
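The projection and back-projection of scene points on a spherical model can be sketched for an equirectangular full-view image. The image size and the longitude/latitude coordinate convention below are assumptions for illustration, not necessarily the paper's parameterization.

```python
import numpy as np

def pixel_to_sphere(u, v, width, height):
    """Project an equirectangular pixel onto a unit bearing vector."""
    lon = (u / width) * 2.0 * np.pi - np.pi       # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (v / height) * np.pi      # latitude in [-pi/2, pi/2]
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def sphere_to_pixel(p, width, height):
    """Back-project a 3D direction onto equirectangular pixel coordinates."""
    p = np.asarray(p, float) / np.linalg.norm(p)  # normalize to the unit sphere
    lon = np.arctan2(p[1], p[0])
    lat = np.arcsin(np.clip(p[2], -1.0, 1.0))
    u = (lon + np.pi) / (2.0 * np.pi) * width
    v = (np.pi / 2.0 - lat) / np.pi * height
    return u, v
```

Working with unit bearing vectors rather than raw pixels is what lets the downstream steps (matching, 2D-3D association, triangulation) ignore the nonlinear distortion of the full-view image.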

