A High-accuracy and Semi-dense Feature-based VSLAM System

2021 ◽  
Vol 9 (4) ◽  
pp. 124-130
Author(s):  
Zhen Li ◽  
Yuren Du ◽  
Miaomiao Zhu ◽  
Shi Zhou ◽  
Seiichi Serikawa ◽  
...  
2020 ◽  
Author(s):  
Harith Al-Sahaf ◽  
A Song ◽  
K Neshatian ◽  
Mengjie Zhang

Image classification is a complex but important task, especially in areas of machine vision and image analysis such as remote sensing and face recognition. One of the challenges in image classification is finding an optimal set of features for a particular task, because the choice of features has a direct impact on classification performance. However, the goodness of a feature is highly problem dependent, and domain knowledge is often required. To address these issues we introduce a Genetic Programming (GP) based image classification method, Two-Tier GP, which operates directly on raw pixels rather than on features. The first tier of a classifier automatically defines features from the raw image input, while the second tier makes the classification decision. Compared to conventional feature-based image classification methods, Two-Tier GP achieved better accuracies on a range of different tasks. Furthermore, by using the features defined by the first tier of these Two-Tier GP classifiers, conventional classification methods obtained higher accuracies than they did on manually designed features. Analysis of evolved Two-Tier image classifiers shows that genuine features are captured in the programs and that the mechanism behind the high accuracy can be revealed. The Two-Tier GP method has clear advantages in image classification, such as high accuracy, good interpretability, and the removal of the explicit feature extraction process. © 2012 IEEE.
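The two-tier structure described above can be sketched with a toy example. Everything here is illustrative (the region choices, the arithmetic expression, the threshold are assumptions, not the authors' implementation); a real Two-Tier GP system evolves both tiers rather than hand-coding them:

```python
import numpy as np

def region_mean(img, x, y, w, h):
    """Tier-1 feature: mean intensity of a rectangle in the raw image."""
    return img[y:y + h, x:x + w].mean()

def region_std(img, x, y, w, h):
    """Tier-1 feature: intensity standard deviation of a rectangle."""
    return img[y:y + h, x:x + w].std()

def classify(img):
    """Tier 2: one hand-written example of an evolved expression over
    tier-1 features; GP would evolve the regions and this arithmetic."""
    f1 = region_mean(img, 0, 0, 4, 4)   # top-left block
    f2 = region_mean(img, 4, 4, 4, 4)   # bottom-right block
    f3 = region_std(img, 0, 0, 8, 8)    # global contrast
    score = (f1 - f2) + 0.5 * f3        # evolved arithmetic combination
    return 1 if score > 0 else 0        # sign of the score decides the class

# Toy usage: two 8x8 patterns that differ in where the bright block sits.
a = np.zeros((8, 8)); a[:4, :4] = 1.0   # bright top-left     -> class 1
b = np.zeros((8, 8)); b[4:, 4:] = 1.0   # bright bottom-right -> class 0
print(classify(a), classify(b))         # → 1 0
```

The point of the sketch is the division of labor: tier 1 touches raw pixels only, so no explicit feature-extraction stage is needed before the classifier.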


Radiotekhnika ◽  
2020 ◽  
pp. 191-196
Author(s):  
V.A. Dushepa ◽  
Y.A. Tiahnyriadno ◽  
I.V. Baryshev

The paper compares two image registration algorithms: classical normalized correlation (a representative of intensity-based algorithms) and a SIFT-based algorithm (feature-based registration). A gradient-based subpixel correction algorithm was also applied to the normalized correlation. We compared their performance on real images (including a terrain map) under modeled artificial distortions, studying the accuracy of determining the position (shift) of one image relative to another in the presence of rotation and scale changes. The experiment was carried out with a simulation model written in Python using the OpenCV computer vision library. The results show that, in the absence of rotation and scale changes between the registered images, normalized correlation provides a slightly smaller root-mean-square error. However, when even small such distortions are present, for example a rotation of more than 2 degrees or a scale change of more than 2 percent, the probability of correct registration for normalized correlation drops sharply. The advantages of normalized correlation are an almost 5 times higher speed and its applicability to small fragments (50x50 pixels or less), where the SIFT algorithm struggles to extract a sufficient number of keypoints. It was also shown that a two-stage algorithm (SIFT-based registration at the first stage, then optimization with normalized correlation as the criterion at the second) yields both high accuracy and robustness to rotation and scale changes, at the cost of high computational effort.
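The intensity-based half of this comparison can be sketched in plain NumPy: an exhaustive integer-shift search that maximizes zero-mean normalized cross-correlation. This is a minimal sketch only; the paper's implementation uses OpenCV and adds gradient-based subpixel correction, which is omitted here:

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def register_shift(image, template, search=15):
    """Exhaustive integer-shift search: slide the template over the image
    and return the (dy, dx) offset that maximizes the NCC score."""
    th, tw = template.shape
    best, best_shift = -np.inf, (0, 0)
    for dy in range(search + 1):
        for dx in range(search + 1):
            patch = image[dy:dy + th, dx:dx + tw]
            if patch.shape != template.shape:
                continue  # template would run off the image edge
            s = ncc(patch, template)
            if s > best:
                best, best_shift = s, (dy, dx)
    return best_shift

rng = np.random.default_rng(0)
scene = rng.random((64, 64))
tmpl = scene[10:40, 12:42].copy()      # template cut out at offset (10, 12)
print(register_shift(scene, tmpl))     # → (10, 12)
```

Because the template matches the scene exactly at one offset, the NCC score there is 1.0, its theoretical maximum; under rotation or scale change no offset reaches a high score, which is why pure correlation degrades as the abstract reports.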


2016 ◽  
Vol 9 (3) ◽  
Author(s):  
Pieter Blignaut

It is argued that the polynomial expressions normally used for remote, video-based, low-cost eye tracking systems are not always ideal for accommodating individual differences in eye cleft, position of the eye in the socket, corneal bulge, astigmatism, etc. A procedure is proposed to identify the set of polynomial expressions that provides the best possible accuracy for a specific individual. It is also proposed that regression coefficients be recalculated in real time, based on a subset of calibration points in the region of the current gaze, and that a real-time correction be applied based on the offsets from calibration targets close to the estimated point of regard. It was found that, if no correction is applied, the choice of polynomial is critically important to reach an accuracy that is just acceptable. Previously identified polynomial sets were confirmed to provide good results in the absence of any correction procedure. By applying real-time correction, the accuracy of any given polynomial improves while the choice of polynomial becomes less critical. Identifying the best polynomial set per participant in combination with the aforementioned correction techniques led to an average error of 0.32° (sd = 0.10°) over 134 participant recordings. The proposed improvements could lead to low-cost systems that are accurate and fast enough for reading research and other studies that require high accuracy at frame rates in excess of 200 Hz.
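The core calibration step can be sketched as a least-squares polynomial regression from eye-feature coordinates to screen coordinates. This is a minimal sketch assuming one hypothetical second-order basis and synthetic calibration data; the paper searches over many candidate polynomial sets per participant and adds real-time local recalculation, both omitted here:

```python
import numpy as np

def design(ex, ey):
    """One candidate second-order basis: 1, x, y, xy, x^2, y^2."""
    return np.column_stack([np.ones_like(ex), ex, ey,
                            ex * ey, ex ** 2, ey ** 2])

def fit(ex, ey, sx, sy):
    """Least-squares regression coefficients mapping eye features to
    screen coordinates, one coefficient vector per screen axis."""
    A = design(ex, ey)
    cx, *_ = np.linalg.lstsq(A, sx, rcond=None)
    cy, *_ = np.linalg.lstsq(A, sy, rcond=None)
    return cx, cy

def predict(cx, cy, ex, ey):
    """Map a new eye-feature sample to an estimated point of regard."""
    A = design(np.atleast_1d(float(ex)), np.atleast_1d(float(ey)))
    return A @ cx, A @ cy

# Synthetic calibration: a 5x5 grid with a mild nonlinear distortion.
gx, gy = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
ex, ey = gx.ravel(), gy.ravel()
sx = 800 * (ex + 0.05 * ex ** 2)       # distortion lies in the basis,
sy = 600 * (ey + 0.05 * ex * ey)       # so the fit recovers it exactly
cx, cy = fit(ex, ey, sx, sy)
px, py = predict(cx, cy, 0.5, -0.5)
print(px[0], py[0])                    # ≈ 410.0, -307.5
```

The proposal in the abstract amounts to refitting these coefficients on only the calibration points near the current gaze, so the chosen basis matters less because each local fit has fewer individual differences to absorb.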


Sensors ◽  
2020 ◽  
Vol 20 (5) ◽  
pp. 1511 ◽  
Author(s):  
Quanpan Liu ◽  
Zhengjie Wang ◽  
Huan Wang

In practical applications, achieving a good balance between high accuracy and computational efficiency is a main challenge for simultaneous localization and mapping (SLAM). To address this challenge, we propose SD-VIS, a novel fast and accurate semi-direct visual-inertial SLAM framework that can estimate camera motion and the structure of sparse surrounding scenes. In the initialization procedure, we align the pre-integrated IMU measurements with the visual images and calibrate the metric scale, initial velocity, gravity vector, and gyroscope bias using multiple-view geometry (MVG) theory on top of the feature-based method. At the front-end, keyframes are tracked by the feature-based method and used for back-end optimization and loop closure detection, while non-keyframes are fast-tracked by the direct method. This strategy gives the system both the superior real-time performance of the direct method and the high accuracy and loop-closure detection ability of the feature-based method. At the back-end, we propose a sliding-window-based tightly-coupled optimization framework, which obtains more accurate state estimates by minimizing the visual and IMU measurement errors. To limit the computational complexity, we adopt a marginalization strategy to fix the number of keyframes in the sliding window. Experimental evaluation on the EuRoC dataset demonstrates the feasibility and superior real-time performance of SD-VIS. Compared with state-of-the-art SLAM systems, we achieve a better balance between accuracy and speed.
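The front-end dispatch and fixed-size sliding window described above can be sketched as follows. The thresholds, names, and the drop-oldest window are assumptions for illustration, not SD-VIS's actual parameters; real marginalization folds the removed keyframe's states into a prior via the Schur complement rather than simply discarding them:

```python
from collections import deque

WINDOW_SIZE = 10          # keyframes kept for back-end optimization
PARALLAX_THRESH = 20.0    # pixels of mean parallax since the last keyframe
MIN_TRACKED = 50          # fewer tracked features also triggers a keyframe

# maxlen makes the deque drop its oldest entry when full; real
# marginalization would instead fold that keyframe into a prior.
window = deque(maxlen=WINDOW_SIZE)

def is_keyframe(mean_parallax, n_tracked):
    """Keyframe test: large motion or feature loss since the last keyframe."""
    return mean_parallax > PARALLAX_THRESH or n_tracked < MIN_TRACKED

def process_frame(frame_id, mean_parallax, n_tracked):
    """Route a frame: keyframes enter the back-end window and are tracked
    by the feature-based method; all other frames are fast-tracked by the
    direct method and never enter the optimization window."""
    if is_keyframe(mean_parallax, n_tracked):
        window.append(frame_id)
        return "keyframe"
    return "direct-tracked"

print(process_frame(0, 25.0, 100))   # large parallax -> keyframe
print(process_frame(1, 5.0, 120))    # small motion   -> direct-tracked
print(process_frame(2, 1.0, 30))     # feature loss   -> keyframe
```

Because only frames routed to the window incur the cost of feature extraction and bundle adjustment, the split keeps per-frame cost near that of a direct method while the bounded window keeps back-end optimization constant-time.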


2008 ◽  
Vol 19 (9) ◽  
pp. 2293-2301 ◽  
Author(s):  
Gong-Jian WEN ◽  
Jin-Jian LÜ ◽  
Ji-Yang WANG

Author(s):  
M. Nishigaki ◽  
S. Katagiri ◽  
H. Kimura ◽  
B. Tadano

The high-voltage electron microscope has many advantageous features compared with the ordinary electron microscope: higher electron penetration, low chromatic aberration, high accuracy of selected-area diffraction, and so on. It has thus become an indispensable instrument for metallurgical, polymer, and biological specimen studies. Today the instrument is applied not only to basic research but also to routine surveys in various fields. Particularly for the latter purpose, the performance, maintenance, and reliability of the microscope should be the same as those of commercial instruments. The authors completed a 500 kV electron microscope in 1964 and a 1,000 kV one in 1966 with these points in mind. The construction of our 1,000 kV electron microscope is described below.

