Unique 4-DOF Relative Pose Estimation with Six Distances for UWB/V-SLAM-Based Devices

Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4366 ◽  
Author(s):  
Francisco Molina Martel ◽  
Juri Sidorenko ◽  
Christoph Bodensteiner ◽  
Michael Arens ◽  
Urs Hugentobler

In this work we introduce a relative localization method that estimates the coordinate frame transformation between two devices based on distance measurements. We present a linear algorithm that calculates the relative pose in 2D or 3D with four degrees of freedom (4-DOF). This algorithm needs a minimum of five or six distance measurements, respectively, to estimate the relative pose uniquely. We use the linear algorithm in conjunction with outlier detection algorithms and as an initial estimate for iterative least-squares refinement. The proposed method outperforms other related linear methods in terms of the number of distance measurements needed and in terms of accuracy. In comparison with a related linear algorithm in 2D, we reduce the translation error by 10%. In contrast to the more general 6-DOF linear algorithm, our 4-DOF method reduces the minimum number of distances needed from ten to six and the rotation error by a factor of four at the standard deviation of our ultra-wideband (UWB) transponders. When using the same number of measurements, the orientation and translation errors are reduced by approximately a factor of ten. We validate our method with simulations and an experimental setup in which we integrate UWB technology into simultaneous localization and mapping (SLAM)-based devices. The presented relative pose estimation method is intended for augmented reality applications involving cooperative localization with head-mounted displays. We foresee practical use cases of this method in cooperative SLAM, where map merging is performed proactively.
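As a rough illustration of the iterative least-squares refinement step described above, the sketch below refines a 4-DOF pose from six simulated anchor-to-anchor distances with SciPy. This is not the paper's linear algorithm: the anchor layouts, the noise-free distances, and the coarse initial guess (standing in for the linear solution) are all illustrative assumptions.

```python
# Hypothetical sketch: iterative least-squares refinement of a 4-DOF
# (x, y, z, yaw) relative pose from six inter-device distance measurements.
import numpy as np
from scipy.optimize import least_squares

def rot_z(yaw):
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def residuals(params, pa, pb, d):
    # params = [tx, ty, tz, yaw]; map device-B anchors into frame A and
    # compare predicted anchor-to-anchor distances with the measured ones
    t, yaw = params[:3], params[3]
    pb_in_a = (rot_z(yaw) @ pb.T).T + t
    return np.linalg.norm(pa - pb_in_a, axis=1) - d

rng = np.random.default_rng(0)
pa = rng.uniform(-1.0, 1.0, (6, 3))        # anchors on device A (frame A)
pb = rng.uniform(-1.0, 1.0, (6, 3))        # anchors on device B (frame B)
true = np.array([0.5, -0.2, 0.1, 0.3])     # ground-truth [tx, ty, tz, yaw]
d = np.linalg.norm(pa - ((rot_z(true[3]) @ pb.T).T + true[:3]), axis=1)

# the linear algorithm would supply x0; here a coarse guess mimics it
x0 = true + np.array([0.10, -0.10, 0.05, 0.05])
sol = least_squares(residuals, x0, args=(pa, pb, d))
```

In practice the measured distances would carry UWB-level noise, and the outlier detection stage would run before the refinement.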

2020 ◽  
Vol 10 (24) ◽  
pp. 8876
Author(s):  
Sungkwan Kim ◽  
Inhwan Kim ◽  
Luiz Felipe Vecchietti ◽  
Dongsoo Har

Recently, pose estimation based on learning-based Visual Odometry (VO) methods, in which raw image data are provided as input to a neural network to obtain 6-Degrees-of-Freedom (DoF) information, has been intensively investigated. Despite recent advances, learning-based VO methods still perform worse than classical VO, which comprises feature-based and direct methods. In this paper, a new pose estimation method is proposed that uses a Gated Recurrent Unit (GRU) network trained on pose data acquired by an accurate sensor. The historical trajectory of the yaw angle is provided to the GRU network to predict the yaw angle at the current timestep. The proposed method can easily be combined with other VO methods to enhance the overall performance via an ensemble of predicted results. Pose estimation using the proposed method is especially advantageous in cornering sections, which often introduce estimation errors. Performance is improved by reconstructing the rotation matrix using a yaw angle that fuses the yaw angles estimated by the proposed GRU network and by other VO methods. The KITTI dataset is utilized to train the network. Averaged over the KITTI sequences, performance improves by as much as 1.426% in terms of translation error and 0.805 deg/100 m in terms of rotation error.
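A minimal sketch of the fusion idea, assuming a simple weighted circular mean for combining the GRU and VO yaw angles and a ZYX Euler convention; both are illustrative choices, not the paper's formulation.

```python
# Hypothetical sketch: fuse a GRU-predicted yaw with a VO-estimated yaw
# and rebuild the rotation matrix so the corrected yaw replaces the VO
# yaw while roll and pitch are kept from VO. Weights are illustrative.
import numpy as np

def euler_zyx_to_R(yaw, pitch, roll):
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def fuse_yaw(yaw_vo, yaw_gru, w_gru=0.5):
    # circular-mean fusion avoids wrap-around issues near +/- pi
    z = (1 - w_gru) * np.exp(1j * yaw_vo) + w_gru * np.exp(1j * yaw_gru)
    return np.angle(z)

yaw_vo, yaw_gru = 0.40, 0.30   # radians, illustrative values
pitch, roll = 0.05, -0.02      # taken unchanged from the VO estimate
yaw_fused = fuse_yaw(yaw_vo, yaw_gru)
R_corrected = euler_zyx_to_R(yaw_fused, pitch, roll)
```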


2020 ◽  
Vol 10 (16) ◽  
pp. 5442
Author(s):  
Ryo Hachiuma ◽  
Hideo Saito

This paper presents a method for estimating the six-Degrees-of-Freedom (6DoF) pose of texture-less, primitive-shaped objects from depth images. Because conventional methods for object pose estimation require rich texture or geometric features on the target objects, they are not suitable for texture-less and geometrically simple objects. To estimate the pose of a primitive-shaped object, the parameters that represent its primitive shape are estimated; however, existing methods explicitly limit the number of primitive shape types that can be handled. We employ superquadrics as a primitive shape representation that can express various types of primitive shapes with only a few parameters. To estimate the superquadric parameters of a primitive-shaped object, its point cloud must be segmented from the depth image. Parameter estimation is known to be sensitive to outliers, which are caused by mis-segmentation of the depth image. Therefore, we propose a novel estimation method for superquadric parameters that is robust to outliers. For the experiments, we constructed a dataset in which a person grasps and moves primitive-shaped objects. The experimental results show that our estimation method outperformed three conventional methods and the baseline method.
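A minimal sketch of the underlying representation: the standard superquadric inside-outside function, with a Huber-style loss as a stand-in for the paper's robust estimator. Parameter names `a1..a3`, `e1`, `e2` follow the common convention (not necessarily the paper's notation), and the object is assumed to be in canonical pose.

```python
# Hypothetical sketch: superquadric inside-outside function F (F == 1 on
# the surface, < 1 inside, > 1 outside) plus a robust fitting residual
# that down-weights outlier points from mis-segmented depth images.
import numpy as np

def inside_outside(p, a1, a2, a3, e1, e2):
    x, y, z = np.abs(p[:, 0]), np.abs(p[:, 1]), np.abs(p[:, 2])
    xy = (x / a1) ** (2 / e2) + (y / a2) ** (2 / e2)
    return xy ** (e2 / e1) + (z / a3) ** (2 / e1)

def robust_residual(F, delta=0.1):
    # Huber loss on (F - 1): quadratic near the surface, linear for outliers
    r = F - 1.0
    quad = np.abs(r) <= delta
    return np.where(quad, 0.5 * r ** 2, delta * (np.abs(r) - 0.5 * delta))

# sanity check: points on a unit sphere (a1=a2=a3=1, e1=e2=1) give F == 1
sphere = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0],
                   [0.5, 0.5, 0.7071067811865476]])
F = inside_outside(sphere, 1.0, 1.0, 1.0, 1.0, 1.0)
```

Fitting would minimize the summed robust residual over the segmented point cloud with respect to the five shape parameters (plus pose).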


Sensors ◽  
2019 ◽  
Vol 19 (24) ◽  
pp. 5428 ◽  
Author(s):  
Yibin Wu ◽  
Xiaoji Niu ◽  
Junwei Du ◽  
Le Chang ◽  
Hailiang Tang ◽  
...  

The fully autonomous operation of multirotor unmanned air vehicles (UAVs) in many applications requires support for precision landing. Onboard cameras and fiducial markers have been widely used for this critical phase due to their low cost and high effectiveness. This paper proposes a six-degrees-of-freedom (DoF) pose estimation solution for UAV landing based on an artificial marker and a micro-electromechanical system (MEMS) inertial measurement unit (IMU). The position and orientation of the landing marker are measured in advance. The absolute position and heading of the UAV are estimated by detecting the marker and extracting corner points with the onboard monocular camera. To achieve continuous and reliable positioning when the marker is occasionally shadowed, IMU data are fused by an extended Kalman filter (EKF). The error terms of the IMU sensor are modeled and estimated. Field experiments show that the positioning accuracy of the proposed system is at the centimeter level, and the heading error is less than 0.1 degrees. Compared to the marker-only approach, the roll and pitch angle errors decreased by 33% and 54% on average. Within five seconds of vision outage, the average drifts of the horizontal and vertical position were 0.41 and 0.09 m, respectively.
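A minimal single-axis EKF sketch of the fusion idea, with IMU acceleration driving the prediction step and the marker-based position fix as the update. The state, noise values, and measurement model are illustrative only, not the paper's full 6-DoF filter with IMU error-term estimation.

```python
# Hypothetical sketch: constant-velocity Kalman filter along one axis.
# IMU acceleration propagates the state; marker detections correct it.
import numpy as np

dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
B = np.array([0.5 * dt ** 2, dt])       # acceleration input mapping
Q = np.diag([1e-6, 1e-4])               # process noise (illustrative)
H = np.array([[1.0, 0.0]])              # marker measures position only
R = np.array([[1e-4]])                  # ~1 cm measurement noise

x = np.zeros(2)                         # initial state estimate
P = np.eye(2)                           # initial covariance
accel, z = 0.0, 0.02                    # IMU input, marker fix (meters)

for _ in range(100):
    # predict with the IMU, then update with the marker detection
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
```

During a vision outage, only the prediction step would run, which is what produces the bounded short-term drift reported above.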


2021 ◽  
Vol 41 (5) ◽  
pp. 0515001
Author(s):  
田苗 Tian Miao ◽  
关棒磊 Guan Banglei ◽  
孙放 Sun Fang ◽  
苑云 Yuan Yun ◽  
于起峰 Yu Qifeng

2018 ◽  
Vol 47 (6) ◽  
pp. 612002
Author(s):  
薛俊诗 XUE Jun-shi ◽  
舒奇泉 SHU Qi-quan ◽  
郭宁博 GUO Ning-bo

2020 ◽  
Vol 124 (1279) ◽  
pp. 1281-1300
Author(s):  
O. Knuuttila ◽  
A. Kestilä ◽  
E. Kallio

Abstract: The need for autonomous location estimation in the form of optical navigation is an essential requirement for forthcoming deep space missions. While crater-based navigation may work well for larger bodies littered with craters, small sub-kilometer bodies do not necessarily have them. We have developed a new pose estimation method for absolute navigation based on photometric local feature extraction techniques, making it suitable for missions that cannot rely on craters. The algorithm can be used by a navigation filter in conjunction with relative pose estimation, such as visual odometry, for additional robustness and accuracy. To estimate the position and orientation of the spacecraft in the asteroid-fixed coordinate frame, it uses navigation camera images in combination with other readily available information, such as the orientation relative to the stars and the current time, for an initial estimate of the asteroid rotation state. The algorithm is evaluated with different feature extractors using both Monte Carlo simulations and actual images taken by the Rosetta spacecraft orbiting comet 67P/Churyumov–Gerasimenko. Our analysis, comparing four feature extraction methods (AKAZE, ORB, SIFT, SURF), showed that AKAZE is the most promising in terms of stability and accuracy.
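A minimal sketch of how the current time can seed an initial rotation-state estimate, assuming a uniform spin about a known pole. The spin period, pole direction, and epoch below are illustrative values, not mission parameters.

```python
# Hypothetical sketch: initial estimate of the asteroid-fixed frame at
# time t, given a known spin pole and a roughly known spin period.
import numpy as np

def rot_axis_angle(axis, angle):
    # Rodrigues' rotation formula for a rotation about a unit axis
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

period_s = 12.4 * 3600.0            # illustrative spin period (~67P-like)
omega = 2.0 * np.pi / period_s      # spin rate (rad/s)
pole = np.array([0.0, 0.0, 1.0])    # assumed pole in the inertial frame
t, t0 = 5000.0, 0.0                 # current time and rotation epoch (s)

# rotation from asteroid-fixed to inertial coordinates at time t
R_ast_to_inertial = rot_axis_angle(pole, omega * (t - t0))
```

Combined with the star-tracker attitude, this gives the coarse camera-to-asteroid orientation from which local feature matching can refine the absolute pose.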


2020 ◽  
pp. 1-1
Author(s):  
Hiroaki Murakami ◽  
Takumi Suzaki ◽  
Masanari Nakamura ◽  
Hiromichi Hashizume ◽  
Masanori Sugimoto

2019 ◽  
Vol 40 (4) ◽  
pp. 535-541
Author(s):  
WANG Jun ◽  
XU Xiaofeng ◽  
DONG Mingli ◽  
SUN Peng ◽  
CHEN Min
