Pose Estimation Utilizing a Gated Recurrent Unit Network for Visual Localization

2020 ◽  
Vol 10 (24) ◽  
pp. 8876
Author(s):  
Sungkwan Kim ◽  
Inhwan Kim ◽  
Luiz Felipe Vecchietti ◽  
Dongsoo Har

Recently, pose estimation based on learning-based Visual Odometry (VO), in which raw image data are provided as the input of a neural network to obtain 6 Degrees of Freedom (DoF) pose information, has been intensively investigated. Despite its recent advances, learning-based VO still performs worse than classical VO, which comprises feature-based and direct methods. In this paper, a new pose estimation method is proposed that uses a Gated Recurrent Unit (GRU) network trained on pose data acquired by an accurate sensor. The historical trajectory data of the yaw angle are provided to the GRU network to predict the yaw angle at the current timestep. The proposed method can be easily combined with other VO methods to enhance the overall performance via an ensemble of predicted results. Pose estimation using the proposed method is especially advantageous in cornering sections, which often introduce estimation errors. The performance is improved by reconstructing the rotation matrix using a yaw angle obtained by fusing the yaw angles estimated by the proposed GRU network and other VO methods. The KITTI dataset is utilized to train the network. On average over the KITTI sequences, performance is improved by as much as 1.426% in terms of translation error and 0.805 deg/100 m in terms of rotation error.
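
As a rough illustration of the idea, the sketch below runs a yaw-angle history through a minimal GRU cell in plain Python. The weights are random placeholders (in the paper's setting they would be trained on sensor-derived yaw sequences), and the hidden size, linear read-out, and input values are illustrative assumptions, not the paper's network:

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyGRU:
    """Minimal single-layer GRU cell for scalar inputs (hidden size H).
    Weights are random placeholders; in the paper's setting they would be
    trained on yaw sequences from an accurate reference sensor."""
    def __init__(self, hidden=4, seed=0):
        rng = random.Random(seed)
        self.H = hidden
        w = lambda: [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
        # input weights for the update/reset/candidate gates
        self.Wz, self.Wr, self.Wh = w(), w(), w()
        # recurrent weights for the same three gates
        self.Uz = [w() for _ in range(hidden)]
        self.Ur = [w() for _ in range(hidden)]
        self.Uh = [w() for _ in range(hidden)]
        self.Wo = w()  # linear read-out: hidden state -> predicted yaw

    def step(self, x, h):
        H = self.H
        z = [sigmoid(self.Wz[i]*x + sum(self.Uz[i][j]*h[j] for j in range(H)))
             for i in range(H)]                      # update gate
        r = [sigmoid(self.Wr[i]*x + sum(self.Ur[i][j]*h[j] for j in range(H)))
             for i in range(H)]                      # reset gate
        hc = [math.tanh(self.Wh[i]*x + sum(self.Uh[i][j]*r[j]*h[j] for j in range(H)))
              for i in range(H)]                     # candidate state
        return [(1 - z[i])*h[i] + z[i]*hc[i] for i in range(H)]

    def predict(self, yaw_history):
        h = [0.0]*self.H
        for x in yaw_history:
            h = self.step(x, h)
        return sum(wo*hi for wo, hi in zip(self.Wo, h))

yaw_next = TinyGRU().predict([0.00, 0.02, 0.05, 0.09])  # hypothetical yaw history (rad)
```

With trained weights, the read-out would be the yaw estimate that gets fused with the yaw from the other VO methods before the rotation matrix is reconstructed.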

Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4366 ◽  
Author(s):  
Francisco Molina Martel ◽  
Juri Sidorenko ◽  
Christoph Bodensteiner ◽  
Michael Arens ◽  
Urs Hugentobler

In this work, we introduce a relative localization method that estimates the coordinate frame transformation between two devices based on distance measurements. We present a linear algorithm that calculates the relative pose in 2D or 3D with four degrees of freedom (4-DOF). This algorithm needs a minimum of five or six distance measurements, respectively, to estimate the relative pose uniquely. We use the linear algorithm in conjunction with outlier detection algorithms and as a good initial estimate for iterative least-squares refinement. The proposed method outperforms other related linear methods in terms of the number of distance measurements needed and in terms of accuracy. In comparison with a related linear algorithm in 2D, we reduce the translation error by 10%. In contrast to the more general 6-DOF linear algorithm, our 4-DOF method reduces the minimum number of distance measurements needed from ten to six and the rotation error by a factor of four at the standard deviation of our ultra-wideband (UWB) transponders. When using the same number of measurements, the orientation and translation errors are reduced by approximately a factor of ten. We validate our method with simulations and an experimental setup in which we integrate UWB technology into simultaneous localization and mapping (SLAM)-based devices. The presented relative pose estimation method is intended for augmented reality applications involving cooperative localization with head-mounted displays. We foresee practical use cases of this method in cooperative SLAM, where map merging can be performed proactively.
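
The iterative least-squares refinement stage can be sketched as a Gauss-Newton solver for a 2D, 3-parameter relative pose (yaw plus translation) from inter-device ranges. The anchor layouts, identity initialization, damping, and line search below are illustrative assumptions; the paper's closed-form linear initializer is not reproduced:

```python
import math

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        p = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f*y for x, y in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

def estimate_pose_2d(anchors_a, anchors_b, dists, iters=50):
    """Estimate a 2D relative pose (theta, tx, ty) between two devices from
    ranges dists[i][j] measured between anchor i on device A and anchor j on
    device B, with anchor layouts known in each device's own frame."""
    def residuals(th, tx, ty):
        c, s = math.cos(th), math.sin(th)
        out = []
        for i, (ax, ay) in enumerate(anchors_a):
            for j, (bx, by) in enumerate(anchors_b):
                # anchor j of device B expressed in device A's frame
                ex, ey = c*bx - s*by + tx - ax, s*bx + c*by + ty - ay
                out.append((ex, ey, (bx, by), math.hypot(ex, ey) - dists[i][j]))
        return out

    th, tx, ty = 0.0, 0.0, 0.0                  # identity initial guess
    for _ in range(iters):
        c, s = math.cos(th), math.sin(th)
        res = residuals(th, tx, ty)
        cost = sum(r*r for *_, r in res)
        JTJ = [[1e-9 * (i == j) for j in range(3)] for i in range(3)]
        JTr = [0.0, 0.0, 0.0]
        for ex, ey, (bx, by), r in res:
            n = math.hypot(ex, ey)
            if n < 1e-9:
                continue
            # Jacobian of the range with respect to (theta, tx, ty)
            J = ((ex*(-s*bx - c*by) + ey*(c*bx - s*by)) / n, ex/n, ey/n)
            for a in range(3):
                JTr[a] += J[a] * r
                for b in range(3):
                    JTJ[a][b] += J[a] * J[b]
        d = solve3(JTJ, [-v for v in JTr])
        step = 1.0                              # backtracking keeps the cost decreasing
        while step > 1e-8:
            cand = (th + step*d[0], tx + step*d[1], ty + step*d[2])
            if sum(r*r for *_, r in residuals(*cand)) <= cost:
                th, tx, ty = cand
                break
            step *= 0.5
    return th, tx, ty
```

In the paper's pipeline, the linear 4-DOF solution would replace the identity initial guess, which is what makes the refinement reliable with few measurements.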


2021 ◽  
Vol 2021 ◽  
pp. 1-18
Author(s):  
Mingrui Luo ◽  
En Li ◽  
Rui Guo ◽  
Jiaxin Liu ◽  
Zize Liang

Redundant manipulators are suitable for working in narrow and complex environments due to their flexibility. However, a large number of joints and long, slender links make it hard to obtain the accurate end-effector pose of a redundant manipulator directly from the encoders. In this paper, a pose estimation method is proposed that fuses vision sensors, inertial sensors, and encoders. First, the raw data are corrected and enhanced according to the complementary characteristics of each measurement unit. Then, an improved Kalman filter (KF) algorithm is adopted for data fusion, establishing a nonlinear motion-prediction model of the end-effector and a synchronization update model for the multirate sensors. Finally, a radial basis function (RBF) neural network is used to adaptively adjust the fusion parameters. Experiments verify that the proposed method achieves better performance in terms of estimation error and update frequency than the original extended Kalman filter (EKF) and unscented Kalman filter (UKF) algorithms, especially in complex environments.
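
A heavily simplified sketch of the Kalman-filter fusion step follows: a 1-D constant-velocity filter with position updates. The paper's nonlinear motion prediction, multirate synchronization model, and RBF-based parameter adaptation are not reproduced; the noise values and the 1 m/s test signal are illustrative assumptions:

```python
class KalmanCV:
    """1-D constant-velocity Kalman filter with state [position, velocity].
    A schematic stand-in only for the improved KF described in the abstract."""
    def __init__(self, q=1e-4, r=1e-4):
        self.x = [0.0, 0.0]                  # state estimate
        self.P = [[1.0, 0.0], [0.0, 1.0]]    # state covariance
        self.q, self.r = q, r                # process / measurement noise

    def predict(self, dt):
        # x <- F x and P <- F P F^T + Q, with F = [[1, dt], [0, 1]]
        self.x = [self.x[0] + dt*self.x[1], self.x[1]]
        P = self.P
        self.P = [[P[0][0] + dt*(P[0][1] + P[1][0]) + dt*dt*P[1][1] + self.q,
                   P[0][1] + dt*P[1][1]],
                  [P[1][0] + dt*P[1][1],
                   P[1][1] + self.q]]

    def update(self, z):
        # position measurement with H = [1, 0]
        y = z - self.x[0]                    # innovation
        S = self.P[0][0] + self.r
        k0, k1 = self.P[0][0]/S, self.P[1][0]/S
        self.x = [self.x[0] + k0*y, self.x[1] + k1*y]
        P = self.P
        self.P = [[(1 - k0)*P[0][0], (1 - k0)*P[0][1]],
                  [P[1][0] - k1*P[0][0], P[1][1] - k1*P[0][1]]]

kf = KalmanCV()
for k in range(1, 51):                       # a 1 m/s target sampled at 10 Hz
    kf.predict(0.1)
    kf.update(k*0.1)                         # noise-free position measurements
```

After the 50 updates, the filter has recovered both the position (close to 5 m) and the velocity (close to 1 m/s), even though only positions were measured; multirate fusion works the same way, calling `update` whenever any sensor's measurement arrives.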


2020 ◽  
Vol 10 (16) ◽  
pp. 5442
Author(s):  
Ryo Hachiuma ◽  
Hideo Saito

This paper presents a method for estimating the six Degrees of Freedom (6DoF) pose of texture-less, primitive-shaped objects from depth images. Because conventional object pose estimation methods require rich texture or distinctive geometric features on the target objects, they are not suitable for texture-less and geometrically simple objects. To estimate the pose of a primitive-shaped object, the parameters that represent its primitive shape are estimated instead. However, existing approaches explicitly limit the number of primitive shape types that can be estimated. We employ superquadrics as a primitive shape representation that can express various types of primitive shapes with only a few parameters. To estimate the superquadric parameters of a primitive-shaped object, its point cloud must be segmented from the depth image. Parameter estimation is known to be sensitive to outliers, which are caused by mis-segmentation of the depth image. Therefore, we propose a novel estimation method for superquadric parameters that is robust to outliers. For the experiments, we constructed a dataset in which a person grasps and moves primitive-shaped objects. The experimental results show that our estimation method outperformed three conventional methods and the baseline method.
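
The superquadric inside-outside function, together with a trimmed-residual cost, illustrates one simple way a fit can be made robust to mis-segmentation outliers. The unit-sphere sample points, the injected outlier, and the 80% trimming fraction are illustrative assumptions; the paper's robust estimator may differ:

```python
def superquadric_io(p, a=(1.0, 1.0, 1.0), e=(1.0, 1.0)):
    """Inside-outside function of a superquadric: F = 1 on the surface,
    < 1 inside, > 1 outside.  a = half-axes, e = shape exponents
    (e = (1, 1) gives a sphere; other values give boxes, cylinders, etc.)."""
    x, y, z = p
    a1, a2, a3 = a
    e1, e2 = e
    return (abs(x/a1)**(2/e2) + abs(y/a2)**(2/e2))**(e2/e1) + abs(z/a3)**(2/e1)

def trimmed_cost(points, a, e, keep=0.8):
    """Trimmed least-squares cost: sort the per-point residuals (F - 1)^2 and
    average only the smallest `keep` fraction, discarding likely outliers."""
    res = sorted((superquadric_io(p, a, e) - 1.0)**2 for p in points)
    k = max(1, int(len(res)*keep))
    return sum(res[:k]) / k

surface = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (0.6, 0.8, 0), (0, 0.6, 0.8)]
noisy = surface + [(3.0, 0.0, 0.0)]   # one mis-segmented outlier point
plain = sum((superquadric_io(p) - 1.0)**2 for p in noisy) / len(noisy)
robust = trimmed_cost(noisy, (1.0, 1.0, 1.0), (1.0, 1.0))
```

The plain mean is dominated by the single outlier, while the trimmed cost stays near zero for the true sphere parameters, which is the property a robust parameter search exploits.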


Sensors ◽  
2019 ◽  
Vol 19 (24) ◽  
pp. 5428 ◽  
Author(s):  
Yibin Wu ◽  
Xiaoji Niu ◽  
Junwei Du ◽  
Le Chang ◽  
Hailiang Tang ◽  
...  

The fully autonomous operation of multirotor unmanned aerial vehicles (UAVs) in many applications requires support for precision landing. Onboard cameras and fiducial markers have been widely used for this critical phase due to their low cost and high effectiveness. This paper proposes a six-degrees-of-freedom (DoF) pose estimation solution for UAV landing based on an artificial marker and a micro-electromechanical system (MEMS) inertial measurement unit (IMU). The position and orientation of the landing marker are measured in advance. The absolute position and heading of the UAV are estimated by detecting the marker and extracting corner points with the onboard monocular camera. To achieve continuous and reliable positioning when the marker is occasionally shadowed, IMU data are fused by an extended Kalman filter (EKF), and the error terms of the IMU sensor are modeled and estimated. Field experiments show that the positioning accuracy of the proposed system is at the centimeter level, and the heading error is less than 0.1 degrees. Compared to the marker-only approach, the roll and pitch angle errors decreased by 33% and 54% on average. Within five seconds of vision outage, the average drifts of the horizontal and vertical position were 0.41 and 0.09 m, respectively.
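
The scale of drift during a vision outage can be related to simple dead reckoning: with the marker occluded, an uncompensated accelerometer bias integrates twice into position error. The bias value and rates below are illustrative assumptions, not the paper's sensor specification:

```python
# Dead-reckoning drift during a marker occlusion: with no visual fix, an
# uncompensated accelerometer bias integrates twice into position error,
# growing roughly as 0.5 * bias * t**2.
bias = 0.03            # m/s^2, assumed residual accelerometer bias (illustrative)
dt, T = 0.01, 5.0      # 100 Hz IMU, 5 s vision outage
v = p = 0.0
for _ in range(int(T/dt)):
    v += bias*dt       # bias accumulates into a spurious velocity
    p += v*dt          # and again into position drift
# analytic value: 0.5 * 0.03 * 5**2 = 0.375 m of drift after 5 s
```

This is why the EKF explicitly models and estimates the IMU error terms while the marker is visible: the smaller the residual bias, the slower the quadratic drift once vision is lost.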


2020 ◽  
pp. 1-1
Author(s):  
Hiroaki Murakami ◽  
Takumi Suzaki ◽  
Masanari Nakamura ◽  
Hiromichi Hashizume ◽  
Masanori Sugimoto

2011 ◽  
Vol 23 (3) ◽  
pp. 400-407 ◽  
Author(s):  
Joonho Seo ◽  
Norihiro Koizumi ◽  
Takakazu Funamoto ◽  
Naohiko Sugita ◽  
...  

This paper presents a real-time pose estimation method as part of a robotic HIFU treatment system for moving volumetric targets. From the acquired biplane ultrasound (US) images, the current pose of the preoperative model is calculated by iterative segmentation and registration. Seed contours for the segmentation in each iteration are provided by the previously registered preoperative 3-D model, and the segmented boundary points then update the pose of the 3-D model. Boundary outlier removal makes the algorithm robust against partially noisy boundaries, while the use of spatial boundary points accelerates the computation so that it runs in real time. In phantom experiments, the registration accuracy for biplane US image data was evaluated, and the processing time was also investigated.
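
Each registration step of such a scheme can be illustrated with a closed-form 2D rigid alignment of matched boundary points. Correspondences and outlier removal are assumed to be handled beforehand, and the points and transform below are synthetic; this is a generic building block, not the paper's full 3-D pipeline:

```python
import math

def rigid_register_2d(src, dst):
    """Closed-form 2D rigid registration (rotation + translation) between
    matched point sets: center both sets, recover the angle from the summed
    dot and cross products, then solve for the translation."""
    n = len(src)
    mx = sum(p[0] for p in src)/n; my = sum(p[1] for p in src)/n
    qx = sum(p[0] for p in dst)/n; qy = sum(p[1] for p in dst)/n
    dot = cross = 0.0
    for (px, py), (rx, ry) in zip(src, dst):
        ax, ay = px - mx, py - my        # centered source point
        bx, by = rx - qx, ry - qy        # centered target point
        dot += ax*bx + ay*by
        cross += ax*by - ay*bx
    th = math.atan2(cross, dot)          # optimal rotation angle
    c, s = math.cos(th), math.sin(th)
    tx = qx - (c*mx - s*my)              # translation aligning the centroids
    ty = qy - (s*mx + c*my)
    return th, tx, ty

src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 2.0)]
c, s = math.cos(0.3), math.sin(0.3)
dst = [(c*x - s*y + 1.0, s*x + c*y + 2.0) for x, y in src]  # known transform
theta, tx, ty = rigid_register_2d(src, dst)
```

Iterating segment-then-register with such an alignment step is what lets the model pose track the moving target between image acquisitions.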


2021 ◽  
Vol 11 (9) ◽  
pp. 4241
Author(s):  
Jiahua Wu ◽  
Hyo Jong Lee

In bottom-up multi-person pose estimation, grouping joint candidates into the correct corresponding person instances is challenging. In this paper, a new bottom-up method, the Partitioned CenterPose (PCP) Network, is proposed to better cluster the detected joints. To achieve this goal, we propose a novel approach called Partition Pose Representation (PPR), which integrates the instance of a person and its body joints based on joint offsets. PPR leverages the center of the human body and the offsets between that center point and the positions of the body's joints to encode human poses accurately. To enhance the relationships between body joints, we divide the human body into five parts and generate a sub-PPR for each part. Based on this PPR, the PCP Network can detect people and their body joints simultaneously, then group all body joints according to joint offset. Moreover, an improved L1 loss is designed to measure joint offsets more accurately. Tests on the COCO keypoints and CrowdPose datasets show that the performance of the proposed method is on par with that of existing state-of-the-art bottom-up methods in terms of accuracy and speed.
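
The center-plus-offset grouping idea can be sketched as follows: each joint's regressed offset points back toward its person's center, so a joint is assigned to whichever detected center is nearest to its own predicted center. The coordinates and offsets below are made-up illustrative values, not network outputs:

```python
def group_joints(joint_dets, centers):
    """Assign each detected joint to the person whose detected center is
    nearest to the joint's predicted center (joint position minus its
    regressed offset).  Returns one list of joints per center."""
    people = [[] for _ in centers]
    for (jx, jy), (ox, oy) in joint_dets:
        px, py = jx - ox, jy - oy          # predicted body center for this joint
        best = min(range(len(centers)),
                   key=lambda i: (centers[i][0]-px)**2 + (centers[i][1]-py)**2)
        people[best].append((jx, jy))
    return people

centers = [(10.0, 10.0), (50.0, 40.0)]               # detected person centers
joints = [((12.0, 14.0), (2.1, 3.8)),                # offsets point back to person 0
          ((8.0, 6.0), (-1.9, -4.2)),
          ((48.0, 35.0), (-2.2, -4.9)),              # and these to person 1
          ((53.0, 44.0), (3.1, 3.9))]
groups = group_joints(joints, centers)
```

Because grouping reduces to a nearest-center lookup, the accuracy of the regressed offsets directly determines grouping quality, which is the motivation for the improved L1 loss on offsets.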


2021 ◽  
Vol 11 (7) ◽  
pp. 3158
Author(s):  
Néstor J. Jarque-Bou ◽  
Margarita Vergara ◽  
Joaquín L. Sancho-Bru

Thumb opposition is essential for grasping and involves flexion and abduction of the carpometacarpal and metacarpophalangeal joints of the thumb. The thumb's high number of degrees of freedom in a fairly small space makes in vivo recording of its kinematics a challenging task. For this reason, and because of the very limited independence of the abduction movement of the metacarpophalangeal joint, many devices do not include sensors to measure this movement, which may have important implications for the accuracy of thumb models. The aims of this work are to examine the correlations between thumb joints and to obtain an equation that allows thumb metacarpophalangeal abduction/adduction to be estimated from the other joint motions of the thumb, during the most common grasps used in activities of daily living and in free movement. The correlation analysis shows that metacarpophalangeal abduction/adduction can be expressed mainly from carpometacarpal joint movements. The resulting model presents a low estimation error (6.29°), with no significant differences between grasps. The results could benefit fields that do not typically capture this joint movement, such as virtual reality, teleoperation, 3D modeling, prostheses, and exoskeletons.
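
A regression equation of this kind can be sketched as an ordinary least-squares fit of the metacarpophalangeal abduction angle on the two carpometacarpal angles. The variable names, angle values, and coefficients below are synthetic illustrations, not the paper's fitted model:

```python
def fit_two_predictors(x1, x2, y):
    """Least-squares fit y = b1*x1 + b2*x2 via the 2x2 normal equations."""
    s11 = sum(a*a for a in x1); s22 = sum(a*a for a in x2)
    s12 = sum(a*b for a, b in zip(x1, x2))
    sy1 = sum(a*b for a, b in zip(x1, y))
    sy2 = sum(a*b for a, b in zip(x2, y))
    det = s11*s22 - s12*s12
    return (s22*sy1 - s12*sy2)/det, (s11*sy2 - s12*sy1)/det

cmc_flex = [10.0, 20.0, 30.0, 15.0, 25.0]   # hypothetical CMC flexion angles (deg)
cmc_abd  = [5.0, 8.0, 2.0, 7.0, 4.0]        # hypothetical CMC abduction angles (deg)
# synthetic MCP abduction target, generated with known coefficients 0.5 and 0.2
mcp_abd  = [0.5*f + 0.2*a for f, a in zip(cmc_flex, cmc_abd)]
b1, b2 = fit_two_predictors(cmc_flex, cmc_abd, mcp_abd)
```

A fitted equation of this shape is what lets a device without an MCP abduction sensor reconstruct that angle from the joints it does measure.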


Measurement ◽  
2022 ◽  
Vol 187 ◽  
pp. 110274
Author(s):  
Zhang Zimiao ◽  
Xu Kai ◽  
Wu Yanan ◽  
Zhang Shihai
