Pose Estimation from Uncertain Omnidirectional Image Data Using Line-Plane Correspondences

Author(s):  
Christian Gebken ◽  
Antti Tolvanen ◽  
Gerald Sommer
2016 ◽  
Vol 2016 ◽  
pp. 1-8
Author(s):  
Shanshan Wei ◽  
Zhiqiang He ◽  
Wei Xie

This paper proposes S2fM (Simplified Structure from Motion), a novel vision and inertial fusion algorithm for camera relative pose estimation. Unlike existing algorithms, S2fM estimates the rotation and translation parameters separately: it employs gyroscopes to estimate the camera rotation, which is then fused with the image data to estimate the camera translation. Our contributions are twofold. (1) Since no inertial sensor can estimate the translation parameter accurately enough, we propose a translation estimation algorithm that fuses gyroscope data with image data. (2) The S2fM algorithm is efficient and suitable for smart devices. Experimental results validate the efficiency of the proposed S2fM algorithm.
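The separation described above can be sketched in a few lines: once the rotation is fixed by integrating the gyroscope, the epipolar constraint x2ᵀ[t]×R x1 = 0 becomes linear in the translation t, which can then be recovered from image correspondences. This is a minimal numpy sketch under that standard two-view formulation; the function names and the constant-rate gyro integration are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def rotation_from_gyro(omega, dt):
    """Integrate a constant angular rate omega (rad/s) over dt seconds
    into a rotation matrix via the Rodrigues formula."""
    theta = np.linalg.norm(omega) * dt
    if theta < 1e-12:
        return np.eye(3)
    k = omega / np.linalg.norm(omega)
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K

def translation_from_rotation(R, x1, x2):
    """Given the rotation R and homogeneous image correspondences
    x1, x2 (N x 3), recover the translation direction.  Each pair gives
    x2^T [t]_x R x1 = 0, i.e. t . ((R x1) x x2) = 0, linear in t."""
    A = np.cross((R @ x1.T).T, x2)   # one coefficient row of t per pair
    _, _, Vt = np.linalg.svd(A)
    t = Vt[-1]                       # null vector of A
    return t / np.linalg.norm(t)     # translation only up to scale
```

Note that image data alone can only recover the translation direction; the metric scale must come from elsewhere (e.g. scene knowledge), which is why the rotation is taken from the gyroscope rather than estimated jointly.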


2020 ◽  
Vol 8 ◽  
Author(s):  
Bruno Berenguel

Poster presented at the IX Jornada de Jóvenes Investigadores del I3A


Sensors ◽  
2018 ◽  
Vol 18 (8) ◽  
pp. 2719 ◽  
Author(s):  
Diyi Liu ◽  
Shogo Arai ◽  
Jiaqi Miao ◽  
Jun Kinugawa ◽  
Zhao Wang ◽  
...  

Automating the bin picking task with robots entails the key step of pose estimation, which identifies and locates objects so that the robot can pick and manipulate them accurately and reliably. This paper proposes a novel point pair feature-based descriptor named Boundary-to-Boundary-using-Tangent-Line (B2B-TL) to estimate the pose of industrial parts, including parts whose point clouds lack key details, for example, the ridges of a part. The proposed descriptor uses the 3D point cloud and the 2D image data of the scene simultaneously, so the 2D image data can compensate for the details missing from the point cloud. Based on the B2B-TL descriptor, Multiple Edge Appearance Models (MEAM), a method that describes the target object with multiple models, is proposed to increase the recognition rate and reduce the computation time. A novel pipeline for online computation is presented to take advantage of B2B-TL and MEAM. Our algorithm is evaluated on synthetic and real scenes and implemented in a bin picking system. The experimental results show that our method is sufficiently accurate for a robot to grasp industrial parts and fast enough to be used in a real factory environment.
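For readers unfamiliar with point pair features, the classical four-component feature of Drost et al. is the baseline that descriptors like B2B-TL extend (B2B-TL replaces surface normals with boundary tangent lines and adds image information). This is a sketch of the classical variant only; it is not the B2B-TL descriptor itself:

```python
import numpy as np

def point_pair_feature(p1, n1, p2, n2):
    """Classical point pair feature for two oriented points (p, n):
    F = (||d||, angle(n1, d), angle(n2, d), angle(n1, n2)),
    where d = p2 - p1.  Descriptors such as B2B-TL follow the same
    pairing scheme but use boundary tangent lines instead of normals."""
    d = p2 - p1
    dist = np.linalg.norm(d)

    def angle(a, b):
        a = a / np.linalg.norm(a)
        b = b / np.linalg.norm(b)
        return np.arccos(np.clip(a @ b, -1.0, 1.0))

    return dist, angle(n1, d), angle(n2, d), angle(n1, n2)
```

In a full pipeline these features are quantized and hashed over all model point pairs offline, then looked up at runtime to vote for pose hypotheses.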


2012 ◽  
Vol 182-183 ◽  
pp. 1708-1712
Author(s):  
Wei Liu ◽  
Jian Hua Su ◽  
Sui Wu Zheng ◽  
Peng Wang

In industrial fields, the precise pose of a 3D object is the prerequisite for subsequent tasks such as grasping and assembly, so accurate pose estimation of 3D objects has been studied extensively over the last decades. Recovering the pose of a 3D workpiece from 2D image data is a challenging task in industrial applications. This paper proposes a fully automated pose estimation system capable of estimating the model and pose of a 3D workpiece that best match the 2D image data. We cast this as an optimization problem: find the model and pose parameters of the workpiece that minimize the difference between the real 2D image and a hypothetical 2D image produced from the 3D model under the given parameters. Because the unknown model and pose parameters are coupled and the objective function is discontinuous, this optimization problem cannot be solved by traditional optimization approaches. We therefore employ a heuristic optimization strategy, Differential Evolution, to cope with the problem. The experimental results demonstrate the effectiveness of the proposed method.
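The optimization step lends itself to a short sketch: a generic DE/rand/1/bin loop that minimizes any black-box cost, where in the paper's setting the objective would render the hypothetical 2D image from the model and pose parameters and compare it to the real image. This is a minimal generic implementation, not the authors' configuration; the hyperparameters F and CR are common defaults.

```python
import numpy as np

def differential_evolution(objective, bounds, pop_size=30, F=0.8, CR=0.9,
                           generations=200, seed=0):
    """Minimal DE/rand/1/bin optimizer.  `objective` maps a parameter
    vector (e.g. concatenated model and pose parameters) to a scalar
    cost; `bounds` is a sequence of (low, high) pairs per dimension."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    dim = len(bounds)
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], (pop_size, dim))
    cost = np.array([objective(p) for p in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # mutate: combine three distinct individuals other than i
            idx = rng.choice([j for j in range(pop_size) if j != i],
                             3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), bounds[:, 0], bounds[:, 1])
            # binomial crossover, keeping at least one mutant gene
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            # greedy selection
            c_trial = objective(trial)
            if c_trial <= cost[i]:
                pop[i], cost[i] = trial, c_trial
    best = int(np.argmin(cost))
    return pop[best], cost[best]
```

Because DE only needs objective values, it tolerates the discontinuous, non-differentiable cost that rules out gradient-based solvers here.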


Author(s):  
Zanuar Tri Romadon ◽  
Hary Oktavianto ◽  
Iwan Kurnianto Wibowo ◽  
Bima Sena Bayu Dewantara ◽  
Erna Alfi Nurrohmah ◽  
...  

2011 ◽  
Vol 23 (3) ◽  
pp. 400-407 ◽  
Author(s):  
Joonho Seo ◽  
Norihiro Koizumi ◽  
Takakazu Funamoto ◽  
Naohiko Sugita ◽  
...  

This paper presents a real-time pose estimation method for moving volumetric targets as part of a robotic HIFU treatment system. For the acquired biplane US images, the current pose of the preoperative model is calculated by iterative segmentation and registration. Seed contours for the segmentation in each iteration are provided by the previously registered preoperative 3-D model, and the segmented boundary points then update the pose of the 3-D model. Boundary outlier removal makes the algorithm robust against partially noisy boundaries, while the use of spatial boundary points accelerates the computation so that it runs in real time. In phantom experiments, the registration accuracy for biplane US image data was evaluated and the processing time was investigated.
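The boundary outlier removal step can be illustrated with a simple robust-statistics filter: discard segmented boundary points whose distance to the registered model exceeds a few median absolute deviations. This is a generic stand-in, assuming a MAD-based threshold; the paper's actual rejection criterion may differ.

```python
import numpy as np

def reject_boundary_outliers(points, model_points, k=3.0):
    """Drop segmented boundary points (N x D) whose distance to the
    nearest registered-model point (M x D) deviates from the median
    distance by more than k median absolute deviations."""
    # distance of each boundary point to its closest model point
    d = np.min(np.linalg.norm(points[:, None, :] - model_points[None, :, :],
                              axis=2), axis=1)
    med = np.median(d)
    mad = np.median(np.abs(d - med)) + 1e-12   # avoid a zero threshold
    return points[np.abs(d - med) <= k * mad]
```

The surviving inlier points are then fed to the registration step that updates the model pose for the next iteration.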


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Laurie Needham ◽  
Murray Evans ◽  
Darren P. Cosker ◽  
Logan Wade ◽  
Polly M. McGuigan ◽  
...  

Human movement researchers are often restricted to laboratory environments and data capture techniques that are time- and/or resource-intensive. Markerless pose estimation algorithms show great potential to facilitate large-scale movement studies 'in the wild', i.e., outside of the constraints imposed by marker-based motion capture. However, the accuracy of such algorithms has not yet been fully evaluated. We computed 3D joint centre locations using several pre-trained deep-learning-based pose estimation methods (OpenPose, AlphaPose, DeepLabCut) and compared them to marker-based motion capture. Participants performed walking, running and jumping activities while marker-based motion capture data and multi-camera high-speed images (200 Hz) were captured. The pose estimation algorithms were applied to the 2D image data and 3D joint centre locations were reconstructed. Pose-estimation-derived joint centres demonstrated systematic differences at the hip and knee (~30–50 mm), most likely due to mislabelling of ground truth data in the training datasets. Where systematic differences were lower, e.g., at the ankle, differences of 1–15 mm were observed depending on the activity. Markerless motion capture represents a highly promising emerging technology that could free movement scientists from laboratory environments, but 3D joint centre locations are not yet consistently comparable to marker-based motion capture.
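The reconstruction step described above (lifting multi-camera 2D keypoints to 3D joint centres) is typically done by linear triangulation. This is a minimal DLT sketch for one joint seen by calibrated cameras, under the standard pinhole model; it is an illustrative baseline, not the authors' specific reconstruction pipeline.

```python
import numpy as np

def triangulate_joint(projections, points_2d):
    """Linear (DLT) triangulation of one joint centre from its 2D
    detections in several calibrated cameras.  `projections` is a list
    of 3x4 camera matrices P_i and `points_2d` the matching (u, v)
    detections.  Each view contributes two rows u*P[2]-P[0], v*P[2]-P[1]
    to a homogeneous system A X = 0, solved by SVD."""
    rows = []
    for P, (u, v) in zip(projections, points_2d):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # dehomogenize to a 3D point
```

With more than two cameras the system is overdetermined, and the SVD solution gives the algebraic least-squares joint centre, which is why multi-camera setups (as used here) reduce 2D detection noise in the reconstructed 3D locations.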

