[POSTER] An Inertial, Magnetic and Vision Based Trusted Pose Estimation for AR and 3D Data Qualification on Long Urban Pedestrian Displacements

Author(s):  
Nicolas Antigny ◽  
Myriam Servieres ◽  
Valerie Renaudin
2020 ◽  
Author(s):  
Gopi Krishna Erabati

The technology in the current research scenario is marching towards automation for higher productivity, with accurate and precise product development. Vision and Robotics are domains which work to create autonomous systems and are the key technology in the quest for mass productivity. The automation in an industry can be achieved by detecting interactive objects and estimating their pose to manipulate them. Therefore object localization (i.e., pose), which includes the position and orientation of the object, has profound significance. The applications of object pose estimation range from industry automation to the entertainment industry, and from health care to surveillance. Pose estimation of objects is very significant in many cases, for example in order for robots to manipulate objects, or for accurate rendering in Augmented Reality (AR), among others. This thesis tries to solve the issue of object pose estimation using 3D data of the scene acquired from 3D sensors (e.g. Kinect and Orbbec Astra Pro, among others). The 3D data has the advantage of independence from object texture and invariance to illumination. The proposal is divided into two phases: an offline phase, where the 3D model template of the object (for estimation of pose) is built using the Iterative Closest Point (ICP) algorithm; and an online phase, where the pose of the object is estimated by aligning the scene to the model using ICP, provided with an initial alignment using 3D descriptors (like Fast Point Feature Histograms (FPFH)). The approach we develop is to be integrated on two different platforms: 1) the humanoid robot `Pyrene', which has an Orbbec Astra Pro 3D sensor for data acquisition, and 2) an Unmanned Aerial Vehicle (UAV), which has an Intel RealSense Euclid on it. Datasets of objects (like an electric drill, a brick, a small cylinder, and a cake box) are acquired using Microsoft Kinect, Orbbec Astra Pro and Intel RealSense Euclid sensors to test the performance of this technique.
The objects used to test this approach are the ones used by the robot. The technique is tested in two scenarios: firstly, when the object is on a table, and secondly, when the object is held in hand by a person. The range of the objects from the sensor is 0.6 to 1.6 m. The technique can handle occlusion of the object by a hand (when we hold the object), as ICP can work even if only part of the object is visible in the scene.
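The online alignment stage described above can be sketched in a few lines. The following is a minimal NumPy illustration of point-to-point ICP with a Kabsch least-squares solve, not the thesis implementation: the thesis additionally seeds ICP with an FPFH-based initial alignment, whereas here the clouds are assumed to start close enough that the identity transform suffices, and all names and shapes are illustrative.

```python
# Minimal point-to-point ICP sketch (NumPy only). Assumes a rough initial
# alignment already holds; in the thesis pipeline FPFH descriptors supply it.
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def icp(scene, model, iters=50):
    """Iteratively align `scene` onto `model`; returns the transformed scene points."""
    cur = scene.copy()
    for _ in range(iters):
        # brute-force nearest-neighbour correspondences
        d2 = ((cur[:, None, :] - model[None, :, :]) ** 2).sum(axis=-1)
        matched = model[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```

Because correspondences are recomputed from whatever scene points are available, ICP tolerates partial views, which is why the approach above can cope with occlusion of the object by a hand.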


Symmetry ◽  
2020 ◽  
Vol 12 (10) ◽  
pp. 1636
Author(s):  
Yiqi Wu ◽  
Shichao Ma ◽  
Dejun Zhang ◽  
Jun Sun

Hand pose estimation from 3D data is a key challenge in computer vision as well as an essential step for human–computer interaction. Many deep learning-based hand pose estimation methods have made significant progress but give little consideration to the inner interactions of the input data, especially when consuming hand point clouds. Therefore, this paper proposes an end-to-end capsule-based hand pose estimation network (Capsule-HandNet), which processes hand point clouds directly while considering the structural relationships among local parts, including symmetry, junctions, relative location, etc. Firstly, an encoder is adopted in Capsule-HandNet to extract multi-level features into a latent capsule by dynamic routing. The latent capsule explicitly represents the structural relationship information of the hand point cloud. Then, a decoder recovers a point cloud from the latent capsule to fit the input hand point cloud. This auto-encoder procedure is designed to ensure the effectiveness of the latent capsule. Finally, the hand pose is regressed from the combined feature, which consists of the global feature and the latent capsule. Capsule-HandNet is evaluated on public hand pose datasets under the metrics of mean error and fraction of frames. The mean joint errors of Capsule-HandNet on the MSRA and ICVL datasets reach 8.85 mm and 7.49 mm, respectively, and Capsule-HandNet outperforms state-of-the-art methods at most thresholds under the fraction-of-frames metric. The experimental results demonstrate the effectiveness of Capsule-HandNet for 3D hand pose estimation.
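The dynamic-routing step that forms the latent capsule can be sketched as routing-by-agreement in the style of Sabour et al.'s capsule networks; the shapes, iteration count and function names below are illustrative assumptions, not the actual Capsule-HandNet configuration.

```python
# Hedged NumPy sketch of routing-by-agreement between capsule layers.
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    """Capsule nonlinearity: preserves direction, squashes the norm into [0, 1)."""
    n2 = (s ** 2).sum(axis=axis, keepdims=True)
    return (n2 / (1.0 + n2)) * s / np.sqrt(n2 + eps)

def dynamic_routing(u_hat, iters=3):
    """u_hat: (num_in, num_out, dim) prediction vectors.
    Returns output capsules v (num_out, dim) and coupling coefficients c."""
    num_in, num_out, _ = u_hat.shape
    b = np.zeros((num_in, num_out))                        # routing logits
    for _ in range(iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # softmax over outputs
        s = (c[..., None] * u_hat).sum(axis=0)                # weighted sum per capsule
        v = squash(s)                                         # output capsules
        b = b + (u_hat * v[None]).sum(axis=-1)                # agreement update
    return v, c
```

The coupling coefficients for each input capsule sum to one, so each local feature distributes its "vote" across the higher-level capsules according to agreement.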


Author(s):  
Douglas L. Dorset

The quantitative use of electron diffraction intensity data for the determination of crystal structures represents the pioneering achievement in the electron crystallography of organic molecules, an effort largely begun by B. K. Vainshtein and his co-workers. However, despite numerous representative structure analyses yielding results consistent with X-ray determinations, this entire effort was viewed with considerable mistrust by many crystallographers. This was no doubt due to the rather high crystallographic R-factors reported for some structures and, more importantly, the failure to convince many skeptics that the measured intensity data were adequate for ab initio structure determinations. We have recently demonstrated the utility of these data sets for structure analyses by direct phase determination based on the probabilistic estimate of three- and four-phase structure invariant sums. Examples include the structure of diketopiperazine using Vainshtein's 3D data, a similar 3D analysis of the room-temperature structure of thiourea, and a zonal determination of the urea structure, the latter also based on data collected by the Moscow group.
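The three-phase structure invariants referred to above take the standard direct-methods triplet form; as a hedged reminder in textbook notation (not Dorset's specific derivation), the triplet relation is

```latex
\Phi_{\mathbf{h}\mathbf{k}}
  = \varphi_{\mathbf{h}} + \varphi_{\mathbf{k}} + \varphi_{-\mathbf{h}-\mathbf{k}}
  \approx 0 ,
\qquad
P(\Phi_{\mathbf{h}\mathbf{k}}) \propto \exp(\kappa \cos \Phi_{\mathbf{h}\mathbf{k}}),
\qquad
\kappa = 2\,\sigma_3\,\sigma_2^{-3/2}\,
  \bigl|E_{\mathbf{h}} E_{\mathbf{k}} E_{\mathbf{h}+\mathbf{k}}\bigr| ,
```

where the $E$ are normalized structure factors and $\sigma_n = \sum_j Z_j^n$ over atomic numbers $Z_j$; the larger $\kappa$ (i.e. the stronger the three reflections), the more reliable the estimate that the invariant sum is near zero.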


2003 ◽  
Vol 42 (05) ◽  
pp. 215-219
Author(s):  
G. Platsch ◽  
A. Schwarz ◽  
K. Schmiedehausen ◽  
B. Tomandl ◽  
W. Huk ◽  
...  

Summary: Aim: Although the fusion of images from different modalities may improve diagnostic accuracy, it is rarely used in clinical routine work due to logistic problems. Therefore we evaluated the performance of, and time needed for, fusing MRI and SPECT images using semiautomated dedicated software. Patients, material and method: In 32 patients, regional cerebral blood flow was measured using 99mTc ethyl cysteinate dimer (ECD) and the three-headed SPECT camera MultiSPECT 3. MRI scans of the brain were performed using either a 0.2 T Open or a 1.5 T Sonata scanner. Twelve of the MRI data sets were acquired using a 3D T1w MPRAGE sequence, 20 with a 2D acquisition technique and different echo sequences. Image fusion was performed on a Syngo workstation by an experienced user of the software, using an entropy-minimizing algorithm. The fusion results were classified. We measured the time needed for the automated fusion procedure and, where necessary, the time for manual realignment after an automated but insufficient fusion. Results: The mean time of the automated fusion procedure was 123 s; it was significantly shorter for the 2D than for the 3D MRI data sets. For four of the 2D data sets and two of the 3D data sets an optimal fit was reached using the automated approach; the remaining 26 data sets required manual correction. The sum of the time required for automated fusion and that needed for manual correction averaged 320 s (50-886 s). Conclusion: The fusion of 3D MRI data sets lasted significantly longer than that of the 2D MRI data. The automated fusion tool delivered an optimal fit in 20% of cases; in the other 80%, manual correction was necessary. Nevertheless, each of the 32 SPECT data sets could be merged with the corresponding MRI data in less than 15 min, which seems acceptable for clinical routine use.
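The entropy-minimizing registration idea behind such fusion tools can be illustrated with a toy example: slide one image over the other and keep the offset whose joint intensity histogram has minimal entropy (identical structures line up, so the joint histogram collapses towards a diagonal). The small synthetic 2D arrays and the exhaustive integer-shift search below are illustrative stand-ins, not the Syngo workstation's actual algorithm.

```python
# Toy sketch of joint-entropy-based registration for two images.
import numpy as np

def joint_entropy(a, b, bins=16):
    """Shannon entropy (bits) of the joint intensity histogram of two images."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def best_shift(fixed, moving, max_shift=5):
    """Integer (dy, dx) translation (with wrap-around) minimising joint entropy."""
    best, best_h = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            h = joint_entropy(fixed, shifted)
            if h < best_h:
                best, best_h = (dy, dx), h
    return best
```

Real multimodal registration tools maximise mutual information (marginal entropies minus joint entropy) over rigid or affine transforms with subpixel interpolation, but the core signal is the same: alignment sharpens the joint histogram.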


2015 ◽  
Vol 10 (6) ◽  
pp. 558 ◽  
Author(s):  
Kristian Sestak ◽  
Zdenek Havlice
