Fusion of Multiple Lidars and Inertial Sensors for the Real-Time Pose Tracking of Human Motion

Sensors, 2020, Vol 20 (18), pp. 5342
Author(s): Ashok Kumar Patil, Adithya Balasubramanyam, Jae Yeong Ryu, Pavan Kumar B N, Bharatesh Chakravarthi, et al.

Today, advances in sensing technology enable multiple sensors to track human motion and activity precisely. Tracking human motion has various applications, such as fitness training, healthcare, rehabilitation, human-computer interaction, virtual reality, and activity recognition. The fusion of multiple sensors therefore creates new opportunities to develop and improve existing systems. This paper proposes a pose-tracking system that fuses multiple three-dimensional (3D) light detection and ranging (lidar) and inertial measurement unit (IMU) sensors. The initial step estimates the human skeletal parameters proportional to the target user’s height by extracting the point cloud from the lidars. Next, IMUs are used to capture the orientation of each skeleton segment and estimate the respective joint positions. In the final stage, the displacement drift in the position is corrected by fusing the data from both sensors in real time. The installation setup is relatively effortless, flexible in sensor placement, and delivers results comparable to state-of-the-art pose-tracking systems. We evaluated the proposed system for its accuracy in the user’s height estimation, full-body joint position estimation, and reconstruction of the 3D avatar, using a publicly available dataset for the experimental evaluation wherever possible. The results reveal that the accuracy of the height and position estimation is well within an acceptable range of ±3–5 cm. The reconstruction of the motion based on the publicly available dataset and our own data is precise and realistic.
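The final fusion stage described in this abstract can be sketched as a simple complementary correction: the IMU-derived position drifts over time, and the slower but drift-free lidar estimate pulls it back. All names and the blend gain below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of drift correction by lidar/IMU position fusion.
# The gain value is an assumption chosen only for illustration.

def fuse_position(imu_pos, lidar_pos, gain=0.1):
    """Complementary-style correction: pull the drifting IMU position
    toward the lidar measurement by a fixed gain per update."""
    return [p + gain * (l - p) for p, l in zip(imu_pos, lidar_pos)]

# Example: the IMU estimate has drifted 0.5 m in x; repeated lidar
# updates shrink the displacement error toward zero.
imu_pos = [1.5, 0.0, 1.0]    # drifting IMU estimate (m)
lidar_pos = [1.0, 0.0, 1.0]  # drift-free lidar estimate (m)
for _ in range(20):
    imu_pos = fuse_position(imu_pos, lidar_pos)
print(imu_pos[0])  # approaches 1.0
```

In a real system the gain would be tuned against the lidar update rate and noise; a Kalman filter would choose it adaptively.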

Sensors, 2021, Vol 21 (7), pp. 2340
Author(s): Ashok Kumar Patil, Adithya Balasubramanyam, Jae Yeong Ryu, Bharatesh Chakravarthi, Young Ho Chai

Real-time human pose estimation and tracking from multi-sensor systems is essential for many applications. Combining multiple heterogeneous sensors increases the opportunities to improve human motion tracking. With only a single sensor type, e.g., inertial sensors, pose estimation accuracy degrades over longer periods because of sensor drift. This paper proposes a human motion tracking system that uses lidar and inertial sensors to estimate 3D human pose in real time. Human motion tracking includes human detection and the estimation of height, skeletal parameters, position, and orientation by fusing lidar and inertial sensor data. Finally, the estimated data are reconstructed on a virtual 3D avatar. The proposed human pose tracking system was developed using open-source platform APIs. Experimental results verified the real-time position tracking accuracy of the proposed system, which was in good agreement with current multi-sensor systems.
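The skeletal-parameter estimation step mentioned above (scaling segment lengths to the user's measured height) can be illustrated as below. The segment ratios are generic anthropometric approximations assumed for this sketch, not values taken from the paper.

```python
# Hypothetical height-proportional skeleton model. Ratios are assumed
# anthropometric fractions of total body height, for illustration only.
SEGMENT_RATIOS = {
    "upper_arm": 0.186,
    "forearm":   0.146,
    "thigh":     0.245,
    "shank":     0.246,
}

def skeletal_parameters(height_m):
    """Scale each body-segment length by the user's estimated height."""
    return {seg: r * height_m for seg, r in SEGMENT_RATIOS.items()}

params = skeletal_parameters(1.75)  # height estimated from the lidar point cloud
print(params["thigh"])  # ≈ 0.429 m
```

Once the segment lengths are fixed, each IMU orientation places the distal joint of its segment relative to the proximal one, yielding the full-body joint positions.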


2018, Vol 198, pp. 04010
Author(s): Zhonghao Han, Lei Hu, Na Guo, Biao Yang, Hongsheng Liu, et al.

As a newly emerging form of human-computer interaction, motion tracking technology offers a way to extract human motion data. This paper presents a series of techniques to improve the flexibility of a motion tracking system based on inertial measurement units (IMUs). First, we built a highly miniaturized wireless tracking node by integrating an IMU, a Wi-Fi module, and a power supply. Then, the data transfer rate was optimized using an asynchronous query method. Finally, to simplify the setup and make all nodes interchangeable, we designed a calibration procedure and trained a support vector machine (SVM) model to determine the binding relation between body segments and tracking nodes after setup. Evaluations of the whole system demonstrate the effectiveness of the proposed methods and their advantages over other commercial motion tracking systems.
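The node-to-segment binding idea can be sketched as a classification over calibration-motion features: each interchangeable node records a short calibration movement, and a classifier decides which body segment it is attached to. The paper trains an SVM; to keep this illustration dependency-free, a nearest-centroid classifier is used in its place, and all feature values and labels are invented.

```python
# Sketch of automatic node-to-segment binding after setup.
# Nearest-centroid stands in for the paper's SVM; values are invented.

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Hypothetical calibration templates: expected acceleration variance per
# segment for a fixed calibration movement.
CENTROIDS = {"forearm": 4.0, "thigh": 1.5, "shank": 2.5}

def bind_node(accel_samples):
    """Assign a tracking node to the segment whose calibration
    feature is closest to the node's observed feature."""
    feat = variance(accel_samples)
    return min(CENTROIDS, key=lambda seg: abs(CENTROIDS[seg] - feat))

samples = [0.0, 2.0, -2.0, 2.0, -2.0, 0.0]  # variance ≈ 2.67
print(bind_node(samples))  # "shank"
```

An SVM would replace the centroid distance with a learned decision boundary over a richer feature vector, but the binding workflow is the same.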


Sensors, 2020, Vol 20 (3), pp. 919
Author(s): Hao Du, Wei Wang, Chaowen Xu, Ran Xiao, Changyin Sun

How to estimate the state of an unmanned aerial vehicle (UAV) in real time across multiple environments remains a challenge. Although the global navigation satellite system (GNSS) has been widely applied, drones cannot perform position estimation when a GNSS signal is unavailable or disturbed. In this paper, the problem of state estimation in multiple environments is solved by employing an extended Kalman filter (EKF) algorithm to fuse the data from multiple heterogeneous sensors (MHS): an inertial measurement unit (IMU), a magnetometer, a barometer, a GNSS receiver, an optical flow sensor (OFS), light detection and ranging (LiDAR), and an RGB-D camera. Finally, the robustness and effectiveness of the EKF-based multi-sensor data fusion system are verified by field flights in unstructured, indoor, outdoor, and indoor-outdoor transition scenarios.
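The predict/update cycle behind such EKF-based fusion can be illustrated with a single-state example: predict altitude from IMU vertical acceleration, then correct with a barometer reading. The paper's EKF fuses many more sensors over a full nonlinear state; this linear one-dimensional version, with made-up noise values, only shows the mechanics.

```python
# Minimal 1-D Kalman filter sketch (noise values q, r are assumptions).

def kf_step(x, P, accel, z_baro, dt=0.1, q=0.01, r=0.5):
    """One predict/update cycle for a scalar altitude state."""
    # Predict: integrate acceleration (velocity state omitted for brevity).
    x_pred = x + 0.5 * accel * dt * dt
    P_pred = P + q
    # Update: blend in the barometer altitude by the Kalman gain.
    K = P_pred / (P_pred + r)
    x_new = x_pred + K * (z_baro - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0  # initial altitude estimate and variance
for _ in range(50):
    x, P = kf_step(x, P, accel=0.0, z_baro=10.0)
print(round(x, 2))  # converges toward the 10 m barometer reading
```

In the multi-sensor case each available measurement (GNSS, OFS, LiDAR, camera) contributes its own update step with its own noise covariance, which is what lets the filter degrade gracefully when GNSS drops out.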


2020, Vol 69 (11), pp. 8953-8961
Author(s): Francesca Digiacomo, Abanti Shama Afroz, Riccardo Pelliccia, Francesco Inglese, Mario Milazzo, et al.

2021, Vol 2021, pp. 1-12
Author(s): Chaoyong Shen, Zongjian Lin, Shaoqi Zhou, Xuling Luo, Yu Zhang

Multisource remote sensing data have been extensively used in disaster and emergency response management. Different types of visual and measured data, such as high-resolution orthoimages, real-time videos, accurate digital elevation models, and three-dimensional landscape maps, can enable effective rescue plans and aid the efficient dispatching of rescuers after disasters. Generally, such data are acquired using unmanned aerial vehicles equipped with multiple sensors. In emergency response cases, efficient and real-time access to data is more important than in traditional application scenarios. In this study, an efficient emergency response airborne mapping system equipped with multiple sensors was designed. The system comprises groups of wide-angle cameras, a high-definition video camera, an infrared video camera, a LiDAR system, and a global navigation satellite system/inertial measurement unit. The wide-angle cameras had a visual field of 85° × 105°, facilitating the efficient operation of the mapping system. Numerous calibrations were performed on the constructed mapping system. In particular, initial calibration and self-calibration were performed to determine the relative pose between the different wide-angle cameras so that all acquired images could be fused. The mapping system was then tested in an area with altitudes of 1000–1250 m. The wide-angle cameras exhibited small biases (0.090 m, −0.018 m, and −0.046 m in the x-, y-, and z-axes, respectively). Moreover, their root-mean-square error (RMSE) along the planar direction was smaller than that along the vertical direction (0.202 and 0.294 m, respectively). The LiDAR system achieved smaller biases (0.117, −0.020, and −0.039 m in the x-, y-, and z-axes, respectively) and a smaller RMSE in the vertical direction (0.192 m) than the wide-angle cameras; however, the RMSE of the LiDAR system along the planar direction (0.276 m) was slightly larger. The proposed system shows potential for use in emergency response for efficiently acquiring data such as images and point clouds.
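The bias and RMSE figures quoted above are computed from checkpoint residuals in the usual way, as sketched below. The residual values here are invented for illustration; only the formulas match the reported metrics.

```python
# Bias (mean residual) and RMSE over per-checkpoint residuals.
# The residuals below are invented example values, not the paper's data.

def bias(residuals):
    """Mean signed error along one axis."""
    return sum(residuals) / len(residuals)

def rmse(residuals):
    """Root-mean-square error along one axis."""
    return (sum(r * r for r in residuals) / len(residuals)) ** 0.5

dx = [0.12, 0.06, 0.10, 0.08]  # hypothetical x-axis residuals (m)
print(round(bias(dx), 3), round(rmse(dx), 3))
```

Bias captures a systematic offset (e.g., a boresight misalignment), while RMSE folds in the random scatter as well, which is why the two are reported separately per axis.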


2019, Vol 13 (4), pp. 506-516
Author(s): Tsubasa Maruyama, Mitsunori Tada, Haruki Toda

The measurement of human motion is an important aspect of ergonomic mobility design, in which a mobility product is evaluated based on human factors obtained by digital human (DH) technologies. Optical motion-capture (MoCap) systems have been widely used for measuring human motion in laboratories. However, it is generally difficult to measure human motion on mobility products in real-world scenarios, e.g., riding a bicycle on an outdoor slope, owing to unstable lighting conditions and camera arrangements. On the other hand, an inertial-measurement-unit (IMU)-based MoCap system does not require any optical devices, offering the potential to measure riding motion even in outdoor environments. In general, however, the estimated motion is not necessarily accurate, as there are many errors due to the nature of the IMU itself, such as drift and calibration errors. Thus, it is infeasible to apply an IMU-based system directly to riding motion estimation. In this study, we develop a new riding MoCap system using IMUs. The proposed system estimates product and human riding motions by combining the IMU orientations with contact constraints between the product and the DH, e.g., the DH hands in contact with the handles. The proposed system is demonstrated with a bicycle ergometer that includes handles, a seat, a backrest, and foot pedals, as in general mobility products. The proposed system is further validated by comparing the estimated joint angles and positions with those of optical MoCap for three different subjects. The experiment reveals both the effectiveness and the limitations of the proposed system. It is confirmed that the proposed system improves the joint position estimation accuracy compared with a system using only IMUs. The angle estimation accuracy is also improved for joints near the contact points. However, the angle accuracy decreases for a few joints. This is explained by the fact that the proposed system modifies the orientations of all body segments to satisfy the contact constraints, even if the orientations of a few joints are already correct. It is further confirmed that the computation time of the proposed system is short enough for real-time application.
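The contact-constraint idea can be sketched in its simplest form: when a digital-human end effector is known to be touching a product part (a hand on a handle), the estimated pose is corrected so the two coincide. Here the correction is reduced to a whole-body translation; the actual system also adjusts segment orientations, and all names and geometry below are illustrative assumptions.

```python
# Simplified contact-constraint correction: translate the body so the
# drifting end-effector estimate lands on the known contact point.
# Positions and the correction strategy are assumptions for illustration.

def apply_contact_constraint(root, end_effector, contact_point):
    """Translate the whole body so the end effector meets the contact point."""
    offset = [c - e for c, e in zip(contact_point, end_effector)]
    new_root = [r + o for r, o in zip(root, offset)]
    new_end = list(contact_point)
    return new_root, new_end

root = [0.0, 0.0, 1.0]      # estimated pelvis position (m)
hand = [0.4, 0.1, 1.2]      # drifting IMU estimate of the hand
handle = [0.45, 0.0, 1.25]  # known handle position on the ergometer
root, hand = apply_contact_constraint(root, hand, handle)
print(root, hand)
```

Because the product geometry (handle, seat, pedal positions) is known exactly, these constraints anchor the IMU-only estimate and suppress position drift, which matches the reported accuracy gain near the contacts.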


Author(s): Jae-Won Sung, Daijin Kim

Since pose-varying face images form a nonlinear convex manifold in a high-dimensional image space, it is difficult to model their pose distribution with a simple probability density function. To overcome this difficulty, we divide the pose space into many constituent pose classes and treat the continuous pose estimation problem as a discrete pose-class identification problem. We propose hierarchically structured ML (maximum likelihood) pose classifiers in a reduced feature space to decrease the computation time for pose identification, where the pose space is divided into several pose groups and each group consists of a number of similar neighboring poses. We use the CONDENSATION algorithm to find a newly appearing face and track it through a variety of poses in real time. Simulation results show that the proposed pose identification using hierarchically structured ML pose classifiers is faster than conventional pose identification using flat-structured ML pose classifiers. A real-time facial pose tracking system is built with the high-speed hierarchically structured ML pose classifiers.
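The hierarchical search described above can be sketched as a two-stage classification: a coarse stage picks the pose group, and only that group's members are scored, so fewer likelihood evaluations are needed than in a flat search. The pose angles and the distance-based stand-in for the ML score below are invented for illustration.

```python
# Two-stage (group, then pose) identification vs. a flat search.
# Pose classes and the score function are illustrative assumptions.
POSE_GROUPS = {
    "frontal": {"-15deg": -15.0, "0deg": 0.0, "15deg": 15.0},
    "profile": {"60deg": 60.0, "75deg": 75.0, "90deg": 90.0},
}

def score(observed, pose_angle):
    return -abs(observed - pose_angle)  # stand-in for a log-likelihood

def identify_pose(observed):
    evaluations = 0
    # Stage 1: choose the pose group whose centre angle scores best.
    centers = {g: sum(p.values()) / len(p) for g, p in POSE_GROUPS.items()}
    best_group = max(centers, key=lambda g: score(observed, centers[g]))
    evaluations += len(centers)
    # Stage 2: score only the poses inside the chosen group.
    poses = POSE_GROUPS[best_group]
    best_pose = max(poses, key=lambda p: score(observed, poses[p]))
    evaluations += len(poses)
    return best_pose, evaluations

pose, n_eval = identify_pose(80.0)
print(pose, n_eval)  # "75deg" with 5 evaluations vs. 6 for a flat search
```

With six poses the saving is marginal, but with G groups of P poses the hierarchical search costs roughly G + P evaluations instead of G × P, which is the source of the reported speed-up.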


Sensors, 2019, Vol 19 (17), pp. 3714
Author(s): Guihua Liu, Weilin Zeng, Bo Feng, Feng Xu

Although many impressive SLAM systems have achieved exceptional accuracy in real environments, most of them have been verified only in static environments. For mobile robots and autonomous driving, however, dynamic objects in the scene can cause tracking failure or large deviations during pose estimation. In this paper, a general visual SLAM system for dynamic scenes with multiple sensors, called DMS-SLAM, is proposed. First, the combination of GMS and a sliding window is used to initialize the system, which eliminates the influence of dynamic objects and constructs a static initial 3D map. Then, the corresponding 3D points of the current frame in the local map are obtained by reprojection. These points are combined with the constant-speed model or the reference-frame model to estimate the pose of the current frame and update the 3D map points in the local map. Finally, the keyframes selected by the tracking module are combined with the GMS feature matching algorithm to add static 3D map points to the local map. DMS-SLAM implements pose tracking, loop-closure detection, and relocalization based on the static 3D map points of the local map, and supports monocular, stereo, and RGB-D visual sensors in dynamic scenes. Exhaustive evaluation on the public TUM and KITTI datasets demonstrates that DMS-SLAM outperforms state-of-the-art visual SLAM systems in accuracy and speed in dynamic scenes.
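The constant-speed model mentioned in this abstract is a standard trick: the previous inter-frame motion is reapplied to predict the current camera pose before refinement against the map. The sketch below reduces poses to 2-D translations for brevity; a real SLAM system works with full SE(3) transforms.

```python
# Constant-velocity pose prediction (poses simplified to 2-D translations;
# this is an illustration of the idea, not DMS-SLAM's implementation).

def predict_pose(prev_pose, prev_prev_pose):
    """Assume the last frame-to-frame motion repeats for the current frame."""
    velocity = [a - b for a, b in zip(prev_pose, prev_prev_pose)]
    return [p + v for p, v in zip(prev_pose, velocity)]

poses = [[0.0, 0.0], [0.1, 0.05]]  # camera poses at frames t-2 and t-1
predicted = predict_pose(poses[-1], poses[-2])
print(predicted)  # [0.2, 0.1]
```

The predicted pose gives the reprojection step a good initial guess; if too few static map points match (e.g., after abrupt motion), tracking falls back to the reference-frame model or relocalization.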

