Human motion capture with vision and inertial sensors for hand/arm robot teleoperation

2016 ◽  
Vol 52 (3-4) ◽  
pp. 1629-1636 ◽  
Author(s):  
Futoshi Kobayashi ◽  
Keiichi Kitabayashi ◽  
Kai Shimizu ◽  
Hiroyuki Nakamoto ◽  
Fumio Kojima
Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3065
Author(s):  
Ernest Kwesi Ofori ◽  
Shuaijie Wang ◽  
Tanvi Bhatt

Inertial sensors (IS) enable the kinematic analysis of human motion with fewer logistical limitations than the silver standard optoelectronic motion capture (MOCAP) system. However, there are no data on the validity of IS for perturbation training or during the performance of dance. The aim of the present study was to determine the concurrent validity of IS in the analysis of kinematic data during slip- and trip-like perturbations and during the performance of dance. Seven IS and the MOCAP system were used simultaneously to capture the reactive responses and dance movements of fifteen healthy young participants (age: 18–35 years). Bland-Altman (BA) plots, root mean square errors (RMSE), Pearson’s correlation coefficients (R), and intraclass correlation coefficients (ICC) were used to compare kinematic variables of interest between the two systems for absolute equivalency and accuracy. Limits of agreement (LOA) of the BA plots ranged from −0.23 to 0.56 and from −0.21 to 0.43 for slip and trip stability variables, respectively. The RMSE for slip and trip stability ranged from 0.11 to 0.20 and from 0.11 to 0.16, respectively. For joint mobility in dance, LOA varied from −6.98 to 18.54, while RMSE ranged from 1.90 to 13.06. Comparison of the IS and optoelectronic MOCAP systems for reactive balance and body segmental kinematics revealed that R varied from 0.59 to 0.81 and from 0.47 to 0.85, while ICC ranged from 0.50 to 0.72 and from 0.45 to 0.84, respectively, for slip–trip perturbations and dance. These results indicate moderate to high concurrent validity between the IS and MOCAP systems and are consistent with findings from similar studies. This suggests that IS are valid tools for the quantitative analysis of reactive balance and mobility kinematics during slip–trip perturbations and the performance of dance in settings beyond the laboratory, including clinical and home environments.
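The agreement statistics this abstract relies on (Bland-Altman limits of agreement, RMSE, and Pearson's r) can be computed as below; this is a generic sketch, not the authors' analysis code, and the 1.96-sigma LOA definition is the conventional assumption.

```python
import numpy as np

def agreement_metrics(a, b):
    """Compare two measurement systems (e.g., IS vs. MOCAP) on the same
    kinematic variable: Bland-Altman bias and limits of agreement,
    root mean square error, and Pearson's correlation coefficient."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()                       # mean systematic difference
    spread = 1.96 * diff.std(ddof=1)         # conventional 95% limits
    loa = (bias - spread, bias + spread)
    rmse = np.sqrt(np.mean(diff ** 2))
    r = np.corrcoef(a, b)[0, 1]
    return bias, loa, rmse, r
```

Narrow LOA around a near-zero bias, low RMSE, and high r together indicate the two systems agree in both accuracy and trend.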


2018 ◽  
Vol 198 ◽  
pp. 04010
Author(s):  
Zhonghao Han ◽  
Lei Hu ◽  
Na Guo ◽  
Biao Yang ◽  
Hongsheng Liu ◽  
...  

As a newly emerging human-computer interaction technology, motion tracking offers a way to extract human motion data. This paper presents a series of techniques to improve the flexibility of a motion tracking system based on inertial measurement units (IMUs). First, we built a highly miniaturized wireless tracking node by integrating an IMU, a Wi-Fi module and a power supply. Then, the data transfer rate was optimized using an asynchronous query method. Finally, to simplify the setup and make all nodes interchangeable, we designed a calibration procedure and trained a support vector machine (SVM) model to determine the binding relation between body segments and tracking nodes after setup. Evaluations of the whole system confirm the effectiveness of the proposed methods and demonstrate its advantages compared to other commercial motion tracking systems.
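The node-to-segment binding step can be sketched as a small classification problem over calibration-pose features. The paper trains an SVM; the nearest-centroid classifier below is a deliberately minimal stand-in, and the feature vectors and segment names are illustrative assumptions.

```python
import math

def train_centroids(calib):
    """calib maps segment name -> list of feature vectors recorded while
    that segment's node performs the calibration pose.  Returns one
    centroid feature vector per segment (stand-in for the paper's SVM)."""
    return {seg: [sum(col) / len(col) for col in zip(*vecs)]
            for seg, vecs in calib.items()}

def bind_node(centroids, feat):
    """Bind a tracking node to the body segment whose calibration
    centroid is closest to the node's observed features."""
    return min(centroids, key=lambda seg: math.dist(centroids[seg], feat))
```

Because binding is decided after setup from the data itself, any node can be strapped to any segment, which is the interchangeability property the paper targets.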


2016 ◽  
Vol 138 (9) ◽  
Author(s):  
Arash Atrsaei ◽  
Hassan Salarieh ◽  
Aria Alasty

Due to the various applications of human motion capture techniques, developing low-cost methods that are applicable in nonlaboratory environments is under consideration. MEMS inertial sensors and Kinect are two low-cost devices that can be utilized in home-based motion capture systems, e.g., home-based rehabilitation. In this work, an unscented Kalman filter approach was developed based on the complementary properties of Kinect and the inertial sensors to fuse the orientation data of these two devices for human arm motion tracking, both with a stationary shoulder joint and during whole-body movement. A new measurement model of the fusion algorithm was obtained that can compensate for the drift of the inertial sensors in highly dynamic motions as well as joint occlusion in Kinect. The efficiency of the proposed algorithm was evaluated against an optical motion tracker system. The errors were reduced by almost 50% compared to cases in which either inertial sensor or Kinect measurements alone were utilized.
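The complementary idea behind this fusion (IMU integration captures fast dynamics but drifts; Kinect is drift-free but noisy and unavailable under occlusion) can be illustrated with a scalar complementary filter. This is not the authors' unscented Kalman filter, only a sketch of the same drift-correction mechanism; the gain `k` and the degree units are assumptions.

```python
def fuse_orientation(imu_rate, kinect_angle, angle, dt, k=0.02, occluded=False):
    """One fusion step for a single joint angle (degrees).
    Integrate the IMU angular rate, then pull the estimate toward the
    Kinect measurement unless the joint is occluded."""
    angle = angle + imu_rate * dt                    # gyro integration (drifts)
    if not occluded:
        angle = angle + k * (kinect_angle - angle)   # drift-free correction
    return angle
```

With a biased rate signal the corrected estimate settles near the Kinect reference, while the uncorrected (occluded) estimate drifts without bound.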


2010 ◽  
Vol 15 (6) ◽  
pp. 462-473 ◽  
Author(s):  
Antonio I Cuesta-Vargas ◽  
Alejandro Galán-Mercant ◽  
Jonathan M Williams

Proceedings ◽  
2018 ◽  
Vol 2 (19) ◽  
pp. 1238 ◽  
Author(s):  
Irvin López-Nava ◽  
Angélica Muñoz-Meléndez

Action recognition is important for various applications, such as ambient intelligence, smart devices, and healthcare. Automatic recognition of human actions in daily living environments, mainly using wearable sensors, is still an open research problem in the field of pervasive computing. This research focuses on extracting a set of features related to human motion, in particular the motion of the upper and lower limbs, in order to recognize actions in daily living environments using time series of joint orientations. Ten actions were performed by five test subjects in their homes: cooking, doing housework, eating, grooming, mouth care, ascending stairs, descending stairs, sitting, standing, and walking. The joint angles of the right upper limb and the left lower limb were estimated using information from five wearable inertial sensors placed on the back, right upper arm, right forearm, left thigh and left leg. The set of features was used to build classifiers using three inference algorithms: Naive Bayes, K-Nearest Neighbours, and AdaBoost. The average F-measure of the three classifiers built with the proposed set of features, over the ten actions, was 0.806 (σ = 0.163).
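Feature extraction from a joint-angle time series, of the kind fed to such classifiers, can be sketched as below. The paper's exact feature set is not reproduced; these four statistics are common, assumed choices.

```python
import statistics

def motion_features(angles):
    """Basic statistical features from one joint-angle time series
    (a window of orientation samples for a single joint)."""
    return {
        "mean": statistics.fmean(angles),
        "std": statistics.pstdev(angles),                  # population std
        "range": max(angles) - min(angles),
        "mean_abs_delta": statistics.fmean(                # motion intensity
            abs(b - a) for a, b in zip(angles, angles[1:])),
    }
```

One such dictionary per joint and per window, concatenated across the five sensors, would form the feature vector for a classifier such as Naive Bayes or K-Nearest Neighbours.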


Author(s):  
Felipe Marrese Bersotti ◽  
Anderson de Oliveira ◽  
Lucas Ortega Venzel ◽  
Carlos Ande ◽  
Mário Sandro Francisco da Rocha

2020 ◽  
Author(s):  
Timo von Marcard

This thesis explores approaches to capture human motion with a small number of sensors. In the first part of the thesis, an approach is presented that reconstructs the body pose from only six inertial sensors. Instead of relying on pre-recorded motion databases, a global optimization problem is solved to maximize the consistency of measurements and model over an entire recording sequence. The second part of the thesis deals with a hybrid approach to fuse visual information from a single hand-held camera with inertial sensor data. First, a discrete optimization problem is solved to automatically associate people detections in the video with inertial sensor data. Then, a global optimization problem is formulated to combine visual and inertial information. The proposed approach enables capturing of multiple interacting people and works even if many more people are visible in the camera image. In addition, systematic inertial sensor errors can be compensated, leading to a substantial in...
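The discrete association step (matching video person detections to inertial sensor tracks) can be sketched as a minimum-cost assignment. The exhaustive search below is a simplification that assumes equal numbers of detections and tracks and a precomputed disagreement cost; the thesis's actual formulation handles the general case.

```python
from itertools import permutations

def associate(cost):
    """Associate person detections (rows) with inertial sensor tracks
    (columns) by minimizing total association cost.  cost[i][j] is an
    assumed measure of disagreement between detection i and track j.
    Exhaustive search: workable only for small numbers of people."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(best)  # best[i] = sensor track assigned to detection i
```

For larger instances the same objective is typically solved with the Hungarian algorithm rather than enumeration.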


2019 ◽  
Vol 28 (04) ◽  
pp. 1940006 ◽  
Author(s):  
Olga C. Santos

Recent trends in educational technology focus on designing systems that can support students while learning complex psychomotor skills, such as those required when practicing sports and martial arts, dancing or playing a musical instrument. In this context, artificial intelligence can be key to personalizing the development of these psychomotor skills by enabling the provision of effective feedback when the instructor is not present, or scaling up to a larger pool of students the feedback that an instructor would typically provide one-on-one. This paper presents the modeling of human motion gathered with inertial sensors, aimed at offering personalized support to students learning complex psychomotor skills. In particular, when comparing learner data with those of an expert during the psychomotor learning process, artificial intelligence algorithms can be used to: (i) recognize specific motion learning units and (ii) assess learning performance in a motion unit. However, this field still seems to be emerging: when the literature was reviewed systematically, search results rarely included artificial-intelligence-based motion modeling of complex human activities measured with inertial sensors.
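Comparing a learner's motion unit against an expert's recording is often done with dynamic time warping, since the two performances are rarely time-aligned. The paper does not prescribe a specific algorithm, so the DTW distance below is an assumed, illustrative choice for that comparison step.

```python
def dtw_distance(learner, expert):
    """Dynamic time warping distance between a learner's and an
    expert's time series for one motion unit (e.g., a joint angle).
    Smaller values mean the learner's motion better matches the
    expert's, regardless of timing differences."""
    n, m = len(learner), len(expert)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = abs(learner[i - 1] - expert[j - 1])
            d[i][j] = c + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

A threshold on this distance could then drive the personalized feedback the paper envisions when no instructor is present.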


Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6330
Author(s):  
Jack H. Geissinger ◽  
Alan T. Asbeck

In recent years, wearable sensors have become common, with possible applications in biomechanical monitoring, sports and fitness training, rehabilitation, assistive devices, and human-computer interaction. Our goal was to achieve accurate kinematics estimates using a small number of sensors. To accomplish this, we introduced a new dataset (the Virginia Tech Natural Motion Dataset) of full-body human motion capture using XSens MVN Link that contains more than 40 h of unscripted daily life motion in the open world. Using this dataset, we applied self-supervised machine learning to kinematics inference: we predicted the complete kinematics of the upper body or full body using a reduced set of sensors (3 or 4 for the upper body, 5 or 6 for the full body). We used several sequence-to-sequence (Seq2Seq) and Transformer models for motion inference. We compared the results using four different machine learning models and four different configurations of sensor placements. Our models produced mean angular errors of 10–15 degrees for both the upper body and full body, as well as worst-case errors of less than 30 degrees. The dataset and our machine learning code are freely available.
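The reported 10-15 degree mean angular error can be made concrete with a simple metric. The paper's exact error definition is not reproduced here; the version below, which wraps differences to [-180, 180) so that 359° vs. 1° counts as 2°, is an assumed plausible form.

```python
def mean_angular_error(pred, true):
    """Mean absolute angular error in degrees between predicted and
    ground-truth joint angles, with wrap-around handled so errors
    never exceed 180 degrees."""
    def wrap(a):
        return (a + 180.0) % 360.0 - 180.0
    return sum(abs(wrap(p - t)) for p, t in zip(pred, true)) / len(pred)
```

Averaging this quantity over joints and frames gives one number per model and sensor configuration, which is how such reduced-sensor inference results are typically compared.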

