DeepMoCap: Deep Optical Motion Capture Using Multiple Depth Sensors and Retro-Reflectors

Sensors ◽  
2019 ◽  
Vol 19 (2) ◽  
pp. 282 ◽  
Author(s):  
Anargyros Chatzitofis ◽  
Dimitrios Zarpalas ◽  
Stefanos Kollias ◽  
Petros Daras

In this paper, a marker-based, single-person optical motion capture method (DeepMoCap) is proposed using multiple spatio-temporally aligned infrared-depth sensors and retro-reflective straps and patches (reflectors). DeepMoCap explores motion capture by automatically localizing and labeling reflectors on depth images and, subsequently, in 3D space. Introducing a non-parametric representation to encode the temporal correlation among pairs of colorized depthmaps and 3D optical flow frames, a multi-stage Fully Convolutional Network (FCN) architecture is proposed to jointly learn reflector locations and their temporal dependency among sequential frames. The extracted 2D reflector locations are spatially mapped into 3D space, resulting in robust 3D optical data extraction. The subject’s motion is efficiently captured by applying a template-based fitting technique to the extracted optical data. Two datasets have been created and made publicly available for evaluation purposes: one comprising multi-view depth and 3D optical flow annotated images (DMC2.5D), and a second consisting of spatio-temporally aligned multi-view depth images along with skeleton, inertial and ground truth MoCap data (DMC3D). The FCN model outperforms its competitors on the DMC2.5D dataset using the 2D Percentage of Correct Keypoints (PCK) metric, while the motion capture outcome is evaluated against RGB-D and inertial data fusion approaches on DMC3D, outperforming the next best method by 4.5% in total 3D PCK accuracy.
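As a rough illustration of the evaluation measure, the Percentage of Correct Keypoints counts a prediction as correct when it falls within a distance threshold of the ground truth. A minimal sketch (the threshold convention and data here are illustrative, not the paper's):

```python
import numpy as np

def pck(pred, gt, threshold):
    """Percentage of Correct Keypoints: a predicted keypoint counts as
    correct when its distance to the ground truth is below the threshold.

    pred, gt: (N, K, D) arrays of N frames, K keypoints, D dims (2 or 3).
    threshold: scalar distance, e.g. a fraction of subject size.
    """
    dists = np.linalg.norm(pred - gt, axis=-1)   # (N, K) per-keypoint errors
    return float((dists < threshold).mean())

# toy example: two frames, two 2D keypoints, all off by 0.05
gt = np.zeros((2, 2, 2))
pred = gt + np.array([0.05, 0.0])
print(pck(pred, gt, threshold=0.1))    # 1.0
print(pck(pred, gt, threshold=0.01))   # 0.0
```

The same formula applies in 2D (DMC2.5D) and 3D (DMC3D); only the dimensionality of the keypoints changes.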

2018 ◽  
Vol 12 (3) ◽  
pp. 129-140 ◽  
Author(s):  
Ernesto Morales ◽  
Stéphanie Gamache ◽  
François Routhier ◽  
Jacqueline Rousseau ◽  
Olivier Doyle

Purpose – The purpose of this paper is to describe a methodology to measure the circulation area required by a manual or powered wheelchair within a toilet stall and to present the range of possible results that can be collected when used in an experimental bathroom setup.
Design/methodology/approach – A bathroom environment containing a toilet, grab bars and two transparent acrylic panels suspended on rails to simulate walls was built. Three setups were tested: 1,500 mm from the walls, 1,500 mm diagonally from the toilet and 1,700 mm from the walls. For each participant, markers were placed on the back and on the rear of the wheelchair, and one on the toes of the participant. The Vicon® optical motion capture system was used to register the markers’ positions in 3D space.
Findings – The methodology proved to be relatively easy to install, efficient and easy to interpret in terms of results. It provides specific points from which it is possible to measure the trajectories of markers and calculate the polygonal projection of the area covered by each participant. The results showed that manual and powered wheelchair users required, respectively, 100 and 300 mm more than the minimum 1,500 mm wall-to-wall area to complete a rotation task in front of the toilet.
Originality/value – These results show that the 1,500 mm gyration area proposed in the Canadian Code of Construction is not sufficient for manual and powered wheelchair users to circulate easily in toilet stalls. The methodology can provide evidence to support the improvement of construction norms in terms of accessible circulation areas.
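The "polygonal projection of the area covered" can be approximated by taking the convex hull of the marker trajectory projected onto the floor plane and computing its area. A minimal sketch with made-up coordinates (the abstract does not specify the actual processing pipeline):

```python
def convex_hull(points):
    """Andrew's monotone chain; points: iterable of (x, y) tuples."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def polygon_area(poly):
    """Shoelace formula for a simple polygon given as ordered vertices."""
    return 0.5 * abs(sum(x1*y2 - x2*y1
                         for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1])))

# hypothetical floor-plane trajectory: a unit square plus an interior point
trajectory = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]
print(polygon_area(convex_hull(trajectory)))   # 1.0
```

With real data, the trajectory would be the 2D projection of each marker's recorded positions, and the resulting area (in mm²) could be compared across the three stall setups.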


2021 ◽  
Vol 13 (8) ◽  
pp. 194
Author(s):  
Ibsa K. Jalata ◽  
Thanh-Dat Truong ◽  
Jessica L. Allen ◽  
Han-Seok Seo ◽  
Khoa Luu

Using optical motion capture and wearable sensors is a common way to analyze impaired movement in individuals with neurological and musculoskeletal disorders. However, these systems are expensive and often require highly trained professionals to identify specific impairments. In this work, we propose a graph convolutional neural network that mimics the intuition of physical therapists to identify patient-specific impairments based on video of a patient. In addition, two modeling approaches are compared: a graph convolutional network applied solely to skeleton input data, and a graph convolutional network accompanied by a 1-dimensional convolutional neural network (1D-CNN). Experiments on the dataset showed that the proposed method not only improves the correlation of the predicted gait measures with the ground truth values (speed = 0.791, gait deviation index (GDI) = 0.792) but also enables faster training with fewer parameters. In conclusion, the proposed method shows the possibility of using video-based data, with acceptable accuracy, to support the treatment of neurological and musculoskeletal disorders instead of depending on expensive and labor-intensive optical motion capture systems.
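The reported correlations (speed = 0.791, GDI = 0.792) are presumably Pearson correlations between predicted and ground-truth gait measures. A minimal sketch with hypothetical values:

```python
import numpy as np

def pearson_r(pred, gt):
    """Pearson correlation coefficient between two equal-length samples."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    pc, gc = pred - pred.mean(), gt - gt.mean()
    return float((pc * gc).sum() / np.sqrt((pc**2).sum() * (gc**2).sum()))

# hypothetical walking speeds (m/s): ground truth vs. model predictions
gt_speed   = [1.0, 1.2, 0.8, 1.4, 0.9]
pred_speed = [1.1, 1.15, 0.85, 1.3, 0.95]
print(round(pearson_r(pred_speed, gt_speed), 3))
```

A correlation near 1 means the video-based predictions track the reference measure closely, even if they carry a constant bias.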


Sensors ◽  
2021 ◽  
Vol 21 (18) ◽  
pp. 6115
Author(s):  
Przemysław Skurowski ◽  
Magdalena Pawlyta

Optical motion capture is a mature contemporary technique for the acquisition of motion data; alas, it is not error-free. Due to technical limitations and occlusions of markers, gaps may occur in such recordings. The article reviews various neural network architectures applied to the gap-filling problem in motion capture sequences within the FBM framework, which provides a representation of the body's kinematic structure. The results are compared with interpolation and matrix completion methods. We found that, for longer sequences, simple linear feedforward neural networks can outperform the other, more sophisticated architectures, although these outcomes might be affected by the small amount of data available for training. We were also able to identify that the acceleration and monotonicity of the input sequence are the parameters that have a notable impact on the obtained results.
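The interpolation baseline that the neural architectures are compared against can be as simple as linear interpolation across each gap. A minimal sketch for a single marker coordinate (the FBM framework itself is more involved):

```python
import numpy as np

def fill_gaps_linear(signal):
    """Fill NaN gaps in a 1-D marker coordinate track by linear
    interpolation between the nearest valid samples on either side."""
    signal = np.asarray(signal, float)
    valid = ~np.isnan(signal)
    idx = np.arange(len(signal))
    return np.interp(idx, idx[valid], signal[valid])

# one coordinate of a marker, occluded for two frames
track = [0.0, 1.0, np.nan, np.nan, 4.0, 5.0]
print(fill_gaps_linear(track))   # [0. 1. 2. 3. 4. 5.]
```

Per-coordinate interpolation ignores the body's kinematic constraints, which is exactly what the reviewed neural and matrix-completion approaches try to exploit.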


2020 ◽  
Vol 26 ◽  
pp. 00061
Author(s):  
Elina Makarova ◽  
Vladislav Dubatovkin ◽  
Nataliya Berezinskaya ◽  
Lyudmila Barkhatova ◽  
Elena Oleynik

The research focuses on studying the possibility of effectively using the dart grip system and the work of the athlete’s hand to prepare the darts player for competitions using the MOSAR complex. The experiment uses optical motion capture systems, a set of video cameras, LED parameter sensors, and devices that record the movement of body parts and the dart. This method of training and controlling dart throwing can serve as educational and visual material for training future athletes. In the near future, the use of such motion capture systems may become one of the main aspects of training, for both beginners and professionals, in many sports.


1999 ◽  
Vol 8 (2) ◽  
pp. 187-203 ◽  
Author(s):  
Tom Molet ◽  
Ronan Boulic ◽  
Daniel Thalmann

Motion-capture techniques are rarely based on orientation measurements, for two main reasons: (1) optical motion-capture systems are designed for tracking object positions rather than their orientations (which can be deduced from several trackers), and (2) known animation techniques, like inverse kinematics or geometric algorithms, require position targets constantly but orientation inputs only occasionally. We propose a complete human motion-capture technique based essentially on orientation measurements. The position measurement is used only for recovering the global position of the performer. This method allows fast tracking of human gestures for interactive applications as well as high-rate recording. Several motion-capture optimizations, including the multijoint technique, improve posture realism. This work is well suited for magnetic-based systems, which rely more on orientation registration (in our environment) than on position measurements, which necessitate difficult system calibration.
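The idea of driving a skeleton from orientation measurements, with position used only for the global root, can be illustrated by a planar forward-kinematics toy (segment lengths and angles here are hypothetical; the paper works with full 3D orientations from magnetic trackers):

```python
import math

def chain_positions(root, angles, lengths):
    """Recover joint positions of a planar kinematic chain from measured
    segment orientations (absolute angles, in radians) plus a single
    root position. Each joint is reached by walking one segment from
    the previous joint in that segment's measured direction."""
    x, y = root
    positions = [(x, y)]
    for a, l in zip(angles, lengths):
        x += l * math.cos(a)
        y += l * math.sin(a)
        positions.append((x, y))
    return positions

# root at origin; two unit segments, pointing +x then +y
print(chain_positions((0.0, 0.0), [0.0, math.pi / 2], [1.0, 1.0]))
```

Only the root needs a position fix: every other joint is determined by orientations and the (fixed) segment lengths, which is why orientation-based capture avoids per-marker position tracking.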


Diagnostics ◽  
2020 ◽  
Vol 10 (6) ◽  
pp. 426
Author(s):  
I. Concepción Aranda-Valera ◽  
Antonio Cuesta-Vargas ◽  
Juan L. Garrido-Castro ◽  
Philip V. Gardiner ◽  
Clementina López-Medina ◽  
...  

Portable inertial measurement units (IMUs) are beginning to be used in human motion analysis. These devices can be useful for the evaluation of spinal mobility in individuals with axial spondyloarthritis (axSpA). The objectives of this study were to assess (a) concurrent criterion validity in individuals with axSpA by comparing spinal mobility measured by an IMU sensor-based system vs. optical motion capture as the reference standard; (b) discriminant validity comparing mobility with healthy volunteers; (c) construct validity by comparing mobility results with relevant outcome measures. A total of 70 participants with axSpA and 20 healthy controls were included. Individuals with axSpA completed function and activity questionnaires, and their mobility was measured using conventional metrology for axSpA, an optical motion capture system, and an IMU sensor-based system. The UCOASMI, a metrology index based on measures obtained by motion capture, and the IUCOASMI, the same index using IMU measures, were also calculated. Descriptive and inferential analyses were conducted to show the relationships between outcome measures. There was excellent agreement (ICC > 0.90) between both systems and a significant correlation between the IUCOASMI and conventional metrology (r = 0.91), activity (r = 0.40), function (r = 0.62), quality of life (r = 0.55) and structural change (r = 0.76). This study demonstrates the validity of an IMU system to evaluate spinal mobility in axSpA. These systems are more feasible than optical motion capture systems, and they could be useful in clinical practice.
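The reported agreement between the IMU and optical systems (ICC > 0.90) is an intraclass correlation coefficient. A sketch of one common variant, ICC(2,1), with hypothetical scores (the abstract does not state which ICC form was used):

```python
import numpy as np

def icc2_1(Y):
    """ICC(2,1): two-way random effects, absolute agreement, single
    measurement. Y is an (n subjects x k systems) matrix of scores."""
    Y = np.asarray(Y, float)
    n, k = Y.shape
    m = Y.mean()
    ri, cj = Y.mean(axis=1), Y.mean(axis=0)          # row/column means
    msr = k * ((ri - m) ** 2).sum() / (n - 1)        # between-subjects
    msc = n * ((cj - m) ** 2).sum() / (k - 1)        # between-systems
    mse = ((Y - ri[:, None] - cj[None, :] + m) ** 2).sum() / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# hypothetical spinal-mobility scores: optical reference vs. IMU system
optical = np.array([30.0, 42.0, 35.0, 50.0, 28.0])
imu     = optical + np.array([0.5, -0.4, 0.3, -0.2, 0.1])  # near-identical
print(round(icc2_1(np.column_stack([optical, imu])), 3))
```

Unlike Pearson's r, this absolute-agreement form also penalizes a systematic offset between the two systems, which matters when one is meant to substitute for the other.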

