Forearm Motion Tracking with Estimating Joint Angles from Inertial Sensor Signals

Author(s):  
Ji-Hwan Kim ◽  
Nguyen Duc Thang ◽  
Hyun Sang Suh ◽  
Tahir Rasheed ◽  
Tae-Seong Kim
Sensors ◽  
2019 ◽  
Vol 19 (23) ◽  
pp. 5143 ◽  
Author(s):  
Lukas Adamowicz ◽  
Reed Gurchiek ◽  
Jonathan Ferri ◽  
Anna Ursiny ◽  
Niccolo Fiorentino ◽  
...  

Wearable sensor-based algorithms for estimating joint angles have seen great improvements in recent years. While the knee joint has garnered most of the attention in this area, algorithms for estimating hip joint angles are less available. Herein, we propose and validate a novel algorithm for this purpose with innovations in sensor-to-sensor orientation and sensor-to-segment alignment. The proposed approach is robust to sensor placement and does not require specific calibration motions. The accuracy of the proposed approach is established relative to optical motion capture and compared to existing methods for estimating relative orientation, hip joint angles, and range of motion (ROM) during a task designed to exercise the full hip ROM and during fast walking, using root mean square error (RMSE) and regression analysis. The RMSE of the proposed approach was less than that of existing methods when estimating sensor orientation (12.32° and 11.82° vs. 24.61° and 23.76°) and flexion/extension joint angles (7.88° and 8.62° vs. 14.14° and 15.64°). Also, the ROM estimation error was less than 2.2° during the walking trial using the proposed method. These results suggest the proposed approach presents an improvement over existing methods and provides a promising technique for remote monitoring of hip joint angles.
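The evaluation metrics reported in this abstract can be sketched in a few lines. The following is a minimal illustration (not the authors' implementation) of the RMSE and ROM-error computation between an IMU-estimated and a reference (motion capture) joint-angle series; the sample arrays are hypothetical.

```python
import numpy as np

def rmse(estimated, reference):
    """Root mean square error between two angle time series (degrees)."""
    diff = np.asarray(estimated, dtype=float) - np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))

def rom_error(estimated, reference):
    """Absolute difference in range of motion (max - min) between two series."""
    est_rom = np.max(estimated) - np.min(estimated)
    ref_rom = np.max(reference) - np.min(reference)
    return float(abs(est_rom - ref_rom))

# Hypothetical flexion/extension traces over one gait cycle (degrees).
mocap = np.array([0.0, 10.0, 25.0, 15.0, 5.0])
imu = np.array([1.0, 12.0, 24.0, 14.0, 6.0])

print(rmse(imu, mocap))       # overall tracking error, ≈ 1.265
print(rom_error(imu, mocap))  # ROM estimation error, → 2.0
```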


2020 ◽  
Vol 7 ◽  
Author(s):  
Arne Passon ◽  
Thomas Schauer ◽  
Thomas Seel

End-effector-based robotic systems provide easy-to-set-up motion support in rehabilitation of stroke and spinal-cord-injured patients. However, measurement information is obtained only about the motion of the limb segments to which the systems are attached and not about the adjacent limb segments. We demonstrate in one particular experimental setup that this limitation can be overcome by augmenting an end-effector-based robot with a wearable inertial sensor. Most existing inertial motion tracking approaches rely on a homogeneous magnetic field and thus fail in indoor environments and near ferromagnetic materials and electronic devices. In contrast, we propose a magnetometer-free sensor fusion method. It uses a quaternion-based algorithm to track the heading of a limb segment in real time by combining the gyroscope and accelerometer readings with position measurements of one point along that segment. We apply this method to an upper-limb rehabilitation robotics use case in which the orientation and position of the forearm and elbow are known, and the orientation and position of the upper arm and shoulder are estimated by the proposed method using an inertial sensor worn on the upper arm. Experimental data from five healthy subjects who performed 282 proper executions of a typical rehabilitation motion and 163 executions with compensation motion are evaluated. Using a camera-based system as a ground truth, we demonstrate that the shoulder position and the elbow angle are tracked with median errors around 4 cm and 4°, respectively; and that undesirable compensatory shoulder movements, which were defined as shoulder displacements greater than ±10 cm for more than 20% of a motion cycle, are detected and classified 100% correctly across all 445 performed motions. The results indicate that wearable inertial sensors and end-effector-based robots can be combined to provide effective rehabilitation therapy with detailed and accurate motion tracking for performance assessment, real-time biofeedback, and feedback control of robotic and neuroprosthetic motion support.
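The compensation-motion criterion quoted above (shoulder displacement beyond ±10 cm for more than 20% of a motion cycle) reduces to a simple per-cycle rule. The sketch below assumes a hypothetical per-sample displacement trace in centimeters and is not the authors' implementation.

```python
import numpy as np

def is_compensatory(shoulder_disp_cm, threshold_cm=10.0, min_fraction=0.20):
    """Flag a motion cycle as compensatory when the shoulder displacement
    magnitude exceeds the threshold for more than min_fraction of samples."""
    disp = np.abs(np.asarray(shoulder_disp_cm, dtype=float))
    return float(np.mean(disp > threshold_cm)) > min_fraction

# Hypothetical displacement traces (cm) sampled over one motion cycle.
print(is_compensatory([1.0, 2.0, 11.0, 1.5, 0.5]))   # exactly 20% exceeded → False
print(is_compensatory([1.0, 12.0, 15.0, 11.0, 0.5])) # 60% exceeded → True
```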


Sensors ◽  
2013 ◽  
Vol 13 (5) ◽  
pp. 5614-5629 ◽  
Author(s):  
Tran Hung ◽  
Young Suh

Author(s):  
Seonhong Hwang ◽  
Chung-Ying Tsai ◽  
Alicia M. Koontz

The purpose of this study was to test the concurrent validity and test-retest reliability of the Kinect skeleton tracking algorithm for measuring trunk, shoulder, and elbow joint angles during a wheelchair transfer task. Eight wheelchair users were recruited for this study. Joint positions were recorded simultaneously by the Kinect and Vicon motion capture systems while subjects transferred from their wheelchairs to a level bench. Shoulder, elbow, and trunk angles recorded with the Kinect system followed trajectories similar to the angles recorded with the Vicon system, with correlation coefficients larger than 0.71 on both sides (leading arm and trailing arm). The root mean square errors (RMSEs) ranged from 5.18° to 22.46° for the shoulder, elbow, and trunk angles. The 95% limits of agreement (LOA) for the discrepancy between the two systems exceeded the clinically significant level of 5°. The Kinect had very good relative reliability for the measurement of sagittal, frontal, and horizontal trunk angles, as indicated by the high intraclass correlation coefficient (ICC) values (>0.90). Small standard error of measurement (SEM) values, indicating good absolute reliability, were observed for all joints except the leading arm's shoulder joint. Relatively large minimal detectable changes (MDCs) were observed in all joint angles. Kinect motion tracking has promising performance levels for some upper limb joints; however, more accurate measurement of the joint angles may be required. Therefore, understanding the limitations in the precision and accuracy of the Kinect is imperative before its utilization.
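The 95% limits of agreement used in this validation are a standard Bland-Altman computation: the mean of the paired differences plus or minus 1.96 sample standard deviations. A minimal sketch follows, using hypothetical paired angle measurements rather than the study's data.

```python
import numpy as np

def limits_of_agreement(a, b):
    """Bland-Altman 95% limits of agreement between two measurement systems:
    bias (mean difference) +/- 1.96 sample standard deviations of the differences."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # ddof=1 gives the sample standard deviation
    return bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired shoulder-angle samples (degrees): Kinect vs. Vicon.
kinect = [30.0, 45.0, 60.0, 50.0, 40.0]
vicon = [28.0, 47.0, 57.0, 52.0, 38.0]
lower, upper = limits_of_agreement(kinect, vicon)
print(lower, upper)
```

If this interval is wider than the clinically acceptable error (5° in the study), the two systems cannot be used interchangeably for that joint.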


Sensors ◽  
2019 ◽  
Vol 19 (11) ◽  
pp. 2474 ◽  
Author(s):  
Sébastien Cordillet ◽  
Nicolas Bideau ◽  
Benoit Bideau ◽  
Guillaume Nicolas

This paper presents a novel sensor-to-segment calibration procedure for inertial sensor-based knee joint kinematics analysis during cycling. This procedure was designed to be feasible in-field, autonomously, and without any external operator or device. It combines a static standing up posture and a pedaling task. The main goal of this study was to assess the accuracy of the new sensor-to-segment calibration method (denoted as the ‘cycling’ method) by calculating errors in terms of body-segment orientations and 3D knee joint angles using inertial measurement unit (IMU)-based and optoelectronic-based motion capture. To do so, 14 participants were evaluated during pedaling motion at a workload of 100 W, which enabled comparisons of the cycling method with conventional calibration methods commonly employed in gait analysis. The accuracy of the cycling method was comparable to that of other methods concerning the knee flexion/extension angle, and did not exceed 3.8°. However, the cycling method presented the smallest errors for knee internal/external rotation (6.65 ± 1.94°) and abduction/adduction (5.92 ± 2.85°). This study demonstrated that a calibration method based on the completion of a pedaling task combined with a standing posture significantly improved the accuracy of 3D knee joint angle measurement when applied to cycling analysis.
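The core of any sensor-to-segment pipeline is extracting a joint angle from the relative orientation of the two calibrated segment frames. Below is a minimal sketch (not the paper's method) that assumes both frames already share the flexion axis (here x), so knee flexion reduces to the relative rotation about that axis; the frames and angles are hypothetical.

```python
import numpy as np

def rotation_about_x(deg):
    """Rotation matrix about the x-axis (taken here as the flexion axis)."""
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, c, -s],
                     [0.0, s, c]])

def knee_flexion(thigh_R, shank_R):
    """Flexion angle (degrees) from the relative segment rotation,
    assuming both segment frames share the flexion (x) axis."""
    rel = thigh_R.T @ shank_R  # shank orientation expressed in the thigh frame
    return float(np.degrees(np.arctan2(rel[2, 1], rel[1, 1])))

thigh = rotation_about_x(10.0)
shank = rotation_about_x(55.0)
print(knee_flexion(thigh, shank))  # → 45.0 (relative flexion)
```

The calibration task in the paper exists precisely to establish these segment frames; with poorly aligned frames, crosstalk leaks flexion into the abduction/adduction and internal/external rotation channels.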


Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4946
Author(s):  
Nicolas Lemieux ◽  
Rita Noumeir

In the domain of human action recognition, existing works mainly focus on using RGB, depth, skeleton and infrared data for analysis. While these methods have the benefit of being non-invasive, they can only be used within limited setups, are prone to issues such as occlusion and often need substantial computational resources. In this work, we address human action recognition through inertial sensor signals, which have a wide range of practical applications in fields such as sports analysis and human-machine interfaces. For that purpose, we propose a new learning framework built around a 1D-CNN architecture, which we validated by achieving very competitive results on the publicly available UTD-MHAD dataset. Moreover, the proposed method provides some answers to two of the greatest challenges currently faced by action recognition algorithms, which are (1) the recognition of high-level activities and (2) the reduction of their computational cost in order to make them accessible to embedded devices. Finally, this paper also investigates the traceability of the features throughout the proposed framework, both in time and duration, as we believe it could play an important role in future works in order to make the solution more intelligible, hardware-friendly and accurate.
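The building block of the 1D-CNN framework mentioned above is a one-dimensional convolution sliding over the time axis of a multi-channel inertial signal. A minimal NumPy sketch of that operation (a single valid-mode filter with fixed weights, not the paper's trained network) is:

```python
import numpy as np

def conv1d(signal, kernel, stride=1):
    """Valid-mode 1D convolution (cross-correlation) of a signal of shape
    (time, channels) with a kernel of shape (width, channels)."""
    t, _ = signal.shape
    w, _ = kernel.shape
    out = np.empty((t - w) // stride + 1)
    for i in range(out.size):
        window = signal[i * stride : i * stride + w]
        out[i] = np.sum(window * kernel)  # one filter response per position
    return out

# A single-channel toy accelerometer trace and a first-difference kernel.
signal = np.array([[0.0], [1.0], [2.0], [3.0], [2.0], [1.0]])
kernel = np.array([[-1.0], [1.0]])
print(conv1d(signal, kernel))  # → [ 1.  1.  1. -1. -1.]
```

A real 1D-CNN stacks many such learned filters with nonlinearities and pooling; the per-position dot product above is why inference cost scales linearly with the signal length, which is what makes these models attractive for embedded devices.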


Sensors ◽  
2020 ◽  
Vol 20 (7) ◽  
pp. 2110 ◽  
Author(s):  
Long Liu ◽  
Sen Qiu ◽  
ZheLong Wang ◽  
Jie Li ◽  
JiaXin Wang

Coaches and athletes are constantly seeking novel training methodologies in an attempt to improve athletic performance. This paper proposes a method of rowing sport motion capture and analysis based on Inertial Measurement Units (IMUs). A canoeist's motion was collected by multiple miniature inertial sensor nodes. After sensor calibration, the gradient descent method was used to fuse the data and obtain the canoeist's attitude information, and the canoeist's actions were then reconstructed. Stroke quality assessment was performed based on the estimated joint angles. A machine learning algorithm was used as the classification method to divide the stroke cycle into different phases, including the propulsion phase and the recovery phase, and a quantitative kinematic analysis was carried out. Experiments conducted in this paper demonstrated that our method can reveal the similarities and differences between a novice and a coach, and that the whole process of the canoeist's motions can be analyzed with satisfactory accuracy, as validated by a videography method. The method can provide quantitative data for coaches and athletes, which can be used to improve the skills of rowers.
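The "gradient descent method" for fusing gyroscope and accelerometer data referred to above is commonly realized as a Madgwick-style filter step: integrate the gyroscope rate, then step down the gradient of the gravity-alignment error. The following is a generic single-step sketch of that idea, not the authors' code; the gain `beta` and the [w, x, y, z] quaternion convention are assumptions.

```python
import numpy as np

def quat_mult(p, q):
    """Hamilton product of two quaternions in [w, x, y, z] order."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def madgwick_update(q, gyro, accel, dt, beta=0.1):
    """One gradient-descent orientation-filter step: integrate the gyroscope
    rate (rad/s), then correct drift by stepping down the gradient of the
    accelerometer/gravity alignment error."""
    q0, q1, q2, q3 = q
    a = np.asarray(accel, dtype=float)
    norm_a = np.linalg.norm(a)
    qdot = 0.5 * quat_mult(q, np.array([0.0, *gyro]))  # rate of change from gyro
    if norm_a > 0.0:
        ax, ay, az = a / norm_a
        # Objective: gravity rotated into the sensor frame minus measured accel.
        f = np.array([
            2.0*(q1*q3 - q0*q2) - ax,
            2.0*(q0*q1 + q2*q3) - ay,
            2.0*(0.5 - q1*q1 - q2*q2) - az,
        ])
        J = np.array([
            [-2.0*q2,  2.0*q3, -2.0*q0, 2.0*q1],
            [ 2.0*q1,  2.0*q0,  2.0*q3, 2.0*q2],
            [ 0.0,    -4.0*q1, -4.0*q2, 0.0],
        ])
        grad = J.T @ f
        gn = np.linalg.norm(grad)
        if gn > 0.0:
            qdot -= beta * grad / gn  # normalized gradient step
    q = q + qdot * dt
    return q / np.linalg.norm(q)

# With gravity already aligned and zero angular rate, the estimate is unchanged.
q = madgwick_update(np.array([1.0, 0.0, 0.0, 0.0]),
                    np.zeros(3), np.array([0.0, 0.0, 1.0]), dt=0.01)
print(q)  # → [1. 0. 0. 0.]
```

In the pipeline described by the abstract, one such update runs per sensor node and sample; the resulting segment attitudes feed the joint-angle estimation and the propulsion/recovery phase classification.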

