Research on Pose Recognition Algorithm for Sports Players Based on Machine Learning of Sensor Data

2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Chunlong Zhang ◽  
Hongtao He

Existing motion recognition systems track athletes with low accuracy because their recognition algorithms handle edge detection poorly. To address this problem, a machine-vision-based gymnast pose-tracking recognition system is designed. The software component optimizes the tracking recognition algorithm: a spatiotemporal graph convolution algorithm constructs a sequence graph over the human joints, a label-subset partitioning strategy is applied, and pose tracking is completed according to changes in the information dimension. System performance tests show that, compared with the original system, the designed machine-vision-based gymnast pose-tracking recognition system improves tracking recognition accuracy and reduces convergence time.
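The core of the spatiotemporal graph convolution idea is a graph convolution over the joint skeleton applied frame by frame. A minimal sketch follows; the 5-joint skeleton, edge list, and feature sizes are invented for illustration and are not the paper's actual configuration.

```python
import numpy as np

# Hypothetical joint links of a toy 5-joint skeleton
edges = [(0, 1), (1, 2), (1, 3), (1, 4)]
J, F, T = 5, 3, 4                       # joints, features per joint, frames

# Adjacency with self-loops, symmetrically normalised (D^-1/2 A D^-1/2)
A = np.eye(J)
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
d = A.sum(axis=1)
A_norm = A / np.sqrt(np.outer(d, d))

rng = np.random.default_rng(0)
X = rng.standard_normal((T, J, F))      # joint features per frame
W = rng.standard_normal((F, 8))         # learnable spatial weights

# Spatial graph convolution applied frame by frame: H_t = A_norm @ X_t @ W
H = np.einsum("ij,tjf,fk->tik", A_norm, X, W)
print(H.shape)                          # one 8-dim vector per joint per frame
```

Stacking such spatial layers with temporal convolutions over the frame axis yields the full spatiotemporal model.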

2021 ◽  
Vol 11 (21) ◽  
pp. 10235
Author(s):  
Heonmoo Kim ◽  
Yosoon Choi

In this study, an autonomous driving robot that drives and returns along a planned route in an underground mine tunnel was developed using a machine-vision-based road sign recognition algorithm. The robot recognizes road signs at tunnel intersections using a geometric matching algorithm from machine vision, and the autonomous driving mode is switched according to the shape of the road sign so that the robot follows the planned route. The autonomous driving mode recognizes the shape of the tunnel using distance data from a LiDAR sensor and drives while maintaining a fixed distance from the centerline or from one wall of the tunnel. The machine-vision-based road sign recognition system and the autonomous driving robot were tested in a field experiment in an underground mine. The results reveal that all road signs were accurately recognized, with an average matching score of 979.14 out of 1000, confirming stable driving along the planned route.
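The two driving behaviours described, keeping a fixed distance from one wall and keeping to the centerline between both walls, can each be sketched as a simple proportional steering rule on the LiDAR distances. The gains and distances below are invented for illustration; they are not from the paper.

```python
def wall_follow_steer(d_wall, target=1.0, kp=0.8):
    """Positive output steers away from the wall, negative steers toward it."""
    return kp * (target - d_wall)

def centerline_steer(d_left, d_right, kp=0.5):
    """Zero when equidistant from both walls; the sign picks the turn direction."""
    return kp * (d_right - d_left)

print(wall_follow_steer(0.75))      # too close to the wall: steer away
print(centerline_steer(2.0, 1.5))   # drifted toward the right wall: steer left
```

A real controller would add a derivative term and clamp the output, but the error terms are the same.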


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 405
Author(s):  
Marcos Lupión ◽  
Javier Medina-Quero ◽  
Juan F. Sanjuan ◽  
Pilar M. Ortigosa

Activity Recognition (AR) is an active research topic focused on detecting human actions and behaviours in smart environments. In this work, we present the on-line activity recognition platform DOLARS (Distributed On-line Activity Recognition System), in which data from heterogeneous sensors, including binary, wearable and location sensors, are evaluated in real time. Different descriptors and metrics from the heterogeneous sensor data are integrated into a common feature vector, extracted by a sliding-window approach under real-time conditions. DOLARS provides a distributed architecture in which (i) the stages for processing AR data are deployed on distributed nodes; (ii) temporal cache modules compute metrics that aggregate sensor data for efficient feature-vector computation; (iii) publish-subscribe models both spread data from sensors and orchestrate the nodes (communication and replication) for computing AR; and (iv) machine learning algorithms classify and recognize the activities. A successful case study on recognizing daily activities, developed in the Smart Lab of the University of Almería (UAL), is presented. The results show encouraging performance in recognizing sequences of activities and demonstrate the need for distributed architectures to achieve real-time recognition.
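The sliding-window feature extraction step can be sketched in a few lines. The window length, step, and the particular descriptors (mean, standard deviation, last value) below are illustrative stand-ins, not the descriptors DOLARS actually computes.

```python
import statistics

def sliding_features(samples, window=5, step=2):
    """Slide a fixed-length window over a sensor stream and emit one
    feature tuple (mean, stdev, last value) per window position."""
    feats = []
    for start in range(0, len(samples) - window + 1, step):
        w = samples[start:start + window]
        feats.append((statistics.fmean(w), statistics.stdev(w), w[-1]))
    return feats

readings = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
print(sliding_features(readings))   # two overlapping windows of length 5
```

In a real-time setting the same computation runs over a bounded buffer (e.g. `collections.deque`) instead of a list slice.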


2014 ◽  
Vol 687-691 ◽  
pp. 3861-3868
Author(s):  
Zheng Hong Deng ◽  
Li Tao Jiao ◽  
Li Yan Liu ◽  
Shan Shan Zhao

Following the trend toward intelligent monitoring systems, and building on a study of gait recognition algorithms, an intelligent monitoring system based on FPGA and DSP is designed. On the one hand, the FPGA's flexibility and fast parallel processing are exploited at design time, avoiding circuits that cannot be modified once built; on the other hand, the DSP's strength in digital signal processing is fully utilized. For feature extraction and recognition, Zernike moments are selected, and the system uses the mature nearest-neighbor classification method, which offers good real-time performance. Experiments show that the system achieves a high recognition rate.
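The nearest-neighbor classification stage on top of the extracted gait features is straightforward to sketch. The feature vectors and labels below stand in for Zernike moment descriptors and are purely illustrative.

```python
import math

def nearest_neighbor(query, gallery):
    """gallery: list of (feature_vector, label) pairs.
    Returns the label of the gallery vector closest to the query
    in Euclidean distance."""
    return min(gallery, key=lambda item: math.dist(query, item[0]))[1]

# Toy two-feature gallery standing in for Zernike moment vectors
gallery = [((0.1, 0.9), "subject_A"), ((0.8, 0.2), "subject_B")]
print(nearest_neighbor((0.75, 0.3), gallery))
```

Because the method needs no training beyond storing the gallery, it maps well onto the fixed-function FPGA/DSP pipeline the abstract describes.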


Author(s):  
Mohamed H Abdelhafiz ◽  
Mohammed I Awad ◽  
Ahmed Sadek ◽  
Farid Tolbah

This paper describes the development of a human gait activity recognition system. A multi-sensor recognition system developed for this purpose was reduced to a single-sensor recognition system. A sensor election method was devised, based on the maximum relevance minimum redundancy feature selector, to determine the sensor's optimum position for activity recognition. The election method showed that the thigh contributes most to recognizing walking, stair ascending and descending, and ramp ascending and descending activities. A recognition algorithm, which depends mainly on features classified by a random forest and selected by a combined feature selector using maximum relevance minimum redundancy and a genetic algorithm, was modified to compensate for the degradation in prediction accuracy caused by reducing the number of sensors. The first modification was implementing a double-layer classifier to discriminate between interfering activities. The second was adding physical features to the feature dictionary. These modifications succeeded in improving the prediction accuracy, allowing the single-sensor recognition system to behave in the same manner as the multi-sensor activity recognition system.
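The greedy maximum-relevance minimum-redundancy (mRMR) selection underlying the sensor election can be sketched as follows. Here absolute Pearson correlation is used as a proxy for both relevance and redundancy, and the synthetic data (a duplicated feature plus one complementary feature) is invented to make the behaviour visible; the paper's actual scoring may differ.

```python
import numpy as np

def mrmr_select(X, y, k):
    """Greedily pick k columns of X: maximize relevance to y while
    penalizing mean redundancy with already-chosen columns."""
    n = X.shape[1]
    rel = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n)])
    chosen = [int(np.argmax(rel))]
    while len(chosen) < k:
        scores = {}
        for j in range(n):
            if j in chosen:
                continue
            red = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                           for s in chosen])
            scores[j] = rel[j] - red
        chosen.append(max(scores, key=scores.get))
    return chosen

rng = np.random.default_rng(1)
f0 = rng.standard_normal(200)
f2 = rng.standard_normal(200)
X = np.column_stack([f0, f0, f2])   # feature 1 duplicates feature 0
y = f0 + 0.5 * f2
print(mrmr_select(X, y, 2))         # skips the redundant duplicate
```

For sensor election, each candidate sensor position would contribute a block of columns, and the score of its best features decides the winning position.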


Informatics ◽  
2018 ◽  
Vol 5 (3) ◽  
pp. 38 ◽  
Author(s):  
Martin Jänicke ◽  
Bernhard Sick ◽  
Sven Tomforde

Personal wearables such as smartphones or smartwatches are increasingly utilized in everyday life. Frequently, activity recognition is performed on these devices to estimate the current user status and trigger automated actions according to the user's needs. In this article, we focus on the creation of a self-adaptive activity recognition system, based on inertial measurement unit (IMU) data, that incorporates new sensors at runtime. Starting with a classifier based on Gaussian mixture models (GMMs), the density model is adapted to new sensor data fully autonomously by exploiting the marginalization property of normal distributions. To derive a classifier from the adapted model, label inference is performed, based either on the initial classifier or on the training data. For evaluation, we used more than 10 h of annotated activity data from the publicly available PAMAP2 benchmark dataset. Using these data, we showed the feasibility of our approach and performed 9720 experiments to obtain reliable numbers. One approach performed reasonably well, improving the system on average with an increase in F-score of 0.0053, while the other showed clear drawbacks due to a high loss of information during label inference. Furthermore, a comparison with state-of-the-art techniques shows the necessity for further experiments in this area.
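The marginalization property the adaptation relies on is simple: restricting a multivariate normal to a subset of dimensions just selects the matching sub-mean and sub-covariance. A minimal sketch, with illustrative sizes and values:

```python
import numpy as np

def marginalise(mean, cov, keep):
    """Marginal distribution of a multivariate normal over the kept
    dimensions: slice the mean and take the matching covariance block."""
    keep = np.asarray(keep)
    return mean[keep], cov[np.ix_(keep, keep)]

mean = np.array([0.0, 1.0, 2.0])
cov = np.array([[1.0, 0.2, 0.0],
                [0.2, 2.0, 0.3],
                [0.0, 0.3, 3.0]])

# Keep dimensions 0 and 2, e.g. the sensors present before runtime
m, c = marginalise(mean, cov, [0, 2])
print(m)
print(c)
```

Applied per mixture component, this lets an existing GMM score data from a reduced (or, inverted, an extended) sensor set without retraining from scratch.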


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 4029 ◽  
Author(s):  
Jiaxuan Wu ◽  
Yunfei Feng ◽  
Peng Sun

Activity of daily living (ADL) is a significant predictor of an individual's independence and functional capabilities. Measurements of ADLs help to indicate one's health status and capacity for quality living. Currently, the most common ways to capture ADL data are far from automated: costly 24/7 observation by a designated caregiver, laborious self-reporting by the user, or filling out a written ADL survey. Fortunately, in the Internet of Things (IoT) era, ubiquitous sensors exist in our surroundings and on electronic devices. We propose the ADL Recognition System, which utilizes sensor data from a single point of contact, such as a smartphone, and performs time-series sensor fusion. Raw data are collected by the ADL Recorder App running constantly on the user's smartphone with its multiple embedded sensors, including the microphone, Wi-Fi scan module, heading orientation, light proximity, step detector, accelerometer, gyroscope, and magnetometer. Key technologies in this research cover audio processing, Wi-Fi indoor positioning, proximity-sensing localization, and time-series sensor data fusion. By merging the information from multiple sensors with a time-series error correction technique, the ADL Recognition System is able to accurately profile a person's ADLs and discover their life patterns. This paper is particularly concerned with care for older adults who live independently.
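The basic alignment step behind time-series sensor fusion is pairing readings from different streams by timestamp. A minimal nearest-timestamp sketch follows; the stream contents and tolerance are invented for illustration and do not reflect the system's actual fusion logic.

```python
def fuse_nearest(stream_a, stream_b, tol=0.5):
    """Pair each (timestamp, value) in stream_a with the closest-in-time
    reading from stream_b, dropping pairs further apart than tol seconds."""
    fused = []
    for t, va in stream_a:
        tb, vb = min(stream_b, key=lambda s: abs(s[0] - t))
        if abs(tb - t) <= tol:
            fused.append((t, va, vb))
    return fused

# Toy streams: accelerometer-derived states and microphone-derived labels
accel = [(0.0, "still"), (1.0, "walking"), (2.5, "walking")]
audio = [(0.1, "quiet"), (1.2, "footsteps")]
print(fuse_nearest(accel, audio))   # the 2.5 s reading has no audio match
```

Error correction on top of this would then reconcile conflicting labels across the fused tuples over time.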


2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Saad Albawi ◽  
Oguz Bayat ◽  
Saad Al-Azawi ◽  
Osman N. Ucan

Recently, social touch gesture recognition has been considered an important topic for the touch modality, which can lead to highly efficient and realistic human-robot interaction. In this paper, a deep convolutional neural network is selected to implement a social touch recognition system from raw input samples (sensor data) only. Touch gesture recognition is performed on a dataset previously recorded from numerous subjects performing varying social gestures on a mannequin arm, dubbed the Corpus of Social Touch. A leave-one-subject-out cross-validation method is used to evaluate system performance. The proposed method can recognize gestures in nearly real time after acquiring a minimum number of frames (on average, between 0.2% and 4.19% of the original frame lengths) with a classification accuracy of 63.7%. The achieved classification accuracy is competitive with existing algorithms. Furthermore, the proposed system outperforms other classification algorithms in classification ratio and touch recognition time on the same dataset, without data preprocessing.
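Leave-one-subject-out cross-validation, the evaluation protocol used here, holds out all samples of one subject per fold so the classifier is always tested on an unseen person. A sketch, with illustrative sample tuples:

```python
def leave_one_subject_out(samples):
    """samples: iterable of (subject_id, features, label) tuples.
    Yields one (held_out_subject, train_set, test_set) split per subject."""
    subjects = sorted({s for s, _, _ in samples})
    for held_out in subjects:
        train = [x for x in samples if x[0] != held_out]
        test = [x for x in samples if x[0] == held_out]
        yield held_out, train, test

# Toy gesture samples from two subjects
data = [("s1", [0.1], "grab"), ("s2", [0.4], "pat"), ("s1", [0.2], "pat")]
for subject, train, test in leave_one_subject_out(data):
    print(subject, len(train), len(test))
```

Averaging accuracy over the folds gives the per-subject-generalization figure the abstract reports.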

