Workers' Routine Activity Recognition using Body Movements and Location Information

Author(s):  
Futoshi Naya ◽  
Ren Ohmura ◽  
Fusako Takayanagi ◽  
Haruo Noma ◽  
Kiyoshi Kogure


2018 ◽  
pp. 2102-2123
Author(s):  
Anastasios Doulamis ◽  
Athanasios Voulodimos ◽  
Theodora Varvarigou

Automatic recognition of human actions from video signals is arguably one of the most salient research topics in computer vision, with tremendous impact on many applications. In this chapter, the authors introduce a new descriptor, the Human Constrained Pixel Change History (HC-PCH), which is based on PCH but focuses on human body movements over time. They propose a modification of the conventional PCH that entails the calculation of two probabilistic maps based on human face and body detection, respectively. These HC-PCH features are used as input to an HMM-based classification framework, which exploits redundant information from multiple streams through sophisticated fusion methods, resulting in enhanced activity recognition rates.
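The HC-PCH descriptor and the chapter's fusion pipeline are specific to that work, but the final step — scoring a feature sequence under per-activity HMMs and picking the best — can be sketched generically. Below is a minimal scaled forward algorithm for a discrete-observation HMM in Python; all model parameters are invented for illustration and are not from the chapter.

```python
import numpy as np

def hmm_log_likelihood(obs, pi, A, B):
    """Log-likelihood of an observation sequence under a discrete HMM,
    computed with the scaled forward algorithm.
    pi: initial state probabilities, shape (S,)
    A:  state transition matrix, shape (S, S)
    B:  emission probabilities, shape (S, num_symbols)
    """
    alpha = pi * B[:, obs[0]]              # forward variable at t = 0
    c = alpha.sum()                        # scaling factor (avoids underflow)
    log_p, alpha = np.log(c), alpha / c
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]      # propagate one step, then emit
        c = alpha.sum()
        log_p += np.log(c)
        alpha /= c
    return log_p

# Toy classification: score a sequence under two one-state HMMs
# ("activity A" mostly emits symbol 0, "activity B" mostly emits 1)
# and choose the activity whose model gives the higher likelihood.
pi = np.array([1.0])
A1 = np.array([[1.0]])
B_act_a = np.array([[0.9, 0.1]])
B_act_b = np.array([[0.1, 0.9]])
seq = [0, 0, 0]
best = max(("A", "B"),
           key=lambda name: hmm_log_likelihood(
               seq, pi, A1, B_act_a if name == "A" else B_act_b))
```

With the sequence `[0, 0, 0]`, the model biased toward symbol 0 scores higher, so activity "A" is selected.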


2018 ◽  
pp. 277-296
Author(s):  
Chun Zhu ◽  
Weihua Sheng

In this chapter, the authors propose an approach to indoor human daily activity recognition that combines motion data and location information. One inertial sensor is worn on the thigh of a human subject to provide motion data, while a motion capture system records the human location information. Such a combination has the advantage of significantly reducing the obtrusiveness to the human subject at a moderate cost of vision processing, while maintaining a high accuracy of recognition. The approach has two phases. First, a two-step algorithm is proposed to recognize the activity based on motion data only. In the coarse-grained classification, two neural networks are used to classify the basic activities. In the fine-grained classification, the sequence of activities is modeled by a Hidden Markov Model (HMM) to capture the sequential constraints. A modified short-time Viterbi algorithm is used for real-time daily activity recognition. Second, to fuse the motion data with the location information, Bayes' theorem is used to refine the activities recognized from the motion data. The authors conduct experiments in a mock apartment, and the results demonstrate the effectiveness and accuracy of the algorithms.
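The Bayes-theorem fusion step described above can be illustrated generically: treat the motion-based classifier's output as a prior over activities and the location reading as evidence, then renormalize. The activity labels, rooms, and likelihood table below are invented for the example and are not the chapter's actual data.

```python
import numpy as np

# Prior over activities from a hypothetical motion-data classifier.
activities = ["cooking", "watching_tv", "sleeping"]
p_activity_motion = np.array([0.5, 0.3, 0.2])

# P(location | activity): rows are activities, columns are rooms
# (kitchen, living_room, bedroom). Values are made up for illustration.
p_loc_given_act = np.array([
    [0.80, 0.15, 0.05],   # cooking happens mostly in the kitchen
    [0.10, 0.80, 0.10],   # TV watching mostly in the living room
    [0.05, 0.15, 0.80],   # sleeping mostly in the bedroom
])

def refine_with_location(prior, likelihood_col):
    """Bayes' rule: posterior ∝ P(location | activity) · P(activity)."""
    unnorm = likelihood_col * prior
    return unnorm / unnorm.sum()

# Observed location: kitchen (column 0).
posterior = refine_with_location(p_activity_motion, p_loc_given_act[:, 0])
refined = activities[int(posterior.argmax())]
```

Seeing the subject in the kitchen sharpens the motion-based prior toward "cooking", which is the intent of the fusion step: location evidence disambiguates activities that look similar in inertial data alone.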


2021 ◽  
Vol 12 (1) ◽  
pp. 40
Author(s):  
Ali Arshad ◽  
Saman Cheema ◽  
Umair Ahsan

In recent years, activity recognition and object tracking have received extensive attention due to the increasing demand for adaptable surveillance systems. Activity recognition is guided by parameters such as the shape, size, and color of the object. This article examines the performance of existing color-based object detection and tracking algorithms using thermal/visual camera-based video streaming in MATLAB. A framework is developed to detect and track red moving objects in real time. Detection is carried out based on the location information acquired from an adaptive image-processing algorithm. Coordinate extraction is followed by tracking and locking onto the object with the help of a laser barrel, whose movement is controlled by an 8051 microcontroller. Location information is communicated serially from the image-processing algorithm to the microcontroller. During implementation, a single static camera providing 30 frames per second is used. Each frame requires 88 ms to complete the three steps of detection, tracking, and locking, so a processing speed of roughly 12 frames per second is achieved. This frame-by-frame processing makes the setup adaptive to the environment despite the presence of a single static camera. The setup can handle multiple objects with shades of red and has demonstrated equally good results in varying outdoor conditions. Currently, the setup can lock onto only a single target, but the capacity of the system can be increased by installing multiple cameras and laser barrels.
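The article's detection stage runs in MATLAB, but the underlying idea — threshold the color channels to isolate red pixels, then extract a coordinate to hand to the tracker — can be sketched in a language-neutral way. The Python/NumPy version below uses invented threshold values, not the article's, and returns the centroid of the red mask as the location estimate.

```python
import numpy as np

def detect_red_centroid(frame, r_min=150, dominance=1.5):
    """Locate a red object in an RGB frame (H, W, 3, uint8) by color
    thresholding. Returns the (row, col) centroid of the red pixels,
    or None if no red pixel is found. Thresholds are illustrative."""
    r = frame[:, :, 0].astype(float)
    g = frame[:, :, 1].astype(float)
    b = frame[:, :, 2].astype(float)
    # A pixel counts as red when its R channel is strong and clearly
    # dominates both other channels; this also admits shades of red.
    mask = (r >= r_min) & (r > dominance * g) & (r > dominance * b)
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

# Synthetic test frame: a red square on a black background.
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[20:30, 40:50, 0] = 200          # R channel only, rows 20-29, cols 40-49
centroid = detect_red_centroid(frame)
```

In a full pipeline the returned coordinates would be smoothed across frames and sent to the actuator (the article transmits them serially to the 8051 controlling the laser barrel).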


2013 ◽  
Author(s):  
S. Ruiz Fernandez ◽  
J. Rahona ◽  
B. Rolke ◽  
G. Hervas ◽  
C. Vazquez

2020 ◽  
Vol 20 (3) ◽  
pp. 13-20
Author(s):  
Jinsoo Kim ◽  
Hyukjin Kwon ◽  
Dongkyoo Shin ◽  
Sunghoon Hong
