WiFi-Based Driver’s Activity Monitoring with Efficient Computation of Radio-Image Features

Sensors ◽  
2020 ◽  
Vol 20 (5) ◽  
pp. 1381 ◽  
Author(s):  
Zain Ul Abiden Akhtar ◽  
Hongyu Wang

Driver distraction and fatigue are among the leading contributing factors in many fatal accidents, so driver activity monitoring can effectively reduce the number of roadway accidents. Besides traditional methods that rely on cameras or wearable devices, wireless technology for monitoring a driver’s activity has attracted remarkable attention. With substantial progress in WiFi-based device-free localization and activity recognition, radio-image features have achieved better recognition performance by leveraging the strengths of image descriptors. The major drawback of image features is their computational complexity, which increases exponentially with the growth of irrelevant information in an image. How to choose appropriate radio-image features to alleviate this expensive computational burden remains unresolved. This paper explores a computationally efficient wireless technique that can recognize the attentive and inattentive status of a driver by leveraging the Channel State Information (CSI) of WiFi signals. We demonstrate an efficient scheme to extract representative features from the discriminant components of radio-images, reducing the computational cost while significantly improving recognition accuracy. Specifically, we address the computational burden through the effective use of Gabor filters combined with gray-level statistical features. The presented low-cost solution requires neither sophisticated camera support to capture images nor any special hardware carried by the user. The framework is evaluated in terms of activity recognition accuracy, and to ensure reliability we analyze the results using several evaluation metrics. Experimental results show that the presented prototype outperforms traditional methods, with an average recognition accuracy of 93.1% in promising application scenarios.
This ubiquitous model significantly improves system performance across a diverse range of applications. In the realm of intelligent vehicles and assisted driving systems, the proposed wireless solution can effectively characterize driving maneuvers, primary tasks, driver distraction, and fatigue by exploiting radio-image descriptors.
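As a toy illustration of the feature pipeline the abstract describes, the sketch below builds a small bank of Gabor kernels, filters a synthetic "radio-image", and computes gray-level statistics of each filter response. All parameters (kernel size, wavelength, orientations) and the input matrix are hypothetical stand-ins; real CSI-derived radio-images would replace the toy data.

```python
import math

def gabor_kernel(size=7, wavelength=4.0, theta=0.0, sigma=2.0):
    """Real part of a 2D Gabor kernel (assumed parameters, for illustration)."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            g = math.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
            row.append(g * math.cos(2 * math.pi * xr / wavelength))
        kernel.append(row)
    return kernel

def convolve(image, kernel):
    """'Valid' 2D convolution (no padding)."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            s = 0.0
            for u in range(kh):
                for v in range(kw):
                    s += image[i + u][j + v] * kernel[u][v]
            row.append(s)
        out.append(row)
    return out

def gray_level_stats(response):
    """Mean, variance, and energy of a filter response."""
    vals = [v for row in response for v in row]
    n = len(vals)
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / n
    energy = sum(v * v for v in vals) / n
    return mean, var, energy

# Toy 12x12 "radio-image" (CSI amplitudes would be used in practice).
img = [[(i * j) % 5 / 4.0 for j in range(12)] for i in range(12)]
features = []
for theta in (0.0, math.pi / 4, math.pi / 2, 3 * math.pi / 4):
    resp = convolve(img, gabor_kernel(theta=theta))
    features.extend(gray_level_stats(resp))
print(len(features))  # 4 orientations x 3 statistics = 12 features
```

Keeping only a few statistics per filtered response, rather than the full response image, is what keeps the feature vector small.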

2017 ◽  
Vol 66 (11) ◽  
pp. 10346-10356 ◽  
Author(s):  
Qinhua Gao ◽  
Jie Wang ◽  
Xiaorui Ma ◽  
Xueyan Feng ◽  
Hongyu Wang

2020 ◽  
Author(s):  
Anis Davoudi ◽  
Mamoun T. Mardini ◽  
Dave Nelson ◽  
Fahd Albinali ◽  
Sanjay Ranka ◽  
...  

BACKGROUND: Research shows the feasibility of human activity recognition using wearable accelerometer devices. Different studies have used varying numbers and placements of sensors for data collection. OBJECTIVE: To compare the accuracy of multiple and variable placements of accelerometer devices in categorizing the type of physical activity and the corresponding energy expenditure in older adults. METHODS: Participants (n=93, 72.2±7.1 yrs) completed a total of 32 activities of daily life in a laboratory setting. Activities were classified as sedentary vs. non-sedentary, locomotion vs. non-locomotion, and lifestyle vs. non-lifestyle (e.g., leisure walk vs. computer work). A portable metabolic unit was worn during each activity to measure metabolic equivalents (METs). Accelerometers were placed on five body positions: wrist, hip, ankle, upper arm, and thigh. Accelerometer data from each body position, and from combinations of positions, were used to develop Random Forest models for activity category recognition and MET estimation. RESULTS: Model performance for both MET estimation and activity category recognition strengthened with additional accelerometer devices. However, a single accelerometer on the ankle, upper arm, hip, thigh, or wrist showed only a 0.03 to 0.09 MET increase in prediction error compared to wearing all five devices. Balanced accuracy showed similar trends, with slight decreases for the detection of locomotion (0-0.01), sedentary (0.13-0.05), and lifestyle activities (0.08-0.04) compared to all five placements. The accuracy of recognizing activity categories increased with additional placements (0.15-0.29). Notably, the hip was the best single body position for both MET estimation and activity category recognition. CONCLUSIONS: Additional accelerometer devices only slightly enhance activity recognition accuracy and MET estimation in older adults.
Given the extra burden of wearing additional devices, a single accelerometer with appropriate placement appears to be sufficient for estimating energy expenditure and recognizing activity categories in older adults.
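The Random Forest models themselves are not reproduced here; the sketch below only illustrates the upstream step of building per-placement accelerometer feature vectors and concatenating them for multi-device models. The window length and the three statistics per signal are assumptions, not the study's exact feature set.

```python
import math
import random

def window_features(samples):
    """Mean, standard deviation, and mean absolute value for one signal window."""
    n = len(samples)
    mean = sum(samples) / n
    std = math.sqrt(sum((s - mean) ** 2 for s in samples) / n)
    sma = sum(abs(s) for s in samples) / n
    return [mean, std, sma]

def placement_vector(xyz):
    """Feature vector for one placement from a window of (x, y, z) samples:
    statistics of the acceleration magnitude plus each individual axis."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in xyz]
    feats = window_features(mags)
    for axis in range(3):
        feats += window_features([s[axis] for s in xyz])
    return feats

random.seed(0)
placements = ["wrist", "hip", "ankle", "upper_arm", "thigh"]
# Synthetic 50-sample windows per placement (gravity on the z axis).
windows = {p: [(random.gauss(0, 1), random.gauss(0, 1), random.gauss(9.8, 1))
               for _ in range(50)] for p in placements}

single = placement_vector(windows["hip"])                      # one-device model input
combined = sum((placement_vector(windows[p]) for p in placements), [])
print(len(single), len(combined))  # 12 vs 60 features
```

A model trained on `combined` sees five times as many inputs; the study's point is that the extra inputs buy little accuracy over a well-placed single device.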


2021 ◽  
Vol 13 (4) ◽  
pp. 628 ◽  
Author(s):  
Liang Ye ◽  
Tong Liu ◽  
Tian Han ◽  
Hany Ferdinando ◽  
Tapio Seppänen ◽  
...  

Campus violence is a common social phenomenon all over the world and one of the most harmful types of school bullying. As artificial intelligence and remote sensing techniques develop, several methods have become available to detect campus violence, e.g., movement-sensor-based and video-sequence-based methods that use wearable sensors and surveillance cameras. In this paper, the authors use both image features and acoustic features for campus violence detection. Campus violence data are gathered by role-playing, and 4096-dimensional feature vectors are extracted from every 16 frames of video. The C3D (Convolutional 3D) neural network is used for feature extraction and classification, achieving an average recognition accuracy of 92.00%. Mel-frequency cepstral coefficients (MFCCs) are extracted as acoustic features, and three speech emotion databases are involved; with the C3D neural network used for classification, the average recognition accuracies are 88.33%, 95.00%, and 91.67%, respectively. To solve the problem of evidence conflict between the two modalities, the authors propose an improved Dempster–Shafer (D–S) algorithm. Compared with the existing D–S theory, the improved algorithm increases the recognition accuracy by 10.79%, and the recognition accuracy ultimately reaches 97.00%.
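The improved D–S algorithm itself is not specified in the abstract; the sketch below implements only the classical Dempster combination rule it builds on, fusing mass assignments from the video and audio classifiers. The frame of discernment and the mass values are illustrative assumptions.

```python
def combine_ds(m1, m2):
    """Classical Dempster's rule of combination: masses over intersecting
    hypotheses are multiplied, and the conflict mass K is renormalised away."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

V, N = frozenset({"violence"}), frozenset({"normal"})
# Hypothetical masses from the video (C3D) and audio (MFCC) classifiers;
# V | N carries the mass each source leaves uncommitted.
m_video = {V: 0.7, N: 0.2, V | N: 0.1}
m_audio = {V: 0.6, N: 0.3, V | N: 0.1}
fused = combine_ds(m_video, m_audio)
print(fused[V] > fused[N])  # fused belief favours "violence"
```

When both sources lean the same way, fusion sharpens the decision; the paper's improvement targets the cases where the two modalities conflict.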


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4643 ◽  
Author(s):  
Sang Jun Lee ◽  
Jeawoo Lee ◽  
Wonju Lee ◽  
Cheolhun Jang

In intelligent vehicles, extrinsic camera calibration should preferably be conducted on a regular basis to cope with unpredictable mechanical changes or variations in weight load distribution. Specifically, high-precision extrinsic parameters between the camera coordinate system and the world coordinate system are essential for implementing high-level functions in intelligent vehicles such as distance estimation and lane departure warning. However, conventional calibration methods, which solve a Perspective-n-Point problem, require laborious work to measure the positions of 3D points in the world coordinate system. To reduce this inconvenience, this paper proposes an automatic camera calibration method based on 3D reconstruction. The main contribution is a novel reconstruction method that recovers 3D points on planes perpendicular to the ground. The proposed method jointly optimizes the reprojection errors of image features projected from multiple planar surfaces, significantly reducing errors in the camera extrinsic parameters. Experiments in both synthetic simulation and real calibration environments demonstrate the effectiveness of the proposed method.
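A minimal sketch of the reprojection-error objective such a calibration method minimizes, using an assumed pinhole model with hypothetical intrinsics (`fx`, `fy`, `cx`, `cy`); the paper's joint multi-plane optimization is not reproduced, only the error being optimized.

```python
import math

def project(point_3d, rotation, translation,
            fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Pinhole projection of a world point with assumed intrinsics.
    rotation is a row-major 3x3 matrix; camera point = R * world + t."""
    xc = sum(rotation[0][k] * point_3d[k] for k in range(3)) + translation[0]
    yc = sum(rotation[1][k] * point_3d[k] for k in range(3)) + translation[1]
    zc = sum(rotation[2][k] * point_3d[k] for k in range(3)) + translation[2]
    return (fx * xc / zc + cx, fy * yc / zc + cy)

def reprojection_error(points_3d, points_2d, rotation, translation):
    """Mean Euclidean distance between observed and projected image points."""
    total = 0.0
    for p3, p2 in zip(points_3d, points_2d):
        u, v = project(p3, rotation, translation)
        total += math.hypot(u - p2[0], v - p2[1])
    return total / len(points_3d)

# Identity extrinsics; 2D observations generated with the same model -> zero error.
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0.0, 0.0, 0.0]
pts3 = [(0.1, -0.2, 2.0), (0.4, 0.1, 3.0), (-0.3, 0.2, 2.5)]
pts2 = [project(p, R, t) for p in pts3]
print(reprojection_error(pts3, pts2, R, t))  # 0.0
```

Calibration searches over `R` and `t` (and, here, points recovered on vertical planes) to drive this error down for real observations.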


Robotica ◽  
1991 ◽  
Vol 9 (2) ◽  
pp. 203-212 ◽  
Author(s):  
Won Jang ◽  
Kyungjin Kim ◽  
Myungjin Chung ◽  
Zeungnam Bien

SUMMARY
For efficient visual servoing of an “eye-in-hand” robot, the concepts of Augmented Image Space and Transformed Feature Space are presented in this paper. A formal definition of image features as functionals is given, along with a technique for using the defined image features for visual servoing. Compared with other known methods, the proposed concepts reduce the computational burden of visual feedback and enhance the flexibility of describing vision-based tasks. Simulations and real experiments demonstrate that the proposed concepts are useful and versatile tools for industrial robot vision tasks, and that the visual servoing problem can thus be dealt with more systematically.
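As a concrete instance of "image features as functionals", the sketch below computes raw image moments: each feature maps the whole image to a single scalar. The binary test image is invented, and this is a generic illustration, not the paper's Augmented Image Space formulation.

```python
def moment(image, p, q):
    """Raw image moment m_pq = sum_x sum_y x^p * y^q * I(x, y):
    a feature defined as a functional of the whole image."""
    return sum((x ** p) * (y ** q) * v
               for y, row in enumerate(image)
               for x, v in enumerate(row))

def centroid_features(image):
    """Area and centroid: a classic feature vector for image-based servoing,
    since the centroid shifts as the camera (eye-in-hand) moves."""
    m00 = moment(image, 0, 0)
    return m00, moment(image, 1, 0) / m00, moment(image, 0, 1) / m00

# Binary image of a 3x3 blob centred at (3, 2) in a 6x5 frame.
img = [[1 if 2 <= x <= 4 and 1 <= y <= 3 else 0 for x in range(6)]
       for y in range(5)]
print(centroid_features(img))  # (9, 3.0, 2.0)
```

A servo loop would drive the error between the current and desired feature values to zero rather than reconstructing full 3D pose.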


Author(s):  
Chih-Ta Yen ◽  
Jia-De Lin

This study employed wearable inertial sensors integrated with an activity-recognition algorithm to recognize six types of daily activities performed by humans: walking, ascending stairs, descending stairs, sitting, standing, and lying. The sensor system consisted of a microcontroller, a three-axis accelerometer, and a three-axis gyroscope; the algorithm involved collecting and normalizing the activity signals. To simplify the calculation process and maximize recognition accuracy, the data were preprocessed through linear discriminant analysis, which reduced their dimensionality and captured their features, thereby shrinking the feature space of the accelerometer and gyroscope signals; the features were then verified with six classification algorithms. The new contribution is that, after feature extraction, the classification results indicated an artificial neural network to be the most stable and effective of the six algorithms. In the experiment, 20 participants wore the sensors on their waists to record the six types of daily activities and to verify the effectiveness of the sensors. According to the cross-validation results, the combination of linear discriminant analysis and an artificial neural network was the most stable classification approach for data generalization; its activity-recognition accuracy was 87.37% on the training data and 80.96% on the test data.
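A sketch of the two-class Fisher/LDA projection underlying this preprocessing step, worked in pure Python for 2-D features so the 2x2 scatter matrix can be inverted by hand. The toy "walking" and "sitting" feature values are invented for illustration.

```python
def fisher_direction(class_a, class_b):
    """Fisher discriminant direction w = Sw^-1 (mean_a - mean_b) for 2-D features."""
    def mean(pts):
        return [sum(p[i] for p in pts) / len(pts) for i in range(2)]
    ma, mb = mean(class_a), mean(class_b)
    # Within-class scatter Sw (2x2), summed over both classes.
    sw = [[0.0, 0.0], [0.0, 0.0]]
    for pts, m in ((class_a, ma), (class_b, mb)):
        for p in pts:
            d = [p[0] - m[0], p[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    sw[i][j] += d[i] * d[j]
    # Invert the 2x2 scatter matrix directly.
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det, sw[0][0] / det]]
    diff = [ma[0] - mb[0], ma[1] - mb[1]]
    return [inv[0][0] * diff[0] + inv[0][1] * diff[1],
            inv[1][0] * diff[0] + inv[1][1] * diff[1]]

# Toy 2-D features for two activities (hypothetical values).
walking = [(2.0, 1.0), (2.2, 1.4), (1.8, 0.9), (2.1, 1.2)]
sitting = [(0.2, 0.1), (0.1, 0.3), (0.3, 0.2), (0.2, 0.4)]
w = fisher_direction(walking, sitting)
proj = lambda p: w[0] * p[0] + w[1] * p[1]
# Projected onto w, the two classes separate along a single axis.
print(min(proj(p) for p in walking) > max(proj(p) for p in sitting))
```

The 1-D projections (one per sample, instead of the raw feature vector) are what a downstream classifier such as the study's artificial neural network would consume.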


Sensors ◽  
2020 ◽  
Vol 20 (8) ◽  
pp. 2189 ◽  
Author(s):  
Zhimin Chen ◽  
Jianxin Chen ◽  
Xiangjun Huang

In recent years, the sensors in smartphones have been widely used in applications such as human activity recognition (HAR). However, the limited battery power of smartphones constrains HAR applications because of their computational cost. To address this, energy efficiency should be considered in smartphone-based HAR. In this paper, we improve the energy efficiency of smartphones by adaptively controlling the sampling rate of the sensors during HAR: sensor samples are collected at a rate that depends on activity changes, inferred from the magnitude of acceleration. In addition, we use linear discriminant analysis (LDA) for feature selection and machine learning methods for activity classification. Our method is verified on the UCI (University of California, Irvine) dataset, where it achieves an overall energy saving of 56.39% with a recognition accuracy of 99.58% in smartphone HAR applications.
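The paper's exact control law is not given in the abstract; the sketch below shows one plausible variance-threshold rule for adapting the sampling rate from acceleration magnitude. The two rates and the threshold are assumptions for illustration.

```python
def magnitude(sample):
    """Euclidean norm of one (x, y, z) accelerometer sample."""
    x, y, z = sample
    return (x * x + y * y + z * z) ** 0.5

def next_sampling_rate(window, low_hz=5, high_hz=50, threshold=1.0):
    """Raise the rate when acceleration magnitude varies (activity likely
    changing); drop to the low rate when the signal is steady.
    low_hz / high_hz / threshold are hypothetical tuning values."""
    mags = [magnitude(s) for s in window]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    return high_hz if var > threshold else low_hz

steady = [(0.0, 0.0, 9.8)] * 20                              # phone at rest
active = [(0.0, 0.0, 9.8 + 3 * (-1) ** i) for i in range(20)]  # shaking
print(next_sampling_rate(steady), next_sampling_rate(active))  # 5 50
```

Running at 5 Hz during idle periods and 50 Hz only around activity transitions is the kind of duty-cycling that yields the reported energy savings without hurting accuracy.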


Data ◽  
2018 ◽  
Vol 3 (4) ◽  
pp. 52 ◽  
Author(s):  
Oleksii Gorokhovatskyi ◽  
Volodymyr Gorokhovatskyi ◽  
Olena Peredrii

In this paper, we investigate the properties of structural image recognition methods in the cluster space of characteristic features. Recognition based on key point descriptors such as SIFT (Scale-Invariant Feature Transform), SURF (Speeded Up Robust Features), and ORB (Oriented FAST and Rotated BRIEF) usually involves searching for corresponding descriptor values between an input image and all etalon images, which requires many operations and much time. We describe recognition over previously quantized (clustered) sets of descriptor features. Clustering is performed across the complete set of etalon image descriptors and is followed by screening, which allows each etalon image to be represented in vector form as a distribution over clusters. With such representations, the number of computation and comparison procedures, which form the core of the recognition process, can be reduced by tens of times; in exchange, the preprocessing stage takes additional time for clustering. The proposed approach was tested on the Leeds Butterfly dataset, and the dependence of recognition performance and processing time on the number of clusters was investigated. It was shown that recognition can be performed up to nine times faster, with only a moderate decrease in recognition quality, compared to searching for correspondences between all descriptors of the etalon images and the input image without quantization.
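A compact sketch of the quantization idea: cluster the pooled descriptors, then represent each image as a normalized cluster histogram, so comparing two images means comparing two short vectors instead of all descriptor pairs. Plain k-means and 2-D toy descriptors stand in for the paper's actual clustering and the 128-/64-D SIFT/SURF descriptors.

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two descriptor vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20):
    """Plain k-means with deterministic initialisation (first and last point),
    an illustrative stand-in for clustering all etalon-image descriptors."""
    centers = [points[0], points[-1]] if k == 2 else points[:k]
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for p in points:
            buckets[min(range(k), key=lambda c: dist2(p, centers[c]))].append(p)
        centers = [tuple(sum(v[d] for v in b) / len(b) for d in range(len(b[0])))
                   if b else centers[i] for i, b in enumerate(buckets)]
    return centers

def histogram(descriptors, centers):
    """Image as a normalised distribution over clusters."""
    h = [0] * len(centers)
    for d in descriptors:
        h[min(range(len(centers)), key=lambda c: dist2(d, centers[c]))] += 1
    total = sum(h)
    return [c / total for c in h]

rng = random.Random(0)
# Hypothetical 2-D descriptors for two etalon images.
img_a = [(rng.gauss(0, 0.3), rng.gauss(0, 0.3)) for _ in range(40)]
img_b = [(rng.gauss(3, 0.3), rng.gauss(3, 0.3)) for _ in range(40)]
centers = kmeans(img_a + img_b, k=2)
ha, hb = histogram(img_a, centers), histogram(img_b, centers)
print(sum(abs(x - y) for x, y in zip(ha, hb)))  # L1 distance: 2.0, fully separated
```

With `k` clusters, each comparison costs O(k) instead of O(n_a * n_b) descriptor matches, which is where the reported speed-up comes from.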

