Brain Activity Recognition Method Based on Attention-Based RNN Model

2021 ◽  
Vol 11 (21) ◽  
pp. 10425
Author(s):  
Song Zhou ◽  
Tianhan Gao

Brain activity recognition based on electroencephalography (EEG) marks a major research orientation in intelligent medicine, especially in human intention prediction, human–computer control and neurological diagnosis. The literature mainly focuses on the recognition of single-person binary brain activity, which is limited in more extensive and complex scenarios. Therefore, brain activity recognition in multiperson and multi-objective scenarios has attracted increasing attention. Another challenge is the reduction of recognition accuracy caused by the interference of external noise as well as EEG's low signal-to-noise ratio. In addition, traditional EEG feature analysis proves to be time-intensive and relies heavily on mature experience. This paper proposes a novel EEG recognition method to address the above issues. The basic features of EEG are first analyzed according to the EEG frequency bands. An attention-based RNN model is then adopted to eliminate the interference and achieve automatic recognition of the original EEG signal. Finally, we evaluate the proposed method on public and local EEG data sets and perform extensive tests to investigate how various factors affect recognition results. The test results show that, compared with some typical EEG recognition methods, the proposed method achieves better recognition accuracy and suitability in multi-objective task scenarios.
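The attention step described above is essentially a weighted pooling over RNN hidden states. A minimal numpy sketch follows; the hidden states and scoring vector here are random stand-ins for illustration, not the paper's trained model:

```python
import numpy as np

def attention_pool(hidden_states, w):
    # score each timestep's hidden state with a (hypothetical) learned vector w,
    # softmax the scores, and return the attention-weighted context vector
    scores = hidden_states @ w
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights, weights @ hidden_states

rng = np.random.default_rng(0)
H = rng.standard_normal((50, 32))   # 50 EEG timesteps of 32-dim RNN hidden states
w = rng.standard_normal(32)
alpha, context = attention_pool(H, w)  # context would feed the final classifier
```

The softmax weights let the model attend to informative timesteps while downweighting noisy ones, which is how attention helps suppress interference in the raw EEG signal.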

2020 ◽  
Author(s):  
Anis Davoudi ◽  
Mamoun T. Mardini ◽  
Dave Nelson ◽  
Fahd Albinali ◽  
Sanjay Ranka ◽  
...  

BACKGROUND Research shows the feasibility of human activity recognition using wearable accelerometer devices. Studies have used varying numbers and placements of sensors for data collection.
OBJECTIVE To compare the accuracy of single versus multiple accelerometer placements in categorizing the type of physical activity and the corresponding energy expenditure in older adults.
METHODS Participants (n=93, 72.2±7.1 yrs) completed a total of 32 activities of daily life in a laboratory setting. Activities were classified as sedentary vs. non-sedentary, locomotion vs. non-locomotion, and lifestyle vs. non-lifestyle (e.g., leisure walk vs. computer work). A portable metabolic unit was worn during each activity to measure metabolic equivalents (METs). Accelerometers were placed on five body positions: wrist, hip, ankle, upper arm, and thigh. Accelerometer data from each body position, and from combinations of positions, were used to develop Random Forest models for activity category recognition and MET estimation.
RESULTS Model performance for both MET estimation and activity category recognition strengthened with additional accelerometer devices. However, a single accelerometer on the ankle, upper arm, hip, thigh, or wrist increased MET prediction error by only 0.03 to 0.09 compared with wearing all five devices. Balanced accuracy showed similar trends, with slight decreases for detection of locomotion (0-0.01), sedentary (0.13-0.05) and lifestyle activities (0.08-0.04) compared with all five placements. The accuracy of recognizing activity categories increased with additional placements (0.15-0.29). Notably, the hip was the best single body position for both MET estimation and activity category recognition.
CONCLUSIONS Additional accelerometer devices only slightly enhance activity recognition accuracy and MET estimation in older adults. Given the extra burden of wearing additional devices, a single appropriately placed accelerometer appears sufficient for estimating energy expenditure and recognizing activity category in older adults.
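The single-placement versus all-placements comparison can be sketched with scikit-learn Random Forests on synthetic stand-in features. The feature construction, cluster positions, and resulting accuracies below are illustrative assumptions, not the study's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 300  # samples per activity category

def position_features(shift):
    # toy stand-in: 6 summary features (e.g. mean/std per axis) per position;
    # sedentary samples cluster low, locomotion samples cluster high
    sed = rng.normal(0.1 + shift, 0.08, size=(n, 6))
    loc = rng.normal(0.6 + shift, 0.08, size=(n, 6))
    return np.vstack([sed, loc])

positions = {p: position_features(0.02 * i)
             for i, p in enumerate(["wrist", "hip", "ankle", "upper_arm", "thigh"])}
y = np.array([0] * n + [1] * n)  # 0 = sedentary, 1 = locomotion

# single placement (hip) vs. concatenated features from all five placements
hip_model = RandomForestClassifier(n_estimators=50, random_state=0)
hip_acc = hip_model.fit(positions["hip"], y).score(positions["hip"], y)

all_X = np.hstack(list(positions.values()))
all_model = RandomForestClassifier(n_estimators=50, random_state=0)
all_acc = all_model.fit(all_X, y).score(all_X, y)
```

On cleanly separated toy data both models do well, mirroring the study's finding that a single well-placed device captures most of the attainable accuracy.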


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1919
Author(s):  
Shuhua Liu ◽  
Huixin Xu ◽  
Qi Li ◽  
Fei Zhang ◽  
Kun Hou

With the aim of solving robot object recognition in complex scenes, this paper proposes an object recognition method based on scene text reading. The proposed method simulates human-like behavior and accurately identifies objects bearing text by reading it carefully. First, deep learning models with high accuracy are adopted to detect and recognize text from multiple views. Second, datasets comprising 102,000 Chinese and English scene text images, together with their inverted versions, are generated. Training on these two datasets improves the F-measure of text detection by 0.4% and the recognition accuracy by 1.26%. Finally, a robot object recognition method based on scene text reading is proposed. The robot detects and recognizes text in the image and stores the recognition results in a text file. When the user gives the robot a fetching instruction, the robot searches the text files for the corresponding keywords and obtains a confidence score for each object in the scene image. The object with the maximum confidence is then selected as the target. The results show that the robot can accurately distinguish objects of arbitrary shape and category, effectively solving the problem of object recognition in home environments.
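The keyword-search-and-maximum-confidence step can be sketched in a few lines. The confidence measure here (fraction of query keywords found in each object's recognized text) is an assumption, since the abstract does not give the exact formula:

```python
def match_confidence(query, recognized_text):
    # fraction of query keywords found in the text recognized on an object
    words = recognized_text.lower().split()
    keys = query.lower().split()
    return sum(1 for k in keys if k in words) / len(keys)

def select_target(query, scene_texts):
    # scene_texts maps object id -> OCR result stored in its text file;
    # pick the object whose stored text best matches the fetch instruction
    scored = {obj: match_confidence(query, txt) for obj, txt in scene_texts.items()}
    return max(scored, key=scored.get)

scene = {
    "box_a": "instant coffee 100g",
    "box_b": "green tea bags",
    "box_c": "black pepper ground",
}
target = select_target("green tea", scene)  # -> "box_b"
```

Because matching operates on recognized text rather than visual shape, the same logic works for objects of arbitrary shape and category, as the paper emphasizes.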


2011 ◽  
Vol 121-126 ◽  
pp. 2141-2145 ◽  
Author(s):  
Wei Gang Yan ◽  
Chang Jian Wang ◽  
Jin Guo

This paper proposes a new image segmentation algorithm to detect flame regions in video of an enclosed compartment. To avoid contamination by soot and water vapor, the method first employs the cubic root of four color channels to transform an RGB image into a pseudo-gray one. The pseudo-gray image is then divided into many small stripes (child images), and Otsu's method is employed to segment each child image. Lastly, the processed child images are reconstructed into a whole image. A computer program using the OpenCV library was developed, and the new method was compared with other commonly used methods such as edge detection and the standard Otsu's method. The new method shows better flame recognition accuracy and can be used to extract flame shape from noisy experimental video.
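The stripe-wise Otsu segmentation can be sketched in plain numpy. The pseudo-gray transform below is a hedged stand-in for the paper's cube-root channel combination, whose exact form the abstract does not specify:

```python
import numpy as np

def pseudo_gray(rgb):
    # stand-in for the paper's cube-root transform (assumption: average
    # the cube-rooted channels and rescale back to 0-255)
    return 255.0 * np.cbrt(rgb.astype(float) / 255.0).mean(axis=2)

def otsu_threshold(gray):
    # exhaustive search for the threshold maximizing between-class variance
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0 = hist[:t].sum() / total
        w1 = 1.0 - w0
        if w0 == 0 or w1 == 0:
            continue
        m0 = (hist[:t] * np.arange(t)).sum() / hist[:t].sum()
        m1 = (hist[t:] * np.arange(t, 256)).sum() / hist[t:].sum()
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def stripe_segment(gray, n_stripes=4):
    # split rows into horizontal stripes (child images), threshold each
    # independently with Otsu, then reassemble into a whole binary image
    stripes = np.array_split(gray, n_stripes, axis=0)
    return np.vstack([(s >= otsu_threshold(s)).astype(np.uint8) for s in stripes])
```

Thresholding per stripe lets the threshold adapt to local brightness, which is the point of the child-image step when smoke varies across the frame.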


2021 ◽  
Vol 39 (1B) ◽  
pp. 1-10
Author(s):  
Iman H. Hadi ◽  
Alia K. Abdul-Hassan

Speaker recognition depends on specific predefined steps, the most important of which are feature extraction and feature matching. In addition, the category of the speaker's voice features has an impact on the recognition process. The proposed speaker recognition system uses biometric (voice) attributes to recognize the identity of the speaker. Long-term features were used, namely maximum frequency, pitch, and zero-crossing rate (ZCR). In the feature matching step, the fuzzy inner product between feature vectors was used to compute the matching value between a claimed speaker's voice utterance and the test voice utterances. The experiments were implemented using the ELSDSR data set and showed a recognition accuracy of 100% for text-dependent speaker recognition.
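The long-term features and the matching step might look like the following sketch. The particular fuzzy inner product used here (sum of element-wise minima over sum of maxima) is one common definition and an assumption, not necessarily the paper's exact formula:

```python
import numpy as np

def zero_crossing_rate(signal):
    # fraction of consecutive samples whose sign changes
    signs = np.sign(signal)
    return np.mean(signs[1:] != signs[:-1])

def fuzzy_match(a, b):
    # assumed fuzzy inner product: sum of element-wise minima normalised
    # by sum of maxima; 1.0 means the feature vectors are identical
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.minimum(a, b).sum() / np.maximum(a, b).sum()

# hypothetical long-term feature vectors: [max frequency (Hz), pitch (Hz), ZCR]
claimed = [3400.0, 180.0, 0.12]
test_utt = [3350.0, 176.0, 0.13]
score = fuzzy_match(claimed, test_utt)   # close to 1.0 for similar voices
```

The claimed identity would be accepted when the matching score exceeds a decision threshold.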


Entropy ◽  
2018 ◽  
Vol 20 (9) ◽  
pp. 701 ◽  
Author(s):  
Beige Ye ◽  
Taorong Qiu ◽  
Xiaoming Bai ◽  
Ping Liu

EEG signals collected for driving fatigue state recognition are nonlinear, and the recognition accuracy of existing EEG-based methods remains unsatisfactory. This paper therefore proposes a driving fatigue recognition method based on sample entropy (SE) and kernel principal component analysis (KPCA), which combines the high recognition accuracy of sample entropy with KPCA's strengths in dimensionality reduction of nonlinear principal components and its strong nonlinear processing capability. Using a support vector machine (SVM) classifier, the proposed method (called SE_KPCA) is tested on the EEG data and compared with methods based on fuzzy entropy (FE), combination entropy (CE), and the three entropies (SE, FE and CE) merged with KPCA. Experiment results show that the method is effective.
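Sample entropy itself can be sketched directly from its definition: count m-length template matches within a tolerance r times the signal's standard deviation, then count (m+1)-length matches, and take the negative log of the ratio. This is a compact illustrative implementation, not the paper's code:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    # SampEn = -log(A/B): B counts template matches of length m,
    # A counts matches of length m+1, tolerance = r * std(x)
    x = np.asarray(x, float)
    tol = r * x.std()

    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        # Chebyshev distance between all template pairs
        d = np.abs(templates[:, None] - templates[None, :]).max(axis=2)
        # exclude self-matches on the diagonal, count each pair once
        return ((d <= tol).sum() - len(templates)) / 2

    a, b = count_matches(m + 1), count_matches(m)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf
```

Regular signals (e.g. a sine wave) repeat their templates and yield low sample entropy, while noisy EEG segments yield high values, which is what makes SE a useful fatigue feature before KPCA reduction.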


2016 ◽  
Vol 24 (3) ◽  
pp. 512-521 ◽  
Author(s):  
Kazuya Murao ◽  
Tsutomu Terada

2021 ◽  
Vol 13 (10) ◽  
pp. 265
Author(s):  
Jie Chen ◽  
Bing Han ◽  
Xufeng Ma ◽  
Jian Zhang

Underwater target recognition is an important supporting technology for the development of marine resources and is mainly limited by the purity of feature extraction and the universality of recognition schemes. The low-frequency analysis and recording (LOFAR) spectrum is one of the key features of an underwater target and can be used for feature extraction. However, complex underwater environment noise and the extremely low signal-to-noise ratio of the target signal lead to breakpoints in the LOFAR spectrum, which seriously hinder underwater target recognition. To overcome this issue and further improve recognition performance, we adopt a deep-learning approach and propose a novel LOFAR spectrum enhancement (LSE)-based underwater target-recognition scheme, which consists of preprocessing, offline training, and online testing. In preprocessing, we design a LOFAR spectrum enhancement based on a multi-step decision algorithm to recover the breakpoints in the LOFAR spectrum. In offline training, the enhanced LOFAR spectrum is adopted as the input of a convolutional neural network (CNN), and a LOFAR-based CNN (LOFAR-CNN) for online recognition is developed. Taking advantage of the powerful capability of CNNs in feature extraction, the proposed LOFAR-CNN further improves recognition accuracy. Extensive simulation results demonstrate that the LOFAR-CNN network achieves a recognition accuracy of 95.22%, outperforming state-of-the-art methods.
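A LOFAR-style spectrum is essentially a normalized short-time magnitude spectrum of the low-frequency signal. A minimal numpy sketch follows; the frame length, hop, and window are illustrative choices, not the paper's exact parameters:

```python
import numpy as np

def lofar_spectrum(signal, frame_len=256, hop=128):
    # LOFAR-style time-frequency map: magnitude spectra of successive
    # windowed frames, normalised to [0, 1] for use as a CNN input image
    frames = [signal[i:i + frame_len] * np.hanning(frame_len)
              for i in range(0, len(signal) - frame_len + 1, hop)]
    spec = np.abs(np.fft.rfft(frames, axis=1))
    spec /= spec.max()
    return spec   # shape: (time frames, frequency bins)
```

Each row is one time frame, so the 2-D array can be fed to a CNN like an image; the enhancement stage described above would repair breakpoints (missing tonal lines) in this map before training.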


2021 ◽  
Author(s):  
Catriona L Scrivener ◽  
Jade B Jackson ◽  
Marta Morgado Correia ◽  
Marius Mada ◽  
Alexandra Woolgar

The powerful combination of transcranial magnetic stimulation (TMS) concurrent with functional magnetic resonance imaging (fMRI) provides rare insights into the causal relationships between brain activity and behaviour. Despite a recent resurgence in popularity, TMS-fMRI remains technically challenging. Here we examined the feasibility of applying TMS during short gaps between fMRI slices to avoid incurring artifacts in the fMRI data. We quantified signal dropout and changes in temporal signal-to-noise ratio (tSNR) for TMS pulses presented at timepoints from 100ms before to 100ms after slice onset. Up to 3 pulses were delivered per volume using MagVenture's MR-compatible TMS coil. We used a spherical phantom, two 7-channel TMS-dedicated surface coils, and a multiband (MB) sequence (factor=2) with interslice gaps of 100ms and 40ms, on a Siemens 3T Prisma-fit scanner. For comparison, we repeated a subset of parameters with a more standard single-channel TxRx (birdcage) coil, and with a human participant and surface coil setup. We found that, even at 100% stimulator output, pulses applied at least -40ms/+50ms from the onset of slice readout avoided incurring artifacts. This was the case for all three setups. Thus, an interslice protocol can be achieved with a stimulation frequency of up to ~10 Hz, using a standard EPI sequence (slice acquisition time: 62.5ms, interslice gap: 40ms). Faster stimulation frequencies would require shorter slice acquisition times, for example using in-plane acceleration. Interslice TMS-fMRI protocols provide a promising avenue for retaining flexible timing of stimulus delivery without incurring TMS artifacts.
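The ~10 Hz figure follows directly from the slice timing above. A small sketch; the artifact-window predicate encodes the reported -40ms/+50ms margins around slice readout onset:

```python
# interslice TMS-fMRI timing from the reported protocol
slice_ms = 62.5                      # slice acquisition time
gap_ms = 40.0                        # interslice gap
period_ms = slice_ms + gap_ms        # one slice-plus-gap cycle
max_rate_hz = 1000.0 / period_ms     # one pulse per cycle -> ~9.8 Hz

def in_artifact_window(offset_ms):
    # pulses closer than 40 ms before or 50 ms after slice readout onset
    # risked artifacts in the reported data; outside this window is safe
    return -40.0 < offset_ms < 50.0
```

Shrinking slice_ms (e.g. via in-plane acceleration) raises max_rate_hz, matching the paper's note that faster stimulation requires shorter slice acquisition times.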


2011 ◽  
Vol 189-193 ◽  
pp. 1426-1431
Author(s):  
Ze Ning Xu ◽  
Hong Yu Liu ◽  
Yong Guo Zhang

Signal measurement is an important link in machine fault diagnosis: accurate and reliable fault signals can be obtained only through reasonable signal measurement. When the distance between the sensor and the measured gear or bearing is comparatively large, the collected signals become weak and are disturbed by other vibratory signals in the equipment, so useful signals are often submerged in powerful noise, making fault feature extraction difficult. In this paper, according to the features of vibratory signals in machine testing, the basic theory of wavelet analysis was studied and applied. By selecting a suitable wavelet function and applying wavelet noise elimination, the signal-to-noise ratio was raised, so that the vibratory impact component could be detected in weak signals. Finally, wavelet analysis was applied to bearing fault diagnosis.
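Wavelet threshold denoising can be sketched with a single-level Haar transform: decompose, soft-threshold the detail coefficients (where broadband noise concentrates), and reconstruct. The wavelet choice and threshold here are illustrative assumptions, as the paper does not specify them:

```python
import numpy as np

def haar_decompose(x):
    # single-level orthonormal Haar transform: approximation + detail
    x = np.asarray(x, float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_reconstruct(approx, detail):
    out = np.empty(2 * len(approx))
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

def denoise(x, threshold):
    # soft-threshold the detail coefficients, keep the approximation
    a, d = haar_decompose(x)
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)
    return haar_reconstruct(a, d)
```

Because a slowly varying vibration signal has small detail coefficients while white noise spreads evenly across them, thresholding removes noise with little distortion of the underlying signal, raising the signal-to-noise ratio as described.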


2014 ◽  
Vol 989-994 ◽  
pp. 4187-4190 ◽  
Author(s):  
Lin Zhang

An adaptive gender recognition method is proposed in this paper. First, a multiwavelet transform is applied to the face image to obtain its low-frequency information; features are then extracted from the low-frequency information using compressive sensing (CS), and finally an extreme learning machine (ELM) performs the gender recognition. In the feature extraction step, a genetic algorithm (GA) is used to choose the number of CS measurements so as to obtain the highest recognition rate, allowing the method to adaptively reach optimal performance. Experimental results show that, compared with PCA and LDA, the new method improves recognition accuracy substantially.
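The ELM stage is straightforward to sketch: a random, untrained hidden layer whose output weights are solved in closed form by least squares. The toy features and labels below are stand-ins, not face-image data:

```python
import numpy as np

def train_elm(X, y, n_hidden=64, seed=0):
    # ELM: random input weights and biases are fixed, never trained;
    # only the output weights beta are solved by least squares
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 8))               # stand-in for CS-compressed features
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)  # toy binary gender labels
W, b, beta = train_elm(X, y)
acc = np.mean(np.sign(predict_elm(X, W, b, beta)) == y)
```

Because training reduces to one linear solve, ELM is fast enough to sit inside a GA loop that searches over the number of CS measurements, which is what makes the adaptive scheme practical.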

