A Vehicle Reidentification Algorithm Based on Double-Channel Symmetrical CNN

2021 ◽  
Vol 2021 ◽  
pp. 1-6
Author(s):  
Lijun Yang ◽  
Tangsen Huang

Accurately reidentifying previously observed vehicles in massive surveillance data has become a challenging research topic. The difficulty is that vehicles in images exhibit large variations in pose, viewing angle, illumination, and other factors, and these complex changes seriously degrade vehicle recognition performance. In recent years, the convolutional neural network (CNN) has achieved great success in the field of vehicle reidentification. However, because vehicle reidentification datasets contain only a small amount of vehicle annotation, existing CNN models are not fully exploited during training, which limits the discriminative ability of the deep learning model. To address these problems, a double-channel symmetric CNN vehicle recognition algorithm is proposed by improving the network structure. In this method, two samples with complementary characteristics are taken as input simultaneously. With limited training samples, the combinations of inputs are thus far more diverse, and the training process of the CNN model becomes richer. Experiments show that the recognition accuracy of the proposed algorithm exceeds that of existing methods, further verifying its effectiveness.
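The pairing idea can be sketched minimally. This is an illustrative stand-in (a toy shared linear layer in place of a real CNN, with invented shapes), not the authors' implementation; it shows how a shared-weight branch applied to two inputs at once multiplies the number of distinct training combinations available from a small annotated set:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shared weights: both channels of the symmetric network
# use the SAME parameters, so each sample of a pair is processed identically.
W = rng.standard_normal((8, 4))   # toy stand-in for a CNN feature extractor

def branch(x):
    """Shared feature extractor applied to one input of the pair."""
    return np.maximum(W @ x, 0.0)  # ReLU

def double_channel(x1, x2):
    """Run both samples through the shared branch and fuse the features."""
    return np.concatenate([branch(x1), branch(x2)])

# Pairing n annotated samples yields O(n^2) distinct training inputs.
samples = [rng.standard_normal(4) for _ in range(5)]
pairs = [(a, b) for i, a in enumerate(samples) for b in samples[i + 1:]]
features = [double_channel(a, b) for a, b in pairs]
```

With only 5 annotated samples, the pairing already produces 10 distinct two-channel inputs, which is the diversification effect the abstract describes.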

2013 ◽  
Vol 347-350 ◽  
pp. 3724-3727
Author(s):  
Lin Tao Lü ◽  
Huan Gao ◽  
Yu Xiang Yang

This article presents a vehicle identification algorithm with an improved classifier to increase the efficiency of existing vehicle recognition algorithms. First, edge orientation histograms are used to extract image features; then, error-correcting output coding is applied to the classifier, turning the multi-class classification problem into multiple binary classification problems. Extensive experimental analysis shows that the improved vehicle identification algorithm has good recognition performance and robustness. The algorithm therefore has high theoretical and practical value.
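The error-correcting decomposition can be illustrated with a toy output-coding scheme; the class names and codewords below are invented for illustration, not taken from the paper:

```python
# Toy error-correcting output code: 4 vehicle classes, 6 binary classifiers.
# Each row is a class codeword; each column defines one binary problem.
# Minimum Hamming distance between codewords is 4, so any single
# binary-classifier error is still decoded to the correct class.
CODEBOOK = {
    "car":   (0, 0, 1, 1, 0, 1),
    "truck": (1, 0, 0, 1, 1, 0),
    "bus":   (0, 1, 0, 0, 1, 1),
    "van":   (1, 1, 1, 0, 0, 0),
}

def hamming(a, b):
    """Number of positions where two codewords differ."""
    return sum(x != y for x, y in zip(a, b))

def decode(bits):
    """Map the 6 binary-classifier outputs to the nearest codeword."""
    return min(CODEBOOK, key=lambda c: hamming(CODEBOOK[c], bits))
```

For example, `decode((1, 0, 1, 1, 0, 1))` still returns `"car"` even though the first binary classifier misfired, which is the error-correcting property the abstract relies on.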


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 52
Author(s):  
Tianyi Zhang ◽  
Abdallah El Ali ◽  
Chen Wang ◽  
Alan Hanjalic ◽  
Pablo Cesar

Recognizing user emotions while they watch short-form videos anytime and anywhere is essential for facilitating video content customization and personalization. However, most works either classify a single emotion per video stimulus, or are restricted to static, desktop environments. To address this, we propose a correlation-based emotion recognition algorithm (CorrNet) to recognize the valence and arousal (V-A) of each instance (fine-grained segment of signals) using only wearable, physiological signals (e.g., electrodermal activity, heart rate). CorrNet takes advantage of features both inside each instance (intra-modality features) and between different instances for the same video stimulus (correlation-based features). We first test our approach on an indoor-desktop affect dataset (CASE), and thereafter on an outdoor-mobile affect dataset (MERCA), which we collected using a smart wristband and wearable eye tracker. Results show that for subject-independent binary classification (high-low), CorrNet yields promising recognition accuracies: 76.37% and 74.03% for V-A on CASE, and 70.29% and 68.15% for V-A on MERCA. Our findings show that (1) instance segment lengths between 1–4 s result in the highest recognition accuracies; (2) accuracies between laboratory-grade and wearable sensors are comparable, even under low sampling rates (≤64 Hz); and (3) large amounts of neutral V-A labels, an artifact of continuous affect annotation, result in varied recognition performance.
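The instance construction CorrNet relies on (fixed-length segments, per-instance statistics, and cross-instance correlations) can be sketched as follows; the specific feature choices here are simplified stand-ins, not the paper's exact features:

```python
import numpy as np

def segment(signal, fs, seg_seconds):
    """Cut a continuous physiological signal into fixed-length instances."""
    n = int(fs * seg_seconds)
    usable = (len(signal) // n) * n          # drop the ragged tail
    return signal[:usable].reshape(-1, n)

def intra_features(instances):
    """Per-instance (intra-modality) statistics: mean and std per segment."""
    return np.stack([instances.mean(axis=1), instances.std(axis=1)], axis=1)

def correlation_features(instances):
    """Pairwise correlation between instances of the same video stimulus."""
    return np.corrcoef(instances)

fs = 64                                      # wearable-grade sampling rate (Hz)
sig = np.sin(np.arange(fs * 12) / 7.0)       # 12 s synthetic signal
inst = segment(sig, fs, 4)                   # 4 s instances, the paper's sweet spot
```

Twelve seconds at 64 Hz yields three 4-second instances, so the correlation matrix is 3x3, one row per instance of the stimulus.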


2021 ◽  
Vol 13 (6) ◽  
pp. 1205
Author(s):  
Caidan Zhao ◽  
Gege Luo ◽  
Yilin Wang ◽  
Caiyun Chen ◽  
Zhiqiang Wu

A micro-Doppler signature (m-DS) based on the rotation of drone blades is an effective way to detect and identify small drones. Deep-learning-based recognition algorithms can achieve higher recognition performance, but they need a large amount of sample data to train models. In addition to the hovering state, signal samples of small unmanned aerial vehicles (UAVs) should also cover flight dynamics such as vertical, pitch, forward-and-backward, roll, lateral, and yaw motion. However, it is difficult to collect all dynamic UAV signal samples under actual flight conditions, and these dynamic flight characteristics cause the original features to deviate, thus degrading recognizer performance. In this paper, we propose a small-UAV m-DS recognition algorithm based on dynamic feature enhancement. We extract combined principal component analysis and discrete wavelet transform (PCA-DWT) time–frequency characteristics and texture features of the UAV’s micro-Doppler signal, and use a dynamic attribute-guided augmentation (DAGA) algorithm to expand the feature domain for model training, achieving an adaptive, accurate, and efficient multiclass recognition model in complex environments. After the trained model stabilizes, the average recognition accuracy reaches 98% during dynamic flight.
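The two named transforms in the PCA-DWT feature pipeline can be illustrated with a one-level Haar wavelet transform and an SVD-based PCA projection; this is a minimal sketch of the generic techniques, not the authors' implementation:

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar discrete wavelet transform: approximation (low-pass)
    and detail (high-pass) coefficients, orthonormal scaling by sqrt(2)."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def pca(features, k):
    """Project feature vectors onto their top-k principal components."""
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T

# Toy pipeline: DWT approximation coefficients of several signals,
# reduced to 2 principal components.
feats = np.vstack([haar_dwt(np.sin(np.arange(8) + i))[0] for i in range(6)])
reduced = pca(feats, 2)
```

The orthonormal Haar transform preserves signal energy (sum of squared coefficients), which is why its coefficients are usable as features without rescaling.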


Author(s):  
Jiahao Chen ◽  
Ryota Nishimura ◽  
Norihide Kitaoka

Many end-to-end, large vocabulary, continuous speech recognition systems are now able to achieve better speech recognition performance than conventional systems. However, most of these approaches are based on bidirectional networks and sequence-to-sequence modeling, so automatic speech recognition (ASR) systems using such techniques must wait for an entire segment of voice input before they can begin processing the data, resulting in a lengthy time lag that can be a serious drawback in some applications. An obvious solution to this problem is to develop a speech recognition algorithm capable of processing streaming data. Therefore, in this paper we explore the possibility of a streaming, online ASR system for Japanese using a model based on unidirectional LSTMs trained with connectionist temporal classification (CTC) criteria and local attention. Such an approach has not been well investigated for use with Japanese, as most Japanese-language ASR systems employ bidirectional networks. The best result for our proposed system during experimental evaluation was a character error rate of 9.87%.
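One standard piece of a CTC-based streaming recognizer, greedy best-path decoding, can be sketched as follows. It collapses repeated frame labels and drops blanks, and because it only needs the current and previous frame, it works on a stream without waiting for the whole utterance (the label strings here are illustrative):

```python
BLANK = "_"

def ctc_greedy_decode(frame_labels):
    """Collapse a per-frame best path into an output string:
    merge consecutive repeated labels, then drop blank symbols.
    This is the standard greedy decoding rule for CTC-trained models."""
    out = []
    prev = None
    for lab in frame_labels:
        if lab != prev and lab != BLANK:
            out.append(lab)
        prev = lab
    return "".join(out)
```

Note that a blank between two identical labels keeps them distinct: `["_", "a", "a", "_", "a"]` decodes to `"aa"`, not `"a"`.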


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Xiaofei Zhang ◽  
Tao Wang ◽  
Qi Xiong ◽  
Yina Guo

Imagery-based brain-computer interfaces (BCIs) aim to decode different neural activities into control signals by identifying and classifying natural commands from electroencephalogram (EEG) patterns, and then to control the corresponding equipment. However, several traditional BCI recognition algorithms suffer from the “one person, one model” issue, and convergence of the recognition model’s training process is complicated. In this study, a new BCI model with a dense long short-term memory (Dense-LSTM) algorithm is proposed, which combines the event-related desynchronization (ERD) and event-related synchronization (ERS) of the imagery-based BCI; the model was trained and tested on our own dataset. Furthermore, a new experimental platform was built to decode the neural activity of different subjects in a static state. Experimental evaluation of the proposed recognition algorithm shows an accuracy of 91.56%, which resolves the “one person, one model” issue along with the difficulty of convergence in the training process.
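The ERD/ERS quantities the model combines are conventionally measured as relative band-power changes between a rest baseline and the imagery period. A minimal sketch, assuming that standard definition rather than any code from the paper:

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Mean spectral power of x in the [lo, hi] Hz band via the FFT."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return spec[mask].mean()

def erd_ers_percent(baseline, task, fs, lo=8, hi=12):
    """ERD/ERS as the relative band-power change from rest to imagery:
    negative values indicate desynchronization (ERD), positive values
    indicate synchronization (ERS). Default band is the mu rhythm (8-12 Hz)."""
    p_base = band_power(baseline, fs, lo, hi)
    p_task = band_power(task, fs, lo, hi)
    return 100.0 * (p_task - p_base) / p_base
```

Halving a 10 Hz oscillation's amplitude quarters its band power, giving an ERD of -75%, which is the kind of feature an imagery classifier can exploit.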


2021 ◽  
Author(s):  
Ming Ji ◽  
Chuanxia Sun ◽  
Yinglei Hu

In order to solve the increasingly serious problem of traffic congestion, intelligent transportation systems are widely used in dynamic traffic management, effectively alleviating congestion and improving road traffic efficiency. With the continuous development of traffic data acquisition technology, real-time traffic data for the road network can now be obtained promptly, and this large amount of traffic information provides a data guarantee for analyzing and predicting the road network's traffic state. Based on a deep learning framework, this paper studies a vehicle recognition algorithm and a road environment discrimination algorithm, which greatly improve the accuracy of highway vehicle recognition. Highway video surveillance images are collected in different environments to establish a complete original database; a deep learning model for environment discrimination is then built and the classification model trained to achieve real-time environment recognition on highways, which serves as the basic condition for vehicle recognition and traffic event discrimination and provides basic information for vehicle detection model selection. To improve the accuracy of road vehicle detection, vehicle targets are labeled and samples preprocessed for each environment. On this basis, the vehicle recognition algorithm is studied, and a vehicle detection algorithm based on weather environment recognition and the Fast RCNN model is proposed. Finally, the performance of the proposed vehicle detection algorithm is verified by comparing detection accuracy between per-environment and overall dataset models, between different network structures and deep learning methods, and against other methods.
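The two-stage pipeline (environment recognition first, then an environment-matched detector) can be sketched as a simple dispatch; the environment labels and detector names below are hypothetical placeholders, not the paper's actual classes or models:

```python
# Hypothetical dispatch table: the environment classifier's label selects
# which detection model (trained on matching conditions) processes the frame.
DETECTORS = {
    "sunny": "detector_sunny",   # stand-ins for per-environment detection models
    "rainy": "detector_rainy",
    "foggy": "detector_foggy",
    "night": "detector_night",
}

def detect_vehicles(frame, classify_environment):
    """Two-stage pipeline: recognize the road environment first, then
    route the frame to the detector specialized for that environment."""
    env = classify_environment(frame)
    model = DETECTORS.get(env, "detector_default")   # fall back if unseen
    return env, model

# Usage: a stub classifier that always reports rain.
env, model = detect_vehicles("frame-001", lambda f: "rainy")
```

The design choice here is that environment recognition is cheap relative to detection, so running it first lets each detector train on a narrower, more homogeneous data distribution.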


2020 ◽  
Vol 34 (09) ◽  
pp. 13583-13589
Author(s):  
Richa Singh ◽  
Akshay Agarwal ◽  
Maneet Singh ◽  
Shruti Nagpal ◽  
Mayank Vatsa

Face recognition algorithms have demonstrated very high recognition performance, suggesting suitability for real-world applications. Despite the enhanced accuracies, the robustness of these algorithms against attacks and bias has been challenged. This paper summarizes the different ways in which the robustness of a face recognition algorithm can be challenged, which can severely affect its intended working. Different types of attacks, such as physical presentation attacks, disguise/makeup, digital adversarial attacks, and morphing/tampering using GANs, are discussed. We also present a discussion on the effect of bias on face recognition models and show that factors such as age and gender variations affect the performance of modern algorithms. The paper also presents the potential reasons for these challenges and some future research directions for increasing the robustness of face recognition models.
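Of the attack families surveyed, the digital adversarial attack is the easiest to make concrete. Below is the classic fast gradient sign method (FGSM) step applied to a toy linear scorer, shown as an illustration of the attack class rather than any specific experiment from the paper:

```python
import numpy as np

def fgsm(x, grad_wrt_x, eps):
    """Fast Gradient Sign Method: one adversarial step that perturbs each
    input dimension by +/- eps in the direction that increases the model's
    loss. A single step suffices to shift many models' predictions."""
    return x + eps * np.sign(grad_wrt_x)

# Toy linear scorer: for a score w.x, the gradient of the score with
# respect to the input x is simply w, so FGSM moves x along sign(w).
w = np.array([0.5, -2.0, 0.0, 1.5])
x = np.zeros(4)
x_adv = fgsm(x, w, eps=0.1)   # small, bounded perturbation of the input
```

Even with a perturbation capped at 0.1 per dimension, the adversarial input strictly increases the scorer's output, which is the mechanism behind the imperceptible attacks the survey covers.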

