Investigating cultural aspects in the fundamental diagram using convolutional neural networks

2018 ◽  
Author(s):  
Rodolfo Migon Favaretto ◽  
Roberto Rosa dos Santos ◽  
Soraia Raupp Musse ◽  
Felipe Vilanova ◽  
Ângelo Brandelli Costa

Abstract: This paper presents a study of group behavior in a controlled experiment focused on an important attribute that varies across cultures, personal space, in two countries: Brazil and Germany. To compare Brazil and Germany coherently, with the same population size performing the same task, we replicated in Brazil the pedestrian Fundamental Diagram experiment previously performed in Germany. We use convolutional neural networks to detect and track people in the video sequences. From these data, we use Voronoi diagrams to determine the neighbor relations among people and then compute walking distances to estimate personal spaces. The personal-space analysis shows that people's behavior becomes more similar at high densities, so we focused our study on cultural differences between the two countries at low and medium densities. Results indicate that personal-space analysis can be a relevant feature for understanding cultural aspects in video sequences, even when compared with data from self-reported questionnaires.
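The pipeline described above reduces tracked positions to interpersonal distances. As a minimal sketch (not the authors' implementation: the paper derives neighbors from Voronoi diagrams, while this illustration uses plain nearest-neighbor search, and the function name is invented for illustration), the per-person personal-space proxy could be computed like this:

```python
import math

def nearest_neighbor_distances(positions):
    """For each tracked person (x, y), return the distance to the closest
    other person, a simple proxy for personal space.
    Note: the paper's method uses Voronoi neighbors; this sketch
    uses brute-force nearest-neighbor search instead."""
    out = []
    for i, (xi, yi) in enumerate(positions):
        d = min(math.hypot(xi - xj, yi - yj)
                for j, (xj, yj) in enumerate(positions) if j != i)
        out.append(d)
    return out
```

Averaging these distances per density level would then allow the kind of low- versus medium-density comparison the abstract describes.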

2019 ◽  
Vol 30 (3-4) ◽  
Author(s):  
Rodolfo Migon Favaretto ◽  
Roberto Rosa dos Santos ◽  
Soraia Raupp Musse ◽  
Felipe Vilanova ◽  
Ângelo Brandelli Costa

2019 ◽  
Vol 16 (1) ◽  
pp. 172988141882509 ◽  
Author(s):  
Hanbo Wu ◽  
Xin Ma ◽  
Yibin Li

Temporal information plays a significant role in video-based human action recognition, and effectively extracting the spatial-temporal characteristics of actions in videos has long been a challenging problem. Most existing methods acquire spatial and temporal cues individually. In this article, we propose a new, effective representation for depth video sequences, called hierarchical dynamic depth projected difference images, which aggregates spatial and temporal action information simultaneously at different temporal scales. We first project depth video sequences onto three orthogonal Cartesian views to capture the 3D shape and motion information of human actions. Hierarchical dynamic depth projected difference images are then constructed with rank pooling in each projected view to hierarchically encode the spatial-temporal motion dynamics in depth videos. Convolutional neural networks can automatically learn discriminative features from images and have been extended to video classification because of their superior performance. To verify the effectiveness of the representation, we construct an action recognition framework in which the hierarchical dynamic depth projected difference images of the three views are fed independently into three identical pretrained convolutional neural networks for fine-tuning. We design three classification schemes in the framework; the schemes use different convolutional neural network layers so that their effects on action recognition can be compared, and in each scheme the three views are combined to describe the actions more comprehensively. The proposed framework is evaluated on three challenging public human action data sets. Experiments indicate that our method performs better and provides discriminative spatial-temporal information for human action recognition in depth videos.
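The two core steps the abstract names, projecting depth frames onto three orthogonal views and collapsing a temporal sequence with rank pooling, can be sketched as follows. This is a simplified illustration, not the paper's method: the occupancy-volume projection, the bin count, and the use of the closed-form "approximate rank pooling" weights a_t = 2t - T - 1 are all assumptions made here for clarity.

```python
import numpy as np

def orthogonal_projections(depth_frame, depth_bins=32, max_depth=255.0):
    """Project one depth frame onto front (xy), side (yz) and top (xz) views
    by quantizing depth into bins and max-projecting a binary occupancy volume."""
    h, w = depth_frame.shape
    z = np.clip((depth_frame / max_depth * (depth_bins - 1)).astype(int),
                0, depth_bins - 1)
    volume = np.zeros((h, w, depth_bins), dtype=np.float32)
    volume[np.arange(h)[:, None], np.arange(w)[None, :], z] = 1.0
    front = volume.max(axis=2)   # (h, w) silhouette
    side = volume.max(axis=1)    # (h, depth_bins)
    top = volume.max(axis=0)     # (w, depth_bins)
    return front, side, top

def approximate_rank_pooling(frames):
    """Collapse a list of 2-D maps into a single 'dynamic' map using the
    closed-form approximate rank-pooling weights a_t = 2t - T - 1 (t = 1..T).
    The weights sum to zero, so a perfectly static sequence pools to zeros."""
    T = len(frames)
    weights = np.array([2 * t - T - 1 for t in range(1, T + 1)],
                       dtype=np.float32)
    return np.tensordot(weights, np.stack(frames), axes=1)
```

Applying the pooling at several window lengths over the projected views would give the hierarchical, multi-scale encoding the abstract describes.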


2019 ◽  
Vol 55 (5) ◽  
pp. 1827-1847 ◽  
Author(s):  
Gaohua Lin ◽  
Yongming Zhang ◽  
Gao Xu ◽  
Qixing Zhang

Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4484 ◽  
Author(s):  
Víctor García Rubio ◽  
Juan Antonio Rodrigo Ferrán ◽  
Jose Manuel Menéndez García ◽  
Nuria Sánchez Almodóvar ◽  
José María Lalueza Mayordomo ◽  
...  

In recent years, the use of unmanned aerial vehicles (UAVs) for surveillance tasks has increased considerably, providing a versatile and innovative approach to the field. However, automating tasks such as object recognition or change detection usually requires image processing techniques. In this paper we present a system for change detection in video sequences acquired by moving cameras, based on the combination of image alignment techniques with a deep learning model built on convolutional neural networks (CNNs). This approach covers two important topics. The first is the capability of our system to adapt to variations in the UAV flight: in particular, differences in height between flights and slight modifications of the camera's position or movement of the UAV caused by factors such as weather conditions, security requirements or human error. The second is the precision of our model in detecting changes in diverse environments, which has been compared with state-of-the-art change detection methods. Precision was measured on the Change Detection 2014 dataset, which provides a selection of labelled images from different scenarios for training change detection algorithms. We used images from the dynamic background, intermittent object motion and bad weather sections, selected to test our algorithm's robustness to changes in the background, as in real flight conditions. Our system provides a precise solution for these scenarios: the mean F-measure score from the image analysis surpasses 97%, with particularly high precision in the intermittent object motion category, where the score is above 99%.
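The align-then-compare idea at the core of the abstract can be sketched with a toy pipeline. This is only an illustration under strong assumptions: the paper uses full image-alignment techniques and a CNN classifier, while this sketch substitutes a brute-force integer-translation search and a simple per-pixel difference threshold; all function names are invented here.

```python
import numpy as np

def align_by_shift(ref, mov, max_shift=3):
    """Brute-force search for the integer (dy, dx) translation that best
    aligns `mov` to `ref` by minimizing mean squared error. Stand-in for
    the real image-registration step, which handles general camera motion."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(mov, dy, axis=0), dx, axis=1)
            err = np.mean((shifted - ref) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    dy, dx = best
    return np.roll(np.roll(mov, dy, axis=0), dx, axis=1)

def change_mask(ref, mov, threshold=0.2):
    """Align the second frame to the first, then flag pixels whose absolute
    difference exceeds the threshold (the paper uses a CNN instead)."""
    aligned = align_by_shift(ref, mov)
    return np.abs(aligned - ref) > threshold
```

In the real system, the thresholding stage is replaced by the learned CNN model, which is what provides robustness to dynamic backgrounds and bad weather.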

