Global Coding of Multi-source Surveillance Video Data

Author(s):  
Jing Xiao ◽  
Yu Chen ◽  
Liang Liao ◽  
Jinhui Hu ◽  
Ruimin Hu
2021 ◽  
Vol 11 (9) ◽  
pp. 3730
Author(s):  
Aniqa Dilawari ◽  
Muhammad Usman Ghani Khan ◽  
Yasser D. Al-Otaibi ◽  
Zahoor-ur Rehman ◽  
Atta-ur Rahman ◽  
...  

After the September 11 attacks, security and surveillance measures have changed across the globe. Surveillance cameras are now installed almost everywhere to monitor video footage. Though quite handy, these cameras produce video in massive sizes and volumes. The major challenge faced by security agencies is the effort of analyzing the surveillance video data collected and generated daily. Problems related to these videos are twofold: (1) understanding the contents of video streams, and (2) conversion of the video contents to condensed formats, such as textual interpretations and summaries, to save storage space. In this paper, we propose a video description framework on a surveillance dataset. This framework is based on multitask learning of high-level features (HLFs) using a convolutional neural network (CNN) and natural language generation (NLG) through bidirectional recurrent networks. For each specific task, a parallel pipeline is derived from the base visual geometry group (VGG)-16 model. Tasks include scene recognition, action recognition, object recognition, and human-face-specific feature recognition. Experimental results on the TRECViD, UET Video Surveillance (UETVS), and AGRIINTRUSION datasets show that the model outperforms state-of-the-art methods, achieving METEOR (Metric for Evaluation of Translation with Explicit ORdering) scores of 33.9%, 34.3%, and 31.2%, respectively. Our results show that our framework has distinct advantages over traditional rule-based models for the recognition and generation of natural language descriptions.
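The multitask design described above, a shared backbone feeding one parallel head per recognition task, can be sketched as follows. This is a minimal NumPy illustration, not the paper's actual network: the feature dimension, class counts, and random linear layers are invented stand-ins for the pretrained VGG-16 pipelines.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed, illustrative class counts for the four HLF tasks.
TASKS = {"scene": 10, "action": 8, "object": 20, "face": 5}

FRAME_DIM = 32 * 32 * 3  # toy frame size; VGG-16 uses 224x224x3
FEAT_DIM = 512

W_shared = rng.standard_normal((FEAT_DIM, FRAME_DIM)) * 0.01
heads = {t: rng.standard_normal((n, FEAT_DIM)) * 0.01 for t, n in TASKS.items()}

def shared_backbone(frame):
    """Stand-in for the shared VGG-16 feature extractor: one linear
    projection plus a nonlinearity."""
    return np.tanh(W_shared @ frame.reshape(-1))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def predict_all_tasks(frame):
    """One shared representation, then a parallel classification head
    per task (scene, action, object, face)."""
    feat = shared_backbone(frame)
    return {t: softmax(W @ feat) for t, W in heads.items()}

frame = rng.random((32, 32, 3))
preds = predict_all_tasks(frame)
```

Each head sees the same shared features, so gradients from all four tasks would shape the backbone jointly during multitask training.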


Author(s):  
Suvojit Acharjee ◽  
Sayan Chakraborty ◽  
Wahiba Ben Abdessalem Karaa ◽  
Ahmad Taher Azar ◽  
Nilanjan Dey

Video is an important medium for information sharing in the present era. The tremendous growth of video use can be seen in traditional multimedia applications as well as in many other applications, such as medical and surveillance video. Raw video data is usually large in size, which demands video compression. In different video compression schemes, motion vector estimation is a very important step for removing temporal redundancy. A frame is first divided into small blocks, and a motion vector is then computed for each block. The difference between two blocks is evaluated by different cost functions, e.g., the mean absolute difference (MAD) and the mean square error (MSE). In this paper, the performance of different cost functions is evaluated and the most suitable cost function for motion vector estimation is identified.
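The two cost functions named above can be written directly, and a minimal full-search block-matching step using them might look like this. The block size and search range are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

def mad(block_a, block_b):
    """Mean absolute difference between two equally sized blocks."""
    return np.mean(np.abs(block_a.astype(float) - block_b.astype(float)))

def mse(block_a, block_b):
    """Mean square error between two equally sized blocks."""
    return np.mean((block_a.astype(float) - block_b.astype(float)) ** 2)

def motion_vector(ref, cur, top, left, bsize=8, search=4, cost=mad):
    """Full search: find the displacement (dy, dx) minimising the cost of
    matching the current block against candidate blocks in the reference
    frame. Returns the best motion vector and its cost."""
    block = cur[top:top + bsize, left:left + bsize]
    best, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue  # candidate block falls outside the frame
            c = cost(ref[y:y + bsize, x:x + bsize], block)
            if c < best:
                best, best_mv = c, (dy, dx)
    return best_mv, best

# A frame shifted by (2, 1) pixels should yield motion vector (-2, -1)
# for an interior block, with zero matching cost.
rng = np.random.default_rng(1)
ref = rng.random((64, 64))
cur = np.roll(np.roll(ref, 2, axis=0), 1, axis=1)
mv, c = motion_vector(ref, cur, top=16, left=16)
```

Swapping `cost=mse` into the same search is how the different cost functions would be compared, which is the experiment the abstract describes.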


2019 ◽  
Vol 9 (7) ◽  
pp. 1319 ◽  
Author(s):  
Peng Qin ◽  
Yong Zhang ◽  
Boyue Wang ◽  
Yongli Hu

For a contemporary intelligent transport system, congestion state analysis of traffic surveillance video (TSV) is one of the most crucial and intricate research topics, because the rapid development of transportation systems and the sustained growth of surveillance facilities on roads lead to massive traffic flow data, and because of the inherent characteristics of the analysis target. Traditional feature extraction methods usually operate in Euclidean space, which is not accurate for high-dimensional TSV data analysis. This paper proposes a Grassmann manifold based neural network model to analyze TSV data by mapping the video data from high-dimensional Euclidean space to Grassmann manifold space, while considering the inner relation among adjacent cameras. The accuracy of traffic congestion analysis is improved compared with several traditional methods. Experiments are conducted to validate the accuracy of our method and to investigate the effects of different factors on performance.
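The core mapping mentioned above, from high-dimensional Euclidean video data to a point on a Grassmann manifold, is commonly realized by representing a clip by the span of its dominant singular vectors and comparing subspaces via principal angles. The sketch below shows that standard construction with the projection metric; the feature dimensions, subspace rank, and synthetic clips are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def subspace(clip, k=3):
    """Map a clip (n_frames x feat_dim) to a point on the Grassmann
    manifold G(k, feat_dim): an orthonormal basis of the span of its
    top-k right singular vectors."""
    _, _, vt = np.linalg.svd(clip - clip.mean(axis=0), full_matrices=False)
    return vt[:k].T  # feat_dim x k, orthonormal columns

def grassmann_distance(u, v):
    """Projection-metric distance between subspaces with orthonormal
    basis matrices u and v, via the principal angles between them."""
    s = np.clip(np.linalg.svd(u.T @ v, compute_uv=False), -1.0, 1.0)
    theta = np.arccos(s)  # principal angles
    return np.sqrt(np.sum(np.sin(theta) ** 2))

rng = np.random.default_rng(0)
clip_a = rng.random((30, 64))                     # 30 frames, 64-d features
clip_b = clip_a + 0.001 * rng.random((30, 64))    # near-duplicate clip
clip_c = rng.random((30, 64))                     # unrelated clip

ua, ub, uc = subspace(clip_a), subspace(clip_b), subspace(clip_c)
```

Distances computed this way respect the manifold geometry, so similar traffic states from adjacent cameras land close together even when their raw Euclidean feature vectors do not.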


2018 ◽  
Vol 4 (1) ◽  
pp. 105-116 ◽  
Author(s):  
Zhenfeng Shao ◽  
Jiajun Cai ◽  
Zhongyuan Wang

2019 ◽  
Vol 631 ◽  
pp. A112 ◽  
Author(s):  
L. Neslušan ◽  
M. Hajduková

Aims. We study the meteoroid stream of the long-period comet C/1963 A1 (Ikeya) to predict the meteor showers originating in this comet. We also aim to identify the predicted showers with their real counterparts. Methods. We modeled 23 parts of a theoretical meteoroid stream of the parent comet considered. Each of our models is characterized by a single value of the evolutionary time and a single value of the strength of the Poynting–Robertson effect. The evolutionary time is defined as the time before the present when the stream is modeled and when we start to follow its dynamical evolution. This period ranges from 10 000 to 80 000 yr. In each model, we considered a stream consisting of 10 000 test particles whose dynamical evolution is followed via numerical integration up to the present. At the end of the integration, we analyzed the mean orbital characteristics of particles in orbits approaching Earth’s orbit, which enabled us to predict a shower related to the parent comet. We attempted to identify each predicted shower with a shower recorded in the International Astronomical Union Meteor Data Center list of all showers. In addition, we tried, often successfully, to separate a real counterpart of each predicted shower from the databases of real meteors. Results. Many modeled parts of the stream of comet C/1963 A1 are identified with the corresponding real showers in three video-meteor databases. No real counterpart is found in the IAU MDC photographic or radio-meteor data. Specifically, we predict five showers related to C/1963 A1. Two predicted showers are identified with π-Hydrids #101 and δ-Corvids #729. The third predicted shower is only vaguely similar to November α-Sextantids #483 when its mean orbit is compared with the mean orbit of the November α-Sextantids in the IAU MDC list of all showers. However, the prediction is very consistent with the corresponding showers newly separated from three video databases. Another predicted shower has no counterpart in the IAU MDC list, but there is a good match between the prediction and a shower that we separated from the Cameras for Allsky Meteor Surveillance video data. We name this new shower ϑ-Leonids. The last of the predicted showers should be relatively low in number and, hence, no real counterpart was either found in the IAU MDC list or separated from any considered database.
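The identification step above, matching a predicted shower's mean orbit against catalogued showers, relies on an orbital similarity measure. The abstract does not state which one the authors use, but a standard choice in meteor astronomy is the Southworth–Hawkins D-criterion, sketched here; the association threshold of 0.2 is a common but assumption-level convention that varies between studies.

```python
import numpy as np

def d_sh(orbit1, orbit2):
    """Southworth-Hawkins D-criterion between two orbits, each given as
    (q [AU], e, i, node, peri) with the angles in radians."""
    q1, e1, i1, n1, w1 = orbit1
    q2, e2, i2, n2, w2 = orbit2
    dn = n2 - n1
    # Squared chord 2*sin(I/2), where I is the angle between the planes.
    two_sin_half_I_sq = ((2 * np.sin((i2 - i1) / 2)) ** 2
                         + np.sin(i1) * np.sin(i2) * (2 * np.sin(dn / 2)) ** 2)
    sin_half_I = np.sqrt(two_sin_half_I_sq) / 2
    cos_half_I = np.sqrt(max(0.0, 1.0 - sin_half_I ** 2))
    # Difference of the longitudes of perihelion, measured from the
    # common node of the two orbital planes.
    pi21 = (w2 - w1) + 2 * np.arcsin(
        np.clip(np.cos((i2 + i1) / 2) * np.sin(dn / 2) / cos_half_I, -1, 1))
    d2 = ((e2 - e1) ** 2 + (q2 - q1) ** 2 + two_sin_half_I_sq
          + ((e1 + e2) / 2) ** 2 * (2 * np.sin(pi21 / 2)) ** 2)
    return np.sqrt(d2)

# Illustrative long-period, retrograde orbit (values invented).
orbit = (0.95, 0.90, np.radians(125.0), np.radians(45.0), np.radians(130.0))
nearby = (0.96, 0.88, np.radians(124.0), np.radians(46.0), np.radians(131.0))
```

Identical orbits give D = 0, and a predicted shower would be associated with a catalogued one when D falls below the chosen threshold.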


Author(s):  
Asim Zaman ◽  
Baozhang Ren ◽  
Xiang Liu

Trespassing is the leading cause of rail-related deaths and has been on the rise for the past 10 years. Detecting unsafe trespassing on railroad tracks is critical for understanding and preventing fatalities. Witnessing these events has become possible with the widespread deployment of large volumes of surveillance video data in the railroad industry. This potential source of information requires immense labor to monitor in real time. To address this challenge, this paper describes an artificial intelligence (AI) framework for the automatic detection of trespassing events in real time. This framework was implemented on three railroad video live streams in the United States: one at a grade crossing and two at rights-of-way. The AI algorithm automatically detects trespassing events, differentiates between types of violator (car, motorcycle, truck, pedestrian, etc.), and sends an alert text message to a designated destination with important information, including a video clip of the trespassing event. In this study, the AI has analyzed hours of live footage with no false positives or missed detections to date. This paper and its subsequent studies aim to provide the railroad industry with state-of-the-art AI tools to harness the untapped potential of existing closed-circuit television infrastructure through real-time analysis of its data feeds. The data generated from these studies will potentially help researchers understand human factors in railroad safety research and give them a real-time edge in tackling the critical challenge of trespassing in the railroad industry.
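The alerting stage of the pipeline described above (detect, classify the violator, notify with a clip) can be sketched as a small data structure plus a message formatter. Everything here is hypothetical for illustration: the event fields, violator classes, and message format are invented, not taken from the paper's system.

```python
from dataclasses import dataclass

# Violator classes the hypothetical detector can emit.
TRESPASSER_TYPES = {"pedestrian", "car", "truck", "motorcycle", "bicycle"}

@dataclass
class DetectionEvent:
    camera_id: str      # which live stream produced the detection
    timestamp: str      # time of the trespassing event
    violator_type: str  # detector's class label for the violator
    clip_path: str      # saved video clip of the event

def format_alert(event: DetectionEvent) -> str:
    """Build the text-message body sent to the designated destination,
    rejecting class labels the system does not know about."""
    if event.violator_type not in TRESPASSER_TYPES:
        raise ValueError(f"unknown violator type: {event.violator_type}")
    return (f"TRESPASS ALERT [{event.camera_id}] at {event.timestamp}: "
            f"{event.violator_type} on track; clip: {event.clip_path}")

alert = format_alert(DetectionEvent(
    camera_id="grade-crossing-01",
    timestamp="2019-05-04 14:32:10",
    violator_type="pedestrian",
    clip_path="/clips/evt_000123.mp4",
))
```

In a deployed system a function like this would sit between the detector's per-frame output and the SMS gateway, so every alert carries the violator type and a pointer to the supporting clip.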

