Wide Area Seamless Surveillance Video System to Recognize and Track Moving Objects Based on Gigabit Network

Author(s): Y. Shibata ◽ T. Kon ◽ N. Uchida ◽ K. Hashimoto

2012 ◽ Vol 17 (4) ◽ pp. 217-222
Author(s): Piotr Szymczyk ◽ Magdalena Szymczyk

Abstract: In this paper the authors describe in detail a system dedicated to scene configuration. The user can define different important 2D regions of the scene. The following kinds of regions can be defined: floor, total covering, down covering, up covering, middle covering, entrance/exit, protected area, prohibited area, allowed direction, prohibited direction, reflections, moving objects, light source, wall and sky. The definition of these regions is essential for the further analysis of the live camera stream data in the guardian video system.
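The region vocabulary above can be sketched as a small data model. A minimal sketch, assuming axis-aligned rectangular regions; the class, field, and enum names are illustrative and not taken from the actual guardian system:

```python
from dataclasses import dataclass
from enum import Enum, auto

# Region kinds listed in the abstract (identifier names are illustrative).
class RegionKind(Enum):
    FLOOR = auto()
    TOTAL_COVERING = auto()
    DOWN_COVERING = auto()
    UP_COVERING = auto()
    MIDDLE_COVERING = auto()
    ENTRANCE_EXIT = auto()
    PROTECTED_AREA = auto()
    PROHIBITED_AREA = auto()
    ALLOWED_DIRECTION = auto()
    PROHIBITED_DIRECTION = auto()
    REFLECTIONS = auto()
    MOVING_OBJECTS = auto()
    LIGHT_SOURCE = auto()
    WALL = auto()
    SKY = auto()

@dataclass
class Region:
    kind: RegionKind
    x: int   # top-left corner, image coordinates
    y: int
    w: int   # width and height in pixels
    h: int

    def contains(self, px: int, py: int) -> bool:
        # True when the point lies inside this rectangle.
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

# A scene configuration is simply a list of regions; downstream analysis
# can query which regions a detected object's centroid falls into.
scene = [Region(RegionKind.PROHIBITED_AREA, 100, 50, 40, 40)]
hits = [r.kind for r in scene if r.contains(120, 60)]
```

In a real system the regions would more likely be arbitrary polygons drawn by the operator, but the query pattern (point-in-region lookup per detected object) stays the same.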


2019 ◽ Vol 9 (10) ◽ pp. 2003
Author(s): Tung-Ming Pan ◽ Kuo-Chin Fan ◽ Yuan-Kai Wang

Intelligent analysis of surveillance videos over networks requires high recognition accuracy, which means analyzing good-quality videos that, however, demand significant bandwidth. Degraded video quality caused by high object dynamics under wireless video transmission raises further critical issues for the success of smart video surveillance. In this paper, an object-based source coding method is proposed to preserve constant quality of video streaming over wireless networks. The inverse relationship between video quality and object dynamics (i.e., decreasing video quality due to the occurrence of large and fast-moving objects) is characterized statistically as a linear model. A regression algorithm based on robust M-estimator statistics is proposed to construct the linear model with respect to different bitrates. The linear model is then applied to predict the bitrate increment required to enhance video quality. A simulated wireless environment is set up to verify the proposed method under different wireless conditions. Experiments with real surveillance videos covering a variety of object dynamics are conducted to evaluate the performance of the method. Experimental results demonstrate significant improvement of the streamed videos in both visual and quantitative terms.
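A robust linear fit of the kind described can be sketched with iteratively reweighted least squares (IRLS) using Huber weights, a standard M-estimator. The function names, tuning constant, and synthetic data below are illustrative assumptions, not the paper's actual implementation or dataset:

```python
import numpy as np

def huber_weights(r, k=1.345):
    # Huber M-estimator weight psi(r)/r: 1 inside |r| <= k, k/|r| outside,
    # so large residuals (outliers) get downweighted rather than dominating.
    a = np.abs(r)
    w = np.ones_like(a)
    mask = a > k
    w[mask] = k / a[mask]
    return w

def robust_linear_fit(x, y, iters=20):
    # Fit y ~ slope*x + intercept by IRLS with Huber weights.
    X = np.column_stack([x, np.ones_like(x)])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # ordinary least squares start
    for _ in range(iters):
        r = y - X @ beta
        # Robust scale estimate via the median absolute deviation (MAD).
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12
        w = np.sqrt(huber_weights(r / s))
        beta = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)[0]
    return beta  # [slope, intercept]

# Synthetic example: quality drops linearly with object dynamics,
# with a few outlier frames that would bias a plain least-squares fit.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = -2.0 * x + 40.0 + rng.normal(0, 0.5, 50)
y[::10] += 15.0  # inject outliers
slope, intercept = robust_linear_fit(x, y)
```

Once fitted, such a model can be inverted to predict how much extra bitrate is needed to hold quality constant when object dynamics increase, which is the use the abstract describes.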


Author(s): Shefali Gandhi ◽ Tushar V. Ratanpara

Video synopsis provides a condensed representation of a long surveillance video while preserving the essential activities of the original. The activity in the original video is condensed into a shorter period by simultaneously displaying multiple activities that originally occurred at different times. Because activities are displayed in time segments different from those of the original video, the process begins with extracting moving objects. A temporal median algorithm is used to model the background, and foreground objects are detected using background subtraction. Each moving object is represented as a space-time activity tube in the video. A genetic algorithm is used for optimized temporal shifting of the activity tubes. The temporal arrangement of tubes that results in minimum collision while maintaining the chronological order of events is considered the best solution. A time-lapse background video is then generated and used as the background for the synopsis video. Finally, the activity tubes are stitched onto the time-lapse background video using Poisson image editing.
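The first two steps of the pipeline, temporal-median background modelling and background subtraction, can be sketched as follows. The toy frames, threshold, and function names are illustrative, not drawn from the paper:

```python
import numpy as np

def temporal_median_background(frames):
    # The per-pixel median over time approximates the static background,
    # provided each pixel shows the background in a majority of frames.
    return np.median(np.stack(frames), axis=0)

def foreground_mask(frame, background, thresh=25):
    # Background subtraction: pixels whose absolute deviation from the
    # background model exceeds the threshold are labelled foreground.
    return np.abs(frame.astype(int) - background.astype(int)) > thresh

# Toy example: 8x8 grayscale frames with a bright 2x2 "object" moving
# left to right across an otherwise constant background of value 50.
frames = [np.full((8, 8), 50, dtype=np.uint8) for _ in range(5)]
for t, f in enumerate(frames):
    f[3:5, t:t + 2] = 200

bg = temporal_median_background(frames)   # recovers the all-50 background
mask = foreground_mask(frames[0], bg)     # the 2x2 object in the first frame
```

Stacking the per-frame foreground masks over time yields exactly the space-time "activity tubes" that the synopsis step then shifts and stitches.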


2014 ◽ Vol 496-500 ◽ pp. 2150-2153
Author(s): Cheng Ying Gong ◽ Hui He

In general, moving target detection methods include background subtraction, adjacent frame difference, and optical flow. To detect the moving targets in surveillance video quickly and accurately, and to extend the study of location tracking, this paper describes a real-time method for detecting moving vehicles that uses the AForge.NET library and marks pixels whose absolute gray-level difference between images exceeds a chosen threshold. Implementation results show that this method can effectively and quickly detect moving objects in video sequences.
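The gray-difference thresholding idea reduces to a few lines. A minimal sketch in Python with NumPy rather than the AForge.NET library the paper uses; the toy frames and threshold value are illustrative assumptions:

```python
import numpy as np

def frame_difference_mask(prev, curr, thresh=30):
    # Adjacent-frame differencing: mark pixels where the absolute
    # gray-level difference between consecutive frames exceeds a threshold.
    diff = np.abs(curr.astype(int) - prev.astype(int))
    return diff > thresh

# Toy frames: a bright 3x3 block shifts one pixel to the right between
# frames, so only its trailing and leading columns register as change.
prev = np.zeros((10, 10), dtype=np.uint8)
curr = np.zeros((10, 10), dtype=np.uint8)
prev[4:7, 2:5] = 255
curr[4:7, 3:6] = 255

mask = frame_difference_mask(prev, curr)
```

Compared with full background subtraction, this only responds to motion between consecutive frames, which is why it is fast enough for the real-time vehicle detection the abstract targets.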

