Object-Based Approach for Adaptive Source Coding of Surveillance Video

2019 ◽  
Vol 9 (10) ◽  
pp. 2003 ◽  
Author(s):  
Tung-Ming Pan ◽  
Kuo-Chin Fan ◽  
Yuan-Kai Wang

Intelligent analysis of surveillance videos over networks requires high recognition accuracy, which in turn demands good-quality video that imposes a significant bandwidth requirement. Video quality degraded by high object dynamics under wireless transmission poses critical challenges to the success of smart video surveillance. In this paper, an object-based source coding method is proposed to preserve constant quality of video streaming over wireless networks. The inverse relationship between video quality and object dynamics (i.e., decreasing video quality due to the occurrence of large and fast-moving objects) is characterized statistically as a linear model. A regression algorithm using robust M-estimator statistics is proposed to construct the linear model with respect to different bitrates. The linear model is then applied to predict the bitrate increment required to enhance video quality. A simulated wireless environment is set up to verify the proposed method under different wireless conditions. Experiments with real surveillance videos covering a variety of object dynamics are conducted to evaluate the performance of the method. Experimental results demonstrate significant improvement of the streamed videos in both visual and quantitative terms.
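As a rough sketch of how such a robust linear model can be fitted, the snippet below runs iteratively reweighted least squares with Huber weights (one common M-estimator formulation); the synthetic object-dynamics data, the tuning constant, and all variable names are illustrative assumptions, not the paper's actual features or parameters:

```python
import numpy as np

def huber_linear_fit(x, y, k=1.345, iters=20):
    """Fit y ≈ slope*x + intercept with a Huber M-estimator via
    iteratively reweighted least squares (IRLS)."""
    X = np.column_stack([x, np.ones_like(x)])
    w = np.ones_like(y)
    for _ in range(iters):
        sw = np.sqrt(w)
        # Weighted least-squares solve: scale rows by sqrt(weight)
        beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale (MAD)
        u = np.abs(r) / s
        w = np.where(u <= k, 1.0, k / u)            # Huber weights
    return beta  # (slope, intercept)

# Hypothetical data: object dynamics (x) vs. quality drop (y),
# with a few outlier frames mixed in.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2.0 * x + 0.5 + rng.normal(0, 0.02, 50)
y[::17] += 3.0                                      # outlier frames
slope, intercept = huber_linear_fit(x, y)
```

The Huber weights leave small residuals untouched but down-weight outlier frames, so the fitted line tracks the bulk of the data rather than the outliers.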

Author(s):  
Shefali Gandhi ◽  
Tushar V. Ratanpara

Video synopsis provides a compact representation of a long surveillance video while preserving its essential activities. Activities from the original video are condensed into a shorter period by simultaneously displaying multiple activities that originally occurred in different time segments. Because activities are displayed in time segments different from those of the original video, the process begins with extracting moving objects. A temporal median algorithm is used to model the background, and foreground objects are detected using background subtraction. Each moving object is represented as a space-time activity tube in the video. A genetic algorithm is used for optimized temporal shifting of the activity tubes. The temporal arrangement of tubes that minimizes collisions while maintaining the chronological order of events is taken as the best solution. A time-lapse background video is then generated and used as the background for the synopsis video. Finally, the activity tubes are stitched onto the time-lapse background video using Poisson image editing.
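The temporal-median background model and the background-subtraction step described above can be sketched in a few lines; the threshold value and the toy frame sequence are assumptions for illustration:

```python
import numpy as np

def median_background(frames):
    """Temporal median over a stack of grayscale frames: each pixel's
    background value is its median across time, which suppresses
    transient moving objects."""
    return np.median(frames, axis=0)

def foreground_mask(frame, background, thresh=25):
    """Background subtraction: pixels differing from the background
    by more than `thresh` are marked foreground."""
    return np.abs(frame.astype(int) - background.astype(int)) > thresh

# Toy sequence: static background at level 100, one bright object
# sitting at a different column in each frame.
frames = np.full((5, 8, 8), 100, dtype=np.uint8)
for t in range(5):
    frames[t, 2, t] = 250
bg = median_background(frames)
mask = foreground_mask(frames[0], bg)
```

Because the object occupies each pixel in only a minority of frames, the median recovers the clean background, and subtraction isolates the object.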


2014 ◽  
Vol 644-650 ◽  
pp. 4616-4619
Author(s):  
Zhi Yuan Xu ◽  
Yong Kai Wang ◽  
Xiao Hong Su ◽  
Yi Liu

Port surveillance videos degrade severely in foggy conditions. This paper presents a defogging algorithm based on wavelet packet decomposition. First, we extract the background image from the degraded video and establish a background update model; second, we detect the moving objects as foreground images; third, we defog these images using wavelet packet decomposition; finally, we fuse the background and foreground images together. Experimental results show that the method effectively improves the clarity of the degraded videos.
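The wavelet-packet defogging itself is not reproduced here, but the surrounding pipeline steps (background update model and background/foreground fusion) can be sketched as follows; the running-average update rule and its learning rate are assumptions for illustration, since the abstract does not specify the update model:

```python
import numpy as np

ALPHA = 0.05  # assumed learning rate for the background update model

def update_background(background, frame, mask, alpha=ALPHA):
    """Running-average background update: only non-foreground pixels
    (mask == False) blend the new frame into the background estimate."""
    bg = background.astype(float)
    blend = (1 - alpha) * bg + alpha * frame.astype(float)
    return np.where(mask, bg, blend)

def fuse(defogged_bg, defogged_fg, mask):
    """Final fusion step: foreground pixels where the mask is set,
    background pixels elsewhere."""
    return np.where(mask, defogged_fg, defogged_bg)

bg = np.full((4, 4), 100.0)
frame = np.full((4, 4), 120.0)
mask = np.zeros((4, 4), dtype=bool)
mask[1, 1] = True                    # one "moving object" pixel
bg2 = update_background(bg, frame, mask)
out = fuse(bg2, frame, mask)
```

In a full pipeline, `bg2` and the foreground crops would each pass through the wavelet-packet defogging stage before fusion.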


2013 ◽  
Vol 321-324 ◽  
pp. 1041-1045
Author(s):  
Jian Rong Cao ◽  
Yang Xu ◽  
Cai Yun Liu

After background modeling and moving-object segmentation for a surveillance video, this paper first presents a noninteractive matting algorithm for video moving objects based on GrabCut. The matted moving objects are then placed in a background image under a nonoverlapping arrangement, so that a single frame contains several moving objects on one background image. Finally, a series of such frames is assembled along the timeline to form a single-camera surveillance video synopsis. Experimental results show that the synopsis is concise and readable in its condensed form, and that the efficiency of browsing and retrieval can be improved.
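The GrabCut matting step itself (available in OpenCV as `cv2.grabCut`) is not reproduced here; the snippet below sketches only the nonoverlapping-arrangement step, treating each matted object as an axis-aligned bounding box. The greedy keep-or-skip policy is an assumption for illustration:

```python
def overlaps(a, b):
    """Axis-aligned boxes as (x, y, w, h): True if they intersect."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def arrange_nonoverlapping(boxes):
    """Greedy nonoverlapping arrangement: keep a matted object only if
    its box does not intersect any already-placed box; skipped objects
    would go into a later synopsis frame."""
    placed = []
    for box in boxes:
        if not any(overlaps(box, p) for p in placed):
            placed.append(box)
    return placed

# Three candidate object boxes; the second collides with the first.
placed = arrange_nonoverlapping([(0, 0, 10, 10), (5, 5, 10, 10), (20, 0, 5, 5)])
```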


2010 ◽  
Vol 6 (3) ◽  
pp. 259-280 ◽  
Author(s):  
N. Qadri ◽  
M. Altaf ◽  
M. Fleury ◽  
M. Ghanbari

Video communication within a Vehicular Ad Hoc Network (VANET) has the potential to be of considerable benefit in an urban emergency, as it allows emergency vehicles approaching the scene to better understand the nature of the emergency. However, the lack of centralized routing and network resource management within a VANET is an impediment to video streaming. To overcome these problems, the paper pioneers source-coding techniques for VANET video streaming. It first investigates two practical multiple-path schemes: Video Redundancy Coding (VRC) and the H.264/AVC codec's redundant frames. The VRC scheme is reinforced by gradual decoder refresh to improve the delivered video quality. Evaluation shows that multiple-path redundant frames achieve acceptable video quality at some destinations, whereas VRC is insufficient. The paper also demonstrates a third source-coding scheme, single-path streaming with Flexible Macroblock Ordering, which is likewise capable of delivering reasonable-quality video. Video communication between vehicles is therefore shown to be feasible in an urban emergency, provided suitable source-coding techniques are selected.
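The benefit of multiple-path redundancy can be illustrated with a toy simulation, independent of any actual VANET routing: each frame is sent over two paths with independent loss, and is decodable if either copy arrives. The loss rates and frame counts below are arbitrary assumptions:

```python
import random

def simulate_multipath(n_frames, loss_a, loss_b, seed=1):
    """Send each frame over two independent lossy paths; a frame is
    decodable if at least one copy survives. Returns delivered fraction."""
    rng = random.Random(seed)
    delivered = 0
    for _ in range(n_frames):
        got_a = rng.random() > loss_a
        got_b = rng.random() > loss_b
        delivered += got_a or got_b
    return delivered / n_frames

single = simulate_multipath(10000, 0.2, 1.0)   # loss_b = 1.0 ≈ single path
dual = simulate_multipath(10000, 0.2, 0.2)
```

With 20% loss on each path, independent dual-path delivery lifts the delivered fraction from about 0.8 toward 1 − 0.2² = 0.96, which is the intuition behind sending redundant frames over multiple paths.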


2012 ◽  
Vol 532-533 ◽  
pp. 1219-1224
Author(s):  
Hong Tao Deng

During video transmission over an error-prone network, the compressed video bitstream is sensitive to channel errors that may severely degrade the decoded pictures. To address this problem, error concealment is a useful post-processing tool for recovering the lost information. In such methods, correctly estimating the lost motion vector is important for the quality of the decoded picture. To recover lost motion vectors, the Decoder Motion Vector Estimation (DMVE) criterion was previously proposed and performs well at recovering lost blocks. In this paper, we propose an improved error concealment method based on DMVE, which obtains more accurate motion vectors by using redundant motion vector information. Experimental results with an H.264 codec show that our method improves both the subjective and objective quality of the video reconstructed at the decoder, especially for sequences with drastic motion.
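The boundary-matching idea behind DMVE can be sketched as follows: for a lost block, candidate motion vectors are scored by how well the motion-compensated prediction matches the correctly received pixels bordering the loss. The block size, search range, and the use of only the rows above and below the block are simplifying assumptions:

```python
import numpy as np

def conceal_motion_vector(prev, cur, lost_tl, block=8, search=2):
    """DMVE-style sketch: for a lost block with top-left `lost_tl` in
    `cur`, test candidate motion vectors into `prev` and pick the one
    whose prediction best matches the received border pixels (the row
    above and the row below the lost block)."""
    y, x = lost_tl
    top = cur[y - 1, x:x + block].astype(float)
    bot = cur[y + block, x:x + block].astype(float)
    best_mv, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand_top = prev[y - 1 + dy, x + dx:x + dx + block].astype(float)
            cand_bot = prev[y + block + dy, x + dx:x + dx + block].astype(float)
            cost = np.abs(top - cand_top).sum() + np.abs(bot - cand_bot).sum()
            if cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv

# Toy frames: `cur` is `prev` shifted right by one pixel, so the true
# motion of every block is (dy, dx) = (0, -1) back into `prev`.
prev = np.arange(32 * 32, dtype=float).reshape(32, 32)
cur = np.roll(prev, 1, axis=1)
mv = conceal_motion_vector(prev, cur, lost_tl=(12, 12))
```

The recovered vector is then used to motion-compensate the lost block from the previous frame; the paper's contribution refines this estimate with redundant motion vector information.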


2014 ◽  
Vol 496-500 ◽  
pp. 2200-2203
Author(s):  
Yang Su ◽  
Mi Lu

We introduce a new across-peer rate allocation algorithm with successive refinement to improve video transmission performance in P2P networks, based on a combination of multiple description coding and network coding. Successive refinement is implemented through layered multiple description codes. The algorithm maximizes the expected video quality at the receivers by partitioning the video bitstream into different descriptions according to the bandwidth conditions of each peer. Adaptive rate-partition adjustment ensures that the packet drop rate in the network is accurately reflected. The allocation granularity is also changed to atomic blocks instead of the stream rates used in prior work. Simulation results show that the algorithm outperforms prior algorithms in video playback quality at the peer ends and makes the system more adaptable to peer dynamics.
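As a much-simplified sketch of partitioning a layered bitstream into per-peer descriptions, the snippet below greedily assigns each refinement layer to the description of a peer that still has bandwidth for it. The layer rates, peer names, and greedy policy are illustrative assumptions, not the paper's expected-quality optimization:

```python
def partition_layers(layer_rates, peer_bandwidths):
    """Greedy sketch: walk the layers base-first and add each one to
    the description of the highest-bandwidth peer that can still
    carry it; layers that fit nowhere are dropped."""
    descriptions = {peer: [] for peer in peer_bandwidths}
    used = {peer: 0 for peer in peer_bandwidths}
    for i, rate in enumerate(layer_rates):
        for peer, bw in sorted(peer_bandwidths.items(),
                               key=lambda kv: kv[1], reverse=True):
            if used[peer] + rate <= bw:
                descriptions[peer].append(i)
                used[peer] += rate
                break
    return descriptions

# Base layer (index 0) plus two enhancement layers; hypothetical
# per-peer bandwidths in kbps.
desc = partition_layers([300, 200, 200], {"peerA": 600, "peerB": 350})
```

The idea this illustrates is that the partition adapts to each peer's bandwidth, so better-connected peers carry more refinement layers while constrained peers still receive a decodable description.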


2002 ◽  
Vol 13 (2) ◽  
pp. 125-129 ◽  
Author(s):  
Hirokazu Ogawa ◽  
Yuji Takeda ◽  
Akihiro Yagi

Inhibitory tagging is a process that prevents focal attention from revisiting previously checked items in inefficient searches, facilitating search performance. Recent studies suggested that inhibitory tagging is object rather than location based, but it was unclear whether inhibitory tagging operates on moving objects. The present study investigated the tagging effect on moving objects. Participants were asked to search for a moving target among randomly and independently moving distractors. After either efficient or inefficient search, participants performed a probe detection task that measured the inhibitory effect on search items. The inhibitory effect on distractors was observed only after inefficient searches. The present results support the concept of object-based inhibitory tagging.

