Video data retrieval in parallel disk arrays

1994 ◽  
Author(s):  
Philip S. Yu ◽  
Ming-Syan Chen


2005 ◽  
Vol 05 (01) ◽  
pp. 111-133 ◽  
Author(s):  
HONGMEI LIU ◽  
JIWU HUANG ◽  
YUN Q. SHI

In this paper, we propose a blind video data-hiding algorithm in the DWT (discrete wavelet transform) domain. It embeds multiple information bits into uncompressed video sequences. The major features of this algorithm are as follows. (1) A novel embedding strategy in the DWT domain. Unlike existing DWT-based schemes, which explicitly exclude the LL subband coefficients from data embedding, we embed data in the LL subband for better invisibility and robustness. The underlying idea comes from our qualitative and quantitative analysis of the magnitude distribution of DWT coefficients over commonly used images. The experimental results confirm the superiority of the proposed embedding strategy. (2) To combat temporal attacks, which destroy the synchronization of hidden data required for data retrieval, we develop an effective temporal synchronization technique that improves on the sliding correlation proposed in existing algorithms. (3) We adopt a new 3D interleaving technique to combat bursts of errors, while reducing the random error probability in data detection by exploiting ECC (error-correcting coding). The detection error rate with 3D interleaving is much lower than that without it when the frame loss rate is below 50%. (4) We take carefully designed measures in bit embedding to guarantee the invisibility of the hidden information. In experiments, we embed a string of 402 bytes (excluding the redundant bits associated with ECC) in 96 frames of a CIF-format sequence. The experimental results demonstrate that the embedded information bits are perceptually transparent whether the frames in the sequence are viewed as still images or played continuously. The hidden information is robust to manipulations such as MPEG-2 compression, scaling, additive random noise, and frame loss.
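The LL-subband embedding idea in point (1) can be sketched with a one-level Haar DWT and a quantization-based bit rule. This is a minimal illustration, not the paper's actual method: the Haar filter, the quantization step `step`, and the `embed_bits_ll`/`extract_bits_ll` helpers are all assumptions made here for demonstration.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar DWT; returns the (LL, LH, HL, HH) subbands."""
    a = img.astype(float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2           # row-wise average
    hi = (a[:, 0::2] - a[:, 1::2]) / 2           # row-wise difference
    LL = (lo[0::2, :] + lo[1::2, :]) / 2
    LH = (lo[0::2, :] - lo[1::2, :]) / 2
    HL = (hi[0::2, :] + hi[1::2, :]) / 2
    HH = (hi[0::2, :] - hi[1::2, :]) / 2
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Exact inverse of haar_dwt2."""
    h, w = LL.shape
    lo = np.zeros((2 * h, w)); hi = np.zeros((2 * h, w))
    lo[0::2], lo[1::2] = LL + LH, LL - LH
    hi[0::2], hi[1::2] = HL + HH, HL - HH
    out = np.zeros((2 * h, 2 * w))
    out[:, 0::2], out[:, 1::2] = lo + hi, lo - hi
    return out

def embed_bits_ll(img, bits, step=8.0):
    """Embed bits in LL coefficients via parity of the quantization bin
    (an illustrative stand-in for the paper's embedding rule)."""
    LL, LH, HL, HH = haar_dwt2(img)
    flat = LL.ravel()                             # view into LL
    for i, b in enumerate(bits):
        q = np.floor(flat[i] / step)
        if int(q) % 2 != b:                       # force bin parity = bit
            q += 1
        flat[i] = q * step + step / 2             # bin center survives re-DWT
    return haar_idwt2(LL, LH, HL, HH)

def extract_bits_ll(img, n, step=8.0):
    """Blind extraction: recover bit i as the parity of LL coefficient i's bin."""
    LL, _, _, _ = haar_dwt2(img)
    flat = LL.ravel()
    return [int(np.floor(flat[i] / step)) % 2 for i in range(n)]
```

Because the forward and inverse Haar transforms here are exact in floating point, the marked LL coefficients are recovered unchanged at the detector, so extraction needs no reference to the original video (the "blind" property the abstract claims).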


2019 ◽  
Vol 93 ◽  
pp. 583-595 ◽  
Author(s):  
Songtao Ding ◽  
Shiru Qu ◽  
Yuling Xi ◽  
Shaohua Wan

The amount of information produced every year is growing rapidly, and among all media, video is particularly rich, embedding visual, motion, audio, and textual information. Given this huge amount of information, we need a general framework for video data mining that can be applied to raw videos (surveillance footage, news broadcasts, a person reading books in a library, etc.). We introduce new techniques that are essential for processing video files. The first step of our framework for mining raw video data is grouping input frames into a set of basic units that reflect the structure of the video. The second step is characterizing each unit so that units can be clustered into similar groups and interesting patterns detected; to do this, we extract features (objects, colors, etc.) from each unit. A histogram-based color descriptor is also introduced to reliably capture and represent the color properties of multiple images. Preliminary experimental studies indicate that the proposed framework is promising.
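The two-step framework described above can be sketched as follows. The histogram-difference shot-boundary rule, the threshold value, and the plain k-means clustering are assumptions chosen here for illustration; the paper's actual segmentation and clustering methods may differ.

```python
import numpy as np

def color_histogram(frame, bins=8):
    """Normalized per-channel intensity histogram, in the spirit of the
    histogram-based color descriptor mentioned in the abstract."""
    h = np.concatenate([np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
                        for c in range(frame.shape[-1])])
    return h / h.sum()

def segment_units(frames, threshold=0.4):
    """Step 1: group consecutive frames into basic units by cutting where
    the histogram difference between adjacent frames is large."""
    units, start = [], 0
    prev = color_histogram(frames[0])
    for i in range(1, len(frames)):
        cur = color_histogram(frames[i])
        if np.abs(cur - prev).sum() > threshold:   # likely scene change
            units.append((start, i))
            start = i
        prev = cur
    units.append((start, len(frames)))
    return units

def cluster_units(frames, units, k=2, iters=10, seed=0):
    """Step 2: describe each unit by the histogram of its mean frame and
    group similar units with a small k-means loop."""
    desc = np.array([color_histogram(frames[a:b].mean(axis=0)) for a, b in units])
    rng = np.random.default_rng(seed)
    centers = desc[rng.choice(len(desc), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((desc[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = desc[labels == j].mean(axis=0)
    return labels
```

On a synthetic clip of ten dark frames followed by ten bright frames, `segment_units` cuts at the transition and `cluster_units` places the two units in different groups.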


2022 ◽  
Vol 2 (1) ◽  
Author(s):  
Yalong Pi ◽  
Nick Duffield ◽  
Amir H. Behzadan ◽  
Tim Lomax

Accurate and prompt traffic data are necessary for the successful management of major events. Computer vision techniques, such as convolutional neural networks (CNNs) applied to video monitoring data, can provide a cost-efficient and timely alternative to traditional data collection and analysis methods. This paper presents a framework designed to take videos as input and output traffic volume counts and intersection turning patterns. The framework comprises a CNN model and an object tracking algorithm that first detect and track vehicles in the camera's pixel view. Homographic projection then maps vehicle spatio-temporal information (including unique ID, location, and timestamp) onto an orthogonal real-scale map, from which the traffic counts and turns are computed. Several videos are manually labeled and compared with the framework output; the results show a robust traffic volume count accuracy of up to 96.91%. Moreover, this work investigates the performance-influencing factors, including lighting condition (over a 24-h period), pixel size, and camera angle. Based on the analysis, it is suggested to place cameras such that the detection pixel size is above 2343 and the view angle is below 22°, for more accurate counts. Next, previous and current traffic reports after Texas A&M home football games are compared with the framework output. The results suggest that the proposed framework is able to reproduce traffic volume change trends for different traffic directions. Lastly, this work also contributes a new intersection turning pattern, i.e., counts for each ingress-egress edge pair, with an optimization technique that results in an accuracy between 43% and 72%.
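The homographic projection and ingress-egress counting steps can be sketched as below. The homography matrix `H`, the edge layout in `edges`, and the nearest-centroid edge assignment are hypothetical stand-ins for the paper's calibration and optimization technique; only the overall pipeline (pixel tracks → map coordinates → ingress-egress pair counts) follows the abstract.

```python
import numpy as np
from collections import Counter

def project(H, pts):
    """Map pixel coordinates onto the orthogonal real-scale map via a
    3x3 homography: [x', y', w'] = H @ [x, y, 1], then divide by w'."""
    pts = np.asarray(pts, dtype=float)
    homo = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = homo @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

def nearest_edge(point, edges):
    """Assign a map point to the closest intersection edge centroid
    (a simple stand-in for the paper's edge-assignment optimization)."""
    names = list(edges)
    d = [np.linalg.norm(point - np.asarray(edges[n])) for n in names]
    return names[int(np.argmin(d))]

def turning_counts(tracks, H, edges):
    """Count ingress-egress edge pairs: for each tracked vehicle (unique ID),
    project its pixel path and pair its first and last map positions."""
    counts = Counter()
    for vid, pixel_path in tracks.items():
        path = project(H, pixel_path)
        counts[(nearest_edge(path[0], edges), nearest_edge(path[-1], edges))] += 1
    return counts
```

With an identity homography and four edge centroids at the compass points, a track entering from the west and leaving north contributes one count to the ("W", "N") pair.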

