Scene Analysis for Object Detection in Advanced Surveillance Systems Using Laplacian Distribution Model

Author(s):
Fan-Chieh Cheng, Shih-Chia Huang, Shanq-Jang Ruan
2014, Vol 60 (1), pp. 53-64

Author(s):
Tomasz Kryjak, Mateusz Komorkiewicz, Marek Gorgon

Abstract The article presents a hardware implementation of the PBAS (Pixel-Based Adaptive Segmenter) foreground object detection algorithm with a scene analysis module. A mechanism for static object detection based on consecutive frame differencing is proposed. The method distinguishes stopped foreground objects (e.g. a car at an intersection, abandoned luggage) from false detections (so-called ghosts) using edge similarity. The improved algorithm was compared with the original version on popular test sequences from the changedetection.net dataset. The results indicate that the proposed approach improves the performance of the method on sequences with stopped objects. The algorithm was implemented and successfully verified on a hardware platform with a Virtex 7 FPGA device. Modules for PBAS segmentation, consecutive frame differencing, Sobel edge detection and advanced one-pass connected component analysis were designed. The system is capable of processing 50 frames of 720 × 576 pixels per second.
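The consecutive frame differencing step described above can be sketched in a few lines of NumPy. This is a minimal software illustration, not the authors' FPGA pipeline; the threshold value and the `is_static_region` helper are illustrative assumptions (the paper resolves static-object vs. ghost candidates with a separate edge-similarity check).

```python
import numpy as np

def frame_difference_mask(prev_frame, curr_frame, threshold=15):
    """Binary motion mask from two consecutive grayscale frames.

    Pixels whose absolute intensity change exceeds the threshold are
    marked as moving (1); all others are static (0).
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

def is_static_region(recent_masks):
    """A foreground region showing no motion across recent frame-difference
    masks is a candidate static object (or a ghost, which the paper
    distinguishes afterwards via edge similarity)."""
    return not any(mask.any() for mask in recent_masks)
```

In a full system, a region flagged by the background model as foreground but marked static here would be passed to the edge-similarity test to decide between a genuinely stopped object and a ghost.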


Information, 2018, Vol 9 (9), pp. 239
Author(s):
Hongmei Liu, Jinhua Liu, Mingfeng Zhao

To improve the invisibility and robustness of multiplicative watermarking, an adaptive image watermarking algorithm is proposed based on a visual saliency model and the Laplacian distribution in the wavelet domain. The algorithm designs an adaptive multiplicative watermark strength factor by exploiting the energy aggregation of the high-frequency wavelet sub-band, texture masking and visual saliency characteristics. High-energy image blocks are then selected as the watermark embedding space to achieve imperceptibility of the watermark. For watermark detection, the wavelet coefficients are modeled with a Laplacian distribution, and a blind watermark detection approach based on the maximum likelihood scheme is employed. Finally, the paper presents simulation results and a performance comparison for the proposed algorithm. Experimental results show that the proposed algorithm is robust against additive white Gaussian noise, JPEG compression, median filtering, scaling, rotation and other attacks.
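The maximum-likelihood detection step can be illustrated with a log-likelihood ratio test under the Laplacian model. This is a generic sketch of such a detector for a multiplicative watermark y = x·(1 + γw), not the paper's exact formulation; the strength factor γ, the ±1 watermark sequence and the scale parameter b are assumptions for the example.

```python
import numpy as np

def laplacian_loglik(x, b):
    """Log-density of a zero-mean Laplacian with scale b."""
    return -np.log(2.0 * b) - np.abs(x) / b

def lrt_statistic(coeffs, watermark, gamma, b):
    """Log-likelihood ratio for a multiplicative watermark
    y = x * (1 + gamma * w), with host coefficients x modeled
    as zero-mean Laplacian with scale b.

    Under H1 the observed coefficient y has density f(y / s) / s
    with s = 1 + gamma * w; under H0 it is simply f(y).
    A large positive statistic indicates the watermark is present.
    """
    s = 1.0 + gamma * watermark
    h1 = laplacian_loglik(coeffs / s, b) - np.log(s)
    h0 = laplacian_loglik(coeffs, b)
    return float(np.sum(h1 - h0))
```

In practice the statistic is compared against a threshold chosen for a target false-alarm probability, and the scale b is estimated blindly from the received coefficients.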


Automated object detection is an important research challenge in intelligent urban surveillance systems for Internet of Things (IoT) and smart city applications. In particular, smart vehicle license plate recognition and vehicle detection are recognized as core research issues of these IoT-driven intelligent urban surveillance systems. They are key techniques in most traffic-related IoT applications, such as real-time road traffic monitoring, security control of restricted areas, automatic parking access control, searching for stolen vehicles, etc. In this paper, we propose a novel unified method of automated object detection for urban surveillance systems. We use this method to identify and extract the highest-energy frequency areas of the images captured by digital camera imaging sensors, that is, to pick either the vehicle license plates or the vehicles out of the images. Other sensors, such as flame and ultrasonic sensors, are used to monitor nearby objects. Our proposed method not only helps detect vehicles rapidly and accurately, but also reduces the volume of big data that must be stored in urban surveillance systems.
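The idea of picking out the highest-energy areas of an image can be sketched as block-wise energy ranking. This is a simplified NumPy illustration, not the paper's method: it uses gradient energy as a stand-in for the high-frequency energy measure, and the block size and top-k parameters are assumptions. License plates and vehicle boundaries tend to score high under such measures because of their dense edges.

```python
import numpy as np

def block_energy_map(gray, block=16):
    """Per-block gradient energy of a grayscale image.

    High-texture regions (e.g. license plates) accumulate large
    gradient energy; smooth background blocks score near zero.
    """
    gy, gx = np.gradient(gray.astype(np.float64))
    energy = gx ** 2 + gy ** 2
    h, w = gray.shape
    nrows, ncols = h // block, w // block
    tiles = energy[:nrows * block, :ncols * block]
    return tiles.reshape(nrows, block, ncols, block).sum(axis=(1, 3))

def top_energy_blocks(gray, block=16, k=5):
    """Return the (row, col) indices of the k highest-energy blocks."""
    emap = block_energy_map(gray, block)
    order = np.argsort(emap.ravel())[::-1][:k]
    return [divmod(int(i), emap.shape[1]) for i in order]
```

Only the selected blocks would then be passed to the recognition stage (or stored), which is how such a scheme can also reduce the data volume kept by the surveillance system.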


Author(s):  
Jie Xu

Abstract Recent advances in object detection and face recognition have made it possible to develop practical video surveillance systems with embedded object detection and face recognition functionalities that are accurate and fast enough for commercial use. In this paper, we compare some of the latest approaches to object detection and face recognition and explain why they may or may not be among the best choices for video surveillance applications in terms of both accuracy and speed. We find that Faster R-CNN with Inception ResNet V2 achieves some of the best accuracies while maintaining real-time rates. Single Shot Detector (SSD) with MobileNet, on the other hand, is extremely fast and still accurate enough for most applications. For face recognition, FaceNet with Multi-task Cascaded Convolutional Networks (MTCNN) achieves higher accuracy than approaches such as DeepFace and DeepID2+ while being faster. An end-to-end video surveillance system is also proposed, which could serve as a starting point for more complex systems. Various experiments were performed on the trained models, with the observations explained in detail. We finish by discussing video object detection and video salient object detection approaches that could potentially serve as future improvements to the proposed system.
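The face-recognition stage of such an end-to-end system typically reduces to comparing fixed-length embeddings (e.g. the 128-D vectors produced by FaceNet) against a gallery of enrolled identities. The sketch below shows only that matching step with cosine similarity; the gallery contents and the similarity threshold are illustrative assumptions, and producing the embeddings themselves requires a trained network not shown here.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_identity(query_emb, gallery, threshold=0.6):
    """Return the best-matching enrolled identity, or None if no
    gallery embedding is similar enough to the query.

    gallery: dict mapping identity name -> embedding vector.
    """
    best_name, best_sim = None, -1.0
    for name, emb in gallery.items():
        sim = cosine_similarity(query_emb, emb)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name if best_sim >= threshold else None
```

In a deployed pipeline, faces detected by MTCNN would be embedded by the recognition network and matched this way; the threshold trades off false accepts against false rejects.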

