Automatic Moving Object Segmentation for Freely Moving Cameras

2014 ◽  
Vol 2014 ◽  
pp. 1-11 ◽  
Author(s):  
Yanli Wan ◽  
Xifu Wang ◽  
Hongpu Hu

This paper proposes a new moving object segmentation algorithm for freely moving cameras, which are common in outdoor surveillance, in-vehicle surveillance, and robot navigation systems. A two-layer affine transformation model optimization method is proposed for camera motion compensation: the outer-layer iteration filters out non-background feature points, while the inner-layer iteration estimates a refined affine model using RANSAC. The feature points are then classified into foreground and background according to the detected motion information, and a geodesic-based graph cut algorithm extracts the moving foreground from the classified features. Unlike existing methods based on global optimization or long-term feature-point tracking, our algorithm operates on only two successive frames to segment the moving foreground, which makes it suitable for online video processing applications. The experimental results demonstrate both the high accuracy and the fast speed of our algorithm.
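The two-layer iteration described above can be sketched in NumPy. This is a minimal illustration, not the paper's exact procedure: the function names, the residual threshold, the three-point minimal sample, and the synthetic noiseless setting are all assumptions made for clarity.

```python
import numpy as np

def estimate_affine(src, dst):
    # Least-squares affine fit: dst ≈ src @ A.T + t, a 6-parameter linear system.
    n = src.shape[0]
    X = np.zeros((2 * n, 6))
    X[0::2, 0:2] = src
    X[0::2, 4] = 1.0
    X[1::2, 2:4] = src
    X[1::2, 5] = 1.0
    p, *_ = np.linalg.lstsq(X, dst.reshape(-1), rcond=None)
    A = np.array([[p[0], p[1]], [p[2], p[3]]])
    t = np.array([p[4], p[5]])
    return A, t

def ransac_affine(src, dst, iters=200, thresh=2.0, seed=0):
    # Inner-layer iteration: RANSAC estimation of a refined affine model.
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)
        A, t = estimate_affine(src[idx], dst[idx])
        resid = np.linalg.norm(src @ A.T + t - dst, axis=1)
        inliers = resid < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on the consensus set for the final model.
    A, t = estimate_affine(src[best_inliers], dst[best_inliers])
    return A, t, best_inliers

def two_layer_affine(src, dst, outer_iters=3, thresh=2.0, seed=0):
    # Outer-layer iteration: drop feature points poorly explained by the
    # current camera model (candidate foreground) and re-run RANSAC.
    keep = np.ones(len(src), dtype=bool)
    for _ in range(outer_iters):
        A, t, _ = ransac_affine(src[keep], dst[keep], thresh=thresh, seed=seed)
        resid = np.linalg.norm(src @ A.T + t - dst, axis=1)
        keep = resid < thresh
    return A, t, keep  # keep == True marks background feature points
```

On synthetic matches where most points follow one affine motion and a minority move independently, the outer loop converges to a background mask and a camera model unbiased by the foreground.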

2009 ◽  
Vol 09 (04) ◽  
pp. 609-627 ◽  
Author(s):  
J. WANG ◽  
N. V. PATEL ◽  
W. I. GROSKY ◽  
F. FOTOUHI

In this paper, we address the problem of camera and object motion detection in the compressed domain. Camera motion estimation and moving object segmentation have been widely studied in a variety of contexts for video analysis, owing to their ability to provide essential clues for interpreting the high-level semantics of video sequences. A novel compressed-domain motion estimation and segmentation scheme is presented and applied in this paper. MPEG-2 compressed-domain information, namely Motion Vectors (MVs) and Discrete Cosine Transform (DCT) coefficients, is filtered and manipulated to obtain a dense and reliable Motion Vector Field (MVF) over consecutive frames. An iterative segmentation scheme based on the generalized affine transformation model is exploited to detect the global camera motion. The foreground spatiotemporal objects are then separated from the background by applying a temporal consistency check to the output of the iterative segmentation; this check coalesces the resulting foreground blocks and weeds out unqualified ones. Illustrative examples demonstrate the efficacy of the proposed approach.
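The iterative segmentation with a global affine motion model, followed by the temporal consistency check, might be sketched as below. This is a NumPy sketch over synthetic block-level motion vectors; the least-squares fit, threshold, and block grid are illustrative assumptions, and the MPEG-2 MV/DCT filtering stage is not shown.

```python
import numpy as np

def iterative_segmentation(centers, mvs, iters=5, thresh=2.0):
    # Iteratively fit a global (camera) affine motion model to the block motion
    # vectors and reject blocks that disagree with it as candidate foreground.
    # Model: MV(x, y) ≈ [x, y, 1] @ params, with params of shape (3, 2).
    X = np.hstack([centers, np.ones((len(centers), 1))])
    bg = np.ones(len(centers), dtype=bool)
    for _ in range(iters):
        params, *_ = np.linalg.lstsq(X[bg], mvs[bg], rcond=None)
        resid = np.linalg.norm(X @ params - mvs, axis=1)
        bg = resid < thresh
    return params, ~bg  # global-motion parameters and foreground block mask

def temporal_consistency(fg_prev, fg_curr):
    # Keep only foreground blocks flagged in consecutive frames,
    # weeding out spurious single-frame detections.
    return fg_prev & fg_curr
```

With a block grid undergoing a slight zoom plus pan and a small cluster of blocks moving independently, the iteration converges to the camera model and isolates the foreground blocks.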


2020 ◽  
Vol 13 (1) ◽  
pp. 60
Author(s):  
Chenjie Wang ◽  
Chengyuan Li ◽  
Jun Liu ◽  
Bin Luo ◽  
Xin Su ◽  
...  

Most scenes in practical applications are dynamic and contain moving objects, so accurately segmenting moving objects is crucial for many computer vision applications. To efficiently segment all the moving objects in a scene, regardless of whether an object has a predefined semantic label, we propose a two-level nested octave U-structure network with a multi-scale attention mechanism, called U2-ONet. U2-ONet takes as input two RGB frames, the optical flow between them, and the instance segmentation of the frames. Each stage of U2-ONet is filled with the newly designed octave residual U-block (ORSU block) to enhance the ability to obtain more contextual information at different scales while reducing the spatial redundancy of the feature maps. To train the multi-scale deep network efficiently, we introduce a hierarchical training supervision strategy that calculates the loss at each level and adds a knowledge-matching loss to keep the optimization consistent. Experimental results show that the proposed U2-ONet achieves state-of-the-art performance on several general moving object segmentation datasets.
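The hierarchical training supervision can be illustrated with a toy loss computation. This NumPy sketch assumes a per-level binary cross-entropy plus a KL-style knowledge-matching term pulling each side output toward the fused prediction; this is an assumption about the loss form for illustration, not the paper's exact definition.

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    # Per-pixel binary cross-entropy, averaged over the map.
    p = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(p) + (1 - target) * np.log(1 - p))

def kl_match(pred, fused, eps=1e-7):
    # Knowledge-matching term: KL divergence from the fused prediction to a
    # side output, keeping the levels' optimization consistent.
    p = np.clip(fused, eps, 1 - eps)
    q = np.clip(pred, eps, 1 - eps)
    return np.mean(p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q)))

def hierarchical_loss(side_outputs, fused, target, lam=0.5):
    # Supervise every level of the nested U-structure: sum the per-level
    # supervised loss and add the knowledge-matching loss at each level.
    loss = bce(fused, target)
    for s in side_outputs:
        loss += bce(s, target) + lam * kl_match(s, fused)
    return loss
```

The matching term vanishes when a side output agrees with the fused map, so it only penalizes levels that drift away from the consensus prediction.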

