Lattice-Based Background Motion Compensation for Detection of Moving Objects with a Single Moving Camera

Author(s):  
Yunseok Myung ◽  
Gyeonghwan Kim

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Jinchao Huang

Purpose
The multi-domain convolutional neural network (MDCNN) model has been widely used for object recognition and tracking in computer vision. However, if the tracked objects move rapidly or their appearances vary dramatically, the conventional MDCNN model suffers from the model drift problem. To solve this problem of tracking rapid objects under limiting environments, this paper proposes an auto-attentional mechanism-based MDCNN (AA-MDCNN) model for tracking rapidly moving and changing objects under limiting environments.
Design/methodology/approach
First, to distinguish the foreground object from the background and other similar objects, the auto-attentional mechanism selectively aggregates the weighted summation of all feature maps so that similar features become related to each other. Then, the bidirectional gated recurrent unit (Bi-GRU) architecture integrates all the feature maps, selectively emphasizing the importance of the correlated ones. The final feature map for object tracking is obtained by fusing these two feature maps. In addition, a composite loss function is constructed to handle sequences with similar but different attributes, which the conventional MDCNN model tracks poorly.
Findings
To validate the effectiveness and feasibility of the proposed AA-MDCNN model, the object tracking model was trained on the ImageNet-Vid dataset and validated on the OTB-50 dataset. Experimental results show that adding the auto-attentional mechanism improves the accuracy rate by 2.75% and the success rate by 2.41%, respectively. The authors also selected six complex tracking scenarios in the OTB-50 dataset; across the eleven attributes validated, the proposed AA-MDCNN model outperformed the comparative models on nine. Moreover, except for the scenario of multiple objects moving together, the proposed AA-MDCNN model handled the majority of rapid-moving-object tracking scenarios and outperformed the comparative models on these complex scenarios.
Originality/value
This paper introduces the auto-attentional mechanism into the MDCNN model and adopts the Bi-GRU architecture to extract key features. With the proposed AA-MDCNN model, rapid object tracking under complex backgrounds, motion blur and occlusion performs better, and the model is expected to be further applied to rapid object tracking in the real world.
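The weighted-summation step of the auto-attentional mechanism can be sketched as a similarity-weighted aggregation over feature maps. The NumPy snippet below is a minimal illustration of that general idea, not the authors' implementation; the shapes, the names and the softmax weighting are all assumptions.

```python
import numpy as np

def auto_attention(feature_maps):
    """Illustrative auto-attentional aggregation: each output map is a
    weighted summation of ALL input feature maps, with weights given by
    pairwise similarity (softmax over dot products). Assumed sketch only.

    feature_maps: array of shape (C, H*W) -- C flattened feature maps.
    """
    # Pairwise similarity between feature maps (C x C)
    sim = feature_maps @ feature_maps.T
    # Row-wise softmax so each map's mixing weights sum to 1
    sim = sim - sim.max(axis=1, keepdims=True)
    weights = np.exp(sim) / np.exp(sim).sum(axis=1, keepdims=True)
    # Weighted summation of all feature maps
    return weights @ feature_maps

fmap = np.random.default_rng(0).normal(size=(4, 16))
out = auto_attention(fmap)
```

Each output row mixes in the feature maps most similar to it, which is one way correlated features could be emphasized before an integration step such as the Bi-GRU.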


2013 ◽  
Vol 21 (9) ◽  
pp. 11568 ◽  
Author(s):  
Seung-Cheol Kim ◽  
Xiao-Bin Dong ◽  
Min-Woo Kwon ◽  
Eun-Soo Kim

2001 ◽  
Vol 13 (1) ◽  
pp. 102-120 ◽  
Author(s):  
Christopher Pack ◽  
Stephen Grossberg ◽  
Ennio Mingolla

Smooth pursuit eye movements (SPEMs) are eye rotations that are used to maintain fixation on a moving target. Such rotations complicate the interpretation of the retinal image, because they nullify the retinal motion of the target, while generating retinal motion of stationary objects in the background. This poses a problem for the oculomotor system, which must track the stabilized target image while suppressing the optokinetic reflex, which would move the eye in the direction of the retinal background motion (opposite to the direction in which the target is moving). Similarly, the perceptual system must estimate the actual direction and speed of moving objects in spite of the confounding effects of the eye rotation. This paper proposes a neural model to account for the ability of primates to accomplish these tasks. The model simulates the neurophysiological properties of cell types found in the superior temporal sulcus of the macaque monkey, specifically the medial superior temporal (MST) region. These cells process signals related to target motion and background motion, and receive an efference copy of eye velocity during pursuit movements. The model focuses on the interactions between cells in the ventral and dorsal subdivisions of MST, which are hypothesized to process target velocity and background motion, respectively. The model shows how these signals can be combined to account for behavioral data about pursuit maintenance and perceptual data from human studies, including the Aubert-Fleischl phenomenon and the Filehne Illusion, thereby clarifying the functional significance of neurophysiological data about these MST cell properties. It is suggested that the connectivity used in the model may represent a general strategy used by the brain in analyzing the visual world.
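The core combination the abstract describes, retinal motion plus an efference copy of eye velocity, can be illustrated with a textbook simplification: a linear sum with an efference-copy gain below 1. This is an assumed numerical illustration of the two illusions, not the paper's actual model circuitry; the gain value is arbitrary.

```python
def perceived_velocity(retinal_motion, eye_velocity, gain=0.8):
    """Illustrative (not the paper's model): perceived world velocity as
    retinal image motion plus a gain-scaled efference copy of eye velocity.
    A gain below 1.0 underestimates the eye's own contribution."""
    return retinal_motion + gain * eye_velocity

eye = 10.0  # deg/s rightward pursuit

# Pursued target: stabilized on the retina, so retinal motion is ~0.
# Perceived speed 8.0 deg/s < 10.0 deg/s: the Aubert-Fleischl phenomenon.
target = perceived_velocity(0.0, eye)

# Stationary background: retinal motion is opposite to the eye (-10 deg/s).
# Perceived speed -2.0 deg/s, i.e. it seems to drift: the Filehne Illusion.
background = perceived_velocity(-eye, eye)
```

With a gain of exactly 1 both illusions vanish, which is why the underweighted efference copy is the interesting case.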


2007 ◽  
Vol 04 (03) ◽  
pp. 227-236 ◽  
Author(s):  
TAEHO KIM ◽  
KANG-HYUN JO

A background is the part of an image sequence that does not vary much or change frequently. Using this assumption, an algorithm is presented for reconstructing the remaining background and detecting moving objects for both static and moving cameras. To generate the background, we detect regions of the current image that have a high correlation coefficient with prior pyramid images. These detected regions are used in two processes. First, we calculate the temporal displacement vector of each detected region and classify clusters of pixel intensity based on camera movement. Second, we calculate the temporally principal displacement vector using a histogram of the displacement vectors; this vector indicates the camera movement. Finally, we eliminate clusters whose weight is lower than a threshold and combine the remaining clusters for each pixel to generate multiple background clusters. Experimental results show the reconstructed background model and the detected moving objects under camera motion.
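The histogram step, estimating camera motion as the temporally principal displacement vector, can be sketched as taking the mode of the per-region displacement vectors. The function below is an assumed illustration of that idea, not the authors' code.

```python
from collections import Counter

def principal_displacement(displacements):
    """Assumed sketch: the principal displacement vector is the mode of
    the histogram of per-region displacement vectors; since most regions
    are background, this mode indicates the camera movement."""
    counts = Counter(map(tuple, displacements))
    return counts.most_common(1)[0][0]

# Region displacements: most follow the camera, a few track a moving object.
disp = [(2, 0)] * 7 + [(5, 3)] * 3
camera_motion = principal_displacement(disp)  # (2, 0)
```

Regions whose displacement disagrees with this principal vector are then candidates for moving objects rather than background.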


2002 ◽  
Vol 02 (02) ◽  
pp. 163-178 ◽  
Author(s):  
YING REN ◽  
CHIN SENG CHUA ◽  
YEONG KHING HO

This paper proposes a new background subtraction method for detecting moving objects (foreground) from a time-varied background. While background subtraction has traditionally worked well for stationary backgrounds, for a non-stationary viewing sensor, motion compensation can be applied but is difficult to realize to sufficient pixel accuracy in practice, and the traditional background subtraction algorithm fails. The problem is further compounded when the moving target to be detected/tracked is small, since the pixel error in motion compensating the background will subsume the small target. A Spatial Distribution of Gaussians (SDG) model is proposed to deal with moving object detection under motion compensation that has been approximately carried out. The distribution of each background pixel is temporally and spatially modeled. Based on this statistical model, a pixel in the current frame is classified as belonging to the foreground or background. For this system to perform under lighting and environmental changes over an extended period of time, the background distribution must be updated with each incoming frame. A new background restoration and adaptation algorithm is developed for the time-varied background. Test cases involving the detection of small moving objects within a highly textured background and a pan-tilt tracking system based on a 2D background mosaic are demonstrated successfully.
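One way to read the SDG idea, pixels modeled both temporally and spatially so that small motion-compensation errors are absorbed, is that a pixel counts as background if it matches the Gaussian of any pixel within a small neighborhood of its compensated position. The sketch below illustrates that reading only; the parameter names, the neighborhood rule and the thresholds are assumptions, not taken from the paper.

```python
import numpy as np

def classify_foreground(frame, mean, std, k=2.5, radius=1):
    """Assumed SDG-style sketch: a pixel is background if it falls within
    k standard deviations of the Gaussian of ANY pixel within `radius`,
    which tolerates approximate motion compensation."""
    h, w = frame.shape
    fg = np.ones((h, w), dtype=bool)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            # Compare each pixel against a spatially shifted Gaussian model
            shifted_mean = np.roll(np.roll(mean, dy, axis=0), dx, axis=1)
            shifted_std = np.roll(np.roll(std, dy, axis=0), dx, axis=1)
            match = np.abs(frame - shifted_mean) <= k * shifted_std
            fg &= ~match  # any match anywhere in the neighborhood => background
    return fg

rng = np.random.default_rng(1)
mean = np.full((8, 8), 100.0)   # per-pixel background mean
std = np.full((8, 8), 2.0)      # per-pixel background std
frame = mean + rng.normal(0, 1, (8, 8))
frame[3, 3] = 200.0             # a small bright moving object
mask = classify_foreground(frame, mean, std)
```

In a full system the per-pixel mean and std would be updated with each incoming frame, as the abstract's background restoration and adaptation step describes.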

