Spatio-temporal Modeling of Moving Objects for Content- and Semantic-Based Retrieval in Video Data

Author(s):  
Choon-Bo Shim ◽  
Yong-Won Shin


2021 ◽  
Author(s):  
Bo Shen ◽  
Rakesh R Kamath ◽  
Hahn Choo ◽  
Zhenyu Kong

<div>Background/foreground separation is one of the most fundamental tasks in computer vision, especially for video data. Robust PCA (RPCA) and its tensor extension, Robust Tensor PCA (RTPCA), provide an effective framework for background/foreground separation by decomposing the data into low-rank and sparse components, which contain the background and the foreground (moving objects), respectively. However, in real-world applications, video data is contaminated with noise. For example, in metal additive manufacturing (AM), the processed X-ray video used to study melt pool dynamics is very noisy. RPCA and RTPCA are not able to separate the background, foreground, and noise simultaneously; as a result, the noise contaminates the background, the foreground, or both, and needs to be removed from each. To achieve this three-term decomposition, a smooth sparse RTPCA (SS-RTPCA) model is proposed to decompose the data into a static background, a smooth foreground, and noise. Specifically, the static background is modeled by a low-rank Tucker decomposition; the smooth foreground (moving objects) is modeled by spatio-temporal continuity, enforced by total variation regularization; and the noise is modeled by sparsity, enforced by the ℓ1 norm. An efficient algorithm based on the alternating direction method of multipliers (ADMM) is implemented to solve the proposed model. Extensive experiments on both simulated and real data demonstrate that the proposed method significantly outperforms state-of-the-art approaches for background/foreground separation in noisy cases.</div>
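The low-rank + sparse decomposition that SS-RTPCA extends can be sketched in its simplest matrix form. The sketch below is plain RPCA solved by a fixed-penalty ADMM, with the low-rank part playing the role of the background and the sparse part the foreground; it deliberately omits the paper's Tucker structure and total variation term. The function names and parameter defaults are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def soft_threshold(X, tau):
    # Elementwise shrinkage: proximal operator of the l1 norm.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    # Singular-value shrinkage: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(soft_threshold(s, tau)) @ Vt

def rpca_admm(D, lam=None, mu=None, n_iter=300):
    """Decompose D into low-rank L (background) + sparse S (foreground).

    Solves min ||L||_* + lam * ||S||_1  s.t.  L + S = D with a
    fixed-penalty ADMM (default lam and mu follow common RPCA practice).
    """
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    if mu is None:
        mu = 0.25 * m * n / np.abs(D).sum()
    Y = np.zeros_like(D)  # scaled dual variable for the constraint L + S = D
    S = np.zeros_like(D)
    for _ in range(n_iter):
        L = svd_threshold(D - S + Y / mu, 1.0 / mu)   # low-rank update
        S = soft_threshold(D - L + Y / mu, lam / mu)  # sparse update
        Y = Y + mu * (D - L - S)                      # dual ascent
    return L, S
```

For video, each column of `D` would be a vectorized frame; the three-term SS-RTPCA model would add a third variable for noise and a TV-regularized update for the foreground.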




2021 ◽  
Vol 10 (3) ◽  
pp. 188
Author(s):  
Cyril Carré ◽  
Younes Hamdani

Over the last decade, innovative computer technologies and the multiplication of geospatial data acquisition solutions have transformed the geographic information systems (GIS) landscape and opened up new opportunities to close the gap between GIS and the dynamics of geographic phenomena. There is a demand to further develop spatio-temporal conceptual models to comprehensively represent the nature of the evolution of geographic objects. The latter involves a set of considerations like those related to managing changes and object identities, modeling possible causal relations, and integrating multiple interpretations. While conventional literature generally presents these concepts separately and rarely approaches them from a holistic perspective, they are in fact interrelated. Therefore, we believe that the semantics of modeling would be improved by considering these concepts jointly. In this work, we propose to represent these interrelationships in the form of a hierarchical pyramidal framework and to further explore this set of concepts. The objective of this framework is to provide a guideline to orient the design of future generations of GIS data models, enabling them to achieve a better representation of available spatio-temporal data. In addition, this framework aims at providing keys for a new interpretation and classification of spatio-temporal conceptual models. This work can be beneficial for researchers, students, and developers interested in advanced spatio-temporal modeling.


2021 ◽  
Vol 13 (2) ◽  
pp. 690
Author(s):  
Tao Wu ◽  
Huiqing Shen ◽  
Jianxin Qin ◽  
Longgang Xiang

Identifying stops from GPS trajectories is one of the main concerns in the study of moving objects and has a major effect on a wide variety of location-based services and applications. Although the spatial and non-spatial characteristics of trajectories have been widely investigated for the identification of stops, few studies have concentrated on the impacts of contextual features, which are also connected to the road network and nearby Points of Interest (POIs). In order to obtain more precise stop information from moving objects, this paper proposes and implements a novel approach that captures the spatio-temporal dynamic relationship between stopping behaviors and geospatial elements to detect stops. The relationship between the candidate stops produced by the standard time–distance threshold approach and the surrounding environmental elements is integrated in a complex structure (the mobility context cube) to extract stop features and precisely derive stops using a classifier. The methodology is designed to reduce the error rate of stop detection in trajectory data mining. It turns out that 26 features contribute to recognizing stop behaviors from trajectory data. Additionally, experiments on a real-world trajectory dataset further demonstrate the effectiveness of the proposed approach in improving the accuracy of identifying stops from trajectories.
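The standard time–distance threshold approach used to generate candidate stops can be sketched as follows: a candidate stop is a maximal run of points that stay within a distance threshold of an anchor point for at least a minimum duration. The function name, threshold defaults, and the planar `(x, y, t)` point format are assumptions for illustration; real trajectories would use geodesic distances on latitude/longitude.

```python
from math import hypot

def detect_stops(points, dist_thresh=50.0, time_thresh=300.0):
    """Candidate stops via the time-distance threshold approach.

    points: list of (x, y, t) tuples in meters/seconds, time-ordered.
    Returns a list of (start_idx, end_idx) index pairs, one per stop.
    """
    stops = []
    i, n = 0, len(points)
    while i < n:
        j = i + 1
        # Grow the window while points stay within dist_thresh of the anchor.
        while j < n and hypot(points[j][0] - points[i][0],
                              points[j][1] - points[i][1]) <= dist_thresh:
            j += 1
        # Accept the window as a stop only if it lasts long enough.
        if points[j - 1][2] - points[i][2] >= time_thresh:
            stops.append((i, j - 1))
            i = j
        else:
            i += 1
    return stops
```

In the paper's pipeline, each candidate interval would then be enriched with contextual features (road network, nearby POIs) before the final classifier decides whether it is a true stop.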


2012 ◽  
Vol 204-208 ◽  
pp. 2721-2725
Author(s):  
Hua Ji Zhu ◽  
Hua Rui Wu

Village land continually changes in the real world. To keep the data up-to-date, data producers need to update it frequently. When village land data are updated, the update information must be delivered to end-users to keep their client databases current. In the real world, village land changes in many forms. Identifying the change type of village land (i.e., capturing the semantics of change) and representing it in the data world helps end-users understand the change consistently and makes it convenient for them to integrate the change information into their databases. This work focuses on modeling spatio-temporal change. A three-tuple model, CAR, for representing spatio-temporal change is proposed, based on the village land feature sets before and after the change, the change type, and rules. In this model, C denotes the change type, A denotes the attribute set, and R denotes the rules for judging the change type. The rules are described by IF-THEN expressions; through operations between R and A, C is determined. This model overcomes the limitations of current methods, and its rules can be easily realized in a computer program.
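A minimal sketch of how CAR's IF-THEN rules (R) might map before/after attribute sets (A) to a change type (C). The attribute keys and change-type labels below are hypothetical illustrations, not the paper's actual rule set; an empty dict stands in for an absent feature.

```python
def classify_change(before, after):
    """Evaluate IF-THEN rules R over attribute sets A to determine change type C.

    before/after: dicts of feature attributes for the same village land parcel
    (hypothetical keys: "id", "geometry", "land_use"). An empty dict means the
    feature does not exist at that time.
    """
    if before and not after:          # IF feature vanished THEN ...
        return "disappearance"
    if after and not before:          # IF feature newly created THEN ...
        return "appearance"
    if before.get("geometry") != after.get("geometry"):
        if before.get("id") == after.get("id"):
            return "shape_change"     # same identity, different extent
        return "split_or_merge"       # geometry and identity both changed
    if before.get("land_use") != after.get("land_use"):
        return "attribute_change"     # thematic change only
    return "no_change"
```

Because each rule is a plain conditional over the attribute set, distributing update information as (before, after, C) triples lets end-users replay the same rules when integrating changes into their own databases.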


2008 ◽  
Vol 41 (1) ◽  
pp. 204-216 ◽  
Author(s):  
T. Xiang ◽  
M.K.H. Leung ◽  
S.Y. Cho
