Spatio-temporal image inpainting for video applications

2017 ◽  
Vol 14 (2) ◽  
pp. 229-244 ◽  
Author(s):  
Viacheslav Voronin ◽  
Vladimir Marchuk ◽  
Sergey Makov ◽  
Vladimir Mladenovic ◽  
Yigang Cen

Video inpainting, or completion, is a vital video enhancement technique used to repair or edit digital videos. This paper describes a framework for temporally consistent video completion. The proposed method removes dynamic objects or restores missing or tainted regions in a video sequence by utilizing spatial and temporal information from neighboring scenes. A masking algorithm detects scratches or damaged portions in video frames. The algorithm iteratively performs the following operations: acquire a frame; update the scene model; update the positions of moving objects; replace the parts of the frame occupied by objects marked for removal using a background model. In this paper, we extend an image inpainting algorithm based on texture and structure reconstruction by incorporating an improved strategy for video. Our algorithm is able to deal with a variety of challenging situations that naturally arise in video inpainting, such as the correct reconstruction of dynamic textures, multiple moving objects and a moving background. Experimental comparisons to state-of-the-art video completion methods demonstrate the effectiveness of the proposed approach. It is shown that the proposed spatio-temporal image inpainting method can restore missing blocks and remove text from scenes in videos. <br><br><font color="red"><b> This article has been retracted. Link to the retraction <u><a href="http://dx.doi.org/10.2298/SJEE1803373E">10.2298/SJEE1803373E</a></u></b></font>
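The per-frame loop described above can be sketched minimally in NumPy. This is an illustrative reconstruction, not the authors' implementation: the running-average scene model and the `alpha` learning rate are assumptions standing in for the paper's background model, and the texture/structure reconstruction step is omitted.

```python
import numpy as np

def inpaint_frame(frame, mask, background, alpha=0.05):
    """One iteration of the background-model loop: update the scene
    model from unmasked pixels, then fill the masked region from it.

    frame:      (H, W) float array, the current frame
    mask:       (H, W) bool array, True where pixels must be removed/restored
    background: (H, W) float array, running background (scene model) estimate
    alpha:      illustrative learning rate for the running average
    Returns the completed frame and the updated background model.
    """
    # Update the scene model only where the frame is not masked.
    background = np.where(mask, background,
                          (1 - alpha) * background + alpha * frame)
    # Replace masked regions with the background estimate.
    completed = np.where(mask, background, frame)
    return completed, background
```

In the full method, the masked pixels would additionally be refined by the texture- and structure-based image inpainting step rather than copied verbatim from the background model.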

Author(s):  
Gajanan Tudavekar ◽  
Santosh S. Saraf ◽  
Sanjay R. Patil

Video inpainting aims to complete the missing regions in video frames in a visually pleasing way. It is a challenging task due to the variety of motions across different frames. Existing methods usually use attention models to inpaint videos by retrieving the damaged content from other frames. Nevertheless, these methods suffer from irregular attention weights across the spatio-temporal dimensions, giving rise to artifacts in the inpainted video. To overcome this problem, the Spatio-Temporal Inference Transformer Network (STITN) has been proposed. The STITN aligns the frames to be inpainted and inpaints all the frames concurrently, and a spatio-temporal adversarial loss function further improves the STITN. Our method performs considerably better than existing deep learning approaches in quantitative and qualitative evaluation.


2018 ◽  
Vol 15 (3) ◽  
pp. 373-373
Author(s):  
E Editorial

The article entitled "Spatio-Temporal Image Inpainting for Video Applications", by authors Viacheslav Voronin, Vladimir Marchuk, Sergey Makov, Vladimir Mladenovic and Yigang Cen, published in Serbian Journal of Electrical Engineering, Vol. 14, No. 2, June 2017, pp. 229-244, DOI: https://doi.org/10.2298/SJEE170116004V, has been retracted from the Journal. The paper is withdrawn with the consent of the first author because the results had previously been partially published. <br><br><font color="red"><b> Link to the retracted article <u><a href="http://dx.doi.org/10.2298/SJEE170116004V">10.2298/SJEE170116004V</a></u></b></font>


2021 ◽  
Vol 13 (2) ◽  
pp. 690
Author(s):  
Tao Wu ◽  
Huiqing Shen ◽  
Jianxin Qin ◽  
Longgang Xiang

Identifying stops from GPS trajectories is one of the main concerns in the study of moving objects and has a major effect on a wide variety of location-based services and applications. Although the spatial and non-spatial characteristics of trajectories have been widely investigated for the identification of stops, few studies have concentrated on the impact of contextual features, which are also connected to the road network and nearby Points of Interest (POIs). In order to obtain more precise stop information from moving objects, this paper proposes and implements a novel approach that models the spatio-temporal relationship between stopping behaviors and geospatial elements to detect stops. Candidate stops obtained with the standard time-distance threshold approach are integrated with the surrounding environmental elements into a composite structure (the mobility context cube), from which stop features are extracted and stops are precisely derived using a classifier. The presented methodology is designed to reduce the error rate of stop detection in trajectory data mining. It turns out that 26 features can contribute to recognizing stop behaviors from trajectory data. Additionally, experiments on a real-world trajectory dataset further demonstrate the effectiveness of the proposed approach in improving the accuracy of identifying stops from trajectories.
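The candidate-stop stage mentioned above uses the standard time-distance threshold idea: a run of points that stays within a distance radius of its first point for at least a minimum duration is a candidate stop. A minimal sketch, with illustrative thresholds not taken from the paper (the paper then refines such candidates with contextual features and a classifier):

```python
from math import hypot

def detect_stops(points, max_dist=50.0, min_duration=120.0):
    """Candidate-stop extraction via the time-distance threshold method.

    points: list of (t, x, y) tuples, time in seconds and projected
            coordinates in metres, sorted by t.
    A candidate stop is a maximal run of points staying within `max_dist`
    of the run's first point and spanning at least `min_duration` seconds.
    Returns a list of (t_start, t_end, x, y) tuples.
    """
    stops, i, n = [], 0, len(points)
    while i < n:
        t0, x0, y0 = points[i]
        j = i
        # Extend the run while points remain inside the distance radius.
        while j + 1 < n and hypot(points[j + 1][1] - x0,
                                  points[j + 1][2] - y0) <= max_dist:
            j += 1
        # Keep the run only if it lasted long enough.
        if points[j][0] - t0 >= min_duration:
            stops.append((t0, points[j][0], x0, y0))
        i = j + 1
    return stops
```

Each candidate produced this way would then be joined with nearby road-network segments and POIs to form the contextual features fed to the classifier.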

