Multi-camera video synopsis of a geographic scene based on optimal virtual viewpoint
2021. Author(s): Yujia Xie, Meizhen Wang, Xuejun Liu, Xing Wang, Yiguang Wu, ...
2021, Vol 111, pp. 102988. Author(s): Subhankar Ghatak, Suvendu Rup, Himansu Didwania, M.N.S. Swamy

2016, Vol 26 (6), pp. 1058-1069. Author(s): Jianqing Zhu, Shengcai Liao, Stan Z. Li

2018, Vol 90 (8-9), pp. 1257-1267. Author(s): Zhe Chen, Guofang Lv, Li Lv, Tanghuai Fan, Huibin Wang

2013, Vol 433-435, pp. 297-300. Author(s): Zong Yue Wang

Video summaries provide a compact representation of a video while preserving its essential activities, but a summary can become confusing when different activities are mixed together. A clustered summarization approach, which displays similar activities simultaneously, makes viewing much easier and more efficient. However, generating such summaries is very time-consuming, especially the computation of motion distance and collision cost. To improve generation efficiency, a parallel video synopsis generation algorithm based on GPGPU is proposed. Experimental results show that GPU parallel computing greatly improves generation efficiency: the speedup ratio reaches 5.75 when the data size exceeds 1600×960×30000.
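The collision cost named above is the natural target for parallelization, since it must be evaluated for many pairs of activities and candidate time shifts. A minimal sketch of one such evaluation follows; the tube representation (per-frame bounding boxes), function name, and use of NumPy vectorization in place of an actual GPU kernel are all illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def collision_cost(tube_a, tube_b, shift_b):
    """Overlap cost between two activity tubes when tube_b is shifted
    in time by `shift_b` frames. Each tube is an (n_frames, 4) array of
    per-frame bounding boxes [x1, y1, x2, y2]; frames where the tubes
    do not coexist contribute zero cost. Representation is illustrative.
    """
    # Frame range where the (shifted) tubes coexist.
    start = max(0, shift_b)
    end = min(len(tube_a), len(tube_b) + shift_b)
    if start >= end:
        return 0.0
    a = tube_a[start:end]
    b = tube_b[start - shift_b:end - shift_b]
    # Vectorized per-frame box intersection area: this is the inner loop
    # a GPU kernel would evaluate for many (tube, shift) pairs at once.
    ix = np.maximum(0.0, np.minimum(a[:, 2], b[:, 2]) - np.maximum(a[:, 0], b[:, 0]))
    iy = np.maximum(0.0, np.minimum(a[:, 3], b[:, 3]) - np.maximum(a[:, 1], b[:, 1]))
    return float(np.sum(ix * iy))
```

On a GPU, each thread would compute one (tube pair, shift) cell of the cost table, which is what makes the large speedup plausible for big videos.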


Author(s): Shefali Gandhi, Tushar V. Ratanpara

Video synopsis provides a compact representation of a long surveillance video while preserving its essential activities. The activity of the original video is condensed into a shorter period by simultaneously displaying multiple activities that originally occurred in different time segments. Because activities are displayed in time segments different from the original video, the process begins with extracting moving objects. A temporal median algorithm is used to model the background, and foreground objects are detected by background subtraction. Each moving object is represented as a space-time activity tube in the video. A genetic algorithm is used for optimized temporal shifting of the activity tubes: the temporal arrangement that minimizes collisions while maintaining the chronological order of events is taken as the best solution. A time-lapse background video is then generated and used as the background for the synopsis video. Finally, the activity tubes are stitched onto the time-lapse background using Poisson image editing.
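The first two steps of this pipeline, temporal median background modeling followed by background subtraction, can be sketched compactly. The array shapes, threshold value, and function names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def median_background(frames):
    """Temporal median over a stack of grayscale frames
    (shape: n_frames x H x W). Pixels that show the static background
    in most frames dominate the per-pixel median, so transient moving
    objects are voted out of the model."""
    return np.median(frames, axis=0)

def foreground_mask(frame, background, thresh=25.0):
    """Background subtraction: pixels whose absolute difference from
    the background model exceeds `thresh` are flagged as foreground.
    The threshold value is an illustrative assumption."""
    return np.abs(frame.astype(float) - background) > thresh
```

Connected regions of the resulting masks, tracked over time, form the space-time activity tubes that the genetic algorithm then shifts temporally.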


2019, Vol 28 (8), pp. 3873-3884. Author(s): Tao Ruan, Shikui Wei, Jia Li, Yao Zhao

2008. Author(s): Teng Li, Tao Mei, In-So Kweon, Xian-Sheng Hua

2016, Vol 23 (1), pp. 11-14. Author(s): Ke Li, Bo Yan, Weiyi Wang, Hamid Gharavi
