video clustering
Recently Published Documents


TOTAL DOCUMENTS: 22 (FIVE YEARS: 2)

H-INDEX: 5 (FIVE YEARS: 0)

Author(s): Dr. Rajeev Tripathi

The volume of data stored in computer files and databases is growing rapidly, and users demand increasingly complex information from these databases. Video data in particular is growing exponentially in both storage and access, and the central problem it poses is efficient, high-quality, fast retrieval. This article discusses how video images are clustered. We assume that video clips have already been segmented into shots, each represented by a collection of key frames, so video clustering reduces to clustering still key-frame images. In a large database, finding the qualified data sets (clusters) is a time-consuming job. Video data mining involves multilingual text, numeric, image, video, audio, graphical, temporal, relational, and categorical data: any kind of information medium that can be represented, processed, and stored. Fast access to, or summarization of, clusters requires forming a significant frame set. Because of sampling error and test reliability in video, substantial changes spanning more than one frame are expected. The goal of this article is to show how a familiar and simple nonparametric statistical test (chi-square) can be used to select eligible data/frame sets for analysis. The chi-square model illustrated here is straightforward, sensible, fast, and computationally light. It further enhances skimming/summarization and clipping techniques, as well as video-database maintenance, from simple descriptors up to complex description schemes such as spatial, temporal, or high-dimensional indexing.
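The chi-square selection described above can be sketched as a frame-set filter: represent each frame by a normalized intensity histogram and admit a frame as a new key frame only when its chi-square distance from the previous key frame exceeds a threshold. This is a minimal illustration, not the paper's exact procedure; the histogram binning, the threshold value, and the function names are all assumptions introduced here.

```python
import numpy as np

def frame_histogram(frame, bins=16):
    # Normalized intensity histogram of an 8-bit frame (assumed range 0-255).
    h, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return h / h.sum()

def chi_square_distance(h1, h2, eps=1e-10):
    # Symmetric chi-square distance between two normalized histograms;
    # eps guards against division by zero in empty bins.
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def select_key_frames(frames, bins=16, threshold=0.2):
    """Return indices of frames whose histogram differs substantially
    (by chi-square distance) from the most recent key frame."""
    key_idx = [0]
    ref_hist = frame_histogram(frames[0], bins)
    for i in range(1, len(frames)):
        h = frame_histogram(frames[i], bins)
        if chi_square_distance(ref_hist, h) > threshold:
            key_idx.append(i)   # significant change: start a new frame set
            ref_hist = h
    return key_idx

# Usage: five dark frames followed by five bright frames should yield
# exactly two key frames, one opening each run.
frames = [np.full((8, 8), 10)] * 5 + [np.full((8, 8), 200)] * 5
print(select_key_frames(frames))
```

Because the comparison is always against the last accepted key frame rather than the immediate predecessor, gradual drift accumulates until it crosses the threshold, which matches the idea of detecting substantial changes that span more than one frame.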


2021, Vol E104.D (3), pp. 430-440
Author(s): Feng ZHANG, Di LIU, Cong LIU

2019, Vol 366, pp. 234-247
Author(s): Vinath Mekthanavanh, Tianrui Li, Jie Hu, Yan Yang

2016, Vol 10 (03), pp. 323-346
Author(s): Yixin Chen, Wen Wang, Wenbo He, Xiaofeng Li

Fuelled by advances in multimedia technologies, users across the world have witnessed a proliferation of online videos. Compared with the visual content of these videos, the textual content, for example titles, tags, or descriptions, has been more broadly exploited in real-world video data mining and information retrieval tasks. To enhance the understanding of videos and improve performance on tasks such as automatic video annotation, video clustering, and cross-modal tag cleansing, the textual and visual content of videos are combined through various methods. However, the absence of an empirical study of the properties of these two kinds of content leaves such methods on shaky ground for achieving satisfactory performance. In this paper, we therefore conduct such a study to verify the properties of textual content and draw insights from these analyses to promote further development of video data mining methods that combine the two.
