Rare Event Detection Using Disentangled Representation Learning

Author(s): Ryuhei Hamaguchi, Ken Sakurada, Ryosuke Nakamura
2020, Vol 39 (6), pp. 8463-8475

Author(s): Palanivel Srinivasan, Manivannan Doraipandian

Rare event detection can be performed with spatial-domain and frequency-domain procedures. Surveillance camera footage is now omnipresent and grows exponentially over time, and monitoring every event manually is impractical and time-consuming, so an automated rare event detection mechanism is required to make the process manageable. In this work, a Context-Free Grammar (CFG) is developed for detecting rare events in a video stream, and an Artificial Neural Network (ANN) is trained on the CFG. A set of dedicated algorithms performs frame splitting, edge detection, and background subtraction, and converts the processed data into a CFG. The CFG is then converted into nodes and edges to form a graph, which is given to the input layer of the ANN to classify events into normal and rare classes; the graph derived from the CFG of the input video stream is used to train the ANN. The performance of the developed Artificial Neural Network Based Context-Free Grammar Rare Event Detection (ACFG-RED) method is compared with existing techniques using metrics such as accuracy, precision, sensitivity, recall, average processing time, and average processing power. Better metric values were observed for the ACFG-RED model than for the other techniques, and the developed model provides a better solution for detecting rare events in video streams.
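The frame-processing stages named in the abstract (background subtraction, foreground extraction, graph construction) can be sketched in a few lines. This is a minimal, illustrative reconstruction, not the paper's code: the function names, toy 3x3 frames, and thresholds are assumptions, and the graph is built directly from the foreground mask rather than via a full CFG.

```python
def background_subtract(frame, background, threshold=10):
    """Flag pixels whose intensity differs from the background model
    by more than `threshold` (simple background subtraction)."""
    return [[1 if abs(f - b) > threshold else 0
             for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]


def foreground_graph(mask):
    """Turn a foreground mask into a graph: nodes are active pixels,
    edges connect 4-adjacent active pixels. A graph like this would
    feed the input layer of the classifier."""
    nodes = [(r, c) for r, row in enumerate(mask)
             for c, v in enumerate(row) if v]
    node_set = set(nodes)
    edges = [((r, c), (r + dr, c + dc))
             for (r, c) in nodes
             for dr, dc in ((0, 1), (1, 0))
             if (r + dr, c + dc) in node_set]
    return nodes, edges


# Toy frames: one bright pixel appears against a static background.
background = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
frame = [[0, 0, 0], [0, 50, 0], [0, 0, 0]]

mask = background_subtract(frame, background)
nodes, edges = foreground_graph(mask)
```

In the full pipeline, per-frame graphs over time would be classified by the ANN into normal and rare classes; here a single frame simply yields one foreground node and no edges.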


2014, Vol 409, pp. 54-61
Author(s): Adam J. Richards, Janet Staats, Jennifer Enzor, Katherine McKinnon, Jacob Frelinger, et al.

2021
Author(s): Marcos P. S. Gôlo, Rafael G. Rossi, Ricardo M. Marcacini

Events are phenomena that occur at a specific time and place, and detecting them can benefit society because knowledge can be extracted from them. Event detection is a multimodal task, since events have textual, geographical, and temporal components. Most multimodal research in the literature represents events by concatenating these components and uses multi-class or binary learning to detect events of interest, which intensifies the user's labeling effort: the user must label event classes even when there is no interest in detecting them. In this paper, we present the Triple-VAE approach, which learns a unified representation from textual, spatial, and density modalities through a variational autoencoder, one of the state-of-the-art methods in representation learning. Triple-VAE obtains event representations suitable for one-class classification, where users provide labels only for events of interest, thereby reducing the labeling effort. We carried out an experimental evaluation with ten real-world event datasets, four multimodal representation methods, and five evaluation metrics. Triple-VAE outperforms the other three representation methods with a statistically significant difference on all datasets. Therefore, Triple-VAE proved promising for representing events in the one-class event detection scenario.
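The one-class step can be illustrated independently of the VAE itself. In this minimal sketch, unified event embeddings (made-up 2-D vectors standing in for Triple-VAE's latent representations) are scored by Euclidean distance to the centroid of the labeled events of interest; the function names and the acceptance radius are illustrative assumptions, not the paper's method.

```python
import math


def centroid(vectors):
    """Component-wise mean of equal-length embedding vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]


def one_class_score(embedding, center):
    """Euclidean distance to the centroid of events of interest;
    lower scores mean 'more like the labeled events'."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(embedding, center)))


# Only events of interest are labeled (the one-class setting).
interest_embeddings = [[1.0, 0.0], [0.9, 0.1]]
center = centroid(interest_embeddings)

# Accept a new event if it falls within a radius tuned on validation data.
radius = 0.5
is_event_of_interest = one_class_score([0.95, 0.05], center) <= radius
```

The appeal of the one-class setting shown here is that the decision boundary is fit from positive labels alone, so the user never has to annotate the uninteresting event classes.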


2020, Vol 2020 (4)
Author(s): A. F. M. Fernandes, C. A. O. Henriques, R. D. P. Mano, D. González-Díaz, et al.

Cytometry, 1995, Vol 22 (4), pp. 317-322
Author(s): Mark A. Rehse, Stan Corpuz, Shelly Heimfeld, Mark Minie, Diane Yachimiak

2015, Vol 650, pp. 011001
Author(s): P Colas, I Giomataris, I Irastorza, Th Patzak
