Extraction of spatio-temporal information of earthquake event based on semantic technology

2015
Author(s):
Hong Fan
Dan Guo
Huaiyuan Li
Author(s):
Andrew Gothard
Daniel Jones
Andre Green
Michael Torrez
Alessandro Cattaneo
...

Event-driven neuromorphic imagers have a number of attractive properties, including low power consumption, high dynamic range, the ability to detect fast events, low memory consumption, and low bandwidth requirements. One of the biggest challenges in using event-driven imagery is that the field of event data processing is still embryonic, whereas decades of effort have been invested in the analysis of frame-based imagery. Hybrid approaches that apply established frame-based analysis techniques to event-driven imagery have been studied since event-driven imagers came into existence; however, the process of forming frames from event-driven imagery has not been studied in detail. This work presents a principled digital coded-exposure approach for forming frames from event-driven imagery, inspired by the physics exploited by the shutter of a conventional camera. The technique provides a fundamental tool for understanding the temporal information content that contributes to the formation of a frame from event-driven imagery. Event-driven imagery allows arbitrary virtual digital shutter functions to be applied on a pixel-by-pixel basis to form the final frame, so the proposed approach permits careful control of the spatio-temporal information captured in the frame. Unlike a conventional physical camera, event-driven imagery can be formed into any variety of frames in post-processing after the data is captured, and the coded-exposure virtual shutter functions can assume arbitrary values, including positive, negative, real, and complex values. The coded-exposure approach also enables applications of industrial interest, such as digital stroboscopy, without any additional hardware. The ability to form frames from event-driven imagery in a principled manner opens up new possibilities for applying conventional frame-based image-processing techniques to event-driven imagery.
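The coded-exposure idea summarized above can be sketched in a few lines: each event carries a pixel location, a timestamp, and a polarity, and a frame is formed by accumulating events weighted by a virtual shutter function evaluated at the event time. The sketch below is a minimal illustration under assumed conventions, not the authors' implementation; the event tuple layout and the `coded_exposure_frame` helper are hypothetical.

```python
import numpy as np

def coded_exposure_frame(events, shape, shutter, t0, t1):
    """Accumulate events into a frame, weighting each event by a
    virtual shutter function evaluated at the event timestamp.

    events  : iterable of (x, y, t, polarity) tuples
    shape   : (height, width) of the output frame
    shutter : callable t -> weight (may be negative or complex)
    t0, t1  : exposure window [t0, t1)
    """
    # Complex accumulator, since the abstract allows complex-valued shutters.
    frame = np.zeros(shape, dtype=complex)
    for x, y, t, p in events:
        if t0 <= t < t1:
            frame[y, x] += p * shutter(t)
    return frame

# Example: a conventional "box" shutter (weight 1 inside the window),
# analogous to the global shutter of a physical camera.
events = [(0, 0, 0.1, 1), (1, 0, 0.5, -1), (0, 1, 0.9, 1), (1, 1, 1.5, 1)]
frame = coded_exposure_frame(events, (2, 2), lambda t: 1.0, 0.0, 1.0)
# The last event falls outside the exposure window and is ignored.
```

A periodic shutter such as `lambda t: np.cos(2 * np.pi * f0 * t)` would realize the digital stroboscopy mentioned above; negative and complex weights are possible precisely because the "shutter" is applied digitally in post-processing.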


2017
Vol 10 (3)
pp. 34-47
Author(s):
Feriel Abdelkoui
Mohamed-Khireddine Kholladi

Recently, Twitter, as one of the major social networks, has come to be considered a rich source of spatio-temporal information and a valuable resource for data mining. Event detection from tweets can help to predict more serious real-world events, such as criminal incidents, natural hazards, and the spread of epidemics. This paper deals with event-based extraction of criminal incidents from Arabic tweets. It presents a framework that supports automated extraction of spatial and temporal information from tweets. The proposed approach combines various indicators, including the place names and temporal expressions that appear in the tweet message, the associated tweeting time, and additional locations from the user's profile. The effectiveness of the system was evaluated in terms of recall, precision, and F-measure.
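The evaluation metrics named at the end of the abstract have a standard definition that is easy to state in code. The sketch below is a generic illustration, not the authors' evaluation script; the `precision_recall_f` helper and the example counts are assumptions.

```python
def precision_recall_f(tp, fp, fn, beta=1.0):
    """Standard precision, recall, and F-beta measure computed from
    counts of true positives, false positives, and false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    b2 = beta * beta
    f = (1 + b2) * precision * recall / (b2 * precision + recall)
    return precision, recall, f

# Hypothetical counts: 80 correctly extracted incidents,
# 20 spurious extractions, 40 missed incidents.
p, r, f1 = precision_recall_f(80, 20, 40)
```

With `beta=1` this is the usual F1 (harmonic mean of precision and recall); `beta > 1` weights recall more heavily.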


2021
Vol 10 (3)
pp. 166
Author(s):
Hartmut Müller
Marije Louwsma

The Covid-19 pandemic put a heavy burden on member states in the European Union. To govern the pandemic, having access to reliable geo-information is key for monitoring the spatial distribution of the outbreak over time. This study aims to analyze the role of spatio-temporal information in governing the pandemic in the European Union and its member states. The European Nomenclature of Territorial Units for Statistics (NUTS) system and selected national dashboards from member states were assessed to analyze which spatio-temporal information was used, how the information was visualized, and whether this changed over the course of the pandemic. Initially, member states focused on their own jurisdiction by creating national dashboards to monitor the pandemic, and information between member states was not aligned. Producing reliable data and timely reporting proved problematic, as did selecting indicators to monitor the spatial distribution and intensity of the outbreak. Over the course of the pandemic, with more knowledge about the virus and its characteristics, interventions of member states to govern the outbreak became better aligned at the European level. However, further integration and alignment of public health data, statistical data, and spatio-temporal data could provide even better information for governments and actors involved in managing the outbreak, at both the national and supra-national levels. The Infrastructure for Spatial Information in Europe (INSPIRE) initiative and the NUTS system provide a framework to guide future integration and extension of existing systems.
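As a rough illustration of the NUTS coding scheme mentioned above: codes are hierarchical, starting from a two-character country code and adding one character per level (e.g. DE, DE1, DE11, DE111). The `nuts_level` helper below is hypothetical, a sketch assuming the Eurostat length convention rather than any official API.

```python
def nuts_level(code):
    """Infer the NUTS level of a territorial code from its length:
    2 characters = country code, 3 = NUTS 1, 4 = NUTS 2, 5 = NUTS 3."""
    n = len(code)
    if n < 2 or n > 5:
        raise ValueError(f"not a valid NUTS code: {code!r}")
    return n - 2

level_country = nuts_level("DE")    # country level (0)
level_region = nuts_level("DE11")   # NUTS 2 region
```

Dashboards that report cases per NUTS 2 or NUTS 3 unit can therefore be aligned across member states by truncating or matching code prefixes.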


2021
pp. 1-1
Author(s):
Quan-Dung Pham
Xuan Truong Nguyen
Khac-Thai Nguyen
Hyun Kim
Hyuk-Jae Lee

2018
Vol 4 (9)
pp. 107
Author(s):
Mohib Ullah
Ahmed Mohammed
Faouzi Alaya Cheikh

Articulation modeling, feature extraction, and classification are important components of pedestrian segmentation. Usually, these components are modeled independently of each other and then combined sequentially. However, this approach is prone to poor segmentation if any individual component is weakly designed. To cope with this problem, we propose a spatio-temporal convolutional neural network, named PedNet, which exploits temporal information for spatial segmentation. The backbone of PedNet is an encoder–decoder network for downsampling and upsampling the feature maps, respectively. The input to the network is a set of three frames, and the output is a binary mask of the segmented regions in the middle frame. Unlike classical deep models, where the convolution layers are followed by a fully connected layer for classification, PedNet is a Fully Convolutional Network (FCN). It is trained end-to-end, and segmentation is achieved without any pre- or post-processing. The main characteristic of PedNet is its unique design: it performs segmentation on a frame-by-frame basis, but it uses the temporal information from the previous and the next frame to segment the pedestrians in the current frame. Moreover, to combine the low-level features with the high-level semantic information learned by the deeper layers, we use long skip connections from the encoder to the decoder and concatenate the output of the low-level layers with that of the higher-level layers. This approach helps to obtain segmentation maps with sharp boundaries. To show the potential benefits of temporal information, we also visualized different layers of the network. The visualization showed that the network learned different information from the consecutive frames and then combined it optimally to segment the middle frame.
We evaluated our approach on eight challenging datasets in which humans are involved in different activities with severe articulation (football, road crossing, surveillance). On the widely used CamVid dataset, segmentation performance is compared against seven state-of-the-art methods. Performance is reported in terms of precision/recall, F1, F2, and mIoU. The qualitative and quantitative results show that PedNet achieves promising results against state-of-the-art methods, with substantial improvement in terms of all the performance metrics.
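The three-frame input scheme described above amounts to sliding a temporal window over the video and predicting a mask for the middle frame of each window. The sketch below is a generic illustration of that windowing, not PedNet's published code; the `frame_triples` helper is hypothetical.

```python
def frame_triples(frames):
    """Yield (previous, current, next) frame triples so that a
    spatio-temporal model can predict a segmentation mask for the
    middle frame using temporal context from both neighbours."""
    for i in range(1, len(frames) - 1):
        yield frames[i - 1], frames[i], frames[i + 1]

# With N frames, only frames 1..N-2 get a full triple, so the first
# and last frame of a clip have no prediction under this scheme.
video = ["f0", "f1", "f2", "f3"]
triples = list(frame_triples(video))
```

Each triple would be stacked along the channel axis and fed to the encoder–decoder; only the middle frame's ground-truth mask is needed per training sample.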

