A Scalable Architecture for Spatio-Temporal Range Queries over Big Location Data

Author(s):  
Rudyar Cortes ◽  
Olivier Marin ◽  
Xavier Bonnaire ◽  
Luciana Arantes ◽  
Pierre Sens


2010 ◽  
Vol 365 (1550) ◽  
pp. 2303-2312 ◽  
Author(s):  
Mark Hebblewhite ◽  
Daniel T. Haydon

In the past decade, ecologists have witnessed vast improvements in our ability to collect animal movement data through animal-borne technology, such as GPS or Argos systems. However, more data does not necessarily yield greater knowledge about animal ecology and conservation. In this paper, we review the major benefits, problems and potential misuses of GPS/Argos technology in animal ecology and conservation. The benefits are obvious, and include the ability to collect fine-scale spatio-temporal location data on many previously impossible-to-study animals, such as ocean-going fish, migratory songbirds and long-distance migratory mammals. These benefits come with significant problems, however, imposed by frequent collar failures and high costs, which often result in weaker study designs, reduced sample sizes and poorer statistical inference. In addition, we see the divorcing of biologists from a field-based understanding of animal ecology as a growing problem. Despite these difficulties, GPS devices have provided significant benefits, particularly for the conservation and ecology of wide-ranging species. We conclude by offering suggestions for ecologists on which kinds of ecological questions would currently benefit most from GPS/Argos technology, and where the technology has been potentially misused. Significant conceptual challenges remain, however, including the links between movement and behaviour, and between movement and population dynamics.


Author(s):  
Anh Tuan Truong

The development of location-based services and mobile devices has led to an increase in location data. Through the data mining process, valuable information can be discovered from location data. By the same token, an attacker may also extract private (sensitive) information about users, posing threats to user privacy. Therefore, location privacy protection has become an important requirement for the successful development of location-based services. In this paper, we propose a grid-based approach, together with an algorithm, to guarantee k-anonymity, a well-known privacy protection model, in a location database. The proposed approach considers only the information that is significant for the data mining process while ignoring unrelated information. The experimental results show the effectiveness of the proposed approach in comparison with existing approaches in the literature.
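To illustrate the general idea behind grid-based k-anonymity (a minimal sketch of how such an approach could work, not the paper's exact algorithm), the snippet below snaps each location to a grid cell and coarsens the grid until every occupied cell contains at least k users, so any reported location is indistinguishable among at least k individuals:

```python
from collections import Counter

def anonymize(points, k, cell=1.0, max_cell=64.0):
    """Return (cell_size, cell ids) after coarsening the grid until
    k-anonymity holds. Returns (None, None) if it cannot be satisfied."""
    while cell <= max_cell:
        cells = [(int(x // cell), int(y // cell)) for (x, y) in points]
        counts = Counter(cells)
        if all(c >= k for c in counts.values()):
            return cell, cells  # every occupied cell hides >= k users
        cell *= 2.0  # double the cell size and retry
    return None, None  # k-anonymity unreachable within max_cell

# Two users in adjacent unit cells: the size-1.0 grid fails k=2,
# so the grid is coarsened once to cell size 2.0.
size, ids = anonymize([(0.2, 0.2), (1.5, 1.5)], k=2)
assert size == 2.0 and len(set(ids)) == 1
```

The coarsening trades spatial precision for privacy: larger cells mean weaker utility for data mining but stronger anonymity guarantees, which is the central tension such approaches try to balance.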


Author(s):  
Geir M. Køien

Modern risk assessment methods cover many issues and encompass both risk analysis and corresponding prevention/mitigation measures. However, there is still room for improvement, and one aspect that may benefit from more work is “exposure control”. The “exposure” an asset experiences plays an important part in the risks facing that asset. Amongst the aspects that all too regularly get exposed are user identities and user location information, and in a context with mobile subscribers and mobility in service hosting (VM migration/mobility), the problems associated with lost identity/location privacy become urgent. In this paper we look at “exposure control” as a way of analyzing and protecting user identity and user location data.


Electronics ◽  
2020 ◽  
Vol 9 (3) ◽  
pp. 420
Author(s):  
Yan Yan ◽  
Bingqian Wang ◽  
Quan Z. Sheng ◽  
Adnan Mahmood ◽  
Tao Feng ◽  
...  

Centralized publishing of big location data can provide accurate and timely information to assist in traffic management, help people decide on travel times and routes, mitigate traffic congestion, and reduce unnecessary waste. However, the spatio-temporal correlation, non-linearity, randomness, and uncertainty of big location data make it impossible to determine an optimal publishing instant through traditional methods. This paper, accordingly, proposes a publishing interval prediction method for the centralized publication of big location data based on the promising paradigm of deep learning. First, an adaptive adjusted sampling method is designed to address the challenge of finding a reasonable release time via a prediction mechanism. Second, the Maximal Overlap Discrete Wavelet Transform (MODWT) is introduced to decompose the time series in order to separate the different features of big location data. Finally, different deep learning models are selected to construct the entire framework according to the various time-domain features. Experimental analysis suggests that the proposed prediction scheme is not only feasible, but also improves prediction accuracy in contrast to traditional deep learning mechanisms.
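To make the MODWT step concrete, the sketch below shows one level of a Haar-filter MODWT on a toy traffic series (pure Python for illustration; a real pipeline would use a wavelet library and several decomposition levels, and the input series here is hypothetical). Unlike the ordinary DWT, the MODWT is undecimated, so both outputs keep the input length and the series reconstructs additively, which is what lets separate models be trained per component:

```python
def modwt_haar_level1(x):
    """Level-1 Haar MODWT with circular (periodic) boundary handling.

    Returns (detail, smooth): high-pass fluctuations and low-pass trend,
    each the same length as the input (the MODWT does not downsample).
    """
    n = len(x)
    detail = [(x[t] - x[t - 1]) / 2.0 for t in range(n)]  # high-pass
    smooth = [(x[t] + x[t - 1]) / 2.0 for t in range(n)]  # low-pass
    return detail, smooth

# Hypothetical hourly vehicle counts at one road segment.
counts = [120.0, 135.0, 180.0, 240.0, 230.0, 160.0, 110.0, 95.0]
d, s = modwt_haar_level1(counts)

# Additive reconstruction: each sample is exactly smooth + detail, so the
# components can be modelled separately and re-summed at prediction time.
assert all(abs((d[t] + s[t]) - counts[t]) < 1e-9 for t in range(len(counts)))
```

The smooth component captures the slow daily trend while the detail component isolates rapid fluctuations; routing each to a model suited to its time-domain character is the framework-construction idea the abstract describes.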


2020 ◽  
pp. 1471082X1787033 ◽  
Author(s):  
Francesco Finazzi ◽  
Lucia Paci

Localizing people across space and over time is a relevant and challenging problem in many modern applications. Smartphone ubiquity gives the opportunity to collect useful individual data as never before. In this work, the focus is on location data collected by smartphone applications. We propose a kernel-based density estimation approach that exploits cyclical spatio-temporal patterns of people to estimate the individual location density at any time, uncertainty included. Model parameters are estimated by maximum likelihood cross-validation. Unlike classic tracking methods designed for high spatio-temporal resolution data, the approach is suitable when location data are sparse in time and are affected by non-negligible errors. The approach is applied to location data collected by the Earthquake Network citizen science project which carries out a worldwide earthquake early warning system based on smartphones. The approach is parsimonious and is suitable to model location data gathered by any location-aware smartphone application.
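A bare-bones sketch of the kernel idea: weight each past smartphone fix by its cyclical time-of-day proximity to the query time, then evaluate a Gaussian spatial kernel. The bandwidths, 24 h period, and coordinates below are illustrative assumptions, not the paper's fitted values (those are chosen by maximum likelihood cross-validation):

```python
import math

def location_density(obs, t_query, x, y, h_s=0.01, h_t=2.0, period=24.0):
    """Relative location density at (x, y) for clock time t_query (hours).

    obs: list of (lon, lat, t) past fixes for one individual.
    A Gaussian spatial kernel (bandwidth h_s, degrees) is weighted by a
    Gaussian kernel on the cyclical time-of-day distance (bandwidth h_t).
    """
    num, den = 0.0, 0.0
    for (lon, lat, t) in obs:
        dt = abs(t - t_query) % period
        dt = min(dt, period - dt)              # wrap-around clock distance
        w = math.exp(-0.5 * (dt / h_t) ** 2)   # temporal weight
        k = math.exp(-0.5 * ((x - lon) ** 2 + (y - lat) ** 2) / h_s ** 2)
        num += w * k
        den += w
    return num / den if den > 0 else 0.0

# Hypothetical fixes: "home" around 08:00, "office" around 20:00.
obs = [(0.0, 0.0, 8.0), (0.1, 0.1, 20.0)]
assert location_density(obs, 8.0, 0.0, 0.0) > location_density(obs, 8.0, 0.1, 0.1)
```

Because the weighting depends only on time of day rather than on a continuous track, the estimate degrades gracefully when fixes are sparse in time, which is the regime the approach targets.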

