Data Architecture for Sensor Network

2011 ◽  
Vol 7 ◽  
pp. 31-38
Author(s):  
Jan Ježek

The fast development of hardware in recent years has led to the high availability of simple sensing devices at minimal cost. As a consequence, there are many sensor networks nowadays. These networks can continuously produce a large amount of observed data, including the location of each measurement. An optimal data architecture for such a purpose is a challenging issue due to its large scale and spatio-temporal nature. The aim of this paper is to describe a data architecture that was used in a particular solution for the storage of sensor data. This solution is based on the relational data model, concretely PostgreSQL and PostGIS. We will mention our experience from real-world projects focused on car monitoring and on agricultural sensor networks. We will also briefly demonstrate the possibilities of the client-side API and the potential of other open-source libraries that can be used for cartographic visualization (e.g., GeoServer). The main objective is to describe the strengths and weaknesses of using a relational database system for such a purpose and also to introduce alternative approaches based on the NoSQL concept.
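The relational layout described above can be sketched in miniature. The schema and query below are hypothetical illustrations (the paper's actual PostgreSQL/PostGIS tables may differ); SQLite stands in for PostgreSQL, and plain coordinate comparisons stand in for PostGIS geometry types and spatial indexes.

```python
import sqlite3

# Minimal relational sketch of a sensor-observation table (hypothetical
# schema; a real PostGIS deployment would use a GEOMETRY column and a
# spatial index instead of raw lon/lat REAL columns).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE observation (
        sensor_id   TEXT,
        observed_at TEXT,   -- ISO 8601 timestamp
        lon REAL, lat REAL, -- PostGIS: GEOMETRY(Point, 4326)
        value REAL
    )""")
rows = [
    ("car-17",  "2011-05-01T10:00:00", 13.38, 49.75, 52.0),
    ("car-17",  "2011-05-01T10:00:30", 13.39, 49.75, 54.5),
    ("soil-03", "2011-05-01T10:00:00", 13.40, 49.74, 21.3),
]
conn.executemany("INSERT INTO observation VALUES (?,?,?,?,?)", rows)

# Spatio-temporal filter: bounding box plus time window (PostGIS would
# express the spatial part with ST_Within or the && operator).
hits = conn.execute("""
    SELECT sensor_id, value FROM observation
    WHERE lon BETWEEN 13.385 AND 13.41
      AND lat BETWEEN 49.73  AND 49.76
      AND observed_at >= '2011-05-01T10:00:00'
""").fetchall()
```

The single wide table is the simplest relational shape for observations; the scaling pressure the abstract mentions appears when such a table grows to billions of rows, which is what motivates partitioning or NoSQL alternatives.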

Sensor Review ◽  
2019 ◽  
Vol 39 (2) ◽  
pp. 208-217 ◽  
Author(s):  
Jinghan Du ◽  
Haiyan Chen ◽  
Weining Zhang

Purpose In large-scale monitoring systems, sensors in different locations are deployed to collect massive amounts of useful time-series data, which can help in real-time data analytics and its related applications. However, affected by the hardware itself, sensor nodes often fail to work, resulting in the common phenomenon that the collected data are incomplete. The purpose of this study is to predict and recover the missing data in sensor networks. Design/methodology/approach Considering the spatio-temporal correlation of large-scale sensor data, this paper proposes a data recovery model for sensor networks based on a deep learning method, i.e. the deep belief network (DBN). Specifically, when one sensor fails, its own historical time-series data and the real-time data from surrounding sensor nodes that the proposed similarity filter identifies as highly similar to the failed node are collected first. Then, the high-level feature representation of these spatio-temporally correlated data is extracted by the DBN. Moreover, to determine the structure of the DBN model, a reconstruction-error-based algorithm is proposed. Finally, the missing data are predicted from these features by a single-layer neural network. Findings This paper collects a noise data set from an airport monitoring system for experiments. Various comparative experiments show that the proposed algorithms are effective. The proposed data recovery model is compared with several other classical models, and the experimental results show that the deep learning-based model achieves not only better prediction accuracy but also better training time and model robustness. Originality/value A deep learning method is investigated for the data recovery task and proved effective compared with previous methods. This might provide practical experience in the application of deep learning methods.
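The similarity-filter step of the pipeline above can be sketched briefly. This is a simplified stand-in, not the paper's method: the DBN feature extraction and single-layer network are replaced by a plain offset correction, and all data are synthetic; only the "select highly correlated neighbours, then predict from them" structure is illustrated.

```python
import math

def pearson(a, b):
    """Pearson correlation of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return cov / var if var else 0.0

# Historical windows: the failed sensor plus candidate neighbours
# (synthetic values for illustration).
failed_hist = [20.1, 20.4, 21.0, 21.5, 22.0]
neighbors = {
    "s1": [19.8, 20.2, 20.9, 21.3, 21.9],  # tracks the failed sensor
    "s2": [5.0, 5.1, 4.9, 5.2, 5.0],       # unrelated signal
}

# Similarity filter: keep neighbours whose historical correlation with
# the failed sensor exceeds a threshold.
similar = [sid for sid, h in neighbors.items()
           if pearson(failed_hist, h) > 0.9]

# Recovery step (stand-in for DBN features + single-layer network):
# estimate the missing reading from offset-corrected neighbour readings.
current = {"s1": 22.4, "s2": 5.1}
offset = {sid: failed_hist[-1] - neighbors[sid][-1] for sid in similar}
estimate = sum(current[sid] + offset[sid] for sid in similar) / len(similar)
```

The filter is what keeps the unrelated sensor `s2` from polluting the estimate; in the paper the subsequent prediction is learned rather than the fixed offset rule used here.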


Author(s):  
Corinna Schmitt ◽  
Georg Carle

Today, researchers want to collect as much data as possible from different locations for monitoring reasons. In this context, large-scale wireless sensor networks are becoming an active topic of research (Kahn1999). Because of the different locations and environments in which these sensor networks can be used, specific requirements for the hardware apply. The hardware of the sensor nodes must be robust, provide sufficient storage and communication capabilities, and operate on limited power resources. Sensor nodes such as the Berkeley Mote family (Polastre2006, Schmitt2006) are capable of meeting these requirements. These sensor nodes are small, light devices with radio communication and the capability to collect sensor data. In this chapter the authors review the key elements of sensor networks and give an overview of possible applications in the field of monitoring.


2003 ◽  
Vol 1836 (1) ◽  
pp. 111-117
Author(s):  
Taek M. Kwon ◽  
Nirish Dhruv ◽  
Siddharth A. Patwardhan ◽  
Eil Kwon

Intelligent transportation system (ITS) sensor networks, such as road weather information and traffic sensor networks, typically generate enormous amounts of data. As a result, archiving, retrieval, and exchange of ITS sensor data for planning and performance analysis are becoming increasingly difficult. An efficient ITS archiving system that is compact and exchangeable and allows efficient and fast retrieval of large amounts of data is essential. A proposal is made for a system that can meet the present and future archiving needs of large-scale ITS data. This system is referred to as common data format (CDF) and was developed by the National Space Science Data Center for archiving, exchange, and management of large-scale scientific array data. CDF is an open system that is free and portable and includes self-describing data abstraction. Archiving traffic data by using CDF is demonstrated, and its archival and retrieval performance is presented for the Minnesota Department of Transportation's 30-s traffic data collected from about 4,000 loop detectors around Twin Cities freeways. For comparison of the archiving performance, the same data were archived by using a commercially available relational database, which was evaluated for its archival and retrieval performance. This result is presented, along with reasons that CDF is a good fit for large-scale ITS data archiving, retrieval, and exchange of data.
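The efficiency argument for array formats like CDF can be made concrete with a small sketch. This is not CDF itself (which has its own self-describing file layout and libraries); the stdlib snippet below only illustrates the underlying property the abstract relies on: typed fixed-width records permit direct random access by offset arithmetic, with no parsing of intervening data.

```python
import array
import io
import struct

# Hypothetical 30-s volume counts: one row per loop detector, one
# column per 30-s interval. Array formats store such data as typed,
# fixed-width records, which is what makes them compact and fast.
counts = [
    [12, 15, 11, 14],  # detector 0
    [7,   9,  8, 10],  # detector 1
    [21, 19, 22, 20],  # detector 2
]
width = len(counts[0])

packed = io.BytesIO()
for row in counts:
    packed.write(array.array("h", row).tobytes())  # int16 per count

# Random access: read interval 2 of detector 1 directly by offset,
# (row * width + col) * sizeof(int16), without scanning the file.
packed.seek((1 * width + 2) * 2)
(value,) = struct.unpack("h", packed.read(2))
```

A relational database answers the same lookup via an index probe and row decoding; for dense regular arrays like 30-s detector data, the fixed-stride layout is both smaller and cheaper to slice, which matches the paper's comparison.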


2012 ◽  
Vol 8 (4) ◽  
pp. 708762 ◽  
Author(s):  
Sungmo Jung ◽  
Jae Young Ahn ◽  
Dae-Joon Hwang ◽  
Seoksoo Kim

In ubiquitous healthcare systems, machine-to-machine (M2M) communication promises large opportunities, as it utilizes rapidly developing technologies for large-scale networking of devices to monitor patients without dependence on human interaction. With the emergence of wireless multimedia sensor networks (WMSNs), M2M communications improve continuous monitoring as well as the transmission and retrieval of multimedia content such as video and audio streams, images, and sensor data from the patient being monitored. This research deploys a WMSN for continuous monitoring and tracking of target patients for preventive ubiquitous healthcare. The study applies an optimization scheme for movement coordination and data routing within the monitored area. A movement tracking algorithm is proposed for better patient tracking and to aid the optimal deployment of wireless sensor networks. Results show that the optimization scheme is capable of providing scalable and reliable patient monitoring.
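One common shape for such a movement tracking scheme can be sketched as follows. This is a hypothetical illustration, not the paper's algorithm: the patient's next position is predicted by linear extrapolation from the last two observations, and only nodes within radio range of the prediction are woken, which is the usual way tracking reduces network-wide energy use.

```python
# Hedged sketch of movement-coordinated monitoring (illustrative only).

def predict_next(p_prev, p_curr):
    """Linear extrapolation: assume the patient keeps the same velocity."""
    return (2 * p_curr[0] - p_prev[0], 2 * p_curr[1] - p_prev[1])

def nodes_to_wake(nodes, target, radius):
    """Wake only the sensor nodes within `radius` of the predicted spot."""
    return sorted(nid for nid, (x, y) in nodes.items()
                  if (x - target[0]) ** 2 + (y - target[1]) ** 2
                  <= radius ** 2)

# Hypothetical node placement and patient track.
nodes = {"n1": (2.0, 2.0), "n2": (6.0, 2.0), "n3": (10.0, 2.0)}
target = predict_next((1.0, 2.0), (3.0, 2.0))   # patient moving right
awake = nodes_to_wake(nodes, target, radius=3.0)
```

Nodes far from the predicted position stay asleep, so the energy cost scales with the patient's neighbourhood rather than the whole deployment; the paper's optimization scheme additionally coordinates routing among the awake nodes.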


Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2279
Author(s):  
Lauri Lovén ◽  
Tero Lähderanta ◽  
Leena Ruha ◽  
Ella Peltonen ◽  
Ilkka Launonen ◽  
...  

Spatio-temporal interpolation provides estimates of observations in unobserved locations and time slots. In smart cities, interpolation helps to provide a fine-grained contextual and situational understanding of the urban environment, in terms of both short-term (e.g., weather, air quality, traffic) and long-term (e.g., crime, demographics) spatio-temporal phenomena. Various initiatives improve spatio-temporal interpolation results by including additional data sources such as vehicle-fitted sensors, mobile phones, or micro weather stations of, for example, smart homes. However, the underlying computing paradigm in such initiatives is predominantly centralized, with all data collected and analyzed in the cloud. This solution is not scalable: as the spatial and temporal density of sensor data grows, the required transmission bandwidth and computational capacity become infeasible. To address the scaling problem, we propose EDISON: algorithms for distributed learning and inference, and an edge-native architecture for distributing spatio-temporal interpolation models, their computations, and the observed data vertically and horizontally between device, edge and cloud layers. We demonstrate EDISON functionality in a controlled, simulated spatio-temporal setup with 1 M artificial data points. While the main motivation of EDISON is the distribution of the heavy computations, the results show that EDISON also provides an improvement over alternative approaches, reaching at best a 10% smaller RMSE than a global interpolation and a 6% smaller RMSE than a baseline distributed approach.
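The distribution idea can be illustrated with the simplest interpolator. The sketch below is not EDISON's model: it uses plain inverse-distance weighting, with each edge node computing partial weighted sums over only its own observations and the cloud layer merely combining them, so raw observations never leave the edge.

```python
# Hedged sketch of edge-distributed interpolation (illustrative only).

def partial_idw(observations, query, p=2):
    """Partial inverse-distance-weighted sums over one edge node's data.

    Returns (weighted value sum, weight sum); combining these across
    edge nodes yields the same estimate as a centralized IDW.
    """
    num = den = 0.0
    for (x, y), value in observations:
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        w = 1.0 / (d2 ** (p / 2) or 1e-12)  # guard exact hits
        num += w * value
        den += w
    return num, den

# Hypothetical partition: each edge node holds its own observations.
edges = [
    [((0.0, 0.0), 10.0), ((0.0, 2.0), 12.0)],  # edge node A
    [((2.0, 0.0), 14.0), ((2.0, 2.0), 16.0)],  # edge node B
]

# Cloud layer: combine the per-edge partial sums into one estimate.
partials = [partial_idw(obs, query=(1.0, 1.0)) for obs in edges]
estimate = sum(n for n, _ in partials) / sum(d for _, d in partials)
```

Because IDW decomposes into sums, the combined result is identical to a centralized computation while only two numbers per edge node cross the network; EDISON applies the same decomposition principle to richer interpolation models.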


Author(s):  
Joel H. Saltz ◽  
George Teodoro ◽  
Tony Pan ◽  
Lee A.D. Cooper ◽  
Jun Kong ◽  
...  

Author(s):  
Mark Roantree ◽  
Alan F. Smeaton ◽  
Noel E. O’Connor ◽  
Vincent Andrieu ◽  
Nicolas Legeay ◽  
...  

One of the more recent sources of large volumes of generated data is sensor devices, where dedicated sensing equipment is used to monitor events and happenings in a wide range of domains, including monitoring human biometrics and behaviour. This chapter proposes an approach to, and an implementation of, semi-automated enrichment of raw sensor data, where the sensor data can come from a wide variety of sources. The authors extract semantics from the sensor data using their XSENSE processing architecture in a multi-stage analysis. The net result is that sensor data values are transformed into XML data so that well-established XML querying via XPath and similar techniques can be followed. The authors then propose to distribute the XML data in a peer-to-peer configuration and show, through simulations, what the computational costs of executing queries on this P2P network will be. This approach is validated through the use of an array of sensor data readings taken from a range of biometric sensor devices fitted to movie-watchers as they watched Hollywood movies. These readings were synchronised with video and audio analysis of the movies themselves, in which movie highlights are automatically detected, which the authors try to correlate with observed human reactions. The XSENSE architecture is used to semantically enrich both the biometric sensor readings and the outputs of video analysis into one large sensor database. This chapter thus presents and validates a scalable means of semi-automating the semantic enrichment of sensor data, thereby providing a means of large-scale sensor data management that is a necessary step in supporting data mining from sensor networks.
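The core enrichment step, raw readings becoming queryable XML, can be sketched in a few lines. The element and attribute names below are illustrative, not the actual XSENSE schema; Python's `xml.etree.ElementTree` supports the subset of XPath needed for attribute-predicate queries of this kind.

```python
import xml.etree.ElementTree as ET

# Raw biometric readings as (timestamp, sensor, value) tuples
# (synthetic values; a hypothetical stand-in for device output).
raw = [(0, "gsr", 0.41), (1, "gsr", 0.52), (1, "heart_rate", 88.0)]

# Enrichment: wrap the flat readings in XML structure so standard
# XPath-style querying applies. Names here are illustrative only.
root = ET.Element("session", movie="example")
for ts, sensor, value in raw:
    el = ET.SubElement(root, "reading", sensor=sensor, t=str(ts))
    el.text = str(value)

# Query via ElementTree's XPath subset: all galvanic-skin-response
# readings, regardless of where they sit in the tree.
gsr = [float(r.text) for r in root.findall(".//reading[@sensor='gsr']")]
```

Once readings share one XML vocabulary, the same query runs unchanged whether the data came from a biometric sensor or from the video-analysis stage, which is what lets the chapter merge both into a single sensor database.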


2007 ◽  
Vol 3 (1) ◽  
pp. 23-40 ◽  
Author(s):  
S. Selvakennedy ◽  
S. Sinnappan

Future large-scale sensor networks may comprise thousands of wirelessly connected sensor nodes that could provide an unimaginable opportunity to interact with physical phenomena in real time. However, the nodes are typically highly resource-constrained. Since the communication task is a significant power consumer, various attempts have been made to introduce energy-awareness at different levels within the communication stack. Clustering is one such attempt to control energy dissipation for sensor data dissemination in a multihop fashion. The Time-Controlled Clustering Algorithm (TCCA) is proposed to realize a network-wide energy reduction. A realistic energy dissipation model is derived probabilistically to quantify the sensor network's energy consumption using the proposed clustering algorithm. A discrete-event simulator is developed to verify the mathematical model and to further investigate TCCA in other scenarios. The simulator is also extended to include the rest of the communication stack to allow a comprehensive evaluation of the proposed algorithm.
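The clustering idea behind such schemes can be sketched compactly. This is a generic probabilistic cluster-formation sketch, not TCCA itself: TCCA additionally bounds cluster growth with a time-to-live so cluster size is controlled by how long the formation message travels, which is omitted here.

```python
import random

# Hedged sketch of cluster-based data dissemination (illustrative only).
random.seed(7)

# Hypothetical deployment: 20 nodes at random positions in a unit square.
nodes = {i: (random.random(), random.random()) for i in range(20)}

# Each node elects itself cluster head with probability p; guard against
# the (unlikely) case of no heads so every node can still join a cluster.
p = 0.2
heads = [i for i in nodes if random.random() < p] or [0]

def nearest_head(pos):
    """Non-head nodes join the closest cluster head (squared distance)."""
    return min(heads, key=lambda h: (nodes[h][0] - pos[0]) ** 2
                                    + (nodes[h][1] - pos[1]) ** 2)

clusters = {h: [] for h in heads}
for i, pos in nodes.items():
    clusters[nearest_head(pos)].append(i)
```

Members transmit only the short hop to their head, and only heads relay aggregated data onward, which is where the network-wide energy reduction comes from; TCCA's time control keeps individual clusters from growing so large that the head's relay burden cancels the saving.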

