Using Machine Learning Methods to Provision Virtual Sensors in Sensor-Cloud

Sensors ◽  
2020 ◽  
Vol 20 (7) ◽  
pp. 1836
Author(s):  
Ming-Zheng Zhang ◽  
Liang-Min Wang ◽  
Shu-Ming Xiong

The advent of sensor-cloud technology alleviates the limitations of traditional wireless sensor networks (WSNs) in terms of energy, storage, and computing, and has tremendous potential in various agricultural Internet of Things (IoT) applications. In the sensor-cloud environment, virtual sensor provisioning is an essential task: it chooses physical sensors to create virtual sensors in response to users’ requests. Considering the capricious outdoor meteorological environment, this paper presents a measurement-similarity-based virtual-sensor provisioning scheme that takes advantage of machine learning for data analysis. First, to distinguish the changing trends, we classified all the physical sensors into several categories using historical data. Then, the k-means clustering algorithm was applied within each class to cluster the physical sensors with high similarity. Finally, one representative physical sensor from each cluster was selected to create the corresponding virtual sensor. The experimental results show that our scheme improves on the benchmark schemes with respect to energy efficiency, network lifetime, and data accuracy.
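The cluster-then-select step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the sensor IDs, the two-reading histories, and the plain-Python k-means are all assumptions made for the example.

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means over sensors' historical readings (one vector per sensor)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[nearest].append(p)
        centroids = [
            [sum(dim) / len(c) for dim in zip(*c)] if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

def representatives(sensor_histories, k):
    """Select one representative per cluster (the member closest to the
    centroid); each representative backs one virtual sensor."""
    names = list(sensor_histories)
    centroids, clusters = kmeans([sensor_histories[n] for n in names], k)
    reps = []
    for centroid, members in zip(centroids, clusters):
        if not members:
            continue
        best = min(members, key=lambda p: math.dist(p, centroid))
        reps.append(next(n for n in names if sensor_histories[n] == best))
    return reps

# Hypothetical two-reading histories for four field sensors:
histories = {"s1": [20.1, 20.3], "s2": [20.0, 20.2],
             "s3": [31.5, 31.8], "s4": [31.4, 31.6]}
print(representatives(histories, k=2))  # one representative per cluster
```

Only the representatives need to stay active, which is where the energy and lifetime gains in the abstract come from: the remaining sensors in each cluster report redundant, highly similar measurements.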

2018 ◽  
Author(s):  
Quazi Abidur Rahman ◽  
Tahir Janmohamed ◽  
Meysam Pirbaglou ◽  
Hance Clarke ◽  
Paul Ritvo ◽  
...  

BACKGROUND Measuring and predicting pain volatility (fluctuation or variability in pain scores over time) can help improve pain management. Perceptions of pain and its consequent disabling effects are often heightened under the conditions of greater uncertainty and unpredictability associated with pain volatility. OBJECTIVE This study aimed to use data mining and machine learning methods to (1) define a new measure of pain volatility and (2) predict future pain volatility levels from users of the pain management app, Manage My Pain, based on demographic, clinical, and app use features. METHODS Pain volatility was defined as the mean of absolute changes between 2 consecutive self-reported pain severity scores within the observation periods. The k-means clustering algorithm was applied to users’ pain volatility scores at the first and sixth month of app use to establish a threshold discriminating low from high volatility classes. Subsequently, we extracted 130 demographic, clinical, and app usage features from the first month of app use to predict these 2 volatility classes at the sixth month of app use. Prediction models were developed using 4 methods: (1) logistic regression with ridge estimators; (2) logistic regression with Least Absolute Shrinkage and Selection Operator; (3) Random Forests; and (4) Support Vector Machines. Overall prediction accuracy and accuracy for both classes were calculated to compare the performance of the prediction models. Training and testing were conducted using 5-fold cross validation. A class imbalance issue was addressed using random subsampling of the training dataset. Users with at least five pain records in both the predictor and outcome periods (N=782 users) were included in the analysis. RESULTS The k-means clustering algorithm was applied to pain volatility scores to establish a threshold of 1.6 to differentiate between low and high volatility classes.
After validating the threshold using random subsamples, 2 classes were created: low volatility (n=611) and high volatility (n=171). In this class-imbalanced dataset, all 4 prediction models achieved 78.1% (611/782) to 79.0% (618/782) in overall accuracy. However, all models had a prediction accuracy of less than 18.7% (32/171) for the high volatility class. After addressing the class imbalance issue using random subsampling, results improved across all models for the high volatility class to greater than 59.6% (102/171). The prediction model based on Random Forests performed best, consistently achieving approximately 70% accuracy for both classes across 3 random subsamples. CONCLUSIONS We propose a novel method for measuring pain volatility. Cluster analysis was applied to divide users into subsets of low and high volatility classes. These classes were then predicted at the sixth month of app use with an acceptable degree of accuracy using machine learning methods based on the features extracted from demographic, clinical, and app use information from the first month.
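The volatility measure defined in the METHODS section is simple enough to state directly in code; a minimal sketch, with an illustrative score sequence:

```python
def pain_volatility(scores):
    """Mean absolute change between consecutive self-reported pain severity
    scores within an observation period."""
    if len(scores) < 2:
        raise ValueError("need at least two pain records in the period")
    changes = [abs(b - a) for a, b in zip(scores, scores[1:])]
    return sum(changes) / len(changes)

# A hypothetical user reporting severities 3, 7, 5, 5, 8 over one month:
v = pain_volatility([3, 7, 5, 5, 8])
print(v)  # → 2.25, above the paper's 1.6 threshold, i.e. high volatility
```

The 1.6 threshold separating the two classes was not hand-picked; per the abstract, it emerged from k-means clustering of the users' volatility scores.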


Sensors ◽  
2019 ◽  
Vol 19 (9) ◽  
pp. 2017 ◽  
Author(s):  
Antonio A. Aguileta ◽  
Ramon F. Brena ◽  
Oscar Mayora ◽  
Erik Molino-Minero-Re ◽  
Luis A. Trejo

Sensors are becoming more and more ubiquitous as their price and availability continue to improve, and they are the source of information for many important tasks. However, the use of sensors has to deal with noise and failures. The lack of reliability in sensors has led to many forms of redundancy, but simple solutions are not always the best, and the precise way in which several sensors are combined has a big impact on the overall result. In this paper, we discuss how to deal with the combination of information coming from different sensors, thus acting as “virtual sensors”, in the context of human activity recognition, in a systematic way, aiming for optimality. To achieve this goal, we construct meta-datasets containing the “signatures” of individual datasets and apply machine-learning methods to distinguish when each possible combination method is actually the best. We present specific results based on experimentation, supporting our claims of optimality.
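The meta-learning idea can be pictured with a toy sketch: compute a compact "signature" for a multi-sensor dataset, then let a meta-classifier (trained offline on many labelled signatures) name the fusion method expected to perform best. The two meta-features and the rule standing in for a trained meta-classifier below are illustrative assumptions; the paper's signatures are richer.

```python
import math
import statistics

def pearson(a, b):
    """Pearson correlation between two equally long sensor streams."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var = sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(var)

def dataset_signature(columns):
    """Toy meta-features for one multi-sensor dataset: average per-sensor
    spread and average pairwise correlation between sensors."""
    spreads = [statistics.pstdev(c) for c in columns]
    corrs = [pearson(a, columns[j])
             for i, a in enumerate(columns)
             for j in range(i + 1, len(columns))]
    return [statistics.mean(spreads), statistics.mean(corrs)]

def choose_fusion(sig, meta_classifier):
    """Delegate the choice of combination method to the meta-classifier."""
    return meta_classifier(sig)

# Illustrative rule standing in for a trained meta-classifier:
meta = lambda sig: "majority-vote" if sig[1] > 0.8 else "stacked-generalization"
sig = dataset_signature([[1, 2, 3], [2, 4, 6]])
print(choose_fusion(sig, meta))  # highly correlated sensors → "majority-vote"
```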


Electronics ◽  
2018 ◽  
Vol 7 (8) ◽  
pp. 140 ◽  
Author(s):  
Lei Hang ◽  
Wenquan Jin ◽  
HyeonSik Yoon ◽  
Yong Hong ◽  
Do Kim

The development of the Internet of Things (IoT) has increased the ubiquity of the Internet by integrating all objects for interaction via embedded systems, leading to a highly distributed network of devices communicating with human beings as well as other devices. In recent years, cloud computing has attracted a lot of attention from specialists and experts around the world. With the increasing number of distributed sensor nodes in wireless sensor networks, new models for interacting with wireless sensors via the cloud aim to overcome their restricted resources and limited efficiency. In this paper, we propose a novel sensor-cloud based platform which is able to virtualize physical sensors as virtual sensors in the CoT (Cloud of Things) environment. Virtual sensors, the essentials of this sensor-cloud architecture, simplify the process of generating a multiuser environment over resource-constrained physical wireless sensors and can help in implementing applications across different domains. Virtual sensors are dynamically provisioned in groups, which improves the manageability of the designed platform. An auto-detection approach based on virtual sensors is additionally proposed to identify accessible physical sensor nodes even when these sensors are offline. In order to assess the usability of the designed platform, a smart-space-based IoT case study was implemented, and a series of experiments were carried out to evaluate the performance of the proposed system. Furthermore, a comparison analysis was made, and the results indicate that the proposed platform outperforms existing platforms in numerous respects.
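One way to picture the virtualization and auto-detection ideas: a cloud-side virtual sensor caches the physical node's latest reading and derives online/offline status from report timestamps, so many users can read concurrently without querying the resource-constrained node. The class shape, the timeout, and the injectable clock below are assumptions for illustration, not the paper's implementation.

```python
import time

class VirtualSensor:
    """Cloud-side proxy for one physical sensor node. Serves the latest
    reported value and flags the node offline when no report arrives
    within `timeout` seconds."""

    def __init__(self, node_id, timeout=30.0, clock=time.monotonic):
        self.node_id = node_id
        self.timeout = timeout
        self.clock = clock          # injectable for testing
        self.value = None
        self.last_seen = None

    def report(self, value):
        # Called whenever the physical node pushes a reading to the cloud.
        self.value = value
        self.last_seen = self.clock()

    def status(self):
        # Auto-detection: infer the node's state from its report timestamps.
        if self.last_seen is None:
            return "unknown"
        age = self.clock() - self.last_seen
        return "online" if age < self.timeout else "offline"

    def read(self):
        # Any number of cloud users can read the cached value concurrently
        # without touching the constrained physical node.
        return self.value
```

An offline node can still be identified (and its last value served) through its virtual counterpart, which matches the multiuser, resource-shielding role the abstract assigns to virtual sensors.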


2020 ◽  
Vol 9 (2) ◽  
pp. 25
Author(s):  
Michael Matusowsky ◽  
Daniel T. Ramotsoela ◽  
Adnan M. Abu-Mahfouz

Data integrity in wireless sensor networks (WSN) is very important because incorrect or missing values could result in the system making suboptimal or catastrophic decisions. Data imputation allows a system to counteract the effect of data loss by substituting faulty or missing sensor values with system-defined virtual values. This paper proposes a virtual sensor system that uses multi-layer perceptrons (MLP) to impute sensor values in a WSN. The MLP was trained using a genetic algorithm which efficiently reached an optimal solution for each sensor node. The system was able to successfully identify and replace physical sensor nodes that were disconnected from the network with corresponding virtual sensors. The virtual sensors imputed values with very high accuracy when compared to the physical sensor values.
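The evolutionary training step can be sketched as a mini genetic algorithm evolving the weights of a single linear unit, standing in for the paper's MLP, to impute one node's reading from its neighbours. The population size, operators, neighbour layout, and the linear stand-in are all illustrative assumptions.

```python
import random

def predict(weights, x):
    """Stand-in for the MLP: one linear unit over two neighbouring sensors."""
    return weights[0] * x[0] + weights[1] * x[1] + weights[2]

def fitness(weights, data):
    """Negative mean squared error against the node's historical readings."""
    return -sum((predict(weights, x) - y) ** 2 for x, y in data) / len(data)

def evolve(data, pop_size=40, gens=200, seed=1):
    """Tiny genetic algorithm: elitist selection, averaging crossover,
    Gaussian mutation of one weight per child."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda w: fitness(w, data), reverse=True)
        elite = pop[: pop_size // 4]              # selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(u + v) / 2 for u, v in zip(a, b)]   # crossover
            child[rng.randrange(3)] += rng.gauss(0, 0.1)  # mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda w: fitness(w, data))

# Hypothetical history: the missing node's reading tracked the mean of
# its two neighbours' readings.
data = [((0.0, 0.0), 0.0), ((1.0, 3.0), 2.0), ((2.0, 2.0), 2.0),
        ((4.0, 0.0), 2.0), ((5.0, 5.0), 5.0), ((3.0, 1.0), 2.0)]
best = evolve(data)
```

Once trained, `predict(best, neighbour_readings)` supplies the virtual value whenever the physical node is disconnected, which is the substitution role the abstract describes.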


2020 ◽  
Vol 10 (3) ◽  
pp. 544-551 ◽  
Author(s):  
Xiongtao Zhang ◽  
Yunliang Jiang ◽  
Wenjun Hu ◽  
Shitong Wang

Diabetes is one of the deadliest diseases on the planet. It is not only an illness in itself but also a cause of various other conditions such as heart attack, blurred vision, nephropathy, and dyspnea. When a diagnostic decision for a patient is made with traditional machine learning methods, two challenges often arise: (1) uncertain factors in the patient or in the decision-making process can result in misdiagnosis; and (2) traditional machine learning models are black boxes and are not interpretable. In this paper, a parallel fuzzy-partition and fuzzy-weighted ensemble TSK (Takagi-Sugeno-Kang) fuzzy classifier, called FP-TSK-FW, is proposed for diabetes diagnosis, exploiting the strong uncertainty-handling capability and interpretability of TSK fuzzy systems to achieve promising classification performance. In FP-TSK-FW, the training dataset is first partitioned into several subsets by the fuzzy clustering algorithm FCM on selected attributes; an interpretable TSK fuzzy subclassifier is then quickly built in parallel on each training subset, each with a potentially different structure. Finally, FP-TSK-FW produces its prediction by fuzzy-weighted combination of the subclassifiers' results. Experimental results on the Pima Indians Diabetes dataset indicate the effectiveness of the proposed method in the sense of both enhanced classification performance and interpretability.
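The fuzzy-weighted combination step can be illustrated as follows. The Gaussian memberships on a single hypothetical "glucose" attribute and the two dummy subclassifiers are stand-ins for the FCM-derived partitions and trained TSK subclassifiers; only the weighting scheme itself mirrors the abstract.

```python
import math

def gaussian_membership(x, center, sigma):
    """Degree to which attribute value x belongs to one fuzzy partition."""
    return math.exp(-((x - center) ** 2) / (2 * sigma ** 2))

def fuzzy_weighted_predict(sample, partitions, subclassifiers):
    """Fuzzy-weighted ensemble: each subclassifier's score is weighted by
    how strongly the sample belongs to that subclassifier's partition."""
    weights = [gaussian_membership(sample["glucose"], c, s)
               for c, s in partitions]
    score = sum(w * clf(sample) for w, clf in zip(weights, subclassifiers))
    score /= sum(weights)
    return 1 if score >= 0.5 else 0

# Two hypothetical partitions on one attribute, and two dummy subclassifiers
# standing in for trained TSK fuzzy systems:
partitions = [(100.0, 30.0), (180.0, 30.0)]
subclassifiers = [lambda s: 0.1, lambda s: 0.9]
print(fuzzy_weighted_predict({"glucose": 105.0}, partitions, subclassifiers))  # → 0
```

Because a sample can belong partially to several partitions, every subclassifier contributes in proportion to its membership rather than a hard winner-take-all vote, which is what gives the ensemble its uncertainty-handling behaviour.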

