An Unsupervised Learning Technique to Optimize Radio Maps for Indoor Localization

Sensors ◽  
2019 ◽  
Vol 19 (4) ◽  
pp. 752 ◽  
Author(s):  
Jens Trogh ◽  
Wout Joseph ◽  
Luc Martens ◽  
David Plets

A major burden of signal strength-based fingerprinting for indoor positioning is the generation and maintenance of a radio map, also known as a fingerprint database. Model-based radio maps are generated much faster than measurement-based radio maps but are generally not accurate enough. This work proposes a method to automatically construct and optimize a model-based radio map. The method is based on unsupervised learning: random walks, for which the ground truth locations are unknown, serve as input for the optimization, along with a floor plan and a location tracking algorithm. The approach needs no labor-intensive, time-consuming measurement campaign or site survey, and no inertial sensor measurements, which are often unavailable and consume additional power. Experiments in a large office building covering over 1100 m² resulted in a median accuracy of up to 2.07 m, a relative improvement of 28.6%, with only 15 min of unlabeled training data.
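To make the setup concrete, a model-based radio map can be seeded from a simple propagation model before any unsupervised refinement. The sketch below is a minimal illustration, not the authors' implementation: the floor grid, access-point positions, and path loss parameters are all assumed values, and the map is built from a one-slope log-distance model.

import numpy as np

def log_distance_rss(d, tx_power=-40.0, n=2.5, d0=1.0):
    """One-slope log-distance model: RSS (dBm) at distance d (metres)."""
    d = np.maximum(d, d0)  # clamp distances inside the reference distance
    return tx_power - 10.0 * n * np.log10(d / d0)

# Hypothetical 1 m floor grid and three access-point positions.
xs, ys = np.meshgrid(np.arange(0.0, 50.0), np.arange(0.0, 22.0))
aps = [(5.0, 5.0), (25.0, 10.0), (45.0, 18.0)]

# Radio map: one predicted-RSS layer per access point, i.e. the initial
# fingerprint database that the unlabeled random walks would then optimize.
radio_map = np.stack([log_distance_rss(np.hypot(xs - ax, ys - ay))
                      for ax, ay in aps], axis=-1)
print(radio_map.shape)  # (22, 50, 3)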

Sensors ◽  
2018 ◽  
Vol 18 (7) ◽  
pp. 2283 ◽  
Author(s):  
Imran Ashraf ◽  
Soojung Hur ◽  
Yongwan Park

An indoor localization system based on off-the-shelf smartphone sensors is presented, which employs the magnetometer to find the user's location. Further assisted by the accelerometer and gyroscope, the proposed system is able to locate the user without any prior knowledge of the user's initial position. The system exploits the fingerprint database approach for localization. Traditional fingerprinting technology stores signal intensity values in a database, such as RSSI (Received Signal Strength Indicator) values in the case of WiFi fingerprinting and magnetic flux intensity values in the case of geomagnetic fingerprinting. The downsides are the need to update the database periodically and device heterogeneity. We solve this problem by using a fingerprint database of patterns formed by magnetic flux intensity values. The pattern matching approach solves the problem of device heterogeneity, and the algorithm's performance with the Samsung Galaxy S8 and LG G6 is comparable. A deep learning-based artificial neural network is adopted to identify whether the user is walking or stationary, with an accuracy of 95%. The localization is entirely infrastructure-independent and does not require any other technology to constrain the search space. Experiments are performed to determine the accuracy in three buildings of Yeungnam University, Republic of Korea, with different path lengths and path geometries. The results demonstrate a 50th-percentile error of 2–3 m across the buildings. Even though many locations in the same building exhibit very similar magnetic signatures, the algorithm achieves a 75th-percentile accuracy of 4 m irrespective of the device used for localization.
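Matching patterns of magnetic intensity rather than absolute values is what tolerates device heterogeneity: a per-device bias cancels once patterns are mean-centred. A common sequence matcher for such patterns, robust to walking-speed differences, is dynamic time warping; the snippet below is a plausible sketch with invented numbers, not the paper's exact matcher.

import numpy as np

def dtw_distance(a, b):
    """DTW distance between two mean-centred magnetic intensity patterns."""
    a = np.asarray(a) - np.mean(a)   # mean-centring cancels per-device bias
    b = np.asarray(b) - np.mean(b)
    cost = np.full((len(a) + 1, len(b) + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            step = abs(a[i - 1] - b[j - 1])
            cost[i, j] = step + min(cost[i - 1, j], cost[i, j - 1],
                                    cost[i - 1, j - 1])
    return cost[-1, -1]

query = [48.2, 49.1, 51.0, 54.3, 52.8]   # magnitudes in microtesla
database = {"corridor_A": [48.0, 49.5, 51.2, 54.0, 53.0],
            "corridor_B": [60.1, 58.9, 57.5, 55.2, 54.8]}
print(min(database, key=lambda k: dtw_distance(query, database[k])))  # corridor_A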


Sensors ◽  
2018 ◽  
Vol 18 (9) ◽  
pp. 2990 ◽  
Author(s):  
Junhong Lin ◽  
Bang Wang ◽  
Guang Yang ◽  
Mu Zhou

Fingerprinting-based indoor localization suffers from its time-consuming and labor-intensive site survey. As a promising solution, sample crowdsourcing has recently been promoted to exploit casually collected samples for building the offline fingerprint database. However, crowdsourced samples may be annotated with erroneous locations, which raises a serious question about whether they are reliable for database construction. In this paper, we propose a cross-domain cluster intersection algorithm to weight each sample's reliability. We then select the samples with higher weights to construct radio propagation surfaces by fitting polynomial functions. Furthermore, we employ an entropy-like measure to weight the constructed surfaces, quantifying their subarea consistency and location discrimination for online positioning. Field measurements and experiments show that the proposed scheme achieves high localization accuracy by effectively handling sample annotation errors and nonuniform sample density.
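The surface-fitting step can be pictured as a weighted least-squares polynomial fit of RSS over floor coordinates, with the reliability weights down-weighting badly annotated samples. A minimal sketch under assumed data, not the paper's code:

import numpy as np

def fit_rss_surface(xy, rss, w, degree=2):
    """Weighted least-squares fit of a polynomial RSS surface over (x, y)."""
    x, y = xy[:, 0], xy[:, 1]
    # Design matrix with all monomials x**i * y**j for i + j <= degree.
    cols = [x**i * y**j for i in range(degree + 1) for j in range(degree + 1 - i)]
    A = np.column_stack(cols)
    sw = np.sqrt(w)                     # fold weights into the LS system
    coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * rss, rcond=None)
    return coef

# Crowdsourced samples: annotated (x, y), measured RSS, reliability weight.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 30, size=(200, 2))
rss = -40 - 0.8 * xy[:, 0] - 0.5 * xy[:, 1] + rng.normal(0, 2, 200)
w = rng.uniform(0.1, 1.0, 200)          # e.g. from the cluster-intersection step
print(fit_rss_surface(xy, rss, w))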


2021 ◽  
Vol 26 (5) ◽  
Author(s):  
Maria Ulan ◽  
Welf Löwe ◽  
Morgan Ericsson ◽  
Anna Wingkvist

It is a well-known practice in software engineering to aggregate software metrics to assess software artifacts for various purposes, such as their maintainability or their proneness to contain bugs. For different purposes, different metrics might be relevant. However, weighting these software metrics according to their contribution to the respective purpose is a challenging task. Manual approaches based on experts do not scale with the number of metrics. Also, experts get confused if the metrics are not independent, which is rarely the case. Automated approaches based on supervised learning require reliable and generalizable training data, a ground truth, which is rarely available. We propose an automated approach to weighted metrics aggregation that is based on unsupervised learning. It sets metric scores and their weights based on probability theory and aggregates them. To evaluate its effectiveness, we conducted two empirical studies on defect prediction: one on ca. 200 000 code changes and another on ca. 5 000 software classes. The results show that our approach can be used as an agnostic unsupervised predictor in the absence of a ground truth.
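One probability-theoretic way to make heterogeneous metrics comparable without a ground truth is to score each value by its empirical distribution function and then aggregate with weights. The sketch below only illustrates that idea; the data and weights are invented and this is not the authors' exact scheme.

import numpy as np

def ecdf_scores(values):
    """Empirical-CDF score in (0, 1] for each observation of one metric."""
    ranks = np.argsort(np.argsort(values))
    return (ranks + 1) / len(values)

# Rows: software classes; columns: metrics (e.g. LOC, WMC, CBO).
rng = np.random.default_rng(1)
metrics = rng.lognormal(size=(1000, 3))
scores = np.column_stack([ecdf_scores(metrics[:, j]) for j in range(3)])

# Hypothetical weights (e.g. derived from metric dependence structure).
w = np.array([0.5, 0.3, 0.2])
aggregated = scores @ w          # one unsupervised quality score per class
print(aggregated[:5])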


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Ive Weygers ◽  
Manon Kok ◽  
Thomas Seel ◽  
Darshan Shah ◽  
Orçun Taylan ◽  
...  

Skin-attached inertial sensors are increasingly used for kinematic analysis. However, their ability to measure outside the lab can only be exploited after correctly aligning the sensor axes with the underlying anatomical axes. Emerging model-based inertial-sensor-to-bone alignment methods relate inertial measurements to a model of the joint in order to avoid calibration movements and sensor placement assumptions. It is unclear how well such alignment methods can identify the anatomical axes. Any misalignment results in kinematic cross-talk errors, which makes model validation and the interpretation of the resulting kinematic measurements challenging. This study provides an anatomically correct ground-truth reference dataset from dynamic motions on a cadaver. In contrast with existing references, this enables a true model evaluation that excludes the influence of soft-tissue artifacts and of orientation and manual palpation errors. The dataset comprises extensive dynamic movements recorded with multimodal measurements, including trajectories of optical and virtual (via computed tomography) anatomical markers, reference kinematics, inertial measurements, transformation matrices, and visualization tools. The dataset can be used either as a ground-truth reference or to advance research in inertial-sensor-to-bone alignment.
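Kinematic cross-talk itself can be illustrated in a few lines: an assumed sensor-to-bone misalignment rotation mixes a pure flexion angular velocity into the other anatomical channels. This is purely didactic and unrelated to the dataset's tooling.

import numpy as np

def rot_z(deg):
    """Rotation matrix about the z (long) axis."""
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# True angular velocity: pure flexion about the anatomical x-axis (rad/s).
omega_true = np.array([1.0, 0.0, 0.0])

# A 10-degree sensor-to-bone misalignment about the long axis makes flexion
# "leak" into the y (abduction) channel of the measured signal.
omega_meas = rot_z(10.0) @ omega_true
print(omega_meas)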


2020 ◽  
Vol 499 (4) ◽  
pp. 5641-5652
Author(s):  
Georgios Vernardos ◽  
Grigorios Tsagkatakis ◽  
Yannis Pantazis

Gravitational lensing is a powerful tool for constraining substructure in the mass distribution of galaxies, be it from the presence of dark matter sub-haloes or due to physical mechanisms affecting the baryons throughout galaxy evolution. Such substructure is hard to model and is either ignored by traditional smooth modelling approaches or treated as well-localized massive perturbers. In this work, we propose a deep learning approach to quantify the statistical properties of such perturbations directly from images, where only the extended lensed source features within a mask are considered, without the need for any lens modelling. Our training data consist of mock lensed images assuming perturbing Gaussian Random Fields permeating the smooth overall lens potential and, for the first time, using images of real galaxies as the lensed source. We employ a novel deep neural network that can handle arbitrary uncertainty intervals associated with the training data set labels as input, provides probability distributions as output, and adopts a composite loss function. The method succeeds not only in accurately estimating the actual parameter values, but also reduces the predicted confidence intervals by 10 per cent in an unsupervised manner, i.e. without having access to the actual ground truth values. Our results are invariant to the inherent degeneracy between mass perturbations in the lens and complex brightness profiles for the source. Hence, we can quantitatively and robustly quantify the smoothness of the mass density of thousands of lenses, including confidence intervals, and provide a consistent ranking for follow-up science.
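The abstract does not spell out the composite loss, but a standard way for a network to consume per-label uncertainty intervals and emit distributions is a Gaussian negative log-likelihood in which the label's own sigma widens the predicted variance. The form below is an assumption for illustration, not the authors' loss.

import numpy as np

def gaussian_nll(pred_mean, pred_sigma, label, label_sigma):
    """Negative log-likelihood with label uncertainty folded into the variance."""
    var = pred_sigma**2 + label_sigma**2   # widen by the label's own interval
    return 0.5 * np.mean(np.log(2 * np.pi * var)
                         + (label - pred_mean)**2 / var)

# Network outputs a distribution (mean, sigma) per lens image; labels carry
# their own uncertainty intervals from the mock-data generation.
print(gaussian_nll(np.array([1.2]), np.array([0.3]),
                   np.array([1.0]), np.array([0.1])))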


2021 ◽  
Vol 22 (Supplement_1) ◽  
Author(s):  
D Zhao ◽  
E Ferdian ◽  
GD Maso Talou ◽  
GM Quill ◽  
K Gilbert ◽  
...  

Funding Acknowledgements: Public grant(s) – National budget only. Main funding source(s): National Heart Foundation (NHF) of New Zealand; Health Research Council (HRC) of New Zealand.

Artificial intelligence shows considerable promise for automated analysis and interpretation of medical images, particularly in the domain of cardiovascular imaging. While application to cardiac magnetic resonance (CMR) has demonstrated excellent results, automated analysis of 3D echocardiography (3D-echo) remains challenging due to the lower signal-to-noise ratio (SNR), signal dropout, and greater interobserver variability in manual annotations. As 3D-echo becomes increasingly widespread, robust analysis methods will substantially benefit patient evaluation.

We sought to leverage the high SNR of CMR to provide training data for a convolutional neural network (CNN) capable of analysing 3D-echo. We imaged 73 participants (53 healthy volunteers, 20 patients with non-ischaemic cardiac disease) under both CMR and 3D-echo (<1 hour between scans). 3D models of the left ventricle (LV) were independently constructed from CMR and 3D-echo and used to spatially align the image volumes via least-squares fitting to a cardiac template. The resultant transformation was used to map the CMR mesh to the 3D-echo image. Alignment of mesh and image was verified through volume slicing and visual inspection (Fig. 1) for 120 paired datasets (including 47 rescans), each at end-diastole and end-systole. 100 datasets (80 for training, 20 for validation) were used to train a shallow CNN for mesh extraction from 3D-echo, optimised with a composite loss function consisting of normalised Euclidean distance (over 290 mesh points) and volume. Data augmentation was applied in the form of rotations and tilts (<15 degrees) about the long axis. The network was tested on the remaining 20 datasets (different participants) of varying image quality (Tab. 1). For comparison, corresponding LV measurements from conventional manual analysis of 3D-echo and the associated interobserver variability (for two observers) were also estimated.

Initial results indicate that the use of embedded CMR meshes as training data for 3D-echo analysis is a promising alternative to manual analysis, with improved accuracy and precision compared with conventional methods. Further optimisations and a larger dataset are expected to improve network performance.

Tab. 1. LV mass and volume differences (mean ± standard deviation) for the 20 test cases. Algorithm error: CNN – CMR (as ground truth).

(n = 20)              LV EDV (ml)    LV ESV (ml)    LV EF (%)    LV mass (g)
Ground truth CMR      150.5 ± 29.5   57.9 ± 12.7    61.5 ± 3.4   128.1 ± 29.8
Algorithm error       -13.3 ± 15.7   -1.4 ± 7.6     -2.8 ± 5.5   0.1 ± 20.9
Manual error          -30.1 ± 21.0   -15.1 ± 12.4   3.0 ± 5.0    Not available
Interobserver error   19.1 ± 14.3    14.4 ± 7.6     -6.4 ± 4.8   Not available

Fig. 1. CMR mesh registered to 3D-echo.
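The composite loss is stated only as normalised Euclidean distance over the 290 mesh points plus a volume term; how the terms are normalised and balanced is not given, so the sketch below makes those choices up for illustration.

import numpy as np

def composite_mesh_loss(pred, target, vol_pred, vol_target, alpha=1.0):
    """Normalised point-to-point distance over the mesh plus a volume term."""
    # pred, target: (290, 3) arrays of LV mesh vertex coordinates.
    scale = np.linalg.norm(target - target.mean(axis=0), axis=1).mean()
    point_term = np.linalg.norm(pred - target, axis=1).mean() / scale
    volume_term = abs(vol_pred - vol_target) / vol_target
    return point_term + alpha * volume_term

rng = np.random.default_rng(2)
target = rng.normal(size=(290, 3))                    # CMR-derived mesh
pred = target + rng.normal(scale=0.05, size=(290, 3)) # CNN prediction
print(composite_mesh_loss(pred, target, 148.0, 150.5))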


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 987
Author(s):  
Aki Karttunen ◽  
Mikko Valkama ◽  
Jukka Talvitie

Positioning is considered one of the key features of various novel industry verticals in future radio systems. Since path loss (PL) or received signal strength-based measurements are widely available in the majority of wireless standards, PL-based positioning has an important role among positioning technologies. Conventionally, PL-based positioning has two phases: fitting a PL model to training data, and positioning based on link distance estimates. However, in both phases, the maximum measurable PL is limited by measurement noise. Samples beyond this limit are called censored PL data and are commonly neglected in both the model fitting and the positioning phase. For a censored sample, the loss is known to be above a known threshold level, and that information can be used in model fitting and in the positioning phase. In this paper, we examine and propose how to use censored PL data in PL model-based positioning. Additionally, we demonstrate with several simulations the potential of the proposed approach for considerable improvements in positioning accuracy (23–57%) and improved robustness against PL model fitting errors.
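Exploiting censored samples in the fitting phase amounts to a Tobit-style maximum likelihood: measured links contribute a Gaussian density, while censored links contribute the tail probability above the threshold. A minimal sketch with simulated links follows; the parameter values and threshold are assumptions, not the paper's.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def censored_pl_nll(theta, d, pl, censored, pl_max):
    """Exact density where PL was measured, tail mass where it was censored."""
    pl0, n, sigma = theta
    if sigma <= 0:
        return np.inf
    mu = pl0 + 10.0 * n * np.log10(d)
    ll = norm.logpdf(pl[~censored], mu[~censored], sigma).sum()
    ll += norm.logsf(pl_max, mu[censored], sigma).sum()  # P(PL > threshold)
    return -ll

# Hypothetical link data: distances, path losses, censoring flags.
rng = np.random.default_rng(3)
d = rng.uniform(1, 300, 500)
pl = 40 + 10 * 3.0 * np.log10(d) + rng.normal(0, 6, 500)
pl_max = 110.0
censored = pl > pl_max                  # immeasurable links
pl[censored] = pl_max                   # only the threshold is known

fit = minimize(censored_pl_nll, x0=(40.0, 2.0, 5.0),
               args=(d, pl, censored, pl_max), method="Nelder-Mead")
print(fit.x)   # estimated (PL0, path loss exponent, shadowing sigma)

Dropping the censored links instead would bias the fitted exponent low, since exactly the longest, loss-heavy links are the ones removed.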


2020 ◽  
Vol 12 (20) ◽  
pp. 3360
Author(s):  
Jessica Esteban ◽  
Ronald E. McRoberts ◽  
Alfredo Fernández-Landa ◽  
José Luis Tomé ◽  
Miguel Marchamalo

Forest/non-forest and forest species maps are often used by forest inventory programs in the forest estimation process. For example, some inventory programs establish field plots only on lands corresponding to the forest portion of a forest/non-forest map and use species-specific area estimates obtained from those maps to support the estimation of species-specific volume (V) totals. Despite the general use of these maps, the effects of their uncertainties are commonly ignored, with the result that estimates might be unreliable. The goal of this study is to estimate the effects of the uncertainty of forest species maps used in the sampling and estimation processes. Random forest (RF) per-pixel predictions were used with model-based inference to estimate V per unit area for the six main forest species of La Rioja, Spain. RF models for predicting V were constructed using field plot information from the Spanish National Forest Inventory and airborne laser scanning data. To limit the prediction of V to pixels classified as one of the main forest species assessed, a forest species map was constructed using Landsat and auxiliary information. Bootstrapping techniques were implemented to estimate the total uncertainty of the V estimates, accommodating both the effects of uncertainty in the Landsat forest species map and the effects of plot-to-plot sampling variability in the training data used to construct the RF V models. Standard errors of species-specific total V estimates increased from 2–9% to 3–22% when the effects of map uncertainty were incorporated into the uncertainty assessment. The workflow achieved satisfactory results and revealed that the effects of map uncertainty are not negligible, especially for open-grown and less frequently occurring forest species, for which greater variability was evident in the mapping and estimation process. The effects of forest map uncertainty are greater for species-specific area estimation than for the selection of field plots used to calibrate the RF model. Additional research to generalize the conclusions beyond Mediterranean to other forest environments is recommended.
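The plot-resampling half of such a bootstrap is straightforward to sketch; the version below stubs out the RF refit and the species-map re-draw that the study also folds in, and all numbers are invented.

import numpy as np

rng = np.random.default_rng(4)
plot_v = rng.normal(100.0, 20.0, 250)     # plot-level volumes (m3/ha)
forest_area_ha = 50_000                   # hypothetical mapped forest area

totals = []
for _ in range(1000):
    resample = rng.choice(plot_v, size=plot_v.size, replace=True)
    # A full implementation would also refit the RF model on the resample
    # and re-draw the classified species map to propagate its class errors.
    totals.append(resample.mean() * forest_area_ha)

totals = np.asarray(totals)
print(totals.mean(), totals.std(ddof=1))  # total V and its standard error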


2021 ◽  
Vol 13 (1) ◽  
Author(s):  
Bingyin Hu ◽  
Anqi Lin ◽  
L. Catherine Brinson

The inconsistency of polymer indexing caused by the lack of uniformity in the expression of polymer names is a major challenge for the widespread use of polymer-related data resources, and it limits broad application of materials informatics for innovation across polymer science and polymer-based materials. The current solution of using a variety of different chemical identifiers has proven insufficient to address the challenge and is not intuitive for researchers. This work proposes a multi-algorithm mapping methodology, entitled ChemProps, that is optimized to solve the polymer indexing issue with an easy-to-update design in both depth and width. A RESTful API is provided for lightweight data exchange and easy integration across data systems. A weight factor is assigned to each algorithm to generate scores for candidate chemical names; the weights are optimized to maximize the minimum score difference between the ground truth chemical name and the other candidate chemical names. Ten-fold cross-validation on the 160 training data points is used to prevent overfitting. The obtained set of weight factors achieves 100% accuracy on the 54 test data points. The weight factors will evolve as ChemProps grows. With ChemProps, other polymer databases can remove duplicate entries and enable a more accurate "search by SMILES" function by using ChemProps as a common name-to-SMILES translator through API calls. ChemProps is also an excellent tool for auto-populating polymer properties thanks to its easy-to-update design.
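Maximizing the minimum score gap between the true name and every other candidate, over non-negative weights that sum to one, is a linear program. A toy sketch with invented scores follows; the real system's matching algorithms and training set are not reproduced here.

import numpy as np
from scipy.optimize import linprog

# Per training point: score vector of the true name and of each competing
# candidate under k = 3 matching algorithms (hypothetical numbers).
true_scores = np.array([[0.9, 0.7, 0.8], [0.6, 0.9, 0.7]])
other_scores = [np.array([[0.5, 0.6, 0.4], [0.8, 0.2, 0.3]]),
                np.array([[0.4, 0.5, 0.9]])]

rows, k = [], true_scores.shape[1]
for i, others in enumerate(other_scores):
    for o in others:
        # Constraint w.(s_true - s_other) >= t, rewritten for A_ub x <= 0.
        rows.append(np.append(-(true_scores[i] - o), 1.0))

res = linprog(c=np.append(np.zeros(k), -1.0),        # maximize the margin t
              A_ub=np.array(rows), b_ub=np.zeros(len(rows)),
              A_eq=[np.append(np.ones(k), 0.0)], b_eq=[1.0],
              bounds=[(0, None)] * k + [(None, None)])
print(res.x[:k], res.x[k])   # weights and achieved minimum margin

Casting the objective this way keeps the ground-truth name strictly ahead of every rival by the largest guaranteed margin the training data allows.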


Author(s):  
Chengcheng Lu ◽  
Zheng Lv ◽  
Linqing Wang ◽  
Jun Zhao ◽  
Wei Wang
