A High Accuracy Time-Reversal Based WiFi Indoor Localization Approach with a Single Antenna

Sensors, 2018, Vol. 18(10), pp. 3437
Author(s): Lili Zheng, Binjie Hu, Haoxiang Chen

In this paper, we study the influence of multipath magnitude, bandwidth, and the number of communication links on the performance of the existing time-reversal (TR) based fingerprinting localization approach, and find that localization accuracy deteriorates with limited bandwidth. To improve localization performance, we propose a high accuracy TR fingerprint localization approach, HATRFLA, which exploits two unique location-specific signatures extracted from Channel State Information (CSI). Furthermore, we employ a density-based spatial clustering algorithm to minimize the storage space of the fingerprint database by adaptively selecting the optimal number of fingerprints for each location. Experimental results confirm that the proposed approach efficiently mitigates the accuracy deterioration caused by limited bandwidth and consequently achieves higher accuracy than the existing TR localization approach.
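The fingerprint-selection step described above can be sketched with an off-the-shelf density-based clusterer: dense groups of CSI samples at one location are collapsed to representative fingerprints, shrinking the database. The feature construction and the `eps`/`min_samples` parameters below are illustrative assumptions, not the authors' exact design.

```python
# Sketch: adaptively selecting representative CSI fingerprints per location
# with density-based clustering. Parameters and features are assumptions.
import numpy as np
from sklearn.cluster import DBSCAN

def select_fingerprints(csi_samples, eps=0.5, min_samples=3):
    """Keep one representative fingerprint (cluster mean) per dense cluster."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(csi_samples)
    reps = []
    for lab in sorted(set(labels) - {-1}):  # label -1 marks noise points
        reps.append(csi_samples[labels == lab].mean(axis=0))
    return np.array(reps)

rng = np.random.default_rng(0)
# 30 noisy 8-dimensional CSI magnitude vectors around two multipath profiles
samples = np.vstack([rng.normal(0.0, 0.05, (15, 8)),
                     rng.normal(1.0, 0.05, (15, 8))])
db = select_fingerprints(samples)   # two representatives survive
```

The number of stored fingerprints per location then adapts to how many distinct dense signal profiles the location actually exhibits.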

Sensors, 2021, Vol. 21(17), pp. 5786
Author(s): Chenguang Shi, Rui Zhang, Yong Yu, Xingzhe Sun, Xiaodong Lin

The star tracker is widely used for high-accuracy missions due to its high positioning accuracy, high autonomy, and low power consumption. On the other hand, the star tracker's ability to suppress interference has always been a pressing concern. In this paper, a SLIC-DBSCAN-based algorithm is developed to extract effective information from a single image with strong interference and thereby remove that interference. Firstly, a restricted LC (luminance-based contrast) transformation is applied to enhance the contrast between background noise and large-area interference. Then, the SLIC (simple linear iterative clustering) algorithm, with optimized parameters, is adopted to segment the saliency map. Finally, features are extracted from these segments, and superpixels with similar features are merged using DBSCAN (density-based spatial clustering of applications with noise). The proposed algorithm is shown to be effective: it successfully removes large-area interference and extracts star spots from the sky region of a real star image.
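The final merging stage described above can be illustrated in a few lines: superpixels are summarized by feature vectors and DBSCAN groups similar ones, leaving small bright outliers (candidate star spots) unmerged. The per-superpixel features here (mean brightness, area fraction) are synthetic stand-ins for real SLIC output.

```python
# Sketch of the DBSCAN merging step: superpixels with similar features are
# combined; outliers (label -1) are candidate star spots. Features are toy.
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical per-superpixel features: [mean brightness, area fraction]
features = np.array([
    [0.95, 0.30], [0.92, 0.28],   # large bright interference region
    [0.10, 0.01], [0.12, 0.01],   # dark sky background
    [0.80, 0.001],                # small bright spot: candidate star
])
labels = DBSCAN(eps=0.1, min_samples=2).fit_predict(features)
# Interference and background superpixels merge into clusters, while the
# small bright superpixel stays an outlier (label -1) and is kept as a star.
```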


Sensors, 2021, Vol. 21(4), pp. 1090
Author(s): Wenxu Wang, Damián Marelli, Minyue Fu

A popular approach for solving the indoor dynamic localization problem based on WiFi measurements consists of using particle filtering. However, a drawback of this approach is that a very large number of particles are needed to achieve accurate results in real environments. The reason for this drawback is that, in this particular application, classical particle filtering wastes many unnecessary particles. To remedy this, we propose a novel particle filtering method which we call maximum likelihood particle filter (MLPF). The essential idea consists of combining the particle prediction and update steps into a single one in which all particles are efficiently used. This drastically reduces the number of particles, leading to numerically feasible algorithms with high accuracy. We provide experimental results, using real data, confirming our claim.
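The core idea, combining prediction and update so that every particle lands where the measurement likelihood is high, can be sketched in one dimension. For Gaussian transition and measurement noise, the maximizer of the product of the two densities is a precision-weighted mean. The models and noise levels below are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal 1-D sketch of the MLPF idea: fuse predict and update by moving
# each propagated particle to the maximizer of
# (transition density x measurement likelihood). Gaussian case only.
import numpy as np

def mlpf_step(particles, z, q=0.5, r=0.2):
    """One combined predict+update step; q, r are process/measurement stds."""
    pred = particles + np.random.default_rng(1).normal(0, q, particles.shape)
    # Closed-form maximizer of N(x; pred, q^2) * N(z; x, r^2) per particle
    w_pred, w_meas = 1 / q**2, 1 / r**2
    return (w_pred * pred + w_meas * z) / (w_pred + w_meas)

parts = np.linspace(-5, 5, 50)     # widely spread prior particles
updated = mlpf_step(parts, z=1.0)  # every particle is pulled toward z
```

Because no particle is "wasted" far from the measurement, far fewer particles are needed than in a classical predict-then-weight filter.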


Symmetry, 2021, Vol. 13(4), pp. 596
Author(s): Krishna Kumar Sharma, Ayan Seal, Enrique Herrera-Viedma, Ondrej Krejcar

Calculating and monitoring customer churn metrics is important for companies to retain customers and earn more profit. In this study, a churn prediction framework is developed using a modified spectral clustering (SC) method. The similarity measure plays an imperative role in clustering for predicting churn accurately from industrial data, so the linear Euclidean distance in traditional SC is replaced by the non-linear S-distance (Sd), which is derived from the concept of S-divergence (SD). Several properties of Sd are discussed in this work. Experiments are conducted to validate the proposed clustering algorithm on four synthetic, eight UCI, two industrial, and one telecommunications database related to customer churn. Three existing clustering algorithms (k-means, density-based spatial clustering of applications with noise, and conventional SC) are also run on the above-mentioned 15 databases. The empirical outcomes show that the proposed clustering algorithm beats the three existing algorithms in terms of Jaccard index, f-score, recall, precision, and accuracy. Finally, the significance of the clustering results is tested with Wilcoxon's signed-rank test, Wilcoxon's rank-sum test, and sign tests. The comparative study shows that the outcomes of the proposed algorithm are promising, especially for clusters of arbitrary shape.
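The mechanism described above, swapping the Euclidean metric for a non-linear distance inside spectral clustering, can be sketched by building a precomputed affinity matrix from a custom distance. The distance used here (a symmetric log-based divergence on positive features, summed per coordinate) is only a stand-in for the paper's S-distance; the exact Sd definition is not reproduced.

```python
# Sketch: spectral clustering with a custom non-linear distance fed in as
# a precomputed affinity. The divergence below is a hedged stand-in for Sd.
import numpy as np
from sklearn.cluster import SpectralClustering

def sdiv_like(x, y, eps=1e-9):
    x, y = np.abs(x) + eps, np.abs(y) + eps   # keep arguments positive
    return np.sum(np.log((x + y) / 2) - 0.5 * (np.log(x) + np.log(y)))

rng = np.random.default_rng(0)
X = np.vstack([rng.uniform(0.5, 1.0, (20, 3)),   # small-magnitude group
               rng.uniform(4.0, 5.0, (20, 3))])  # large-magnitude group
n = len(X)
D = np.array([[sdiv_like(X[i], X[j]) for j in range(n)] for i in range(n)])
A = np.exp(-D / D.std())                          # distance -> affinity
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(A)
```

Any symmetric non-negative divergence can be plugged in the same way, which is what makes the framework usable with Sd.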


2014, Vol. 543-547, pp. 1934-1938
Author(s): Ming Xiao

For clustering two-dimensional spatial data, the Adaptive Resonance Theory (ART) network not only suffers from pattern drift and the loss of vector-magnitude information, but also adapts poorly to irregularly distributed spatial data. A Tree-ART2 network model is proposed to address these issues. It retains the memory of old patterns and maintains spatial-distance constraints by learning and adjusting the LTM pattern and the amplitude information of the vectors. Meanwhile, introducing a tree structure into the model reduces the subjective choice of the vigilance parameter and decreases the occurrence of pattern mixing. Comparative experiments show that the TART2 network has higher plasticity and adaptability.


Author(s): Daeho Jin, Lazaros Oreopoulos, Dongmin Lee, Jackson Tan, Nayeong Cho

Abstract: In order to better understand cloud-precipitation relationships, we extend the concept of cloud regimes (CRs), developed from two-dimensional joint histograms of cloud optical thickness and cloud top pressure from the Moderate Resolution Imaging Spectroradiometer (MODIS), to include precipitation information. Taking advantage of the high-resolution Integrated Multi-satellitE Retrievals for GPM (IMERG) precipitation dataset, we derive cloud-precipitation "hybrid" regimes by implementing a k-means clustering algorithm with advanced initialization and objective measures to determine the optimal number of clusters. By expressing the variability of precipitation rates within 1-degree grid cells as histograms and varying the relative weight of cloud and precipitation information in the clustering algorithm, we obtain several editions of hybrid cloud-precipitation regimes (CPRs) and examine their characteristics.

In the deep tropics, when precipitation is weighted weakly, the cloud part centroids of the hybrid regimes resemble their counterparts of cloud-only regimes, but combined clustering tightens the cloud-precipitation relationship by decreasing each regime's precipitation variability. As the precipitation weight progressively increases, the shape of the cloud part centroids becomes blunter, while the precipitation part sharpens. When cloud and precipitation are weighted equally, the CPRs representing high clouds with intermediate to heavy precipitation exhibit distinct enough features in the precipitation parts of the centroids to allow us to project them onto the 30-min IMERG domain. Such a projection overcomes the temporal sparseness of MODIS cloud observations associated with substantial rainfall, suggesting great application potential for convection-focused studies where characterization of the diurnal cycle is essential.
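The weighted joint clustering described above amounts to concatenating each grid cell's cloud and precipitation histograms with a tunable relative weight before k-means. The histogram sizes, the weight value, and the scaling scheme below are illustrative assumptions.

```python
# Sketch: hybrid cloud-precipitation regimes via weighted concatenation of
# per-grid-cell histograms followed by k-means. Sizes/weights are toy values.
import numpy as np
from sklearn.cluster import KMeans

def hybrid_regimes(cloud_hist, precip_hist, w_precip=0.5, k=2):
    """Cluster joint [(1-w)*cloud | w*precip] feature vectors into k regimes."""
    X = np.hstack([cloud_hist * (1 - w_precip), precip_hist * w_precip])
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)

rng = np.random.default_rng(0)
cloud = rng.random((40, 42))     # e.g. flattened 6x7 tau/CTP joint histograms
precip = np.vstack([rng.random((20, 10)) * 0.1,          # light-rain cells
                    rng.random((20, 10)) * 0.1 + 2.0])   # heavy-rain cells
labels = hybrid_regimes(cloud, precip, w_precip=0.9)     # precip dominates
```

Sweeping `w_precip` from 0 to 1 reproduces the qualitative behavior in the abstract: cloud-dominated regimes at one end, precipitation-dominated regimes at the other.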


Author(s): Hang Li, Xi Chen, Ju Wang, Di Wu, Xue Liu

WiFi-based Device-free Passive (DfP) indoor localization systems liberate their users from carrying dedicated sensors or smartphones, and thus provide a non-intrusive and pleasant experience. Although existing fingerprint-based systems achieve sub-meter-level localization accuracy by training location classifiers/regressors on WiFi signal fingerprints, they are usually vulnerable to small variations in an environment. A daily change, e.g., displacement of a chair, may cause a big inconsistency between the recorded fingerprints and the real-time signals, leading to significant localization errors. In this paper, we introduce a Domain Adaptation WiFi (DAFI) localization approach to address the problem. DAFI formulates this fingerprint inconsistency issue as a domain adaptation problem, where the original environment is the source domain and the changed environment is the target domain. Directly applying existing domain adaptation methods to our specific problem is challenging, since it is generally hard to distinguish the variations in the different WiFi domains (i.e., signal changes caused by different environmental variations). DAFI embraces the following techniques to tackle this challenge. 1) DAFI aligns both marginal and conditional distributions of features in different domains. 2) Inside the target domain, DAFI squeezes the marginal distribution of every class to be more concentrated at its center. 3) Between two domains, DAFI conducts fine-grained alignment by forcing every target-domain class to better align with its source-domain counterpart. By doing these, DAFI outperforms the state of the art by up to 14.2% in real-world experiments.
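Techniques 2 and 3 above can be caricatured with a mean-based alignment: each target-domain class is squeezed toward its own center, then moved onto its source-domain counterpart's center. Real DAFI performs distribution-level alignment of marginal and conditional distributions; the version below is only an illustrative simplification, and all names in it are hypothetical.

```python
# Much-simplified sketch of class-wise target-to-source alignment:
# shrink each target class around its centre, then shift it onto the
# corresponding source-class centre. Not DAFI's actual algorithm.
import numpy as np

def align_target(Xt, yt, Xs, ys, shrink=0.5):
    Xt = Xt.copy()
    for c in np.unique(yt):
        t_mask = yt == c
        s_mean = Xs[ys == c].mean(axis=0)
        t_mean = Xt[t_mask].mean(axis=0)
        # concentrate the class, then align its centre with the source class
        Xt[t_mask] = (Xt[t_mask] - t_mean) * shrink + s_mean
    return Xt

rng = np.random.default_rng(0)
Xs = np.vstack([rng.normal(0, 0.1, (10, 4)), rng.normal(3, 0.1, (10, 4))])
ys = np.array([0] * 10 + [1] * 10)
Xt = Xs + 1.5                        # environment change = feature shift
aligned = align_target(Xt, ys, Xs, ys)
```

After alignment, a classifier trained on source-domain fingerprints can be applied to the changed environment without retraining from scratch.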


2021
Author(s): Seyedeh Samira Moosavi, Paul Fortier

Abstract: Localization in distributed massive MIMO (DM-MIMO) systems based on the fingerprinting (FP) approach has recently attracted great interest. However, this method suffers from severe multipath and signal degradation, so its accuracy deteriorates in complex propagation environments with highly variable received signal strength (RSS). Providing robust and accurate localization is therefore the goal of this work. In this paper, we propose an FP-based approach that improves localization accuracy by reducing the noise and the dimensionality of the RSS data. In the proposed approach, the fingerprints rely solely on the RSS from the single-antenna MT collected at each of the receive antenna elements of the massive MIMO base station. After creating a radio map, principal component analysis (PCA) is performed to reduce noise and redundancy; it reduces the data dimension, which selects the appropriate antennas and lowers complexity. A clustering algorithm based on K-means and affinity propagation clustering (APC) is then employed to divide the whole area into several regions, which improves positioning precision and reduces complexity and latency. Finally, to obtain highly precise localization estimates, the similar data in each cluster are modeled by a well-designed deep neural network (DNN) regression. Simulation results show that the proposed scheme improves positioning accuracy significantly. The approach has high coverage and improves the average root-mean-squared error (RMSE) to a few meters, as expected in 5G and beyond networks, demonstrating the superiority of the proposed method over previous location estimation schemes.
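The pipeline described above (PCA denoising of RSS fingerprints, clustering the area into regions, then one regressor per region mapping RSS to position) can be sketched end to end. The data sizes, the toy RSS model, plain K-means in place of the K-means/APC hybrid, and scikit-learn's `MLPRegressor` standing in for the paper's DNN are all illustrative assumptions.

```python
# Sketch of the FP pipeline: PCA -> region clustering -> per-region
# RSS-to-position regressor. All models and sizes are toy stand-ins.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
pos = rng.uniform(0, 50, (200, 2))                        # MT positions (m)
rss = -30 - 20 * np.log10(1 + pos @ rng.random((2, 64)))  # toy 64-antenna RSS
rss += rng.normal(0, 0.5, rss.shape)                      # shadowing noise

Z = PCA(n_components=8).fit_transform(rss)                # denoise + reduce
regions = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Z)
models = {c: MLPRegressor(hidden_layer_sizes=(32,), max_iter=500,
                          random_state=0).fit(Z[regions == c],
                                              pos[regions == c])
          for c in range(3)}
# At query time: reduce the RSS, pick the region, use that region's model.
pred = np.vstack([models[regions[i]].predict(Z[i:i+1]) for i in range(5)])
```

Splitting the regression per region keeps each model small and its training data homogeneous, which is the stated motivation for the clustering stage.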

