Address Assignment in Indoor Wireless Networks Using Deterministic Channel Simulation

2013 ◽  
Vol 2013 ◽  
pp. 1-13
Author(s):  
Edgar Holleis ◽  
Christoph Grimm

A crucial step during commissioning of wireless sensor and automation networks is assigning high-level node addresses (e.g., floor/room/fixture) to nodes mounted at their respective locations. This address assignment typically requires visiting every single node prior to, during, or after mounting; for large-scale networks it presents a considerable logistical effort. This paper describes a new approach to automatically assign high-level addresses without visiting every node. First, the wireless channel is simulated using a deterministic channel simulation in order to obtain node-to-node estimates of path loss. Next, the channel is measured by a precommissioning test procedure on the live network. In a third step, results from measurements and simulation are condensed into graphs and matched against each other. The resulting problem, identified as weighted graph matching, is solved heuristically. The viability of the approach and its performance are demonstrated by means of a publicly available test data set, which the algorithm is able to solve flawlessly. Further points of interest are the conditions that lead to high-quality address assignments.
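The matching step described above, aligning the measured path-loss graph with the simulated one, is an instance of weighted graph matching. The paper's actual heuristic is not specified in the abstract; the 2-opt local search below is only an illustrative stand-in with hypothetical names, minimizing the squared mismatch between the two weighted adjacency matrices.

```python
import itertools, random

def mismatch(A, B, perm):
    """Sum of squared differences between A and B reordered by perm."""
    n = len(A)
    return sum((A[i][j] - B[perm[i]][perm[j]]) ** 2
               for i in range(n) for j in range(n))

def match_graphs(A, B, restarts=20, seed=0):
    """2-opt local search over node permutations: repeatedly swap two
    node labels whenever that lowers the mismatch, from random starts."""
    rng = random.Random(seed)
    n = len(A)
    best_perm, best_cost = None, float("inf")
    for _ in range(restarts):
        perm = list(range(n))
        rng.shuffle(perm)
        cost = mismatch(A, B, perm)
        improved = True
        while improved:
            improved = False
            for i, j in itertools.combinations(range(n), 2):
                perm[i], perm[j] = perm[j], perm[i]
                c = mismatch(A, B, perm)
                if c < cost:
                    cost, improved = c, True
                else:
                    perm[i], perm[j] = perm[j], perm[i]  # undo the swap
        if cost < best_cost:
            best_perm, best_cost = perm[:], cost
    return best_perm, best_cost
```

A perfect match (zero cost) exists exactly when one graph is a relabeling of the other; in practice measurement noise makes the minimum nonzero, which is why the conditions for high-quality assignments matter.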

Author(s):  
Theofilos Chrysikos ◽  
Stavros Kotsopoulos ◽  
Eduard Babulak

The aim of this chapter is to summarize and present recent findings in the field of wireless channel modeling that provide a new method for the reliable calculation of the statistical parameters of large-scale variations of the average received signal (shadow fading). This algorithm is theoretically based on a path loss estimation model that incorporates losses due to walls and floors. This model has been confirmed to be a highly precise mathematical tool for average signal strength prediction across various frequencies of interest and propagation environments. The total path loss is estimated as a sum of two independent attenuation processes: free space loss and losses due to obstacles. This solution allows for a direct and reliable calculation of the standard deviation of the fluctuations of the average received signal in an obstacle-dense environment.
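The decomposition described above (total path loss as free-space loss plus per-obstacle losses) can be illustrated with a minimal sketch in the spirit of multi-wall models. The chapter's actual coefficients are not given in the abstract, so the default loss values below are illustrative assumptions only.

```python
import math

def free_space_loss(d_m, f_mhz):
    """Free-space path loss in dB (distance in metres, frequency in MHz):
    FSPL = 32.44 + 20*log10(d_km) + 20*log10(f_MHz)."""
    return 32.44 + 20 * math.log10(d_m / 1000.0) + 20 * math.log10(f_mhz)

def multiwall_loss(d_m, f_mhz, wall_losses_db=(), n_floors=0, floor_loss_db=15.0):
    """Total path loss = free-space term + per-wall losses + per-floor losses.
    Wall and floor penetration values are placeholder assumptions; real
    models calibrate them per material and frequency."""
    return (free_space_loss(d_m, f_mhz)
            + sum(wall_losses_db)
            + n_floors * floor_loss_db)
```

Because the obstacle terms are additive in dB, the shadow-fading statistics can then be attributed to the variability of the per-obstacle losses rather than to the deterministic free-space term.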


Sensors ◽  
2019 ◽  
Vol 19 (11) ◽  
pp. 2431 ◽  
Author(s):  
Seppe Van Brandt ◽  
Robbe Van Thielen ◽  
Jo Verhaevert ◽  
Tanja Van Hecke ◽  
Hendrik Rogier

This paper reports the characterization of the 2.45-GHz-ISM-band radio wave propagation channel. Specifically, measurements were performed in an underground parking garage, with the aim of optimizing breadcrumb systems for a Rapid Intervention Team application. The effects of the high penetration loss and large reflections by the reinforced-concrete building structure on the path loss and the large-scale fading were studied. Based on the analysis of the wireless channel, critical points for reliable communication between members of a Rapid Intervention Team were identified. In particular, attention was paid to dealing with large, spatially confined signal losses due to shadowing, the anticipation of corner losses, and the ability of the system to operate on multiple floors.
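Path-loss and large-scale-fading characterizations of this kind are commonly summarized by fitting the log-distance model PL(d) = PL(d0) + 10·n·log10(d/d0) to the measurements, with the shadowing captured by the residuals. The least-squares sketch below is a generic illustration, not the paper's actual processing chain.

```python
import math

def fit_log_distance(dists_m, pl_db, d0=1.0):
    """Least-squares fit of PL(d) = PL(d0) + 10*n*log10(d/d0).
    Returns (PL(d0), n): the reference loss and the path-loss exponent."""
    xs = [10 * math.log10(d / d0) for d in dists_m]
    xbar = sum(xs) / len(xs)
    ybar = sum(pl_db) / len(pl_db)
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, pl_db))
    den = sum((x - xbar) ** 2 for x in xs)
    n = num / den                  # slope in the 10*log10(d) coordinate
    pl0 = ybar - n * xbar          # intercept at d = d0
    return pl0, n
```

In environments like a reinforced-concrete garage, the fitted exponent n and the standard deviation of the residuals quantify exactly the shadowing and corner-loss effects the paper studies.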


Author(s):  
Fausto Lenin Granda ◽  
Leyre Azpilicueta ◽  
Darwin Aguilar ◽  
Cesar Vargas

Vehicular ad hoc networks (VANETs) enable vehicles to communicate with each other as well as with roadside units (RSUs), and Smart Cities must be able to take advantage of their applications and benefits for transportation operations. In urban environments, propagation impairments such as reflection from, diffraction around, and transmission loss through objects give rise to temporal and spatial variation of path loss and to multipath effects. This work evaluates parameters of a vehicle-to-infrastructure (V2I) wireless channel link, such as large-scale path loss and multipath metrics, in an urban scenario, using a deterministic 3D ray-launching (3D-RL) algorithm. A spatial analysis using wireless sensor networks (WSNs) at 868 MHz, 2.4 GHz, and 5.9 GHz is presented. Results show the impact on V2I communication of factors such as geometry, dielectric properties and relative position of the obstacles, placement of the RSU, and link frequency. The 3D-RL simulation represents the propagation phenomena better than an analytical path loss model, mainly at special types of intersections such as roundabouts, and gives insight into the importance of spatial distance and scenario segmentation for obtaining consistent results.


2021 ◽  
Vol 20 (1) ◽  
Author(s):  
Rainer Schnell ◽  
Jonas Klingwort ◽  
James M. Farrow

Abstract Background: We introduce and study a recently proposed method for privacy-preserving distance computations which has so far received little attention in the scientific literature. The method, based on intersecting sets of randomly labeled grid points and henceforth denoted ISGP, allows calculating approximate distances between masked spatial data. Coordinates are replaced by sets of hash values. The method allows the computation of distances between locations L when the locations at different points in time t are not known simultaneously. The distance between L1 and L2 can be computed even when L2 does not exist at t1 and L1 has been deleted at t2. An example would be patients from a medical data set and locations of later hospitalizations. ISGP is a new tool for privacy-preserving handling of geo-referenced data sets in general. Furthermore, this technique can be used to include geographical identifiers as additional information for privacy-preserving record linkage. To show that the technique can be implemented in most high-level programming languages with a few lines of code, a complete implementation within the statistical programming language R is given. The properties of the method are explored using simulations based on large-scale real-world data of hospitals (n = 850) and residential locations (n = 13,000). The method has already been used in a real-world application. Results: ISGP yields very accurate results. Our simulation study showed that, with appropriately chosen parameters, 99% accuracy in the approximated distances is achieved. Conclusion: We discussed a new method for privacy-preserving distance computations in microdata. The method is highly accurate, fast, has low computational burden, and does not require excessive storage.


2021 ◽  
Vol 18 (2) ◽  
pp. 172988142110076
Author(s):  
Tao Ku ◽  
Qirui Yang ◽  
Hao Zhang

Recently, convolutional neural networks (CNNs) have led to significant improvements in the field of computer vision, especially in the accuracy and speed of semantic segmentation, which has greatly improved robot scene perception. In this article, we propose a multilevel feature fusion dilated convolution network (Refine-DeepLab). By improving the spatial pyramid pooling structure, we propose a multiscale hybrid dilated convolution module, which captures rich context information and effectively alleviates the contradiction between receptive field size and the dilated convolution operation. At the same time, the high-level and low-level semantic information obtained through multi-level, multi-scale feature extraction effectively improves the capture of global information and the performance of large-scale target segmentation. The encoder-decoder gradually recovers spatial information while capturing high-level semantic information, resulting in sharper object boundaries. Extensive experiments verify the effectiveness of the proposed Refine-DeepLab model: we evaluate our approach thoroughly on the PASCAL VOC 2012 data set, without MS COCO pretraining, and achieve a state-of-the-art result of 81.73% mean intersection-over-union on the validation set.
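The trade-off the module addresses can be made concrete: for a stack of stride-1 dilated convolutions, each layer with kernel size k and dilation rate r adds (k − 1)·r to the receptive field, so mixing rates enlarges context at no parameter cost. The helper below illustrates this arithmetic; the actual rates used in Refine-DeepLab's hybrid dilated convolution module are not given in the abstract.

```python
def receptive_field(kernel_sizes, dilation_rates):
    """Receptive field of a stack of stride-1 dilated convolutions:
    RF = 1 + sum over layers of (k - 1) * r."""
    rf = 1
    for k, r in zip(kernel_sizes, dilation_rates):
        rf += (k - 1) * r
    return rf
```

For example, three 3x3 layers with rates 1, 2, and 5 cover a 17-pixel field, whereas the same three layers undilated cover only 7 pixels, which is why large-scale targets benefit from the multiscale rates.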


2021 ◽  
pp. 1-8
Author(s):  
Delaram Javdani ◽  
Hossein Rahmani ◽  
Gerhard Weiss

Entity resolution refers to the process of identifying, matching, and integrating records belonging to the same entities in a data set. However, a comprehensive comparison across all pairs of records leads to quadratic matching complexity. Therefore, blocking methods are used to group similar entities into small blocks before matching. Available blocking methods typically do not consider semantic relationships among records. In this paper, we propose a semantic-aware meta-blocking approach called SeMBlock. SeMBlock considers the semantic similarity of records by applying locality-sensitive hashing (LSH) based on word embeddings to achieve fast and reliable blocking in a large-scale data environment. To improve the quality of the blocks created, SeMBlock builds a weighted graph of semantically similar records and prunes the graph edges. We extensively compare SeMBlock with 16 existing blocking methods, using three real-world data sets. The experimental results show that SeMBlock significantly outperforms all 16 methods with respect to two relevant measures, the F-measure and the pair-quality measure, which are approximately 7% and 27% higher, respectively, than those of recently released blocking methods.
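The LSH blocking step can be sketched with random-hyperplane signatures: records whose embedding vectors point in similar directions get the same sign pattern and therefore land in the same block. This is only the hashing stage under assumed toy embeddings; SeMBlock's full pipeline additionally builds and prunes the weighted record graph.

```python
import random

def lsh_signature(vec, planes):
    """Random-hyperplane signature: one sign bit per hyperplane."""
    return "".join("1" if sum(p * v for p, v in zip(plane, vec)) >= 0 else "0"
                   for plane in planes)

def build_blocks(embeddings, n_planes=8, dim=4, seed=42):
    """Group records whose embedding signatures collide into blocks.
    embeddings: dict mapping record id -> embedding vector of length dim."""
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_planes)]
    blocks = {}
    for rec_id, vec in embeddings.items():
        blocks.setdefault(lsh_signature(vec, planes), []).append(rec_id)
    return blocks
```

Because only records sharing a signature are compared, the quadratic all-pairs comparison is replaced by comparisons within small blocks, which is the speedup blocking is meant to deliver.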


2021 ◽  
Vol 13 (1) ◽  
pp. 49-57
Author(s):  
Brahim Jabir ◽  
Noureddine Falih ◽  
Asmaa Sarih ◽  
Adil Tannouche

Researchers in precision agriculture regularly use deep learning to help growers and farmers control and monitor crops during the growing season; these tools extract meaningful information from large-scale aerial images received from the field, using several techniques, in order to create strategic analytics for decision making. The resulting information can be exploited for many purposes, such as sub-plot-specific weed control. Our focus in this paper is weed identification and control in sugar beet fields, in particular the creation and optimization of a convolutional neural network model trained on our data set to predict and identify the most common weed strains in the region of Beni Mellal, Morocco. This can help select herbicides that work on the identified weeds. We explore a transfer learning approach to design the networks, using the well-known TensorFlow library for deep learning models and Keras, a high-level API built on TensorFlow.
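The transfer-learning idea mentioned above, reusing a pretrained feature extractor unchanged and training only a small classification head on the new weed data, can be shown library-free. The code below is a deliberately tiny stand-in, not the paper's Keras pipeline: a fixed linear map plays the role of the frozen convolutional base, and a logistic-regression head is the only part that learns.

```python
import math

def backbone(x, W):
    """Frozen 'pretrained' feature extractor; this fixed linear map stands
    in for the convolutional base that transfer learning leaves untouched."""
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in W]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_head(samples, labels, W, lr=0.5, epochs=200):
    """Train only a logistic-regression head on top of the frozen backbone;
    W is never updated, mirroring layer.trainable = False in Keras."""
    feats = [backbone(x, W) for x in samples]
    w = [0.0] * len(W)
    b = 0.0
    for _ in range(epochs):
        for f, y in zip(feats, labels):
            p = sigmoid(sum(wi * fi for wi, fi in zip(w, f)) + b)
            g = p - y                      # gradient of the log-loss
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b

def predict(x, W, w, b):
    f = backbone(x, W)
    return sigmoid(sum(wi * fi for wi, fi in zip(w, f)) + b)
```

In the Keras setting the same split appears as freezing the pretrained base's layers and compiling a new dense classifier on top; only the head's weights change during training on the field imagery.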


Organizacija ◽  
2017 ◽  
Vol 50 (1) ◽  
pp. 3-16 ◽  
Author(s):  
Damjan Maletič ◽  
Matjaž Maletič ◽  
Basim Al-Najjar ◽  
Katerina Gotzamani ◽  
Maria Gianni ◽  
...  

Abstract Purpose: The purpose of this empirical study is to examine the role of two contingency factors, i.e. uncertainty and competitiveness, in relation to physical asset management (PAM) practices as well as to maintenance key performance indicators (KPIs). The research is based on the premise that PAM, defined here by risk management practices, performance assessment practices, life cycle management practices, and policy & strategy practices, has become an indispensable element of strategic thinking of asset owners as well as of maintenance and asset managers. The study aims to advance the understanding of how organizations that face a high or low level of uncertainty and competitiveness respond in terms of PAM deployment. Methodology/Approach: This study employed a data set based on a large-scale survey among organizations in six European countries (Slovenia, Poland, Greece, Sweden, Turkey, and Slovakia). Data were collected from 138 organizations located in these countries. Findings: The results show that organizations faced with a high level of uncertainty and competitiveness are more engaged in the deployment of PAM practices. Moreover, organizations facing high levels of competitiveness use KPIs to a greater extent than organizations under low levels of competitiveness. Originality/value: From a theoretical perspective, this study contributes to contingency theory by providing empirical evidence on whether a context-dependent approach to PAM is needed. The findings also provide insights for managers on how to respond to competitive pressure and how to customize PAM practices to adapt to changes in a dynamic organizational environment.


Author(s):  
Georgi Derluguian

The author develops ideas about the origin of social inequality during the evolution of human societies and reflects on the possibilities of its overcoming. What makes human beings different from other primates is a high level of egalitarianism and altruism, which contributed to more successful adaptability of human collectives at early stages of the development of society. The transition to agriculture, coupled with substantially increasing population density, was marked by the emergence and institutionalisation of social inequality based on the inequality of tangible assets and symbolic wealth. Then, new institutions of warfare came into existence, and they were aimed at conquering and enslaving the neighbours engaged in productive labour. While exercising control over nature, people also established and strengthened their power over other people. Chiefdom as a new type of polity came into being. Elementary forms of power (political, economic and ideological) served as a basis for the formation of early states. The societies in those states were characterised by social inequality and cruelties, including slavery, mass violence and numerous victims. Nowadays, the old elementary forms of power that are inherent in personalistic chiefdom are still functioning along with modern institutions of public and private bureaucracy. This constitutes the key contradiction of our time, which is the juxtaposition of individual despotic power and public infrastructural one. However, society is evolving towards an ever more efficient combination of social initiatives with the sustainability and viability of large-scale organisations.

