Positioning evaluation and ground truth definition for real life use cases

Author(s):  
Carlos Martinez de la Osa ◽  
Grigorios G. Anagnostopoulos ◽  
Mauricio Togneri ◽  
Michel Deriaz ◽  
Dimitri Konstantas
2021 ◽  
Author(s):  
Shikha Suman ◽  
Ashutosh Karna ◽  
Karina Gibert

Hierarchical clustering is one of the most popular choices for understanding the underlying structure of a dataset and defining typologies, with multiple applications in real life. Unlike popular methods such as k-means, the hierarchical family makes it possible to understand the inner structure of the dataset and obtain the number of clusters as an output, and the granularity of the final clustering can be adjusted to the goals of the analysis. Determining the number of clusters in a hierarchical method relies on analyzing the resulting dendrogram: experts have criteria for visually inspecting the dendrogram and deciding where to cut it. Finding automatic criteria that imitate experts in this task is still an open problem, and dependence on an expert to cut the tree is a limitation in real applications in fields such as Industry 4.0 and additive manufacturing. This paper analyzes several cluster validity indexes in the context of determining the suitable number of clusters in hierarchical clustering. A new Cluster Validity Index (CVI) is proposed that captures the implicit criteria used by experts when analyzing dendrograms. The proposal has been applied to a range of datasets and validated against expert ground truth, outperforming the state of the art while significantly reducing the computational cost.
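The proposed CVI itself is not reproduced in the abstract, but the selection procedure it automates can be sketched: cut the dendrogram at each candidate number of clusters, score every partition with a validity index, and keep the best-scoring cut. The sketch below uses a synthetic dataset, Ward linkage, and a Calinski-Harabasz-style dispersion ratio as a stand-in index; all three are illustrative assumptions, not the paper's choices.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Three well-separated 2-D blobs as a synthetic stand-in for a real dataset.
X = np.vstack([rng.normal(c, 0.3, size=(30, 2)) for c in (0, 4, 8)])

Z = linkage(X, method="ward")   # build the dendrogram

def dispersion_ratio(X, labels):
    """Calinski-Harabasz-style index: between-cluster dispersion over
    within-cluster dispersion; higher means better-separated clusters."""
    n, k = len(X), len(set(labels))
    overall = X.mean(axis=0)
    between = within = 0.0
    for c in set(labels):
        pts = X[labels == c]
        between += len(pts) * np.sum((pts.mean(axis=0) - overall) ** 2)
        within += np.sum((pts - pts.mean(axis=0)) ** 2)
    return (between / (k - 1)) / (within / (n - k))

# Score each candidate cut of the dendrogram and keep the best one.
scores = {k: dispersion_ratio(X, fcluster(Z, k, criterion="maxclust"))
          for k in range(2, 8)}
best_k = max(scores, key=scores.get)
print(best_k)  # the three-blob dataset should yield best_k == 3
```

Replacing `dispersion_ratio` with any other CVI changes only the scoring step; the cut-and-score loop stays the same.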


Information ◽  
2020 ◽  
Vol 11 (1) ◽  
pp. 53
Author(s):  
Jinfang Sheng ◽  
Ben Lu ◽  
Bin Wang ◽  
Jie Hu ◽  
Kai Wang ◽  
...  

Research on complex networks is a hot topic in many fields, and community detection is a complex and meaningful process that plays an important role in studying the characteristics of complex networks. Community structure is a common feature of networks: given a graph, the process of uncovering its community structure is called community detection. Many community detection algorithms have been proposed from different perspectives, yet achieving stable and accurate community division remains a non-trivial task due to the difficulty of setting specific parameters, high randomness and the lack of ground-truth information. In this paper, we explore a new decision-making method inspired by real-life communication and propose a preferential decision model based on dynamic relationships, applied to dynamic systems. We apply this model to the label propagation algorithm and present Community Detection based on a Preferential Decision model, called CDPD. The model aims to reveal the topological structure and the hierarchical structure of networks. By analyzing the structural characteristics of complex networks and mining the tightness between nodes, the priority of neighbor nodes is used to perform the preferential decision, until the information in the system reaches a stable state. In the experiments, we compared CDPD against eight baseline algorithms on real-world and synthetic networks. The results show that CDPD not only performs better than most recent algorithms on most datasets, but is also well suited to community networks with ambiguous structure, especially sparse networks.
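CDPD's preferential decision rule is not spelled out in the abstract, but the label propagation algorithm it builds on is standard and easy to sketch: every node repeatedly adopts the most frequent label among its neighbors until no label changes. The deterministic tie-breaking below (keep the current label if tied, otherwise the largest) is a simplification; real implementations usually break ties at random.

```python
from collections import Counter

def label_propagation(adj, max_iter=50):
    """Asynchronous label propagation: each node adopts the most frequent
    label among its neighbours. Ties keep the current label when possible,
    otherwise take the largest tied label (deterministic simplification)."""
    labels = {v: v for v in adj}       # every node starts in its own community
    for _ in range(max_iter):
        changed = False
        for v in adj:
            counts = Counter(labels[u] for u in adj[v])
            top = max(counts.values())
            tied = {l for l, c in counts.items() if c == top}
            best = labels[v] if labels[v] in tied else max(tied)
            if labels[v] != best:
                labels[v], changed = best, True
        if not changed:
            break
    return labels

# Two 4-cliques joined by a single bridge edge (3-4): two communities expected.
adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2, 4],
       4: [3, 5, 6, 7], 5: [4, 6, 7], 6: [4, 5, 7], 7: [4, 5, 6]}
labels = label_propagation(adj)
print(labels)  # nodes 0-3 share one label, nodes 4-7 another
```

CDPD replaces the plain majority vote with a priority over neighbors derived from node tightness, which is what stabilizes the otherwise highly random propagation.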


Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6372
Author(s):  
Aleksandra Królak ◽  
Tomasz Wiktorski ◽  
Magnus Friestad Bjørkavoll-Bergseth ◽  
Stein Ørn

Heart rate variability (HRV) analysis can be a useful tool to detect underlying heart or even general health problems. Currently, such analysis is usually performed in controlled or semi-controlled conditions. Since many typical HRV measures are sensitive to data quality, manual artifact correction is common in the literature, either as the sole method or in addition to various filters. The proliferation of Personal Monitoring Devices (PMDs) with continuous HRV analysis opens an opportunity for HRV analysis in a new setting. However, current artifact correction approaches have several limitations that hamper the analysis of real-life HRV data. To address this issue, we propose an algorithm for automated artifact correction that has minimal impact on HRV measures but can handle more artifacts than existing solutions. We verify this algorithm on two datasets, one collected during a recreational bicycle race and the other in a laboratory, both using a PMD in the form of a GPS watch. The data include direct measurement of electrical myocardial signals using chest straps and, in the case of the race dataset, direct measurements of power using a crank sensor, both paired with the watch. Early results suggest that the algorithm can correct more artifacts than existing solutions without the need for manual support or parameter tuning. At the same time, the error introduced to HRV measures for peak correction and shorter gaps is similar to the best existing solution (Kubios-inspired threshold-based cubic interpolation) and better than the commonly used median filter. For longer gaps, cubic interpolation can in some cases yield a lower error in HRV measures, but the shape of the curve it generates matches the ground truth worse than our algorithm, suggesting that further development of the proposed algorithm may also improve these results.
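The authors' algorithm is not detailed in the abstract; the baseline it is compared against, threshold-based detection followed by cubic interpolation, can be sketched as follows. The relative threshold and the use of the series median are simplifying assumptions of this sketch, not the exact Kubios convention.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def correct_artifacts(rr, threshold=0.25):
    """Threshold-based artifact correction for an RR-interval series (a
    simplified Kubios-style baseline, not the paper's algorithm): intervals
    deviating from the series median by more than `threshold` (relative)
    are flagged and replaced by cubic-spline interpolation over good beats."""
    rr = np.asarray(rr, dtype=float)
    med = np.median(rr)
    bad = np.abs(rr - med) / med > threshold
    spline = CubicSpline(np.flatnonzero(~bad), rr[~bad])
    out = rr.copy()
    out[bad] = spline(np.flatnonzero(bad))
    return out

# ~800 ms sinus rhythm with one ectopic-like artifact at index 5.
rr = [800, 805, 798, 802, 799, 400, 801, 803, 800]
print(correct_artifacts(rr))  # index 5 is pulled back near 800 ms
```

As the abstract notes, this kind of interpolation behaves well for isolated peaks and short gaps but can distort the curve shape over longer gaps.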


Author(s):  
Sangchul Ahn ◽  
Joohyun Lee ◽  
Jinwoo Kim ◽  
Sungkuk Chun ◽  
Jungbin Kim ◽  
...  
Keyword(s):  

2009 ◽  
Author(s):  
David Doria

In recent years, Light Detection and Ranging (LiDAR) scanners have become more prevalent in the scientific community. They capture a “2.5-D” image of a scene by sending out thousands of laser pulses and using time-of-flight calculations to determine the distance to the first reflecting surface in the scene. Rather than setting up a collection of physical objects and actually sending lasers into the scene, one can simply create a scene out of 3D models and “scan” it by casting rays at the models. This is a great resource for researchers who work with 3D model/surface/point data and LiDAR data. The synthetic scanner can be used to produce datasets for which the ground truth is known, in order to ensure algorithms are behaving properly before moving to “real” LiDAR scans. Noise can also be added to the points to simulate a real LiDAR scan for researchers who do not have access to the very expensive equipment required to obtain real scans.
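The ray-casting idea is simple enough to sketch. The toy below (not the author's tool, which scans arbitrary 3D models) casts a fan of rays from the origin at a single sphere, computes the first hit analytically, and can perturb each range with Gaussian noise; the scene geometry and ray fan are illustrative assumptions.

```python
import numpy as np

def scan_sphere(center, radius, n_rays=9, noise_sigma=0.0, seed=0):
    """Toy synthetic LiDAR: cast rays from the origin along a fan of
    directions, return the first hit on a sphere (analytic ray-sphere
    intersection), optionally perturbed by Gaussian range noise."""
    rng = np.random.default_rng(seed)
    hits = []
    for angle in np.linspace(-0.2, 0.2, n_rays):          # narrow horizontal fan
        d = np.array([np.sin(angle), 0.0, np.cos(angle)])  # unit ray direction
        oc = -np.asarray(center, dtype=float)              # origin minus center
        b = 2 * d @ oc
        disc = b * b - 4 * (oc @ oc - radius ** 2)
        if disc < 0:
            continue                                       # ray misses the sphere
        t = (-b - np.sqrt(disc)) / 2                       # nearest intersection
        t += rng.normal(0.0, noise_sigma)                  # simulated range noise
        hits.append(t * d)
    return np.array(hits)

pts = scan_sphere(center=(0, 0, 5), radius=1.0)
print(len(pts), pts[len(pts) // 2])  # center ray hits at (0, 0, 4)
```

With `noise_sigma=0` the points lie exactly on the sphere, giving a known ground truth; a nonzero sigma mimics real range noise.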


Background subtraction is a key step in detecting moving objects in video in the computer vision field. It subtracts a reference frame from every new frame of a video scene. A wide variety of background subtraction techniques is available in the literature for real-life applications such as crowd analysis, human activity tracking, traffic analysis and many more. However, not enough benchmark datasets are available that cover all the challenges subtraction techniques face in object detection: dynamic backgrounds, illumination changes, shadow appearance, occlusion and object speed. In this perspective, we provide an exhaustive literature survey of background subtraction techniques for video surveillance applications that address these challenges in real situations. Additionally, we survey eight benchmark video datasets, namely Wallflower, BMC, PET, IBM, CAVIAR, CD.Net, SABS and RGB-D, along with their available ground truth. This study evaluates the performance of five background subtraction methods using performance parameters such as specificity, sensitivity, FNR, PWC and F-Score, in order to identify an accurate and efficient method for detecting moving objects in less computational time.
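The reference-frame idea can be illustrated with a minimal running-average model (a generic sketch, not one of the five surveyed methods): the background is an exponential moving average of past frames, and pixels that differ from it by more than a threshold are marked foreground. The frame size, learning rate and threshold are illustrative assumptions.

```python
import numpy as np

def subtract_background(frames, alpha=0.05, threshold=30):
    """Running-average background subtraction: the background model is an
    exponential moving average of past frames; pixels differing from it by
    more than `threshold` gray levels are flagged as foreground."""
    bg = frames[0].astype(float)
    masks = []
    for frame in frames[1:]:
        masks.append(np.abs(frame.astype(float) - bg) > threshold)
        bg = (1 - alpha) * bg + alpha * frame   # slowly absorb scene changes
    return masks

# Static 8x8 gray scene with a bright 2x2 "object" entering from frame 2 on.
frames = [np.full((8, 8), 100, dtype=np.uint8) for _ in range(5)]
for f in frames[2:]:
    f[3:5, 3:5] = 200
masks = subtract_background(frames)
print([int(m.sum()) for m in masks])  # 4 object pixels flagged once it appears
```

The `alpha` update is exactly what makes such models vulnerable to the surveyed challenges: too slow and illumination changes are flagged, too fast and a stopped object is absorbed into the background.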


Author(s):  
Timofei Istomin ◽  
Elia Leoni ◽  
Davide Molteni ◽  
Amy L. Murphy ◽  
Gian Pietro Picco ◽  
...  

Proximity detection is at the core of several mobile and ubiquitous computing applications. These include reactive use cases, e.g., alerting individuals of hazards or interaction opportunities, and others concerned only with logging proximity data, e.g., for offline analysis and modeling. Common approaches rely on Bluetooth Low Energy (BLE) or ultra-wideband (UWB) radios. Nevertheless, these strike opposite tradeoffs between the accuracy of distance estimates quantifying proximity and the energy efficiency affecting system lifetime, effectively forcing a choice between the two and ultimately constraining applicability. Janus reconciles these dimensions in a dual-radio protocol enabling accurate and energy-efficient proximity detection, where the energy-savvy BLE is exploited to discover devices and coordinate their distance measurements, acquired via the energy-hungry UWB. A model supports domain experts in configuring Janus for their use cases with predictable performance. The latency, reliability, and accuracy of Janus are evaluated experimentally, including realistic scenarios endowed with the mm-level ground truth provided by a motion capture system. Energy measurements show that Janus achieves weeks to months of autonomous operation, depending on the use case configuration. Finally, several large-scale campaigns exemplify its practical usefulness in real-world contexts.


2020 ◽  
Author(s):  
Muhammad Salek Ali ◽  
Massimo Vecchio ◽  
Fabio Antonelli

Within Internet of Things (IoT) research, there is a growing interest in leveraging the decentralization properties of blockchains to develop IoT authentication and authorization mechanisms that do not inherently require centralized third-party intermediaries. This paper presents a framework for sharing IoT data in a decentralized and private-by-design manner in exchange for monetary services. The framework is built on a tiered blockchain architecture, along with the InterPlanetary File System for IoT data storage and transfer. The goal is to enable IoT data users to exercise fine-grained control over how much data they share with entities authenticated through blockchains. To highlight how the framework would be used in real-life scenarios, this paper presents two use cases, namely an IoT data marketplace and decentralized connected-vehicle insurance. These examples showcase how the proposed framework can be used for varying smart-contract-based applications involving exchanges of IoT data and cryptocurrency. Following the discussion of the use cases, the paper outlines a detailed security analysis performed on the proposed framework, based on multiple attack scenarios. Finally, it presents and discusses extensive evaluations, in terms of various performance metrics obtained from a real-world implementation.


Author(s):  
Srinivas Mahankali ◽  
Sudhir Chaudhary

Every individual undergoes a series of educational programs and acquires skills and pedagogical certifications throughout their life from various educational and skill development organisations across the world, including the companies they work for. It is imperative to have a comprehensive record of these certifications that can be authentically verified by those wanting to employ the individual for the skills accredited through them. In this chapter, the authors explore the utility of blockchain-technology-led digitization, automation of trust, and disintermediation in the education sector. They examine some of the prominent use cases and the challenges faced by blockchain technology. They also look at the current state of blockchain-enabled applications in related domains and their implications for the education sector in India, along with a real-life illustration implemented using AuxCert on Auxledger, a permissioned blockchain platform from the Auxesis group.


Sensors ◽  
2020 ◽  
Vol 20 (23) ◽  
pp. 6810
Author(s):  
Paruthi Pradhapan ◽  
Emmanuel Rios Velazquez ◽  
Jolanda A. Witteveen ◽  
Yelena Tonoyan ◽  
Vojkan Mihajlović

Assessing the human affective state using electroencephalography (EEG) has shown good potential but has failed to demonstrate reliable performance in real-life applications, especially when the setup itself might impact affective processing and the models of affect are generalized across individuals. Additionally, using subjective assessment of one's affect as ground truth has often been disputed. To shed light on the former challenge, we explored the use of a convenient EEG system with 20 participants to capture their reactions to affective movie clips in a naturalistic setting. Employing a state-of-the-art machine learning approach demonstrated that the highest performance is reached when combining linear features, namely symmetry features and single-channel features, with nonlinear ones derived by a multiscale entropy approach. Nevertheless, the best performance, reflected in the highest F1-score achieved in a binary classification task, was 0.71 for valence and 0.62 for arousal. This performance was 10–20% better than that obtained using ratings provided by 13 independent raters. We argue that affective self-assessment might be underrated and that it is crucial to account for personal differences in both perception and physiological response to affective cues.
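The paper's exact feature set is not given in the abstract; as an illustration of the nonlinear features it mentions, the following is a generic multiscale sample entropy sketch. The parameter choices (m=2, r=0.2, the per-scale tolerance) are common defaults assumed here, not necessarily the authors'.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy: negative log of the conditional probability that
    sequences matching for m points also match for m+1 points (Chebyshev
    distance, tolerance r * std). Lower values mean a more regular signal."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def count_matches(m):
        templates = np.array([x[i:i + m] for i in range(len(x) - m)])
        d = np.abs(templates[:, None] - templates[None, :]).max(axis=2)
        return (np.sum(d <= tol) - len(templates)) / 2   # exclude self-matches
    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales=(1, 2, 4)):
    """Coarse-grain the signal by non-overlapping averaging at each scale,
    then compute the sample entropy of each coarse-grained series."""
    x = np.asarray(x, dtype=float)
    return [sample_entropy(x[:len(x) // s * s].reshape(-1, s).mean(axis=1))
            for s in scales]

rng = np.random.default_rng(0)
noise = rng.normal(size=512)                       # irregular signal
tone = np.sin(np.linspace(0, 32 * np.pi, 512))     # regular signal
print(multiscale_entropy(noise)[0] > multiscale_entropy(tone)[0])  # True
```

The resulting per-scale entropies form a feature vector that can be concatenated with linear (symmetry and single-channel) features before classification.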

