Navigation with low-sampling-rate GPS and smartphone sensors: a data-driven learning-based approach

Author(s):  
Jiahui Qi ◽  
Hang Li ◽  
Feng Yin ◽  
Bo Ai ◽  
H.H. Putra ◽  
...  
Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 4991
Author(s):  
Mike Lakoju ◽  
Nemitari Ajienka ◽  
M. Ahmadieh Khanesar ◽  
Pete Burnap ◽  
David T. Branson

To create products that are a better fit for purpose, manufacturers require new methods for gaining insights into product experience in the wild at scale. “Chatty Factories” is a concept that explores the transformative potential of placing IoT-enabled, data-driven systems at the core of design and manufacturing processes, in line with the Industry 4.0 paradigm. In this paper, we propose a model that enables new forms of agile engineering product development via “chatty” products. Products relay their “experiences” from the consumer world back to designers and product engineers through the mediation provided by embedded sensors, IoT, and data-driven design tools. Our model aims to identify product “experiences” to support insights into product use. To this end, we create an experiment to: (i) collect sensor data at a 100 Hz sampling rate from a “chatty device” (a device with embedded sensors) for six common everyday activities that drive product experience: standing, walking, sitting, dropping and picking up the device, placing the device stationary on a side table, and placing it on a vibrating surface; (ii) pre-process and manually label the product use activity data; (iii) compare a total of four unsupervised machine learning models (three classic algorithms and the fuzzy C-means algorithm) for product use activity recognition for each unique sensor; and (iv) present and discuss our findings. The empirical results demonstrate the feasibility of applying unsupervised machine learning algorithms to cluster product use activity. The highest F-measure obtained is 0.87, with an MCC of 0.84, when the fuzzy C-means algorithm is applied for clustering, outperforming the other three algorithms.
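The fuzzy C-means step can be illustrated with a minimal NumPy sketch. This is not the authors' pipeline: the two synthetic blobs stand in for per-sensor activity features, and the fuzzifier m = 2 and quantile-based initialization are illustrative assumptions.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, max_iter=100, tol=1e-5):
    """Minimal fuzzy C-means: returns (cluster centers, membership matrix U)."""
    # Spread initial centers across the data range (deterministic init)
    centers = np.quantile(X, np.linspace(0.1, 0.9, n_clusters), axis=0)
    U = np.zeros((len(X), n_clusters))
    for _ in range(max_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-10)                         # guard against zero distance
        inv = d ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)  # memberships sum to 1 per point
        Um = U_new ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        converged = np.abs(U_new - U).max() < tol
        U = U_new
        if converged:
            break
    return centers, U

# Two well-separated synthetic "activity" clusters (stand-ins for sensor features)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(5.0, 0.3, (50, 2))])
centers, U = fuzzy_c_means(X, n_clusters=2)
labels = U.argmax(axis=1)                             # hard labels from soft memberships
```

Unlike hard k-means, each point carries a graded membership to every cluster; the hard label is recovered only at the end by taking the maximum membership.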


2020 ◽  
Author(s):  
Tiago Timóteo Fernandes ◽  
Bruno Direito ◽  
Alexandre Sayal ◽  
João Pereira ◽  
Alexandre Andrade ◽  
...  

Abstract
Background: The analysis of connectivity has become a fundamental tool in human neuroscience. Granger Causality Mapping is a data-driven method that uses Granger causality (GC) to assess the existence and direction of influence between signals, based on the temporal precedence of information. More recently, a theory of Granger causality has been developed for state-space processes (SS-GC), but little is known about its statistical validation and application to functional magnetic resonance imaging (fMRI) data.
New Method: We implemented a new heuristic, focusing on the application of SS-GC with a distinct statistical validation technique, time-reversed testing, to generative synthetic models, and compared it to classical multivariate computational frameworks. We also tested a range of experimental parameters, including block structure, sampling frequency, noise, and system mean pairwise correlation, using a statistical framework of binary classification.
Results: We found that SS-GC with time-reversed testing outperforms the other frameworks. The results validate the application of SS-GC to generative models. When estimating reliable causal relations, SS-GC returns promising results, especially on synthetic data with a high impact of noise and sampling rate.
Conclusions: SS-GC with time-reversed testing offers a possible framework for future analysis of fMRI data in the context of data-driven causality analysis.
Highlights: State-space GC was combined with a statistical validation step using time-reversed testing. This novel heuristic outperforms classical GC when applied to generative models. The number of correctly identified connections between variables increases with the number of blocks and the number of points per block. SNR and subsampling have a significant impact on the results.
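The core GC comparison can be sketched in NumPy using the classic time-domain (not state-space) formulation on a toy generative model: a signal x Granger-causes y if adding x's past to an autoregression of y reduces the residual variance. The lag order p = 2 and the AR coefficients below are illustrative assumptions, and the time-reversed check is a simplified stand-in for the paper's statistical validation step.

```python
import numpy as np

def granger_stat(x, y, p=2):
    """ln(var_restricted / var_full): evidence that x Granger-causes y
    at lag order p (classic time-domain GC, not the state-space form)."""
    n = len(y)
    Y = y[p:]
    # Lagged design matrices: columns are y[t-1..t-p] and x[t-1..t-p]
    lags_y = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    lags_x = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
    def resid_var(A):
        beta, *_ = np.linalg.lstsq(A, Y, rcond=None)
        r = Y - A @ beta
        return r @ r / len(Y)
    var_r = resid_var(lags_y)                       # restricted: past of y only
    var_f = resid_var(np.hstack([lags_y, lags_x]))  # full: past of y and x
    return np.log(var_r / var_f)

# Synthetic generative model in which x drives y with a one-sample delay
rng = np.random.default_rng(0)
T = 2000
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.8 * x[t - 1] + 0.2 * rng.normal()

gc_xy = granger_stat(x, y)                # true direction: should be large
gc_yx = granger_stat(y, x)                # reverse direction: near zero
gc_rev = granger_stat(x[::-1], y[::-1])   # time reversal flips the direction
```

The time-reversed run illustrates the validation idea: a genuine lagged influence of x on y should not survive as x → y influence when both series are reversed.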


Author(s):  
E. Voelkl ◽  
L. F. Allard

The conventional discrete Fourier transform can be extended to a discrete Extended Fourier Transform (EFT). The EFT allows one to work with discrete data in close analogy to the optical bench, where continuous data are processed. The EFT includes the capability to increase or decrease the resolution in Fourier space (thus the argument that CCD cameras need a higher number of pixels to increase the resolution in Fourier space is no longer valid). Fourier transforms may also be shifted with arbitrary increments, which is important in electron holography. Still, the analogy between the optical bench and discrete optics on a computer is limited by the Nyquist limit. In this abstract we discuss the capability of the EFT to change the initial sampling rate s_i of a recorded or simulated image to any other (final) sampling rate s_f.
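The authors' EFT implementation is not reproduced here, but the underlying idea of changing the sampling rate in Fourier space can be sketched in NumPy for a 1-D signal: zero-padding the centered spectrum raises the sampling rate, truncating it lowers the rate (the sketch ignores the Nyquist-bin subtlety that arises when truncating even-length spectra).

```python
import numpy as np

def fourier_resample(signal, n_out):
    """Resample a band-limited 1-D signal by zero-padding (n_out > n_in)
    or truncating (n_out < n_in) its centered spectrum; amplitudes are
    rescaled so sample values are preserved."""
    n_in = len(signal)
    spec = np.fft.fftshift(np.fft.fft(signal))    # center zero frequency
    if n_out > n_in:
        pad = (n_out - n_in) // 2
        spec = np.pad(spec, (pad, n_out - n_in - pad))
    else:
        cut = (n_in - n_out) // 2
        spec = spec[cut:cut + n_out]
    return np.fft.ifft(np.fft.ifftshift(spec)).real * (n_out / n_in)

# A 3 Hz sine sampled at 32 Hz, resampled to 128 Hz (exact for band-limited,
# periodic input: the new samples fall on the same underlying sine)
x_in = np.sin(2 * np.pi * 3 * np.arange(32) / 32.0)
x_up = fourier_resample(x_in, 128)
```

For 2-D images the same padding/truncation is applied along both axes of the shifted spectrum.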


2009 ◽  
Vol 23 (4) ◽  
pp. 191-198 ◽  
Author(s):  
Suzannah K. Helps ◽  
Samantha J. Broyd ◽  
Christopher J. James ◽  
Anke Karl ◽  
Edmund J. S. Sonuga-Barke

Background: The default mode interference hypothesis (Sonuga-Barke & Castellanos, 2007) predicts (1) the attenuation of very low frequency oscillations (VLFO; e.g., 0.05 Hz) in brain activity within the default mode network during the transition from rest to task, and (2) that failures to attenuate in this way will lead to an increased likelihood of periodic attention lapses that are synchronized to the VLFO pattern. Here, we tested these predictions using DC-EEG recordings within and outside of a previously identified network of electrode locations hypothesized to reflect DMN activity (i.e., the S3 network; Helps et al., 2008). Method: 24 young adults (mean age 22.3 years; 8 male), sampled to include a wide range of ADHD symptoms, took part in a study of rest-to-task transitions. Two conditions were compared: 5 min of rest (eyes open) and a 10-min simple 2-choice RT task with a relatively high sampling rate (ISI 1 s). DC-EEG was recorded during both conditions, and the low-frequency spectrum was decomposed and measures of the power within specific bands extracted. Results: The shift from rest to task led to an attenuation of VLFO activity within the S3 network which was inversely associated with ADHD symptoms. RT during the task also showed a VLFO signature. During the task there was a small but significant degree of synchronization between EEG and RT in the VLFO band. Attenuators showed a lower degree of synchrony than non-attenuators. Discussion: The results provide some initial EEG-based support for the default mode interference hypothesis and suggest that failure to attenuate VLFO in the S3 network is associated with higher synchrony between low-frequency brain activity and RT fluctuations during a simple RT task. Although significant, the effects were small, and future research should employ tasks with a higher sampling rate to increase the possibility of extracting robust and stable signals.
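Extracting power in a very low frequency band can be sketched with a plain periodogram in NumPy. This is illustrative only: the 0.02–0.2 Hz band edges, the 4 Hz sampling rate, and the synthetic rest/task signals are assumptions, not the study's DC-EEG pipeline.

```python
import numpy as np

def band_power(sig, fs, f_lo, f_hi):
    """Power of `sig` within [f_lo, f_hi] Hz from a simple periodogram."""
    sig = sig - sig.mean()                     # remove the DC offset
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(sig)) ** 2 / (fs * len(sig))
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].sum() * (freqs[1] - freqs[0])

# 5 min of fake data at fs = 4 Hz: a strong 0.05 Hz oscillation at "rest"
# that is attenuated during "task", plus broadband noise
fs = 4.0
t = np.arange(0, 300, 1.0 / fs)
rng = np.random.default_rng(0)
rest = np.sin(2 * np.pi * 0.05 * t) + 0.1 * rng.normal(size=t.size)
task = 0.2 * np.sin(2 * np.pi * 0.05 * t) + 0.1 * rng.normal(size=t.size)

vlfo_rest = band_power(rest, fs, 0.02, 0.2)
vlfo_task = band_power(task, fs, 0.02, 0.2)    # attenuated relative to rest
```

Comparing the two band powers mirrors the rest-to-task attenuation contrast described above; a subject whose `vlfo_task` stays close to `vlfo_rest` would count as a non-attenuator.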

