Spectral Power Analysis of Drivers’ Gas Pedal Control during Steady-state Car-following on Freeways

Author(s):  
Fred Feng ◽  
Shan Bao ◽  
James Sayer ◽  
David LeBlanc

This paper investigated the frequency characteristics of drivers’ gas pedal control in steady-state car-following on freeways by using vehicle sensor data from an existing naturalistic driving study. The main objectives were to examine the frequency range and distributions of a driver operating the gas pedal when following a lead vehicle, and whether the higher and lower frequency components of the gas pedal signal would vary when following a lead vehicle at varying distances. A total of 1,461 driving segments, each with 90 seconds of steady-state freeway car-following, were extracted from the naturalistic driving data. Fourier analysis was performed to convert the time series data of drivers’ gas pedal control to the frequency domain. The results show that during steady-state freeway car-following, the power of the gas pedal control peaks at around 0.033 Hz, or 15 s per pedal movement (derived using the median of the peak frequency), and the upper limit of the frequency is around 0.94 Hz, or 0.5 s per pedal movement (derived using the 95th percentile of the cutoff frequency). Further analysis showed that following a lead vehicle with a smaller gap was associated with a larger proportion of the higher frequency component (p < .001), and following a lead vehicle with a larger gap was associated with a larger proportion of the lower frequency component (p < .001). This suggests that a larger gap may allow the driver to relax control of the gas pedal, resulting in smoother operation. Potential applications of this paper include developing more realistic driver models that could be used in designing advanced driver assistance systems.
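A minimal sketch of the kind of Fourier analysis the abstract describes: compute a one-sided power spectrum of a pedal signal, then read off the peak frequency and a cutoff frequency. The 10 Hz sampling rate, the synthetic 0.033 Hz oscillation, and the 95%-energy definition of the cutoff are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def pedal_power_spectrum(signal, fs):
    """One-sided power spectrum of a (mean-removed) gas-pedal time series."""
    x = signal - np.mean(signal)             # remove the DC offset
    n = len(x)
    power = np.abs(np.fft.rfft(x)) ** 2 / n  # one-sided power
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, power

def peak_and_cutoff(freqs, power, energy_frac=0.95):
    """Peak frequency, plus the frequency below which energy_frac of the power lies."""
    peak = freqs[np.argmax(power[1:]) + 1]   # skip the 0 Hz bin
    cum = np.cumsum(power) / np.sum(power)
    cutoff = freqs[np.searchsorted(cum, energy_frac)]
    return peak, cutoff

# Synthetic 90 s segment sampled at 10 Hz: slow 0.033 Hz oscillation plus noise
rng = np.random.default_rng(0)
fs = 10.0
t = np.arange(0, 90, 1 / fs)
pedal = 5.0 + 2.0 * np.sin(2 * np.pi * 0.033 * t) + 0.1 * rng.standard_normal(len(t))
freqs, power = pedal_power_spectrum(pedal, fs)
peak, cutoff = peak_and_cutoff(freqs, power)
```

With a 90 s window the frequency resolution is about 0.011 Hz, so the recovered peak lands on the bin nearest 0.033 Hz.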

Author(s):  
Dan Xu ◽  
Chennan Xue ◽  
Huaguo Zhou

The objective of this paper is to analyze headway and speed distributions based on driver characteristics and work zone (WZ) configurations by utilizing Naturalistic Driving Study (NDS) data. The NDS database provides a unique opportunity to study car-following behaviors for different driver types in various WZ configurations, which cannot be achieved with traditional field data collection. The complete NDS WZ trip data of 200 traversals and 103 individuals, including time-series data, forward-view videos, radar data, and driver characteristics, were collected at four WZ configurations, encompassing nearly 1,100 vehicle miles traveled, 19 vehicle hours driven, and over 675,000 data points at 0.1 s intervals. First, time headway selections were analyzed against driver characteristics such as gender, age group, and risk perception to develop a headway selection table. Then, speed profiles for different WZ configurations were established to explore the speed distribution and speed changes. The best-fitted curves of the time headway and speed distributions were estimated with a generalized additive model (GAM), and a change point detection method was used to identify where significant changes in the mean and variance of speeds occur. The results indicate that NDS data can be used to improve the car-following models implemented in current WZ planning and simulation tools by considering different headway distributions based on driver characteristics and their speed profiles while traversing the entire WZ.
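The abstract does not specify which change point method was used; as a generic illustration of detecting a shift in mean speed, here is a single-change-point search that picks the split minimizing the within-segment sum of squared errors.

```python
import numpy as np

def mean_change_point(x):
    """Single change point in the mean: the split minimizing within-segment SSE."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    best_k, best_cost = None, np.inf
    for k in range(2, n - 2):
        left, right = x[:k], x[k:]
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Synthetic speed trace: 65 mph upstream, dropping to 45 mph inside the work zone
rng = np.random.default_rng(0)
speeds = np.concatenate([rng.normal(65, 1, 200), rng.normal(45, 1, 200)])
k = mean_change_point(speeds)
```

Real change point analyses usually extend this to multiple change points and to variance shifts; the single-split search is only the core idea.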


AI ◽  
2021 ◽  
Vol 2 (1) ◽  
pp. 48-70
Author(s):  
Wei Ming Tan ◽  
T. Hui Teo

Prognostic techniques attempt to predict the Remaining Useful Life (RUL) of a subsystem or a component. Such techniques often use sensor data that are periodically measured and recorded as a time series. These multivariate data sets form complex, non-linear inter-dependencies across recorded time steps and between sensors. Many existing prognostic algorithms have begun to explore Deep Neural Networks (DNNs) and their effectiveness in the field. Although Deep Learning (DL) techniques outperform traditional prognostic algorithms, the networks are generally complex to deploy and train. This paper proposes a Multi-variable Time Series (MTS) focused approach to prognostics that implements a lightweight Convolutional Neural Network (CNN) with an attention mechanism. The convolution filters extract abstract temporal patterns from the multiple time series, while the attention mechanism reviews the information across the time axis and selects the relevant parts. The results suggest that the proposed method not only achieves superior RUL estimation accuracy but also trains many times faster than previously reported works. The network is also well suited to deployment on a lightweight hardware platform, being both more compact and more efficient in resource-restricted environments.
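The CNN-plus-attention idea can be sketched in plain NumPy: valid 1-D convolutions extract temporal feature maps from a multivariate series, and a softmax attention over the time axis pools them into a single vector. The shapes, the ReLU, and the dot-product scoring are assumptions for illustration; the paper's actual architecture is not given in the abstract.

```python
import numpy as np

def conv1d_valid(x, kernels):
    """x: (T, C) multivariate series; kernels: (F, K, C). Returns (T-K+1, F) maps."""
    T, C = x.shape
    F, K, _ = kernels.shape
    out = np.empty((T - K + 1, F))
    for t in range(T - K + 1):
        window = x[t:t + K]                        # (K, C) slice of the series
        out[t] = np.tensordot(kernels, window, axes=([1, 2], [0, 1]))
    return np.maximum(out, 0.0)                    # ReLU non-linearity

def attention_pool(feats, w):
    """Score each time step, softmax across the time axis, return weighted sum."""
    scores = feats @ w
    scores -= scores.max()                         # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()  # attention weights over time
    return alpha @ feats, alpha

rng = np.random.default_rng(1)
x = rng.normal(size=(30, 3))                       # 30 time steps, 3 sensors
kernels = rng.normal(size=(4, 5, 3))               # 4 filters of width 5
feats = conv1d_valid(x, kernels)
pooled, alpha = attention_pool(feats, rng.normal(size=4))
```

In a trained network the kernels and the scoring vector would be learned; here they are random, which is enough to show the data flow.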


Author(s):  
Meenakshi Narayan ◽  
Ann Majewicz Fey

Abstract Sensor data predictions could significantly improve the accuracy and effectiveness of modern control systems; however, existing machine learning and advanced statistical techniques for forecasting time series data require significant computational resources, which is not ideal for real-time applications. In this paper, we propose a novel forecasting technique called Compact Form Dynamic Linearization Model-Free Prediction (CFDL-MFP), derived from the existing model-free adaptive control framework. This approach enables near real-time forecasts of seconds’ worth of time-series data due to its basis as an optimal control problem. The performance of the CFDL-MFP algorithm was evaluated on four real datasets: force sensor readings from a surgical needle, ECG measurements of heart rate, atmospheric temperature, and Nile water level recordings. On average, the forecast accuracy of CFDL-MFP was 28% better than that of the benchmark Autoregressive Integrated Moving Average (ARIMA) algorithm. The maximum computation time of CFDL-MFP was 49.1 ms, 170 times faster than ARIMA. Forecasts were best for deterministic data patterns, such as the ECG data, with a minimum average root mean squared error of 0.2±0.2.
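CFDL-MFP itself is the authors' method and its exact formulation is not in the abstract. As a loose illustration of the compact-form dynamic-linearization idea, the sketch below recursively estimates a scalar pseudo partial derivative phi relating successive signal increments, then rolls that recursion forward to forecast. The update law, the gains eta and mu, and the use of the previous increment as the pseudo input are all assumptions, not the paper's algorithm.

```python
import numpy as np

def cfdl_forecast(y, horizon, eta=0.5, mu=1.0):
    """Estimate phi in the local model  Delta y(k) ~ phi * Delta y(k-1)
    with a projection-style recursive update, then extrapolate `horizon` steps."""
    y = np.asarray(y, dtype=float)
    phi = 0.0
    for k in range(2, len(y)):
        d_prev = y[k - 1] - y[k - 2]
        d_curr = y[k] - y[k - 1]
        # gradient-style correction, damped by mu to avoid division blow-up
        phi += eta * d_prev / (mu + d_prev ** 2) * (d_curr - phi * d_prev)
    preds = []
    last, d = y[-1], y[-1] - y[-2]
    for _ in range(horizon):
        d = phi * d          # propagate the increment through the learned phi
        last = last + d
        preds.append(last)
    return np.array(preds)

# On a linear ramp, phi converges to 1 and the forecast continues the ramp
preds = cfdl_forecast(np.arange(100, dtype=float), horizon=5)
```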


Author(s):  
Vincenzo Punzo ◽  
Domenico Josto Formisano ◽  
Vincenzo Torrieri

Difficulty in obtaining accurate car-following data has traditionally been regarded as a considerable drawback in understanding real phenomena and has affected the development and validation of traffic microsimulation models. Recent advancements in digital technology have opened up new horizons in the conduct of research in this field. Despite the high degree of precision of these techniques, estimation of time series data of speeds and accelerations from positions with the required accuracy is still a demanding task. The core of the problem is filtering the noisy trajectory data for each vehicle without altering platoon data consistency; i.e., the speeds and accelerations of following vehicles must be estimated so that the resulting intervehicle spacings are equal to the real ones. Otherwise, negative spacings can easily occur. The task was achieved in this study by considering the vehicles of a platoon as a single dynamic system, reducing several estimation problems to one consistent problem. This was accomplished by means of a nonstationary Kalman filter that used measurements and time-varying error information from differential Global Positioning System devices. The Kalman filter was fruitfully applied here to estimation of the speed of the whole platoon by including intervehicle spacings as additional measurements (assumed to be reference measurements). The closed solution of an optimization problem that ensures strict observation of the true intervehicle spacings concludes the estimation process. The stationary counterpart of the devised filter is suitable for application to position data regardless of the data collection technique used, e.g., video cameras.
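The paper's filter is nonstationary and operates on a whole platoon with DGPS error information; as a much simpler single-vehicle illustration of the same machinery, here is a constant-velocity Kalman filter that estimates speed from noisy position measurements. The process/measurement noise values are illustrative.

```python
import numpy as np

def kalman_speed(positions, dt, q=0.5, r=1.0):
    """Constant-velocity Kalman filter: state [position, speed],
    position-only measurements, returns the filtered speed estimates."""
    F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition
    H = np.array([[1.0, 0.0]])                     # we measure position only
    Q = q * np.array([[dt ** 3 / 3, dt ** 2 / 2],  # white-acceleration process noise
                      [dt ** 2 / 2, dt]])
    R = np.array([[r]])                            # measurement noise variance
    x = np.array([positions[0], 0.0])
    P = np.eye(2) * 10.0                           # uncertain initial state
    speeds = []
    for z in positions:
        x = F @ x                                  # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                        # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
        x = x + K @ (np.array([z]) - H @ x)        # update with the measurement
        P = (np.eye(2) - K @ H) @ P
        speeds.append(x[1])
    return np.array(speeds)

# Vehicle at a constant 20 m/s, positions corrupted by 0.5 m noise
rng = np.random.default_rng(3)
dt = 0.1
measured = 20.0 * np.arange(200) * dt + rng.normal(0, 0.5, 200)
speeds = kalman_speed(measured, dt)
```

The platoon version in the paper stacks all vehicles into one state vector and adds the intervehicle spacings as extra measurement rows in H.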


2020 ◽  
Vol 36 (19) ◽  
pp. 4885-4893 ◽  
Author(s):  
Baoshan Ma ◽  
Mingkun Fang ◽  
Xiangtian Jiao

Abstract Motivation Gene regulatory networks (GRNs) capture the regulatory interactions between genes, resulting from the fundamental biological processes of transcription and translation. In some cases, the topology of GRNs is not known and has to be inferred from gene expression data. Most existing GRN reconstruction algorithms are applied either to time-series data or to steady-state data. Although time-series data include more information about system dynamics, steady-state data imply stability of the underlying regulatory networks. Results In this article, we propose a method for inferring GRNs from time-series and steady-state data jointly. We make use of a non-linear ordinary differential equations framework to model dynamic gene regulation and an importance measurement strategy to infer all putative regulatory links efficiently. The proposed method is evaluated extensively on the artificial DREAM4 dataset and two real gene expression datasets of yeast and Escherichia coli. Based on public benchmark datasets, the proposed method outperforms other popular inference algorithms in terms of overall score. By comparing the performance on datasets of different scales, the results show that our method maintains good robustness and accuracy at low computational complexity. Availability and implementation The proposed method is written in the Python language, and is available at: https://github.com/lab319/GRNs_nonlinear_ODEs Supplementary information Supplementary data are available at Bioinformatics online.
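The ODE-plus-importance idea can be caricatured with a linear stand-in: estimate each gene's time derivative by finite differences, regress it on all expression levels, and rank putative regulators by coefficient magnitude. The linear model and the |coefficient| importance measure are simplifying assumptions; the paper uses a non-linear ODE framework.

```python
import numpy as np

def grn_importance(X, t):
    """X: (T, G) expression of G genes over time points t.
    Returns W where W[g, r] is the importance of regulator r for gene g."""
    dXdt = np.gradient(X, t, axis=0)             # finite-difference derivatives
    A = np.column_stack([X, np.ones(len(t))])    # regressors plus intercept
    G = X.shape[1]
    W = np.zeros((G, G))
    for g in range(G):
        coef, *_ = np.linalg.lstsq(A, dXdt[:, g], rcond=None)
        W[g] = np.abs(coef[:-1])                 # drop the intercept term
    return W

# Two-gene toy system: dx1/dt = -x1, dx2/dt = 2*x1 - x2 (closed-form solution)
t = np.linspace(0, 5, 500)
X = np.column_stack([np.exp(-t), 2 * t * np.exp(-t)])
W = grn_importance(X, t)
```

On this toy system the recovered importances mirror the true ODE coefficients: gene 1 strongly drives gene 2, with no influence in the reverse direction.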


2022 ◽  
Vol 3 (1) ◽  
pp. 1-26
Author(s):  
Omid Hajihassani ◽  
Omid Ardakanian ◽  
Hamzeh Khazaei

The abundance of data collected by sensors in Internet of Things devices and the success of deep neural networks in uncovering hidden patterns in time series data have led to mounting privacy concerns. This is because private and sensitive information can be potentially learned from sensor data by applications that have access to this data. In this article, we aim to examine the tradeoff between utility and privacy loss by learning low-dimensional representations that are useful for data obfuscation. We propose deterministic and probabilistic transformations in the latent space of a variational autoencoder to synthesize time series data such that intrusive inferences are prevented while desired inferences can still be made with sufficient accuracy. In the deterministic case, we use a linear transformation to move the representation of input data in the latent space such that the reconstructed data is likely to have the same public attribute but a different private attribute than the original input data. In the probabilistic case, we apply the linear transformation to the latent representation of input data with some probability. We compare our technique with autoencoder-based anonymization techniques and additionally show that it can anonymize data in real time on resource-constrained edge devices.
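The latent-space transformation described above can be sketched independently of the full variational autoencoder: shift a latent code along the direction between two private-attribute cluster centroids, applying the shift with some probability. The centroid-difference direction and the step size are illustrative assumptions; in the paper, the transformation operates in the latent space learned by the VAE.

```python
import numpy as np

def obfuscate_latent(z, mu_private_src, mu_private_tgt, p=1.0, rng=None):
    """Move a latent code toward the target private-attribute cluster
    with probability p (p=1 recovers the deterministic variant)."""
    rng = rng or np.random.default_rng()
    direction = mu_private_tgt - mu_private_src  # attribute-separating direction
    if rng.random() < p:
        return z + direction
    return z

# Toy 2-D latent space: private attribute separates two centroids
mu_src = np.array([-2.0, 0.0])
mu_tgt = np.array([2.0, 0.0])
z = np.array([-1.8, 0.3])
z_new = obfuscate_latent(z, mu_src, mu_tgt, p=1.0, rng=np.random.default_rng(4))
```

Decoding z_new would then produce a sample that is likely to keep the public attribute (which varies along other latent directions) while flipping the private one.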


Sensors ◽  
2021 ◽  
Vol 21 (18) ◽  
pp. 6004
Author(s):  
Joseph Taylor ◽  
Elmer Ccopa-Rivera ◽  
Solomon Kim ◽  
Reise Campbell ◽  
Rodney Summerscales ◽  
...  

Machine learning (ML) can be an appropriate approach to overcoming common problems associated with sensors for low-cost, point-of-care diagnostics, such as non-linearity, multidimensionality, sensor-to-sensor variations, the presence of anomalies, and ambiguity in key features. This study proposes a novel approach based on ML algorithms (neural nets and Gaussian Process Regression, among others) to model the electrochemiluminescence (ECL) quenching mechanism of the [Ru(bpy)3]2+/TPrA system by phenolic compounds, thus allowing their detection and quantification. The relationships between the concentration of phenolic compounds and their effect on the ECL intensity and current data, measured using a mobile phone-based ECL sensor, are investigated. ML regression with a tri-layer neural net using minimally processed time series data showed detection performance better than or comparable to that obtained using extracted key features without extra preprocessing. Combining multimodal characteristics improved performance by 80% with multilayer neural net algorithms relative to single-feature regression analysis. The results demonstrate that ML can provide a robust analysis framework for sensor data with noise and variability, and that ML strategies can play a crucial role in chemical and biosensor data analysis by maximizing all of the obtained information while accommodating nonlinearity and sensor-to-sensor variations.
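The abstract does not give the regression form relating quencher concentration to ECL intensity; purely for illustration, the sketch below assumes a classical Stern-Volmer quenching relation, I0/I = 1 + Ksv·C, fits the quenching constant from calibration data, and inverts it to quantify an unknown sample. The ML models in the paper replace this closed-form calibration with learned regressors.

```python
import numpy as np

def fit_quenching(conc, intensity, i0):
    """Least-squares fit of Ksv in the Stern-Volmer relation I0/I = 1 + Ksv*C
    (a slope-through-the-origin fit of I0/I - 1 against C)."""
    y = i0 / intensity - 1.0
    return (conc @ y) / (conc @ conc)

def conc_from_intensity(intensity, i0, ksv):
    """Invert the calibrated relation to recover concentration from intensity."""
    return (i0 / intensity - 1.0) / ksv

# Synthetic calibration: unquenched intensity 1000, true Ksv = 0.05
conc = np.array([10.0, 20.0, 40.0, 80.0])
i0 = 1000.0
intensity = i0 / (1.0 + 0.05 * conc)
ksv = fit_quenching(conc, intensity, i0)
```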


Mathematics ◽  
2021 ◽  
Vol 9 (17) ◽  
pp. 2146
Author(s):  
Mikhail Zymbler ◽  
Elena Ivanova

Currently, big sensor data arise in a wide spectrum of Industry 4.0, Internet of Things, and Smart City applications. In such subject domains, sensors tend to have a high frequency and produce massive time series in a relatively short time interval. The data collected from the sensors are subject to mining in order to make strategic decisions. In this article, we consider the problem of choosing a Time Series Database Management System (TSDBMS) to provide efficient storage and mining of big sensor data. We overview InfluxDB, OpenTSDB, and TimescaleDB, which are among the most popular state-of-the-art TSDBMSs and represent different categories of such systems, namely native systems, add-ons over NoSQL systems, and add-ons over relational DBMSs (RDBMSs), respectively. Our overview shows that, at present, TSDBMSs offer a modest built-in toolset for mining big sensor data. This leads to the use of third-party mining systems and unwanted overhead costs due to exporting data outside the TSDBMS, data conversion, and so on. We propose an approach to managing and mining sensor data inside RDBMSs that exploits the Matrix Profile concept. A Matrix Profile is a data structure that annotates a time series with the index of and the distance to the nearest neighbor of each subsequence of the time series, and it serves as a basis for discovering motifs, anomalies, and other time-series data mining primitives. This approach is implemented as a PostgreSQL extension that allows an application programmer both to compute matrix profiles and mining primitives and to represent them as relational tables. Experimental case studies show that our approach surpasses the above-mentioned out-of-TSDBMS competitors in terms of performance, since sensor data are mined inside the TSDBMS with no significant overhead cost.
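The Matrix Profile definition in the abstract translates directly into a naive reference implementation: for each z-normalized subsequence, find the distance to (and index of) its nearest non-trivial neighbor. Production systems use far faster O(n²) algorithms of the STOMP/SCRIMP family; the exclusion-zone width below is a common convention, not a detail from the article.

```python
import numpy as np

def matrix_profile(ts, m):
    """Naive z-normalized matrix profile of time series `ts` for window length m.
    Returns (profile, index): nearest-neighbor distance and its position."""
    ts = np.asarray(ts, dtype=float)
    n = len(ts) - m + 1
    subs = np.array([ts[i:i + m] for i in range(n)])
    subs = (subs - subs.mean(axis=1, keepdims=True)) / subs.std(axis=1, keepdims=True)
    profile = np.full(n, np.inf)
    index = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(n):
            if abs(i - j) < m // 2:     # exclusion zone: skip trivial self-matches
                continue
            d = np.linalg.norm(subs[i] - subs[j])
            if d < profile[i]:
                profile[i], index[i] = d, j
    return profile, index

# Planted motif: the same sine burst appears at positions 10 and 60
rng = np.random.default_rng(2)
ts = rng.normal(0, 0.1, 100)
motif = 5.0 * np.sin(np.linspace(0, 4 * np.pi, 20))
ts[10:30] += motif
ts[60:80] += motif
profile, index = matrix_profile(ts, m=20)
```

The minimum of the profile marks one occurrence of the motif, and its index entry points at the other occurrence, 50 samples away.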


When analyzing IoT projects, it is very expensive to buy many sensors, the corresponding processor boards, power supplies, and so on, and the entire setup must be replicated to cater to large topologies. The whole experiment has to be planned at a large scale before analytics can actually be seen working. At a smaller scale, this can be implemented as a simulation program on Linux, where the sensor data are created with a random number generator and scaled appropriately for each type of sensor to mimic representative data. The data are then encrypted before being sent over the network to the edge nodes. At the server, a socket stream continuously awaits sensor data; there, the required sensor data are retrieved and decrypted to give the true time series. This time series is passed to an analytics engine, which calculates trend and cyclicity and is used to train a neural network, and the anomalies found are then deciphered. The multiplicity of nodes can be represented by running several client programs in separate terminals. A simple client-server architecture is thus able to simulate a large IoT infrastructure and perform analytics on a scaled model.
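The generate-scale-encrypt-send-decrypt pipeline described above can be sketched end to end. A socket pair stands in for the client/edge-node connection, the sensor ranges are invented for illustration, and the XOR cipher is a toy stand-in for real encryption (TLS or AES would be used in practice).

```python
import json
import random
import socket

def make_reading(sensor_type):
    """Random value scaled to a representative range for the sensor type."""
    ranges = {"temperature": (15.0, 35.0), "humidity": (20.0, 90.0)}
    lo, hi = ranges[sensor_type]
    return {"type": sensor_type, "value": round(random.uniform(lo, hi), 2)}

def xor_crypt(data: bytes, key: int = 0x5A) -> bytes:
    """Toy symmetric XOR cipher: applying it twice recovers the plaintext."""
    return bytes(b ^ key for b in data)

# A local socket pair stands in for the client -> edge-node network link
client, server = socket.socketpair()
reading = make_reading("temperature")
client.sendall(xor_crypt(json.dumps(reading).encode()))   # client side
received = json.loads(xor_crypt(server.recv(4096)).decode())  # server side
client.close()
server.close()
```

Running several such clients in separate terminals against one listening server reproduces the multi-node topology the paragraph describes.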


2021 ◽  
Vol 13 (1) ◽  
Author(s):  
Mahbubul Alam ◽  
Laleh Jalali ◽  
Ahmed Farahat ◽  
Chetan Gupta

Abstract Prognostics aims to predict the degradation of equipment by estimating its remaining useful life (RUL) and/or the failure probability within a specific time horizon. The high demand for equipment prognostics in industry has propelled researchers to develop robust and efficient prognostic techniques. Among data-driven techniques for prognostics, machine learning and deep learning (DL) based techniques, particularly Recurrent Neural Networks (RNNs), have gained significant attention due to their ability to effectively represent the degradation process by modeling dynamic temporal behavior. RNNs are well known for handling sequential data, especially continuous time series data that follow a certain pattern. Such data are usually obtained from sensors attached to the equipment. However, in many scenarios sensor data are not readily available and are often tedious to acquire. Conversely, event data are more common and can easily be obtained from the error logs saved by the equipment and transmitted to a backend for further processing. Nevertheless, performing prognostics with event data is substantially more difficult than with sensor data due to the unique nature of event data. Though event data are sequential, they differ from other common sequential data such as time series and natural language in the following ways: i) unlike time series data, events may appear at any time, i.e., the appearance of events lacks periodicity; ii) unlike natural language, event data do not follow any specific linguistic rule. Additionally, there may be significant variability in the event types appearing within the same sequence. Therefore, this paper proposes an RUL estimation framework to effectively handle such intricate event data. The proposed framework takes the discrete events generated by a piece of equipment (e.g., type, time, etc.) as input and, for each new event, generates an estimate of the remaining operating cycles in the life of a given component. To evaluate the efficacy of the proposed method, we conduct extensive experiments using benchmark datasets such as the CMAPSS data after converting the time-series data in these datasets to sequential event data. The event data conversion is carried out by careful exploration and application of appropriate transformation techniques to the time series. To the best of our knowledge, this is the first time such an event-based RUL estimation problem has been introduced to the community. Furthermore, we propose several deep learning and machine learning based solutions for the event-based RUL estimation problem. Our results suggest that the deep learning models (1D-CNN, LSTM, and multi-head attention) show similar RMSE, MAE, and Score performance. As expected, the XGBoost model achieves lower performance than the deep learning models, since it fails to capture ordering information from the sequence of events.
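The abstract mentions converting time-series data to sequential event data but does not specify the transformation; one simple, commonly used assumption is threshold crossing, which emits a typed, time-stamped event whenever the signal crosses a named level in either direction.

```python
def series_to_events(ts, thresholds):
    """Convert a time series to discrete events: emit (time_index, event_type)
    whenever the signal crosses one of the named threshold levels."""
    events = []
    for t in range(1, len(ts)):
        for name, level in thresholds.items():
            if ts[t - 1] < level <= ts[t]:       # upward crossing
                events.append((t, f"{name}_exceeded"))
            elif ts[t - 1] >= level > ts[t]:     # downward crossing
                events.append((t, f"{name}_cleared"))
    return events

# A rise-and-fall signal crossing a single "warn" level at 1.5
events = series_to_events([0, 1, 2, 3, 2, 1, 0], {"warn": 1.5})
```

The resulting event stream is aperiodic, exactly the property i) above that distinguishes event data from the underlying time series.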

