New Time Series Data Representation ESAX for Financial Applications

Author(s):  
B. Lkhagva ◽  
Yu Suzuki ◽  
K. Kawagoe
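ESAX extends the classic SAX symbolic representation of time series; the paper's own variant is not reproduced here, but as background, standard SAX (z-normalization, piecewise aggregate approximation, then discretization at Gaussian breakpoints) can be sketched as follows — a minimal illustration, not the authors' implementation:

```python
from statistics import NormalDist, fmean, pstdev

def sax(series, n_segments=8, alphabet_size=4):
    """Minimal SAX sketch: z-normalize, average into PAA segments,
    then map each segment mean to a letter via N(0,1) breakpoints.
    Assumes len(series) is divisible by n_segments."""
    mu, sigma = fmean(series), pstdev(series)
    z = [(v - mu) / sigma for v in series]
    seg = len(z) // n_segments
    paa = [fmean(z[i * seg:(i + 1) * seg]) for i in range(n_segments)]
    # Breakpoints splitting the standard normal into equiprobable bins
    nd = NormalDist()
    breakpoints = [nd.inv_cdf(k / alphabet_size) for k in range(1, alphabet_size)]
    return "".join(chr(ord("a") + sum(v > b for b in breakpoints)) for v in paa)
```

A monotonically rising series maps to an alphabetically increasing word such as `"abcd"`, which is what makes symbolic representations convenient for indexing and pattern matching.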


Sensors ◽  
2020 ◽  
Vol 20 (7) ◽  
pp. 1908
Author(s):  
Chao Ma ◽  
Xiaochuan Shi ◽  
Wei Li ◽  
Weiping Zhu

In the past decade, time series data have been generated in many fields at a rapid pace, offering a huge opportunity for mining valuable knowledge. As a typical task of time series mining, Time Series Classification (TSC) has attracted much attention from both researchers and domain experts due to its broad applications, ranging from human activity recognition to smart city governance. In particular, there is an increasing need to classify diverse types of time series data in a timely manner without costly hand-crafted feature engineering. Therefore, in this paper, we propose a framework named Edge4TSC that allows time series to be processed in the edge environment, so that classification results can be returned to end-users instantly. Meanwhile, to eliminate the costly hand-crafted feature engineering process, deep learning techniques are applied for automatic feature extraction, showing competitive or even superior performance compared to state-of-the-art TSC solutions. However, because time series present complex patterns, even deep learning models cannot achieve satisfactory classification accuracy, which motivated us to explore new time series representation methods that help classifiers further improve classification accuracy. In the proposed Edge4TSC framework, a new time series representation method based on a binary distribution tree was designed to address the classification accuracy concern in TSC tasks. Comprehensive experiments on six challenging time series datasets in the edge environment firmly validate the generalization ability of the proposed framework and its classification accuracy improvement, yielding a number of helpful insights.
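The abstract does not detail the binary distribution tree representation, but the classical baseline that TSC methods are routinely compared against — one-nearest-neighbour classification under Euclidean distance — can be sketched as follows (function and variable names are illustrative, not from the paper):

```python
def one_nn_classify(query, train_series, train_labels):
    """Classify a time series by the label of its nearest training series
    under Euclidean distance -- the classic TSC baseline."""
    def euclidean(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    best = min(range(len(train_series)),
               key=lambda i: euclidean(query, train_series[i]))
    return train_labels[best]
```

Deep-learning and representation-based TSC methods are typically judged by how much accuracy they gain over this simple distance-based classifier.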


2021 ◽  
Author(s):  
Atsushi Kamimura ◽  
Tetsuya J. Kobayashi

The regulation and coordination of cell growth and division is a long-standing problem in cell physiology. Recent single-cell measurements using microfluidic devices provide quantitative time-series data on various physiological parameters of cells. To clarify the regulatory laws and the associated relevant parameters such as cell size, mathematical models have been constructed based on physical insight into the phenomena and tested by their ability to reproduce the measured data. However, such conventional model construction by abduction runs a constant risk of overlooking important parameters and factors, especially when complicated time-series data are concerned. In addition, comparing a model with data for validation is not trivial when working with noisy multi-dimensional data. Using cell size control as an example, we demonstrate that this problem can be addressed by employing a neural network (NN) method originally developed for history-dependent temporal point processes. The NN can effectively segregate history-dependent deterministic factors and unexplainable noise from given data by flexibly representing the functional forms of the deterministic relation and the noise distribution. With this method, we represent and infer the birth and division cell size distributions of bacteria and fission yeast. Known size control mechanisms such as the adder model are revealed as the conditional dependence of the size distributions on history, and their Markovian properties are shown to be sufficient. In addition, the inferred NN model provides a better data representation for abductive model searching than descriptive statistics. Thus, the NN method can serve as a powerful tool for processing noisy data to uncover hidden dynamic laws.
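The adder model mentioned above posits that a cell adds a roughly constant size increment between birth and division, independent of its birth size. A minimal simulation of that rule (parameter names and values are illustrative, not the authors' inferred model) looks like:

```python
import random

def simulate_adder(n_cells, delta=1.0, noise=0.1, seed=0):
    """Adder model sketch: each cell adds ~delta in size between birth and
    division, independent of its birth size; the daughter starts at half
    the division size (symmetric division)."""
    rng = random.Random(seed)
    birth, division = [], []
    s = 1.0  # initial birth size (arbitrary units)
    for _ in range(n_cells):
        added = rng.gauss(delta, noise)  # noisy but size-independent increment
        birth.append(s)
        division.append(s + added)
        s = (s + added) / 2.0
    return birth, division
```

Under this rule the mean added size equals `delta` regardless of birth size, which is exactly the conditional (in)dependence structure the NN method is used to detect in the measured distributions.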


Author(s):  
Relita Buaton ◽  
Muhammad Zarlis ◽  
Herman Mawengkang ◽  
Syahril Effendi

Computer systems are developing very rapidly in generating and collecting data. This can be seen in computerized systems that continuously accumulate transaction data in the business world and in government, as well as in the ability of hardware to store data at large capacity. Accordingly, interest in data mining has grown along with these technological developments, including problems related to computer science and to data representations that are considered effective and efficient solutions. In this study, the technique used is time series data processing; some of the knowledge produced is then ranked, so that priority knowledge with a high level of confidence is obtained. Proximity is measured using the Euclidean distance, the J-measure, and cracking. Based on the cracking results, rules were found and, ranked by their confidence level, can be used for decision support.
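The J-measure referred to above is, under the usual Smyth–Goodman definition, the information content of a rule X → Y weighted by how often its antecedent fires; since the abstract gives no formula, the following is a sketch assuming that standard definition:

```python
from math import log2

def j_measure(p_x, p_y, p_y_given_x):
    """Smyth-Goodman J-measure for a rule X -> Y:
    J = p(X) * [ p(Y|X) log2(p(Y|X)/p(Y))
               + (1-p(Y|X)) log2((1-p(Y|X))/(1-p(Y))) ].
    Assumes 0 < p_y < 1; a zero-probability term contributes 0."""
    def term(p, q):
        return 0.0 if p == 0 else p * log2(p / q)
    return p_x * (term(p_y_given_x, p_y) + term(1 - p_y_given_x, 1 - p_y))
```

A rule whose consequent is no more likely given the antecedent (p(Y|X) = p(Y)) scores zero, so ranking rules by J-measure surfaces the most informative ones first.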

