Modeling of Moisture Content of Subgrade Materials in High-Speed Railway Using a Deep Learning Method

2021, Vol 2021, pp. 1-9
Author(s): LiLei Chen, Jing Chen, Chao Wang, Yanhua Dai, Rongyan Guo, et al.

Moisture content of subgrade materials is an essential factor affecting frost heave deformation of high-speed railway subgrade in seasonally frozen regions. Modeling and predicting moisture transport play an important role in analyzing subgrade thermal and hydraulic conditions in cold regions. In this study, a long short-term memory (LSTM) model was developed from subgrade material moisture monitored in two sections over one winter-spring cycle from 2015 to 2016. The reliability of the model was verified by comparing the monitoring data with the model results. The results demonstrate that the LSTM model can effectively forecast the dynamic characteristics of subgrade material moisture. The simulated moisture content has a root mean square error ranging from 0.17 to 0.47 in the training phase and from 0.20 to 10.5 in the testing phase. The proposed model provides a novel method for long-term moisture prediction in subgrade materials of high-speed railways in cold regions.
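
As a concrete illustration of this kind of model, the sketch below sets up a minimal LSTM forecaster for a univariate moisture series in Keras. The window length, layer sizes, and training settings are illustrative assumptions, not the configuration reported by the authors.

```python
# Minimal sketch of an LSTM moisture forecaster (illustrative only).
# Window length, layer sizes, and training settings are assumptions,
# not the configuration reported in the paper.
import numpy as np
from tensorflow import keras

def make_windows(series, window=24):
    """Slice a 1-D moisture series into (input window, next value) pairs."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X)[..., np.newaxis], np.array(y)

# Hypothetical hourly volumetric moisture readings from one sensor.
moisture = np.random.rand(1000).astype("float32")
X, y = make_windows(moisture)

model = keras.Sequential([
    keras.layers.Input(shape=(24, 1)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),           # next-step moisture content
])
model.compile(optimizer="adam", loss="mse",
              metrics=[keras.metrics.RootMeanSquaredError()])
model.fit(X, y, epochs=10, batch_size=32, validation_split=0.2)
```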

Sensors, 2019, Vol 19 (4), pp. 861
Author(s): Xiangdong Ran, Zhiguang Shan, Yufei Fang, Chuang Lin

Traffic prediction is based on modeling the complex, non-linear spatiotemporal traffic dynamics in a road network. In recent years, Long Short-Term Memory has been applied to traffic prediction with improved performance. Existing Long Short-Term Memory methods for traffic prediction have two drawbacks: they do not use the departure time through the links for prediction, and they do not model long-term dependence in the time series directly. An attention mechanism is implemented by constructing a neural network according to its task and has recently demonstrated success in a wide range of tasks. In this paper, we propose a Long Short-Term Memory-based method with an attention mechanism for travel time prediction. We present the proposed model in a tree structure: it replaces the standard unrolling of Long Short-Term Memory with a tree structure equipped with an attention mechanism, which constructs the depth of the network and models long-term dependence directly. The attention mechanism operates over the output layer of each Long Short-Term Memory unit. The departure time serves as the aspect of the attention mechanism, which integrates it into the proposed model. We use the AdaGrad method for training. Based on datasets provided by Highways England, the experimental results show that the proposed model achieves better accuracy than Long Short-Term Memory and other baseline methods. A case study suggests that the departure time is effectively exploited through the attention mechanism.
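
A minimal sketch of the core idea, attention over LSTM outputs with the departure time acting as the query ("aspect"), might look as follows in Keras. The layer sizes are assumptions, the paper's tree-structured construction is not reproduced, and only the AdaGrad optimizer choice follows the text.

```python
# Sketch: attention over LSTM outputs, with departure time as the query.
# Sizes are assumptions; the tree-structured variant is not reproduced.
from tensorflow import keras

seq_len, n_feats, d_hid = 12, 4, 64            # assumed sizes

traffic = keras.layers.Input(shape=(seq_len, n_feats))        # recent link measurements
depart = keras.layers.Input(shape=(1,), dtype="int32")        # departure hour, 0-23

h = keras.layers.LSTM(d_hid, return_sequences=True)(traffic)

# Embed the departure time and let it attend over the LSTM output states.
q = keras.layers.Embedding(input_dim=24, output_dim=d_hid)(depart)  # (batch, 1, d_hid)
context = keras.layers.AdditiveAttention()([q, h])                  # (batch, 1, d_hid)
context = keras.layers.Flatten()(context)

travel_time = keras.layers.Dense(1)(context)
model = keras.Model([traffic, depart], travel_time)
model.compile(optimizer=keras.optimizers.Adagrad(learning_rate=0.01), loss="mse")
```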


Author(s): Tao Gui, Qi Zhang, Lujun Zhao, Yaosong Lin, Minlong Peng, et al.

In recent years, long short-term memory (LSTM) has been successfully used to model sequential data of variable length. However, LSTM can still have difficulty capturing long-term dependencies. In this work, we alleviate this problem by introducing a dynamic skip connection, which can learn to directly connect two dependent words. Since there is no dependency information in the training data, we propose a novel reinforcement learning-based method to model the dependency relationships and connect dependent words. The proposed model computes the recurrent transition functions based on the skip connections, which provides a dynamic skipping advantage over RNNs that always process entire sentences sequentially. Our experimental results on three natural language processing tasks demonstrate that the proposed method achieves better performance than existing methods. In the number prediction experiment, the proposed model outperformed LSTM by nearly 20% in accuracy.
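
The forward pass of such a dynamically skipping LSTM can be sketched as below. For clarity, the skip choice here is made greedily by a small scoring network; the paper instead learns this choice with reinforcement learning, which is omitted, and all sizes are illustrative assumptions.

```python
# Sketch of the forward pass of an LSTM with dynamic skip connections.
# The skip choice is greedy here for clarity; the paper's REINFORCE-based
# training of the choice is omitted. Sizes are illustrative assumptions.
import tensorflow as tf
from tensorflow import keras

d_in, d_hid, max_skip = 16, 32, 5
cell = keras.layers.LSTMCell(d_hid)
scorer = keras.layers.Dense(max_skip)          # scores candidate skip distances

def forward(x):
    """x: (batch, time, d_in). Returns the final hidden state."""
    batch, time = tf.shape(x)[0], x.shape[1]
    zeros = tf.zeros((batch, d_hid))
    h_hist, c_hist = [zeros], [zeros]          # state history, index 0 = initial
    for t in range(time):
        # Score candidate predecessor states h_{t-k}, k = 1..max_skip, pick one.
        k_max = min(len(h_hist), max_skip)
        cands = tf.stack(h_hist[-k_max:], axis=1)           # (batch, k_max, d_hid)
        logits = scorer(x[:, t])[:, :k_max]                 # (batch, k_max)
        pick = tf.argmax(logits, axis=-1)                   # greedy skip choice
        h_prev = tf.gather(cands, pick, batch_dims=1)
        c_prev = tf.gather(tf.stack(c_hist[-k_max:], axis=1), pick, batch_dims=1)
        _, (h, c) = cell(x[:, t], [h_prev, c_prev])         # transition from skipped state
        h_hist.append(h)
        c_hist.append(c)
    return h_hist[-1]

out = forward(tf.random.normal((8, 20, d_in)))  # (8, 32)
```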


Author(s): Shirien K A, Neethu George, Surekha Mariam Varghese

Descriptive answer script assessment and rating is an automated framework for evaluating answer scripts correctly. Several classification schemes evaluate a piece of text on the basis of spelling, semantics, and meaning, but many of them are not successful. Models available for rating response scripts include simple Long Short-Term Memory (LSTM) and deep LSTM networks. Here, a Convolutional Neural Network combined with a bidirectional LSTM is considered to refine the result. The model uses convolutional neural networks to learn local word-level information and bidirectional LSTM networks to capture long-term dependency information across contexts, implemented on the TensorFlow and Keras deep learning frameworks. The embedded semantic representation of texts can be used to compute semantic similarities between pieces of text and to grade them based on the similarity score. The experiment used data optimization methods, such as normalization and dropout, and tested the model on Automated Student Evaluation Short Response Scoring, a commonly used public dataset. Compared with existing systems, the proposed model achieves state-of-the-art performance and better accuracy on the test dataset.
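
One plausible reading of this architecture is a siamese Conv1D + BiLSTM encoder whose sentence embeddings are compared by cosine similarity to produce a grade signal. The sketch below assumes that setup; the vocabulary size, sequence length, and layer sizes are placeholders, not the authors' configuration.

```python
# Sketch of a Conv1D + BiLSTM text encoder in a siamese setup, scoring a
# student answer against a reference answer by cosine similarity.
# All sizes are illustrative assumptions.
from tensorflow import keras

vocab, seq_len = 10000, 200

def build_encoder():
    return keras.Sequential([
        keras.layers.Embedding(vocab, 128),
        keras.layers.Conv1D(64, 5, activation="relu", padding="same"),  # local n-gram features
        keras.layers.Bidirectional(keras.layers.LSTM(64)),              # long-range context
        keras.layers.Dropout(0.5),
        keras.layers.Dense(64),
    ])

encoder = build_encoder()                       # shared weights for both texts
student = keras.layers.Input(shape=(seq_len,))
reference = keras.layers.Input(shape=(seq_len,))

# Cosine similarity between the two embeddings serves as the grade signal.
sim = keras.layers.Dot(axes=1, normalize=True)(
    [encoder(student), encoder(reference)])

model = keras.Model([student, reference], sim)
model.compile(optimizer="adam", loss="mse")
```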


Author(s): Shruti Patil, Venkatesh M. Mudaliar, Pooja Kamat, Shilpa Gite

A chatbot is a software application that can reproduce a conversation between people and machines, portraying a specific level of articulation, using natural human language. With the advent of AI, chatbots have developed from minor rule-based models into progressively sophisticated ones. A striking feature of current chatbot frameworks is their capacity to maintain the specific features and context of a conversation, enabling them to interact with humans in real time. The paper presents a detailed review of the models used to handle the learning of long-term dependencies in a chatbot. The paper proposes a novel hybrid Long Short-Term Memory-based ensemble model to retain information in specific situations. The proposed model uses a defined number of Long Short-Term Memory networks working in concert to produce the aggregate prediction class for an input query and conversation. We found that both the LSTM and GRU ensemble methods work well in different dataset environments and that the ensemble technique is effective in chatbot applications.
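
A minimal sketch of such an ensemble, assuming intent classification with averaged softmax outputs as the aggregate prediction, is given below; the member count, class count, and layer sizes are illustrative, not the paper's configuration.

```python
# Sketch of an LSTM ensemble for chatbot intent classification: several
# independently initialized LSTM classifiers are trained separately and
# their softmax outputs averaged into an aggregate prediction.
# Member count, class count, and sizes are assumptions.
import numpy as np
from tensorflow import keras

vocab, seq_len, n_classes, n_members = 5000, 30, 10, 3

def build_member():
    return keras.Sequential([
        keras.layers.Embedding(vocab, 64),
        keras.layers.LSTM(64),
        keras.layers.Dense(n_classes, activation="softmax"),
    ])

members = [build_member() for _ in range(n_members)]
for m in members:
    m.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    # m.fit(X_train, y_train, ...)  # each member trained on its own

def ensemble_predict(x):
    """Average member probabilities, then take the aggregate class."""
    probs = np.mean([m.predict(x, verbose=0) for m in members], axis=0)
    return probs.argmax(axis=-1)
```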


Author(s): Waris Quamer, Praphula Kumar Jain, Arpit Rai, Vijayalakshmi Saravanan, Rajendra Pamula, et al.

Inference has been a central problem for understanding and reasoning in artificial intelligence. In particular, natural language inference is an interesting problem that has attracted the attention of many researchers. Natural language inference aims to predict whether a hypothesis sentence can be inferred from a premise sentence. Most prior works rely on a simplistic association between the premise and hypothesis sentence pairs, which is not sufficient for learning the complex relationships between them. This strategy also fails to fully exploit local context information. Long Short-Term Memory (LSTM) and gated recurrent unit (GRU) networks are not effective at modeling long-term dependencies, and their schemes are far more complex than Convolutional Neural Networks (CNNs). To address this problem of long-term dependency, and to incorporate context for a better sentence representation, this article presents a general Self-Attentive Convolutional Neural Network (SACNN) for natural language inference and sentence-pair modeling tasks. The proposed model uses CNNs to integrate mutual interactions between sentences, and each sentence is considered together with its counterpart when forming its representation. Moreover, the self-attention mechanism helps fully exploit the context semantics and long-term dependencies within a sentence. Experimental results show that SACNN outperforms strong baselines, achieving an accuracy of 89.7% on the Stanford Natural Language Inference (SNLI) dataset.
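
The sketch below illustrates the general shape of a self-attentive CNN for sentence-pair classification: self-attention within each sentence for context, then a convolution over the pair for mutual interactions. All dimensions are assumptions; this is not the authors' exact SACNN.

```python
# Sketch of a self-attentive CNN for sentence-pair classification (NLI).
# Self-attention within each sentence captures context and long-range
# dependencies; Conv1D over the pair models mutual interactions.
# All dimensions are illustrative assumptions.
from tensorflow import keras

vocab, seq_len, d = 20000, 40, 128

def encode(tokens, embed):
    x = embed(tokens)                                      # (batch, seq_len, d)
    return keras.layers.Attention()([x, x])                # self-attention

embed = keras.layers.Embedding(vocab, d)                   # shared embedding
premise = keras.layers.Input(shape=(seq_len,))
hypothesis = keras.layers.Input(shape=(seq_len,))

pair = keras.layers.Concatenate(axis=1)(
    [encode(premise, embed), encode(hypothesis, embed)])   # (batch, 2*seq_len, d)
feats = keras.layers.Conv1D(256, 3, activation="relu")(pair)
feats = keras.layers.GlobalMaxPooling1D()(feats)

label = keras.layers.Dense(3, activation="softmax")(feats) # entail/neutral/contradict
model = keras.Model([premise, hypothesis], label)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```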

