Enhancing Hierarchical Multiscale Off-Road Mobility Model by Neural Network Surrogate Model

Author(s):  
Guanchu Chen ◽  
Hiroki Yamashita ◽  
Yeefeng Ruan ◽  
Paramsothy Jayakumar ◽  
Jaroslaw Knap ◽  
...  

Abstract: A hierarchical multiscale off-road mobility model is enhanced through the development of an artificial neural network (ANN) surrogate model that captures the complex material behavior of deformable terrain. By exploiting the learning capability of neural networks, the incremental stress–strain relationship of granular terrain is predicted by ANN representative volume elements (RVEs) at various stress and strain states. A systematic training procedure for the ANN RVEs is developed using a virtual tire test rig model that produces training data from discrete-element (DE) RVEs without relying on computationally intensive full vehicle simulations on deformable terrain. The ANN surrogate RVEs are then integrated into the hierarchical multiscale computational framework as a lower-scale model with scalable parallel computing capability, while the macro-scale terrain deformation is described by the finite element (FE) approach. Several numerical examples demonstrate that the off-road vehicle mobility performance predicted by the proposed FE-ANN multiscale terrain model is in good agreement with that of the FE-DE multiscale model while achieving a substantial reduction in computational time. The accuracy and robustness of the ANN RVE for fine-grain sand terrain are discussed for scenarios not considered in the training datasets. Furthermore, a drawbar pull test simulation is presented with the ANN RVE developed with data from the cornering scenario and validated against the full-scale vehicle test data. The numerical results confirm the predictive ability of the FE-ANN multiscale terrain model for off-road mobility simulations.
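The surrogate idea above can be illustrated with a minimal sketch (the softening law, input ranges, and network sizes below are assumptions for illustration, not the paper's DE-RVE model): a small random-feature network is fit to samples of a hypothetical incremental stress–strain law, then queried in place of the expensive RVE.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical incremental law standing in for a DE-RVE response
# (illustrative only): d_sigma = E(eps) * d_eps with a softening modulus.
def rve_response(eps, d_eps):
    return d_eps / (1.0 + 5.0 * eps)

# Sample (strain state, strain increment) -> stress increment pairs,
# as a virtual test rig would generate them.
X = rng.uniform([0.0, -0.01], [0.5, 0.01], size=(2000, 2))
y = rve_response(X[:, 0], X[:, 1])

# Random-feature "ANN": fixed random tanh hidden layer, output weights
# solved by least squares (an extreme-learning-machine-style fit).
Xn = X / np.array([0.5, 0.01])          # normalize both inputs to O(1)
W1 = rng.normal(0.0, 1.0, (2, 64))
b1 = rng.normal(0.0, 1.0, 64)
H = np.tanh(Xn @ W1 + b1)
W2, *_ = np.linalg.lstsq(np.c_[H, np.ones(len(H))], y, rcond=None)

def ann_rve(eps, d_eps):
    # Surrogate query: evaluates in microseconds instead of a DE simulation.
    h = np.tanh(np.array([eps / 0.5, d_eps / 0.01]) @ W1 + b1)
    return float(np.r_[h, 1.0] @ W2)
```

In the multiscale setting, `ann_rve` would be called at every macro-scale FE integration point, which is where the computational savings over the DE-RVE accumulate.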

Materials ◽  
2021 ◽  
Vol 14 (11) ◽  
pp. 2875
Author(s):  
Xiaoxin Lu ◽  
Julien Yvonnet ◽  
Leonidas Papadopoulos ◽  
Ioannis Kalogeris ◽  
Vissarion Papadopoulos

A stochastic data-driven multilevel finite-element (FE2) method is introduced for random nonlinear multiscale calculations. A hybrid neural-network–interpolation (NN–I) scheme is proposed to construct a surrogate model of the macroscopic nonlinear constitutive law from representative-volume-element calculations, whose results are used as input data. An FE2 method is then developed in which the nonlinear lower-scale calculations are replaced by the NN–I surrogate. The NN–I scheme improves the accuracy of the neural-network surrogate model when insufficient data are available. The achieved reduction in computational time, several orders of magnitude compared with direct FE2, makes it possible to use the machine-learning surrogate for Monte Carlo simulations of nonlinear heterogeneous structures, for propagating uncertainties in this context, and for identifying probabilistic models of quantities of interest at the macroscale. Applications to nonlinear electric conduction in graphene–polymer composites are presented.
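The surrogate-accelerated Monte Carlo idea can be sketched in miniature (the model function, input distribution, and polynomial surrogate below are assumptions for the sketch, not the paper's NN–I scheme): a cheap approximation is fit from a handful of "expensive" evaluations and then used to propagate input uncertainty.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for an expensive micro-scale computation (illustrative only).
def expensive_model(x):
    return x ** 2

# Fit a cheap surrogate from only 20 expensive evaluations.
x_train = np.linspace(0.0, 1.0, 20)
coeffs = np.polyfit(x_train, expensive_model(x_train), deg=3)
surrogate = np.poly1d(coeffs)

# Monte Carlo propagation of a uniform input through the surrogate;
# 100,000 surrogate calls cost far less than 100,000 expensive calls.
samples = rng.uniform(0.0, 1.0, 100_000)
qoi_mean = surrogate(samples).mean()   # analytic mean of x^2 on [0,1] is 1/3
```

The same pattern scales to the paper's setting: the expensive call is a nonlinear RVE solve, and the statistics of macroscale quantities of interest are estimated from many cheap surrogate evaluations.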


2021 ◽  
Vol 11 (9) ◽  
pp. 4243
Author(s):  
Chieh-Yuan Tsai ◽  
Yi-Fan Chiu ◽  
Yu-Jen Chen

Nowadays, recommendation systems have been successfully adopted in various online services such as e-commerce, news, and social media. Recommenders provide users a convenient and efficient way to find items of interest and increase service providers' revenue. However, many recommenders suffer from the cold start (CS) problem, where only a small number of ratings are available for some new items. To overcome these difficulties, this research proposes a two-stage neural network-based CS item recommendation system. The proposed system includes two major components: the denoising autoencoder (DAE)-based CS item rating (DACR) generator and the neural network-based collaborative filtering (NNCF) predictor. In the DACR generator, a textual description of an item is used as auxiliary content information to represent the item. The DAE is then applied to extract content features from the high-dimensional textual vectors. With the compact content features, a CS item's rating can be efficiently derived based on the ratings of similar non-CS items. Second, the NNCF predictor is developed to predict the ratings in the sparse user–item matrix. In the predictor, both sparse binary user and item vectors are projected to dense latent vectors in the embedding layer. Next, the latent vectors are fed into multilayer perceptron (MLP) layers for user–item matrix learning. Finally, appropriate item suggestions can be accurately obtained. Extensive experiments show that the DAE significantly reduces the computational time for item similarity evaluations while preserving the characteristics of the original features. The experiments also show that the proposed NNCF predictor outperforms several popular recommendation algorithms. We further demonstrate that the proposed CS item recommender can achieve up to 8% MAE improvement compared to adding no CS item rating.
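The embedding-plus-MLP structure of the NNCF predictor can be sketched as a forward pass (all dimensions, weight scales, and the function name are toy assumptions, not the paper's trained model): sparse one-hot user and item IDs are mapped to dense latent vectors, concatenated, and passed through MLP layers to score a user–item pair.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed toy dimensions: 100 users, 50 items, 8-dim embeddings, 16 hidden units.
n_users, n_items, d = 100, 50, 8
user_emb = rng.normal(0, 0.1, (n_users, d))   # embedding layer: one-hot user
item_emb = rng.normal(0, 0.1, (n_items, d))   # and item IDs -> dense vectors
W1 = rng.normal(0, 0.1, (2 * d, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, (16, 1));     b2 = np.zeros(1)

def predict_rating(user_id, item_id):
    # Concatenate the two latent vectors and pass them through the MLP.
    z = np.concatenate([user_emb[user_id], item_emb[item_id]])
    h = np.maximum(z @ W1 + b1, 0.0)                  # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)[0]))   # score in (0, 1)
```

In practice the embeddings and MLP weights are learned jointly from the (DACR-augmented) user–item matrix; the sketch only shows the data flow.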


Energies ◽  
2021 ◽  
Vol 14 (9) ◽  
pp. 2710
Author(s):  
Shivam Barwey ◽  
Venkat Raman

High-fidelity simulations of turbulent flames are computationally expensive when using detailed chemical kinetics. For practical fuels and flow configurations, chemical kinetics can account for the vast majority of the computational time due to the highly non-linear nature of multi-step chemistry mechanisms and the inherent stiffness of combustion chemistry. While reducing this cost has been a key focus area in combustion modeling, the recent growth in graphics processing units (GPUs) that offer very fast arithmetic processing, combined with the development of highly optimized libraries for artificial neural networks used in machine learning, provides a unique pathway for acceleration. The goal of this paper is to recast Arrhenius kinetics as a neural network using matrix-based formulations. Unlike ANNs that rely on data, this formulation does not require training and exactly represents the chemistry mechanism. More specifically, connections between the exact matrix equations for kinetics and traditional artificial neural network layers are used to enable the usage of GPU-optimized linear algebra libraries without the need for modeling. Regarding GPU performance, speedup and saturation behaviors are assessed for several chemical mechanisms of varying complexity. The performance analysis is based on trends for absolute compute times and throughput for the various arithmetic operations encountered during the source term computation. The goals are ultimately to provide insights into how the source term calculations scale with reaction mechanism complexity, which types of reactions benefit most from the GPU formulations, and how to exploit the matrix-based formulations and their sparsity properties to provide optimal speedup for large mechanisms. Overall, the GPU performance for the species source term evaluations reveals many informative trends with regard to the effect of cell number on device saturation and speedup. Most importantly, it is shown that the matrix-based method enables highly efficient GPU performance across the board, achieving near-peak performance in saturated regimes.
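The matrix recasting can be sketched on a toy two-reaction mechanism (the mechanism, rate constants, and concentrations are invented for illustration): taking logarithms turns the mass-action products into a linear layer, an elementwise exponential plays the role of the activation function, and a second linear layer assembles the species source terms. Because the "weights" are the stoichiometric coefficients, no training is involved and the result is exact.

```python
import numpy as np

# Toy mechanism (illustrative): species [A, B]; reactions A -> B and 2B -> A.
nu_f = np.array([[1.0, 0.0],    # reactant stoichiometry (reactions x species)
                 [0.0, 2.0]])
nu_b = np.array([[0.0, 1.0],    # product stoichiometry
                 [1.0, 0.0]])
k = np.array([2.0, 0.5])        # rate constants (Arrhenius values at fixed T)
C = np.array([1.5, 0.8])        # species concentrations

# "Neural network" view: linear layer on log C, exp activation, linear output.
q = np.exp(np.log(k) + nu_f @ np.log(C))   # rates of progress per reaction
wdot = (nu_b - nu_f).T @ q                 # net species source terms

# Conventional per-reaction loop, for comparison with the matrix form.
q_loop = np.array([k[j] * np.prod(C ** nu_f[j]) for j in range(len(k))])
```

On a GPU, the two matrix products map directly onto optimized batched GEMM calls over many cells at once, which is the source of the acceleration discussed above.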


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 2939
Author(s):  
Yong Hong ◽  
Jin Liu ◽  
Zahid Jahangir ◽  
Sheng He ◽  
Qing Zhang

This paper provides an efficient way of addressing the problem of detecting or estimating the 6-Dimensional (6D) pose of objects from an RGB image. A quaternion is used to define an object's three-dimensional pose, but the poses represented by q and by -q are equivalent while the L2 loss between them is very large. We therefore define a new quaternion pose loss function to solve this problem. Based on this, we designed a new convolutional neural network named Q-Net to estimate an object's pose. Because a quaternion output must be a unit vector, a normalization layer is added in Q-Net to hold the pose output on a four-dimensional unit sphere. We propose a new algorithm, called the Bounding Box Equation, to obtain the 3D translation quickly and effectively from 2D bounding boxes. The algorithm uses an entirely new way of assessing the 3D rotation (R) and 3D translation (t) from only one RGB image. This method can upgrade any traditional 2D-box prediction algorithm to a 3D prediction model. We evaluated our model using the LineMod dataset, and experiments have shown that our methodology is more accurate and efficient in terms of L2 loss and computational time.
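The sign ambiguity can be handled by a loss that treats q and -q as the same rotation. A common sign-invariant choice is shown below as an illustration (it is not necessarily the exact loss the paper defines): take the smaller of the two L2 distances, alongside a normalization step mirroring Q-Net's unit-sphere output layer.

```python
import numpy as np

def l2(a, b):
    return float(np.linalg.norm(a - b))

def quat_pose_loss(q_pred, q_true):
    # q and -q encode the same rotation, so score against both signs
    # and keep the smaller distance.
    return min(l2(q_pred, q_true), l2(q_pred, -q_true))

def normalize(q):
    # Counterpart of the normalization layer: project onto the 4D unit sphere.
    return q / np.linalg.norm(q)
```

With this loss, a network that predicts -q for a ground-truth q incurs zero penalty, whereas the plain L2 loss between the two is 2 for unit quaternions.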


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1688
Author(s):  
Luqman Ali ◽  
Fady Alnajjar ◽  
Hamad Al Jassmi ◽  
Munkhjargal Gochoo ◽  
Wasif Khan ◽  
...  

This paper proposes a customized convolutional neural network for crack detection in concrete structures. The proposed method is compared to four existing deep learning methods based on training data size, data heterogeneity, network complexity, and the number of epochs. The performance of the proposed convolutional neural network (CNN) model is evaluated and compared to pretrained networks, i.e., the VGG-16, VGG-19, ResNet-50, and Inception V3 models, on eight datasets of different sizes created from two public datasets. For each model, the evaluation considered computational time, crack localization results, and classification measures, e.g., accuracy, precision, recall, and F1-score. Experimental results demonstrated that training data size and heterogeneity among data samples significantly affect model performance. All models demonstrated promising performance on a limited amount of diverse training data; however, increasing the training data size while reducing its diversity degraded generalization performance and led to overfitting. The proposed customized CNN and VGG-16 models outperformed the other methods in terms of classification, localization, and computational time on a small amount of data, and the results indicate that these two models demonstrate superior crack detection and localization for concrete structures.


1996 ◽  
Vol 07 (05) ◽  
pp. 559-568 ◽  
Author(s):  
J. FERRE-GINE ◽  
R. RALLO ◽  
A. ARENAS ◽  
FRANCE GIRALT

An implementation of a Fuzzy Artmap neural network is used to detect and identify (recognise) structures (patterns) embedded in the velocity field of a turbulent wake behind a circular cylinder. The net is trained to recognise both clockwise and anticlockwise eddies present in the u and v velocity fields at 420 diameters downstream of the cylinder that generates the wake, using a pre-processed part of the recorded velocity data. The phase relationship that exists between the angles of the velocity vectors of an eddy pattern is used to reduce the number of classes contained in the data before the start of the training procedure. The net was made stricter by increasing the vigilance parameter within the interval [0.90, 0.95], and a set of net weights was obtained for each value. Full data files were scanned with the net classifying patterns according to their phase characteristics. With the strictest vigilance parameter, and without the need to impose external initial templates, the net classifies about 27% of the recorded signals as eddy motions. Spanwise distances (in the homogeneous direction of the flow) between the centres of the identified eddies suggest that they form pairs of counter-rotating vortices (double rollers). The number of patterns selected with Fuzzy Artmap is lower than that reported for template matching because the net classifies eddies according to the recirculating pattern present at the core or central region, while template matching extends the region over which the correlation between data and template is performed. In both cases, the topology of the educed patterns is in agreement.
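The vigilance mechanism mentioned above can be sketched as follows (the input and weight vectors are made up for illustration): in Fuzzy ART/ARTMAP, a pattern I is accepted by a category with weights w only if the fuzzy-AND match ratio clears the vigilance parameter ρ, so raising ρ toward 0.95 makes the net stricter and rejects more candidate patterns.

```python
import numpy as np

def vigilance_match(I, w, rho):
    # Fuzzy ART match criterion: |min(I, w)| / |I| >= rho,
    # where |.| is the L1 norm and the elementwise min is the fuzzy AND.
    return float(np.minimum(I, w).sum() / I.sum()) >= rho
```

A pattern that matches a category at ρ = 0.90 can fail the same category at ρ = 0.95, which is how tightening the vigilance interval reduces the fraction of signals classified as eddies.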


Author(s):  
Mahyar Asadi ◽  
Ghazi Alsoruji

Weld sequence optimization, i.e., determining the best (and worst) welding sequence for the workpieces being joined, is a very common problem in welding design. The solution of such a combinatorial problem is limited by available resources. Although fast simulation models exist to support sequencing design, the search still takes long because of the many possible combinations, e.g. millions in a welded structure involving 10 passes. It is not feasible to choose the optimal sequence by evaluating all possible combinations. This paper therefore employs surrogate modeling, which partially explores the design space and constructs an approximation model from the expensive simulation model's solutions for some combinations, mimicking the behavior of the simulation model as closely as possible at a much lower computational time and cost. This surrogate model can then be used to approximate the behavior of the remaining combinations and to find the best (and worst) sequence in terms of distortion. The technique is developed and tested on a simple panel structure with 4 weld passes, but can essentially be generalized to many weld passes. A comparison between the results of the surrogate model and the full transient FEM analysis of all possible combinations shows the accuracy of the algorithm/model.
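A minimal sketch of the workflow (the 4-pass toy distortion model and the linear position–pass features are assumptions for illustration, not the paper's FEM-based model): evaluate an "expensive" model on a subset of sequences, fit a cheap approximation, and use it to rank every sequence.

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
n = 4
passes = list(itertools.permutations(range(n)))    # all 4! = 24 weld sequences

# Toy "expensive" distortion model (illustrative): the distortion contribution
# of each weld pass depends on the position at which it is executed.
W_true = rng.uniform(0.0, 1.0, (n, n))             # W_true[position, pass]
def expensive_distortion(seq):
    return sum(W_true[pos, p] for pos, p in enumerate(seq))

# One-hot (position, pass) features for a linear least-squares surrogate.
def features(seq):
    x = np.zeros(n * n)
    for pos, p in enumerate(seq):
        x[pos * n + p] = 1.0
    return x

train = passes[::2]                                # evaluate only 12 of 24
A = np.array([features(s) for s in train])
b = np.array([expensive_distortion(s) for s in train])
w_fit, *_ = np.linalg.lstsq(A, b, rcond=None)

def surrogate(seq):
    return float(features(seq) @ w_fit)

best_seq = min(passes, key=surrogate)              # cheap ranking of all 24
```

For 10 passes the full enumeration has 10! = 3,628,800 sequences, which is why ranking with a surrogate fit from a small sample becomes attractive.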


2021 ◽  
Author(s):  
Alberto Jose Ramirez ◽  
Jessica Graciela Iriarte

Abstract: Breakdown pressure is the peak pressure attained when fluid is injected into a borehole until fracturing occurs. Hydraulic fracturing operations are conducted above the breakdown pressure, at which the rock formation fractures and allows fluids to flow inside. This value is essential for obtaining formation stress measurements. The objective of this study is to automate the selection of breakdown pressure flags on time-series fracture data using a novel algorithm in lieu of an artificial neural network. The study is based on high-frequency treatment data collected from a cloud-based software platform. The comma-separated values (.csv) files include treating pressure (TP), slurry rate (SR), and bottomhole proppant concentration (BHPC) with defined start and end time flags. Using feature engineering, the model calculates the rates of change of treating pressure (dtp_1st), slurry rate (dsr_1st), and bottomhole proppant concentration (dbhpc_1st). An algorithm isolates the initial area of the treatment plot, before proppant reaches the perforations, where the slurry rate is constant and the pressure increases. The first approach uses a neural network trained with 872 stages to isolate the breakdown pressure area. The expert rule-based approach finds the highest pressure spikes where SR is constant. Then, a refining function finds the maximum treating pressure value and returns its job time as the predicted breakdown pressure flag. Due to the complexity of unconventional reservoirs, the treatment plots may show pressure changes while the slurry rate is constant multiple times during the same stage. The diverse behavior of the breakdown pressure inhibits an artificial neural network's ability to find one "consistent pattern" across the stage. The multiple patterns found through the stage make it difficult to select an area in which to find the breakdown pressure value. The complex model worked moderately well in testing, but its computational time was too high for deployment.
On the other hand, the automation algorithm uses rules to find the breakdown pressure value and its location within the stage. The breakdown flag model was validated with 102 stages and tested with 775 stages, returning the location and values corresponding to the highest pressure point. Results show that 86% of the predicted breakdown pressures are within 65 psi of manually picked values. Breakdown pressure recognition automation is important because it saves time and allows engineers to focus on analytical tasks instead of repetitive data-structuring tasks. Automating this process brings consistency to the data across service providers and basins. In some cases, due to its ability to zoom in, the algorithm recognized breakdown pressures with higher accuracy than subject matter experts. Comparing the results from the two approaches allowed us to conclude that similar or better results can be achieved with lower running times without using complex algorithms.
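The rule-based flagging logic can be sketched as follows (the function name, constant-rate tolerance, and synthetic traces are assumptions for illustration, not the authors' production code): restrict attention to samples where the slurry rate is locally constant, then flag the maximum treating pressure among them.

```python
import numpy as np

def breakdown_flag(time, tp, sr, sr_tol=0.05):
    # Keep only samples where the slurry rate is locally constant
    # (small time derivative), then take the peak treating pressure there.
    dsr = np.abs(np.gradient(sr, time))
    masked_tp = np.where(dsr < sr_tol, tp, -np.inf)
    idx = int(np.argmax(masked_tp))
    return time[idx], tp[idx]

# Synthetic stage (illustrative): pressure ramps to a peak at t = 40 while
# the slurry rate holds at 10, then the rate ramps up after t = 60.
time = np.arange(100.0)
sr = np.concatenate([np.full(60, 10.0), np.linspace(10.0, 20.0, 40)])
tp = np.concatenate([np.linspace(1000.0, 5000.0, 41),
                     np.linspace(4900.0, 3000.0, 59)])
t_flag, p_flag = breakdown_flag(time, tp, sr)
```

Masking out intervals where the rate is changing is what keeps pressure spikes caused by rate adjustments from being mistaken for the breakdown event.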


2018 ◽  
Vol 16 (08) ◽  
pp. 1840001
Author(s):  
Johannes Bausch

The goal of this work is to define a notion of a “quantum neural network” to classify data, which exploits the low-energy spectrum of a local Hamiltonian. As a concrete application, we build a binary classifier, train it on some actual data and then test its performance on a simple classification task. More specifically, we use Microsoft’s quantum simulator, LIQUi|⟩, to construct local Hamiltonians that can encode trained classifier functions in their ground space, and which can be probed by measuring the overlap with test states corresponding to the data to be classified. To obtain such a classifier Hamiltonian, we further propose a training scheme based on quantum annealing which is completely closed-off to the environment and which does not depend on external measurements until the very end, avoiding unnecessary decoherence during the annealing procedure. For a network of size [Formula: see text], the trained network can be stored as a list of [Formula: see text] coupling strengths. We address the question of which interactions are most suitable for a given classification task, and develop a qubit-saving optimization for the training procedure on a simulated annealing device. Furthermore, a small neural network to classify colors into red versus blue is trained and tested, and benchmarked against the annealing parameters.

