A Size-Dependent Cost Function to Solve the Inverse Elasticity Problem

2019 ◽  
Vol 9 (9) ◽  
pp. 1799
Author(s):  
Xinbo Zhao ◽  
Yanli Sun ◽  
Yue Mei

Characterizing the nonhomogeneous elastic property distribution of solids is of great significance in various engineering fields. In this paper, we observe that the solution to the inverse problem obtained with the standard optimization-based inverse approach is sensitive to the size of the inclusions. The standard approach minimizes a cost function containing the L2 norm of the error between the measured and computed displacements. To address this issue, we propose a novel inverse scheme to characterize the nonhomogeneous shear modulus distribution of solids, in which the cost function is modified to depend on the size of the inclusions. A number of simulated experiments demonstrate that the proposed approach is capable of improving the shear modulus contrast in inclusions and reducing the size sensitivity. Furthermore, a theoretical analysis is conducted to validate what we have observed in the simulated experiments. This analysis reveals that the observed behavior is not induced by numerical issues; rather, the size sensitivity is induced by the regularization. The findings of this work encourage the development of new cost functions for the optimization-based inverse approach to improve the quality of the shear modulus reconstruction.
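
The abstract does not reproduce the modified cost function, so the following Python sketch is only a guess at its structure: it assumes the size dependence enters as a rescaling of the L2 data misfit by an inclusion-to-domain size ratio; the regularization term, `inclusion_size`, and `domain_size` are illustrative assumptions rather than the authors' formulation.

```python
import numpy as np

def standard_cost(u_meas, u_comp, mu, alpha):
    """Standard formulation: L2 displacement misfit plus a
    total-variation-style regularization of the shear modulus field."""
    misfit = np.sum((u_meas - u_comp) ** 2)
    return misfit + alpha * np.sum(np.abs(np.diff(mu)))

def size_dependent_cost(u_meas, u_comp, mu, alpha, inclusion_size, domain_size):
    """Hypothetical size-dependent variant: the data misfit is rescaled by
    the inclusion-to-domain size ratio so that small inclusions are not
    overwhelmed by the regularization term at the minimum."""
    size_ratio = inclusion_size / domain_size  # assumed form of the size term
    misfit = np.sum((u_meas - u_comp) ** 2) / size_ratio
    return misfit + alpha * np.sum(np.abs(np.diff(mu)))
```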

2021 ◽  
Author(s):  
Mohammad Shushtari ◽  
Rezvan Nasiri ◽  
Arash Arami

This paper presents a novel method for reference trajectory adaptation in lower limb rehabilitation exoskeletons during walking. The adaptation rule is derived from a cost function that penalizes both the interaction force and the trajectory modification. By adding the trajectory modification term to the cost function, we restrict the bounds of the reference trajectory adaptation according to the patient's motor capacity. The performance of the proposed adaptation method is studied analytically in terms of convergence and optimality. We also developed a realistic dynamic walking simulator and used it to analyze the performance of the presented method. The proposed trajectory adaptation technique guarantees convergence to a stable, reliable, and rhythmic reference trajectory with no prior knowledge of the human's intended motion. Our simulations demonstrate that the exoskeleton trajectories converge to those of simulated healthy subjects, while adapting less to the trajectories of patients with reduced motor capacity (less reliable trajectories). Furthermore, the adaptation improved gait stability and spatiotemporal parameters such as step-time symmetry and minimum toe-off clearance in all subjects. The presented mathematical analysis and simulation results show the applicability and effectiveness of the proposed method and its potential for trajectory adaptation in lower limb rehabilitation exoskeletons.
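
A minimal Python sketch of one plausible form of such an adaptation rule, assuming a quadratic cost in the interaction torque and the trajectory modification; the gains `eta_f` and `eta_m` and the torque signal are illustrative, not the paper's.

```python
import numpy as np

def adapt_reference(theta_ref, theta_nominal, tau_int, eta_f=0.05, eta_m=0.02):
    """One step of a hypothetical adaptation rule derived from
    J = |tau_int|^2 + lambda * |theta_ref - theta_nominal|^2.
    Moving the reference along the measured interaction torque reduces the
    force term; the second term pulls the reference back toward the nominal
    trajectory, bounding the adaptation by the patient's motor capacity."""
    return theta_ref + eta_f * tau_int - eta_m * (theta_ref - theta_nominal)

# Example: joint-space reference over one gait cycle (all values illustrative)
theta_nominal = np.sin(np.linspace(0.0, 2.0 * np.pi, 100))  # nominal joint angle
tau_int = 0.3 * np.cos(np.linspace(0.0, 2.0 * np.pi, 100))  # measured torque
theta_ref = adapt_reference(theta_nominal.copy(), theta_nominal, tau_int)
```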


1997 ◽  
Vol 272 (6) ◽  
pp. C2037-C2048 ◽  
Author(s):  
X. Yu ◽  
N. M. Alpert ◽  
E. D. Lewandowski

Measurements of oxidative metabolism in the heart from dynamic ¹³C nuclear magnetic resonance (NMR) spectroscopy rely on ¹³C turnover in the NMR-detectable glutamate pool. A kinetic model was developed for the analysis of isotope turnover to determine tricarboxylic acid cycle flux (VTCA) and the interconversion rate between α-ketoglutarate and glutamate (F1) by fitting the model to NMR data of glutamate enrichment. The results of data fitting are highly reproducible when the noise level is within 10%, making this model applicable to single or grouped experiments. The values for VTCA and F1 were unchanged whether obtained from least-squares fitting of the model to mean experimental enrichment data with standard deviations in the cost function (VTCA = 10.52 μmol·min⁻¹·g dry wt⁻¹, F1 = 10.67 μmol·min⁻¹·g dry wt⁻¹) or to the individual enrichment values for each heart with the NMR noise level in the cost function (VTCA = 10.67 μmol·min⁻¹·g dry wt⁻¹, F1 = 10.18 μmol·min⁻¹·g dry wt⁻¹). Computer simulation and theoretical analysis indicate that glutamate enrichment kinetics are insensitive to the fractional enrichment of acetyl-CoA and to changes in small intermediate pools (<1 μmol/g dry wt). Therefore, high-resolution NMR analysis of tissue extracts and biochemical assays for intermediates at low concentrations are unnecessary. However, a high correlation between VTCA and F1 exists, as anticipated from competition for α-ketoglutarate, which indicates the utility of introducing independent experimental constraints into the data fitting for accurate quantification.
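
As a rough illustration of this kind of noise-weighted least-squares fitting, the sketch below fits a toy lumped-rate stand-in for the enrichment model with scipy; the rate expression and pool size are assumptions, not the paper's kinetic model.

```python
import numpy as np
from scipy.optimize import least_squares

def glutamate_enrichment(t, v_tca, f1, pool=10.0):
    """Toy stand-in for the enrichment model: a single effective rate
    combining TCA cycle flux (v_tca) and the alpha-ketoglutarate/glutamate
    interconversion rate (f1). The lumped form is an assumption."""
    k_eff = (v_tca * f1) / (pool * (v_tca + f1))
    return 1.0 - np.exp(-k_eff * t)

def residuals(params, t, data, sigma):
    """Misfit weighted by the measurement standard deviations, mirroring
    the use of noise levels in the cost function."""
    v_tca, f1 = params
    return (glutamate_enrichment(t, v_tca, f1) - data) / sigma

t = np.linspace(0.0, 30.0, 31)  # minutes
data = glutamate_enrichment(t, 10.5, 10.7) \
    + 0.02 * np.random.default_rng(0).standard_normal(t.size)
fit = least_squares(residuals, x0=[5.0, 5.0],
                    args=(t, data, 0.02 * np.ones(t.size)))
# Best-fit (v_tca, f1); in this toy model only their lumped combination is
# well constrained, echoing the reported high VTCA-F1 correlation.
print(fit.x)
```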


Author(s):  
Tuan Hoang ◽  
Thanh-Toan Do ◽  
Tam V. Nguyen ◽  
Ngai-Man Cheung

This paper proposes two novel techniques to train deep convolutional neural networks with low bit-width weights and activations. First, to obtain low bit-width weights, most existing methods perform quantization on the full-precision network weights. However, this approach results in a mismatch: gradient descent updates the full-precision weights, but not the quantized weights. To address this issue, we propose a novel method that directly updates the quantized weights, with learnable quantization levels, to minimize the cost function using gradient descent. Second, to obtain low bit-width activations, existing works treat all channels equally. However, the activation quantizers can become biased toward a few high-variance channels. To address this issue, we propose a method that takes the quantization errors of individual channels into account, learning activation quantizers that minimize the quantization error in the majority of channels. Experimental results demonstrate that our proposed method achieves state-of-the-art performance on the image classification task, using the AlexNet, ResNet, and MobileNetV2 architectures on the CIFAR-100 and ImageNet datasets.
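
A small numpy sketch of the two ingredients described above, nearest-level quantization against a learnable set of levels and per-channel quantization error; the level values and tensor shapes are illustrative, not the paper's training procedure.

```python
import numpy as np

def quantize_to_levels(w, levels):
    """Map each value to its nearest quantization level. 'levels' is a small
    vector (e.g., 4 values for 2-bit quantization) that would be learnable."""
    idx = np.argmin(np.abs(w[..., None] - levels), axis=-1)
    return levels[idx]

def channelwise_quant_error(activations, levels):
    """Per-channel mean squared quantization error for activations of shape
    (batch, channels). Weighting the quantizer objective by these errors
    keeps a few high-variance channels from dominating the fit."""
    q = quantize_to_levels(activations, levels)
    return np.mean((activations - q) ** 2, axis=0)

rng = np.random.default_rng(0)
acts = rng.standard_normal((64, 8)) * np.linspace(0.5, 4.0, 8)  # later channels have higher variance
levels = np.array([-1.5, -0.5, 0.5, 1.5])
print(channelwise_quant_error(acts, levels))
```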


Geophysics ◽  
1992 ◽  
Vol 57 (11) ◽  
pp. 1428-1434 ◽  
Author(s):  
K. J. Ellefsen ◽  
M. N. Toksöz ◽  
K. M. Tubman ◽  
C. H. Cheng

We have developed a method that estimates a shear modulus of a transversely isotropic formation using the tube wave generated during acoustic logging. (The symmetry axis of the anisotropy is assumed to parallel the borehole.) The inversion, which is implemented in the frequency-wavenumber domain, is based upon a cost function with three terms: a measure of the misfit between the observed and predicted wavenumbers of the tube wave, a measure of the misfit between the current estimate of the modulus and its most likely value, and penalty functions that constrain the estimate to physically acceptable values. The largest contribution to the cost function ordinarily comes from the first term, indicating that the estimate of the modulus depends mostly on the data. Because the cost function has only one minimum, it can be found using standard optimization methods. The minimum is well defined, indicating that the estimate of the modulus is well resolved. Estimates from synthetic data are almost always within 1 percent of the correct value. Estimates from field data collected in a formation with a high clay content are typical of transversely isotropic rocks.
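
The sketch below illustrates a three-term cost of this kind in Python, using the classical low-frequency tube-wave approximation c_T = [ρ_f(1/K_f + 1/μ)]^(-1/2) as a stand-in forward model; the actual inversion uses full frequency-wavenumber predictions, and all numerical values here are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

RHO_F, K_F = 1000.0, 2.25e9  # borehole fluid density (kg/m^3) and bulk modulus (Pa)
OMEGA = 2.0 * np.pi * np.array([1e3, 2e3, 3e3])  # angular frequencies (rad/s)

def tube_wavenumber(mu):
    """Low-frequency tube-wave approximation as a stand-in forward model."""
    c_t = (RHO_F * (1.0 / K_F + 1.0 / mu)) ** -0.5
    return OMEGA / c_t

def cost(mu, k_obs, mu_prior=5e9, sigma_k=1e-4, sigma_mu=2e9,
         mu_min=0.1e9, mu_max=50e9, w=1e6):
    """Three terms: wavenumber misfit, misfit to the most likely modulus,
    and penalties keeping the estimate physically acceptable."""
    data = np.sum(((k_obs - tube_wavenumber(mu)) / sigma_k) ** 2)
    prior = ((mu - mu_prior) / sigma_mu) ** 2
    penalty = w * (max(0.0, mu_min - mu) ** 2 + max(0.0, mu - mu_max) ** 2)
    return data + prior + penalty

k_obs = tube_wavenumber(4.0e9)  # synthetic "observed" wavenumbers
res = minimize_scalar(cost, bounds=(0.2e9, 40e9), args=(k_obs,), method="bounded")
print(res.x / 1e9, "GPa")  # single well-defined minimum near the true 4 GPa
```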


Author(s):  
GUOLI ZHANG ◽  
GENGYIN LI ◽  
HONG XIE ◽  
JIANWEI MA

In this paper we propose a new economic load dispatch model that considers cost function coefficients with uncertainties together with ramp-rate constraints. The uncertain parameters are represented by fuzzy numbers, and the resulting model is called the fuzzy dynamic economic load dispatch (FDELD) model. A novel hybrid evolutionary algorithm, combined with a fuzzy number ranking method, is proposed to solve the FDELD problem. The hybrid evolutionary algorithm combines an evolutionary algorithm, which has strong global search ability, with a quasi-simplex technique, which has better local search capability. The fuzzy number ranking method is used to compare fuzzy cost function values when optimizing the fuzzy cost function. In addition, this paper gives a novel method for handling the constraints directly, making it unnecessary to construct penalty functions, the common way of handling constraints. The experimental study shows that the FDELD model is practical and that the proposed algorithm and techniques solve the FDELD problem efficiently.
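
A minimal Python sketch of the fuzzy-cost ingredients: a quadratic generation cost with triangular fuzzy coefficients, a centroid-based ranking for comparing fuzzy costs, and a direct (penalty-free) ramp-rate clamp. The coefficients and the choice of centroid ranking are illustrative assumptions, not the paper's specific method.

```python
def fuzzy_cost(p, a, b, c):
    """Generation cost with triangular fuzzy coefficients. Each coefficient
    is (low, mode, high); a + b*p + c*p^2 is evaluated component-wise,
    giving a triangular fuzzy total cost."""
    return tuple(a[i] + b[i] * p + c[i] * p ** 2 for i in range(3))

def rank(f):
    """Centroid defuzzification of a triangular fuzzy number (one common
    ranking choice): the mean of its three defining points."""
    return sum(f) / 3.0

def clamp_ramp(p, p_prev, ramp):
    """Direct constraint handling: keep the dispatch inside the ramp-rate
    window instead of penalizing violations."""
    return min(max(p, p_prev - ramp), p_prev + ramp)

# Illustrative coefficients for one unit; rank two candidate dispatch levels
a, b, c = (95.0, 100.0, 105.0), (1.9, 2.0, 2.1), (0.009, 0.010, 0.011)
for p in (80.0, 100.0):
    p_feasible = clamp_ramp(p, p_prev=90.0, ramp=15.0)
    print(p_feasible, rank(fuzzy_cost(p_feasible, a, b, c)))
```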


2021 ◽  
Vol 11 (2) ◽  
pp. 850
Author(s):  
Dokkyun Yi ◽  
Sangmin Ji ◽  
Jieun Park

Artificial intelligence (AI) is achieved by optimizing a cost function constructed from learning data, and changing the parameters of the cost function is the AI learning process (AI learning for convenience). If AI learning is performed well, the value of the cost function reaches its global minimum. For learning to be complete, the parameters should stop changing once the cost function attains the global minimum. One useful optimization method is the momentum method; however, the momentum method has difficulty stopping the parameters when the cost function reaches the global minimum (the non-stop problem). The proposed method is based on the momentum method. To solve the non-stop problem, we incorporate the value of the cost function into the update rule: as learning proceeds, this mechanism reduces the amount of change in the parameters in proportion to the value of the cost function. We verified the method through a proof of convergence and through numerical experiments against existing methods to ensure that learning works well.
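
A sketch of the idea on a one-dimensional quadratic, assuming the cost value enters as a multiplicative damping of the momentum step (one plausible reading of the mechanism; the paper's exact update rule may differ).

```python
def cost(x):   # simple quadratic with global minimum value 0 at x = 2
    return (x - 2.0) ** 2

def grad(x):
    return 2.0 * (x - 2.0)

def momentum_with_cost_damping(x0, lr=0.1, beta=0.9, steps=500):
    """Illustrative variant: the momentum step is scaled by a factor built
    from the current cost value, so the parameter update shrinks to zero as
    the cost approaches its global minimum (assumed to be 0 here), which
    suppresses the overshoot of plain momentum near the optimum."""
    x, v = x0, 0.0
    for _ in range(steps):
        damping = cost(x) / (1.0 + cost(x))  # in [0, 1); vanishes at the minimum
        v = beta * v - lr * grad(x)
        x = x + damping * v
    return x

print(momentum_with_cost_damping(10.0))  # approaches the minimum at x = 2
                                         # (slowly near the optimum, by design)
```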


2021 ◽  
Vol 13 (11) ◽  
pp. 6075
Author(s):  
Ola Lindroos ◽  
Malin Söderlind ◽  
Joel Jensen ◽  
Joakim Hjältén

Translocation of dead wood is a novel method for ecological compensation and restoration that could potentially provide an important new tool for biodiversity conservation. With this method, substrates that normally have long delivery times are created instantly in a compensation area, and ideally many of the associated dead-wood-dwelling organisms are translocated together with the substrates. However, knowledge about the cost efficiency of different methods of ecological compensation is largely lacking. Therefore, the costs of the different parts of a translocation process, and their dependence on some influencing factors, were studied. The observed cost was 465 SEK per translocated log for the actual compensation measure, with an additional 349 SEK/log for work to enable evaluation of the translocation's ecological results. Based on time studies, models were developed to predict the required work time and costs for different transportation distances and load sizes. These models indicated that short extraction and insertion distances for logs should be prioritized over short road transportation distances to minimize costs. They also highlighted a trade-off between costs and the time until a given ecological value is reached in the compensation area. The methodology used can contribute to more cost-efficient operations and, by doing so, increase the use of ecological compensation and the benefit obtained from a given input.
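
As a rough illustration of the kind of time-consumption model described, the Python sketch below uses a linear model in which extraction and insertion times are charged per log while road transport time is shared across a load; all names and coefficients are made up, not the study's estimates.

```python
def cost_per_log(extraction_m, road_km, insertion_m, load_size=4,
                 t_fixed=5.0, t_extract=0.02, t_insert=0.02, t_road=1.5,
                 machine_cost_sek_per_min=15.0):
    """Illustrative time model (minutes): fixed terminal time plus
    distance-dependent terms. Extraction and insertion are per-log costs,
    while road transport is divided by the load size, which is why short
    extraction/insertion distances matter more than road distance."""
    minutes = (t_fixed
               + t_extract * extraction_m
               + t_insert * insertion_m
               + (t_road * road_km) / load_size)
    return minutes * machine_cost_sek_per_min

print(cost_per_log(extraction_m=150, road_km=20, insertion_m=100))  # SEK/log
```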


2020 ◽  
Vol 18 (02) ◽  
pp. 2050006 ◽  
Author(s):  
Alexsandro Oliveira Alexandrino ◽  
Carla Negri Lintzmayer ◽  
Zanoni Dias

One of the main problems in Computational Biology is to find the evolutionary distance between species. In most approaches, this distance involves only rearrangements, which are mutations that alter large pieces of a species' genome. When genomes are represented as permutations, the problem of transforming one genome into another is equivalent to the problem of Sorting Permutations by Rearrangement Operations. The traditional approach considers every rearrangement to be equally likely, and so the goal is to find a minimum sequence of operations that sorts the permutation. However, studies have shown that some rearrangements are more likely to happen than others, so a weighted approach is more realistic. In a weighted approach, the goal is to find a sorting sequence whose cost is minimal. This work introduces a new type of cost function, which is related to the amount of fragmentation caused by a rearrangement. We present results on the lower and upper bounds for the fragmentation-weighted problems and on the relation between the unweighted and the fragmentation-weighted approaches. Our main results are 2-approximation algorithms for five versions of this problem involving reversals and transpositions. We also give bounds on the diameters of these problems and provide an improved approximation factor for simple permutations considering transpositions.
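
The sketch below shows the two rearrangement operations on permutations together with a fragmentation-based weighting, assuming a reversal is charged 2 (it breaks the permutation at two points) and a transposition 3 (three breaks); this weighting is inferred from the description above, not quoted from the paper.

```python
def reversal(perm, i, j):
    """Reverse perm[i..j] (0-indexed, inclusive). A reversal breaks the
    permutation at two points, so a fragmentation-based cost would charge
    it weight 2 (an assumed weighting)."""
    return perm[:i] + perm[i:j + 1][::-1] + perm[j + 1:]

def transposition(perm, i, j, k):
    """Swap the adjacent blocks perm[i..j-1] and perm[j..k-1]. A
    transposition breaks the permutation at three points, hence weight 3."""
    return perm[:i] + perm[j:k] + perm[i:j] + perm[k:]

print(reversal([3, 2, 1], 0, 2))          # [1, 2, 3], fragmentation cost 2
print(transposition([3, 1, 2], 0, 1, 3))  # [1, 2, 3], fragmentation cost 3
```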


2005 ◽  
Vol 133 (6) ◽  
pp. 1710-1726 ◽  
Author(s):  
Milija Zupanski

Abstract A new ensemble-based data assimilation method, named the maximum likelihood ensemble filter (MLEF), is presented. The analysis solution maximizes the likelihood of the posterior probability distribution, obtained by minimization of a cost function that depends on a general nonlinear observation operator. The MLEF belongs to the class of deterministic ensemble filters, since no perturbed observations are employed. As in variational and ensemble data assimilation methods, the cost function is derived within a Gaussian probability density function framework. Like other ensemble data assimilation algorithms, the MLEF produces an estimate of the analysis uncertainty (e.g., the analysis error covariance). In addition to the common use of ensembles in the calculation of the forecast error covariance, the ensembles in the MLEF are exploited to efficiently calculate the Hessian preconditioning and the gradient of the cost function. Because of the superior Hessian preconditioning, 2–3 iterative minimization steps are sufficient. The MLEF method is well suited for use with highly nonlinear observation operators, at a small additional computational cost for the minimization. The consistent treatment of nonlinear observation operators through optimization is an advantage of the MLEF over other ensemble data assimilation algorithms, and its cost is comparable to that of existing ensemble Kalman filter algorithms. The method is directly applicable to most complex forecast models and observation operators. In this paper, the MLEF method is applied to data assimilation with the one-dimensional Korteweg–de Vries–Burgers equation; the tested observation operator is quadratic, in order to make the assimilation problem more challenging. The results illustrate the stability of the MLEF performance as well as the benefit of the cost function minimization; the improvement is noted in terms of the rms error as well as the analysis error covariance. The statistics of innovation vectors (observation minus forecast) also indicate stable performance of the MLEF algorithm. Additional experiments suggest an amplified benefit of targeted observations in ensemble data assimilation.
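
For reference, in the Gaussian framework the cost function being minimized takes the standard variational form (in the usual notation: x_b is the background state, P_f the forecast error covariance, H the nonlinear observation operator, and R the observation error covariance):

```latex
J(\mathbf{x}) = \tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}_b)^{\mathsf{T}}\mathbf{P}_f^{-1}(\mathbf{x}-\mathbf{x}_b)
              + \tfrac{1}{2}\,\left[\mathbf{y}-H(\mathbf{x})\right]^{\mathsf{T}}\mathbf{R}^{-1}\left[\mathbf{y}-H(\mathbf{x})\right]
```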

