Globally Optimal Force Allocation in Active Mechanisms With Four Frictional Contacts

1996 ◽  
Vol 118 (3) ◽  
pp. 353-359 ◽  
Author(s):  
S. V. Sreenivasan ◽  
K. J. Waldron ◽  
S. Mukherjee

Mechanisms that interact with their environments and possess complete contact force controllability, such as multifingered hands and walking vehicles, are considered in this article. In these systems, the redundancy in actuation can be used to optimize the force distribution characteristics. The resulting optimization problems can be highly nonlinear. Here, the redundancy in actuation is characterized using geometric reasoning, which leads to simplifications in the formulation of the optimization problems. Next, advanced polynomial continuation techniques are adapted to solve for the global optimum of an important nonlinear optimization problem for the case of four frictional contacts. The algorithms developed here are not suited for real-time implementation. However, they can be used in off-line force planning and to develop look-up tables for certain applications. Their outputs can also serve as a baseline for evaluating the effectiveness of sub-optimal schemes.

2021 ◽  
Vol 12 (4) ◽  
pp. 98-116
Author(s):  
Noureddine Boukhari ◽  
Fatima Debbat ◽  
Nicolas Monmarché ◽  
Mohamed Slimane

Evolution strategies (ES) are a family of robust stochastic methods for global optimization and have proven more capable of avoiding local optima than many other optimization methods. Many researchers have investigated different versions of the original evolution strategy, with good results on a variety of optimization problems. However, the convergence rate of the algorithm toward the global optimum remains asymptotic. To accelerate convergence, a hybrid approach is proposed that uses the nonlinear simplex method (Nelder-Mead) together with an adaptive scheme to control when the local search is applied, and the authors demonstrate that this combination yields significantly better convergence. The proposed method has been tested on 15 complex benchmark functions, applied to the bi-objective portfolio optimization problem, and compared with other state-of-the-art techniques. Experimental results show that this hybridization improves performance in terms of both solution quality and convergence.
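As a rough illustration of this kind of ES/Nelder-Mead hybrid (not the authors' exact adaptive scheme), the sketch below periodically refines a simple (1+1)-ES incumbent with SciPy's Nelder-Mead local search; the schedule and all parameter values are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def sphere(x):
    return float(np.sum(x ** 2))

def hybrid_es(f, dim=5, sigma=0.5, iters=200, local_every=25, seed=0):
    """(1+1)-ES with the 1/5th success rule, periodically refined by Nelder-Mead."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, dim)
    fx = f(x)
    successes = 0
    for t in range(1, iters + 1):
        y = x + sigma * rng.standard_normal(dim)       # Gaussian mutation
        fy = f(y)
        if fy < fx:
            x, fx, successes = y, fy, successes + 1
        if t % 10 == 0:                                # 1/5th success rule
            sigma *= 1.5 if successes > 2 else 0.8
            successes = 0
        if t % local_every == 0:                       # simplified stand-in for the
            res = minimize(f, x, method="Nelder-Mead", # paper's adaptive control
                           options={"maxiter": 50, "xatol": 1e-8, "fatol": 1e-8})
            if res.fun < fx:
                x, fx = res.x, res.fun
    return x, fx

best_x, best_f = hybrid_es(sphere)
```

The local search only replaces the incumbent when it improves it, so the hybrid can never converge more slowly than the plain ES on the same sample path.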


2018 ◽  
Vol 8 (11) ◽  
pp. 2080 ◽  
Author(s):  
Enrique Cortés-Toro ◽  
Broderick Crawford ◽  
Juan Gómez-Pulido ◽  
Ricardo Soto ◽  
José Lanza-Gutiérrez

In this article, a novel optimization metaheuristic based on vapour-liquid equilibrium is described for solving highly nonlinear optimization problems in continuous domains. During the search for the optimum, the procedure simulates the vapour-liquid equilibrium state of multiple binary chemical systems. Each decision variable of the optimization problem behaves as the molar fraction of the lightest component of a binary chemical system. The equilibrium state of each system is modified several times, independently and gradually, in two opposite directions and at different rates. The best thermodynamic equilibrium conditions for each system are searched and evaluated to identify the next step toward the solution of the optimization problem. While the search is carried out, the algorithm randomly accepts inadequate solutions. This is done in a controlled way, by setting a minimum acceptance probability that restarts exploration in other areas to prevent becoming trapped in local optima. Moreover, the range of each decision variable is reduced autonomously during the search. The algorithm achieves results competitive with those obtained by other stochastic algorithms on several benchmark functions, which allows us to conclude that our metaheuristic is a promising alternative in the optimization field.
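The full vapour-liquid equilibrium simulation is not reproduced here, but two ingredients the abstract highlights, controlled acceptance of worse solutions via a minimum acceptance probability and autonomous shrinking of each variable's range, can be sketched as follows (all parameter values are illustrative assumptions, not the authors' settings):

```python
import numpy as np

def controlled_search(f, lo, hi, iters=3000, p_min=0.05, shrink=0.998, seed=1):
    """Random search that accepts worse points with probability at least p_min
    and autonomously narrows each variable's range around the incumbent best."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    best = rng.uniform(lo, hi)
    f_best = f(best)
    cur, f_cur = best.copy(), f_best
    for _ in range(iters):
        cand = rng.uniform(lo, hi)
        f_cand = f(cand)
        # controlled acceptance: take the candidate if it improves, or with a
        # minimum probability p_min even if it is worse (to escape local optima)
        if f_cand < f_cur or rng.random() < p_min:
            cur, f_cur = cand, f_cand
        if f_cur < f_best:
            best, f_best = cur.copy(), f_cur
        # shrink each variable's range, keeping the box centred near the best
        span = (hi - lo) * shrink
        center = np.clip(best, lo + span / 2, hi - span / 2)
        lo, hi = center - span / 2, center + span / 2
    return best, f_best

x_best, f_best = controlled_search(lambda v: float(np.sum(v ** 2)), [-5, -5], [5, 5])
```

Only the incumbent `cur` wanders through the occasional worse acceptances; the best-so-far record is kept separately, so the controlled acceptance never loses the best solution found.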


2013 ◽  
Vol 421 ◽  
pp. 507-511 ◽  
Author(s):  
Nurezayana Zainal ◽  
Azlan Mohd Zain ◽  
Nor Haizan Mohamed Radzi ◽  
Amirmudin Udin

The Glowworm Swarm Optimization (GSO) algorithm is a derivative-free meta-heuristic that mimics the glow behavior of glowworms and can efficiently capture multiple maxima of multimodal functions. Nevertheless, it has several weaknesses in locating the global optimum, for instance low calculation accuracy, a tendency to fall into local optima, a low success rate, and slow convergence. This paper reviews the use of GSO as a method of swarm intelligence for solving optimization problems. Recently, the GSO algorithm has been used to find solutions of multimodal function optimization problems in various fields of industry and research, such as science, engineering, networking, and robotics. From this review, we conclude that previous researchers have considered the basic GSO algorithm, GSO with modifications or improvements, and GSO with hybridization to solve optimization problems. However, based on the literature, most researchers applied the basic GSO algorithm rather than its variants.
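A minimal sketch of the basic GSO loop reviewed here (luciferin update, movement toward a probabilistically chosen brighter neighbour, and adaptive local decision range), following the canonical formulation of Krishnanand and Ghose rather than any specific variant surveyed; parameter values are illustrative:

```python
import numpy as np

def gso(f, dim=2, n=30, iters=200, rho=0.4, gamma=0.6, step=0.03,
        r0=2.0, beta=0.08, n_t=5, seed=0):
    """Basic GSO (maximization): luciferin update, probabilistic movement toward
    a brighter neighbour, and adaptive local decision range."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-3, 3, (n, dim))
    luc = np.full(n, 5.0)              # luciferin level of each glowworm
    r = np.full(n, r0)                 # local decision range of each glowworm
    for _ in range(iters):
        luc = (1 - rho) * luc + gamma * np.array([f(xi) for xi in x])
        new_x = x.copy()
        for i in range(n):
            d = np.linalg.norm(x - x[i], axis=1)
            nbrs = np.where((d < r[i]) & (luc > luc[i]))[0]
            if nbrs.size:
                p = luc[nbrs] - luc[i]                 # brighter => more likely
                j = rng.choice(nbrs, p=p / p.sum())
                direction = (x[j] - x[i]) / (np.linalg.norm(x[j] - x[i]) + 1e-12)
                new_x[i] = x[i] + step * direction
            r[i] = np.clip(r[i] + beta * (n_t - nbrs.size), 0.05, r0)
        x = new_x
    best = max(x, key=f)
    return best, f(best)

peak = lambda v: float(-np.sum(v ** 2))    # single smooth maximum at the origin
best_x, best_f = gso(peak)
```

Because each glowworm only reacts to brighter neighbours within its decision range, disjoint sub-swarms can form around different peaks, which is what lets GSO capture several maxima of a multimodal function at once.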


Author(s):  
Bram Demeulenaere ◽  
Jan Swevers ◽  
Joris De Schutter

The designer’s main challenge when counterweight balancing a linkage is to determine the counterweights that realize an optimal trade-off between the dynamic forces of interest. This problem is often formulated as an optimization problem that is generally nonlinear and therefore suffers from local optima. It has been shown earlier, however, that through a proper parametrization of the counterweights, a convex program can be obtained. Convex programs are nonlinear optimization problems whose global optimum is guaranteed to be found with great efficiency. The present paper extends this previous work in two respects: (i) the methodology is generalized from four-bar to planar N-bar (rigid) linkages, and (ii) it is shown that requiring the counterweights to be realizable in practice can be cast as a convex constraint. Numerical results for a Watt six-bar linkage suggest much more balancing potential for six-bar linkages than for four-bar linkages.
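The key point, that a convex formulation has a unique and efficiently computable global optimum, can be illustrated with a toy analogue (this is not the paper's actual parametrization): if sampled dynamic forces depend linearly on hypothetical counterweight parameters, the summed squared force is a convex quadratic minimized exactly by least squares:

```python
import numpy as np

# Hypothetical convex analogue: suppose the sampled dynamic force depends
# linearly on the counterweight parameters p, so the summed squared force
# ||A p + b||^2 is a convex quadratic with one global minimum.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3))   # assumed force sensitivities at 50 crank angles
b = rng.standard_normal(50)        # assumed unbalanced force with no counterweights

p_opt, *_ = np.linalg.lstsq(A, -b, rcond=None)
residual = float(np.linalg.norm(A @ p_opt + b))
```

Because the objective is convex, `p_opt` is globally optimal: no other parameter vector yields a smaller residual, so there is no local-optimum risk to manage.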


Author(s):  
Myriam Verschuure ◽  
Bram Demeulenaere ◽  
Jan Swevers ◽  
Joris De Schutter

This paper focuses on reducing, through counterweight addition, the vibration of an elastically mounted, rigid machine frame that supports a linkage. In order to determine the counterweights that yield a maximal reduction in frame vibration, a nonlinear optimization problem is formulated with the frame kinetic energy as the objective function, parametrized such that a convex optimization problem is obtained. Convex optimization problems are nonlinear optimization problems that have a unique (global) optimum, which can be found with great efficiency. The proposed methodology is successfully applied to improve the results of the benchmark four-bar problem first considered by Kochev and Gurdev. For this example, the balancing is shown to be very robust to drive speed variations and to benefit only marginally from using a coupler counterweight.


Author(s):  
Peter Bamidele Shola

In this paper, a population-based meta-heuristic algorithm for optimization problems in a continuous space is presented. The algorithm, here called cheapest shop seeker, is modeled after a group of shoppers seeking to identify the cheapest shop (among many available) for shopping. The algorithm was tested on many benchmark functions, with the results compared against those from other methods. Despite its simplicity, the algorithm appears to have a better success rate of hitting the global optimum point of a function, and a better rate of convergence (in terms of the number of iterations required to reach the optimum value), for some functions.


Author(s):  
Peter Bamidele Shola ◽  
L B Asaju

Optimization problems are commonly encountered in many areas of endeavor, owing to the need to economize the use of available resources. This paper presents a population-based meta-heuristic algorithm for solving optimization problems in a continuous space. The algorithm combines a form of crossover technique with a position-updating formula based on the instantaneous global best position. The algorithm was tested and compared with standard particle swarm optimization (PSO) on many benchmark functions. The results suggest a better performance of the algorithm over the latter in terms of attaining the global optimum value (at least for the benchmark functions considered) and the rate of convergence in terms of the number of iterations required to reach the optimum values.
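For reference, the standard global-best PSO used as the baseline comparison can be sketched as follows (the proposed crossover variant itself is not specified in enough detail here to reproduce; parameter values are common defaults, not the paper's):

```python
import numpy as np

def pso(f, dim=4, n=20, iters=300, w=0.72, c1=1.49, c2=1.49, seed=0):
    """Standard global-best particle swarm optimization (minimization)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()                                   # personal best positions
    pval = np.array([f(xi) for xi in x])               # personal best values
    g = pbest[pval.argmin()].copy()                    # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.array([f(xi) for xi in x])
        improved = fx < pval
        pbest[improved], pval[improved] = x[improved], fx[improved]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())

g_best, g_val = pso(lambda z: float(np.sum(z ** 2)))
```

Every particle is pulled toward both its own best and the swarm's instantaneous global best `g`; the proposed algorithm's position-updating formula is likewise driven by that instantaneous global best.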


Author(s):  
Adam N. Elmachtoub ◽  
Paul Grigas

Many real-world analytics problems involve two significant challenges: prediction and optimization. Because of the typically complex nature of each challenge, the standard paradigm is predict-then-optimize. By and large, machine learning tools are intended to minimize prediction error and do not account for how the predictions will be used in the downstream optimization problem. In contrast, we propose a new and very general framework, called Smart “Predict, then Optimize” (SPO), which directly leverages the optimization problem structure—that is, its objective and constraints—for designing better prediction models. A key component of our framework is the SPO loss function, which measures the decision error induced by a prediction. Training a prediction model with respect to the SPO loss is computationally challenging, and, thus, we derive, using duality theory, a convex surrogate loss function, which we call the SPO+ loss. Most importantly, we prove that the SPO+ loss is statistically consistent with respect to the SPO loss under mild conditions. Our SPO+ loss function can tractably handle any polyhedral, convex, or even mixed-integer optimization problem with a linear objective. Numerical experiments on shortest-path and portfolio-optimization problems show that the SPO framework can lead to significant improvement under the predict-then-optimize paradigm, in particular, when the prediction model being trained is misspecified. We find that linear models trained using SPO+ loss tend to dominate random-forest algorithms, even when the ground truth is highly nonlinear. This paper was accepted by Yinyu Ye, optimization.
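The SPO+ surrogate loss can be illustrated on the simplest polyhedral problem, choosing one of K items to minimize cost; this toy instance is our own, but the loss formula follows the paper's definition:

```python
import numpy as np

def spo_loss(c_hat, c):
    """Decision regret: extra cost of acting on prediction c_hat instead of c."""
    return float(c[int(np.argmin(c_hat))] - c.min())

def spo_plus_loss(c_hat, c):
    """SPO+ surrogate loss for min_{w in S} c^T w with S = {e_1, ..., e_K}:
    l(c_hat, c) = max_w (c - 2 c_hat)^T w + 2 c_hat^T w*(c) - z*(c)."""
    k_star = int(np.argmin(c))                 # oracle decision w*(c)
    return float(np.max(c - 2 * c_hat) + 2 * c_hat[k_star] - c[k_star])

c_true = np.array([3.0, 1.0, 2.0])
c_bad = np.array([1.0, 3.0, 2.0])              # predicts the wrong cheapest item
```

A perfect prediction incurs zero SPO+ loss, and the SPO+ loss upper-bounds the true decision regret, which is what makes it a usable convex surrogate for training.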


Author(s):  
Po Ting Lin ◽  
Wei-Hao Lu ◽  
Shu-Ping Lin

In the past few years, researchers have begun to investigate the existence of arbitrary uncertainties in design optimization problems. Most traditional reliability-based design optimization (RBDO) methods transform the design space to the standard normal space for reliability analysis, but they may not work well when the random variables are arbitrarily distributed, because the transformation to the standard normal space cannot be determined or the distribution type is unknown. The methods of Ensemble of Gaussian-based Reliability Analyses (EoGRA) and Ensemble of Gradient-based Transformed Reliability Analyses (EGTRA) have been developed to estimate the joint probability density function using an ensemble of kernel functions. EoGRA performs a series of Gaussian-based kernel reliability analyses and merges them to compute the reliability of the design point. EGTRA transforms the design space to a single-variate design space along the constraint gradient, where the kernel reliability analyses become much less costly. In this paper, a series of comprehensive investigations was performed to study the similarities and differences between EoGRA and EGTRA. The results showed that EGTRA performs accurate and effective reliability analyses for both linear and nonlinear problems. When the constraints are highly nonlinear, EGTRA may encounter some difficulty but can still be effective when started from deterministic optimal points. On the other hand, the sensitivity analyses of EoGRA may be ineffective when the random distribution lies completely inside the feasible or infeasible space. However, EoGRA can find acceptable design points when started from deterministic optimal points. Moreover, EoGRA is capable of delivering the estimated failure probability of each constraint during the optimization process, which may be convenient for some applications.


2020 ◽  
Author(s):  
Alberto Bemporad ◽  
Dario Piga

This paper proposes a method for solving optimization problems in which the decision maker cannot evaluate the objective function, but rather can only express a preference such as “this is better than that” between two candidate decision vectors. The algorithm described in this paper aims at reaching the global optimizer by iteratively proposing to the decision maker a new comparison to make, based on actively learning a surrogate of the latent (unknown and perhaps unquantifiable) objective function from past sampled decision vectors and pairwise preferences. A radial-basis-function surrogate is fit via linear or quadratic programming, satisfying, if possible, the preferences expressed by the decision maker on existing samples. The surrogate is used to propose a new sample of the decision vector for comparison with the current best candidate based on two possible criteria: minimize a combination of the surrogate and an inverse-distance weighting function to balance exploitation of the surrogate against exploration of the decision space, or maximize a function related to the probability that the new candidate will be preferred. Compared to active preference learning based on Bayesian optimization, we show that our approach is competitive in that, within the same number of comparisons, it usually approaches the global optimum more closely and is computationally lighter. Applications of the proposed algorithm to a set of benchmark global optimization problems, to multi-objective optimization, and to optimal tuning of a cost-sensitive neural network classifier for object recognition from images are described in the paper. MATLAB and Python implementations of the algorithms described in the paper are available at http://cse.lab.imtlucca.it/~bemporad/glis.
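A much-simplified sketch of the surrogate-fitting step: instead of the paper's linear/quadratic program with slack variables, this toy version regresses a Gaussian RBF surrogate onto scores derived from the preference pairs (the sample points and preferences below are made up):

```python
import numpy as np

def fit_rbf_surrogate(X, prefs, eps=1.0):
    """Fit s(x) = sum_i beta_i * exp(-(eps * ||x - x_i||)^2) to scores derived
    from pairwise preferences (losses minus wins); lower s means preferred."""
    n = len(X)
    score = np.zeros(n)
    for winner, loser in prefs:        # prefs: (preferred index, other index)
        score[winner] -= 1.0
        score[loser] += 1.0
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    Phi = np.exp(-(eps * D) ** 2)      # Gaussian RBF kernel matrix
    beta = np.linalg.solve(Phi + 1e-8 * np.eye(n), score)  # small ridge term
    return lambda z: float(np.exp(-(eps * np.linalg.norm(X - z, axis=1)) ** 2) @ beta)

# made-up sampled decision vectors and decision-maker preferences (i preferred over j)
X = np.array([[0.0, 0.0], [0.3, 0.3], [1.0, 1.0], [0.5, 0.1]])
prefs = [(1, 0), (1, 2), (1, 3), (0, 2), (3, 2)]
s = fit_rbf_surrogate(X, prefs)
```

The fitted surrogate `s` ranks frequently preferred samples lower, so minimizing a combination of `s` and an exploration term (as the paper does with inverse-distance weighting) would steer the next proposed comparison toward promising regions.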

