Robust Design for Profit Maximization With Aversion to Downside Risk From Parametric Uncertainty in Consumer Choice Models

2012 ◽  
Vol 134 (10) ◽  
Author(s):  
Camilo B. Resende ◽  
C. Grace Heckmann ◽  
Jeremy J. Michalek

In new product design, risk-averse firms must consider downside risk in addition to expected profitability, since some designs are associated with greater market uncertainty than others. We propose an approach to robust optimal product design for profit maximization by introducing an α-profit metric to manage expected profitability vs. downside risk due to uncertainty in market share predictions. Our goal is to maximize profit at a firm-specified level of risk tolerance. Specifically, we find the design that maximizes the α-profit: the value that the firm has a (1 − α) chance of exceeding, given the distribution of possible outcomes. The parameter α ∈ (0,1) is set by the firm to reflect sensitivity to downside risk (or upside gain), and parametric study of α reveals the sensitivity of optimal design choices to firm risk preference. We account here only for uncertainty of choice model parameter estimates due to finite data sampling, assuming the choice model is correctly specified (no misspecification error). We apply the delta method to estimate the mapping from uncertainty in discrete choice model parameters to uncertainty of profit outcomes and identify the estimated α-profit as a closed-form function of decision variables for the multinomial logit model. An example demonstrates implementation of the method to find the optimal design characteristics of a dial-readout scale using conjoint data.
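Under a normal approximation of the profit distribution, the α-profit can be written compactly as below; this is a minimal sketch with assumed notation, not taken verbatim from the paper:

```latex
% alpha-profit under a normal approximation of profit uncertainty (assumed notation)
\[
  \Pi_\alpha(\mathbf{x}) \;=\; \hat{\Pi}(\mathbf{x}) \;-\; z_{1-\alpha}\,\sigma_{\Pi}(\mathbf{x}),
  \qquad
  \sigma_{\Pi}^{2}(\mathbf{x}) \;\approx\;
  \nabla_{\boldsymbol\beta}\Pi(\mathbf{x};\hat{\boldsymbol\beta})^{\!\top}
  \,\widehat{\Sigma}_{\boldsymbol\beta}\,
  \nabla_{\boldsymbol\beta}\Pi(\mathbf{x};\hat{\boldsymbol\beta}),
\]
```

where \(\hat{\Pi}\) is profit evaluated at the estimated choice-model parameters \(\hat{\boldsymbol\beta}\), \(\widehat{\Sigma}_{\boldsymbol\beta}\) is their estimated covariance, \(z_{1-\alpha}\) is the standard normal quantile, and the quadratic form is the delta-method propagation of parameter uncertainty to profit.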

Author(s):  
Camilo B. Resende ◽  
C. Grace Heckmann ◽  
Jeremy J. Michalek

In new product design, risk-averse firms must consider downside risk in addition to expected profitability, since some designs are associated with greater market uncertainty than others. We propose an approach to robust optimal product design for profit maximization by introducing an α-profit metric to manage expected profitability vs. downside risk due to uncertainty in market share predictions. Our goal is to maximize profit at a firm-specified level of risk tolerance. Specifically, we find the design that maximizes the α-profit: the value that the firm has a (1 − α) chance of exceeding, given the distribution of possible outcomes. The parameter α ∈ [0,1] is set by the firm to reflect sensitivity to downside risk (or upside gain), and parametric study of α reveals the sensitivity of optimal design choices to firm risk preference. We account here only for uncertainty of choice model parameter estimates due to finite data sampling, assuming the choice model is correctly specified (no misspecification error). We apply the delta method to estimate the mapping from uncertainty in discrete choice model parameters to uncertainty of profit outcomes and identify the estimated α-profit as a closed-form function of design decision variables. This process is described for the multinomial logit model, and a case study demonstrates implementation of the method to find the optimal design characteristics of a midsize consumer automobile.
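As a rough illustration of the delta-method step for a multinomial logit share model, the following Python sketch propagates the parameter covariance to a profit variance and evaluates the α-quantile. The profit function, prices, costs, and market size are hypothetical placeholders, not values from the paper.

```python
import numpy as np
from scipy.stats import norm

def mnl_share(beta, X):
    """Multinomial logit choice shares for alternatives with attribute rows X."""
    u = X @ beta
    e = np.exp(u - u.max())
    return e / e.sum()

def profit(beta, X, own_idx, price, cost, market_size):
    """Expected profit of the firm's alternative (hypothetical linear profit model)."""
    return market_size * mnl_share(beta, X)[own_idx] * (price - cost)

def alpha_profit(beta_hat, cov_beta, X, own_idx, price, cost, market_size, alpha=0.1):
    """Delta-method alpha-profit: the value exceeded with probability (1 - alpha)."""
    mu = profit(beta_hat, X, own_idx, price, cost, market_size)
    # Numerical gradient of profit w.r.t. the choice-model parameters.
    eps = 1e-6
    grad = np.array([
        (profit(beta_hat + eps * np.eye(len(beta_hat))[k], X, own_idx,
                price, cost, market_size) - mu) / eps
        for k in range(len(beta_hat))
    ])
    var = grad @ cov_beta @ grad          # delta-method profit variance
    return mu - norm.ppf(1 - alpha) * np.sqrt(var)

# Toy usage with made-up estimates and two competing alternatives.
beta_hat = np.array([-0.8, 1.2])          # e.g. price and feature coefficients
cov_beta = np.array([[0.04, 0.00],
                     [0.00, 0.09]])       # covariance from finite-sample estimation
X = np.array([[3.0, 1.0],                 # firm's design: price 3, feature level 1
              [2.5, 0.0]])                # competitor
print(alpha_profit(beta_hat, cov_beta, X, own_idx=0,
                   price=3.0, cost=1.5, market_size=10_000, alpha=0.1))
```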


Author(s):  
Tristan Gally ◽  
Peter Groche ◽  
Florian Hoppe ◽  
Anja Kuttich ◽  
Alexander Matei ◽  
...  

In engineering applications, almost all processes are described with the help of models. Forming machines in particular rely heavily on mathematical models for control and condition monitoring. Inaccuracies during the modeling, manufacturing and assembly of these machines induce model uncertainty which impairs the controller’s performance. In this paper we propose an approach to identify model uncertainty using parameter identification, optimal design of experiments and hypothesis testing. The experimental setup is characterized by optimal sensor positions such that specific model parameters can be determined with minimal variance. This allows for the computation of confidence regions in which the real parameters or the parameter estimates from different test sets have to lie. We claim that inconsistencies in the estimated parameter values, even when their approximated confidence ellipsoids are taken into account, cannot be explained by data uncertainty and are instead indicators of model uncertainty. The proposed method is demonstrated using a component of the 3D Servo Press, a multi-technology forming machine that combines spindles with eccentric servo drives.
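The consistency check can be sketched as a chi-square test on the difference between parameter estimates from two test sets, using their approximated covariance matrices; the linear model and data below are hypothetical stand-ins for the press component model.

```python
import numpy as np
from scipy.stats import chi2
from scipy.optimize import least_squares

def fit(model, theta0, x, y, sigma):
    """Least-squares parameter estimate and approximate covariance from the Jacobian."""
    res = least_squares(lambda th: (model(th, x) - y) / sigma, theta0)
    J = res.jac
    cov = np.linalg.inv(J.T @ J)          # Gauss-Newton covariance approximation
    return res.x, cov

def inconsistent(theta_a, cov_a, theta_b, cov_b, level=0.95):
    """Flag model uncertainty if the two estimates differ by more than their
    combined confidence ellipsoid (i.e. data uncertainty) can explain."""
    d = theta_a - theta_b
    stat = d @ np.linalg.inv(cov_a + cov_b) @ d
    return stat > chi2.ppf(level, df=len(d))

# Hypothetical two-parameter model and two synthetic test sets.
model = lambda th, x: th[0] * x + th[1] * np.sin(x)
rng = np.random.default_rng(0)
x = np.linspace(0, 4, 50)
sigma = 0.05
y_a = model([2.0, 0.5], x) + rng.normal(0, sigma, x.size)
y_b = model([2.1, 0.5], x) + rng.normal(0, sigma, x.size)   # slight model drift
theta_a, cov_a = fit(model, [1.0, 1.0], x, y_a, sigma)
theta_b, cov_b = fit(model, [1.0, 1.0], x, y_b, sigma)
print("model uncertainty indicated:", inconsistent(theta_a, cov_a, theta_b, cov_b))
```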


1985 ◽  
Vol 248 (3) ◽  
pp. R378-R386 ◽  
Author(s):  
M. H. Nathanson ◽  
G. M. Saidel

Optimal experimental design is used to predict the experimental conditions that will allow the "best" estimates of model parameters. A variety of criteria must be considered before an optimal design is chosen. Maximizing the determinant of the information matrix (D-optimality), which tends to produce the most precise simultaneous estimates of all parameters, is commonly taken as the primary criterion. To complement this criterion, we present another whose effect is to reduce the interaction among the parameter estimates so that changes in any one parameter can be more distinct. This new criterion consists of maximizing the determinant of an appropriately scaled information matrix (M-optimality). These criteria are applied jointly in a multiple-objective function. To illustrate the use of these concepts, we develop an optimal experimental design of blood sampling schedules using a detailed ferrokinetic model.
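A compact way to compare candidate sampling schedules under both criteria is sketched below. The diagonal scaling used for M-optimality (normalizing the information matrix to unit diagonal, so its determinant approaches 1 as parameter interactions vanish) is one plausible reading of "appropriately scaled", and the exponential sensitivity model is a hypothetical placeholder for the ferrokinetic model.

```python
import numpy as np

def information_matrix(times, sens, sigma=1.0):
    """Fisher information for a set of sampling times, given a sensitivity function
    sens(t) that returns d(output)/d(parameters) at time t as a 1-D array."""
    S = np.array([sens(t) for t in times])
    return (S.T @ S) / sigma**2

def d_criterion(F):
    """D-optimality: determinant of the information matrix."""
    return np.linalg.det(F)

def m_criterion(F):
    """M-optimality (assumed scaling): determinant of the unit-diagonal-scaled
    information matrix; close to 1 when parameter estimates interact little."""
    d = np.sqrt(np.diag(F))
    return np.linalg.det(F / np.outer(d, d))

def joint_objective(times, sens, w=0.5):
    """Weighted multiple-objective combination of the two criteria (log scale)."""
    F = information_matrix(times, sens)
    return w * np.log(d_criterion(F)) + (1 - w) * np.log(m_criterion(F))

# Hypothetical two-parameter model y = a * exp(-b t) standing in for the real one.
a, b = 1.0, 0.3
sens = lambda t: np.array([np.exp(-b * t), -a * t * np.exp(-b * t)])
schedule_1 = [1, 2, 4, 8, 16]
schedule_2 = [1, 2, 3, 4, 5]
print(joint_objective(schedule_1, sens), joint_objective(schedule_2, sens))
```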


1998 ◽  
Vol 1645 (1) ◽  
pp. 160-169 ◽  
Author(s):  
James E. Hicks ◽  
Mounir M. Abdel-Aal

Equilibrium models of combined location and travel choices solve for the modal link flow pattern that simultaneously solves a constrained minimization problem and satisfies a set of equilibrium conditions characterizing rational traveler choice behavior in an urban transportation system. The minimization problem is typically made representative of the particular urban area being studied by including coefficients of travel costs and travel choices estimated from locally available observed data. For large urban areas, in practice, interzonal travel times and costs can be derived only from the travel model, because suitable observed data are nonexistent. In this case, the estimation problem is a function of the travel model variables and, at the same time, the travel model is a function of the parameters determined by the estimation problem. Procedures to search computationally for a stable solution to this bilevel optimization problem have met with limited success. The parameter estimation is solved by an iterative procedure: first, the parameters are held fixed and the travel model is solved; then, the travel patterns are held fixed and the maximum-likelihood parameters are computed by the Newton-Raphson method. Each pass through these two steps yields a new set of parameter values for the next iteration, until stable parameter values are achieved. The quality of the convergence of the parameter estimates is reported.
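The alternating procedure can be sketched generically: hold the parameters fixed and solve the travel (equilibrium) model, then hold the resulting travel pattern fixed and update the parameters by Newton-Raphson on the log-likelihood, repeating until the estimates stabilize. The solve_travel_model, loglik_gradient, and loglik_hessian routines below are hypothetical placeholders for the problem-specific components.

```python
import numpy as np

def estimate_parameters(theta0, solve_travel_model, loglik_gradient, loglik_hessian,
                        tol=1e-6, max_outer=50, max_newton=20):
    """Alternating (bilevel) estimation: fix parameters -> solve travel model,
    fix travel pattern -> Newton-Raphson maximum-likelihood update, iterate."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_outer):
        # Step 1: parameters held fixed, solve the equilibrium travel model.
        flows = solve_travel_model(theta)
        # Step 2: travel pattern held fixed, Newton-Raphson on the log-likelihood.
        theta_new = theta.copy()
        for _ in range(max_newton):
            g = loglik_gradient(theta_new, flows)
            H = loglik_hessian(theta_new, flows)
            step = np.linalg.solve(H, g)
            theta_new = theta_new - step
            if np.linalg.norm(step) < tol:
                break
        # Outer convergence check: have the parameter values stabilized?
        if np.linalg.norm(theta_new - theta) < tol:
            return theta_new
        theta = theta_new
    return theta
```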


2021 ◽  
pp. 135481662110300
Author(s):  
Usamah F Alfarhan ◽  
Khaldoon Nusair ◽  
Hamed Al-Azri ◽  
Saeed Al-Muharrami ◽  
Nan Hua

Tourism expenditures are determined by a set of antecedents that reflect tourists’ willingness and ability to spend, together with the de facto incremental monetary outlays through which that willingness and ability are transformed into total expenditures. Based on the neoclassical theoretical argument of utility-constrained expenditure minimization, we extend the current literature by applying a sustainability-based segmentation criterion, namely the Legatum Prosperity Index™, to the decomposition of a total expenditure differential into tourists’ relative willingness to spend and an upper bound of third-degree price discrimination, using mean-level and conditional quantile estimates. Our results indicate that understanding the price–quantity composition of international inbound tourism expenditure differentials assists agents in the tourism industry in their quest for profit maximization.
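One hedged illustration of the mean-level part of such a decomposition is an Oaxaca-Blinder-style split of the expenditure differential between two prosperity segments into an "endowment" (willingness/ability) component and a "coefficient" component; the variable names and the reading of the coefficient part as the price-side bound are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np
import statsmodels.api as sm

def oaxaca_mean_decomposition(X_hi, y_hi, X_lo, y_lo):
    """Two-fold Oaxaca-Blinder decomposition of the mean expenditure differential
    between a high- and a low-prosperity segment (illustrative only)."""
    Xh, Xl = sm.add_constant(X_hi), sm.add_constant(X_lo)
    bh = sm.OLS(y_hi, Xh).fit().params
    bl = sm.OLS(y_lo, Xl).fit().params
    xh_bar, xl_bar = Xh.mean(axis=0), Xl.mean(axis=0)
    explained = (xh_bar - xl_bar) @ bl        # differences in willingness/ability drivers
    unexplained = xh_bar @ (bh - bl)          # residual "price-side" component
    return explained, unexplained

# Synthetic example with made-up covariates (e.g. income, length of stay).
rng = np.random.default_rng(1)
X_hi = rng.normal([3.0, 5.0], 1.0, size=(200, 2))
X_lo = rng.normal([2.5, 4.0], 1.0, size=(200, 2))
y_hi = 100 + X_hi @ [30.0, 12.0] + rng.normal(0, 10, 200)
y_lo = 80 + X_lo @ [25.0, 10.0] + rng.normal(0, 10, 200)
print(oaxaca_mean_decomposition(X_hi, y_hi, X_lo, y_lo))
```

The same decomposition could be repeated at conditional quantiles (e.g. with statsmodels QuantReg) to mirror the paper's quantile-level analysis.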


2008 ◽  
Vol 10 (2) ◽  
pp. 153-162 ◽  
Author(s):  
B. G. Ruessink

When a numerical model is to be used as a practical tool, its parameters should preferably be stable and consistent, that is, possess a small uncertainty and be time-invariant. Using data and predictions of alongshore mean currents flowing on a beach as a case study, this paper illustrates how parameter stability and consistency can be assessed using Markov chain Monte Carlo. Within a single calibration run, Markov chain Monte Carlo estimates the parameter posterior probability density function, its mode being the best-fit parameter set. Parameter stability is investigated by adding new data stepwise to a calibration run, while consistency is examined by calibrating the model on different datasets of equal length. The results for the present case study indicate that several tidal cycles with strong (say, >0.5 m/s) currents are required to obtain stable parameter estimates, and that the best-fit model parameters and the underlying posterior distribution are strongly time-varying. This inconsistent parameter behavior may reflect unresolved variability of the processes represented by the parameters, or may represent compensational behavior for temporal violations of specific model assumptions.
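A minimal Metropolis-type sketch of such a calibration is shown below; the alongshore-current model is replaced by a generic single-parameter predictor, and the Gaussian likelihood and uniform prior are illustrative assumptions.

```python
import numpy as np

def metropolis(log_post, theta0, n_iter=20_000, step=0.05, seed=0):
    """Random-walk Metropolis sampler returning the chain of parameter samples."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        prop = theta + rng.normal(0, step, theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

# Illustrative stand-in for the current model: v = c_f * forcing, calibrated
# against noisy synthetic "observations" with a Gaussian likelihood, flat prior.
rng = np.random.default_rng(1)
forcing = rng.uniform(0.5, 2.0, 100)
obs = 0.4 * forcing + rng.normal(0, 0.05, 100)

def log_post(theta):
    if not (0.0 < theta[0] < 2.0):        # uniform prior bounds
        return -np.inf
    resid = obs - theta[0] * forcing
    return -0.5 * np.sum((resid / 0.05) ** 2)

chain = metropolis(log_post, [1.0])
burned = chain[5000:]                      # discard burn-in
print("posterior mean:", burned.mean(),
      "95% interval:", np.percentile(burned, [2.5, 97.5]))
```

Stability would then be assessed by rerunning the sampler as data are added stepwise and watching whether the posterior stops moving.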


1991 ◽  
Vol 18 (2) ◽  
pp. 320-327 ◽  
Author(s):  
Murray A. Fitch ◽  
Edward A. McBean

A model is developed for the prediction of river flows resulting from combined snowmelt and precipitation. The model employs a Kalman filter to reflect uncertainty both in the measured data and in the system model parameters. The forecasting algorithm is used to develop multi-day forecasts for the Sturgeon River, Ontario. The algorithm is shown to produce good 1-day-ahead and 2-day-ahead forecasts, but the linear prediction model is found inadequate for longer-term forecasts. Good initial parameter estimates are shown to be essential for optimal forecasting performance. Key words: Kalman filter, streamflow forecast, multi-day, streamflow, Sturgeon River, MISP algorithm.
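A skeletal linear Kalman filter of the kind described, with a hypothetical two-state (snowmelt- and precipitation-driven runoff) flow model standing in for the paper's formulation, could look like this:

```python
import numpy as np

def kalman_step(x, P, z, A, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.
    x, P: state estimate and covariance; z: new observation vector."""
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Hypothetical setup: state = [snowmelt contribution, precipitation contribution],
# observation = total river flow (their sum) plus measurement noise.
A = np.array([[0.95, 0.0],
              [0.0, 0.80]])               # simple persistence dynamics (assumed)
H = np.array([[1.0, 1.0]])                # flow = snowmelt + precipitation runoff
Q = np.diag([0.02, 0.05])                 # process noise (model uncertainty)
R = np.array([[0.1]])                     # measurement noise
x, P = np.array([1.0, 0.5]), np.eye(2)

for z in [1.6, 1.4, 1.7, 1.5]:            # synthetic daily flow observations
    x, P = kalman_step(x, P, np.array([z]), A, H, Q, R)
    print("one-day-ahead flow forecast:", (H @ A @ x)[0])
```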


2011 ◽  
Vol 64 (S1) ◽  
pp. S3-S18 ◽  
Author(s):  
Yuanxi Yang ◽  
Jinlong Li ◽  
Junyi Xu ◽  
Jing Tang

Integrated navigation using multiple Global Navigation Satellite Systems (GNSS) is beneficial for increasing the number of observable satellites, alleviating the effects of systematic errors and improving the accuracy of positioning, navigation and timing (PNT). When multiple constellations and multiple frequency measurements are employed, the functional and stochastic models as well as the estimation principle for PNT may differ. Therefore, the commonly used definition of “dilution of precision (DOP)”, based on least squares (LS) estimation and unified functional and stochastic models, is no longer applicable. In this paper, three types of generalised DOPs are defined. The first type of generalised DOP is based on the error influence function (IF) of pseudo-ranges, which reflects the geometry strength of the measurements, the error magnitude and the estimation risk criteria. When least squares estimation is used, the first type of generalised DOP is identical to the one commonly used. In order to define the first type of generalised DOP, an IF of signal-in-space (SIS) errors on the parameter estimates of PNT is derived. The second type of generalised DOP is defined based on the functional model with additional systematic parameters induced by the compatibility and interoperability problems among different GNSS systems. The third type of generalised DOP is defined based on Bayesian estimation, in which the a priori information of the model parameters is taken into account; this is suitable for evaluating the precision of kinematic positioning or navigation. Different types of generalised DOPs are suitable for different PNT scenarios, and an example of calculating these DOPs for multi-GNSS systems including GPS, GLONASS, Compass and Galileo is given. New observation equations of Compass and GLONASS that may contain additional parameters for interoperability are specifically investigated. The results show that if the interoperability of multi-GNSS is not fulfilled, the increased number of satellites will not significantly reduce the generalised DOP value. Furthermore, outlying measurements will not change the original DOP, but will change the first type of generalised DOP, which includes a robust error IF. A priori information of the model parameters will also reduce the DOP.
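For the least squares case, the standard DOP computation and the effect of adding an inter-system bias parameter can be sketched as follows; the satellite geometry is synthetic and the per-system bias handling is a simplified illustration of the second type of generalised DOP, not the paper's exact formulation.

```python
import numpy as np

def dop(los, extra_cols=None):
    """Dilution of precision from unit line-of-sight vectors (rows of `los`).
    `extra_cols` appends further columns, e.g. per-system clock/bias parameters."""
    n = los.shape[0]
    G = np.hstack([los, np.ones((n, 1))])          # position + common receiver clock
    if extra_cols is not None:
        G = np.hstack([G, extra_cols])
    Q = np.linalg.inv(G.T @ G)                     # least-squares cofactor matrix
    return np.sqrt(np.trace(Q))

# Synthetic geometry: 8 satellites, 4 from each of two constellations.
rng = np.random.default_rng(2)
az = rng.uniform(0, 2 * np.pi, 8)
el = rng.uniform(np.deg2rad(15), np.deg2rad(80), 8)
los = np.column_stack([np.cos(el) * np.sin(az),
                       np.cos(el) * np.cos(az),
                       np.sin(el)])

# Fully interoperable case: one common clock parameter for both constellations.
print("GDOP, interoperable:", dop(los))

# Non-interoperable case: extra inter-system bias column for the second constellation,
# which enlarges the cofactor matrix and hence the generalised DOP.
bias_col = np.vstack([np.zeros((4, 1)), np.ones((4, 1))])
print("GDOP, with inter-system bias:", dop(los, extra_cols=bias_col))
```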

