Minimum Exponential Cost Allocation of Sure-Fit Tolerances

1975 ◽  
Vol 97 (4) ◽  
pp. 1395-1398 ◽  
Author(s):  
D. Wilde ◽  
E. Prentice

The least cost allocation of sure-fit machine tolerances for Speckhart’s exponential cost model is solved in closed form, without numerical iteration, as a geometric program with zero degrees of difficulty. The results show the importance of an exponential cost sensitivity parameter defined as the “characteristic tolerance”. The theoretical minimum cost can be determined without specifying the corresponding tolerances. Specific minimum cost tolerances can be computed later in closed form if potential cost savings are significant.
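Speckhart's model itself is not reproduced in the abstract; assuming the usual exponential form C_i(t_i) = A_i·exp(−t_i/τ_i), with τ_i the characteristic tolerance and a sure-fit stack-up constraint Σt_i = T, the zero-degree-of-difficulty closed form can be sketched as:

```python
import math

def allocate_tolerances(A, tau, T):
    """Closed-form least-cost sure-fit tolerance allocation, assuming the
    exponential cost model C_i(t_i) = A_i * exp(-t_i / tau_i) with
    characteristic tolerances tau_i and stack-up constraint sum(t_i) == T.
    Stationarity gives (A_i / tau_i) * exp(-t_i / tau_i) = lam for all i."""
    S = sum(tau)
    # ln(lam) follows from substituting t_i = tau_i * ln(A_i / (lam * tau_i))
    # into the constraint sum(t_i) == T:
    ln_lam = (sum(ti * math.log(Ai / ti) for Ai, ti in zip(A, tau)) - T) / S
    # Minimum cost = lam * sum(tau), known before any individual t_i:
    min_cost = math.exp(ln_lam) * S
    t = [ti * (math.log(Ai / ti) - ln_lam) for Ai, ti in zip(A, tau)]
    return t, min_cost
```

Note that the minimum cost λ·Στ_i is available before the individual tolerances are computed, matching the abstract's observation that specific tolerances need only be found if the savings justify it.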

2013 ◽  
Vol 22 (7) ◽  
pp. 871 ◽  
Author(s):  
Rachel M. Houtman ◽  
Claire A. Montgomery ◽  
Aaron R. Gagnon ◽  
David E. Calkin ◽  
Thomas G. Dietterich ◽  
...  

Where a legacy of aggressive wildland fire suppression has left forests in need of fuel reduction, allowing wildland fire to burn may provide fuel treatment benefits, thereby reducing suppression costs from subsequent fires. The least-cost-plus-net-value-change model of wildland fire economics includes benefits of wildfire in a framework for evaluating suppression options. In this study, we estimated one component of that benefit – the expected present value of the reduction in suppression costs for subsequent fires arising from the fuel treatment effect of a current fire. To that end, we employed Monte Carlo methods to generate a set of scenarios for subsequent fire ignition and weather events, which are referred to as sample paths, for a study area in central Oregon. We simulated fire on the landscape over a 100-year time horizon using existing models of fire behaviour, vegetation and fuels development, and suppression effectiveness, and we estimated suppression costs using an existing suppression cost model. Our estimates suggest that the potential cost savings may be substantial. Further research is needed to estimate the full least-cost-plus-net-value-change model. This line of research will extend the set of tools available for developing wildfire management plans for forested landscapes.
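The paper's calibrated fire-behaviour and suppression-cost models are not reproduced in the abstract; a minimal sketch of the Monte Carlo estimate of expected present value, with entirely hypothetical ignition probability, per-fire saving, and discount rate, might look like:

```python
import random

def expected_pv_savings(n_paths=10000, horizon_years=100, p_ignition=0.05,
                        saving_per_fire=1.0e6, discount_rate=0.04, seed=1):
    """Monte Carlo estimate of the expected present value of reduced
    suppression costs on subsequent fires. Each sample path draws the
    years in which a subsequent fire ignites; every such fire is assumed
    to cost `saving_per_fire` less to suppress because of the fuel
    treatment effect of the current fire, discounted to the present."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        pv = 0.0
        for year in range(1, horizon_years + 1):
            if rng.random() < p_ignition:   # an ignition event this year
                pv += saving_per_fire / (1.0 + discount_rate) ** year
        total += pv
    return total / n_paths
```

Averaging over many sample paths smooths the high year-to-year variance of individual ignition and weather sequences, which is exactly why the study generates a scenario set rather than a single trajectory.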


Author(s):  
Al Chen ◽  
Karen Nunez

The bulk chemical industry relies primarily on truck transportation. Truck transportation, although costly, has a high percentage of on-time deliveries. Cheaper alternative transportation modes are less preferred due to a lack of supply chain information. Traditionally, the lack of information about in-transit products leads to higher safety stock and inventory levels, which results in higher costs. Process mapping and activity-based cost analysis are used to identify cost drivers and highlight areas of opportunity for improving bulk chemical supply chain management. The activity-based cost information was used to develop a logistics cost model specifically tailored to the bulk chemical industry, which in turn was used to assess potential cost savings from integrating centralized supply chain management software (visibility solutions) into the bulk chemical supply chain. The results of our analysis support integrating visibility-solution software into multi-modal transportation to improve bulk chemical supply chain management. Integrating visibility solutions enables suppliers to better monitor and control their inventory throughout the supply chain, increase overall asset utilization, and reduce global supply chain costs.
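The study's actual activity pools and driver rates are not given in the abstract, so the figures below are hypothetical; what the sketch shows is the mechanics of activity-based costing, cost per driver unit times driver volume, and how lower safety stock flows through to shipment cost:

```python
# Hypothetical activity pools and driver rates (USD); the study's actual
# figures are not published in the abstract.
ACTIVITY_RATES = {
    "order_processing": 45.0,     # per order
    "loading": 6.0,               # per ton
    "line_haul": 0.12,            # per ton-mile
    "tracking": 15.0,             # per shipment
    "safety_stock_holding": 2.5,  # per ton held over the cycle
}

def shipment_cost(driver_volumes):
    """Activity-based cost: sum over activities of rate * driver volume."""
    return sum(ACTIVITY_RATES[a] * q for a, q in driver_volumes.items())

# A 25-ton, 600-mile bulk move; visibility software lowers the safety
# stock the customer holds against in-transit uncertainty.
base = {"order_processing": 1, "loading": 25, "line_haul": 25 * 600,
        "tracking": 1, "safety_stock_holding": 40}
with_visibility = dict(base, safety_stock_holding=15)
saving = shipment_cost(base) - shipment_cost(with_visibility)
```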


2021 ◽  
Author(s):  
Ryan Chard

Cloud computing provides access to a large-scale set of readily available computing resources at the click of a button. The cloud paradigm has commoditised computing capacity and is often touted as a low-cost model for executing and scaling applications. However, there are significant technical challenges associated with selecting, acquiring, configuring, and managing cloud resources, which can restrict the efficient utilisation of cloud capabilities.

Scientific computing is increasingly hosted on cloud infrastructure, in which scientific capabilities are delivered to the broad scientific community via Internet-accessible services. This migration from on-premise to on-demand cloud infrastructure is motivated by the sporadic usage patterns of scientific workloads and the associated potential cost savings, without the need to purchase, operate, and manage compute infrastructure, a task that few scientific users are trained to perform. However, cloud platforms are not an automatic solution. Their flexibility is derived from an enormous number of services and configuration options, which in turn result in significant complexity for the user. In fact, naïve cloud usage can result in poor performance and excessive costs, which are then directly passed on to researchers.

This thesis presents methods for developing efficient cloud-based scientific services. Three real-world scientific services are analysed and a set of common requirements is derived. To address these requirements, this thesis explores automated and scalable methods for inferring network performance, considers various trade-offs (e.g., cost and performance) when provisioning instances, and profiles application performance, all in heterogeneous and dynamic cloud environments.

Specifically, network tomography provides the mechanisms to infer network performance in dynamic and opaque cloud networks; cost-aware automated provisioning approaches enable services to consider, in real time, various trade-offs such as cost, performance, and reliability; and automated application profiling allows a huge search space of applications, instance types, and configurations to be analysed to determine resource requirements and application performance. Finally, these contributions are integrated into an extensible and modular cloud provisioning and resource management service called SCRIMP. Cloud-based scientific applications and services can subscribe to SCRIMP to outsource their provisioning, usage, and management of cloud infrastructures. Collectively, the approaches presented in this thesis are shown to provide order-of-magnitude cost savings and significant performance improvements when employed by production scientific services.
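SCRIMP's actual provisioning policy is not specified in the abstract; a minimal sketch of cost-aware instance selection, with hypothetical instance types and profiled performance scores, might look like:

```python
from dataclasses import dataclass

@dataclass
class InstanceType:
    name: str
    price_per_hour: float  # on-demand or spot price
    perf_score: float      # profiled throughput, work units per hour
    reliability: float     # e.g. probability of surviving the run

def pick_instance(candidates, work_units, min_reliability=0.0):
    """Pick the instance type that finishes `work_units` at the lowest
    expected cost, subject to a reliability floor."""
    feasible = [c for c in candidates if c.reliability >= min_reliability]
    return min(feasible,
               key=lambda c: c.price_per_hour * work_units / c.perf_score)

fleet = [InstanceType("small", 0.10, 10.0, 0.99),
         InstanceType("large", 0.35, 40.0, 0.99),
         InstanceType("spot-large", 0.12, 40.0, 0.70)]
choice = pick_instance(fleet, work_units=100.0, min_reliability=0.9)
```

The reliability floor is the simplest way to express the cost/performance/reliability trade-off the thesis describes; a fuller policy would fold reliability into the expected-cost objective (e.g. pricing in restarts after spot preemption).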


2021 ◽  
Vol 6 (1) ◽  
pp. 203-220
Author(s):  
Gesine Wanke ◽  
Leonardo Bergami ◽  
Frederik Zahle ◽  
David Robert Verelst

Abstract. Within this work, an existing model of a Suzlon S111 2.1 MW turbine is used to estimate potential cost savings when the conventional upwind rotor concept is changed into a downwind rotor concept. A design framework is used to obtain realistic design updates for the upwind configuration, as well as two design updates for the downwind configuration: a pure material cost-out of the rotor blades and a new planform design. A full design load basis according to the standard has been used to evaluate the impact of the redesigns on the loads. A detailed cost model with load scaling is used to estimate the impact of the design changes on the turbine costs and the cost of energy. It is shown that the downwind configurations generally achieve a blade mass up to 5 % lower than that of the upwind redesign. Compared to an upwind baseline, the upwind redesign shows an estimated cost of energy reduction of 2.3 %, while the downwind designs achieve a maximum reduction of 1.3 %.
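The detailed cost model with load scaling is not reproduced in the abstract; a common levelized cost-of-energy form, COE = (FCR × CAPEX + OPEX) / AEP, illustrates how a capital-cost change such as reduced blade mass propagates into the cost of energy. All figures below are hypothetical:

```python
def cost_of_energy(capex, fixed_charge_rate, annual_opex, aep_mwh):
    """Levelized cost-of-energy estimate in currency per MWh:
    COE = (FCR * CAPEX + annual OPEX) / annual energy production."""
    return (fixed_charge_rate * capex + annual_opex) / aep_mwh

baseline = cost_of_energy(2.4e6, 0.08, 60_000.0, 7_000.0)
# A redesign that trims 2 % off CAPEX at unchanged OPEX and AEP:
redesign = cost_of_energy(2.4e6 * 0.98, 0.08, 60_000.0, 7_000.0)
reduction_pct = 100.0 * (1.0 - redesign / baseline)
```

The asymmetry in the abstract's results (lower blade mass for downwind, yet a smaller COE reduction than the upwind redesign) reflects that the load-scaled cost model also prices changes in loads and energy capture, not blade material alone.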


2020 ◽  
Author(s):  
Gesine Wanke ◽  
Leonardo Bergami ◽  
Frederik Zahle ◽  
David Robert Verelst

Abstract. Within this work, an existing model of a Suzlon S111 2.1 MW turbine is used to estimate potential cost savings when the conventional upwind rotor concept is changed into a downwind rotor concept. A design framework is used to obtain realistic design updates for the upwind configuration as well as two design updates for the downwind configuration, including a pure material cost-out on the rotor blades and a new planform design. A full design load basis according to the standard has been used to evaluate the impact of the redesigns on the loads. A detailed cost model with load scaling is used to estimate the impact of the design changes on the turbine costs and the cost of energy. It is shown that the downwind configurations generally achieve a blade mass up to 5 % lower than that of the upwind redesign. Compared to an upwind baseline, the upwind redesign shows an estimated cost of energy reduction of 2.3 %, whereas the downwind designs achieve a maximum reduction of 1.3 %.


2007 ◽  
Vol 34 (12) ◽  
pp. 1529-1541 ◽  
Author(s):  
Hesham Osman ◽  
Tamer El-Diraby

This paper investigates a relatively new engineering service being introduced in Ontario: subsurface utility engineering (SUE). SUE combines civil engineering, surveying, geophysics, and nondestructive excavation for the accurate mapping of underground utilities. This paper presents the results of a one-year study that investigated the use of SUE on large infrastructure projects in Ontario. The study involved a detailed cost analysis of nine successful SUE projects, four of which are presented in this paper. Potential cost savings were estimated for each case study, and all indicated that SUE has a positive return on investment. In addition, two industry-wide surveys were conducted to investigate the effects of inaccurate utility information on projects. The results indicate that inaccurate utility information has a significant impact on project cost, schedule, and damage to existing utilities. Using the results of the case-study analysis and the surveys, a generic cost model for SUE was developed that relates project-specific characteristics to costs that could be incurred because of inaccurate utility information. This investigation provides valuable insight into the application of a relatively new process in Canada, following successful results in the United States.


Author(s):  
M. Hamzah

The classical Oil Country Tubular Goods (OCTG) procurement approach has been practiced in the industry with the typical process of setting a quantity level of tubulars ahead of the drilling project, including contingencies, and delivery to a storage location close to the drilling site. The total cost of ownership for a drilling campaign can be reduced in the range of 10-30% for tubulars across the entire supply chain. In recent decades, the strategy of OCTG supply has seen an improvement resulting in significant cost savings by employing integrated tubular supply chain management. This method integrates the demand and supply planning of OCTG for several wells in a drilling project and synergizes the information between the pipe manufacturer and drilling operators to optimize deliveries, minimizing inventory levels and safety stocks. While the capital cost of carrying the inventory of OCTG can be reduced by avoiding the upfront procurement of substantial volume for the entire project, several hidden costs of carrying this inventory can also be minimized. These include storage costs, maintenance costs, and costs associated with stock obsolescence. Digital technologies also simplify the tasks related to the traceability of the tubulars, from the release of the pipes at the manufacturing facility to the rig floor. Health, Safety, and Environmental (HSE) risks associated with pipe movements on the rig can be minimized. Pipe-by-pipe traceability provides each pipe's history and properties on demand. Digitalization of the process has proven to simplify back-end administrative tasks. The paper reviews OCTG supply methods and lays out tangible improvement factors from employing the alternative scheme discussed here. It also provides insight into potential cost savings based on observed and calculated experiences from several operations in the Asia-Pacific region.
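The inventory mechanics described above can be made concrete with the textbook safety-stock formula and a carrying-cost rate; all figures below are hypothetical, chosen only to show how shorter replenishment lead times under integrated planning shrink safety stock and its carrying cost:

```python
import math

def safety_stock(z, sigma_demand_per_week, lead_time_weeks):
    """Textbook safety-stock formula: SS = z * sigma_d * sqrt(L)."""
    return z * sigma_demand_per_week * math.sqrt(lead_time_weeks)

def annual_carrying_cost(joints, price_per_joint, carrying_rate):
    """Inventory value times the annual carrying-cost rate (capital,
    storage, maintenance, obsolescence)."""
    return joints * price_per_joint * carrying_rate

# Hypothetical figures: integrated mill-to-rig planning cuts the
# effective replenishment lead time from 16 weeks to 4.
ss_classic = safety_stock(1.65, 120.0, 16.0)      # 792 joints
ss_integrated = safety_stock(1.65, 120.0, 4.0)    # 396 joints
saving = annual_carrying_cost(ss_classic - ss_integrated, 900.0, 0.25)
```

Because safety stock scales with the square root of lead time, quartering the lead time halves the buffer, which is the lever the integrated supply scheme pulls.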


1988 ◽  
Vol 20 (4-5) ◽  
pp. 101-108 ◽  
Author(s):  
R. C. Clifft ◽  
M. T. Garrett

Now that oxygen production facilities can be controlled to match the requirements of the dissolution system, improved oxygen dissolution control can result in significant cost savings for oxygen activated sludge plants. This paper examines the potential cost savings of the vacuum exhaust control (VEC) strategy for the City of Houston, Texas, 69th Street Treatment Complex. The VEC strategy involves operating a closed-tank reactor slightly below atmospheric pressure and using an exhaust apparatus to remove gas from the last stage of the reactor. Computer simulations for one carbonaceous reactor at the 69th Street Complex are presented for the VEC and conventional control strategies. At 80% of design loading, the VEC strategy was found to provide an oxygen utilization efficiency of 94.9%, compared with 77.0% for the conventional control method. At design capacity, the oxygen utilization efficiencies for VEC and conventional control were found to be 92.3% and 79.5%, respectively. Based on the expected turn-down capability of Houston's oxygen production facilities, the simulations indicate that the VEC strategy will more than double the cost savings possible with the conventional control method.
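The cost implication of the efficiency figures follows directly from the fact that unutilized oxygen is exhausted, so production must scale with the inverse of utilization efficiency. The efficiencies below are the abstract's 80%-of-design-loading values; the demand figure is hypothetical:

```python
def required_o2_production(dissolution_demand_tpd, utilization_eff):
    """Oxygen production needed to meet a dissolution demand; gas that is
    not utilized is exhausted, so production scales with 1 / efficiency."""
    return dissolution_demand_tpd / utilization_eff

# Efficiencies from the simulations at 80 % of design loading; the
# 100 t/d demand figure is hypothetical.
vec = required_o2_production(100.0, 0.949)           # ~105.4 t/d
conventional = required_o2_production(100.0, 0.770)  # ~129.9 t/d
reduction_pct = 100.0 * (1.0 - vec / conventional)
```

Under these assumptions VEC cuts required oxygen production by roughly 19%, which is where the operating-cost savings originate, subject to the plant's actual turn-down capability.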


2020 ◽  
Vol 15 ◽  
Author(s):  
Billu Payal ◽  
Anoop Kumar ◽  
Harsh Saxena

Background: Asthma and chronic obstructive pulmonary disease (COPD) are well-known respiratory diseases affecting millions of people in India. Various branded generic as well as generic drugs are available on the market for their treatment, and how much cost can be saved by using generic medicines is still unclear among physicians. Thus, the main aim of the current investigation was to perform a cost-minimization analysis of generic versus branded generic (high- and low-expensive) drugs, and of high-expensive versus least-expensive branded generic drugs, used in the Department of Pulmonary Medicine of Era Medical University, Lucknow, for the treatment of asthma and COPD. Methodology: The current index of medical stores (CIMS) was referred to for the cost of branded drugs, whereas the cost of generic drugs was taken from the Jan Aushadi scheme of India, 2016. The percentage cost variation for asthma and COPD regimens on substituting available generic drugs was calculated using the standard formula, and costs are presented in Indian rupees (as of 2019). Results: The maximum cost variation was found for budesonide respules, both between the high-expensive and least-expensive branded generic drugs and between the generic and high-expensive branded generic drugs. Among combinations, the maximum cost variation was observed for the montelukast and levocetirizine combination. Conclusion: This study infers that substituting generic anti-asthma and COPD drugs can bring potential cost savings for patients.
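The "standard formula" referenced above is the usual percentage cost variation between the costlier and cheaper alternative; a minimal sketch with hypothetical prices (the abstract reports no absolute figures):

```python
def percent_cost_variation(costlier, cheaper):
    """Percentage cost variation between two treatment alternatives:
    ((costlier - cheaper) / cheaper) * 100."""
    return (costlier - cheaper) / cheaper * 100.0

# Hypothetical per-course prices in INR, for illustration only:
variation = percent_cost_variation(190.0, 40.0)  # 375.0 %
```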

