The use of cellular automaton approach in forest planning

2007 ◽  
Vol 37 (11) ◽  
pp. 2188-2200 ◽  
Author(s):  
Tero Heinonen ◽  
Timo Pukkala

This study presents an optimization method based on a cellular automaton (CA) for solving spatial forest planning problems. The CA maximizes stand-level and neighbourhood objectives locally, i.e., separately for different stands or raster cells. Global objectives are dealt with by adding a global part to the objective function and gradually increasing its weight until the global targets are met to a required degree. The method was tested in an area that consisted of 2500 (50 × 50) hexagons 1 ha in size. The CA was used with both parallel and sequential state-updating rules. The method was compared with linear programming (LP) in four nonspatial forest planning problems where net present value (NPV) was maximized subject to harvest constraints. The CA solutions reached at least 99.6% of the LP objective in three problems and 97.9% in the fourth. The CA was compared with simulated annealing (SA) in three spatial problems where a multiobjective utility function was maximized subject to periodical harvest and ending-volume constraints. The nonspatial goal was the NPV, and the spatial goals were old-forest and cutting-area aggregation as well as dispersion of regeneration cuttings. The CA produced higher objective function values than SA in all problems. In particular, the spatial objective variables were better in the CA solutions, whereas differences in NPV were small. There were no major differences in the performance of the parallel and sequential cell state-updating modes of the CA.
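The local-plus-global objective described above is straightforward to prototype. The sketch below applies the same idea to a toy square raster: each cell greedily selects the treatment schedule maximizing its own NPV plus a neighbourhood aggregation bonus, minus a penalty on the deviation of total harvest from a global target, whose weight is raised until the target is met. The grid size, schedule data, weights, and stopping tolerance are all invented placeholders, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(42)
N, S = 20, 3                                  # toy 20 x 20 raster, 3 candidate schedules per cell
npv = rng.uniform(0.0, 1.0, (N, N, S))        # hypothetical stand-level NPV of each schedule
cut = rng.uniform(0.0, 1.0, (N, N, S))        # hypothetical harvest volume of each schedule
target = 0.5 * N * N                          # assumed global harvest target

state = rng.integers(0, S, (N, N))

def neigh_share(i, j, s):
    """Fraction of 4-neighbours holding schedule s (a simple aggregation term)."""
    hits = sum(state[(i + di) % N, (j + dj) % N] == s
               for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)))
    return hits / 4.0

w = 0.0                                       # weight of the global part
for sweep in range(100):
    total = cut[np.arange(N)[:, None], np.arange(N)[None, :], state].sum()
    for i in range(N):                        # sequential state-updating mode
        for j in range(N):
            for s in range(S):
                tot_s = total - cut[i, j, state[i, j]] + cut[i, j, s]
                score = (npv[i, j, s] + 0.5 * neigh_share(i, j, s)
                         - w * abs(tot_s - target) / target)
                if s == 0 or score > best:
                    best, best_s = score, s
            total += cut[i, j, best_s] - cut[i, j, state[i, j]]
            state[i, j] = best_s
    if abs(total - target) / target < 0.01:   # global target met to a required degree
        break
    w += 0.05                                 # gradually raise the global weight

print(f"total harvest {total:.1f} vs target {target:.1f} (final weight {w:.2f})")
```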

2005 ◽  
Vol 29 (4) ◽  
pp. 185-193 ◽  
Author(s):  
Wise H. Batten ◽  
Pete Bettinger ◽  
Jianping Zhu

Abstract Forest plans for a number of spatial harvest scheduling scenarios were developed for a medium-sized forest holding using a heuristic forest planning technique (tabu search). Green-up periods of 2 to 7 years were assessed in conjunction with two types of adjacency constraints. The results indicate that, for this one property, a short green-up period (2–3 years) did not significantly affect the economic value of the holding. Longer green-up periods and smaller maximum clearcut sizes could reduce the net present value of the holding by as much as 5 to 15%, depending on the clearcut adjacency rules used. In a validation of the heuristic solutions, we found that the best solution generated with the heuristic (for three separate problems) was within 0.25% of the integer programming solution. South. J. Appl. For. 29(4):185–193.
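As a rough illustration of the heuristic, here is a minimal tabu search under a unit-restriction adjacency rule with a green-up window. The stand values, random adjacency graph, tabu tenure, and neighbourhood sampling are all hypothetical; the paper's actual formulation and move evaluation are richer.

```python
import random

random.seed(1)
n_stands, n_periods = 30, 5
no_cut = n_periods                            # index meaning 'leave the stand uncut'
green_up = 2                                  # assumed green-up length, in planning periods
value = [[random.random() for _ in range(n_periods + 1)] for _ in range(n_stands)]
adj = {i: set() for i in range(n_stands)}     # hypothetical adjacency graph
for i in range(n_stands):
    for j in range(i + 1, n_stands):
        if random.random() < 0.1:
            adj[i].add(j); adj[j].add(i)

def feasible(plan, stand, period):
    """Unit-restriction adjacency: neighbours may not be cut within the green-up window."""
    return period == no_cut or all(
        plan[j] == no_cut or abs(plan[j] - period) >= green_up for j in adj[stand])

plan = [no_cut] * n_stands
best_val = cur_val = sum(value[i][plan[i]] for i in range(n_stands))
tabu = {}
for it in range(2000):
    cand = None
    for _ in range(50):                       # sample a neighbourhood of moves
        i, p = random.randrange(n_stands), random.randrange(n_periods + 1)
        if p != plan[i] and feasible(plan, i, p) and tabu.get((i, p), -1) < it:
            gain = value[i][p] - value[i][plan[i]]
            if cand is None or gain > cand[2]:
                cand = (i, p, gain)
    if cand:
        i, p, gain = cand
        tabu[(i, plan[i])] = it + 20          # reversing this move is tabu for 20 iterations
        plan[i], cur_val = p, cur_val + gain
        best_val = max(best_val, cur_val)

print(f"best objective found: {best_val:.3f}")
```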


Author(s):  
Pär Wilhelmsson ◽  
Edward Sjödin ◽  
André Wästlund ◽  
Jörgen Wallerman ◽  
Tomas Lämås ◽  
...  

In forest management planning, the dynamic treatment unit (DTU) approach has become an increasingly relevant alternative to the traditional planning approach based on fixed stands, owing to improved remote sensing techniques and optimization procedures, and it has the potential for higher goal fulfillment of forest activities. In the DTU approach, the traditional concept of fixed stands is discarded, and forest data are kept in units with a high spatial resolution. Forest operations are planned by clustering cells into treatment units for harvest operations. This paper presents a new model with an exact optimization technique for forming DTUs in forest planning. In comparison with most previous models, this model aims for increased flexibility by modelling the spatial dimension according to cell proximity rather than immediate adjacency. The model is evaluated in a case study with harvest-flow constraints for a forest estate in southern Sweden, represented by 3587 cells. The parameter settings differed between cases, resulting in varying degrees of clustered DTUs, which caused relative net present value losses of up to 4.3%. The case without clustering had the lowest net present value when entry costs were considered. The solution times varied between 2.2 s and 42 min 6 s and grew rapidly with increasing problem size.
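The paper solves this problem exactly; the fragment below only illustrates the modelling idea that distinguishes it, namely rewarding identical treatment of cells within a proximity radius rather than only immediately adjacent cells. The cell coordinates, NPV data, radius, and clustering weight are invented, and the myopic starting assignment merely stands in for the exact solver.

```python
import numpy as np

rng = np.random.default_rng(0)
n, periods = 200, 3
xy = rng.uniform(0, 1000, (n, 2))             # cell centroids (m) for a hypothetical estate
npv = rng.uniform(0, 1, (n, periods))         # hypothetical per-cell NPV by harvest period
r = 60.0                                      # proximity radius: cells this close may share a DTU

# Proximity pairs: all pairs within r, not just immediately adjacent cells.
dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
pairs = [(i, j) for i in range(n) for j in range(i + 1, n) if dist[i, j] <= r]

def objective(assign, w_cluster=0.05):
    """Total NPV plus a reward for proximate cells treated in the same period."""
    base = npv[np.arange(n), assign].sum()
    bonus = sum(assign[i] == assign[j] for i, j in pairs)
    return base + w_cluster * bonus

assign = npv.argmax(axis=1)                   # myopic start: best period per cell
print(f"objective with clustering reward: {objective(assign):.1f}")
```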


2020 ◽  
Author(s):  
Bernard Dusseault ◽  
Philippe Pasquier

The design by optimization of hybrid ground-coupled heat pump (Hy-GCHP) systems is a complex process that involves dozens of parameters, some of which cannot be known with absolute certainty. Designers therefore face the possibility of under- or oversizing Hy-GCHP systems as a result of those uncertainties. Both situations are undesirable, raising either upfront costs or operating costs. The most common way designers evaluate these impacts and harden their designs against unforeseen conditions is sensitivity analysis, an operation that can only be done after the sizing.

Traditional stochastic methods, like Markov chain Monte Carlo, can handle uncertainties during the sizing, but come at a high computational price paid for in millions of simulations. Considering that a single simulation of Hy-GCHP system operation over 10 or 20 years can take seconds to minutes, millions of simulations are not a realistic approach to design under uncertainty. Alternative stochastic design methodologies that do not require nearly as many simulations are exploited with great success in other fields: the conditional value-at-risk (CVaR) in the financial sector and the net present value-at-risk (NPVaR) in civil engineering. Both financial indicators are used as objective functions in their respective fields to account for uncertainties. They involve distributions of uncertain parameters but focus only on the tails of those distributions. This results in quicker optimizations but also in more conservative designs, which remain profitable even when faced with extremely unfavorable conditions.

In this work, we adapt the NPVaR to make the sizing of Hy-GCHP systems under uncertainty viable. The mixed-integer nonlinear optimization algorithm used jointly with the NPVaR, the Hy-GCHP simulation algorithm, and the g-function assessment methods are presented broadly; all are validated in this work or in referenced publications. The implementation of the NPVaR is discussed, specifically how computation time can be further reduced without sacrificing its conservative property. The implications of using the NPVaR instead of a deterministic objective are investigated in a case study on the design of an Hy-GCHP system in the heating-dominated environment of Montreal (Canada). Our results show that, over 1000 experiments, a design sized using the NPVaR has an average return on investment of $126,829 with a standard deviation of $18,499, while a design sized with a deterministic objective function yields $137,548 on average with a standard deviation of $33,150. Furthermore, the worst returns in the two cases are $35,229 and -$32,151, respectively. This shows that, although slightly less profitable on average, the NPVaR is the better objective function when the concern is avoiding losses rather than maximizing profit.

In that regard, since HVAC is usually considered a commodity rather than an investment, we believe that a more financially stable and predictable objective function is a welcome addition to the toolbox of engineers and professionals who deal with the design of expensive systems such as Hy-GCHP.
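The tail statistics themselves are simple to compute once an ensemble of simulated NPVs is available. A minimal sketch, assuming the Hy-GCHP simulator has already produced NPV samples for an ensemble of uncertain-parameter draws (here replaced by random placeholders):

```python
import numpy as np

def npv_at_risk(npv_samples, alpha=0.95):
    """Net present value-at-risk: the NPV the design beats with probability
    alpha, i.e. the (1 - alpha) quantile of the simulated NPV distribution."""
    return np.quantile(npv_samples, 1.0 - alpha)

def cvar(npv_samples, alpha=0.95):
    """Conditional value-at-risk: mean NPV over the worst (1 - alpha) tail."""
    var = npv_at_risk(npv_samples, alpha)
    return npv_samples[npv_samples <= var].mean()

# Placeholder for simulator output: NPV ($) of one candidate design over an
# ensemble of uncertain-parameter draws. A sizing loop would maximize the
# tail statistic over the design variables instead of the mean.
rng = np.random.default_rng(7)
npv_samples = rng.normal(130_000, 30_000, 500)
print(f"mean NPV:    ${npv_samples.mean():>10,.0f}")
print(f"NPVaR (95%): ${npv_at_risk(npv_samples):>10,.0f}")
print(f"CVaR (95%):  ${cvar(npv_samples):>10,.0f}")
```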


Author(s):  
Vinícius Ramos Rosa ◽  
Virgilio José Martins Ferreira Filho

This paper presents a general method for optimizing platform and manifold locations in offshore projects by maximizing their net present value. Finding the best locations for offshore facilities involves more than minimizing the total pipeline length. The distances between wells and the platform affect their productivity: the greater the distance, the lower the production. Each well has a particular response to pressure loss along the pipelines, owing to fluid and reservoir characteristics. Therefore, not only the costs related to distances must be taken into account, but also the revenues from oil production, which are a function of flow conditions in the reservoir, tubing, and pipelines. Moreover, locating a manifold also means choosing which wells will be linked to it, and every possible combination of wells leads to a different production rate. Again, petroleum engineering expertise is needed to optimize the project, working together with numerical techniques. A simple optimization method is proposed, and a small example model was built to test it.
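A toy version of the trade-off can be sketched as follows: revenue decays with well-to-platform distance (standing in for pressure loss), pipeline cost grows with it, and the platform location maximizing NPV is found numerically. The deliverability model and all coefficients are assumptions for illustration, not the paper's method, and the manifold and well-assignment part of the problem is omitted.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
wells = rng.uniform(0, 5000, (8, 2))          # hypothetical wellhead coordinates, m

def npv(platform_xy, q0=1000.0, k=5e-5, price=8.0, pipe_cost=1.0):
    """Toy NPV: revenue falls with well-to-platform distance (pressure loss),
    cost grows with total pipeline length. All coefficients are assumptions."""
    d = np.linalg.norm(wells - platform_xy, axis=1)
    production = q0 * np.exp(-k * d)          # the farther the well, the lower the rate
    return price * production.sum() - pipe_cost * d.sum()

res = minimize(lambda x: -npv(x), x0=wells.mean(axis=0), method="Nelder-Mead")
print(f"platform at {res.x.round(1)}, NPV {-res.fun:,.0f}")
```

Note that minimizing pipeline length alone would place the platform at the geometric median of the wells; including flow-dependent revenue shifts the optimum, which is the point the paper makes.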


Forests ◽  
2018 ◽  
Vol 9 (12) ◽  
pp. 750 ◽  
Author(s):  
Anssi Ahtikoski ◽  
Jouni Siipilehto ◽  
Hannu Salminen ◽  
Mika Lehtonen ◽  
Jari Hynynen

This study presents an attempt to quantify the effect of sample size on the financial outcome of stand-level optimization with individual-tree modeling. The initial stand structure was altered to reflect sparse, average, and dense Scots pine (Pinus sylvestris L.) stands. The stands had varying numbers of stems but identical weighted median diameters and stand basal areas. The hypothetical Weibull diameter distributions were solved with the parameter recovery method. The trees were systematically sampled with respect to tree basal area, corresponding to sample sizes of 10, 20, or 40 trees. We optimized the stand management with varying numbers of sample trees and varying stand structures and compared the optimal solutions with respect to the objective function value (maximum net present value) and the underlying management schedule. The results for pine stands in southern and central Finland indicated that the variation in the objective function value related to sample size was minor (<2.6%) for the sparse and average stand densities but exceeded 3% for the dense stands. In practice, the stand density is not always known, and an average density may have to be assumed for all stands in question. In this study, however, that assumption resulted in overestimates of the optimal rotation period and the financial performance. The overestimation of the net present value decreased with increasing sample size, from 22% with 10 sample trees to 14% with 40.
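A minimal sketch of the sampling step, assuming a two-parameter Weibull diameter distribution and systematic sampling at the midpoints of equal basal-area classes; the parameter values are placeholders rather than the recovered Finnish stand data:

```python
import numpy as np

def sample_trees(shape, scale, stems_per_ha, m):
    """Pick m representative trees at the midpoints of equal basal-area classes
    of a Weibull(shape, scale) diameter distribution (systematic sampling)."""
    d = np.linspace(0.1, 3.0 * scale, 4000)              # diameter grid, cm
    pdf = (shape / scale) * (d / scale) ** (shape - 1) * np.exp(-(d / scale) ** shape)
    ba = pdf * np.pi * (d / 200.0) ** 2                  # basal-area density per cm of diameter
    cdf = np.cumsum(ba) / ba.sum()                       # basal-area-weighted CDF
    q = (np.arange(m) + 0.5) / m                         # midpoints of m equal classes
    diam = np.interp(q, cdf, d)                          # sampled diameters, cm
    G = stems_per_ha * (ba * (d[1] - d[0])).sum()        # stand basal area, m2/ha
    freq = (G / m) / (np.pi * (diam / 200.0) ** 2)       # stems/ha each sample tree represents
    return diam, freq

for m in (10, 20, 40):                                   # the sample sizes compared in the study
    diam, freq = sample_trees(shape=3.0, scale=18.0, stems_per_ha=1200, m=m)
    print(m, diam.round(1))
```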


2019 ◽  
Vol 2019 ◽  
pp. 1-8 ◽  
Author(s):  
Zhenkai Zhang ◽  
Xinxing Liu ◽  
Bing Zhang ◽  
Hailin Li

In this paper, pattern synthesis with a time-modulated linear array is studied, and a novel strategy for harmonic beamforming in time-modulated arrays is proposed. The peak sidelobe level is used as the optimization objective function, and the switch-on time sequence of each element is selected as the optimization variable. An improved invasive weed optimization (IWO) algorithm is developed to determine the optimal parameters describing the pulse sequence used to modulate the excitation weights of the array elements. Representative results are reported and discussed to point out the potential and advantages of the proposed approach, which obtains lower objective function values.
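A schematic of the approach, under the simplifying assumption that at the centre frequency the effective excitation of each element equals its normalized switch-on duration; a basic (not the paper's improved) IWO loop then minimizes the peak sidelobe level. The population size, seed counts, and dispersion schedule are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(5)
N_el = 16                                     # elements, half-wavelength spacing
theta = np.linspace(0.0, np.pi, 721)
u = np.pi * np.cos(theta)                     # inter-element phase k*d*cos(theta)

def psll(tau):
    """Peak sidelobe level (dB) at the centre frequency, where the effective
    excitation of element k equals its normalized switch-on duration tau_k."""
    af = np.abs(tau @ np.exp(1j * np.outer(np.arange(N_el), u)))
    af /= af.max()
    m = int(np.argmax(af))
    lo, hi = m, m
    while lo > 0 and af[lo - 1] < af[lo]:     # walk down the main lobe
        lo -= 1
    while hi < af.size - 1 and af[hi + 1] < af[hi]:
        hi += 1
    return 20.0 * np.log10(np.concatenate([af[:lo], af[hi + 1:]]).max())

pop = rng.uniform(0.05, 1.0, (10, N_el))      # initial colony of weeds (on-time vectors)
for it in range(150):
    f = np.array([psll(w) for w in pop])
    rank = (f.max() - f) / (f.max() - f.min() + 1e-12)   # 1 = best (lowest PSLL)
    sigma = 0.3 * (1.0 - it / 150) ** 2 + 0.01           # shrinking spatial dispersion
    seeds = [np.clip(w + sigma * rng.standard_normal(N_el), 0.01, 1.0)
             for w, r in zip(pop, rank) for _ in range(1 + int(4 * r))]
    pop = np.vstack([pop, np.array(seeds)])
    pop = pop[np.argsort([psll(w) for w in pop])[:10]]   # competitive exclusion

print(f"best PSLL: {psll(pop[0]):.2f} dB")
```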


2020 ◽  
Vol 62 (3) ◽  
pp. 334-351
Author(s):  
K. G. SIRINANDA ◽  
M. BRAZIL ◽  
P. A. GROSSMAN ◽  
J. H. RUBINSTEIN ◽  
D. A. THOMAS

Abstract The objective of this paper is to demonstrate that the gradient-constrained discounted Steiner point algorithm (GCDSPA) described in an earlier paper by the authors is applicable to a class of real mine planning problems, by using the algorithm to design a part of the underground access in the Rubicon gold mine near Kalgoorlie in Western Australia. The algorithm is used to design a decline connecting two ore bodies so as to maximize the net present value (NPV) associated with the connector. The connector is to break out from the access infrastructure of one ore body and extend to the other ore body. There is a junction on the connector where it splits in two near the second ore body. The GCDSPA is used to obtain the optimal location of the junction and the corresponding NPV. The result demonstrates that the GCDSPA can be used to solve certain problems in mine planning for which currently available methods cannot provide optimal solutions.
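The GCDSPA itself is not reproduced here, but the structure of the problem can be sketched numerically: a junction location is sought that maximizes the discounted value of the two ore bodies, with segment lengths inflated to zigzags wherever the straight connection would exceed the maximum navigable gradient. The geometry, ore values, development rate, and discount rate below are all invented:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical geometry (metres): breakout point p0 and two ore-body access points.
p0 = np.array([0.0, 0.0, 0.0])
ore1 = np.array([150.0, 300.0, -80.0])
ore2 = np.array([-100.0, 350.0, -120.0])
value = np.array([8e6, 5e6])                  # assumed ore values ($)
speed = 1500.0                                # development rate, m/year
rate = 0.10                                   # discount rate per year
g_max = 1.0 / 7.0                             # gradient limit (1:7) on declines

def length(a, b):
    """Gradient-constrained length: a segment steeper than g_max must be
    developed as a zigzag at the maximum gradient."""
    h = np.linalg.norm(a[:2] - b[:2])
    v = abs(a[2] - b[2])
    return np.hypot(h, v) if v <= g_max * h else v * np.sqrt(1 + 1 / g_max**2)

def neg_npv(junction):
    l0 = length(p0, junction)                 # shared trunk from the breakout point
    npv = 0.0
    for ore, val in zip((ore1, ore2), value):
        t = (l0 + length(junction, ore)) / speed   # time until the ore is reached
        npv += val / (1 + rate) ** t               # discounting favours short access
    return -npv

res = minimize(neg_npv, x0=(p0 + ore1 + ore2) / 3, method="Nelder-Mead")
print(f"junction at {res.x.round(1)}, NPV ${-res.fun:,.0f}")
```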


2015 ◽  
Vol 2015 ◽  
pp. 1-15 ◽  
Author(s):  
Kai Moriguchi ◽  
Tatsuhito Ueki ◽  
Masashi Saito

We evaluated the potential of simulated annealing as a reliable method for optimizing thinning rates for single even-aged stands. Four types of yield models were used as benchmark models to examine the algorithm’s versatility. Thinning rate, which was constrained to 0–50% every 5 years at stand ages of 10–45 years, was optimized to maximize the net present value for one fixed rotation term (50 years). The best parameters for the simulated annealing were chosen from 113 patterns, using the mean of the net present value from 39 runs to ensure the best performance. We compared the solutions with those from coarse full enumeration to evaluate the method’s reliability and with 39 runs of random search to evaluate its efficiency. In contrast to random search, the best run of simulated annealing for each of the four yield models resulted in a better solution than coarse full enumeration. However, variations in the objective function for two yield models obtained with simulated annealing were significantly larger than those of random search. In conclusion, simulated annealing with optimized parameters is more efficient for optimizing thinning rates than random search. However, it is necessary to execute multiple runs to obtain reliable solutions.
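A minimal sketch of the setup, with a toy logistic growth model standing in for the four benchmark yield models: the decision vector holds one thinning rate per 5-year step, and simulated annealing perturbs one rate at a time under geometric cooling. All growth, price, and cooling parameters are placeholders:

```python
import math, random

random.seed(0)
ages = list(range(10, 50, 5))                 # thinning decisions at stand ages 10, 15, ..., 45
rate_opts = [i / 10 for i in range(6)]        # allowed removal rates: 0, 10, ..., 50%

def npv(plan, r=0.03, price=50.0):
    """Toy stand simulator: logistic volume growth, thinnings sell the removed
    volume, the residual stand is clearfelled at age 50. Not a real yield model."""
    vol, total = 20.0, 0.0
    for age in range(10, 51):
        vol += 0.08 * vol * (1.0 - vol / 600.0)          # assumed annual growth, m3/ha
        if age in ages:
            cut = plan[ages.index(age)] * vol
            vol -= cut
            total += price * cut / (1 + r) ** age
    return total + price * vol / (1 + r) ** 50           # final felling at rotation end

plan = [0.0] * len(ages)
best_plan, best_v = list(plan), npv(plan)
cur_v, T = best_v, 200.0
for it in range(20000):
    cand = list(plan)
    cand[random.randrange(len(ages))] = random.choice(rate_opts)  # perturb one rate
    v = npv(cand)
    if v > cur_v or random.random() < math.exp((v - cur_v) / T):  # Metropolis acceptance
        plan, cur_v = cand, v
        if v > best_v:
            best_plan, best_v = list(plan), v
    T *= 0.9995                                                   # geometric cooling

print(best_plan, round(best_v))
```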


2018 ◽  
Vol 67 ◽  
pp. 02017 ◽  
Author(s):  
Agustina A.Y. Simanjuntak ◽  
E. Kusrini

In industry, kaolin is widely employed as an additive in paper, rubber, and ceramics, among other uses, and it can be synthesized into zeolite. Zeolites have been hydrothermally synthesized using alumina and silica from deposits (kaolin) sampled from a region in Bangka-Belitung. Zeolite A is synthesized from kaolin through several process stages, such as drying, grinding, and sieving, prior to the hydrothermal process. The kaolin is then calcined into metakaolin, followed by the addition of NaOH solution, heating, filtration, and washing to obtain the synthesized product. This study examines how assessment models can be built and used for financial, technical, and marketing feasibility analysis of zeolite A synthesized from kaolin. A new optimization method for estimating the financing requirements of investment products is presented, as well as a new method for predicting the optimal year to sell the product. The analysis yields a positive net present value, an acceptable payback period, and an internal rate of return of 30.78%, which is higher than the interest rate set at 12%; the marketing aspects likewise show that the process is feasible.
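The three feasibility indicators used here are standard and easy to reproduce. A minimal sketch with hypothetical cash flows (the study's actual investment figures are not given in the abstract):

```python
def npv(rate, cashflows):
    """Discount a cash-flow series (year 0 first) at the given rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-6):
    """Internal rate of return by bisection (assumes one sign change in NPV)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def payback_period(cashflows):
    """First year in which cumulative (undiscounted) cash flow turns positive."""
    total = 0.0
    for t, cf in enumerate(cashflows):
        total += cf
        if total >= 0:
            return t
    return None

# Hypothetical cash flows for a zeolite A plant: investment, then annual net income.
flows = [-1_000_000] + [320_000] * 6
print(f"NPV @ 12%: {npv(0.12, flows):,.0f}")
print(f"IRR: {irr(flows):.2%}   payback: year {payback_period(flows)}")
```

A project is deemed feasible when the NPV at the hurdle rate is positive and the IRR exceeds that rate, which is exactly the comparison the study reports (30.78% against 12%).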


SPE Journal ◽  
2014 ◽  
Vol 20 (01) ◽  
pp. 155-168 ◽  
Author(s):  
R.M. Fonseca ◽  
O. Leeuwenburgh ◽  
P.M.J. Van den Hof ◽  
J.D. Jansen

Summary Ensemble optimization (referred to throughout the remainder of the paper as EnOpt) is a rapidly emerging method for reservoir-model-based production optimization. EnOpt uses an ensemble of controls to approximate the gradient of the objective function with respect to the controls. Current implementations of EnOpt use a Gaussian ensemble of control perturbations with a constant covariance matrix, and thus a constant perturbation size, during the entire optimization process. The covariance-matrix-adaptation evolutionary strategy is a gradient-free optimization method developed in the “machine learning” community, which also uses an ensemble of controls, but with a covariance matrix that is continually updated during the optimization process. It was shown to be an efficient method for several difficult but small-dimensional optimization problems and was recently applied in the petroleum industry for well location and production optimization. In this study, we investigate the scope to improve the computational efficiency of EnOpt through the use of covariance-matrix adaptation (referred to throughout the remainder of the paper as CMA-EnOpt). The resulting method is applied to the waterflooding optimization of a small multilayer test model and a modified version of the Brugge benchmark model. The controls used are inflow-control-valve settings at predefined time intervals for injectors and producers with undiscounted net present value as the objective function. We compare EnOpt and CMA-EnOpt starting from identical covariance matrices. For the small model, we achieve only slightly higher (0.7 to 1.8%) objective-function values and modest speedups with CMA-EnOpt compared with EnOpt. Significantly higher objective-function values (10%) are obtained for the modified Brugge model. The possibility to adapt the covariance matrix, and thus the perturbation size, during the optimization allows for the use of relatively large perturbations initially, for fast exploration of the control space, and small perturbations later, for more-precise gradients near the optimum. Moreover, the results demonstrate that a major benefit of CMA-EnOpt is its robustness with respect to the initial choice of the covariance matrix. A poor choice of the initial matrix can be detrimental to EnOpt, whereas the CMA-EnOpt performance is near-independent of the initial choice and produces higher objective-function values at no additional computational cost.
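In outline, EnOpt estimates a smoothed gradient from the cross-covariance between an ensemble of Gaussian control perturbations and the resulting objective values; covariance-matrix adaptation then reshapes the perturbation distribution as the optimization proceeds. The sketch below is a schematic of that idea with a toy objective standing in for the reservoir simulator and a deliberately crude covariance update, not the authors' exact CMA-EnOpt scheme:

```python
import numpy as np

rng = np.random.default_rng(11)

def objective(x):
    """Placeholder for the reservoir simulator: NPV of the control vector x."""
    return -np.sum((x - 0.7) ** 2)            # toy concave surrogate, optimum at x = 0.7

n, M = 20, 30                                 # number of controls (e.g. ICV settings), ensemble size
x = np.full(n, 0.3)                           # initial controls
C = 0.05 ** 2 * np.eye(n)                     # initial perturbation covariance

for it in range(100):
    dx = rng.multivariate_normal(np.zeros(n), C, size=M)
    J = np.array([objective(x + d) for d in dx])
    g = dx.T @ (J - J.mean()) / (M - 1)       # ensemble cross-covariance ~ smoothed gradient
    x = np.clip(x + 0.05 * g / (np.linalg.norm(g) + 1e-12), 0.0, 1.0)
    good = dx[J > J.mean()]                   # perturbations that beat the ensemble mean
    C = 0.9 * C + 0.1 * (good.T @ good) / max(len(good), 1)   # crude covariance adaptation

print(f"final objective: {objective(x):.4f}")
```

The adaptive covariance realizes the behaviour the summary describes: large perturbations early, for fast exploration of the control space, and smaller, reoriented perturbations later, for more precise gradients near the optimum.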

