Comparative analysis of the production technologies of logging, sawmill, pulp and paper, and veneer and plywood industries in Ontario

2011 · Vol 41 (3) · pp. 621–631
Author(s): Chander Shahi, Thakur Prasad Upadhyay, Reino Pulkki, Mathew Leitch

Technological growth in production and efficient utilization of input factors are the two biggest contributors to total factor productivity (TFP). The TFP of the four major forest industries of Ontario (logging, pulp and paper, sawmill, and veneer and plywood) is compared by analyzing their production structures using duality theory in production and costs. The study uses annual data on output and four inputs (labour, capital, energy, and materials) from 1967 to 2003. Different restrictions on the translog cost function are applied to each industry to determine the cost function that best describes its technology, which is then used to estimate Morishima elasticities of substitution, own-price and cross-price elasticities, the rate of technological change, and TFP. The production structure of the sawmill and veneer and plywood industries is found to be linearly homogeneous and homothetic, while that of the logging and pulp and paper industries is non-homothetic. Further, Hicks-neutral technological change is rejected for all four industries, indicating that the production structure in each is biased in favour of certain inputs and against others. This suggests that policies intended to improve the efficiency of a given industry should focus on that industry's input-saving factors, thereby improving its competitive position.
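
For reference, a common four-input translog cost specification of the kind used in such studies (a standard textbook form, not necessarily the exact one estimated here) writes the log of total cost C as a quadratic in the logs of output y, the input prices p_i, and a time trend t:

```latex
\ln C = \alpha_0 + \alpha_y \ln y + \tfrac{1}{2}\gamma_{yy}(\ln y)^2
      + \sum_{i} \alpha_i \ln p_i
      + \tfrac{1}{2}\sum_{i}\sum_{j} \gamma_{ij} \ln p_i \ln p_j
      + \sum_{i} \gamma_{iy} \ln p_i \ln y
      + \beta_t t + \tfrac{1}{2}\beta_{tt} t^2
      + \sum_{i} \beta_{it} \ln p_i \, t ,
\qquad i, j \in \{L, K, E, M\}.
```

The restrictions tested above map onto parameter constraints: homotheticity imposes gamma_iy = 0 for all i, linear homogeneity in output additionally imposes alpha_y = 1 and gamma_yy = 0, and Hicks-neutral technological change imposes beta_it = 0 for all i.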

1971 · Vol 1 (3) · pp. 159–166
Author(s): G. H. Manning, G. Thornburn

The pulp and paper industry is generally considered the most technologically progressive of the forest industries. A study employing Solow's method indicated a rise in the index of technological change of 50% between 1940 and 1960; this compares with a 547% increase for the chemical industry. Derivation of the capital production function for the pulp and paper industry shows that all increases in productivity from 1940 to 1960 were due to changes in technology. There is also some indication that optimal plant size has been reached.
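
Solow's method computes the index of technological change as the residual output growth not explained by growth in the inputs. In outline (the standard statement of the method, not a reproduction of the authors' exact derivation), with output Y = A(t) F(K, L):

```latex
\frac{\dot{A}}{A} \;=\; \frac{\dot{Y}}{Y} \;-\; s_K \frac{\dot{K}}{K} \;-\; s_L \frac{\dot{L}}{L},
```

where s_K and s_L are the income shares of capital and labour; cumulating the residual over 1940–1960 yields the index of technological change whose 50% rise is reported above.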


1985 · Vol 15 (6) · pp. 1116–1124
Author(s): Felice Martinello

This paper reports estimates of factor substitution, technical change, and returns to scale for three Canadian industries (pulp and paper, sawmills and shingle mills, and logging) using annual data from 1963 to 1982. Each industry's input-demand functions slope down and are inelastic. Factor substitution is not rejected in any of the industries, but it is not large. Sawmills and shingle mills show moderate increasing returns to scale, while logging and pulp and paper show very large increasing returns to scale. The technology of the industries is nonhomothetic, and cost savings from changes in scale are made mostly on the capital and labour inputs. Technical change is nonneutral, capital using, and labour saving in all industries. Negative technical change is estimated for sawmills and shingle mills and for pulp and paper, so that all of the productivity gains made over the sample period are associated with changes in scale rather than the passage of time. Technical change in all industries is sufficiently labour saving that labour becomes more productive over time, holding everything else constant.
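
Two quantities reported above have compact cost-function definitions worth recalling (standard definitions, not the paper's notation): returns to scale are the inverse of the output elasticity of cost, and the bias of technical change toward or against an input is the drift of its cost share over time:

```latex
\mathrm{RTS} = \left(\frac{\partial \ln C}{\partial \ln y}\right)^{-1},
\qquad
B_i = \frac{\partial s_i}{\partial t},
```

so RTS > 1 indicates increasing returns to scale, and B_i > 0 (respectively B_i < 0) indicates input-i-using (respectively input-i-saving) technical change, e.g. capital using and labour saving as estimated here.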


2021 · Vol 13 (14) · pp. 7706
Author(s): Tova Jarnerud, Andrey V. Karasev, Chuan Wang, Frida Bäck, Pär G. Jönsson

A six-day industrial trial using hydrochar as part of the carbon source for hot metal production was performed in a production blast furnace (BF). The hydrochar came from two types of feedstock, namely an organic mixed biosludge generated from pulp and paper production and an organic green waste residue. These sludges and residues were upgraded to hydrochar pellets using hydrothermal carbonization (HTC). The hydrochar pellets were then pressed into briquettes together with commonly used briquetting material (in-plant fines such as pellet fines, scrap, and dust generated at the steel plant), and the briquettes were top-charged into the blast furnace. In total, 418 tons of hydrochar briquettes were produced. The aim of the trial was to investigate the stability and productivity of the blast furnace while charging these experimental briquettes. The results show that briquettes containing hydrochar from pulp and paper industry waste and green waste can be partially used for charging in blast furnaces together with conventional briquettes. Most of the technological parameters of the BF process during the trial, such as the hot metal production rate (<1.5% difference between reference days and trial days), the amount of dust, the fuel rate and amount of injected coal, the amount of slag, the FeO content of the slag, and the %C, %S, and %P in the hot metal, were very similar to those in the reference periods (two days before and two days after the trial), when no experimental charge materials were used. Thus, hydrochar derived from various types of organic residues was shown to be usable for metallurgical applications. Although only small amounts of hydrochar were used in this trial campaign, these positive results support further, more in-depth investigations in this direction.


2021 · Vol 11 (2) · pp. 850
Author(s): Dokkyun Yi, Sangmin Ji, Jieun Park

Artificial intelligence (AI) is achieved by optimizing a cost function constructed from learning data. Adjusting the parameters of the cost function is the AI learning process (referred to simply as AI learning). If AI learning is performed well, the cost function reaches its global minimum. For learning to be complete, the parameters should stop changing once the cost function attains the global minimum. One useful optimization method is the momentum method; however, the momentum method has difficulty stopping the parameter updates once the cost function reaches the global minimum (the non-stop problem). The proposed method is based on the momentum method. To solve the non-stop problem, the value of the cost function is incorporated into the update rule: as learning proceeds, the method reduces the size of the parameter updates in proportion to the value of the cost function. We verify the method through a proof of convergence and through numerical experiments against existing methods to confirm that learning works well.
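
As a rough illustration of the idea, the sketch below scales a momentum update by the current cost value, so updates shrink as the cost approaches its (assumed zero) global minimum. This is one plausible reading of the mechanism described above, not the authors' exact algorithm; the function names and the toy objective are made up for the example.

```python
import numpy as np

def momentum_with_cost_damping(theta, cost, grad, lr=0.01, beta=0.9, steps=1000):
    """Momentum-style gradient descent in which each parameter update is
    scaled by the current cost value, so updates shrink as the cost
    approaches zero.

    Hedged sketch of one reading of the abstract, not the authors' exact
    algorithm; assumes the cost is non-negative and zero at the global minimum.
    """
    velocity = np.zeros_like(theta)
    for _ in range(steps):
        c = cost(theta)                               # current cost value
        velocity = beta * velocity - lr * grad(theta) # usual momentum update
        theta = theta + c * velocity                  # step damped by the cost value
    return theta

# Toy usage on f(x) = x^2, whose global minimum value is 0 at x = 0.
def cost(x):
    return float(np.sum(x ** 2))

def grad(x):
    return 2.0 * x

theta0 = np.array([5.0])
print(momentum_with_cost_damping(theta0, cost, grad))
```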


2020 · Vol 18 (02) · pp. 2050006
Author(s): Alexsandro Oliveira Alexandrino, Carla Negri Lintzmayer, Zanoni Dias

One of the main problems in Computational Biology is to find the evolutionary distance among species. In most approaches, this distance involves only rearrangements, which are mutations that alter large pieces of the species' genome. When genomes are represented as permutations, the problem of transforming one genome into another is equivalent to the problem of Sorting Permutations by Rearrangement Operations. The traditional approach is to consider that any rearrangement has the same probability of happening, so the goal is to find a minimum-length sequence of operations that sorts the permutation. However, studies have shown that some rearrangements are more likely to happen than others, so a weighted approach is more realistic. In a weighted approach, the goal is to find a sequence that sorts the permutation such that the cost of that sequence is minimized. This work introduces a new type of cost function, which is related to the amount of fragmentation caused by a rearrangement. We present results on the lower and upper bounds for the fragmentation-weighted problems and on the relation between the unweighted and the fragmentation-weighted approaches. Our main results are 2-approximation algorithms for five versions of this problem involving reversals and transpositions. We also give bounds on the diameters concerning these problems and provide an improved approximation factor for simple permutations considering transpositions.
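
The sketch below shows the two rearrangement operations on a permutation together with a simple fragmentation-style weighting in which a reversal cuts the permutation in two places and a transposition in three. The function names and the weights are illustrative of the idea of a fragmentation-based cost; the paper's exact cost function may differ.

```python
def reversal(pi, i, j):
    """Apply a reversal rho(i, j): reverse the segment pi[i..j] (0-indexed, inclusive)."""
    return pi[:i] + pi[i:j + 1][::-1] + pi[j + 1:]

def transposition(pi, i, j, k):
    """Apply a transposition tau(i, j, k): swap the adjacent blocks
    pi[i..j-1] and pi[j..k-1] (0-indexed, k exclusive)."""
    return pi[:i] + pi[j:k] + pi[i:j] + pi[k:]

# Illustrative fragmentation weights: a reversal cuts the permutation at 2
# points, a transposition at 3 (hypothetical weighting for this example).
FRAGMENTATION_COST = {"reversal": 2, "transposition": 3}

pi = [4, 2, 1, 3, 5]
pi = reversal(pi, 0, 2)          # -> [1, 2, 4, 3, 5], cost 2
pi = transposition(pi, 2, 3, 4)  # -> [1, 2, 3, 4, 5], cost 3
total = FRAGMENTATION_COST["reversal"] + FRAGMENTATION_COST["transposition"]
print(pi, "total fragmentation-weighted cost:", total)
```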


2005 · Vol 133 (6) · pp. 1710–1726
Author(s): Milija Zupanski

A new ensemble-based data assimilation method, named the maximum likelihood ensemble filter (MLEF), is presented. The analysis solution maximizes the likelihood of the posterior probability distribution, obtained by minimization of a cost function that depends on a general nonlinear observation operator. The MLEF belongs to the class of deterministic ensemble filters, since no perturbed observations are employed. As in variational and ensemble data assimilation methods, the cost function is derived using a Gaussian probability density function framework. Like other ensemble data assimilation algorithms, the MLEF produces an estimate of the analysis uncertainty (e.g., analysis error covariance). In addition to the common use of ensembles in the calculation of the forecast error covariance, the ensembles in the MLEF are exploited to efficiently calculate the Hessian preconditioning and the gradient of the cost function. Because of the superior Hessian preconditioning, two to three iterative minimization steps are sufficient. The MLEF method is well suited for use with highly nonlinear observation operators, at a small additional computational cost for the minimization. The consistent treatment of nonlinear observation operators through optimization is an advantage of the MLEF over other ensemble data assimilation algorithms. The cost of the MLEF is comparable to that of existing ensemble Kalman filter algorithms. The method is directly applicable to most complex forecast models and observation operators. In this paper, the MLEF method is applied to data assimilation with the one-dimensional Korteweg–de Vries–Burgers equation. The tested observation operator is quadratic, in order to make the assimilation problem more challenging. The results illustrate the stability of the MLEF performance, as well as the benefit of the cost function minimization. The improvement is noted in terms of the rms error, as well as the analysis error covariance. The statistics of innovation vectors (observation minus forecast) also indicate a stable performance of the MLEF algorithm. Additional experiments suggest the amplified benefit of targeted observations in ensemble data assimilation.
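
For orientation, the cost function minimized in this kind of ensemble/variational framework has the familiar Gaussian form (written here in generic notation rather than the paper's ensemble-space control variable):

```latex
J(\mathbf{x}) = \tfrac{1}{2}\,(\mathbf{x} - \mathbf{x}_b)^{\mathsf T}\,\mathbf{P}_f^{-1}\,(\mathbf{x} - \mathbf{x}_b)
 \;+\; \tfrac{1}{2}\,\bigl[\mathbf{y} - H(\mathbf{x})\bigr]^{\mathsf T}\,\mathbf{R}^{-1}\,\bigl[\mathbf{y} - H(\mathbf{x})\bigr],
```

where x_b is the forecast (background) state, P_f the ensemble-derived forecast error covariance, y the observations, H the possibly nonlinear observation operator, and R the observation error covariance; the MLEF minimizes J iteratively in the subspace spanned by the ensemble, which is where the Hessian preconditioning mentioned above enters.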


2017 · Vol 6 (3) · pp. 385–395
Author(s): Richard Cebula, James E. Payne, Donnie Horner, Robert Boylan

Purpose: The purpose of this paper is to examine the impact of labor market freedom on state-level cost of living differentials in the USA using cross-sectional data for 2016, after allowing for the impacts of economic and quality of life factors.
Design/methodology/approach: The study uses two-stage least squares estimation, controlling for factors contributing to cost of living differences across states.
Findings: The results reveal that an increase in labor market freedom reduces the overall cost of living.
Research limitations/implications: The study can be extended using panel data and alternative measures of labor market freedom.
Practical implications: In general, the finding that less intrusive government and greater labor freedom are associated with a reduced cost of living should not be surprising. This is because less government intrusion and greater labor freedom both inherently allow markets to be more efficient in the rationalization of and interplay with forces of supply and demand.
Social implications: The findings of this and future related studies could prove very useful to policy makers and entrepreneurs, as well as small business owners and public corporations of all sizes, particularly those considering either location in, relocation to, or expansion into other markets within the USA. Furthermore, the potential benefits of the National Right-to-Work Law currently under consideration in Congress could add cost of living reductions to the debate.
Originality/value: The authors extend the literature on cost of living differentials by investigating whether higher levels of state-level labor market freedom act to reduce states' cost of living, using the most recent annual data available (2016). That labor freedom has a systemic efficiency impact on the state-level cost of living is a significant finding. In our opinion, it is likely that labor market freedom increases the efficiency of labor market transactions in the production and distribution of goods and services, and acts to reduce the cost of living in states. In addition, unlike previous related studies, the authors investigate the impact not only of overall labor market freedom on the state-level cost of living, but also of the three sub-indices of labor market freedom, as identified and measured by Stansel et al. (2014, 2015), state by state.
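
A minimal sketch of the two-stage least-squares estimator the study relies on is given below, using simulated data; the variable roles (a cost-of-living index as the outcome, a labor-freedom measure as the potentially endogenous regressor, a generic instrument) are stand-ins for illustration, not the paper's actual dataset or instruments.

```python
import numpy as np

def two_stage_least_squares(y, X, Z):
    """Minimal 2SLS: project the regressors X onto the instrument space Z
    (first stage), then run OLS of y on the fitted regressors (second stage).
    Both X and Z are assumed to include a constant column."""
    first_stage = np.linalg.lstsq(Z, X, rcond=None)[0]
    X_hat = Z @ first_stage
    beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]
    return beta

# Simulated toy data with hypothetical variable names.
rng = np.random.default_rng(0)
n = 50
z = rng.normal(size=n)                               # instrument
labor_freedom = 0.8 * z + rng.normal(size=n)         # endogenous regressor
cost_of_living = 1.0 - 0.5 * labor_freedom + rng.normal(size=n)
X = np.column_stack([np.ones(n), labor_freedom])
Z = np.column_stack([np.ones(n), z])
print(two_stage_least_squares(cost_of_living, X, Z))  # [intercept, slope]
```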


2000 · Vol 25 (2) · pp. 209–227
Author(s): Keith R. McLaren, Peter D. Rossitter, Alan A. Powell

2021 · pp. 107754632110324
Author(s): Berk Altıner, Bilal Erol, Akın Delibaşı

Adaptive optics systems are powerful tools for mitigating the effects of wavefront aberrations. In this article, the optimal actuator placement problem is addressed to improve the disturbance attenuation capability of adaptive optics systems, since actuator placement is directly related to system performance. For this purpose, a linear-quadratic cost function is chosen, so that optimized actuator layouts can be specialized according to the type of wavefront aberration. The placement problem is then cast as a convex optimization problem, and the cost function is formulated for the disturbance attenuation case. The success of the presented method is demonstrated by simulation results.
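
The linear-quadratic cost referred to above is, in its standard continuous-time form (generic notation, not necessarily the exact formulation in the article):

```latex
J = \int_{0}^{\infty} \bigl( x(t)^{\mathsf T} Q\, x(t) + u(t)^{\mathsf T} R\, u(t) \bigr)\, dt ,
```

where x collects the wavefront/mirror states, u the actuator commands, Q (positive semidefinite) penalizes residual aberration, and R (positive definite) penalizes actuation effort; a candidate actuator layout enters through the input matrix B of the plant dynamics x' = Ax + Bu.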

