Pulpwood green density prediction models and sampling-based calibration

Silva Fennica ◽  
2021 ◽  
Vol 55 (4) ◽  
Author(s):  
Jaakko Repola ◽  
Juha Heikkinen ◽  
Jari Lindblad

Pulpwood arriving at the mills is mainly measured by weighing. In the loading phase of forwarding and trucking, timber is weighed using scales mounted in the grapple loader. The measured weight of timber is converted into volume using a conversion factor defined as green density (kg m–3). At the mill, the green density factor is determined by sampling measurements, while in connection with weighing with grapple-mounted scales during transportation, fixed green density factors are used. In this study, we developed predictive regression models for the green density of pulpwood. The models were constructed separately by pulpwood assortment: pine (mainly Pinus sylvestris L.); spruce (mainly Picea abies (L.) Karst.); decayed spruce; birch (mainly Betula pubescens Ehrh. and Betula pendula Roth); and aspen (mainly Populus tremula L.). The study material was composed of sampling-based measurements at the mills in 2013–2019. The models were specified as linear mixed models with both fixed and random parameters. The fixed part produced the expected value of green density as a function of delivery week, storage time, and meteorological conditions during storage. The random effects allowed calibration of the model by utilizing previous sampling-based weight measurements. Model validation showed that the model predictions faithfully reproduced the observed seasonal variation in green density and were more reliable than those obtained with current practices. Even the uncalibrated (fixed) predictions had lower relative root mean squared prediction errors than those obtained with current practices.
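As a minimal sketch of the conversion and calibration logic described above (the variance components, sample values, and function names are illustrative assumptions, not the paper's fitted quantities):

```python
import numpy as np

def calibrated_green_density(fixed_pred, prev_obs, prev_fixed_pred,
                             var_random=400.0, var_resid=1600.0):
    """BLUP-style calibration of a mixed-model green density prediction.

    The random (e.g. supplier-level) effect is estimated by shrinking the
    mean residual of previous sampling measurements toward zero.
    All variance components here are illustrative placeholders.
    """
    n = len(prev_obs)
    if n == 0:
        return fixed_pred  # uncalibrated (fixed) prediction
    resid = np.asarray(prev_obs, dtype=float) - np.asarray(prev_fixed_pred, dtype=float)
    shrink = var_random / (var_random + var_resid / n)
    return fixed_pred + shrink * resid.mean()

def weight_to_volume(weight_kg, green_density):
    """Convert measured green weight (kg) to solid volume (m3)."""
    return weight_kg / green_density

# Example: fixed prediction 850 kg/m3, three previous samples running high
rho = calibrated_green_density(850.0, [880, 870, 890], [850, 850, 850])
vol = weight_to_volume(25000.0, rho)
```

With no previous sampling data the function falls back to the uncalibrated fixed prediction, mirroring the fixed/calibrated distinction in the abstract.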

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Sylvia Kalli ◽  
Carla Araya-Cloutier ◽  
Jos Hageman ◽  
Jean-Paul Vincken

Abstract High resistance towards traditional antibiotics has urged the development of new, natural therapeutics against methicillin-resistant Staphylococcus aureus (MRSA). Prenylated (iso)flavonoids, present mainly in the Fabaceae, can serve as promising candidates. Herein, the anti-MRSA properties of 23 prenylated (iso)flavonoids were assessed in vitro. The di-prenylated (iso)flavonoids glabrol (flavanone) and 6,8-diprenyl genistein (isoflavone), together with the mono-prenylated 4′-O-methyl glabridin (isoflavan), were the most active anti-MRSA compounds (minimum inhibitory concentration (MIC) ≤ 10 µg/mL, 30 µM). The in-house activity data were complemented with literature data to yield an extended, curated dataset of 67 molecules for the development of robust in-silico prediction models. A QSAR model was obtained with a good fit (R²adj 0.61), low average prediction errors, and good predictive power (Q²) for both the training set (4% and Q²LOO 0.57, respectively) and the test set (5% and Q²test 0.75, respectively). Furthermore, the model predicted the activity of an external validation set well (5% prediction errors on average), as well as the level of activity (low, moderate, high) of prenylated (iso)flavonoids against other Gram-positive bacteria. For the first time, the importance of formal charge, besides hydrophobic volume and hydrogen bonding, in anti-MRSA activity was highlighted, suggesting potentially different modes of action of the different prenylated (iso)flavonoids.
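The Q² statistics quoted above are cross-validated analogues of R². A small sketch of the leave-one-out variant, with a toy ordinary-least-squares stand-in for the QSAR model (all names and data here are illustrative, not from the paper):

```python
import numpy as np

def q2_loo(X, y, fit_predict):
    """Leave-one-out cross-validated Q2 for a QSAR-style regression model.

    fit_predict(X_train, y_train, x_test) returns the predicted activity
    for one held-out molecule. Q2 = 1 - PRESS / TSS, i.e. R2 computed on
    held-out rather than fitted values.
    """
    y = np.asarray(y, dtype=float)
    press = 0.0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        y_hat = fit_predict(X[mask], y[mask], X[i])
        press += (y[i] - y_hat) ** 2
    tss = ((y - y.mean()) ** 2).sum()
    return 1.0 - press / tss

# Toy stand-in model: ordinary least squares with an intercept
def ols_fit_predict(Xtr, ytr, x):
    Xa = np.column_stack([np.ones(len(Xtr)), Xtr])
    beta, *_ = np.linalg.lstsq(Xa, ytr, rcond=None)
    return beta[0] + beta[1:] @ np.atleast_1d(x)

X = np.arange(10.0).reshape(-1, 1)
y = 2.0 * X[:, 0] + 1.0           # perfectly linear toy "activity"
q2 = q2_loo(X, y, ols_fit_predict)  # -> 1.0 for noise-free linear data
```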


2017 ◽  
Vol 28 (1) ◽  
pp. 309-320 ◽  
Author(s):  
Scott Powers ◽  
Valerie McGuire ◽  
Leslie Bernstein ◽  
Alison J Canchola ◽  
Alice S Whittemore

Personal predictive models for disease development play important roles in chronic disease prevention. The performance of these models is evaluated by applying them to the baseline covariates of participants in external cohort studies, with model predictions compared to subjects' subsequent disease incidence. However, the covariate distribution among participants in a validation cohort may differ from that of the population in which the model will be used. Since estimates of predictive model performance depend on the distribution of covariates among the subjects to which the model is applied, such differences can cause misleading estimates of model performance in the target population. We propose a method for addressing this problem by weighting the cohort subjects to make their covariate distribution better match that of the target population. Simulations show that the method provides accurate estimates of model performance in the target population, while unweighted estimates may not. We illustrate the method by applying it to evaluate an ovarian cancer prediction model targeted to US women, using cohort data from participants in the California Teachers Study. The method is implemented in the open-source R package RMAP (Risk Model Assessment Package), available at http://stanford.edu/~ggong/rmap/.
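For a single discrete covariate, reweighting of this kind reduces to scaling each subject by the ratio of target-to-cohort stratum proportions. A minimal sketch (stratum labels, error values, and target proportions are invented for illustration; the RMAP package itself handles far more general settings):

```python
import numpy as np

def reweighted_rate(values, strata, target_dist):
    """Estimate a mean performance measure in a target population by
    weighting validation-cohort subjects so their (discrete) covariate
    distribution matches the target's.

    values      : per-subject quantity (e.g. squared prediction error)
    strata      : covariate stratum label per subject
    target_dist : {stratum: target-population proportion}
    """
    values = np.asarray(values, dtype=float)
    strata = np.asarray(strata)
    weights = np.empty(len(values))
    for s, p_target in target_dist.items():
        mask = strata == s
        p_cohort = mask.mean()            # stratum share in the cohort
        weights[mask] = p_target / p_cohort
    return (weights * values).sum() / weights.sum()

# Cohort over-represents stratum "A" (75%) relative to the target (50%)
vals = [1.0, 1.0, 1.0, 3.0]               # per-subject errors: A, A, A, B
strata = ["A", "A", "A", "B"]
est = reweighted_rate(vals, strata, {"A": 0.5, "B": 0.5})  # -> 2.0
```

The unweighted mean here is 1.5; reweighting to the 50/50 target distribution moves the estimate to 2.0, illustrating how cohort imbalance biases the naive estimate.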


2021 ◽  
Author(s):  
Shaomei Yang ◽  
Haoyue Wu

Abstract PM2.5 has a significant negative impact on human health and atmospheric quality, and accurate prediction of its concentration is necessary. PM2.5 concentration is influenced by a combination of meteorological conditions and air-quality factors, so identifying the significant factors is essential in the prediction process. To address this issue, this paper proposes a quantile regression (QR) model based on the least absolute shrinkage and selection operator (LASSO), combined with kernel density estimation (KDE), for probabilistic density prediction of PM2.5 concentrations. The model uses LASSO regression to select the influential factors; the quantiles of daily PM2.5 concentrations obtained with the QR model are then fed into the KDE model to obtain the probability density curves of PM2.5 concentrations. An empirical analysis is performed with data sets from Beijing and Jinan, China, and the accuracy of the model is evaluated using the mean absolute percentage error (MAPE) and the root mean square error (RMSE). The simulation results reveal that the LASSO-QR-KDE model has higher accuracy than traditional prediction models and currently used research models. The model provides a novel and effective tool for policy makers predicting PM2.5 concentrations.
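The final KDE step can be sketched as follows: the QR-predicted quantiles for one day are smoothed into a density curve with Gaussian kernels. The quantile values and the Silverman-style bandwidth below are illustrative choices, not the paper's:

```python
import numpy as np

def kde_from_quantiles(quantiles, bandwidth=None):
    """Turn QR-predicted quantiles of daily PM2.5 into a probability
    density curve via Gaussian kernel density estimation.

    quantiles : predicted concentrations at several quantile levels
    bandwidth : Silverman's rule of thumb if not given (illustrative choice)
    """
    q = np.asarray(quantiles, dtype=float)
    if bandwidth is None:
        bandwidth = 1.06 * q.std(ddof=1) * len(q) ** (-1 / 5)
    def density(x):
        x = np.atleast_1d(x)[:, None]
        k = np.exp(-0.5 * ((x - q) / bandwidth) ** 2)
        return (k / (bandwidth * np.sqrt(2 * np.pi))).mean(axis=1)
    return density

# Nine predicted quantiles (tau = 0.1 ... 0.9) for one day, in ug/m3
pdf = kde_from_quantiles([35, 40, 44, 48, 52, 56, 61, 67, 76])
grid = np.linspace(0.0, 150.0, 1501)
mass = float(pdf(grid).sum() * (grid[1] - grid[0]))  # integrates to ~1
```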


Biostatistics ◽  
2020 ◽  
Author(s):  
Chuan Hong ◽  
Yan Wang ◽  
Tianxi Cai

Summary Divide-and-conquer (DAC) is a commonly used strategy to overcome the challenges of extraordinarily large data: the dataset is first broken into a series of data blocks, and results from the individual blocks are then combined to obtain a final estimate. Various DAC algorithms have been proposed to fit sparse predictive regression models in the $L_1$ regularization setting. However, many existing DAC algorithms remain computationally intensive when the sample size and the number of candidate predictors are both large. In addition, no existing DAC procedure provides inference for quantifying the accuracy of risk prediction models. In this article, we propose a screening and one-step linearization infused DAC (SOLID) algorithm to fit sparse logistic regression to massive datasets, by integrating the DAC strategy with a screening step and sequences of linearization. This enables us to maximize the likelihood with only the selected covariates and to perform penalized estimation via a fast approximation to the likelihood. To assess the accuracy of a predictive regression model, we develop a modified cross-validation (MCV) procedure that utilizes side products of SOLID, substantially reducing the computational burden. Compared with existing DAC methods, the MCV procedure is the first to make inference on accuracy. Extensive simulation studies suggest that the proposed SOLID and MCV procedures substantially outperform existing methods with respect to computational speed and achieve statistical efficiency similar to that of the full-sample estimator. We also demonstrate that the proposed inference procedure provides valid interval estimators. We apply the proposed SOLID procedure to develop and validate a classification model for disease diagnosis using narrative clinical notes, based on electronic medical record data from Partners HealthCare.
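The basic DAC idea — fit each block separately, then combine — can be sketched with plain one-shot averaging of per-block logistic fits. This is a simplified stand-in for SOLID's screening and linearization steps, on simulated data:

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Plain Newton-Raphson logistic regression on one data block."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        H = X.T @ (W[:, None] * X)          # observed information
        beta += np.linalg.solve(H, X.T @ (y - p))
    return beta

def dac_logistic(X, y, n_blocks=4):
    """Divide-and-conquer: fit each block separately, average estimates.
    (A toy combiner; SOLID adds screening and one-step linearization.)"""
    blocks = np.array_split(np.arange(len(y)), n_blocks)
    return np.mean([fit_logistic(X[b], y[b]) for b in blocks], axis=0)

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(4000), rng.normal(size=(4000, 2))])
true_beta = np.array([-0.5, 1.0, -1.0])
y = (rng.random(4000) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)
beta_hat = dac_logistic(X, y)               # close to true_beta
```

Averaging per-block estimates recovers the full-sample fit to first order, which is why DAC schemes can match full-sample statistical efficiency at a fraction of the memory cost.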


Energies ◽  
2019 ◽  
Vol 12 (15) ◽  
pp. 3029 ◽  
Author(s):  
Shuang Feng ◽  
Chaofan Wei ◽  
Jiaxing Lei

In this paper, an improved model predictive control (MPC) scheme is proposed for the matrix converter (MC). First, the conventional MPC, which adopts separately discretized prediction models, is discussed. It is shown that the conventional MPC ignores the input–output interaction in every sampling period; additional prediction errors therefore arise, resulting in more current harmonics. Second, the principle of the improved MPC is presented. With the interaction considered, the integral state-space equation of the whole MC system is constructed and discretized to obtain a precise model. Eigenvalue analysis shows that the proposed prediction model has the same eigenvalues as the continuous model and is thus more accurate than the conventional one in describing the MC's behavior within each sampling period. Finally, experimental results under various working conditions prove that the proposed approach increases the control accuracy and reduces the harmonic distortions, which in turn allows smaller filter components.
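The eigenvalue argument can be illustrated numerically: an exact (integral) discretization preserves exp(λT) of the continuous model, while a separate forward-Euler discretization does not. The LC input-filter values below are made up for illustration, not taken from the paper:

```python
import numpy as np

def expm_via_eig(A, T):
    """Matrix exponential exp(A*T) for a diagonalizable A (sketch only)."""
    w, V = np.linalg.eig(A)
    return (V @ np.diag(np.exp(w * T)) @ np.linalg.inv(V)).real

# Illustrative undamped LC input-filter dynamics
L_f, C_f, T = 1e-3, 1e-5, 1e-4
A = np.array([[0.0, -1.0 / L_f],
              [1.0 / C_f, 0.0]])

Ad_exact = expm_via_eig(A, T)      # integral (exact) discretization
Ad_euler = np.eye(2) + A * T       # separately discretized (forward Euler)

lam_cont = np.linalg.eigvals(A)
lam_exact = np.linalg.eigvals(Ad_exact)
lam_euler = np.linalg.eigvals(Ad_euler)
# Exact discrete eigenvalues equal exp(lambda*T) and stay on the unit
# circle for this lossless filter; Euler's drift outside it, which is the
# kind of extra prediction error the integral model removes.
```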


Polymers ◽  
2019 ◽  
Vol 11 (3) ◽  
pp. 484 ◽  
Author(s):  
Heeseok Song ◽  
Byoung Kim ◽  
Yong Kim ◽  
Youn-Sang Bae ◽  
Jooheon Kim ◽  
...  

In this study, thermally conductive composite films were fabricated using an anisotropic boron nitride (BN) filler and a hybrid filler system in which BN was mixed with spherical aluminum nitride (AlN) or aluminum oxide (Al2O3) particles in a polyimide matrix. The hybrid system yielded a decrease in the through-plane thermal conductivity but an increase in the in-plane thermal conductivity of the BN composite, resulting from the horizontal alignment and anisotropy of BN. The in-plane thermal conductivity was treated theoretically using the Lewis–Nielsen and modified Lewis–Nielsen prediction models. The single-filler BN system exhibited a relatively good fit with the theoretical model. Moreover, a hybrid-system model was developed based on two population approaches, additive and multiplicative, representing the first such implementation for two different ceramic conducting fillers. The multiplicative approach overestimated the thermal conductivity, whereas the additive approach showed better agreement in predicting the thermal conductivity of the binary-filler system.
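The Lewis–Nielsen relation referenced above has a compact closed form. A sketch with common textbook parameter choices (the shape factor, packing fraction, and material conductivities are illustrative assumptions, not the paper's fitted values):

```python
def lewis_nielsen(k_m, k_f, phi, A=1.5, phi_m=0.637):
    """Lewis-Nielsen effective thermal conductivity of a filled polymer.

    k_m, k_f : matrix and filler conductivities (W/m K)
    phi      : filler volume fraction
    A        : shape factor (1.5 for spheres)
    phi_m    : maximum packing fraction (0.637 for random close packing)
    """
    B = (k_f / k_m - 1.0) / (k_f / k_m + A)
    psi = 1.0 + ((1.0 - phi_m) / phi_m ** 2) * phi
    return k_m * (1.0 + A * B * phi) / (1.0 - B * psi * phi)

# Polyimide-like matrix (~0.2 W/m K) with 30 vol% AlN-like filler (~170 W/m K)
k_eff = lewis_nielsen(0.2, 170.0, 0.30)
```

At zero filler loading the expression collapses to the matrix conductivity, and it rises steeply as the loading approaches the packing limit, which is the qualitative behavior the composite-film data are fitted against.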


2008 ◽  
Vol 35 (7) ◽  
pp. 699-707 ◽  
Author(s):  
Halil Ceylan ◽  
Kasthurirangan Gopalakrishnan ◽  
Sunghwan Kim

The dynamic modulus (|E*|) is one of the primary hot-mix asphalt (HMA) material property inputs at all three hierarchical levels in the new Mechanistic–Empirical Pavement Design Guide (MEPDG). The existing |E*| prediction models were developed mainly from regression analysis of an |E*| database obtained from laboratory testing over many years and, in general, lack the accuracy necessary for reliable predictions. This paper describes the development of a simplified HMA |E*| prediction model employing artificial neural network (ANN) methodology. The ANN-based |E*| prediction models were developed using the latest comprehensive |E*| database available to researchers (from National Cooperative Highway Research Program Report 547), containing 7400 data points from 346 HMA mixtures. The ANN model predictions were compared with those of the Hirsch |E*| prediction model, which has a logical structure and requires relatively few input parameters compared with the existing |E*| models. The ANN-based |E*| predictions showed significantly higher accuracy than the Hirsch model predictions. The sensitivity of the ANN model predictions to the input variables was also examined and discussed.


2002 ◽  
Vol 14 (6) ◽  
pp. 1347-1369 ◽  
Author(s):  
Kenji Doya ◽  
Kazuyuki Samejima ◽  
Ken-ichi Katagiri ◽  
Mitsuo Kawato

We propose a modular reinforcement learning architecture for nonlinear, nonstationary control tasks, which we call multiple model-based reinforcement learning (MMRL). The basic idea is to decompose a complex task into multiple domains in space and time, based on the predictability of the environmental dynamics. The system is composed of multiple modules, each consisting of a state prediction model and a reinforcement learning controller. The “responsibility signal,” given by the softmax function of the prediction errors, is used to weight the outputs of the multiple modules, as well as to gate the learning of the prediction models and the reinforcement learning controllers. We formulate MMRL for both the discrete-time, finite-state case and the continuous-time, continuous-state case. The performance of MMRL is demonstrated in the discrete case on a nonstationary hunting task in a grid world, and in the continuous case on a nonlinear, nonstationary control task of swinging up a pendulum with variable physical parameters.
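The responsibility signal can be sketched directly: a softmax over negative scaled squared prediction errors, so the module that best predicts the current dynamics dominates the gating. The error values and scale sigma below are illustrative:

```python
import numpy as np

def responsibilities(pred_errors, sigma=1.0):
    """Responsibility signal: softmax of negative squared prediction errors.
    Modules that predict the current dynamics well get weight near 1;
    the same weights gate both module outputs and module learning."""
    e = np.asarray(pred_errors, dtype=float)
    s = -0.5 * (e / sigma) ** 2
    s -= s.max()                      # shift for numerical stability
    lam = np.exp(s)
    return lam / lam.sum()

# Module 0 predicts well (small error), so it dominates the gating
lam = responsibilities([0.1, 2.0, 3.0])
```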


2017 ◽  
Vol 43 (3) ◽  
pp. 74-81 ◽  
Author(s):  
Bartosz Szeląg ◽  
Lidia Bartkiewicz ◽  
Jan Studziński ◽  
Krzysztof Barbusiński

Abstract The aim of the study was to evaluate the possibility of applying different data-mining methods to model the inflow of sewage into a municipal sewage treatment plant. Prediction models were elaborated using support vector machines (SVM), random forests (RF), k-nearest neighbours (k-NN), and kernel regression (K). The data consisted of time series of daily rainfall, water-level measurements in the clarified-sewage recipient, and the wastewater inflow into the Rzeszow city plant. The results indicate that, for the models with one input delayed by 1 day, the best results were obtained with the k-NN method and the worst with the K method. For the models with two input variables and one explanatory variable, the smallest errors were obtained when the model inputs were sewage inflow and rainfall data delayed by 1 day; here the best fit was provided by the RF method and the worst by the K method. For the models with three inputs and two explanatory variables, the best results were reported for the SVM method and the worst for the K method. In most of the modelling runs, the smallest prediction errors were obtained with the SVM method and the largest with the K method.
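A two-input model with inputs delayed by 1 day, of the kind compared above, can be sketched with a toy k-NN regressor (the inflow and rainfall series, the distance metric, and k are illustrative choices; the study used dedicated SVM/RF/k-NN/kernel-regression tools):

```python
import numpy as np

def lagged_knn_forecast(series, rain, k=3):
    """k-NN one-day-ahead inflow forecast whose inputs are yesterday's
    inflow and rainfall, mirroring the two-input lagged models."""
    X = np.column_stack([series[:-1], rain[:-1]])   # day t-1 features
    y = np.asarray(series[1:], dtype=float)         # day t target
    def predict(x_inflow, x_rain):
        d = np.hypot(X[:, 0] - x_inflow, X[:, 1] - x_rain)
        return y[np.argsort(d)[:k]].mean()          # mean of k nearest days
    return predict

# Toy daily inflow (thousand m3) and rainfall (mm) series
inflow = np.array([10., 12., 11., 15., 14., 13., 16., 12.])
rain = np.array([0., 5., 0., 8., 2., 0., 6., 1.])
forecast = lagged_knn_forecast(inflow, rain)
y_hat = forecast(12.0, 1.0)
```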


2012 ◽  
Vol 9 (10) ◽  
pp. 11199-11225 ◽  
Author(s):  
P. Pokhrel ◽  
D. E. Robertson ◽  
Q. J. Wang

Abstract. Hydrological post-processors refer here to statistical models that are applied to hydrological model predictions to further reduce prediction errors and to quantify the remaining uncertainty. For streamflow predictions, post-processors are generally applied at daily or sub-daily time scales. For many applications, such as seasonal streamflow forecasting and water resources assessment, monthly streamflow volumes are of primary interest. While it is possible to aggregate post-processed daily or sub-daily predictions to monthly time scales, the monthly volumes so produced may not achieve the lowest possible errors and may not have reliable uncertainty distributions. Post-processing directly at the monthly time scale is likely to be more effective. In this study, we investigate the use of a Bayesian joint probability (BJP) modelling approach to directly post-process model predictions of monthly streamflow volumes. We apply the BJP post-processor to 18 catchments located in eastern Australia and demonstrate its effectiveness in reducing prediction errors and quantifying prediction uncertainty.
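A heavily simplified stand-in for such a post-processor — regression in log space with a retained residual spread — can be sketched as follows. The actual BJP approach uses Box–Cox transforms and a fitted bivariate normal; all data and parameters here are simulated for illustration:

```python
import numpy as np

def fit_postprocessor(pred, obs):
    """Toy monthly post-processor: regress log observed volumes on log
    model predictions, keeping the residual s.d. so that the remaining
    uncertainty can be quantified alongside the corrected prediction."""
    x, y = np.log(pred), np.log(obs)
    a, b = np.polyfit(x, y, 1)                 # slope, intercept
    resid_sd = (y - (a * x + b)).std(ddof=2)
    return a, b, resid_sd

def postprocess(pred, params):
    """Corrected median and a 90% uncertainty interval (log-normal)."""
    a, b, s = params
    mu = a * np.log(pred) + b
    z = 1.645                                   # 5th/95th percentile z-score
    return np.exp(mu), np.exp(mu - z * s), np.exp(mu + z * s)

# Simulated raw monthly-volume predictions with a systematic bias
rng = np.random.default_rng(1)
raw = rng.lognormal(3.0, 0.5, 120)
true = np.exp(0.9 * np.log(raw) + 0.4 + rng.normal(0.0, 0.2, 120))
params = fit_postprocessor(raw, true)
med, lo, hi = postprocess(30.0, params)
```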

