Bayesian Computational Methods for Sampling from the Posterior Distribution of a Bivariate Survival Model, Based on AMH Copula in the Presence of Right-Censored Data

Entropy ◽  
2018 ◽  
Vol 20 (9) ◽  
pp. 642 ◽  
Author(s):  
Erlandson Saraiva ◽  
Adriano Suzuki ◽  
Luis Milan

In this paper, we study the performance of Bayesian computational methods for estimating the parameters of a bivariate survival model based on the Ali–Mikhail–Haq (AMH) copula with Weibull marginal distributions. The estimation procedure is based on Markov chain Monte Carlo (MCMC) algorithms. We present three versions of the Metropolis–Hastings algorithm: Independent Metropolis–Hastings (IMH), Random Walk Metropolis (RWM) and Metropolis–Hastings with a natural candidate-generating density (MH). Since constructing a good candidate-generating density for IMH and RWM may be difficult, we also describe how to update a parameter of interest using the slice sampling (SS) method. A simulation study was carried out to compare the performances of IMH, RWM and SS, using the sample root mean square error as the performance indicator. The simulation results show that the SS algorithm is an effective alternative to IMH and RWM for simulating values from the posterior distribution, especially for small sample sizes. We also applied these methods to a real data set.
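The slice sampling update the abstract highlights can be illustrated with a short generic sketch. The following is a standard univariate stepping-out/shrinkage slice sampler applied to a standard normal log-density purely for illustration; the function names and the target are assumptions, not the paper's actual AMH-copula posterior.

```python
import math
import random

def slice_sample(log_f, x0, w, n_iter, seed=42):
    """Univariate slice sampler with stepping-out and shrinkage.

    log_f : log of the (unnormalized) target density
    w     : initial interval width for stepping out
    """
    rng = random.Random(seed)
    x = x0
    draws = []
    for _ in range(n_iter):
        # 1. Draw an auxiliary level uniformly under the density at x.
        log_y = log_f(x) + math.log(rng.random())
        # 2. Step out an interval (left, right) containing the slice.
        left = x - w * rng.random()
        right = left + w
        while log_f(left) > log_y:
            left -= w
        while log_f(right) > log_y:
            right += w
        # 3. Sample uniformly within the interval, shrinking on rejection.
        while True:
            cand = left + (right - left) * rng.random()
            if log_f(cand) > log_y:
                x = cand
                break
            if cand < x:
                left = cand
            else:
                right = cand
        draws.append(x)
    return draws

# Illustration on a standard normal target.
draws = slice_sample(lambda v: -0.5 * v * v, x0=0.0, w=2.0, n_iter=20000)
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
```

Unlike IMH and RWM, no proposal density or step size has to be tuned carefully; the interval width w only affects efficiency, not correctness.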

1998 ◽  
Vol 37 (12) ◽  
pp. 335-342 ◽  
Author(s):  
Jacek Czeczot

This paper deals with the minimal-cost control of a modified activated sludge process with a varying level of wastewater in the aerator tank. A model-based adaptive controller of the effluent substrate concentration is proposed, based on the substrate consumption rate and manipulating the effluent flow rate leaving the aerator tank; its performance is compared with a conventional PI controller and with open-loop behavior. Since the substrate consumption rate is not measurable on-line, an estimation procedure based on the least-squares method is suggested. Finally, it is shown that coupling the DO concentration controller with the adaptive controller of the effluent substrate concentration allows the process to be operated at minimum cost (low consumption of aeration energy).
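On-line least-squares estimation of an unmeasured quantity such as the substrate consumption rate is typically done recursively. The sketch below is the standard textbook recursive least-squares (RLS) update for a scalar parameter, shown for illustration only; the regressor, noise level, and "true" rate are invented and not taken from the paper.

```python
import random

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least-squares step for the scalar model y = phi*theta + noise.

    theta : current parameter estimate
    P     : current (scalar) covariance
    lam   : forgetting factor (1.0 = no forgetting)
    """
    gain = P * phi / (lam + phi * P * phi)
    theta = theta + gain * (y - phi * theta)   # correct by the prediction error
    P = (P - gain * phi * P) / lam             # shrink the covariance
    return theta, P

# Illustration: recover a constant consumption rate of 2.5 from noisy readings.
rng = random.Random(1)
theta, P = 0.0, 1000.0
for _ in range(500):
    y = 2.5 + rng.gauss(0.0, 0.1)   # noisy measurement, regressor phi = 1
    theta, P = rls_update(theta, P, 1.0, y)
```

A forgetting factor lam slightly below 1 would let the estimate track a slowly time-varying rate, which matters for an adaptive controller.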


2019 ◽  
Vol 11 (1) ◽  
pp. 156-173
Author(s):  
Spenser Robinson ◽  
A.J. Singh

This paper shows that Leadership in Energy and Environmental Design (LEED) certified hospitality properties exhibit higher expenses and earn lower net operating income (NOI) than non-certified buildings. ENERGY STAR certified properties demonstrate lower overall expenses than non-certified buildings, with statistically neutral NOI effects. Using a custom sample of all green buildings and their competitive set as of 2013, provided by Smith Travel Research (STR), the paper documents potential reasons for this result, including increased operational expenses and possible confusion in the data between certified and registered LEED projects, supplemented by qualitative input from a small survey of five industry professionals. The paper provides one of the only analyses of operating efficiencies in LEED and ENERGY STAR hospitality properties.


2021 ◽  
Author(s):  
Junjie Shi ◽  
Jiang Bian ◽  
Jakob Richter ◽  
Kuan-Hsun Chen ◽  
Jörg Rahnenführer ◽  
...  

The predictive performance of a machine learning model depends strongly on the corresponding hyper-parameter setting, so hyper-parameter tuning is often indispensable. Normally, such tuning requires the machine learning model to be trained and evaluated on centralized data to obtain a performance estimate. However, in a distributed machine learning scenario, it is not always possible to collect all the data from all nodes due to privacy concerns or storage limitations. Moreover, if data has to be transferred through low-bandwidth connections, the time available for tuning is reduced. Model-Based Optimization (MBO) is a state-of-the-art method for tuning hyper-parameters, but its application to distributed machine learning models and federated learning has received little attention. This work proposes a framework, MODES, that allows MBO to be deployed on resource-constrained distributed embedded systems. Each node trains an individual model based on its local data, and the goal is to optimize the combined prediction accuracy. The framework offers two optimization modes: (1) MODES-B considers the whole ensemble as a single black box and optimizes the hyper-parameters of each individual model jointly, and (2) MODES-I considers all models as clones of the same black box, which allows the optimization to be efficiently parallelized in a distributed setting. We evaluate MODES by conducting experiments on the optimization of the hyper-parameters of a random forest and a multi-layer perceptron. The experimental results demonstrate that, with improvements in mean accuracy (MODES-B), run-time efficiency (MODES-I), and statistical stability for both modes, MODES outperforms the baseline, i.e., tuning with MBO on each node individually with its local sub-data set.
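The MBO loop underlying MODES can be shown in miniature: evaluate the objective at a few points, fit a cheap surrogate model to those evaluations, then evaluate next wherever the surrogate predicts the best value. The sketch below uses a quadratic polynomial surrogate over a single hyper-parameter purely for illustration; MODES and standard MBO use more capable surrogates (e.g. Gaussian processes) and acquisition functions, and none of the names below come from the paper.

```python
import numpy as np

def mbo_minimize(objective, bounds, n_init=5, n_iter=15, seed=0):
    """Toy model-based optimization over one hyper-parameter.

    Repeatedly fits a quadratic surrogate to all evaluations so far and
    evaluates the objective at the surrogate's minimizer on a grid.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    xs = list(rng.uniform(lo, hi, n_init))   # initial random design
    ys = [objective(x) for x in xs]
    grid = np.linspace(lo, hi, 201)
    for _ in range(n_iter):
        coeffs = np.polyfit(xs, ys, 2)                      # fit the surrogate
        cand = float(grid[np.argmin(np.polyval(coeffs, grid))])
        xs.append(cand)                                     # evaluate the most promising point
        ys.append(objective(cand))
    best = int(np.argmin(ys))
    return xs[best], ys[best]

# Illustration: a "validation loss" minimized at hyper-parameter value 0.3.
x_best, y_best = mbo_minimize(lambda x: (x - 0.3) ** 2, bounds=(0.0, 1.0))
```

The point of the surrogate is sample efficiency: each real objective evaluation (here a cheap lambda, in practice a full model training run) is expensive, so the surrogate decides where to spend the next one.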


Author(s):  
Benmei Liu ◽  
Isaac Dompreh ◽  
Anne M Hartman

Background: The workplace and home are sources of exposure to secondhand smoke (SHS), a serious health hazard for nonsmoking adults and children. Smoke-free workplace policies and home rules protect nonsmoking individuals from SHS and help individuals who smoke to quit. However, estimated population coverages of smoke-free workplace policies and home rules are not typically available at small geographic levels such as counties. Model-based small area estimation techniques are needed to produce such estimates.
Methods: Self-reported smoke-free workplace policy and home rule data came from the 2014-2015 Tobacco Use Supplement to the Current Population Survey. County-level design-based estimates of the two measures were computed and linked to relevant county-level covariates obtained from external sources. Hierarchical Bayesian models were then built and implemented through Markov chain Monte Carlo methods.
Results: Model-based estimates of smoke-free workplace policies and home rules were produced for 3,134 (out of 3,143) U.S. counties. In 2014-2015, nearly 80% of U.S. adult workers were covered by smoke-free workplace policies, and more than 85% of U.S. adults were covered by smoke-free home rules. We found large variations within and between states in the coverage of both measures.
Conclusions: The small-area modeling approach efficiently reduced the variability attributable to small sample sizes in the direct estimates for counties with data, and predicted estimates for counties without data, by borrowing strength from covariates and from counties with similar profiles. The county-level modeled estimates can serve as a useful resource for tobacco control research and intervention.
Implications: Detailed county- and state-level estimates of smoke-free workplace policies and home rules can help identify coverage disparities and the differential impact of smoke-free legislation and related social norms. Moreover, this estimation framework can be used to model other tobacco control variables and applied elsewhere, e.g., to other behavioral, policy, or health-related topics.
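The "borrowing strength" idea behind small-area estimation can be illustrated with the simplest composite estimator: each county's direct estimate is shrunk toward the overall mean, with more shrinkage when its own sampling variance is large relative to the between-county variance. This is a toy James–Stein-style sketch with invented numbers, not the paper's hierarchical Bayesian model.

```python
def shrinkage_estimates(direct, var_direct, var_between):
    """Composite estimates: pull each area's direct estimate toward the
    overall mean, weighting by precision.

    direct      : design-based (direct) estimates per area
    var_direct  : sampling variance of each direct estimate
    var_between : variance of the true values across areas
    """
    overall = sum(direct) / len(direct)
    out = []
    for d, v in zip(direct, var_direct):
        w = var_between / (var_between + v)  # weight on the direct estimate
        out.append(w * d + (1 - w) * overall)
    return out

# Three hypothetical counties: noisy, very precise, and moderately precise.
est = shrinkage_estimates(direct=[0.70, 0.90, 0.80],
                          var_direct=[0.04, 0.0004, 0.01],
                          var_between=0.002)
```

The noisy county (large var_direct) is pulled almost entirely to the overall mean, while the precisely measured county keeps nearly its direct estimate; the hierarchical model in the paper achieves the same effect while also exploiting covariates.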


2021 ◽  
Vol 11 (15) ◽  
pp. 7104
Author(s):  
Xu Yang ◽  
Ziyi Huan ◽  
Yisong Zhai ◽  
Ting Lin

Nowadays, personalized recommendation based on knowledge graphs has become a research hot spot due to its good recommendation performance. In this paper, we study personalized recommendation based on knowledge graphs. First, we study a construction method for knowledge graphs and build a movie knowledge graph, using the Neo4j graph database to store and visualize the movie data. Next, we study TransE, the classical translation model in knowledge graph representation learning, and improve the algorithm through a cross-training method that uses information from the neighboring feature structures of the entities in the knowledge graph; we also improve the negative sampling process of the TransE algorithm. Experimental results show that the improved TransE model vectorizes entities and relations more accurately. Finally, we construct recommendation models by combining knowledge graphs with ranking learning and with a neural network: the Bayesian personalized recommendation model based on knowledge graphs (KG-BPR) and the neural network recommendation model based on knowledge graphs (KG-NN). The semantic information of entities and relations in the knowledge graph is embedded into vector space using the improved TransE method, and the item entity vectors containing external knowledge are integrated into the BPR model and the neural network, respectively, compensating for the lack of knowledge information about the item itself. Experimental analysis on the MovieLens-1M data set shows that the two proposed recommendation models effectively improve the accuracy, recall, F1 value and MAP value of the recommendations.
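The translation idea behind TransE — learn embeddings so that h + r ≈ t for true triples (head, relation, tail) — fits in a few lines. The sketch below is the basic TransE objective with uniform negative sampling and a margin hinge loss; it omits the paper's cross-training and negative-sampling improvements, skips the usual entity-norm constraint, and runs on an invented toy graph.

```python
import numpy as np

def transe_train(triples, n_ent, n_rel, dim=8, margin=1.0, lr=0.02,
                 epochs=1000, seed=0):
    """Minimal TransE: push ||h + r - t||^2 below ||h + r - t'||^2 - margin
    for corrupted tails t', using plain SGD on a hinge loss."""
    rng = np.random.default_rng(seed)
    E = rng.normal(scale=0.1, size=(n_ent, dim))   # entity embeddings
    R = rng.normal(scale=0.1, size=(n_rel, dim))   # relation embeddings
    for _ in range(epochs):
        for h, r, t in triples:
            t_neg = int(rng.integers(n_ent))        # uniform negative sampling
            d_pos = E[h] + R[r] - E[t]
            d_neg = E[h] + R[r] - E[t_neg]
            if margin + d_pos @ d_pos - d_neg @ d_neg > 0:  # margin violated
                grad = 2.0 * (d_pos - d_neg)
                E[h] -= lr * grad
                R[r] -= lr * grad
                E[t] += lr * 2.0 * d_pos
                E[t_neg] -= lr * 2.0 * d_neg
    return E, R

# Toy graph: relation 0 links entity 0 -> 1 and entity 2 -> 3.
E, R = transe_train([(0, 0, 1), (2, 0, 3)], n_ent=4, n_rel=1)
query = E[0] + R[0]
dists = [float((query - E[e]) @ (query - E[e])) for e in range(4)]
```

After training, translating entity 0 by relation 0 should land closest to its true tail, which is the property a recommender exploits when ranking candidate items.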


Kybernetes ◽  
2014 ◽  
Vol 43 (5) ◽  
pp. 672-685 ◽  
Author(s):  
Zheng-Xin Wang

Purpose – The purpose of this paper is to propose an economic cybernetics model based on the grey differential equation GM(1,N) for China's high-tech industries and to provide support for policy making by high-tech industry management departments. Design/methodology/approach – Based on the principle of the grey differential equation GM(1,N), grey differential equations for five high-tech industries in China are established, using net fixed assets, labor quantity and patent application quantity as cybernetics variables. After discretization and first-order subtraction reduction of the simultaneous equations of the five grey models, a linear cybernetics model is obtained. The structure parameters of the cybernetics system have explicit economic significance and can be identified by the least-squares principle. Finally, actual data from 2004-2010 are used to empirically analyze the high-tech industrial system in China. Findings – The cybernetics system for China's high-tech industries is stable, observable and controllable. On the whole, China's high-tech industries show higher output coefficients for patent application quantity than for net fixed assets and labor quantity, suggesting that China's high-tech industry development depends mainly on technological innovation rather than on capital or labor inputs. The total output value of China's high-tech industries is expected to grow at an average annual rate of 15 percent in 2011-2015, with contributions from pharmaceuticals; aircraft and spacecraft; electronic and telecommunication equipment; computers and office equipment; and medical equipment and meters of 21, 16, 13, 10 and 28 percent, respectively. In addition, pharmaceuticals, as well as medical equipment and meters, show significantly rising shares of the gross output of China's high-tech industries, while electronic and telecommunication equipment plus computers and office equipment show a clearly decreasing share. The share of aircraft and spacecraft output value is basically stable. Practical implications – The empirical results can help the related management departments formulate reasonable industrial policies for the sustained and stable development of China's high-tech industries. Originality/value – Based on the grey differential equation GM(1,N), this research puts forward an economic cybernetics model for China's high-tech industries. The model is applicable to economic systems with small sample data sets.
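The grey-model machinery behind GM(1,N) is easiest to see in its single-variable special case GM(1,1): accumulate the series, fit the development coefficient and grey input by least squares on the background values, and forecast from the whitened exponential solution. The sketch below is the textbook GM(1,1), not the paper's multi-variable cybernetics model, and the demo series is invented.

```python
import numpy as np

def gm11_fit_predict(x0, n_ahead=1):
    """Fit the basic grey model GM(1,1) and forecast n_ahead further values."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                        # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])             # background values
    # Least squares for x0(k) = -a*z1(k) + b.
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    # Whitened solution of the AGO series, then difference back.
    n = len(x0)
    k = np.arange(n + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])
    return x0_hat[n:]

# Illustration: a series growing at roughly 15% per period.
series = [10.0 * 1.15 ** k for k in range(7)]
forecast = float(gm11_fit_predict(series, n_ahead=1)[0])
```

The appeal noted in the abstract — applicability to small samples — comes from the AGO step: accumulation smooths the raw series so an exponential law can be identified from just a handful of observations.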


Author(s):  
Silvina Botta ◽  
Eduardo R. Secchi ◽  
Mônica M.C. Muelbert ◽  
Daniel Danilewicz ◽  
Maria Fernanda Negri ◽  
...  

Age and length data from 291 franciscana dolphins (Pontoporia blainvillei) incidentally captured on the coast of Rio Grande do Sul State (RS), southern Brazil, were used to fit Gompertz and Von Bertalanffy growth curves. A small sample of franciscanas (N = 35) from Buenos Aires Province (BA), Argentina, was used to check for apparent growth differences between the populations. Male and female samples from both areas were primarily (78–85%) <4 years of age. The Von Bertalanffy growth model fitted to a data set excluding animals <1 year of age provided the best fit. Based on this model, dolphins from the RS population reached asymptotic lengths of 136.0 cm for males and 158.4 cm for females. No remarkable differences were observed in the growth trajectories of males and females between the RS and BA populations.
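Fitting a Von Bertalanffy curve can be sketched with a simple trick: for any trial pair of growth rate k and age-at-zero-length t0, the best asymptotic length L∞ is a linear least-squares scale factor with a closed form, so a small grid search suffices. This is a generic sketch, not the paper's estimation procedure; the demo data are noiseless values generated from parameters chosen near the reported RS male asymptote (136.0 cm).

```python
import numpy as np

def von_bertalanffy(t, l_inf, k, t0):
    """Length at age t: approaches l_inf at rate k, zero length at age t0."""
    return l_inf * (1.0 - np.exp(-k * (t - t0)))

def fit_vb(ages, lengths, k_grid, t0_grid):
    """Grid-search k and t0; for each pair, the optimal l_inf is the
    closed-form least-squares scale of the basis 1 - exp(-k*(t - t0))."""
    best_sse, best_params = np.inf, None
    for k in k_grid:
        for t0 in t0_grid:
            basis = 1.0 - np.exp(-k * (ages - t0))
            l_inf = float(basis @ lengths) / float(basis @ basis)
            resid = lengths - l_inf * basis
            sse = float(resid @ resid)
            if sse < best_sse:
                best_sse, best_params = sse, (l_inf, k, t0)
    return best_params

# Illustration: recover known parameters from noiseless lengths at ages 0-9.
ages = np.arange(10, dtype=float)
lengths = von_bertalanffy(ages, l_inf=136.0, k=0.8, t0=-1.0)
l_hat, k_hat, t0_hat = fit_vb(ages, lengths,
                              k_grid=np.arange(0.5, 1.21, 0.1),
                              t0_grid=np.arange(-1.5, 0.01, 0.25))
```

With real age-length data, a nonlinear least-squares routine would replace the grid, but the profiled L∞ step is the same reason the asymptotic length is the most stably estimated parameter.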

