Calculation of Dynamic Viscosity in Concentrated Cementitious Suspensions: Probabilistic Approximation and Bayesian Analysis

Materials, 2021, Vol. 14 (8), pp. 1971
Author(s): Ángel De La Rosa, Gonzalo Ruiz, Enrique Castillo, Rodrigo Moreno

We present a new approach to the Krieger–Dougherty equation from a probabilistic point of view. This equation allows the calculation of dynamic viscosity in suspensions of various types, such as cement paste and self-compacting mortar/concrete. The physical meaning of the parameters that enter the equation (the maximum packing fraction of particles and the intrinsic viscosity), together with the random nature of these systems, makes the application of Bayesian analysis desirable. This analysis transforms parametric-deterministic models into parametric-probabilistic models, which improves and enriches their results. The initial limitations of Bayesian methods, due to their complexity, have been overcome by numerical methods (Markov Chain Monte Carlo and Gibbs sampling) and the development of specific software (OpenBUGS). Here we use it to compute the probability density functions of the parameters that enter the Krieger–Dougherty equation, applied to the calculation of viscosity in several cement pastes, self-compacting mortars, and self-compacting concretes. The dynamic viscosity calculations made with the Bayesian distributions are significantly better than those made with the theoretical values.
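To illustrate the kind of Bayesian treatment the abstract describes, the following is a minimal Python sketch that fits the Krieger–Dougherty equation, eta = eta_medium * (1 - phi/phi_max)^(-[eta]*phi_max), with a plain random-walk Metropolis sampler instead of OpenBUGS; the synthetic data, priors, and proposal widths are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def kd_viscosity(phi, phi_max, eta_intr, eta_medium=1.0):
    """Krieger-Dougherty viscosity: eta_medium * (1 - phi/phi_max)**(-eta_intr*phi_max)."""
    return eta_medium * (1.0 - phi / phi_max) ** (-eta_intr * phi_max)

# Synthetic "measurements" (illustrative only): solid volume fractions and noisy viscosities.
phi = np.array([0.30, 0.35, 0.40, 0.45, 0.50])
eta_obs = kd_viscosity(phi, phi_max=0.63, eta_intr=2.5) * rng.lognormal(0.0, 0.05, phi.size)

def log_posterior(theta):
    phi_max, eta_intr, sigma = theta
    # Flat priors over plausible physical ranges; reject anything outside them.
    if not (phi.max() < phi_max < 0.74 and 1.0 < eta_intr < 10.0 and sigma > 0):
        return -np.inf
    resid = np.log(eta_obs) - np.log(kd_viscosity(phi, phi_max, eta_intr))
    return -0.5 * np.sum(resid**2) / sigma**2 - phi.size * np.log(sigma)

# Random-walk Metropolis: the simplest MCMC stand-in for Gibbs sampling in OpenBUGS.
theta = np.array([0.60, 3.0, 0.1])
logp = log_posterior(theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0, [0.01, 0.1, 0.01])
    logp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < logp_prop - logp:
        theta, logp = prop, logp_prop
    samples.append(theta.copy())

post = np.array(samples[5000:])  # discard burn-in
print("posterior mean (phi_max, [eta]):", post[:, :2].mean(axis=0))
```

The posterior samples give full probability density functions for the parameters, so downstream viscosity predictions carry their uncertainty instead of a single deterministic value.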

Author(s): Paulo Afonso, Victor J. Jiménez

One measure of an organization's performance is the cost of its products, as this may indicate the level of efficiency and contribute to defining the firm's business strategy. Nevertheless, companies face an increasingly fast-changing environment in which variability in processes, products, technology, and prices, among other variables, affects the performance of the organization. In such a changing environment, product and service costs may themselves change over time, and deterministic costing models might be inappropriate. Thus, this chapter proposes a model for calculating costs that considers the variability of endogenous and exogenous cost variables. It combines the logic of a two-stage costing model with Monte Carlo simulation. The proposed model may allow, to some extent, the prediction of the risk associated with cost variability and support the steps needed to manage that risk, whether from the point of view of process rationalization or of cost management.
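A minimal sketch of the two-stage costing logic wrapped in Monte Carlo simulation, as described above; the resources, activities, cost drivers, distributions, and the 150-unit risk threshold are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000  # Monte Carlo trials

# Stage 1: resources -> activities. Resource costs vary (exogenous variability).
labour = rng.normal(50_000, 5_000, N)              # hypothetical yearly labour cost
energy = rng.lognormal(np.log(20_000), 0.2, N)     # hypothetical yearly energy cost

# Allocation of resources to two activities (illustrative fixed shares).
machining = 0.6 * labour + 0.8 * energy
assembly  = 0.4 * labour + 0.2 * energy

# Stage 2: activities -> products via cost drivers (endogenous variability).
machine_hours  = rng.triangular(900, 1000, 1200, N)   # driver volumes per year
assembly_hours = rng.triangular(450, 500, 600, N)

rate_machining = machining / machine_hours
rate_assembly  = assembly / assembly_hours

# Unit cost of a product that consumes 2 machine hours and 1 assembly hour.
unit_cost = 2.0 * rate_machining + 1.0 * rate_assembly

print(f"mean unit cost: {unit_cost.mean():.2f}")
print(f"95% interval:   {np.percentile(unit_cost, [2.5, 97.5]).round(2)}")
print(f"P(cost > 150):  {(unit_cost > 150).mean():.3f}")   # a simple risk measure
```

The simulated cost distribution, rather than a single point estimate, is what allows the risk statements the chapter aims at.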


1998, Vol. 10 (3), pp. 731-747
Author(s): Volker Tresp, Reimar Hofmann

We derive solutions for the problem of missing and noisy data in nonlinear time-series prediction from a probabilistic point of view. We discuss different approximations to the solutions—in particular, approximations that require either stochastic simulation or the substitution of a single estimate for the missing data. We show experimentally that commonly used heuristics can lead to suboptimal solutions. We show how error bars for the predictions can be derived and how our results can be applied to K-step prediction. We verify our solutions using two chaotic time series and the sunspot data set. In particular, we show that for K-step prediction, stochastic simulation is superior to simply iterating the predictor.
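A minimal sketch of the comparison the abstract draws for K-step prediction: iterating a point predictor versus stochastic simulation of many noisy trajectories. The toy nonlinear dynamics and noise level are illustrative assumptions, not the chaotic series or sunspot data used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    """Toy nonlinear dynamics standing in for a learned one-step predictor."""
    return np.sin(2.0 * x)

sigma = 0.3        # process-noise standard deviation
x0, K = 0.5, 5     # current state and prediction horizon

# (a) Naive K-step prediction: iterate the point predictor, ignoring noise.
x = x0
for _ in range(K):
    x = f(x)
point_pred = x

# (b) Stochastic simulation: propagate many noisy trajectories and average.
n_paths = 10_000
xs = np.full(n_paths, x0)
for _ in range(K):
    xs = f(xs) + rng.normal(0.0, sigma, n_paths)
mc_mean, mc_std = xs.mean(), xs.std()

print(f"iterated point prediction: {point_pred:+.3f}")
print(f"Monte Carlo mean:          {mc_mean:+.3f}  (error bar ~ {mc_std:.3f})")
```

Because f is nonlinear, the iterated point prediction is not the mean of the noisy K-step distribution; the Monte Carlo version also yields the error bars the abstract mentions.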


Open Physics, 2011, Vol. 9 (6)
Author(s): Tomáš Ficker, Dalibor Martišek

Abstract The 3D profile surface parameter H_q and the fractal dimension D were tested as indicators of mechanical properties inferred from fracture surfaces of porous solids. Highly porous hydrated cement pastes were used as prototypes of porous materials. Both the profile parameter H_q and the fractal dimension D showed the capability to assess compressive strength from the fracture surfaces of hydrated pastes. From a practical point of view, the 3D profile parameter H_q seems more convenient as an indicator of mechanical properties, as its values suffer much less from statistical scatter than those of fractal dimensions.
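As an illustration of the two roughness descriptors, here is a minimal Python sketch that computes an RMS profile height (taken here as a stand-in for H_q, an assumption about the notation) and a box-counting estimate of the fractal dimension D on a synthetic profile; real fracture-surface data would replace the random trace.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic fracture profile (illustrative): Brownian-like height trace.
n = 2**12
z = np.cumsum(rng.normal(0.0, 1.0, n))
z -= z.mean()

# H_q taken here as the RMS height of the profile (an assumed interpretation).
H_q = np.sqrt(np.mean(z**2))

def box_count_dimension(profile, sizes=(4, 8, 16, 32, 64, 128)):
    """Box-counting estimate of a profile's fractal dimension."""
    counts = []
    zmin, zmax = profile.min(), profile.max()
    for s in sizes:
        n_boxes = 0
        for i in range(0, len(profile), s):
            seg = profile[i:i + s]
            # Box height chosen so boxes are square in normalized coordinates.
            box_h = (zmax - zmin) * s / len(profile)
            n_boxes += int(np.ceil((seg.max() - seg.min()) / box_h)) or 1
        counts.append(n_boxes)
    # D is the slope of log N(s) against log(1/s).
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

print(f"H_q (RMS height):        {H_q:.2f}")
print(f"box-counting dimension:  {box_count_dimension(z):.2f}")
```

The height parameter is a single stable average, while the dimension estimate depends on a regression over scales, which is consistent with the paper's observation that D scatters more than H_q.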


2020, Vol. 8 (9), pp. 642
Author(s): Chao Ji, Cynthia Juyne Beegle-Krause, James D. Englehardt

Submerged oil, oil in the water column (neither at the surface nor on the bottom), was found in the form of oil droplet layers at mid-depths between 900 and 1300 m in the Gulf of Mexico during and following the Deepwater Horizon oil spill. These subsurface peeling layers of submerged oil droplets were released from the well blowout plume and moved along constant-density layers (isopycnals) in the ocean. The submerged oil layers were a challenge to locate during the oil spill response. To better understand and find submerged oil layers, we review the mechanisms of submerged oil formation, along with detection methods and modeling techniques. The principal formation mechanisms under stratified and cross-current conditions, and the concepts for determining the depths of the submerged oil layers, are reviewed. Real-time in situ detection methods and various sensors were used to reveal submerged oil characteristics, e.g., colored dissolved organic matter and dissolved oxygen levels. Models are used to locate and to predict the trajectories and concentrations of submerged oil; these include deterministic models based on hydrodynamic theory and probabilistic models exploiting statistical theory. The theoretical foundations, model inputs, and applicability of these models during the Deepwater Horizon oil spill are reviewed, including the pros and cons of the two types. Deterministic models provide a comprehensive prediction of the concentrations of the submerged oil and may be calibrated using field data. Probabilistic models utilize field observations but provide only the relative concentrations of the submerged oil and its potential future locations. We find that a probabilistic integration of real-time detection with trajectory model output appears to be a promising approach to support emergency response efforts in locating and tracking submerged oil in the field.
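As a sketch of the promising combination noted in the last sentence, the following Python fragment performs a Bayesian update of submerged-oil presence probabilities over depth bins, with trajectory-model output as the prior and an in situ sensor (e.g., a CDOM fluorescence anomaly) as the observation; all probabilities and detection rates are hypothetical.

```python
import numpy as np

# Prior probability of submerged-oil presence per depth bin, standing in for
# trajectory-model output over the 900-1300 m band (values are illustrative).
depths = np.arange(900, 1300, 50)
prior = np.array([0.05, 0.15, 0.35, 0.45, 0.35, 0.20, 0.10, 0.05])

# Hypothetical sensor characteristics.
p_det_oil   = 0.8    # P(detection | oil present)
p_det_clean = 0.1    # P(detection | oil absent), i.e., false-alarm rate

def bayes_update(prior, detected):
    """Posterior presence probability per bin after one in situ observation."""
    like_oil   = p_det_oil if detected else (1.0 - p_det_oil)
    like_clean = p_det_clean if detected else (1.0 - p_det_clean)
    return like_oil * prior / (like_oil * prior + like_clean * (1.0 - prior))

# Suppose a cast detects an anomaly in the 1000-1050 m bins only.
detections = np.array([0, 0, 1, 1, 0, 0, 0, 0], dtype=bool)
post = np.where(detections, bayes_update(prior, True), bayes_update(prior, False))

for d, p0, p1 in zip(depths, prior, post):
    print(f"{d:5d} m  prior {p0:.2f} -> posterior {p1:.2f}")
```

Detections sharpen the model-based prior toward the observed layer depths, which is the essence of using real-time data to steer the response in the field.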


Author(s): Mª Izaskun Benedicto, Rafael M. García Morales, Javier Marino, Francisco De los Santos

In recent decades, international shipping trade has grown considerably and ports have been extended to satisfy the demands of new ships and cargo. Port management has become a difficult task due to the high number of simultaneous operations in ports and the random nature of the agents involved in port operations (climate agents, ship arrivals). This makes a decision-support tool necessary, one that reproduces maritime operations and estimates the uncertainty in port performance. This work presents software based on the methodologies proposed in Benedicto et al. (2013) and García Morales et al. (2015). The software has a user-friendly interface, reproduces port operations for a given case, and provides a set of indicators that measure the performance of the simulated case, such as waiting times or the occupancy of berths and harbor services. Port performance is characterized from a statistical point of view. Software validation, with Algeciras Port (Spain) as pilot port, is also presented.
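A minimal sketch of the statistical characterization such a tool produces, using a plain multi-berth queue simulation in Python; the berth count, arrival rate, and service rate are hypothetical and unrelated to the Algeciras validation.

```python
import numpy as np

rng = np.random.default_rng(5)

N_BERTHS   = 3        # hypothetical berth count
ARRIVAL_MU = 4.0      # mean inter-arrival time of ships, hours
SERVICE_MU = 10.0     # mean (un)loading time, hours
N_SHIPS    = 5_000

# Each entry holds the time at which that berth next becomes free.
berth_free = [0.0] * N_BERTHS
t, waits, busy_time = 0.0, [], 0.0
for _ in range(N_SHIPS):
    t += rng.exponential(ARRIVAL_MU)                   # next ship arrival
    i = min(range(N_BERTHS), key=berth_free.__getitem__)
    start = max(t, berth_free[i])                      # ship waits if all berths busy
    service = rng.exponential(SERVICE_MU)
    waits.append(start - t)
    busy_time += service
    berth_free[i] = start + service

horizon = max(berth_free)
print(f"mean waiting time: {np.mean(waits):.2f} h")
print(f"berth occupancy:   {busy_time / (N_BERTHS * horizon):.2%}")
```

Running many replications of such a simulation yields distributions, not point values, for waiting times and occupancy, which is what "characterized from a statistical point of view" implies.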


Author(s): M Newby

Deterministic models of crack growth can be fitted to experimental data. This paper shows that stochastic growth models are easy to use and provide a simple framework for data analysis. A simple transformation allows the standard linear regression model to be used and opens the way for a fully Bayesian analysis. The Bayesian analysis allows the incorporation of prior information and coherent predictions of crack length to be made. The parameters of the Paris-Erdogan model are readily evaluated directly from crack length data, without the need for intermediate estimates of the crack growth rate. The approach lends itself to the analysis of properly designed experiments to determine the effect of environmental factors on the parameters of the Paris-Erdogan equation, through the medium of accelerated failure time models. The paper also emphasizes the need for adequate communication between experimenter and statistician to ensure efficient experimental designs.
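One standard transformation of the Paris-Erdogan law (not necessarily the paper's) makes crack length linear in cycle count: with da/dN = C*(dK)^m and dK = dsigma*sqrt(pi*a), integration gives a^(1-m/2) linear in N for m != 2, so ordinary linear regression on transformed crack lengths recovers C directly. A minimal Python sketch with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(11)

# Paris-Erdogan law: da/dN = C * (dK)^m, with dK = dsigma * sqrt(pi * a).
# For m != 2, integration gives
#   a^(1-m/2) = a0^(1-m/2) + (1 - m/2) * C * (dsigma*sqrt(pi))^m * N,
# i.e., the transformed crack length is linear in cycle count N.
C_true, m, dsigma, a0 = 1e-11, 3.0, 100.0, 1e-3   # illustrative values

N = np.linspace(0, 2e5, 40)
p = 1.0 - m / 2.0
a = (a0**p + p * C_true * (dsigma * np.sqrt(np.pi))**m * N) ** (1.0 / p)
a_obs = a * rng.lognormal(0.0, 0.02, a.size)      # noisy crack-length readings

# Linear regression on the transformed lengths recovers C directly,
# with no intermediate estimate of the growth rate da/dN.
slope, intercept = np.polyfit(N, a_obs**p, 1)
C_hat = slope / (p * (dsigma * np.sqrt(np.pi))**m)
print(f"true C = {C_true:.2e}, estimated C = {C_hat:.2e}")
```

Once the model is linear in the transformed variable, standard regression machinery applies, and replacing the least-squares fit with a prior-plus-likelihood formulation gives the fully Bayesian analysis the abstract describes.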


2000, Vol. 6 (3), pp. 209-226
Author(s): William C. Haneberg

Abstract The occurrence of potentially hazardous geologic events such as landslides, rock falls, earthquakes, floods, and debris flows can be predicted using two fundamentally different approaches: deterministic and probabilistic. The most significant difference between the two approaches to geologic hazard assessment is whether a process is envisioned to be the result of an exact causal relationship or whether some element of random behavior is assumed to be part of the system. Although the assumption of random behavior may seem self-defeating, it can provide a useful tool for the solution of important problems as long as the randomness can be quantified using statistical models. Each of these two methods can be approached either rationally (using models derived from accepted physical or chemical principles) or empirically (by studying the occurrence of events without explicit regard to their driving mechanism). The complexity of the geologic process commonly dictates which approach is used for a particular problem, ranging from rational deterministic models for relatively simple systems such as small landslides to empirical probabilistic models for complicated processes such as floods and earthquakes. Examples of each type of model are discussed throughout the paper, primarily within the context of slope stability and the recurrence of extreme events such as floods.
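As a compact illustration of the deterministic/probabilistic contrast, consider the infinite-slope factor of safety for a dry, cohesionless slope, FS = tan(phi)/tan(beta); the parameter values below are hypothetical. A deterministic best estimate can report a "stable" slope even when random scatter in the friction angle implies a non-trivial failure probability:

```python
import numpy as np

rng = np.random.default_rng(13)

# Infinite-slope factor of safety for a dry, cohesionless slope:
#   FS = tan(phi) / tan(beta),  phi = friction angle, beta = slope angle.
beta = np.radians(30.0)

# Deterministic (rational) model: a single "best estimate" friction angle.
phi_det = np.radians(33.0)
print(f"deterministic FS: {np.tan(phi_det) / np.tan(beta):.2f}")   # > 1 => "stable"

# Probabilistic model: friction angle treated as a random variable.
phi_samples = np.radians(rng.normal(33.0, 3.0, 100_000))   # hypothetical scatter
fs = np.tan(phi_samples) / np.tan(beta)
print(f"mean FS:            {fs.mean():.2f}")
print(f"probability FS < 1: {(fs < 1.0).mean():.1%}")      # failure probability
```

With a mean friction angle of 33 degrees the deterministic FS is about 1.13, yet roughly one sixth of the sampled slopes fail, which is exactly the kind of information a quantified random model adds to a rational deterministic one.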

