acceptable probability
Recently Published Documents

TOTAL DOCUMENTS: 16 (five years: 2)
H-INDEX: 3 (five years: 0)

Author(s):  
Natalia V. Solovjova

Environmental risk assessments for the Northern Caspian ecosystem were calculated from ecoscreening models, the results of dynamic (ecosystem) modeling, and field observations. The proposed method is effective for assessing the risk from the combined action of natural, anthropogenic, and invasive factors during the development of oil resources in offshore waters. Ecological risk and the acceptable probability of impact on the ecosystem were calculated for three frequency ranges of anthropogenic impacts ("technical system accidents") during the spring phytoplankton bloom, taking into account the naturally low illumination of the Northern Caspian water area. The results obtained for spring and summer-autumn maxima of phytoplankton biomass of various durations revealed both practically safe ecosystem conditions, with a probability of acceptable impact from 80 to 100%, and extremely dangerous ones, with a probability of acceptable impact below 5%. These results show that conclusions about ecosystem vulnerability under intense anthropogenic pressure are not trivial: the environmental risk must be calculated first, and the acceptable probability of impact evaluated afterward. The proposed approach is important for overcoming difficulties in the practical harmonization of environmental and economic requirements for the safe development of shelf resources.
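The reported bands can be read as a simple screening rule on the computed acceptable-impact probability. A minimal sketch follows; the 80-100% "practically safe" and <5% "extremely dangerous" bands come from the abstract, while the intermediate category and all names are illustrative assumptions:

```python
def classify_ecosystem_state(p_acceptable: float) -> str:
    """Classify an ecosystem state by the probability (in percent) that an
    anthropogenic impact remains acceptable. The outer bands follow the
    abstract; the intermediate label is an assumed placeholder."""
    if not 0.0 <= p_acceptable <= 100.0:
        raise ValueError("probability must be in [0, 100]")
    if p_acceptable >= 80.0:
        return "practically safe"
    if p_acceptable < 5.0:
        return "extremely dangerous"
    return "intermediate risk"
```

In the paper's workflow such a classification would be applied only after the environmental risk itself has been computed, matching the order the authors insist on.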


Author(s):  
M. Liu
C. Cross

For subsea pipeline upheaval buckling (UHB) design, it is essential to establish appropriate UHB load factors through a structural reliability analysis based on a target acceptable probability of failure. Current practice is to set the acceptable probability of failure according to the best industrial practice codified in DNV OS F101 and ISO 16708, regardless of trench performance, imperfection details, and inspection results. This paper discusses the relationship between the UHB failure probability for the whole pipeline and the trench performance and imperfection frequency statistics. A detailed statistical and reliability analysis is undertaken to address disparities in current practice. It is shown that, in order to achieve a specific reliability level for the entire pipeline, it is paramount to calibrate the target probability of failure against the survey data. The common UHB practice is shown to be unconservative, giving rise to an insufficient safety level and a non-compliant out-of-straightness (OOS) design. An improved approach is outlined that addresses the critical issues and allows a robust OOS assessment, demonstrated by means of case studies.
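The core point, that a per-imperfection failure target must be calibrated to the number of imperfections along the line, can be illustrated with a series-system model. This is a sketch under the assumption of independent imperfections, not the paper's calibrated procedure:

```python
def pipeline_failure_probability(p_single: float, n_imperfections: int) -> float:
    """Series-system model: the pipeline fails under UHB if any one of its
    imperfections fails; imperfections are assumed independent."""
    return 1.0 - (1.0 - p_single) ** n_imperfections


def required_single_target(p_target_line: float, n_imperfections: int) -> float:
    """Per-imperfection failure target needed so that the whole line still
    meets the target probability p_target_line."""
    return 1.0 - (1.0 - p_target_line) ** (1.0 / n_imperfections)
```

The second function makes the paper's argument concrete: the more imperfections the survey reveals, the stricter the per-location target must be to hold the line-level reliability, which is why a code target applied per location "regardless of imperfection details" can be unconservative.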


2016
Vol 53 (7)
pp. 1167-1175
Author(s):  
Farzaneh Naghibi
Gordon A. Fenton
D.V. Griffiths

Current foundation design practice for serviceability limit states involves proportioning the foundation to achieve an acceptably small probability that the foundation settlement exceeds some target maximum total settlement. However, it is usually differential settlement that leads to problems in the supported structure. The design question, then, is how should the target maximum total settlement of an individual foundation be selected so that differential settlement is not excessive? Evidently, if the target maximum total settlement is increased, the differential settlement between foundations will also tend to increase, so that there is a relationship between the two, although not necessarily a simple one. This paper investigates how the target maximum total settlement specified in the design of an individual foundation relates to the distribution of the differential settlement between two identical foundation elements, as a function of the ground statistics and the distance between the two foundations. A probabilistic theory is developed, and validated by simulation, which is used to prescribe target maximum settlements employed in the design process to avoid excessive differential settlements to some acceptable probability.
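The relationship between a total-settlement target and differential settlement can be explored by Monte Carlo simulation, as the paper does to validate its theory. The sketch below is a minimal stand-in, not the authors' model: it treats the two foundation settlements as correlated lognormals, with the correlation standing in for the effect of ground statistics and foundation spacing; all parameter values are assumptions.

```python
import math
import random


def differential_settlement_exceedance(mu_ln: float, sigma_ln: float,
                                       rho: float, limit: float,
                                       n_sim: int = 100_000,
                                       seed: int = 1) -> float:
    """Monte Carlo estimate of P(|d1 - d2| > limit) for two foundation
    settlements modeled as correlated lognormal variables. Closer
    foundations see more similar ground, i.e. higher rho."""
    rng = random.Random(seed)
    count = 0
    for _ in range(n_sim):
        z1 = rng.gauss(0.0, 1.0)
        # Correlated standard normal via the 2x2 Cholesky factor.
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        d1 = math.exp(mu_ln + sigma_ln * z1)
        d2 = math.exp(mu_ln + sigma_ln * z2)
        if abs(d1 - d2) > limit:
            count += 1
    return count / n_sim
```

Running it with a high correlation (nearby foundations) versus a low one reproduces the qualitative conclusion: for the same total-settlement statistics, widely separated foundations carry a larger differential-settlement risk, so the acceptable total-settlement target must depend on spacing.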


Author(s):  
Philippe Cambos
Guy Parmentier

During a ship's life, operating conditions may change: a tanker may be converted into an FPSO, and flag requirements may be modified. Generally these modifications have little impact on the existing structure; flag requirements only rarely apply retroactively. Nevertheless, in some cases a change of operating conditions may have considerable consequences, in the worst cases making any reengineering impossible. For example, converting a common tanker built with plain steel of grade A into an offshore floating unit able to operate in cold regions may require a change to grade B. It would obviously be unreasonable to replace all the material merely because of the material certificates. Steels used by shipyards must fulfill classification societies' requirements on mechanical strength, and shipbuilding generally accounts for only a small part of a steelmaker's production. For this reason steelmakers are reluctant to produce steels with mechanical properties corresponding exactly to the required minima; they generally deliver steels already in stock, with higher mechanical characteristics than required. Advantage can be taken of this common practice. To demonstrate that the material fulfills the requirements of grade B, a statistical approach was adopted. At this stage there are two main issues: the first is to provide evidence that the actual Charpy V characteristics of the material fulfill the requirements of grade B; the second is to provide this evidence with a minimum of testing. To assess this assumption, a random check was carried out, and different probabilistic models were tested in order to compare common approaches against models based on physical considerations.
The paper recalls the main assumptions behind the probabilistic models used to estimate the minimum Charpy value, examines the behavior of the empirical sample, and fits the parameters of candidate probability laws to the empirical distribution. Because these parameters cannot be determined exactly from a finite number of specimens, the uncertainty in the parameter estimates is taken into account through confidence limits. Depending on the selected probabilistic model, the minimum value either corresponds to an acceptable probability of failure at the target confidence level, or is independent of any acceptable probability of failure and is defined at the same confidence level. It is concluded that a random check, with the data treated as Charpy V test results distributed according to a Weibull law of the minimum, provides evidence at a sufficient confidence level that the steel used in the considered structure fulfills the requirements of the new operating conditions.
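The fitting step described above can be sketched with a classical hand method: linear regression on the linearized Weibull CDF using median-rank plotting positions. This is an illustrative stand-in for the paper's fitting procedure and omits the confidence-limit treatment the authors apply:

```python
import math


def fit_weibull(samples):
    """Fit a 2-parameter Weibull by least squares on the linearized CDF,
    ln(-ln(1-F)) = k*ln(x) - k*ln(scale), with median-rank estimates of F."""
    xs = sorted(samples)
    n = len(xs)
    # Benard's median-rank approximation for plotting positions.
    pts = [((i + 1 - 0.3) / (n + 0.4), x) for i, x in enumerate(xs)]
    X = [math.log(x) for _, x in pts]
    Y = [math.log(-math.log(1.0 - f)) for f, _ in pts]
    mx, my = sum(X) / n, sum(Y) / n
    shape = sum((x - mx) * (y - my) for x, y in zip(X, Y)) / \
        sum((x - mx) ** 2 for x in X)
    scale = math.exp(mx - my / shape)
    return shape, scale


def charpy_minimum(shape, scale, p_accept):
    """p_accept-quantile of the fitted Weibull: the Charpy energy fallen
    short of with probability p_accept."""
    return scale * (-math.log(1.0 - p_accept)) ** (1.0 / shape)
```

With the fitted law, the "minimum value corresponding to an acceptable probability of failure" is simply a low quantile; the paper additionally widens it with confidence limits to account for the finite sample.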


2013
Vol 3 (2)
pp. 79-97
Author(s):  
C. Andrade

ABSTRACT
Estimates of the service life of concrete structures are rapidly moving from the laboratory into standards and into the specifications for large infrastructure projects. Service lives of 100 years or more have been required for bridges such as the Oresund bridge and for the new Panama Canal. However, the specification is often made in summary form, without defining how the specified durability is to be demonstrated and, in some cases, without even mentioning the tests and limit values to be used. This communication describes the most important aspects to be specified in chloride prediction models: in addition to the diffusion coefficient, these are the surface concentration, the aging factor, the chloride limit, and the probability of corrosion considered unacceptable.
Keywords: concrete; chlorides; resistivity; diffusion.
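The quantities the communication asks specifiers to pin down all appear in the standard error-function solution of Fick's second law with an aging diffusion coefficient. A minimal sketch follows; the model form is the widely used one, but every numeric value (D0, surface concentration, chloride limit, aging exponent, reference age of 28 days) is an illustrative assumption, not a value from the paper:

```python
import math


def chloride_at_depth(x_m, t_yr, cs, d0_m2yr, n_age, t0_yr=0.0767):
    """Chloride concentration at depth x (m) and age t (years) from the
    erf solution of Fick's 2nd law, with an apparent diffusion coefficient
    that ages as D(t) = D0 * (t0/t)**n_age (t0 = 28 days)."""
    d_app = d0_m2yr * (t0_yr / t_yr) ** n_age
    return cs * (1.0 - math.erf(x_m / (2.0 * math.sqrt(d_app * t_yr))))


def service_life(cover_m, cs, c_crit, d0_m2yr, n_age, t_max=300):
    """First year at which the chloride content at the cover depth reaches
    the critical limit (corrosion initiation); None if beyond t_max."""
    for t in range(1, t_max + 1):
        if chloride_at_depth(cover_m, float(t), cs, d0_m2yr, n_age) >= c_crit:
            return t
    return None
```

A deterministic initiation time like this is only the first step; the paper's point is that the acceptable probability of corrosion must also be specified, i.e. the same model should be run with the inputs treated as random variables.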


Blood
2011
Vol 118 (21)
pp. 1552-1552
Author(s):  
Jack M. Lionberger
Kathleen Shannon Dorcy
Carol Dean
Nathan Holm
Bart Lee Scott
...  

Abstract 1552 Background: Novel drugs or drug combinations are conventionally tested first in Phase I studies (in which therapeutic decisions are based solely on toxicity), with Phase II (efficacy) evaluation following as a separate trial. This process not only slows new drug development, it is also challenging for patients during the informed consent process, because they usually enter trials not merely in hope of "no toxicity" but in hope of response. Response rates in Phase I at doses below the maximum tolerated dose (MTD) may be irrelevant to efficacy, but this common assumption remains unproven. An equally plausible alternative is that efficacy failure at these lower doses augurs failure at the MTD in Phase II. This hypothesis prompted development of a Phase I-II Bayesian design that uses both efficacy and toxicity to find a clinically relevant dose (Biometrics 2004;60:684-93). In the current study, we apply this Bayesian approach to the design of a Phase I-II trial of bendamustine + idarubicin in older patients (>50 yo) with newly diagnosed AML or high-risk MDS (>10% marrow blasts). We then compare and contrast our trial's operation with that of the standard 3+3 Phase I design. Methods: The design specifies anticipated probabilities ("priors") of response (CR or no CR) and toxicity (grade 3-4 or not) at each of 4 doses of bendamustine (45, 60, 75, or 90 mg/m2 daily × 5, together with idarubicin 12 mg/m2 daily on days 1 and 2). Patients are entered in groups of 3, beginning at the 45 mg/m2 dose. As response/toxicity data become available for each cohort, Bayes' theorem is used to update the priors and derive current probabilities ("posteriors") of response/toxicity at each dose. The priors are set to be relatively non-informative, allowing the posteriors to be influenced primarily by the data from the trial. The posteriors are referred to a minimum acceptable probability of response (here 40%) and a maximum acceptable probability of toxicity (30%).
If the posteriors indicate that it is highly unlikely (<2% chance) that any dose is associated with both of these probabilities, the trial stops. Otherwise the next cohort of patients is treated at a dose so associated. This process is repeated iteratively up to a maximum sample size of 48 patients. The parameters noted above were chosen to give desirable probabilities of selecting, for future study, doses meeting the minimum acceptable response and maximum acceptable toxicity rates. Results: Table 1 compares the operation of this trial with a standard 3+3 Phase I trial. Given that 2/3 patients had toxicity at the 75 mg/m2 dose, a Phase I 3+3 design would have declared 60 mg/m2 the MTD. Subsequently, an "expansion cohort" would be treated at this dose as a Phase II trial, without any possibility of revisiting the 75 mg/m2 dose. This conclusion flies in the face of basic notions of statistical reliability and ignores the possibility that the patients experiencing toxicity may have been particularly old, had significant comorbidities, or had variable functional reserve for undefined reasons. In contrast, the Phase I-II design allows the trial to continue and potentially revisit higher doses depending on the collective outcome of a greater number of patients. Based on our actual data, this trial continued to treat patients at the 60 mg/m2 dose level, and in the next three patients there was no toxicity. In this case response data become the determining factor, which improves the efficiency of the trial. If 0/3 patients had had a response, the trial would have returned to 75 mg/m2; however, because 2/3 patients had a response, the trial continues to accrue at 60 mg/m2, with the statistical force of twice the number of patients. Conclusion: Accounting for response during dose finding permits more sophisticated and flexible dosing decisions while improving efficiency. Disclosures: Shannon Dorcy: Cephalon: Consultancy, Honoraria, Speakers Bureau.
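The posterior check described above can be sketched with conjugate beta updates. This is a crude stand-in: the cited Biometrics 2004 design uses a joint efficacy-toxicity model, whereas the sketch below treats response and toxicity independently with flat Beta(1,1) priors, and all cutoffs mirror the ones stated in the abstract:

```python
import math


def beta_tail_ge(a: float, b: float, x: float, steps: int = 20_000) -> float:
    """P(theta >= x) for a Beta(a, b) posterior, by midpoint numerical
    integration of the density (stdlib only, no scipy)."""
    log_const = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    h = (1.0 - x) / steps
    total = 0.0
    for i in range(steps):
        t = x + (i + 0.5) * h
        total += math.exp(log_const + (a - 1) * math.log(t)
                          + (b - 1) * math.log(1.0 - t)) * h
    return total


def dose_acceptable(resp: int, n: int, tox: int,
                    p_resp_min: float = 0.40, p_tox_max: float = 0.30,
                    cut: float = 0.02) -> bool:
    """Rule out a dose only if it is highly unlikely (< cut) to meet both
    the minimum response rate and the maximum toxicity rate, given resp
    responses and tox toxicities among n patients (Beta(1,1) priors)."""
    p_eff_ok = beta_tail_ge(1 + resp, 1 + n - resp, p_resp_min)
    p_tox_ok = 1.0 - beta_tail_ge(1 + tox, 1 + n - tox, p_tox_max)
    return p_eff_ok * p_tox_ok >= cut
```

Even this simplified version shows the design's behavior in the text: a cohort with 2/3 responses and no toxicity keeps its dose in play, while a cohort that combines no responses with universal toxicity is ruled out.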

