A numerical prediction of flexural strength probability for NBG-18 nuclear grade graphite using strength pair model

2017 ◽  
Vol 52 (3) ◽  
pp. 204-211 ◽  
Author(s):  
Manik Bansal ◽  
Indra Vir Singh ◽  
Bhanu K Mishra ◽  
Kamal Sharma ◽  
IA Khan

In this work, a strength pair model is proposed for the numerical prediction of the flexural strength probability of NBG-18 nuclear grade graphite. The input to the proposed model is a random pair of tensile and compressive strengths whose values are based on their probability of occurrence in the experimental data. A finite element–based deterministic numerical approach has been implemented. To account for the large difference between tensile and compressive strengths, the Drucker–Prager failure criterion is used. The failure envelope of the Drucker–Prager criterion is assumed to have a uniaxial fit with the Mohr–Coulomb model in principal stress space. A total of 292 simulations with random pairs of tensile and compressive strength are performed on a three-point bend specimen to obtain a set of flexural strength data. The flexural strength data obtained through the numerical simulations are fitted using normal and Weibull distributions. The flexural strength probability obtained from the proposed model is found to be on the conservative side. A goodness-of-fit test concludes that the Weibull distribution fits the numerical data better than the normal distribution.
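As a rough illustration of the distribution-fitting step described above, the sketch below fits both normal and Weibull distributions to a strength sample and compares them with a Kolmogorov–Smirnov goodness-of-fit statistic. The data here are synthetic stand-ins, not the paper's 292 simulation results; the shape and scale values are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical flexural-strength sample (MPa), Weibull-distributed by construction.
strengths = stats.weibull_min.rvs(c=8.0, scale=30.0, size=292, random_state=rng)

# Fit both candidate distributions by maximum likelihood (Weibull with loc fixed at 0).
wb_shape, wb_loc, wb_scale = stats.weibull_min.fit(strengths, floc=0)
n_mean, n_std = stats.norm.fit(strengths)

# Kolmogorov-Smirnov goodness-of-fit statistic for each fitted model (smaller = better).
ks_wb = stats.kstest(strengths, 'weibull_min', args=(wb_shape, 0, wb_scale)).statistic
ks_n = stats.kstest(strengths, 'norm', args=(n_mean, n_std)).statistic
print(f"Weibull KS = {ks_wb:.4f}, Normal KS = {ks_n:.4f}")
```

The same comparison can be made with a chi-square or Anderson–Darling statistic; the paper does not specify which goodness-of-fit test was used.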

2019 ◽  
Vol 29 (7) ◽  
pp. 1787-1798
Author(s):  
Hyunkeun Ryan Cho ◽  
Seonjin Kim ◽  
Myung Hee Lee

Biomedical studies often involve an event that occurs to individuals at different times and has a significant influence on individual trajectories of response variables over time. We propose a statistical model to capture the mean trajectory alteration caused not only by the occurrence of the event but also by the subject-specific time of the event. The proposed model provides a post-event mean trajectory smoothly connected with the pre-event mean trajectory by allowing the model parameters associated with the post-event mean trajectory to vary with the time of the event. A goodness-of-fit test is considered to investigate how well the proposed model fits the data. Hypothesis tests are also developed to assess the influence of the subject-specific time of event on the mean trajectory. Theoretical and simulation studies confirm that the proposed tests choose the correctly specified model consistently and examine the effect of the subject-specific time of event successfully. The proposed model and tests are also illustrated by the analysis of two real-life data sets: a biomarker study of HIV patients, along with their own times of treatment initiation, and a body-fatness study in girls with different ages of menarche.
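The continuity requirement above (a post-event mean trajectory smoothly connected to the pre-event one) can be sketched with a minimal piecewise-linear model: the post-event alteration term vanishes at the event time, so the mean is continuous there. Everything here is hypothetical; the actual model additionally lets the post-event parameters vary with the subject-specific event time.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical single-subject data: event at time T; the post-event alteration
# c * (t - T) for t > T is zero at t = T, so the two trajectory pieces connect.
T = 5.0
t = np.linspace(0.0, 10.0, 60)
true_mean = 2.0 + 0.5 * t + np.where(t > T, -0.8 * (t - T), 0.0)
y = true_mean + rng.normal(0.0, 0.2, t.size)

# Least-squares fit of [intercept, pre-event slope, post-event slope change].
X = np.column_stack([np.ones_like(t), t, np.where(t > T, t - T, 0.0)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # roughly [2.0, 0.5, -0.8]
```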


2017 ◽  
Vol 68 (3) ◽  
pp. 167-179
Author(s):  
Tibor Csóka ◽  
Jaroslav Polec ◽  
Filip Csóka ◽  
Kvetoslava Kotuliaková

Abstract: A variety of complex techniques, such as forward error correction (FEC), automatic repeat request (ARQ), hybrid ARQ and cross-layer optimization, require in their design and optimization phase a realistic model of the binary error process present in a specific digital channel. Past and more recent modeling approaches focus on capturing one or more stochastic characteristics with precision sufficient for the desired model application, thereby applying concepts and methods that severely limit the model's applicability (e.g. in the form of prerequisite expectations on the modeled process). The proposed novel concept, utilizing a Vector Quantization (VQ)-based approach to binary process modeling, offers a viable alternative capable of superior modeling of the most commonly observed small- and large-scale stochastic characteristics of a binary error process on the digital channel. The precision of the proposed model was verified using multiple statistical distances against data captured in a wireless sensor network logical channel trace. Furthermore, the Pearson goodness-of-fit test was performed on the output of all model variants to conclusively demonstrate the usability of the model for a realistic captured binary error process. Finally, the presented results prove the proposed model's applicability and its ability to far surpass the capabilities of the reference Elliott model.
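The core VQ idea can be sketched as follows: cut a binary error trace into fixed-length blocks and learn a small codebook of prototypical local error patterns (all-zero blocks, burst blocks, and so on). The trace generator, block length, and codebook size below are illustrative assumptions, not the paper's configuration; the trace is produced by a simple two-state (Gilbert–Elliott style) process for contrast with the reference model mentioned above.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(2)

# Hypothetical two-state binary error trace: a 'good' state with a low
# per-bit error probability and a 'bad' bursty state with a high one.
n, state, bits = 4000, 0, []
for _ in range(n):
    p_err = 0.01 if state == 0 else 0.5
    bits.append(1 if rng.random() < p_err else 0)
    if state == 0:
        state = 1 if rng.random() < 0.02 else 0   # rarely enter the bad state
    else:
        state = 0 if rng.random() < 0.2 else 1    # leave it fairly quickly
trace = np.array(bits, dtype=float)

# VQ step: reshape the trace into fixed-length blocks and learn a codebook;
# each codeword is a prototypical local error pattern.
block = 8
blocks = trace[: n // block * block].reshape(-1, block)
codebook, labels = kmeans2(blocks, k=4, minit='++', seed=3)
print(codebook.shape)
```

A generative model would then also need the transition statistics between codewords; this sketch covers only the quantization step.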


2018 ◽  
Vol 7 (3) ◽  
pp. 1558
Author(s):  
S Lakshmisridevi ◽  
R Devanathan

The application of Zipf's law is universal, not only in linguistics but also in various other areas. Mandelbrot modified Zipf's law into the Zipf–Mandelbrot (ZM) law, and we further propose a modification of the ZM law for modeling rank–frequency data of linguistic text. Our model generalizes the ZM law into a linear regression model involving an arbitrary order of the Zipfian rank of words in a text. The performance of the proposed model is studied for an English text and is shown to compare favorably with that of the ZM law using the Chi-square goodness-of-fit test. In this paper we apply the model to a Tamil text; its performance, verified by the Chi-square test, is also satisfactory. Since the model addresses mainly the lower ranks, we propose to extend the work to higher-order ranks using the LNRE model in the future.
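The linear-regression view of the ZM law f(r) = C / (r + b)^a can be sketched by regressing log f on powers of log r. The corpus here is synthetic ZM-distributed data with made-up parameters, and the regression order (2) is an illustration, not necessarily the order used in the paper.

```python
import numpy as np

# Hypothetical rank-frequency data following a Zipf-Mandelbrot law
# f(r) = C / (r + b)^a with C = 1000, b = 2.7, a = 1.1.
r = np.arange(1, 201)
f = 1000.0 / (r + 2.7) ** 1.1

# Linear-regression generalization: regress log f on powers of log r.
X = np.column_stack([np.ones_like(r, dtype=float), np.log(r), np.log(r) ** 2])
beta, *_ = np.linalg.lstsq(X, np.log(f), rcond=None)

# Chi-square goodness-of-fit statistic of the back-transformed fit.
fitted = np.exp(X @ beta)
chi2_stat = np.sum((f - fitted) ** 2 / fitted)
print(beta, chi2_stat)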


2020 ◽  
Vol 30 (Supplement_5) ◽  
Author(s):  
T Besbes ◽  
S Mleyhi ◽  
J Sahli ◽  
M Messai ◽  
J Ziadi ◽  
...  

Abstract Background Early prediction of the patients at highest risk of a poor outcome after cardiovascular surgery, including death, can aid medical decision making and adapt health care management in order to improve prognosis. In this context, we conducted this study to validate the CASUS severity score after cardiac surgery in the Tunisian population. Methods This is a retrospective cohort study conducted among patients who underwent cardiac surgery under extracorporeal circulation during the year 2018 at the Cardiovascular Surgery Department of La Rabta University Hospital in Tunisia. Data were collected from the patients' hospitalization records. The discrimination of the score was assessed using the ROC curve, and the calibration using the Hosmer-Lemeshow goodness-of-fit test and then by constructing the calibration curve. The overall correct classification rate was also obtained. Results In our study, the observed mortality rate was 10.52% among the 95 included patients. The discriminating power of the CASUS score was estimated by the area under the ROC curve (AUC); the scoring system had good discrimination, with an AUC greater than 0.9 from postoperative day 0 to day 5. Over the same period, the Hosmer-Lemeshow test gave chi-square statistics ranging from 1.474 to 8.42 and significance levels ranging from 0.39 to 0.99, indicating good calibration. The overall correct classification rate from postoperative day 0 to day 5 ranged from 84.4% to 92.4%. Conclusions Despite the differences in the profile of the risk factors between the Tunisian population and the population constituting the database used to develop the CASUS score, we can say that this risk model presents acceptable performance in our population, attested by adequate discrimination and calibration. Prospective and especially multicentre studies on larger samples are needed before definitively concluding on the performance of this model in our country.
Key messages The CASUS score seems to be valid for predicting mortality among patients undergoing cardiac surgery. Multicentre studies on larger samples are needed to derive and validate models able to predict in-hospital mortality.
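The two validation measures used above, discrimination (AUC) and calibration (Hosmer–Lemeshow), can be sketched as below. The risk scores and outcomes are simulated, not the study's data; the number of risk groups (5, given the small sample) is an assumption.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(4)
# Hypothetical predicted death probabilities and outcomes for 95 patients,
# generated so that outcomes are consistent with the predicted risks.
p = rng.beta(1.2, 8.0, 95)                 # predicted risks, mean around 13%
y = (rng.random(95) < p).astype(int)       # observed outcomes

# Discrimination: AUC via the rank (Mann-Whitney) formulation.
pos, neg = p[y == 1], p[y == 0]
auc = np.mean(pos[:, None] > neg[None, :]) + 0.5 * np.mean(pos[:, None] == neg[None, :])

# Calibration: Hosmer-Lemeshow statistic over g = 5 risk groups.
g = 5
edges = np.quantile(p, np.linspace(0, 1, g + 1))
idx = np.clip(np.searchsorted(edges, p, side='right') - 1, 0, g - 1)
hl = 0.0
for k in range(g):
    m = idx == k
    observed, expected, n_k = y[m].sum(), p[m].sum(), m.sum()
    hl += (observed - expected) ** 2 / (expected * (1 - expected / n_k))
p_value = chi2.sf(hl, g - 2)               # reference distribution: chi-square(g - 2)
print(f"AUC={auc:.3f}, HL={hl:.2f}, p={p_value:.3f}")
```

A large Hosmer–Lemeshow p-value (as reported in the study, 0.39 to 0.99) indicates no detectable miscalibration.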


Test ◽  
2021 ◽  
Author(s):  
Jiming Jiang ◽  
Mahmoud Torabi

2019 ◽  
Vol 2019 ◽  
pp. 1-15 ◽  
Author(s):  
T. Mesbahzadeh ◽  
M. M. Miglietta ◽  
M. Mirakbari ◽  
F. Soleimani Sardoo ◽  
M. Abdolhoseini

Precipitation and temperature are very important climatic parameters, as their changes may affect life conditions. Therefore, predicting temporal trends of precipitation and temperature is very useful for societal and urban planning. In this research, in order to study the future trends in precipitation and temperature, we have applied scenarios of the fifth assessment report of the IPCC. The results suggest that both parameters will increase in the studied area (Iran) in the future. Since there is interdependence between these two climatic parameters, analyzing the two fields independently would generate errors in the interpretation of model simulations. Therefore, in this study, copula theory was used for joint modeling of precipitation and temperature under climate change scenarios. With the joint distribution, we can find the structure of interdependence of precipitation and temperature under current and future climate change conditions, which can assist in the risk assessment of extreme hydrological and meteorological events. Based on the results of the goodness-of-fit test, the Frank copula function was selected for modeling the recorded and constructed data under the RCP2.6 scenario, and the Gaussian copula function was used for joint modeling of the constructed data under the RCP4.5 and RCP8.5 scenarios.
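The copula step, separating the marginal distributions from the dependence structure, can be sketched for the Gaussian copula case mentioned above. The series here are simulated with made-up parameters; the margins are handled empirically (via ranks and quantiles) rather than with the fitted parametric margins a full study would use.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Hypothetical monthly series: temperature (normal-ish) and precipitation
# (gamma-ish), made dependent through a shared latent factor z.
z = rng.normal(size=240)
temp = 15.0 + 8.0 * z + rng.normal(0.0, 3.0, 240)
precip = stats.gamma.rvs(2.0, scale=20.0 * np.exp(-0.3 * z), random_state=rng)

def normal_scores(x):
    # Map a margin to normal scores via its empirical ranks (probability scale).
    ranks = stats.rankdata(x) / (len(x) + 1)
    return stats.norm.ppf(ranks)

# Gaussian copula fit: the dependence parameter is the correlation of the scores.
rho = np.corrcoef(normal_scores(temp), normal_scores(precip))[0, 1]

# Joint simulation from the fitted copula, with empirical quantile margins.
u = stats.norm.cdf(rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=1000))
sim_temp = np.quantile(temp, u[:, 0])
sim_precip = np.quantile(precip, u[:, 1])
print(rho)
```

The Frank copula used for the RCP2.6 data would replace the Gaussian step with a Frank generator; the margin handling is the same.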

