Predicting Intrauterine Transfusion Interval and Perinatal Outcomes in Alloimmunized Pregnancies: Time-to-Event Survival Analysis

2019 ◽  
Vol 46 (6) ◽  
pp. 425-432
Author(s):  
John W. Snelgrove ◽  
Rohan D’Souza ◽  
P. Gareth R. Seaward ◽  
Rory Windrim ◽  
Edmond N. Kelly ◽  
...  
2020 ◽  
pp. 181-218
Author(s):  
Bendix Carstensen

This chapter describes survival analysis, which concerns data where the outcome is a length of time: the time from inclusion in the study (for example, at diagnosis of some disease) until death or some other event, hence the alternative term 'time-to-event analysis'. Two primary targets are normally addressed in survival analysis: survival probabilities and event rates. The chapter then looks at the life-table estimator of the survival function and the Kaplan–Meier estimator of survival. It also considers the Cox model and its relationship with Poisson models, as well as the Fine–Gray approach to competing risks.
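The Kaplan–Meier estimator mentioned above multiplies, at each observed event time, the fraction of at-risk subjects who survive it; censored subjects leave the risk set without contributing an event. A minimal sketch in plain Python (illustrative only, not the chapter's code; it assumes the usual convention that events at a tied time precede censorings):

```python
# Kaplan-Meier product-limit estimator (illustrative sketch).
# times: event/censoring times; observed: True if the event occurred, False if censored.
def kaplan_meier(times, observed):
    data = sorted(zip(times, observed))
    n_at_risk = len(data)
    surv = 1.0
    curve = []  # (time, estimated survival probability) at each event time
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = 0
        removed = 0
        while i < len(data) and data[i][0] == t:  # group ties at time t
            deaths += data[i][1]
            removed += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / n_at_risk  # fraction surviving time t
            curve.append((t, surv))
        n_at_risk -= removed  # events and censorings both leave the risk set
    return curve

# Six subjects; censored at times 3 and 8.
# Survival drops to 5/6, 2/3, 4/9 and 2/9 at times 2, 3, 5 and 8.
curve = kaplan_meier([2, 3, 3, 5, 8, 8], [True, True, False, True, False, True])
```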


Plants ◽  
2020 ◽  
Vol 9 (5) ◽  
pp. 617
Author(s):  
Alessandro Romano ◽  
Piergiorgio Stevanato

Germination data are analyzed by several methods, mainly germination indexes and traditional regression techniques that fit non-linear parametric functions to the temporal sequence of cumulative germination. Because germination data differ in nature from many other biological data, these methods have limitations, especially when ungerminated seeds remain at the end of an experiment. A class of methods that can address these issues is the so-called “time-to-event analysis”, better known in other scientific fields as “survival analysis” or “reliability analysis”. There is relatively little literature on the application of these methods to germination data, and existing reviews have covered only subsets of the possible approaches, either non-parametric and semi-parametric or parametric ones. The present study aims to contribute to knowledge of the reliability of these methods by applying all the main approaches to the same germination data, obtained from cohorts of sugar beet (Beta vulgaris L.) seeds. The results confirmed that, although the different approaches have advantages and disadvantages, they generally represent a valuable tool for analyzing germination data, providing parameters whose usefulness depends on the purpose of the research.
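The key advantage of the time-to-event framing over germination indexes is that seeds still ungerminated when the experiment ends enter the analysis as right-censored observations instead of being discarded. A hypothetical sketch of this data-preparation step, with function and variable names of our own invention rather than the study's:

```python
# Convert cumulative germination counts at scheduled checks into
# individual time-to-event records (time, event_observed).
def to_time_to_event(check_days, cumulative_germinated, n_seeds):
    records = []
    prev = 0
    for day, cum in zip(check_days, cumulative_germinated):
        for _ in range(cum - prev):
            records.append((day, True))   # germinated in (previous check, day]
        prev = cum
    for _ in range(n_seeds - prev):
        # Never germinated: right-censored at the last observation day.
        records.append((check_days[-1], False))
    return records

# 20 seeds scored on days 2, 4, 7 and 10; 16 germinate, 4 never do.
recs = to_time_to_event([2, 4, 7, 10], [5, 11, 15, 16], 20)
```

Strictly speaking the germination times are interval-censored (a seed germinated somewhere between two checks); treating them as exact at the check day is a common simplification that parametric time-to-event models can relax.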


2019 ◽  
Vol 40 (Supplement_1) ◽  
Author(s):  
L E Juarez-Orozco ◽  
J W Benjamins ◽  
T Maaniitty ◽  
A Saraste ◽  
P Van Der Harst ◽  
...  

Abstract Background Deep Learning (DL) is revolutionizing cardiovascular medicine through complex data-pattern recognition. In spite of its success in the diagnosis of coronary artery disease (CAD), DL implementation for prognostic evaluation of cardiovascular events is still limited. Traditional survival models (e.g. Cox regression) notably incorporate the effect of time-to-event but are unable to exploit complex non-linear dependencies between large numbers of predictors. On the other hand, DL has not yet systematically incorporated time-to-event for prognostic evaluations. Long-term registries of hybrid PET/CT imaging represent a suitable substrate for DL-based survival analysis due to the large number of time-dependent structured variables that they convey. Therefore, we sought to evaluate the feasibility and performance of DL survival analysis in predicting the occurrence of myocardial infarction (MI) and death in a long-term registry of cardiac hybrid PET/CT. Methods Data from our PET/CT registry of symptomatic patients with intermediate CAD risk who underwent sequential CT angiography and 15O-water PET for suspected ischemia were analyzed. The sample was followed for an average of 6 years for MI or death. Ten clinical variables were extracted from electronic records, including cardiovascular risk factors, dyspnea and early revascularization. CT angiography images were evaluated segmentally for presence of plaque, percentage of luminal stenosis and calcification (58 variables). Absolute stress PET myocardial perfusion was evaluated globally and regionally across vascular territories (4 variables). Cox-Nnet (a deep survival neural network) was implemented in a 5-fold cross-validated 80:20 split for training and testing. The resulting DL hazard ratios were operationalized and compared with the events observed during follow-up.
The performance of Cox-Nnet evaluating structured CT, PET/CT, and PET/CT+clinical variables was compared with expert interpretation (operationalized as normal coronaries, non-obstructive CAD, or obstructive CAD) and with the calcium score (CaSc), using the concordance (c)-index. Results There were 426 men and 525 women with a mean age of 61±9 years. Twenty-four MIs and 49 deaths occurred during follow-up (1 month to 9.6 years), while 11.5% of patients underwent early revascularization. Cox-Nnet evaluation of PET/CT data (c-index=0.75) outperformed categorical expert interpretation (c-index=0.54) and CaSc (c-index=0.65), while the PET/CT and PET/CT+clinical variable sets (both c-index=0.75) demonstrated incremental performance independent of early revascularization. Conclusion Deep learning survival analysis is feasible in the evaluation of cardiovascular prognostic data. It might enhance the value of cardiac hybrid PET/CT imaging data for predicting the long-term development of myocardial infarction and death. Further research into the implementation of deep learning for prognostic analyses in CAD is warranted.
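Deep survival networks of the kind described here are typically trained by minimising the negative Cox partial log-likelihood, with the network's scalar output per patient playing the role of the Cox linear predictor. A simplified sketch of that loss (Breslow-style risk sets; the function and argument names are ours, and this is not the authors' implementation):

```python
import math

# Negative Cox partial log-likelihood over a batch of subjects.
# risk_scores: network outputs (one scalar per subject);
# times: follow-up times; events: 1 if the event occurred, 0 if censored.
def neg_cox_partial_loglik(risk_scores, times, events):
    order = sorted(range(len(times)), key=lambda i: times[i])  # ascending time
    loss = 0.0
    for pos, i in enumerate(order):
        if not events[i]:
            continue  # censored subjects contribute only through risk sets
        # Risk set: everyone whose follow-up time is >= times[i].
        denom = sum(math.exp(risk_scores[j]) for j in order[pos:])
        loss -= risk_scores[i] - math.log(denom)
    return loss

# Two subjects with equal scores and one event each: loss = ln 2.
loss = neg_cox_partial_loglik([0.0, 0.0], [1, 2], [1, 1])
```

Raising the score of a subject who experiences the event early lowers the loss, which is what pushes the network toward concordant risk rankings, the property the c-index measures.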


2021 ◽  
pp. bmjebm-2021-111743
Author(s):  
Joanna Moncrieff ◽  
Janus Christian Jakobsen ◽  
Max Bachmann

Survival analysis is routinely used to assess differences between groups in relapse prevention and treatment discontinuation studies involving people with long-term psychiatric conditions. The actual outcome in survival analysis is ‘time to event’, yet, in the mental health field, there has been little consideration of whether a temporary delay to relapse is clinically relevant in a condition that can last for decades. Moreover, in psychiatric drug trials, a pattern of elevated early relapses following randomisation to placebo or no treatment is common. This may be the result of the withdrawal of previous treatment leading to physiological withdrawal effects, which may be mistaken for relapse, or to genuine relapse precipitated by the process of withdrawal. Such withdrawal effects typically produce survival curves that eventually converge. They inevitably lead to differences in time to relapse, even when there is little or no difference in the cumulative risk of relapse at final follow-up. Therefore, statistical tests based on survival analyses can be misleading because they obscure these withdrawal effects. We illustrate these difficulties using a trial of antipsychotic reduction versus maintenance and a trial of prophylactic esketamine in people with treatment-resistant depression. Both illustrate withdrawal-related effects that underline the importance of long-term follow-up and question the use of tests based on time to event. Further discussion of the most relevant outcome and appropriate approach to analysis, and research on patient and carer preferences, is important to inform the design of future trials and the interpretation of existing ones.


2019 ◽  
Vol 40 (5) ◽  
pp. 649-653
Author(s):  
Ayşe Özge Şavkli ◽  
Berna Aslan Çetin ◽  
Zuat Acar ◽  
Zeynep Özköse ◽  
Mustafa Behram ◽  
...  

2020 ◽  
Vol 46 (8) ◽  
pp. 1319-1325
Author(s):  
Masako Kanda ◽  
Shohei Noguchi ◽  
Ryo Yamamoto ◽  
Haruna Kawaguchi ◽  
Shusaku Hayashi ◽  
...  

2018 ◽  
Vol 71 (3) ◽  
pp. 182-191
Author(s):  
Junyong In ◽  
Dong Kyu Lee
