Secondary Analysis under Cohort Sampling Designs Using Conditional Likelihood

2012 ◽  
Vol 2012 ◽  
pp. 1-37 ◽  
Author(s):  
Olli Saarela ◽  
Sangita Kulathinal ◽  
Juha Karvanen

Under cohort sampling designs, additional covariate data are collected on cases of a specific type and a randomly selected subset of noncases, primarily for the purpose of studying associations with a time-to-event response of interest. With such data available, an interest may arise to reuse them for studying associations between the additional covariate data and a secondary non-time-to-event response variable, usually collected for the whole study cohort at the outset of the study. Following earlier literature, we refer to such a situation as secondary analysis. We outline a general conditional likelihood approach for secondary analysis under cohort sampling designs and discuss the specific situations of case-cohort and nested case-control designs. We also review alternative methods based on full likelihood and inverse probability weighting. We compare the alternative methods for secondary analysis in two simulated settings and apply them in a real-data example.
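
The paper reviews inverse probability weighting (IPW) as one alternative to its conditional likelihood approach. Below is a minimal sketch of that IPW alternative under a simulated case-cohort design with a known subcohort sampling fraction; the setup, names, and parameters are illustrative assumptions, not the authors' code.

```python
# Minimal sketch: IPW secondary analysis under a simulated case-cohort design.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 10_000                                    # full cohort size
z = rng.normal(size=n)                        # expensive phase-2 covariate
y2 = 1.0 + 0.5 * z + rng.normal(size=n)       # secondary non-time-to-event response

# Case status depends on both z and y2, so sampled noncases are not
# representative and an unweighted complete-case analysis would be biased.
case = rng.binomial(1, 1 / (1 + np.exp(4 - z - y2)))

# Phase 2: all cases plus a random subcohort of noncases.
p_sub = 0.1
in_sub = rng.uniform(size=n) < p_sub
sampled = (case == 1) | in_sub

# Known inclusion probabilities give the IPW weights.
pi = np.where(case == 1, 1.0, p_sub)
w = 1.0 / pi[sampled]

X = sm.add_constant(z[sampled])
# Weighted least squares with these weights solves the IPW estimating
# equations; HC0 gives robust standard errors.
fit = sm.WLS(y2[sampled], X, weights=w).fit(cov_type="HC0")
print(fit.params)   # should be near (1.0, 0.5)
```

The conditional and full likelihood approaches the paper focuses on are design-specific and are not sketched here.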

2018 ◽  
Vol 28 (8) ◽  
pp. 2538-2556 ◽  
Author(s):  
Eleni Vradi ◽  
Thomas Jaki ◽  
Richardus Vonk ◽  
Werner Brannath

To enable targeted therapies and enhance medical decision-making, biomarkers are increasingly used as screening and diagnostic tests. When quantitative biomarkers are used for classification, this often means that an appropriate cutoff for the biomarker has to be determined and its clinical utility assessed. In the context of drug development, it is of interest how the probability of response changes with increasing values of the biomarker. Unlike sensitivity and specificity, predictive values are functions of both the accuracy of the test and the prevalence of the disease, and are therefore a useful tool in this setting. In this paper, we propose a Bayesian method that not only estimates the cutoff value using the negative and positive predictive values, but also quantifies the uncertainty around this estimate. Bayesian inference allows us to incorporate prior information and to obtain posterior estimates and credible intervals for the cutoff and the associated predictive values. The performance of the Bayesian approach is compared with alternative methods via simulation studies of bias, interval coverage, and interval width; illustrations on real data with binary and time-to-event outcomes are provided.
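
As a minimal sketch of a Bayesian cutoff search driven by the positive predictive value, assuming independent Beta(1, 1) priors, a grid search, and a target PPV of 0.8 (all assumptions for illustration, not the authors' method):

```python
# Minimal sketch: find the smallest cutoff whose posterior PPV meets a target.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 500
marker = rng.normal(size=n)
response = rng.binomial(1, 1 / (1 + np.exp(-2 * (marker - 0.5))))

target_ppv = 0.8
grid = np.quantile(marker, np.linspace(0.05, 0.95, 91))
for c in grid:
    pos = marker >= c
    tp, fp = response[pos].sum(), (1 - response[pos]).sum()
    ppv_post = stats.beta(1 + tp, 1 + fp)    # conjugate posterior for PPV
    if ppv_post.median() >= target_ppv:
        lo, hi = ppv_post.ppf([0.025, 0.975])
        print(f"cutoff ~ {c:.2f}, posterior PPV {ppv_post.median():.2f} "
              f"(95% CrI {lo:.2f}-{hi:.2f})")
        break
else:
    print("no cutoff meets the target PPV")
```

A fuller treatment, as in the paper, would also propagate uncertainty into the cutoff itself rather than fixing the first grid value that meets the target.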


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Francesca Graziano ◽  
Maria Grazia Valsecchi ◽  
Paola Rebora

Abstract Background The availability of large epidemiological or clinical databases storing biological samples allows the prognostic value of novel biomarkers to be studied, but efficient designs are needed to select the subsample on which to measure them, for reasons of parsimony and cost. Two-phase stratified sampling is a flexible approach to such sub-sampling, but literature on the choice of stratification variables and on power evaluation is lacking, especially for survival data. Methods We compared the performance of different sampling designs for assessing the prognostic value of a new biomarker on a time-to-event endpoint, applying a Cox model weighted by the inverse of the empirical inclusion probability. Results Our simulation results suggest that case-control sampling stratified (or post-stratified) by a surrogate variable of the marker can yield higher performance than simple random, probability-proportional-to-size, and unstratified case-control sampling. In the presence of a high censoring rate, the results showed an advantage of nested case-control and counter-matching designs in terms of design effect, although the use of a fixed ratio between cases and controls might be disadvantageous. On real data on childhood acute lymphoblastic leukemia, we found that optimal sampling using pilot data is highly efficient. Conclusions Our study suggests that case-control sampling stratified by a surrogate and nested case-control sampling yield estimates and power comparable to those obtained in the full cohort while strongly decreasing the number of patients required. We recommend planning the sample size and using such sampling designs when exploring novel biomarkers in clinical cohort data.
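
As an illustration of the weighted analysis described in the Methods, the following sketch simulates phase-2 case-control sampling stratified by a surrogate of the marker and fits a Cox model weighted by the inverse empirical inclusion probability (here via lifelines' weights_col); the strata definitions and sampling fractions are illustrative assumptions:

```python
# Minimal sketch: surrogate-stratified phase-2 sampling + weighted Cox model.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 5_000
surrogate = rng.binomial(1, 0.3, size=n)          # cheap phase-1 variable
marker = rng.normal(loc=surrogate, scale=1.0)     # expensive biomarker
time = rng.exponential(1 / np.exp(0.5 * marker))
event = (time < 2.0).astype(int)
time = np.minimum(time, 2.0)                      # administrative censoring

df = pd.DataFrame({"time": time, "event": event,
                   "surrogate": surrogate, "marker": marker})

# Phase 2: keep all cases; sample controls within surrogate strata.
frac = {0: 0.15, 1: 0.30}
keep = df["event"].eq(1).to_numpy()
for s, f in frac.items():
    ctrl = (df["event"] == 0) & (df["surrogate"] == s)
    keep = keep | (ctrl.to_numpy() & (rng.uniform(size=n) < f))

sub = df[keep].copy()
# Empirical inclusion probability within each (event, surrogate) cell.
cell_n = df.groupby(["event", "surrogate"]).size()
cell_m = sub.groupby(["event", "surrogate"]).size()
pi = (cell_m / cell_n).rename("pi")
sub = sub.join(pi, on=["event", "surrogate"])
sub["w"] = 1.0 / sub["pi"]

cph = CoxPHFitter()
cph.fit(sub[["time", "event", "marker", "w"]],
        duration_col="time", event_col="event",
        weights_col="w", robust=True)              # robust SEs for weighted fit
print(cph.summary[["coef", "se(coef)"]])
```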


Author(s):  
Berend Terluin ◽  
Ewa M. Roos ◽  
Caroline B. Terwee ◽  
Jonas B. Thorlund ◽  
Lina H. Ingelsrud

Abstract Purpose The minimal important change (MIC) of a patient-reported outcome measure (PROM) is often suspected to be baseline dependent, typically in the sense that patients who are in a poorer baseline health condition need greater improvement to qualify as minimally important. Testing MIC baseline dependency is commonly performed by creating two or more subgroups, stratified on the baseline PROM score. This study’s purpose was to show that this practice produces biased subgroup MIC estimates resulting in spurious MIC baseline dependency, and to develop alternative methods to evaluate MIC baseline dependency. Methods Datasets with PROM baseline and follow-up scores and transition ratings were simulated with and without MIC baseline dependency. Mean change MICs, ROC-based MICs, predictive MICs, and adjusted MICs were estimated before and after stratification on the baseline score. Three alternative methods were developed and evaluated. The methods were applied in a real data example for illustration. Results Baseline stratification resulted in biased subgroup MIC estimates and the false impression of MIC baseline dependency, due to redistribution of measurement error. Two of the alternative methods require a second baseline measurement with the same PROM or another correlated PROM. The third method involves the construction of two parallel tests based on splitting the PROM’s item set. Two methods could be applied to the real data. Conclusion MIC baseline dependency should not be tested in subgroups based on stratification on the baseline PROM score. Instead, one or more of the suggested alternative methods should be used.
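
The redistribution-of-measurement-error mechanism identified in the Results can be made concrete with a short simulation: even when every patient improves by the same true amount, stratifying on an error-prone observed baseline makes the observed subgroup changes diverge (regression to the mean). All parameters below are illustrative:

```python
# Minimal sketch: spurious baseline dependency of observed change under
# a constant true change, caused by measurement error in the baseline.
import numpy as np

rng = np.random.default_rng(11)
n = 20_000
true0 = rng.normal(50, 10, size=n)             # true baseline health
obs0 = true0 + rng.normal(0, 5, size=n)        # observed baseline (with error)
change = 8.0                                   # identical true change for everyone
obs1 = true0 + change + rng.normal(0, 5, size=n)

low = obs0 < np.median(obs0)                   # stratify on observed baseline
print("mean observed change, low-baseline group :",
      (obs1 - obs0)[low].mean())               # > 8: negative errors at baseline
print("mean observed change, high-baseline group:",
      (obs1 - obs0)[~low].mean())              # < 8: positive errors at baseline
```

The same redistribution of error biases subgroup MIC estimates, which is why the paper recommends against baseline-stratified subgroups.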


2021 ◽  
pp. 223-225
Author(s):  
Dhara Singh ◽  
Sujata Bhargava

Background: Recent World Health Organization (WHO) guidelines recommend administering tranexamic acid (TXA) to treat postpartum hemorrhage (PPH); finding low-cost, low-risk alternative methods to control obstetric bleeding is therefore of great importance. The present study aimed to evaluate the prophylactic effect of TXA on bleeding during and after lower segment caesarean section (LSCS), and to explore TXA as a safe and inexpensive means of reducing bleeding during and after caesarean section so as to decrease the risk of blood transfusion or hysterectomy in these patients. Material and Methods: This prospective study was conducted on 100 women in the Department of Obstetrics and Gynecology over a one-year period. They were divided into two groups: cases (n=50; women receiving prophylactic tranexamic acid) and controls (n=50; women receiving saline). Blood loss during surgery was estimated gravimetrically, with the weight of dry towels and mops recorded before autoclaving. Results: The most common age group in both cases and controls was 26-30 years. Mean age in the case group (26.69±7.51 years) was significantly lower than in the control group (29.75±7.72 years). Postoperative hemoglobin level was significantly higher in the case group (11.26±12.03) than in the control group (8.56±1.01). Comparison of postoperative complications revealed no significant differences. Use of topical hemostatic agents was higher in the control group (77%) than in the case group (57%). Conclusion: Prophylactic treatment with TXA at elective LSCS reduces total blood loss and the risk of reoperation owing to postoperative hemorrhage, as reflected in the higher postoperative hemoglobin level among cases.
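
For readers unfamiliar with the gravimetric blood-loss estimation used in the Methods, a minimal sketch follows; the 1.06 g/mL blood-density constant is a common convention and an assumption here, not stated in the abstract:

```python
# Minimal sketch: gravimetric blood-loss estimation from towel/mop weights.
dry_weights_g = [120.0, 118.5, 250.0]   # dry towels/mops, weighed pre-op
wet_weights_g = [310.0, 295.0, 540.0]   # the same items, weighed post-op
suction_ml = 150.0                      # measured suction-bottle volume

BLOOD_DENSITY_G_PER_ML = 1.06           # assumed convention, ~1 g per mL
absorbed_ml = sum(w - d for w, d in
                  zip(wet_weights_g, dry_weights_g)) / BLOOD_DENSITY_G_PER_ML
total_loss_ml = absorbed_ml + suction_ml
print(f"estimated blood loss: {total_loss_ml:.0f} mL")
```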


2020 ◽  
Author(s):  
Christopher Partlett ◽  
Nigel J Hall ◽  
Alison Leaf ◽  
Edmund Juszczak ◽  
Louise Linsell

Abstract Background A nested case-control study is an efficient design that can be embedded within an existing cohort study or randomised trial. It has a number of advantages compared to the conventional case-control design, and has the potential to answer important research questions using untapped prospectively collected data. Methods We demonstrate the utility of the matched nested case-control design by applying it to a secondary analysis of the Abnormal Doppler Enteral Prescription Trial. We investigated the role of milk feed type and changes in milk feed type in the development of necrotising enterocolitis in a group of 398 high risk growth-restricted preterm infants. Results Using matching, we were able to generate a comparable sample of controls selected from the same population as the cases. In contrast to the standard case-control design, exposure status was ascertained prior to the outcome event occurring and the comparison between the cases and matched controls could be made at the point at which the event occurred. This enabled us to reliably investigate the temporal relationship between feed type and necrotising enterocolitis. Conclusions A matched nested case-control study can be used to identify credible associations in a secondary analysis of clinical trial data where the exposure of interest was not randomised, and has several advantages over a standard case-control design. This method offers the potential to make reliable inferences in scenarios where it would be unethical or impractical to perform a randomised clinical trial.
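
As a minimal sketch of the risk-set (incidence density) matching underlying a matched nested case-control design, assuming simulated follow-up data (column names and the 1:4 matching ratio are illustrative, not the trial's specification):

```python
# Minimal sketch: risk-set matching for a nested case-control analysis.
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
n = 398
df = pd.DataFrame({
    "id": np.arange(n),
    "event_day": rng.integers(5, 60, size=n),   # day of NEC or censoring
    "nec": rng.binomial(1, 0.12, size=n),       # case indicator
})

pairs = []
for _, case in df[df["nec"] == 1].iterrows():
    day = case["event_day"]
    # Risk set: still event-free and under follow-up at the case's event day
    # (later cases are eligible controls at this earlier time).
    at_risk = df[(df["event_day"] > day) |
                 ((df["event_day"] >= day) & (df["nec"] == 0))]
    k = min(4, len(at_risk))
    controls = at_risk.sample(k, random_state=int(rng.integers(2**31)))
    pairs.append({"case_id": case["id"], "control_ids": list(controls["id"])})

matched = pd.DataFrame(pairs)
print(matched.head())
```

In practice the risk set would also be restricted by the matching factors (for example gestational age), and exposure status would be compared within each matched set at the case's event time.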


Neurology ◽  
2020 ◽  
Vol 95 (24) ◽  
pp. e3241-e3247 ◽  
Author(s):  
Maria Stefanidou ◽  
Alexa S. Beiser ◽  
Jayandra Jung Himali ◽  
Teng J. Peng ◽  
Orrin Devinsky ◽  
...  

Objective To assess the risk of incident epilepsy among participants with prevalent dementia and the risk of incident dementia among participants with prevalent epilepsy in the Framingham Heart Study (FHS). Methods We analyzed prospectively collected data in the Original and Offspring FHS cohorts. To determine the risk of developing epilepsy among participants with dementia and the risk of developing dementia among participants with epilepsy, we used separate, nested, case–control designs and matched each case to 3 age-, sex- and FHS cohort–matched controls. We used Cox proportional hazards regression analysis, adjusting for sex and age. In secondary analysis, we investigated the role of education level and APOE ε4 allele status in modifying the association between epilepsy and dementia. Results A total of 4,906 participants had information on epilepsy and dementia and dementia follow-up after age 65. Among 660 participants with dementia and 1,980 dementia-free controls, there were 58 incident epilepsy cases during follow-up. Analysis comparing epilepsy risk among dementia cases vs controls yielded a hazard ratio (HR) of 1.82 (95% confidence interval 1.05–3.16, p = 0.034). Among 43 participants with epilepsy and 129 epilepsy-free controls, there were 51 incident dementia cases. Analysis comparing dementia risk among epilepsy cases vs controls yielded an HR of 1.99 (1.11–3.57, p = 0.021). In this group, among participants with any post–high school education, prevalent epilepsy was associated with a nearly 5-fold risk of developing dementia (HR 4.67 [1.82–12.01], p = 0.001) compared to controls of the same educational attainment. Conclusions There is a bi-directional association between epilepsy and dementia, with either condition carrying a nearly 2-fold risk of developing the other when compared to controls.
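
As a hedged sketch of the kind of effect-modification analysis described (a Cox model with an epilepsy-by-education interaction), using simulated placeholder data rather than FHS data or the authors' matched specification:

```python
# Minimal sketch: Cox model with an epilepsy x education interaction term.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(9)
n = 2_000
epilepsy = rng.binomial(1, 0.05, size=n)
post_hs = rng.binomial(1, 0.5, size=n)        # any post-high-school education
age = rng.uniform(65, 85, size=n)
sex = rng.binomial(1, 0.55, size=n)

lp = 0.6 * epilepsy + 0.9 * epilepsy * post_hs + 0.02 * (age - 65)
t = rng.exponential(20 / np.exp(lp))
event = (t < 15).astype(int)                  # dementia within follow-up
t = np.minimum(t, 15.0)

df = pd.DataFrame({"time": t, "dementia": event, "epilepsy": epilepsy,
                   "post_hs": post_hs, "epi_x_edu": epilepsy * post_hs,
                   "age": age, "sex": sex})
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="dementia")
print(cph.summary[["coef", "exp(coef)", "p"]])  # interaction row shows modification
```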


2012 ◽  
Vol 30 (26) ◽  
pp. 3297-3303 ◽  
Author(s):  
Joseph G. Ibrahim ◽  
Haitao Chu ◽  
Ming-Hui Chen

Missing data are a pervasive problem in any type of data analysis. A participant variable is considered missing if the value of the variable (outcome or covariate) for the participant is not observed. In this article, various issues in analyzing studies with missing data are discussed. In particular, we focus on missing response and/or covariate data for studies with discrete, continuous, or time-to-event end points in which generalized linear models, models for longitudinal data such as generalized linear mixed effects models, or Cox regression models are used. We discuss various classifications of missing data that may arise in a study and demonstrate in several situations that the commonly used method of throwing out all participants with any missing data may lead to incorrect results and conclusions. The methods described are applied to data from an Eastern Cooperative Oncology Group phase II clinical trial of liver cancer and a phase III clinical trial of advanced non–small-cell lung cancer. Although the main area of application discussed here is cancer, the issues and methods we discuss apply to any type of study.
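
The complete-case pitfall the article warns about is easy to demonstrate: when a covariate's missingness depends on observed outcomes, throwing out incomplete participants biases the regression estimate. A minimal simulated sketch (illustrative data, not the trials' data):

```python
# Minimal sketch: complete-case analysis is biased when missingness in a
# covariate depends on the outcome.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 5_000
x = rng.normal(size=n)
y = 2.0 + 1.0 * x + rng.normal(size=n)

# Covariate more likely to be missing when the outcome is large.
miss = rng.uniform(size=n) < 1 / (1 + np.exp(-(y - 2)))
x_obs = np.where(miss, np.nan, x)

cc = ~np.isnan(x_obs)                       # keep complete cases only
fit = sm.OLS(y[cc], sm.add_constant(x_obs[cc])).fit()
print("complete-case slope:", fit.params[1])  # biased (attenuated) vs. true 1.0
```

Likelihood-based and imputation approaches of the kind the article describes use the incomplete participants as well, avoiding this selection bias under appropriate missingness assumptions.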

