Evaluating Public Health Interventions: 8. Causal Inference for Time-Invariant Interventions

2018 ◽  
Vol 108 (9) ◽  
pp. 1187-1190 ◽  
Author(s):  
Donna Spiegelman ◽  
Xin Zhou
Author(s):  
Michelle Degli Esposti ◽  
Thees Spreckelsen ◽  
Antonio Gasparrini ◽  
Douglas J Wiebe ◽  
Carl Bonander ◽  
...  

Abstract
Interrupted time series designs are a valuable quasi-experimental approach for evaluating public health interventions. An interrupted time series extends a single-group pre-post comparison by using multiple time points to control for underlying trends. But history bias, that is, confounding by unexpected events occurring at the same time as the intervention, threatens the validity of this design and limits causal inference. Synthetic control methodology, a popular data-driven technique for deriving a control series from a pool of unexposed populations, is increasingly recommended. In this paper, we evaluate whether and when synthetic controls can strengthen an interrupted time series design. First, we summarize the main observational study designs used in evaluative research, highlighting their respective uses, strengths, biases, and design extensions for addressing these biases. Second, we outline when the use of synthetic controls can strengthen interrupted time series studies and when their combined use may be problematic. Third, we provide recommendations for using synthetic controls in interrupted time series and, using a real-world example, illustrate the potential pitfalls of using a data-driven approach to identify a suitable control series. Finally, we emphasize the importance of theoretical approaches for informing study design and argue that synthetic control methods are not always well suited to generating a counterfactual that minimizes critical threats to interrupted time series studies. Advances in synthetic control methods bring new opportunities to conduct rigorous research in evaluating public health interventions. However, incorporating synthetic controls in interrupted time series studies may not always eliminate important threats to validity nor improve causal inference.
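The interrupted time series design the abstract describes is commonly analyzed with a segmented regression: a secular trend plus level-change and slope-change terms at the intervention date. A minimal sketch of that model on simulated data (all numbers are invented for illustration; this code is not from the paper):

```python
import numpy as np

# Simulate 24 pre- and 12 post-intervention monthly outcomes with an
# upward secular trend and an immediate drop of 5 units at the intervention.
rng = np.random.default_rng(0)
t = np.arange(36.0)
post = (t >= 24).astype(float)           # step indicator for the intervention
time_since = np.clip(t - 24, 0, None)    # time elapsed since the intervention
y = 10 + 0.5 * t - 5.0 * post + rng.normal(0, 0.5, size=t.size)

# Design matrix: intercept, secular trend, level change, slope change.
X = np.column_stack([np.ones_like(t), t, post, time_since])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

level_change = beta[2]  # estimated immediate intervention effect (near -5 here)
```

History bias enters exactly here: if a co-occurring event also shifts `y` at month 24, it is absorbed into `level_change`, which is what a well-chosen (or synthetic) control series is meant to guard against.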



2020 ◽  
Author(s):  
Qiyang Ge ◽  
Zixin Hu ◽  
Shudi Li ◽  
Wei Lin ◽  
Li Jin ◽  
...  

ABSTRACT
Objective: To develop AI- and causal inference-inspired methods for forecasting and evaluating the effects of public health interventions on curbing the spread of Covid-19.
Methods: We developed a recurrent neural network (RNN) for modeling the transmission dynamics of the epidemic and a Counterfactual-RNN (CRNN) for evaluating and exploring public health intervention strategies to slow the spread of Covid-19 worldwide. We applied the developed methods to real-time forecasting of confirmed Covid-19 cases across the world. The data were collected from January 22 to April 18, 2020 by the Johns Hopkins Coronavirus Resource Center (https://coronavirus.jhu.edu/MAP.HTML).
Results: The average error of 1-step to 10-step forecasting was 2.9%. In the absence of a Covid-19 vaccine, we evaluated the potential effects of a number of public health measures. With additional interventions implemented one week later, the estimated peak numbers of new cases and of cumulative cases, and the maximum number of cumulative cases worldwide, were reduced to 103,872, 2,104,800, and 2,271,648, respectively. With additional interventions implemented three weeks later, the estimated peak numbers of new and cumulative cases would be the same as above, but the maximum number of cumulative cases worldwide would be 3,864,872, and the duration of the Covid-19 spread would increase from 91 days to 123 days. Our estimates indicated that we were on the eve of stopping the spread of Covid-19 worldwide; however, transmission would quickly rebound if interventions were relaxed.
Conclusions: The accuracy of the AI-based methods for forecasting the trajectory of Covid-19 was high. AI- and causal inference-inspired methods are a powerful tool for public health planning and policymaking. We concluded that the spread of Covid-19 would be stopped very soon.
Highlights:
- As the Covid-19 pandemic soars around the world, there is an urgent need to forecast the number of cases worldwide at its peak and the length of the pandemic before receding, and to implement public health interventions that significantly stop the spread of Covid-19.
- We developed artificial intelligence (AI) and causal inference-inspired methods for real-time forecasting and evaluation of interventions on the worldwide trajectory of the spread of Covid-19.
- We estimated the maximum number of cumulative cases under immediate additional intervention to be 2,271,648; under later additional intervention the number increased to 3,864,872, and the case ending time would be May 25, 2020.
- Without additional intervention, the spread of Covid-19 would be stopped on July 6, 2020.
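The CRNN evaluation rests on a general counterfactual-forecasting idea: fit a model to the pre-intervention trajectory, project it forward as the "no intervention" scenario, and compare with what was observed. A deliberately simplified sketch of that logic, using a log-linear fit in place of the paper's RNN and entirely invented case counts:

```python
import numpy as np

# Synthetic cumulative case series: 20% daily growth before an intervention
# at day 10, 5% daily growth afterwards (illustrative numbers only).
days = np.arange(20)
pre = days < 10
cases = np.where(pre,
                 100 * np.exp(0.20 * days),
                 100 * np.exp(0.20 * 9) * np.exp(0.05 * (days - 9)))

# Fit exponential growth to the pre-intervention segment only.
slope, intercept = np.polyfit(days[pre], np.log(cases[pre]), 1)

# Counterfactual trajectory: pre-intervention growth continuing unchecked.
counterfactual = np.exp(intercept + slope * days)

# Estimated cases averted by the intervention at the end of the window.
averted = counterfactual[-1] - cases[-1]
```

The gap between `counterfactual` and `cases` at each day is the estimated intervention effect; the paper's CRNN plays the role of the fitted model here, capturing far richer transmission dynamics than a single growth rate.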


Author(s):  
Sharon Schwartz ◽  
Nicolle M. Gatto

Epidemiology is often described as the basic science of public health. A mainstay of epidemiologic research is to uncover the causes of disease that can serve as the basis for successful public health interventions (e.g., Institute of Medicine, 1988; Milbank Memorial Fund Commission, 1976). A major obstacle to attaining this goal is that causes can never be seen but only inferred. For this reason, the inferences drawn from our studies must always be interpreted with caution.

Considerable progress has been made in the methods required for sound causal inference. Much of this progress is rooted in a full and rich articulation of the logic behind randomized controlled trials (Holland, 1986). From this work, epidemiologists have a much better understanding of barriers to causal inference in observational studies, such as confounding and selection bias, and their tools and concepts are much more refined. The models behind this progress are often referred to as "counterfactual" models. Although researchers may be unfamiliar with them, they are widely (although not universally) accepted in the field. Counterfactual models underlie the methodologies that we all use.

Within epidemiology, when people talk about a counterfactual model, they usually mean a potential outcomes model, also known as "Rubin's causal model." As laid out by epidemiologists, the potential outcomes model is rooted in the experimental ideas of Cox and Fisher, for which Neyman provided the first mathematical expression. It was popularized by Rubin, who extended it to observational studies, and expanded by Robins to exposures that vary over time (Maldonado & Greenland, 2002; Hernan, 2004; VanderWeele & Hernan, 2006). This rich tradition is responsible for much of the progress we have just noted.

Despite this progress in methods of causal inference, a common charge in the epidemiologic literature is that public health interventions based on the causes we identify in our studies often fail.
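The potential outcomes framework the passage describes can be made concrete with a toy example: every unit has two potential outcomes, Y(1) under exposure and Y(0) without it, but a study only ever observes one of the pair. All numbers below are invented for illustration:

```python
# Potential outcomes for four hypothetical individuals.
y1 = [3, 5, 4, 6]          # Y(1): outcome if exposed
y0 = [2, 2, 3, 3]          # Y(0): outcome if unexposed
exposed = [1, 0, 1, 0]     # who was actually exposed

# With (hypothetical) access to both potential outcomes, the average
# causal effect is the mean of the individual effects Y(1) - Y(0).
ate = sum(a - b for a, b in zip(y1, y0)) / len(y1)   # 2.0 in this toy data

# In any real study only one potential outcome per person is observed;
# the other is the unseen counterfactual.
observed = [a if e else b for a, b, e in zip(y1, y0, exposed)]
```

This is exactly why "causes can never be seen but only inferred": the individual effect Y(1) - Y(0) is never directly observable, and methods for causal inference are attempts to recover `ate` from `observed` and `exposed` alone.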

