Quantifying the hidden costs of imperfect detection for early detection surveillance

2019 ◽  
Vol 374 (1776) ◽  
pp. 20180261 ◽  
Author(s):  
Alexander J. Mastin ◽  
Frank van den Bosch ◽  
Femke van den Berg ◽  
Stephen R. Parnell

The global spread of pathogens poses an increasing threat to health, ecosystems and agriculture worldwide. As early detection of new incursions is key to effective control, new diagnostic tests that can detect pathogen presence shortly after initial infection hold great potential for detecting infection in individual hosts. However, these tests may be too expensive to be implemented at the sampling intensities required for early detection of a new epidemic at the population level. To evaluate the trade-off between earlier and/or more reliable detection and higher deployment costs, we need to consider the impacts of test performance, test cost and pathogen epidemiology. Regarding test performance, the period before new infections can first be detected and the probability of detecting them are of particular importance. We propose a generic framework that can easily be used to evaluate a variety of different detection methods and to identify important characteristics of the pathogen and the detection method to consider when planning early detection surveillance. We demonstrate the application of our method using the plant pathogen Phytophthora ramorum in the UK, and find that visual inspection for this pathogen is a more cost-effective strategy for early detection surveillance than an early detection diagnostic test. This article is part of the theme issue ‘Modelling infectious disease outbreaks in humans, animals and plants: epidemic forecasting and control’. This theme issue is linked with the earlier issue ‘Modelling infectious disease outbreaks in humans, animals and plants: approaches and important themes’.
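The trade-off described above can be sketched numerically with a toy Monte Carlo model (an illustration, not the authors' framework): incidence grows exponentially, sampling rounds of n hosts occur every delta time units, and the test has a sensitivity and a post-infection period during which infections are undetectable. All parameter values below (growth rate, sampling effort, sensitivities, lags) are assumed for illustration.

```python
import math
import random

def incidence_at_detection(r, n, delta, sens, lag, q0=1e-4, trials=2000, seed=1):
    """Monte Carlo estimate of the true incidence when an epidemic is
    first detected.  Toy model (illustrative assumptions, not the
    paper's framework): incidence grows as q(t) = q0*exp(r*t); every
    `delta` time units, `n` hosts are sampled; an infected host only
    tests positive once its infection is older than `lag`, with
    probability `sens` (detectable fraction approximated by the
    incidence at time t - lag)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        t = delta
        while True:
            detectable = min(q0 * math.exp(r * max(t - lag, 0.0)), 1.0)
            p_positive_sample = 1.0 - (1.0 - min(detectable * sens, 1.0)) ** n
            if rng.random() < p_positive_sample:
                total += min(q0 * math.exp(r * t), 1.0)  # incidence at detection
                break
            t += delta
    return total / trials

# A sensitive, frequently deployed test catches the epidemic at a much
# lower incidence than slower, less sensitive inspection; that benefit
# must then be weighed against its higher deployment cost.
early = incidence_at_detection(r=0.05, n=50, delta=7, sens=0.9, lag=7)
late = incidence_at_detection(r=0.05, n=50, delta=30, sens=0.6, lag=21)
```

The gap between `early` and `late` is the epidemiological benefit the framework weighs against the extra cost of the better test.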

2019 ◽  
Vol 374 (1776) ◽  
pp. 20180283 ◽  
Author(s):  
Devon A. Gaydos ◽  
Anna Petrasova ◽  
Richard C. Cobb ◽  
Ross K. Meentemeyer

Epidemiological models are powerful tools for evaluating scenarios and visualizing patterns of disease spread, especially when comparing intervention strategies. However, the technical skill required to synthesize and operate computational models frequently renders them beyond the command of the stakeholders who are most impacted by the results. Participatory modelling (PM) strives to restructure the power relationship between modellers and the stakeholders who rely on model insights by involving these stakeholders directly in model development and application; yet, a systematic literature review indicates little adoption of these techniques in epidemiology, especially plant epidemiology. We investigate the potential for PM to integrate stakeholder and researcher knowledge, using Phytophthora ramorum and the resulting sudden oak death disease as a case study. Recent introduction of a novel strain (European 1 or EU1) in southwestern Oregon has prompted significant concern and presents an opportunity for coordinated management to minimize regional pathogen impacts. Using a PM framework, we worked with local stakeholders to develop an interactive forecasting tool for evaluating landscape-scale control strategies. We find that model co-development has great potential to empower stakeholders in the design, development and application of epidemiological models for disease control. This article is part of the theme issue ‘Modelling infectious disease outbreaks in humans, animals and plants: epidemic forecasting and control’. This theme issue is linked with the earlier issue ‘Modelling infectious disease outbreaks in humans, animals and plants: approaches and important themes’.


2019 ◽  
Vol 374 (1776) ◽  
pp. 20180431 ◽  
Author(s):  
Robin N. Thompson ◽  
Oliver W. Morgan ◽  
Katri Jalava

The World Health Organization considers an Ebola outbreak to have ended once 42 days have passed since the last possible exposure to a confirmed case. Benefits of a quick end-of-outbreak declaration, such as reductions in trade/travel restrictions, must be balanced against the chance of flare-ups from undetected residual cases. We show how epidemiological modelling can be used to estimate the surveillance level required for decision-makers to be confident that an outbreak is over. Results from a simple model characterizing an Ebola outbreak suggest that a surveillance sensitivity (i.e. case reporting percentage) of 79% is necessary for 95% confidence that an outbreak is over after 42 days without symptomatic cases. With weaker surveillance, unrecognized transmission may still occur: if the surveillance sensitivity is only 40%, then 62 days must pass for 95% certainty. By quantifying the certainty in end-of-outbreak declarations, public health decision-makers can plan and communicate more effectively. This article is part of the theme issue ‘Modelling infectious disease outbreaks in humans, animals and plants: epidemic forecasting and control’. This issue is linked with the earlier theme issue ‘Modelling infectious disease outbreaks in humans, animals and plants: approaches and important themes’.
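The reasoning behind such a declaration rule can be illustrated with a toy branching-process simulation (a simplification, not the authors' model): cases produce Poisson-distributed secondary cases one fixed serial interval later, each case is reported with probability equal to the surveillance sensitivity, and the outbreak is declared over a fixed gap after the last reported case. The parameter values (R = 0.8, a 14-day serial interval) are assumptions for illustration and will not reproduce the paper's exact figures.

```python
import math
import random

def _poisson(rng, lam):
    # Knuth inversion sampling for a Poisson(lam) variate.
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def prob_outbreak_over(gap, sens, R=0.8, serial=14.0, sims=20000, seed=2):
    """Fraction of simulated outbreaks that are truly over when declared
    over `gap` days after the last *reported* case.  Toy subcritical
    branching process: each case produces Poisson(R) secondary cases one
    fixed serial interval later, and each case is reported independently
    with probability `sens` (all values illustrative)."""
    rng = random.Random(seed)
    over = counted = 0
    for _ in range(sims):
        times, frontier = [0.0], [0.0]
        while frontier and len(times) < 500:  # size cap as a safety net
            t = frontier.pop()
            for _ in range(_poisson(rng, R)):
                times.append(t + serial)
                frontier.append(t + serial)
        reported = [t for t in times if rng.random() < sens]
        if not reported:
            continue  # outbreak never observed, so nothing to declare
        counted += 1
        if max(times) <= max(reported) + gap:
            over += 1
    return over / counted

p_strong = prob_outbreak_over(gap=42, sens=0.9)   # strong surveillance
p_weak = prob_outbreak_over(gap=42, sens=0.3)     # weak surveillance
p_longer = prob_outbreak_over(gap=70, sens=0.3)   # weak, but longer wait
```

As in the abstract's reasoning, weaker surveillance lowers confidence for a fixed waiting period, and a longer wait can compensate.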


2019 ◽  
Vol 374 (1776) ◽  
pp. 20180262 ◽  
Author(s):  
Y. Bourhis ◽  
T. Gottwald ◽  
F. van den Bosch

Monitoring a population for a disease requires the hosts to be sampled and tested for the pathogen. This results in sampling series from which we may estimate the disease incidence, i.e. the proportion of hosts infected. Existing estimation methods assume that disease incidence does not change between monitoring rounds, resulting in an underestimation of the disease incidence. In this paper, we develop an incidence estimation model accounting for epidemic growth, with monitoring rounds that sample varying incidence. We also show how to accommodate the asymptomatic period that is characteristic of most diseases. For practical use, we produce an approximation of the model, which is subsequently shown to be accurate for relevant epidemic and sampling parameters. Both the approximation and the full model are applied to stochastic spatial simulations of epidemics. The results demonstrate their consistency across a very wide range of situations. The estimation model is made available as an online application. This article is part of the theme issue ‘Modelling infectious disease outbreaks in humans, animals and plants: epidemic forecasting and control’. This theme issue is linked with the earlier issue ‘Modelling infectious disease outbreaks in humans, animals and plants: approaches and important themes’.
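A minimal version of this idea can be sketched as follows: if incidence grows as q(t) = q0*exp(r*t) and each monitoring round is a binomial sample, then (q0, r) can be estimated by maximum likelihood, and the fitted curve gives the incidence at the latest round. This stdlib-only grid search is an illustrative sketch, not the authors' estimator; all parameter values are assumed.

```python
import math
import random

def loglik(q0, r, rounds):
    """Binomial log-likelihood of sampling rounds (t, n, x) under a
    growing incidence q(t) = q0*exp(r*t) (constant terms dropped)."""
    ll = 0.0
    for t, n, x in rounds:
        q = min(max(q0 * math.exp(r * t), 1e-9), 1.0 - 1e-9)
        ll += x * math.log(q) + (n - x) * math.log(1.0 - q)
    return ll

def fit_growth(rounds):
    """Crude grid-search MLE for (q0, r); an illustrative sketch, not
    the estimation model developed in the paper."""
    best = (1e-5, 0.0, float("-inf"))
    for i in range(60):
        q0 = 10.0 ** (-5.0 + 4.0 * i / 59.0)   # 1e-5 .. 1e-1
        for j in range(60):
            r = 0.2 * j / 59.0                 # 0 .. 0.2 per day
            ll = loglik(q0, r, rounds)
            if ll > best[2]:
                best = (q0, r, ll)
    return best[0], best[1]

# Synthetic monitoring data: 200 hosts sampled every two weeks while the
# epidemic grows at r = 0.05/day (all values assumed for illustration).
rng = random.Random(3)
true_q0, true_r = 1e-3, 0.05
rounds = []
for t in range(0, 120, 14):
    q = min(true_q0 * math.exp(true_r * t), 1.0)
    positives = sum(rng.random() < q for _ in range(200))
    rounds.append((t, 200, positives))

q0_hat, r_hat = fit_growth(rounds)
t_last = rounds[-1][0]
# Pooling all rounds as if incidence were constant drags the estimate of
# current incidence down toward the earlier, lower-incidence rounds.
naive = sum(x for _, _, x in rounds) / sum(n for _, n, _ in rounds)
growth_aware = min(q0_hat * math.exp(r_hat * t_last), 1.0)
```

The naive pooled estimate illustrates exactly the underestimation the paper describes, while the growth-aware fit tracks the incidence at the latest round.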


2019 ◽  
Vol 374 (1775) ◽  
pp. 20180282 ◽  
Author(s):  
Wayne M. Getz ◽  
Richard Salter ◽  
Whitney Mgbara

Dynamic SEIR (Susceptible, Exposed, Infectious, Removed) compartmental models provide a tool for predicting the size and duration of both unfettered and managed outbreaks, the latter in the context of interventions such as case detection, patient isolation, vaccination and treatment. The reliability of this tool depends on the validity of key assumptions that include homogeneity of individuals and spatio-temporal homogeneity. Although the SEIR compartmental framework can easily be extended to include demographic (e.g. age) and additional disease (e.g. healthcare worker) classes, dependence of transmission rates on time, and metapopulation structure, fitting such extended models is hampered by both a proliferation of free parameters and insufficient or inappropriate data. This raises the question of how effective a tool the basic SEIR framework actually is. We go some way here to answering this question in the context of the 2014–2015 outbreak of Ebola in West Africa by comparing fits of an SEIR model with time-dependent transmission to both country- and district-level weekly incidence data. Our novel approach of estimating the effective size of the population at risk (N_eff) and the initial number of exposed individuals (E_0) at both district and country levels, as well as the transmission function parameters, including a time-to-halving-the-force-of-infection parameter (t_f/2), provides new insights into this Ebola outbreak. It reveals that the estimate R_0 ≈ 1.7 from country-level data appears to seriously underestimate the values R_0 ≈ 3.3–4.3 obtained from the more spatially homogeneous district-level data. Country-level data also overestimate t_f/2 ≈ 22 weeks, compared with 8–10 weeks from district-level data. Additionally, estimates of the duration of individual infectiousness are around two weeks from the spatially inhomogeneous country-level data, compared with 2.4–4.5 weeks from the more spatially homogeneous district-level data; the latter estimates are rather high compared with most values reported in the literature. This article is part of the theme issue ‘Modelling infectious disease outbreaks in humans, animals and plants: approaches and important themes’. This issue is linked with the subsequent theme issue ‘Modelling infectious disease outbreaks in humans, animals and plants: epidemic forecasting and control’.
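The role of the time-to-halving parameter can be illustrated with a minimal deterministic SEIR sketch in which the transmission rate decays as beta(t) = beta0 * 2**(-t/t_half). This is an Euler-integrated toy, not the authors' fitted model; the incubation and infectious periods below are assumed Ebola-like round numbers, and beta0 is chosen so that beta0/gamma sits in the district-level R_0 range.

```python
def seir_final_size(beta0, t_half, sigma=1.0 / 9.4, gamma=1.0 / 7.0,
                    e0=1e-4, days=700, dt=0.1):
    """Deterministic SEIR model (Euler integration, population scaled
    to 1) with a decaying transmission rate
    beta(t) = beta0 * 2**(-t / t_half), mimicking a
    time-to-halving-the-force-of-infection parameter.  Rates are per
    day and are illustrative assumptions, not the paper's estimates.
    Returns the attack rate (fraction ever infected)."""
    s, e, i, r = 1.0 - e0, e0, 0.0, 0.0
    for step in range(int(days / dt)):
        beta = beta0 * 2.0 ** (-(step * dt) / t_half)
        new_e = beta * s * i * dt   # S -> E: transmission
        new_i = sigma * e * dt      # E -> I: end of incubation
        new_r = gamma * i * dt      # I -> R: recovery/removal
        s -= new_e
        e += new_e - new_i
        i += new_i - new_r
        r += new_r
    return r

# With beta0/gamma ~ 3.15 (within the district-level R_0 range), halving
# the force of infection every ~9 weeks curbs the epidemic far more than
# halving it every ~22 weeks.
slow = seir_final_size(beta0=0.45, t_half=22 * 7)
fast = seir_final_size(beta0=0.45, t_half=9 * 7)
```

The contrast between the two runs shows why the country-level overestimate of t_f/2 matters: it materially changes the predicted outbreak size.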


2019 ◽  
Vol 374 (1776) ◽  
pp. 20180280 ◽  
Author(s):  
Laurie Baker ◽  
Jason Matthiopoulos ◽  
Thomas Müller ◽  
Conrad Freuling ◽  
Katie Hampson

Understanding how the spatial deployment of interventions affects elimination time horizons and potential for disease re-emergence has broad application to control programmes targeting human, animal and plant pathogens. We previously developed an epidemiological model that captures the main features of rabies spread and the impacts of vaccination, based on detailed records of fox rabies in eastern Germany during the implementation of an oral rabies vaccination (ORV) programme. Here, we use simulations from this fitted model to determine the best vaccination strategy, in terms of spatial placement and timing of ORV efforts, for three epidemiological scenarios representative of current situations in Europe. We found that consecutive and comprehensive twice-yearly vaccinations across all regions rapidly controlled and eliminated rabies, and that the autumn campaigns had the greater impact on increasing the probability of elimination. This appears to result from the need to maintain sufficient herd immunity in the face of large birth pulses, as autumn vaccinations reach susceptible juveniles and therefore a larger proportion of the population than spring vaccinations. Incomplete vaccination compromised time to elimination, requiring the same or more vaccination effort to meet similar timelines. Our results have important practical implications that could inform policies for rabies containment and elimination in Europe and elsewhere. This article is part of the theme issue ‘Modelling infectious disease outbreaks in humans, animals and plants: epidemic forecasting and control’. This theme issue is linked with the earlier issue ‘Modelling infectious disease outbreaks in humans, animals and plants: approaches and important themes’.
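The birth-pulse mechanism suggested above can be illustrated with a toy bookkeeping model (not the authors' fitted model): an annual pulse of unvaccinated juveniles dilutes the immunised fraction, and a once-yearly campaign immunises a share of the unprotected. The pulse size and coverage are assumed numbers, and waning immunity and disease dynamics are ignored.

```python
def mean_herd_immunity(campaign_month, birth_month=4, birth_pulse=0.5,
                       coverage=0.7, years=10):
    """Toy monthly bookkeeping of the immunised fraction of a fox
    population under an annual spring birth pulse and one yearly oral
    vaccination campaign (pulse size and coverage are illustrative
    assumptions; no waning or disease dynamics).  Returns the mean
    immunised fraction over the final simulated year."""
    v = 0.0
    trace = []
    for month in range(years * 12):
        m = month % 12 + 1
        if m == birth_month:
            v /= 1.0 + birth_pulse     # unvaccinated juveniles dilute immunity
        if m == campaign_month:
            v += coverage * (1.0 - v)  # immunise a share of the unprotected
        trace.append(v)
    return sum(trace[-12:]) / 12.0

# An autumn campaign reaches the juveniles born in spring, so immunity
# is diluted for only part of the year; a spring campaign is followed
# immediately by the birth pulse and stays diluted for eleven months.
spring = mean_herd_immunity(campaign_month=3)
autumn = mean_herd_immunity(campaign_month=9)
```

Even in this stripped-down sketch, average herd immunity over the year is higher with the autumn campaign, matching the mechanism the abstract proposes.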


2019 ◽  
Vol 374 (1776) ◽  
pp. 20180279 ◽  
Author(s):  
Joshua Kaminsky ◽  
Lindsay T. Keegan ◽  
C. Jessica E. Metcalf ◽  
Justin Lessler

Simulation studies are often used to predict the expected impact of control measures in infectious disease outbreaks. Typically, two independent sets of simulations are conducted, one with the intervention and one without, and epidemic sizes (or some related metric) are compared to estimate the effect of the intervention. Because substantial stochastic variation between epidemics can make a controlled epidemic larger than an uncontrolled one, uncertainty intervals from this approach can include a negative effect even for an effective intervention. To more precisely estimate the number of cases an intervention will prevent within a single epidemic, here we develop a ‘single-world’ approach to matching simulations of controlled epidemics to their exact uncontrolled counterfactual. Our method borrows concepts from percolation approaches, prunes out possible epidemic histories and creates potential epidemic graphs (i.e. a mathematical representation of all consistent epidemics) that can be ‘realized’ to create perfectly matched controlled and uncontrolled epidemics. We present an implementation of this method for a common class of compartmental models (e.g. SIR models), and its application in a simple SIR model. Results illustrate how, at the cost of some computation time, this method substantially narrows confidence intervals and avoids nonsensical inferences. This article is part of the theme issue ‘Modelling infectious disease outbreaks in humans, animals and plants: epidemic forecasting and control’. This theme issue is linked with the earlier issue ‘Modelling infectious disease outbreaks in humans, animals and plants: approaches and important themes’.
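The core percolation idea can be sketched on a static contact network: draw one uniform per potential transmission edge, then 'realize' both scenarios from the same draws, with the intervention simply lowering the threshold below which an edge is open. This is a minimal illustration of shared-randomness matching on a complete graph, not the authors' compartmental implementation; the network size and transmission probabilities are assumed.

```python
import random
from collections import deque

def realize(u, p, n, seed_node=0):
    """Realize an epidemic from pre-drawn edge uniforms: a directed edge
    (i, j) is 'open' iff u[(i, j)] < p, and the epidemic is everything
    reachable from the seed case through open edges (breadth-first)."""
    infected = {seed_node}
    queue = deque([seed_node])
    while queue:
        i = queue.popleft()
        for j in range(n):
            if j not in infected and u[(i, j)] < p:
                infected.add(j)
                queue.append(j)
    return len(infected)

# One shared 'world': a single set of uniforms drives both scenarios, so
# the controlled epidemic is compared against its exact counterfactual.
n = 200
rng = random.Random(4)
u = {(i, j): rng.random() for i in range(n) for j in range(n) if i != j}

size_uncontrolled = realize(u, p=2.0 / n, n=n)   # R0 ~ 2 on this graph
size_controlled = realize(u, p=1.2 / n, n=n)     # intervention thins edges
effect = size_uncontrolled - size_controlled     # cannot be negative
```

Because lowering p only closes edges, the controlled epidemic is always a subset of its uncontrolled counterpart in the same 'world', which is exactly why matched comparisons never produce the nonsensical negative effects that independent simulation pairs can.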


2019 ◽  
Vol 374 (1775) ◽  
pp. 20180263 ◽  
Author(s):  
Elsa Rousseau ◽  
Mélanie Bonneault ◽  
Frédéric Fabre ◽  
Benoît Moury ◽  
Ludovic Mailleret ◽  
...  

Plant qualitative resistances to viruses are natural exhaustible resources that can be impaired by the emergence of resistance-breaking (RB) virus variants. Mathematical modelling can help determine optimal strategies for resistance durability by a rational deployment of resistance in agroecosystems. Here, we propose an innovative approach, built up from our previous empirical studies, based on plant cultivars combining qualitative resistance with quantitative resistance narrowing population bottlenecks exerted on viruses during host-to-host transmission and/or within-host infection. Narrow bottlenecks are expected to slow down virus adaptation to plant qualitative resistance. To study the effect of bottleneck size on yield, we developed a stochastic epidemic model with mixtures of susceptible and resistant plants, relying on continuous-time Markov chain processes. Overall, narrow bottlenecks are beneficial when the fitness cost of RB virus variants in susceptible plants is intermediate. In such cases, they could provide up to 95 additional percentage points of yield compared with deploying a qualitative resistance alone. As we have shown in previous works that virus population bottlenecks are at least partly heritable plant traits, our results suggest that breeding and deploying plant varieties exposing virus populations to narrowed bottlenecks will increase yield and delay the emergence of RB variants. This article is part of the theme issue ‘Modelling infectious disease outbreaks in humans, animals and plants: approaches and important themes’. This issue is linked with the subsequent theme issue ‘Modelling infectious disease outbreaks in humans, animals and plants: epidemic forecasting and control’.
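The bottleneck effect has a simple quantitative core: if the resistance-breaking variant is at frequency f in a source plant, the chance that any of k founder virions carries it is 1 - (1 - f)^k, so narrower bottlenecks (smaller k) make each transmission less likely to seed the variant. A sketch of that calculation (illustrative binomial sampling, not the paper's continuous-time Markov chain model; the frequency and bottleneck sizes are assumed):

```python
def rb_passage_prob(freq, bottleneck):
    """Probability that at least one resistance-breaking (RB) virion is
    among the founders of a new infection, when the RB variant is at
    frequency `freq` in the source plant and `bottleneck` virions found
    the new infection (illustrative binomial sampling, not the paper's
    continuous-time Markov chain model)."""
    return 1.0 - (1.0 - freq) ** bottleneck

# At an RB frequency of 1e-3 in the source plant, narrowing the
# bottleneck from 100 founder virions to 5 sharply cuts the chance of
# seeding the RB variant in the next host, raising the expected number
# of transmissions before emergence from roughly 10 to roughly 200.
wide = rb_passage_prob(1e-3, 100)
narrow = rb_passage_prob(1e-3, 5)
```

This is the mechanism by which heritable, narrow bottlenecks delay the emergence of RB variants in the full stochastic model.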


2019 ◽  
Vol 374 (1775) ◽  
pp. 20180274 ◽  
Author(s):  
R. N. Thompson ◽  
C. P. Thompson ◽  
O. Pelerman ◽  
S. Gupta ◽  
U. Obolski

The high frequency of modern travel has led to concerns about a devastating pandemic since a lethal pathogen strain could spread worldwide quickly. Many historical pandemics have arisen following pathogen evolution to a more virulent form. However, some pathogen strains invoke immune responses that provide partial cross-immunity against infection with related strains. Here, we consider a mathematical model of successive outbreaks of two strains—a low virulence (LV) strain outbreak followed by a high virulence (HV) strain outbreak. Under these circumstances, we investigate the impacts of varying travel rates and cross-immunity on the probability that a major epidemic of the HV strain occurs, and the size of that outbreak. Frequent travel between subpopulations can lead to widespread immunity to the HV strain, driven by exposure to the LV strain. As a result, major epidemics of the HV strain are less likely, and can potentially be smaller, with more connected subpopulations. Cross-immunity may be a factor contributing to the absence of a global pandemic as severe as the 1918 influenza pandemic in the century since. This article is part of the theme issue ‘Modelling infectious disease outbreaks in humans, animals and plants: approaches and important themes’. This issue is linked with the subsequent theme issue ‘Modelling infectious disease outbreaks in humans, animals and plants: epidemic forecasting and control’.
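The effect of cross-immunity on outbreak risk can be sketched with a standard branching-process approximation (an illustration, not the authors' metapopulation model): if a fraction f of the population gained partial cross-immunity c from the LV strain, take R_eff = R0*(1 - c*f) for the HV strain and compute the major-outbreak probability as one minus the extinction probability of a Poisson-offspring branching process. All parameter values are assumed.

```python
import math

def major_epidemic_prob(R0, frac_exposed, cross_immunity, iters=200):
    """Probability that a single introduced HV-strain case sparks a
    major epidemic, under a Poisson-offspring branching-process
    approximation.  Cross-immunity is assumed simply to scale the
    effective reproduction number: R_eff = R0 * (1 - c * f)."""
    R_eff = R0 * (1.0 - cross_immunity * frac_exposed)
    if R_eff <= 1.0:
        return 0.0          # subcritical: only minor outbreaks occur
    q = 0.0                 # iterate q -> exp(R_eff*(q - 1)) to the
    for _ in range(iters):  # smallest fixed point (extinction prob.)
        q = math.exp(R_eff * (q - 1.0))
    return 1.0 - q

# Greater prior exposure to the LV strain (e.g. through well-connected
# subpopulations) lowers the chance that the HV strain takes off.
p_isolated = major_epidemic_prob(R0=2.0, frac_exposed=0.1, cross_immunity=0.6)
p_connected = major_epidemic_prob(R0=2.0, frac_exposed=0.6, cross_immunity=0.6)
```

The monotone drop in major-outbreak probability with increasing LV exposure is the qualitative pattern the abstract describes for well-connected subpopulations.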

