Adaptive targeted infectious disease testing

2020 ◽  
Vol 36 (Supplement_1) ◽  
pp. S77-S93 ◽  
Author(s):  
Maximilian Kasy ◽  
Alexander Teytelboym

Abstract We show how to efficiently use costly testing resources in an epidemic when testing outcomes can be used to make quarantine decisions. If the costs of false quarantine and false release exceed the cost of testing, the optimal myopic testing policy targets individuals with an intermediate likelihood of being infected. A high cost of false release means that testing is optimal even for individuals with a low probability of infection, and a high cost of false quarantine means that testing is optimal even for individuals with a high probability of infection. If individuals arrive over time, the policy-maker faces a dynamic trade-off: using tests on individuals for whom testing yields the maximum immediate benefit versus spreading testing capacity across the population to learn prevalence rates, thereby benefiting later individuals. We describe a simple policy that is nearly optimal from a dynamic perspective. We briefly discuss practical aspects of implementing our proposed policy, including imperfect testing technology, appropriate choice of prior, and non-stationarity of the prevalence rate.
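The myopic policy described above can be sketched as a three-way expected-cost comparison. This is a minimal illustration, not the paper's model: it assumes a perfect test and hypothetical cost parameters, and shows why testing is optimal only at intermediate infection probabilities.

```python
def optimal_action(p, c_test, c_false_release, c_false_quarantine):
    """Myopic decision for one individual with infection probability p.

    Hypothetical sketch: the test is assumed perfect, so testing
    incurs only the testing cost; releasing risks a false release
    (infected but free), quarantining risks a false quarantine
    (healthy but confined).
    """
    costs = {
        "release": p * c_false_release,              # expected cost of releasing untested
        "quarantine": (1 - p) * c_false_quarantine,  # expected cost of quarantining untested
        "test": c_test,                              # test resolves status, no error cost
    }
    return min(costs, key=costs.get)
```

With illustrative costs (test 10, false release 100, false quarantine 50), low-probability individuals are released, high-probability individuals are quarantined, and only the intermediate range is tested.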

2009 ◽  
Vol 102 (2) ◽  
pp. 1172-1179 ◽  
Author(s):  
Nikhil Srivastava ◽  
Damon A. Clark ◽  
Aravinthan D.T. Samuel

Caenorhabditis elegans exhibits spontaneous motility in isotropic environments, characterized by periods of forward movements punctuated at random by turning movements. Here, we study the statistics of turning movements—deep Ω-shaped bends—exhibited by swimming worms. We show that the durations of intervals between successive Ω-turns are uncorrelated with one another and are effectively selected from a probability distribution resembling the sum of two exponentials. The worm initially exhibits frequent Ω-turns on being placed in liquid, and the mean rate of Ω-turns lessens over time. The statistics of Ω-turns is consistent with a phenomenological model involving two behavioral states governed by Poisson kinetics: a “slow” state generates Ω-turns with a low probability per unit time; a “fast” state generates Ω-turns with a high probability per unit time; and the worm randomly transitions between these slow and fast states. Our findings suggest that the statistics of spontaneous Ω-turns exhibited by swimming worms may be described using a small number of parameters, consistent with a two-state phenomenological model for the mechanisms that spontaneously generate Ω-turns.
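The two-state model above can be illustrated with a short simulation. This is a simplified sketch with hypothetical rates, not the authors' fitted model: each inter-turn interval is drawn from an exponential distribution whose rate depends on the current behavioral state, and for brevity the state is redrawn independently per interval rather than via full Poisson transition kinetics.

```python
import random

def simulate_turn_intervals(n, rate_slow, rate_fast, p_fast, seed=0):
    """Draw n inter-turn intervals from a two-state mixture.

    Hypothetical sketch: with probability p_fast the worm is in the
    "fast" state (high turn rate), otherwise in the "slow" state
    (low turn rate); the interval is exponential with that state's
    rate, so the marginal density is a mixture of two exponentials.
    """
    rng = random.Random(seed)
    intervals = []
    for _ in range(n):
        rate = rate_fast if rng.random() < p_fast else rate_slow
        intervals.append(rng.expovariate(rate))
    return intervals
```

A histogram of many such intervals shows the fast-state exponential dominating short intervals and the slow-state exponential the long tail, as in the observed distribution.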


Blood ◽  
2018 ◽  
Vol 132 (Supplement 1) ◽  
pp. 5816-5816
Author(s):  
Zachary Trisel ◽  
Mark Maddox ◽  
Ahmed Safa ◽  
Thomas Bemis ◽  
Kristine Ward ◽  
...  

Abstract Background: Heparin-induced thrombocytopenia (HIT) is a complication of heparin-based anticoagulation (AC) resulting in thrombocytopenia and thrombosis. Laboratory testing can often be avoided, as the 4T score (4TS) has a negative predictive value (NPV) of 0.998 for low-risk patients. Despite this scoring system, which has been validated since 2006, physicians continue to send inappropriate studies in patients with a low probability of HIT. We sought to evaluate our academic institution's compliance, perform a cost analysis, and determine whether appropriate AC was initiated. By analyzing our data, we sought to educate our staff and implement measures to improve cost efficiency and quality of care. Methods: We performed a retrospective chart review of patients admitted to Hahnemann University Hospital (HUH) between November 1, 2016 and April 30, 2017 who had HIT antibody (HITAb) and serotonin release assay (SRA) studies. These laboratory tests were performed at Quest Diagnostics, and the data were compiled from the EMR at HUH. According to the 4TS, patients were assigned a score of 0-8: 0-3 for low, 4-5 for intermediate, and 6-8 for high probability. Laboratory results of HITAb and SRA were then compared to the calculated 4TS. We then investigated whether appropriate AC was initiated, and compiled data on the cost associated with the inappropriate management of suspected HIT. Results: 72 patients had HITAb sent during the interval studied. Table 1 shows the 4TS and results of HITAb and SRA testing. Table 2 lists the AC used based on the 4TS. The NPV of not having HIT in the low probability group was 100%. The positive predictive value (PPV) of having HIT in the high probability group was 100%. At our institution, HITAb with reflex SRA costs $503. Expenditure due to inappropriate testing was estimated at around $23,000 over the study period.
Inappropriately switching anticoagulants cost up to $1,000 per patient per day for argatroban, or $500 per patient per day for fondaparinux, in overspending on anticoagulation. Discussion: We found the majority of HITAb and SRA testing was unnecessary based on the 4TS. Our data showed a low 4TS had a very high NPV, confirming the scoring system's utility. HIT testing was often overutilized as part of a general workup for thrombocytopenic patients who were often septic, on marrow-suppressive medications, and had multiple comorbidities such as hepatitis and HIV infection which confounded their clinical picture. Furthermore, this scoring system had a very high PPV in the high probability group. This study confirmed that HIT laboratory studies rarely change patient management in these scenarios. With the turnaround time of laboratory studies taking up to 4 days, relying solely on HITAb and SRA significantly increases the cost of patient care due to the use of expensive anticoagulants. In contrast, it remains unknown whether HITAb and SRA could be useful in patients with an intermediate 4TS, as our data are limited, with no SRA results for patients with intermediate scores and a positive HITAb. To prevent unnecessary testing in the future and to improve the management of HIT, we propose to implement the following at our institution: 1. create a hard stop in our EMR which would prevent studies from being sent off inappropriately; 2. add a 4TS calculator to the EMR and encourage collaboration with the hematology department if additional questions remain after calculating a 4TS; 3. start resident-based educational sessions on the importance of calculating a 4TS and its significance prior to sending laboratory studies. In conclusion, the 4TS remains a useful tool to prevent unnecessary diagnostic testing and use of expensive therapeutic anticoagulants in patients with suspected HIT. Disclosures No relevant conflicts of interest to declare.
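The 4TS triage logic and the proposed EMR hard stop can be sketched as follows. This is an illustrative outline only, with hypothetical function names; the score cutoffs (0-3 low, 4-5 intermediate, 6-8 high) are taken from the abstract.

```python
def four_t_category(score):
    """Map a 4T score (0-8) to the pretest-probability category
    used in the study: 0-3 low, 4-5 intermediate, 6-8 high."""
    if not 0 <= score <= 8:
        raise ValueError("4T score must be between 0 and 8")
    if score <= 3:
        return "low"
    if score <= 5:
        return "intermediate"
    return "high"

def should_send_hit_testing(score):
    """Hypothetical sketch of the proposed EMR hard stop: block
    HITAb/SRA orders for low-probability patients, whose NPV
    (~0.998) makes laboratory confirmation unnecessary."""
    return four_t_category(score) != "low"
```

In this sketch, a score of 2 would be blocked at order entry, while scores of 4 or higher would pass through for testing.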


2003 ◽  
Author(s):  
M. Spano ◽  
P. Toro ◽  
M. Goldstein
Keyword(s):  
The Cost ◽  

Author(s):  
Matthew Hindman

The Internet was supposed to fragment audiences and make media monopolies impossible. Instead, behemoths like Google and Facebook now dominate the time we spend online—and grab all the profits from the attention economy. This book explains how this happened. It sheds light on the stunning rise of the digital giants and the online struggles of nearly everyone else—and reveals what small players can do to survive in a game that is rigged against them. The book shows how seemingly tiny advantages in attracting users can snowball over time. The Internet has not reduced the cost of reaching audiences—it has merely shifted who pays and how. Challenging some of the most enduring myths of digital life, the book explains why the Internet is not the postindustrial technology that has been sold to the public, how it has become mathematically impossible for grad students in a garage to beat Google, and why net neutrality alone is no guarantee of an open Internet. It also explains why the challenges for local digital news outlets and other small players are worse than they appear and demonstrates what it really takes to grow a digital audience and stay alive in today's online economy. The book shows why, even on the Internet, there is still no such thing as a free audience.


2020 ◽  
Vol 4 (02) ◽  
pp. 34-45
Author(s):  
Naufal Dzikri Afifi ◽  
Ika Arum Puspita ◽  
Mohammad Deni Akbar

Shift to The Front II Komplek Sukamukti Banjaran is a project implemented by a telecommunications company. Like every project, including Shift to The Front II Komplek Sukamukti Banjaran, it has a time limit specified in the contract. Project scheduling plays an important role in predicting both the cost and the duration of a project, and every project should be completed on or before the date specified in the contract. Delays can be anticipated by accelerating the completion duration using the crashing method combined with linear programming; linear programming assists the iteration required by crashing, which would otherwise have to be repeated manually. The objective function of this scheduling problem is to minimize cost. This study aims to find a trade-off between the cost and the minimum time expected to complete the project. Acceleration of the duration was evaluated by adding 4 hours, 3 hours, 2 hours, or 1 hour of overtime work. The normal time for this project is 35 days with a service fee of Rp. 52,335,690. From the results of the crashing analysis, the alternative chosen is to add 1 hour of overtime, reducing the duration to 34 days with a total service cost of Rp. 52,375,492. This acceleration affects the entire project because 33 different locations are worked on in Shift to The Front II; if all of these locations can be accelerated, the completion duration of the entire project becomes effective.
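The core idea of crashing — buying schedule reduction at the lowest marginal cost — can be illustrated with a small greedy sketch. This is not the study's linear program (which would optimize over a full activity network with overtime alternatives); it is a hypothetical single-path version with made-up figures, crashing the cheapest activity one day at a time.

```python
def crash_schedule(activities, target_days):
    """Greedy crashing sketch for a single critical path.

    activities: list of dicts with 'normal' and 'min' durations (days)
    and 'cost_per_day', the extra cost of shortening that activity by
    one day (hypothetical figures). Repeatedly crashes the cheapest
    crashable activity until the target duration is met or nothing
    can be shortened further. Returns (final_duration, extra_cost).
    """
    durations = [a["normal"] for a in activities]
    extra_cost = 0
    while sum(durations) > target_days:
        candidates = [i for i, a in enumerate(activities) if durations[i] > a["min"]]
        if not candidates:
            break  # every activity is already at its crash limit
        i = min(candidates, key=lambda i: activities[i]["cost_per_day"])
        durations[i] -= 1
        extra_cost += activities[i]["cost_per_day"]
    return sum(durations), extra_cost
```

A linear-programming formulation generalizes this to networks with multiple parallel paths, where the critical path can shift as activities are crashed — the situation the study handles across its 33 locations.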


2018 ◽  
Author(s):  
Michel Failing ◽  
Benchi Wang ◽  
Jan Theeuwes

Where and what we attend to is not only determined by what we are currently looking for but also by what we have encountered in the past. Recent studies suggest that biasing the probability by which distractors appear at locations in visual space may lead to attentional suppression of high probability distractor locations which effectively reduces capture by a distractor but also impairs target selection at this location. However, in many of these studies introducing a high probability distractor location was tantamount to increasing the probability of the target appearing in any of the other locations (i.e. the low probability distractor locations). Here, we investigate an alternative interpretation of previous findings according to which attentional selection at high probability distractor locations is not suppressed. Instead, selection at low probability distractor locations is facilitated. In two visual search tasks, we found no evidence for this hypothesis: neither when there was only a bias in target presentation but no bias in distractor presentation (Experiment 1), nor when there was only a bias in distractor presentation but no bias in target presentation (Experiment 2). We conclude that recurrent presentation of a distractor in a specific location leads to attentional suppression of that location through a mechanism that is unaffected by any regularities regarding the target location.


2020 ◽  
Vol 12 (7) ◽  
pp. 2767 ◽  
Author(s):  
Víctor Yepes ◽  
José V. Martí ◽  
José García

The optimization of cost and CO2 emissions in earth-retaining walls is relevant, since these structures are often used in civil engineering. The optimization of costs is essential for the competitiveness of the construction company, and the optimization of emissions is relevant to the environmental impact of construction. To address the optimization, black hole metaheuristics were used, along with a discretization mechanism based on min–max normalization. The stability of the algorithm was evaluated with respect to the solutions obtained, and the steel and concrete quantities obtained in both optimizations were analyzed. Additionally, the geometric variables of the structure were compared. Finally, the results were compared with another algorithm that solved the problem. The results show a trade-off between the use of steel and concrete: solutions that minimize CO2 emissions favor concrete more heavily than those that minimize cost. On the other hand, when comparing the geometric variables, most remain similar in both optimizations except for the distance between buttresses. When compared with another algorithm, the results show good performance of the black hole algorithm in this optimization.
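The discretization mechanism mentioned above can be sketched briefly. This is a hypothetical illustration of the general idea, not the paper's exact mechanism: a continuous metaheuristic such as black hole optimization produces real-valued positions, which min–max normalization maps to [0, 1] so they can be assigned to discrete design-variable bins.

```python
def min_max_discretize(values, n_bins):
    """Min-max normalize continuous values to [0, 1], then map each
    to one of n_bins discrete bins -- one way a continuous
    metaheuristic can be adapted to discrete structural design
    variables (hypothetical sketch).
    """
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # guard against constant input
    return [min(int((v - lo) / span * n_bins), n_bins - 1) for v in values]
```

Each bin index would then select one admissible value of a discrete variable (e.g. a catalog bar diameter or a wall-thickness increment).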


2020 ◽  
Vol 41 (Supplement_2) ◽  
Author(s):  
D Doudesis ◽  
J Yang ◽  
A Tsanas ◽  
C Stables ◽  
A Shah ◽  
...  

Abstract Introduction The myocardial-ischemic-injury-index (MI3) is a promising machine learned algorithm that predicts the likelihood of myocardial infarction in patients with suspected acute coronary syndrome. Whether this algorithm performs well in unselected patients or predicts recurrent events is unknown. Methods In an observational analysis from a multi-centre randomised trial, we included all patients with suspected acute coronary syndrome and serial high-sensitivity cardiac troponin I measurements without ST-segment elevation myocardial infarction. Using gradient boosting, MI3 incorporates age, sex, and two troponin measurements to compute a value (0–100) reflecting an individual's likelihood of myocardial infarction, and estimates the negative predictive value (NPV) and positive predictive value (PPV). Model performance for an index diagnosis of myocardial infarction, and for subsequent myocardial infarction or cardiovascular death at one year was determined using previously defined low- and high-probability thresholds (1.6 and 49.7, respectively). Results In total 20,761 of 48,282 (43%) patients (64±16 years, 46% women) were eligible of whom 3,278 (15.8%) had myocardial infarction. MI3 was well discriminated with an area under the receiver-operating-characteristic curve of 0.949 (95% confidence interval 0.946–0.952) identifying 12,983 (62.5%) patients as low-probability (sensitivity 99.3% [99.0–99.6%], NPV 99.8% [99.8–99.9%]), and 2,961 (14.3%) as high-probability (specificity 95.0% [94.7–95.3%], PPV 70.4% [69–71.9%]). At one year, subsequent myocardial infarction or cardiovascular death occurred more often in high-probability compared to low-probability patients (17.6% [520/2,961] versus 1.5% [197/12,983], P<0.001). Conclusions In unselected consecutive patients with suspected acute coronary syndrome, the MI3 algorithm accurately estimates the likelihood of myocardial infarction and predicts probability of subsequent adverse cardiovascular events. 
Figure: Performance of MI3 at example thresholds. Funding Acknowledgement Type of funding source: Foundation. Main funding source(s): Medical Research Council
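The threshold-based stratification described in the abstract can be sketched as follows. This is an illustrative sketch only: the low- and high-probability thresholds (1.6 and 49.7) are taken from the abstract, but the comparison direction at the exact boundary values is an assumption.

```python
def mi3_stratify(score, low_threshold=1.6, high_threshold=49.7):
    """Stratify an MI3 value (0-100) using the study's pre-defined
    thresholds. Boundary handling (strict vs. inclusive) is an
    assumption of this sketch, not specified in the abstract."""
    if not 0 <= score <= 100:
        raise ValueError("MI3 value must be between 0 and 100")
    if score < low_threshold:
        return "low-probability"
    if score >= high_threshold:
        return "high-probability"
    return "intermediate"
```

Patients stratified as low-probability correspond to the rule-out group (NPV 99.8%), and those as high-probability to the rule-in group (PPV 70.4%) reported above.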


Author(s):  
Vincent E. Castillo ◽  
John E. Bell ◽  
Diane A. Mollenkopf ◽  
Theodore P. Stank

2021 ◽  
Vol 2 (2) ◽  
pp. 263178772110046
Author(s):  
Vern L. Glaser ◽  
Neil Pollock ◽  
Luciana D’Adderio

Algorithms are ubiquitous in modern organizations. Typically, researchers have viewed algorithms as self-contained computational tools that either magnify organizational capabilities or generate unintended negative consequences. To overcome this limited understanding of algorithms as stable entities, we propose two moves. The first entails building on a performative perspective to theorize algorithms as entangled, relational, emergent, and nested assemblages that use theories—and the sociomaterial networks they invoke—to automate decisions, enact roles and expertise, and perform calculations. The second move entails building on our dynamic perspective on algorithms to theorize how algorithms evolve as they move across contexts and over time. To this end, we introduce a biographical perspective on algorithms which traces their evolution by focusing on key “biographical moments.” We conclude by discussing how our performativity-inspired biographical perspective on algorithms can help management and organization scholars better understand organizational decision-making, the spread of technologies and their logics, and the dynamics of practices and routines.

