Test sensitivity for infection versus infectiousness of SARS‐CoV‐2

Author(s):  
Joshua S. Gans
2021 ◽  
pp. 0272989X2110027
Author(s):  
Frederik van Delft ◽  
Mirte Muller ◽  
Rom Langerak ◽  
Hendrik Koffijberg ◽  
Valesca Retèl ◽  
...  

Background Although immunotherapy (IMT) provides significant survival benefits in selected patients, approximately 10% of patients experience (serious) immune-related adverse events (irAEs). Early detection of adverse events can prevent irAEs from progressing to severe stages, and routine testing for irAEs has become common practice. Because a positive test outcome might indicate a clinically manifesting irAE that requires treatment to be (temporarily) discontinued, the occurrence of false-positive test outcomes is expected to negatively affect treatment outcomes. This study explores how the UPPAAL modeling environment can be used to assess the impact of test accuracy (i.e., test sensitivity and specificity) on the probability of patients entering palliative care within 11 IMT cycles. Methods A timed automata-based model was constructed using real-world data and expert consultation. Model calibration was performed using data from 248 non–small-cell lung cancer patients treated with nivolumab. A scenario analysis was performed to evaluate the effect of changes in test accuracy on the probability of patients transitioning to palliative care. Results The constructed model was used to estimate the cumulative probabilities of patients transitioning to palliative care, which were found to match real-world clinical observations after model calibration. The scenario analysis showed that the specificity of laboratory tests used for routine monitoring has a strong effect on the probability of patients transitioning to palliative care, whereas the effect of test sensitivity was limited. Conclusion We obtained valuable insights by simulating a care pathway and disease progression in UPPAAL. The scenario analysis indicates that an increase in test specificity reduces discontinuation of treatment due to suspected irAEs, through a reduction of false-positive test outcomes.
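The mechanism the abstract describes — false-positive irAE tests triggering treatment discontinuation — can be illustrated with a toy Monte Carlo simulation. This is a hedged sketch, not the authors' UPPAAL timed-automata model; the per-cycle irAE probability and all other rates are hypothetical illustration values:

```python
import random

def p_discontinued(sensitivity, specificity, n_patients=20_000,
                   n_cycles=11, p_irae=0.02, seed=0):
    """Toy sketch (NOT the authors' UPPAAL model): fraction of patients
    whose treatment is stopped within n_cycles because a routine irAE
    test comes back positive. p_irae per cycle is hypothetical."""
    rng = random.Random(seed)
    stopped = 0
    for _ in range(n_patients):
        for _ in range(n_cycles):
            has_irae = rng.random() < p_irae
            if has_irae:
                positive = rng.random() < sensitivity   # true positive
            else:
                positive = rng.random() >= specificity  # false positive
            if positive:
                stopped += 1  # positive test => treatment discontinued
                break
    return stopped / n_patients
```

Even in this crude sketch, the abstract's qualitative finding emerges: with rare true irAEs, most positives are false positives, so specificity dominates the discontinuation rate while sensitivity barely moves it.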


Author(s):  
U.W. Hesterberg ◽  
R. Bagnall ◽  
B. Bosch ◽  
K. Perrett ◽  
R. Horner ◽  
...  

A serological survey of leptospirosis in cattle originating from rural communities of the province of KwaZulu-Natal (KZN) in South Africa was carried out between March 2001 and December 2003. The survey used a 2-stage design, with the local dip tank as the primary sampling point. In total, 2021 animals from 379 dip tanks in 33 magisterial districts were sampled and tested with the microscopic agglutination test (MAT). The apparent prevalence at district level was adjusted for clustering and for diagnostic test sensitivity and specificity, and displayed in maps. The prevalence of leptospirosis in cattle originating from communal grazing areas of KZN was found to be 19.4%, with a 95% confidence interval of 14.8–24.1%. At district level the prevalence of leptospirosis varied from 0 to 63% of cattle. Bovine leptospirosis was found to occur in communal grazing areas throughout the province, with the exception of 2 districts. The southeastern regions showed a higher prevalence than other areas of the province, while in some of the northern and western districts a lower prevalence was noted. Several serovars were detected by the MAT and, although Leptospira interrogans serovar pomona occurred most frequently, serovars tarassovi, bratislava, hardjo, canicola and icterohaemorrhagiae were also frequently identified. The findings of the survey are discussed.
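Adjusting an apparent (test-positive) prevalence for imperfect sensitivity and specificity, as this survey did, is commonly done with the Rogan–Gladen estimator. A minimal sketch — the accuracy values in the example are hypothetical, not the MAT's actual characteristics:

```python
def rogan_gladen(apparent_prev, sensitivity, specificity):
    """Estimate true prevalence from apparent (test-positive) prevalence:
        (apparent + Sp - 1) / (Se + Sp - 1),
    clipped to the valid probability range [0, 1]."""
    est = (apparent_prev + specificity - 1) / (sensitivity + specificity - 1)
    return min(max(est, 0.0), 1.0)

# e.g. 25% apparent prevalence with a hypothetical Se=0.90, Sp=0.95 test
adjusted = rogan_gladen(0.25, 0.90, 0.95)  # about 0.235
```

Note the estimator assumes a fixed, known Se and Sp; the survey additionally adjusted for clustering at the dip-tank level, which this sketch does not cover.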


2007 ◽  
Vol 53 (10) ◽  
pp. 1725-1729 ◽  
Author(s):  
Corné Biesheuvel ◽  
Les Irwig ◽  
Patrick Bossuyt

Abstract Before a new test is introduced in clinical practice, its accuracy should be assessed. In the past decade, researchers have put an increased emphasis on exploring differences in test sensitivity and specificity between patient subgroups. If the reference standard is imperfect and the prevalence of the target condition differs among subgroups, apparent differences in test sensitivity and specificity between subgroups may be caused by reference standard misclassification. We provide guidance on how to determine whether observed differences may be explained by reference standard misclassification. Such misclassification may be ascertained by examining how the apparent sensitivity and specificity change with the prevalence of the target condition in the subgroups.
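Under the usual simplifying assumption that the index test and the imperfect reference err independently given true disease status, the apparent sensitivity and specificity can be written out explicitly as functions of prevalence, which makes the subgroup effect described above concrete. A sketch with hypothetical accuracy values:

```python
def apparent_accuracy(se, sp, se_ref, sp_ref, prev):
    """Apparent Se/Sp of an index test judged against an imperfect
    reference standard, assuming the two tests err independently
    given true disease status."""
    # P(index+, reference+) and P(reference+)
    both_pos = prev * se * se_ref + (1 - prev) * (1 - sp) * (1 - sp_ref)
    ref_pos = prev * se_ref + (1 - prev) * (1 - sp_ref)
    # P(index-, reference-) and P(reference-)
    both_neg = prev * (1 - se) * (1 - se_ref) + (1 - prev) * sp * sp_ref
    ref_neg = 1 - ref_pos
    return both_pos / ref_pos, both_neg / ref_neg
```

Holding the true Se/Sp of both tests fixed and varying only the prevalence changes the apparent values — exactly the signature of reference standard misclassification that the authors suggest examining across subgroups.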


2011 ◽  
Vol 135 (7) ◽  
pp. 874-881
Author(s):  
Nikita Makretsov ◽  
C. Blake Gilks ◽  
Reza Alaghehbandan ◽  
John Garratt ◽  
Louise Quenneville ◽  
...  

Abstract Context.—External quality assurance and proficiency testing programs for breast cancer predictive biomarkers are based largely on traditional ad hoc design; at present there is no universal consensus on definition of a standard reference value for samples used in external quality assurance programs. Objective.—To explore reference values for estrogen receptor and progesterone receptor immunohistochemistry in order to develop an evidence-based analytic platform for external quality assurance. Design.—There were 31 participating laboratories, 4 of which were previously designated as “expert” laboratories. Each participant tested a tissue microarray slide with 44 breast carcinomas for estrogen receptor and progesterone receptor and submitted it to the Canadian Immunohistochemistry Quality Control Program for analysis. Nuclear staining in 1% or more of the tumor cells was a positive score. Five methods for determining reference values were compared. Results.—All reference values showed 100% agreement for estrogen receptor and progesterone receptor scores, when indeterminate results were excluded. Individual laboratory performance (agreement rates, test sensitivity, test specificity, positive predictive value, negative predictive value, and κ value) was very similar for all reference values. Identification of suboptimal performance by all methods was identical for 30 of 31 laboratories. Estrogen receptor assessment of 1 laboratory was discordant: agreement was less than 90% for 3 of 5 reference values and greater than 90% with the use of 2 other reference values. Conclusions.—Various reference values provide equivalent laboratory rating. In addition to descriptive feedback, our approach allows calculation of technical test sensitivity and specificity, positive and negative predictive values, agreement rates, and κ values to guide corrective actions.
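All of the per-laboratory statistics listed (agreement rate, test sensitivity and specificity, PPV, NPV, and κ) derive from a single 2×2 table of a laboratory's binary calls against the chosen reference value. A minimal sketch with made-up calls, not data from the program:

```python
def lab_performance(lab, ref):
    """Agreement statistics for one laboratory's binary (pos/neg)
    biomarker calls against a reference score on the same cases."""
    pairs = list(zip(lab, ref))
    tp = sum(1 for l, r in pairs if l and r)
    tn = sum(1 for l, r in pairs if not l and not r)
    fp = sum(1 for l, r in pairs if l and not r)
    fn = sum(1 for l, r in pairs if not l and r)
    n = tp + tn + fp + fn
    po = (tp + tn) / n                                            # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2   # chance agreement
    return {"agreement": po,
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
            "kappa": (po - pe) / (1 - pe)}

# hypothetical calls for 6 cases (1 = positive, 0 = negative)
metrics = lab_performance([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1])
```

κ here is Cohen's kappa, which discounts the agreement expected by chance — useful because raw agreement alone can look high when most cases are clearly positive or clearly negative.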


2018 ◽  
Vol 2018 ◽  
pp. 1-5
Author(s):  
Cattleya Thongrong ◽  
Pattramon Thaisiam ◽  
Pornthep Kasemsiri

Background. Nasotracheal intubation is a blind procedure that may lead to complications; therefore, several tests have been introduced to assess the more suitable nostril for nasotracheal intubation. However, the value of these simple tests had not been adequately evaluated in clinical practice. Method. A diagnostic prospective study was conducted in 42 patients, ASA classes I–III, undergoing surgery requiring nasotracheal intubation for general anesthesia. Two simple methods for assessing the patency of the nostrils were investigated. Firstly, the occlusion test was evaluated by asking for the patient’s own assessment of nasal airflow during occlusion of each contralateral nostril while in a sitting posture. Secondly, patients breathed onto a spatula held 1 cm below the nostrils while in a sitting posture. All patients were assessed using these two simple tests. Nasal endoscopic examination of each patient was used as the gold standard. Results. The diagnostic value of the occlusion test (sensitivity of 91.7%, specificity of 61.1%, PPV of 75.9%, NPV of 84.6%, LR+ of 2.36, and LR− of 0.14) seemed better than that of the spatula test (sensitivity of 95.8%, specificity of 25.0%, PPV of 63.0%, NPV of 81.8%, LR+ of 1.28, and LR− of 0.17). When both tests were combined in series, the diagnostic value increased (sensitivity of 87.9%, specificity of 70.8%, PPV of 80.1%, NPV of 81.4%, LR+ of 3.01, and LR− of 0.17). Conclusion and Recommendations. The simple occlusion test is more useful than the spatula test. However, combining the results of both tests in series helped to improve the diagnostic value for selecting a suitable nostril for nasotracheal intubation.
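The reported combined figures are consistent with the standard "in series" rule for two tests assumed to err independently: the combined result is positive only if both tests are positive, so sensitivities multiply, while a combined false positive requires both tests to be falsely positive. A sketch:

```python
def combine_in_series(se1, sp1, se2, sp2):
    """Series combination of two tests (positive only if BOTH are
    positive), assuming conditional independence: sensitivities
    multiply; false-positive rates multiply, so specificity rises."""
    return se1 * se2, 1 - (1 - sp1) * (1 - sp2)

# occlusion test (Se 0.917, Sp 0.611) + spatula test (Se 0.958, Sp 0.250)
se, sp = combine_in_series(0.917, 0.611, 0.958, 0.250)
# se ~ 0.878, sp ~ 0.708 -- matching the reported 87.9% / 70.8%
```

This trade-off (lower sensitivity, higher specificity) is exactly why series combination helped here: the spatula test's very poor specificity (25%) is the weak point that the occlusion test compensates for.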


2018 ◽  
Vol 52 (4) ◽  
pp. 413-421 ◽  
Author(s):  
Dominika Novak Mlakar ◽  
Tatjana Kofol Bric ◽  
Ana Lucija Škrjanec ◽  
Mateja Krajc

Abstract Background We assessed the incidence and characteristics of interval cancers after the faecal immunochemical occult blood test and calculated the test sensitivity in the Slovenian colorectal cancer screening programme. Patients and methods The analysis included the population aged 50 to 69 years invited for screening between April 2011 and December 2012. Persons were followed up until the next scheduled invitation, on average for 2 years. Data on interval cancers and cancers in non-responders were obtained from the cancer registry. Gender, age, years of schooling, and cancer site and stage were compared among the three observed groups. We used the proportional incidence method to calculate the screening test sensitivity. Results Among 502,488 persons invited for screening, 493 cancers were detected after a positive screening test, 79 interval cancers after a negative faecal immunochemical test, and 395 cancers in non-responders. The proportion of interval cancers was 13.8%. Among the three observed groups, cancers were more frequent in men (p = 0.009) and in persons aged 60+ years (p < 0.001). Compared with screen-detected cancers and cancers in non-responders, interval cancers were more frequent in persons with 10 years of schooling or more (p = 0.029 and p = 0.001), in stage III (p = 0.027) and IV (p < 0.001), and in the right hemicolon (p < 0.001). Interval cancers were more frequently in stage I than cancers in non-responders (p = 0.004). The test sensitivity of the faecal immunochemical test was 88.45%. Conclusions Interval cancers in the Slovenian screening programme were detected in proportions similar to those in comparable programmes. Test sensitivity was among the highest compared to similar programmes and was achieved using a test kit for two stool samples.
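The proportional incidence method the authors cite estimates programme-level test sensitivity by comparing the observed number of interval cancers against the number expected from background incidence in the absence of screening. A minimal sketch — the counts in the example are illustrative, not the Slovenian figures:

```python
def proportional_incidence_sensitivity(observed_interval, expected_background):
    """Proportional incidence method: test sensitivity is estimated as
    1 minus the ratio of interval cancers observed during the
    inter-screening interval to the cancers expected without screening
    (from background incidence in the same population and period)."""
    return 1 - observed_interval / expected_background

# e.g. 20 interval cancers observed where 200 would be expected
sens = proportional_incidence_sensitivity(20, 200)  # 0.90
```

The hard part in practice is the denominator: the expected background incidence must come from registry data on a comparable unscreened population, which is why estimates from this method can vary between programmes.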


2007 ◽  
Vol 135 (1-2) ◽  
pp. 31-37 ◽  
Author(s):  
Zorana Penezic ◽  
Milos Zarkovic ◽  
Svetlana Vujovic ◽  
Miomira Ivovic ◽  
Biljana Beleslin ◽  
...  

Introduction Diagnosis and differential diagnosis of Cushing's syndrome (CS) remain a considerable challenge in endocrinology. For more than 20 years, CRH has been widely used as a differential diagnostic test. Following CRH administration, the majority of patients with an ACTH-secreting pituitary adenoma show a significant rise of plasma cortisol and ACTH, whereas those with ectopic ACTH secretion characteristically do not. Objective The aim of our study was to assess the value of the CRH test for differential diagnosis of CS using the ROC (receiver operating characteristic) curve method. Method A total of 30 patients with CS verified by pathological examination and postoperative testing were evaluated. The CRH test was performed within the diagnostic procedures. An ACTH-secreting pituitary adenoma was found in 18, ectopic ACTH secretion in 3, and a cortisol-secreting adrenal adenoma in 9 of the patients with CS. Cortisol and ACTH were determined at −15, 0, 15, 30, 45, 60, 90 and 120 min after i.v. administration of 100 µg of ovine CRH. Cortisol and ACTH were determined by commercial RIA. Statistical data processing was done by ROC curve analysis. Due to their small number, the patients with ectopic ACTH secretion were excluded from test evaluation by the ROC curve method. Results In the evaluated subgroups, basal cortisol was 1147.3±464.3 vs. 1589.8±296.3 vs. 839.2±405.6 nmol/L; maximal stimulated cortisol 1680.3±735.5 vs. 1749.0±386.6 vs. 906.1±335.0 nmol/L; and maximal increase as a percentage of basal cortisol 49.1±36.9 vs. 9.0±7.6 vs. 16.7±37.3%. Correspondingly, basal ACTH was 100.9±85.0 vs. 138.0±123.7 vs. 4.8±4.3 pg/mL, and maximal stimulated ACTH 203.8±160.1 vs. 288.0±189.5 vs. 7.4±9.2 pg/mL. For cortisol, the area under the ROC curve was 0.815±0.083 (95% CI 0.652–0.978). For a cortisol increase cut-off of 20%, test sensitivity was 83%, with a specificity of 78%. For ACTH, the area under the ROC curve was 0.637±0.142 (95% CI 0.359–0.916). For an ACTH increase cut-off of 30%, test sensitivity was 70%, with a specificity of 57%. Conclusion Determination of cortisol and ACTH levels in the CRH test remains a reliable tool in the differential diagnosis of Cushing's syndrome.
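The areas under the ROC curve reported above have a useful probabilistic reading (the Mann–Whitney interpretation): the AUC is the probability that a randomly chosen case from one group scores higher than a randomly chosen case from the other. A minimal sketch with made-up scores, not the study's measurements:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney interpretation: the probability that a
    randomly chosen positive case scores higher than a randomly chosen
    negative case; tied scores count as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

On this reading, the cortisol AUC of 0.815 says a pituitary-origin case shows a larger cortisol response than an adrenal case about 82% of the time, while the ACTH AUC of 0.637 reflects a much weaker separation — consistent with the poorer sensitivity/specificity at the 30% ACTH cut-off.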


2005 ◽  
Vol 2 (4) ◽  
pp. 365-372 ◽  
Author(s):  
Mark E Arnold ◽  
Alasdair Cook ◽  
Rob Davies

The objective of this study was to develop and parametrize a mathematical model of the sensitivity of pooled faecal sampling for detecting Salmonella infection in pigs. A mathematical model was developed to represent the effect of pooling on the probability of Salmonella isolation. Parameters for the model were estimated using data obtained by collecting 50 faecal samples from each of two pig farms. Each sample was tested for Salmonella at individual sample weights of 0.1, 0.5, 1, 10 and 25 g, and pools of 5, 10 and 20 samples were created from the individual samples. The highest test sensitivity for individual samples was found at 10 g (90% sensitivity), with the 25 g test sensitivity equal to 83%. For samples of less than 10 g, sensitivity was found to decrease with sample weight. Incubation for 48 h produced a more sensitive test than incubation for 24 h. The model showed sensitivity increasing with the number of samples in the pool: pools of 5, 10 and 20 were all more sensitive than individual sampling, with pools of 20 the most sensitive of those considered.
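The direction of the pooling effect can be seen in a deliberately simplified version of the calculation — one that ignores dilution, which the authors' full model does account for. If a pool tests positive whenever it would detect at least one infected sample it contains, then (with hypothetical prevalence and per-sample sensitivity values):

```python
def pool_detection_prob(prev, se_individual, pool_size):
    """Simplified probability that a pool of `pool_size` faecal samples
    tests positive, where each sample independently comes from an
    infected animal with probability `prev` and an infected sample is
    detectable with probability `se_individual`. Ignores dilution, which
    the full model in the study accounts for."""
    p_miss_one = 1 - prev * se_individual   # a given sample contributes no detection
    return 1 - p_miss_one ** pool_size

# hypothetical herd: 10% prevalence, 90% per-sample sensitivity
probs = [pool_detection_prob(0.10, 0.90, k) for k in (1, 5, 10, 20)]
```

Under these assumptions, detection probability rises monotonically with pool size, matching the study's finding that pools of 20 were the most sensitive option considered; dilution would push in the opposite direction, which is why the empirical parametrization in the paper matters.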

