Variability in Rates of Use of Antibacterials Among 130 US Hospitals and Risk-Adjustment Models for Interhospital Comparison

2008 ◽  
Vol 29 (3) ◽  
pp. 203-211 ◽  
Author(s):  
Conan MacDougall ◽  
Ronald E. Polk

Objective. To describe variability in rates of antibacterial use in a large sample of US hospitals and to create risk-adjusted models for interhospital comparison. Methods. We retrospectively surveyed the use of 87 antibacterial agents on the basis of electronic claims data from 130 medical-surgical hospitals in the United States for the period August 2002 to July 2003; these records represented 1,798,084 adult inpatients. Hospitals were assigned randomly to the derivation data set (65 hospitals) or the validation data set (65 hospitals). Multivariable models predicting rates of antibacterial use were created using the derivation data set. These models were then used to predict rates of antibacterial use in the validation data set, and the predicted rates were compared with the observed rates. Rates of antibacterial use were measured in days of therapy per 1,000 patient-days. Results. Across the surveyed hospitals, a mean of 59.3% of patients received at least 1 dose of an antimicrobial agent during hospitalization (range for individual hospitals, 44.4%-73.6%). The mean total rate of antibacterial use was 789.8 days of therapy per 1,000 patient-days (range, 454.4-1,153.4). The best model for the total rate of antibacterial use explained 31% of the variance and included the number of hospital beds, the number of days in the intensive care unit per 1,000 patient-days, the number of surgeries per 1,000 discharges, and the number of cases of pneumonia, bacteremia, and urinary tract infection per 1,000 discharges. Five hospitals in the validation data set were identified as outliers because their observed antibacterial use exceeded the upper bound of the 90% prediction interval for predicted antibacterial use. Conclusion. Most adult inpatients receive antimicrobial agents during their hospitalization, but there is substantial variability between hospitals in the volume of antibacterials used. Risk-adjusted models can explain a significant proportion of this variation and allow comparisons between hospitals for benchmarking purposes.
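A minimal sketch of the benchmarking approach described above, not the authors' code: fit a multivariable linear model on the derivation hospitals, then flag validation hospitals whose observed use exceeds the upper bound of the 90% prediction interval. All column names (dot_per_1000pd for days of therapy per 1,000 patient-days, and the four predictors) are hypothetical.

```python
# Sketch: risk-adjusted benchmarking of hospital antibacterial use.
import pandas as pd
import statsmodels.formula.api as smf

PREDICTORS = "beds + icu_days_per_1000pd + surgeries_per_1000dc + infections_per_1000dc"

def fit_and_flag(derivation: pd.DataFrame, validation: pd.DataFrame) -> pd.DataFrame:
    # Multivariable model fit on the derivation data set only.
    model = smf.ols(f"dot_per_1000pd ~ {PREDICTORS}", data=derivation).fit()
    # 90% prediction interval for each validation hospital (alpha = 0.10).
    frame = model.get_prediction(validation).summary_frame(alpha=0.10)
    out = validation.copy()
    out["predicted"] = frame["mean"].values
    out["upper_90pi"] = frame["obs_ci_upper"].values
    # Outlier: observed use above the upper prediction-interval bound.
    out["high_outlier"] = out["dot_per_1000pd"] > out["upper_90pi"]
    return out
```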

2016 ◽  
Vol 16 (17) ◽  
pp. 11379-11393 ◽  
Author(s):  
Huiqun Wang ◽  
Gonzalo Gonzalez Abad ◽  
Xiong Liu ◽  
Kelly Chance

Abstract. The collection 3 Ozone Monitoring Instrument (OMI) Total Column Water Vapor (TCWV) data generated by the Smithsonian Astrophysical Observatory's (SAO) algorithm version 1.0 and archived at the Aura Validation Data Center (AVDC) are compared with NCAR's ground-based GPS data, AERONET's sun-photometer data, and Remote Sensing Systems' (RSS) SSMIS data. Results show that the OMI data track the seasonal and interannual variability of TCWV for a wide range of climate regimes. During the period from 2005 to 2009, the mean OMI−GPS over land is −0.3 mm and the mean OMI−AERONET over land is 0 mm. For July 2005, the mean OMI−SSMIS over the ocean is −4.3 mm. The better agreement over land than over the ocean is corroborated by the smaller fitting residuals over land and suggests that liquid water is a key factor for the fitting quality over the ocean in the version 1.0 retrieval algorithm. We find that the influence of liquid water is reduced using a shorter optimized retrieval window of 427.7–465 nm. As a result, the TCWV retrieved with the new algorithm increases significantly over the ocean and only slightly over land. We have also made several updates to the air mass factor (AMF) calculation. The updated version 2.1 retrieval algorithm improves the land/ocean consistency and the overall quality of the OMI TCWV data set. The version 2.1 OMI data largely eliminate the low bias of the version 1.0 OMI data over the ocean and are 1.5 mm higher than RSS's "clear"-sky SSMIS data in July 2005. Over the ocean, the mean of version 2.1 OMI−GlobVapour is 1 mm for July 2005 and 0 mm for January 2005. Over land, the version 2.1 OMI data are about 1 mm higher than GlobVapour when TCWV < 15 mm and about 1 mm lower when TCWV > 15 mm.
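As a simple illustration of the comparison statistics reported above (not the SAO retrieval algorithm itself): the biases are mean OMI-minus-reference differences over collocated observations, optionally split at a threshold as in the OMI−GlobVapour comparison. Array names are hypothetical.

```python
# Sketch: mean-bias statistics for collocated TCWV observations (mm).
import numpy as np

def mean_bias(omi_tcwv: np.ndarray, ref_tcwv: np.ndarray) -> float:
    """Mean OMI-minus-reference difference, ignoring missing pairs."""
    return float(np.nanmean(omi_tcwv - ref_tcwv))

def binned_bias(omi_tcwv: np.ndarray, ref_tcwv: np.ndarray,
                threshold_mm: float = 15.0) -> tuple[float, float]:
    """Mean bias below and at-or-above the threshold."""
    diff = omi_tcwv - ref_tcwv
    low = float(np.nanmean(diff[ref_tcwv < threshold_mm]))
    high = float(np.nanmean(diff[ref_tcwv >= threshold_mm]))
    return low, high
```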


2019 ◽  
Vol 51 (1) ◽  
pp. 17-23
Author(s):  
Joyce Z. Qian ◽  
Mara A. McAdams-DeMarco ◽  
Derek Ng ◽  
Bryan Lau

Background: The choice of vascular access for older hemodialysis patients presents a special challenge, since the rate of arteriovenous fistula (AVF) primary failure is high. Lok's risk equation for predicting AVF primary failure has achieved good prediction accuracy and holds great potential for clinical use, but it has not been validated among older hemodialysis patients in the United States. Methods: From the United States Renal Data System, we assembled a validation data set of 14,892 patients aged 67 years and older who initiated hemodialysis with a central venous catheter between July 1, 2010, and June 30, 2012, and had a subsequent incident AVF placed. We examined the external validity of Lok's model by applying it to this validation data set. Discriminatory accuracy and calibration were evaluated by the concordance index (C-statistic) and a calibration plot, respectively. Results: The observed frequency of AVF primary failure varied from 0.45 to 0.53 among hemodialysis patients in the validation data set. The predicted probabilities of AVF primary failure calculated using Lok's risk equation ranged from 0.08 to 0.61, and 77.8, 40.5, and 51.7% of patients were categorized as having high, intermediate, and low risk of AVF primary failure, respectively. The C-statistic of Lok's risk equation in the validation data set was 0.53 (95% CI 0.52–0.54). The predicted probabilities of AVF primary failure corresponded poorly with the observed proportions in the calibration plot. Conclusions: When externally applied to a cohort of older U.S. hemodialysis patients, Lok's risk equation exhibited poor discrimination and calibration accuracy, and it should not be used to predict AVF primary failure in this population. A more complex model with strong predictors is needed to better inform clinical decisions about AVF placement in this population.
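A minimal sketch of this style of external validation, assuming arrays of observed binary outcomes and predicted probabilities from the published equation (names hypothetical); for a binary outcome, the C-statistic equals the area under the ROC curve.

```python
# Sketch: external validation of a published risk equation.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

def validate(y_observed: np.ndarray, p_predicted: np.ndarray):
    # Discrimination: C-statistic = ROC AUC for a binary outcome.
    c_stat = roc_auc_score(y_observed, p_predicted)
    # Calibration: observed event rate vs. mean predicted risk per decile.
    obs_rate, pred_risk = calibration_curve(
        y_observed, p_predicted, n_bins=10, strategy="quantile"
    )
    return c_stat, obs_rate, pred_risk
```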


2013 ◽  
Vol 11 (9) ◽  
pp. 1481-1491 ◽  
Author(s):  
Darja Kavšek ◽  
Adriána Bednárová ◽  
Miša Biro ◽  
Roman Kranvogl ◽  
Darinka Vončina ◽  
...  

Abstract The chemical composition of Slovenian coal has been characterised in terms of proximate and ultimate analyses, and the relations among the chemical descriptors and the higher heating value (HHV) were examined using correlation analysis and multivariate data analysis methods. The proximate analysis descriptors were used to predict HHV using multiple linear regression (MLR) and artificial neural network (ANN) methods. An attempt was made to select the model with the optimal number of predictor variables. According to the adjusted multiple coefficient of determination in the MLR model and, alternatively, according to sensitivity analysis in ANN development, both methods identified the same two descriptors as optimal predictors: fixed carbon and volatile matter. The performances of MLR and ANN in modelling HHV were comparable; the mean relative difference between the actual and calculated HHV values in the training data was 1.11% for MLR and 0.91% for ANN. The predictive ability of the models was evaluated on an external validation data set; the mean relative difference between the actual and predicted HHV values was 1.39% for MLR and 1.47% for ANN. Thus, the developed models can appropriately be used to calculate HHV.
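A minimal sketch of the MLR half of this comparison, using the two predictors the study selected (fixed carbon and volatile matter) and the mean-relative-difference metric quoted above; variable names are hypothetical, and this is an illustration of the method, not the published model.

```python
# Sketch: MLR prediction of HHV from two proximate-analysis descriptors.
import numpy as np
from sklearn.linear_model import LinearRegression

def mean_relative_difference(actual: np.ndarray, predicted: np.ndarray) -> float:
    """Mean |actual - predicted| / actual, in percent."""
    return float(np.mean(np.abs(actual - predicted) / actual) * 100)

def fit_mlr(fixed_carbon: np.ndarray, volatile_matter: np.ndarray,
            hhv: np.ndarray):
    X = np.column_stack([fixed_carbon, volatile_matter])
    model = LinearRegression().fit(X, hhv)
    # Training-set error, comparable to the 1.11% figure reported for MLR.
    return model, mean_relative_difference(hhv, model.predict(X))
```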


Author(s):  
Richard W. Johnson

The United States Department of Energy (DOE) is promoting the resurgence of nuclear power in the U.S. for both electrical power generation and the production of process heat required for industrial processes such as the manufacture of hydrogen for use as a fuel in automobiles. The DOE project is called the Next Generation Nuclear Plant (NGNP) and is based on a Generation IV reactor concept called the very high temperature reactor (VHTR), which will use helium as the coolant at temperatures ranging from 450 °C to perhaps 1000 °C. While computational fluid dynamics (CFD) has not been used for past safety analysis of nuclear reactors in the U.S., it is being considered for safety analysis of existing and future reactors. It is fully recognized that CFD simulation codes will have to be validated for flow physics reasonably close to the actual fluid dynamic conditions expected in normal and accident operational situations. To this end, experimental data have been obtained in a scaled model of a narrow slice of the lower plenum of a prismatic VHTR. This article presents new results of CFD examinations of these data to explore potential issues with the geometry, the initial conditions, the flow dynamics, and the data needed to fully specify the inlet and boundary conditions; results for several turbulence models are examined. Issues are addressed and recommendations about the data are made.


Author(s):  
Jeffrey A. Kornuta ◽  
Solver I. Thorsson ◽  
Jonathan Gibbs ◽  
Peter Veloo ◽  
Troy Rovella

Abstract The United States Pipeline and Hazardous Materials Safety Administration (PHMSA) recently revised the federal rules governing natural gas transport. PHMSA added a new section on the verification of pipeline material properties for pipeline assets with insufficient or incomplete records. This section permits the use of nondestructive examination (NDE) technologies to estimate material properties, including yield strength (YS) and ultimate tensile strength (UTS), if several conditions are satisfied: NDE measurement accuracy and uncertainty must be conservatively accounted for, the NDE technology must be validated by experts, and proper calibration procedures must be implemented. One such NDE technology is Instrumented Indentation Testing (IIT), which can be used to estimate YS and UTS. Quantifying any NDE technology's precision and accuracy requires consistent identification of test errors: if an error occurs during a measurement such that the data should be excluded from subsequent analyses, analysts need to be alerted to the data characteristics before including those results. Such testing errors are distinct from the inherent measurement uncertainty due to both random error and systematic error. Any NDE measurement will contain some degree of uncertainty; however, faulty measurements exhibiting clearly identifiable errors must be excluded from subsequent analyses to maintain the integrity of the data set. Accordingly, this paper extends Pacific Gas and Electric's (PG&E's) previously reported efforts on IIT uncertainty quantification by presenting observations of a specific type of IIT error, related to tool fixturing, that has occurred during in-situ testing and by describing how this error manifests in the test data. Once this test error was clearly identified, isolated, and found to be repeatable, pre-processing algorithms were adapted to detect the error and alert NDE technicians to it during testing, ultimately evolving NDE work procedures. This paper discusses this process from the initial recognition of a test error, to the adaptation of appropriate detection algorithms, and finally to the resulting revisions in operator procedures. Ultimately, these modifications have improved validation data quality and reduced the error rate of IIT measurements collected in the field.
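The paper does not publish its detection algorithm; purely as a hypothetical illustration of the kind of pre-processing check described, the sketch below flags indentation depth records containing abrupt single-step jumps, of the sort a fixturing slip might produce, so a technician can be alerted and the test repeated. The threshold value and function name are assumptions.

```python
# Hypothetical sketch: flag IIT records with abrupt displacement jumps.
import numpy as np

def flag_fixturing_error(depth_um: np.ndarray,
                         jump_threshold_um: float = 5.0) -> bool:
    """Return True if any single-step depth change exceeds the threshold,
    indicating the record should be excluded and the test repeated."""
    steps = np.diff(depth_um)
    return bool(np.any(np.abs(steps) > jump_threshold_um))
```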


2000 ◽  
Vol 16 (2) ◽  
pp. 107-114 ◽  
Author(s):  
Louis M. Hsu ◽  
Judy Hayman ◽  
Judith Koch ◽  
Debbie Mandell

Summary: In the United States' normative population for the WAIS-R, differences (Ds) between persons' verbal and performance IQs (VIQs and PIQs) tend to increase as full scale IQs (FSIQs) increase. This suggests that norm-referenced interpretations of Ds should take FSIQs into account. Two new graphs are presented to facilitate this type of interpretation. One graph estimates the mean of absolute values of D (called the typical D) at each FSIQ level of the US normative population. The other estimates the absolute value of D that is exceeded only 5% of the time (called the abnormal D) at each FSIQ level of this population. A graph for the identification of conventional "statistically significant Ds" (also called "reliable Ds") is also presented. A reliable D is defined in the context of classical true score theory as an absolute D that is unlikely (p < .05) to be exceeded by a person whose true VIQ and PIQ are equal. As conventionally defined, reliable Ds do not depend on the FSIQ. The graphs of typical and abnormal Ds are based on quadratic models of the relation of the sizes of Ds to FSIQs; these models generalize the models described in Hsu (1996). The new graphical method of identifying abnormal Ds is compared with the conventional Payne-Jones method of identifying these Ds. Implications of the three juxtaposed graphs for the interpretation of VIQ-PIQ differences are discussed.
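For context, the conventional Payne-Jones criterion referenced above can be stated (our gloss, not a quotation from the article) as

$$|D| > z_{1-\alpha/2}\,\sigma\sqrt{2 - r_{VV} - r_{PP}}, \qquad \sigma = 15,\ z_{.975} \approx 1.96,$$

where $r_{VV}$ and $r_{PP}$ are the reliabilities of the VIQ and PIQ scales. Because none of these quantities varies with FSIQ, the resulting reliable-D threshold is constant across FSIQ levels, which is precisely the property that the typical-D and abnormal-D graphs relax.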


2020 ◽  
Author(s):  
Eleonora Diamanti ◽  
Inda Setyawati ◽  
Spyridon Bousis ◽  
Leticia Mojas ◽  
Lotteke Swier ◽  
...  

Here, we report on the virtual screening, design, synthesis and structure–activity relationships (SARs) of the first class of selective antibacterial agents against the energy-coupling factor (ECF) transporters. The ECF transporters are a family of transmembrane proteins involved in the uptake of vitamins in a wide range of bacteria. Inhibition of the activity of these proteins could reduce the viability of pathogens that depend on vitamin uptake. Because of their central role in the metabolism of bacteria and their absence in humans, ECF transporters are novel potential antimicrobial targets for tackling infection. The hit compound's metabolic and plasma stability, its potency (compound 20, MIC against Streptococcus pneumoniae = 2 µg/mL), the absence of cytotoxicity and the lack of resistance development under the conditions tested here suggest that this scaffold may represent a promising starting point for the development of novel antimicrobial agents with an unprecedented mechanism of action.


2013 ◽  
Vol 99 (4) ◽  
pp. 40-45 ◽  
Author(s):  
Aaron Young ◽  
Philip Davignon ◽  
Margaret B. Hansen ◽  
Mark A. Eggen

ABSTRACT Recent media coverage has focused on the supply of physicians in the United States, especially the impact of a growing physician shortage and of the Affordable Care Act. State medical boards and other entities maintain data on physician licensure and discipline, as well as some biographical data describing their physician populations. However, these sources leave gaps in workforce information. The Federation of State Medical Boards' (FSMB) Census of Licensed Physicians and the AMA Masterfile, for example, offer valuable information, but they provide only a limited picture of the physician workforce. Furthermore, they cannot shed light on some of the nuances of physician availability, such as how much time physicians spend providing direct patient care. In response to these gaps, policymakers and regulators have in recent years discussed the creation of a physician minimum data set (MDS), which would be gathered periodically and would provide key physician workforce information. While proponents of an MDS believe it would benefit a variety of stakeholders, no effort has been made to determine whether state medical boards consider it important to collect physician workforce data and whether they currently collect workforce information from licensed physicians. To learn more, the FSMB surveyed the executive directors of state medical boards to determine their perceptions of collecting workforce data and their current practices for collecting such data. The purpose of this article is to convey the results of this effort. Survey findings indicate that the vast majority of boards view physician workforce information as valuable for determining health care needs within their state, and that various boards already collect some data elements. Analysis of the data confirms the potential benefits of a physician MDS and shows why state medical boards are in a unique position to collect MDS information from physicians.


BMJ Open ◽  
2021 ◽  
Vol 11 (1) ◽  
pp. e040778
Author(s):  
Vineet Kumar Kamal ◽  
Ravindra Mohan Pandey ◽  
Deepak Agrawal

Objective: To develop and validate a simple risk score chart to estimate the probability of poor outcomes in patients with severe head injury (HI). Design: Retrospective. Setting: Level-1, government-funded trauma centre, India. Participants: Patients with severe HI admitted to the neurosurgery intensive care unit during 19 May 2010–31 December 2011 (n=946) for model development and, for external validation of the model, patients from the same centre meeting the same inclusion criteria from 1 January 2012 to 31 July 2012 (n=284). Outcome(s): In-hospital mortality and unfavourable outcome at 6 months. Results: In the development data set, 39.5% of patients died in hospital and 70.7% had an unfavourable outcome. Multivariable logistic regression analysis of routinely collected admission characteristics revealed that the independent predictors of in-hospital mortality were age (51–60, >60 years), motor score (1, 2, 4), pupillary reactivity (none), presence of hypotension, effaced basal cisterns, and traumatic subarachnoid haemorrhage/intraventricular haematoma, and that the independent predictors of unfavourable outcome were age (41–50, 51–60, >60 years), motor score (1–4), pupillary reactivity (none, one), unequal limb movement, and presence of hypotension; in each case the 95% confidence interval (CI) of the odds ratio (OR) did not contain one. The discriminative ability (area under the receiver operating characteristic curve (95% CI)) of the score chart for in-hospital mortality and the 6-month outcome was excellent in the development data set (0.890 (0.867 to 0.912) and 0.894 (0.869 to 0.918), respectively), in the internal validation data set using the bootstrap resampling method (0.889 (0.867 to 0.909) and 0.893 (0.867 to 0.915), respectively), and in the external validation data set (0.871 (0.825 to 0.916) and 0.887 (0.842 to 0.932), respectively). Calibration showed good agreement between observed outcome rates and predicted risks in the development and external validation data sets (p>0.05). Conclusion: These score charts can be used in clinical decision making to predict outcomes in new patients with severe HI in India and similar settings.
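A minimal sketch (not the study code) of the internal-validation step described above: fit a logistic model on the development data, then assess the stability of its AUC by refitting on bootstrap resamples and scoring each refit against the original data. Variable names are hypothetical.

```python
# Sketch: bootstrap internal validation of a logistic risk model's AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def bootstrap_auc(X: np.ndarray, y: np.ndarray,
                  n_boot: int = 200, seed: int = 0):
    rng = np.random.default_rng(seed)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    # Apparent AUC on the development data.
    apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))  # resample with replacement
        m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
        # Score each bootstrap refit on the original data.
        aucs.append(roc_auc_score(y, m.predict_proba(X)[:, 1]))
    return apparent, float(np.mean(aucs))
```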


2021 ◽  
pp. 106591292110093
Author(s):  
James M. Strickland ◽  
Katelyn E. Stauffer

Despite a growing body of literature examining the consequences of women’s inclusion among lobbyists, our understanding of the factors that lead to women’s initial emergence in the profession is limited. In this study, we propose that gender diversity among legislative targets incentivizes organized interests to hire women lobbyists, and thus helps to explain when and how women emerge as lobbyists. Using a comprehensive data set of registered lobbyist–client pairings from all American states in 1989 and 2011, we find that legislative diversity influences not only the number of lobby contracts held by women but also the number of former women legislators who become revolving-door lobbyists. This second finding further supports the argument that interests capitalize on the personal characteristics of lobbyists, specifically by hiring women to work in more diverse legislatures. Our findings have implications for women and politics, lobbying, and voice and political equality in the United States.

