Effect of cold temperature on the rate of natural attenuation of benzene, toluene, ethylbenzene, and the three isomers of xylene (BTEX)

2010 ◽  
Vol 47 (5) ◽  
pp. 516-527 ◽  
Author(s):  
Ania C. Ulrich ◽  
Kristen Tappenden ◽  
James Armstrong ◽  
Kevin W. Biggar

The impact of cold temperatures on natural attenuation rates is poorly understood, a problem compounded by the lack of published data, particularly under field conditions. This paper presents a collection of data from monitoring and remediation projects completed at cold temperatures. To address the paucity of data on natural attenuation rates at cold temperatures, this study compiled and normalized 101 anaerobic BTEX natural attenuation rates, drawn from 55 studies under anaerobic conditions (46 field and nine laboratory) at sites where groundwater temperatures are typically less than 15 °C. Normalizing the data to +5 and +10 °C reduced the scatter in BTEX degradation rates in groundwater by 33% to 66%. Eleven of the 55 studies and 43 of the 101 rates presented in this study have not been previously published. Additionally, this study compiled, for each site, relevant contaminant and hydrogeological information that can be reviewed to choose appropriate rates for preliminary site analysis.
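The abstract does not specify how the rates were normalized to +5 and +10 °C; one common approach is a Q10-style temperature correction applied to first-order attenuation rate constants. A minimal sketch (the function name and the Q10 value of 2 are illustrative assumptions, not the study's method):

```python
def normalize_rate(k_field, t_field_c, t_ref_c, q10=2.0):
    """Normalize a first-order attenuation rate constant (1/day) measured at
    t_field_c (deg C) to a reference temperature t_ref_c, assuming the rate
    changes by a factor of q10 for every 10 deg C change in temperature."""
    return k_field * q10 ** ((t_ref_c - t_field_c) / 10.0)

# Example: a benzene degradation rate of 0.010 1/day measured at 15 degC,
# normalized to the +5 and +10 degC reference temperatures used in the study.
k15 = 0.010
k5 = normalize_rate(k15, 15.0, 5.0)    # 0.005 1/day with Q10 = 2
k10 = normalize_rate(k15, 15.0, 10.0)
```

Normalizing field rates to a common reference temperature in this way is what makes rates measured at different sites comparable, which is presumably how the reported 33%–66% reduction in scatter arises.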

2017 ◽  
Vol 61 (12) ◽  
Author(s):  
Sinziana Cristea ◽  
Anne Smits ◽  
Aida Kulo ◽  
Catherijne A. J. Knibbe ◽  
Mirjam van Weissenbruch ◽  
...  

ABSTRACT Aminoglycoside pharmacokinetics (PK) is expected to change in neonates with perinatal asphyxia treated with therapeutic hypothermia (PATH). Several amikacin dosing guidelines have been proposed for treating neonates with (suspected) septicemia; however, none provide adjustments for cases of PATH. Therefore, we aimed to quantify the differences in amikacin PK between neonates with and without PATH in order to propose suitable dosing recommendations. Based on amikacin therapeutic drug monitoring data collected retrospectively from neonates with PATH, combined with a published data set, we assessed the impact of PATH on amikacin PK by using population modeling. Monte Carlo and stochastic simulations were performed to establish amikacin exposures in neonates with PATH after dosing according to the current guidelines and according to proposed model-derived dosing guidelines. Amikacin clearance was decreased by 40.6% in neonates with PATH, with no changes in volume of distribution. Simulations showed that increasing the dosing interval by 12 h decreases the percentage of neonates reaching toxic trough levels (>5 mg/liter) from 40%–76% to 14%–25%, while still reaching efficacy targets, compared to the results of current dosing regimens. Based on this study, a 12-h increase in the amikacin dosing interval in neonates with PATH is proposed to correct for the reduced clearance, yielding safe and effective exposures. As amikacin is renally excreted, further studies into other renally excreted drugs may be required, as their clearance may also be impaired.
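The abstract does not give the population model's parameter values, but the intuition behind lengthening the interval under reduced clearance can be illustrated with a minimal one-compartment, IV-bolus sketch. All numerical values here (dose, volume, baseline clearance) are made-up assumptions; only the 40.6% clearance reduction comes from the abstract:

```python
import math

def trough(dose_mg, cl_l_per_h, v_l, tau_h):
    """Steady-state trough for a one-compartment IV-bolus model:
    C_min = (dose/V) * exp(-k*tau) / (1 - exp(-k*tau)), with k = CL/V."""
    k = cl_l_per_h / v_l
    return (dose_mg / v_l) * math.exp(-k * tau_h) / (1.0 - math.exp(-k * tau_h))

# Illustrative values (NOT the study's estimates): a 1.5 kg neonate with
# V = 1.5 L, baseline CL = 0.2 L/h, and a 22.5 mg dose (15 mg/kg).
cl_path = 0.2 * (1.0 - 0.406)            # the abstract's 40.6% clearance reduction
c24 = trough(22.5, cl_path, 1.5, 24.0)   # trough on a 24-h interval
c36 = trough(22.5, cl_path, 1.5, 36.0)   # trough after lengthening by 12 h
```

With these assumed values, the reduced clearance raises the 24-h trough well above the baseline case, and adding 12 h to the interval brings it back down, which is the qualitative effect the simulations quantify.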


2012 ◽  
Vol 30 (2) ◽  
pp. 253-262 ◽  
Author(s):  
Martyna Molak ◽  
Eline D. Lorenzen ◽  
Beth Shapiro ◽  
Simon Y.W. Ho

Abstract In recent years, ancient DNA has increasingly been used for estimating molecular timescales, particularly in studies of substitution rates and demographic histories. Molecular clocks can be calibrated using temporal information from ancient DNA sequences. This information comes from the ages of the ancient samples, which can be estimated by radiocarbon dating the source material or by dating the layers in which the material was deposited. Both methods involve sources of uncertainty. The performance of Bayesian phylogenetic inference depends on the information content of the data set, which includes variation in the DNA sequences and the structure of the sample ages. Various sources of estimation error can reduce our ability to estimate rates and timescales accurately and precisely. We investigated the impact of sample-dating uncertainties on the estimation of evolutionary timescale parameters using the software BEAST. Our analyses involved 11 published data sets and focused on estimates of substitution rate and root age. We show that, provided that samples have been accurately dated and have a broad temporal span, it might be unnecessary to account for sample-dating uncertainty in Bayesian phylogenetic analyses of ancient DNA. We also investigated the sample size and temporal span of the ancient DNA sequences needed to estimate phylogenetic timescales reliably. Our results show that the range of sample ages plays a crucial role in determining the quality of the results but that accurate and precise phylogenetic estimates of timescales can be made even with only a few ancient sequences. These findings have important practical consequences for studies of molecular rates, timescales, and population dynamics.


2003 ◽  
Vol 14 (3) ◽  
pp. 199-207 ◽  
Author(s):  
Mingyu Liang ◽  
Amy G. Briggs ◽  
Elizabeth Rute ◽  
Andrew S. Greene ◽  
Allen W. Cowley

Dye switching and biological replication substantially increase the cost and the complexity of cDNA microarray studies. The objective of the present analysis was to quantitatively assess the importance of these procedures to provide a quantitative basis for decision-making in the design of microarray experiments. Taking advantage of the unique characteristics of a published data set, the impact of these procedures on the reliability of microarray results was calculated. Adding a second microarray with dye switching substantially increased the correlation coefficient between observed and predicted ln(ratio) values from 0.38 ± 0.06 to 0.62 ± 0.04 (n = 12) and the outlier concordance from 21 ± 3% to 43 ± 4%. It also increased the correlation with the entire set of microarrays from 0.60 ± 0.04 to 0.79 ± 0.04 and the outlier concordance from 31 ± 6% to 58 ± 5%, and tended to improve the correlation with Northern blot results. Adding a second microarray to include biological replication also improved the performance of these indices but often to a lesser degree. Inclusion of both procedures in the second microarray substantially improved the consistency with the entire set of microarrays but had minimal effect on the consistency with predicted results. Analysis of another data set generated using a different cDNA labeling method also supported a significant impact of dye switching. In conclusion, both dye switching and biological replication substantially increased the reliability of microarray results, with dye switching likely having even greater benefits. Recommendations regarding the use of these procedures were proposed.
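As a toy illustration of why dye switching helps (this is not the paper's data or analysis): a gene-specific dye bias adds the same offset under both labelings, so averaging the original and the negated dye-swapped log-ratios cancels it exactly:

```python
import numpy as np

# Toy model of a two-array dye-swap design: each gene's observed ln(ratio)
# is the true ln(ratio) plus a gene-specific dye bias plus measurement noise.
rng = np.random.default_rng(1)
true_lnratio = rng.normal(0.0, 1.0, 1000)   # true expression log-ratios
dye_bias = rng.normal(0.0, 0.5, 1000)       # gene-specific dye effect
array1 = true_lnratio + dye_bias + rng.normal(0.0, 0.3, 1000)   # original labeling
array2 = -true_lnratio + dye_bias + rng.normal(0.0, 0.3, 1000)  # dyes swapped

combined = (array1 - array2) / 2            # dye bias cancels exactly
corr_single = np.corrcoef(array1, true_lnratio)[0, 1]
corr_combined = np.corrcoef(combined, true_lnratio)[0, 1]
```

In this toy model the combined estimate correlates with the true log-ratios much more strongly than a single array does, mirroring the improvement in correlation the paper reports.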


2021 ◽  
Author(s):  
Steven Marc Weisberg ◽  
Victor Roger Schinazi ◽  
Andrea Ferrario ◽  
Nora Newcombe

Relying on shared tasks and stimuli to conduct research can enhance the replicability of findings and allow a community of researchers to collect large data sets across multiple experiments. This approach is particularly relevant for experiments in spatial navigation, which often require the development of unfamiliar large-scale virtual environments to test participants. One challenge with shared platforms is that undetected technical errors, rather than being restricted to individual studies, become pervasive across many studies. Here, we discuss the discovery of a programming error (a bug) in a virtual environment platform used to investigate individual differences in spatial navigation: Virtual Silcton. The bug resulted in storing the absolute value of an angle in a pointing task rather than the signed angle. This bug was difficult to detect for several reasons, and it rendered the original sign of the angle unrecoverable. To assess the impact of the error on published findings, we collected a new data set for comparison. Our results revealed that the effect of the error on published data is likely to be minimal, partially explaining the difficulty in detecting the bug over the years. We also used the new data set to develop a tool that allows researchers who have previously used Virtual Silcton to evaluate the impact of the bug on their findings. We summarize the ways that shared open materials, shared data, and collaboration can pave the way for better science to prevent errors in the future.
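Virtual Silcton's actual code is not shown in the abstract, but the class of bug described is easy to sketch; the function names and angle convention here are illustrative:

```python
def signed_pointing_error(true_bearing_deg, response_deg):
    """Signed angular error wrapped into [-180, 180); negative means the
    response was counterclockwise of the target direction."""
    return (response_deg - true_bearing_deg + 180.0) % 360.0 - 180.0

def stored_pointing_error_buggy(true_bearing_deg, response_deg):
    """The kind of bug described: only the absolute error was stored,
    so the original sign (left vs. right of target) is unrecoverable."""
    return abs(signed_pointing_error(true_bearing_deg, response_deg))
```

Because two responses that miss the target by the same amount on opposite sides map to the same stored value, no post hoc processing can restore the sign, which is why the authors had to collect a new data set to assess the bug's impact.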


1999 ◽  
Vol 354 (1384) ◽  
pp. 799-807 ◽  
Author(s):  
C. C. Lord ◽  
B. Barnard ◽  
K. Day ◽  
J. W. Hargrove ◽  
J. J. McNamara ◽  
...  

Recent research has shown that many parasite populations are made up of a number of epidemiologically distinct strains or genotypes. The implications of strain structure or genetic diversity for parasite population dynamics are still uncertain, partly because there is no coherent framework for the interpretation of field data. Here, we present an analysis of four published data sets for vector-borne microparasite infections where strains or genotypes have been distinguished: serotypes of African horse sickness (AHS) in zebra; types of Nannomonas trypanosomes in tsetse flies; parasite-induced erythrocyte surface antigen (PIESA) based isolates of Plasmodium falciparum malaria in humans; and the merozoite surface protein 2 gene (MSP-2) alleles of P. falciparum in humans and in anopheline mosquitoes. For each data set we consider the distribution of strains or types among hosts and any pairwise associations between strains or types. Where host age data are available we also compare age-prevalence relationships and estimates of the force of infection. Multiple infections of hosts are common, and for most data sets infections have an aggregated distribution among hosts with a tendency towards positive associations between certain strains or types. These patterns could result from interactions (facilitation) between strains or types, or they could reflect patterns of contact between hosts and vectors. We use a mathematical model to illustrate the impact of host–vector contact patterns, finding that even if contact is random there may still be significant aggregation in parasite distributions. This effect is enhanced if there is non-random contact or other heterogeneities between hosts, vectors or parasites. In practice, different strains or types also have different forces of infection. We anticipate that aggregated distributions and positive associations between microparasite strains or types will be extremely common.


2020 ◽  
Author(s):  
Xiaoqian Jiang ◽  
Lishan Yu ◽  
Hamisu M. Salihu ◽  
Deepa Dongarwar

BACKGROUND In the United States, state laws require birth certificates to be completed for all births, and federal law mandates national collection and publication of births and other vital statistics data. The National Center for Health Statistics (NCHS) has published the key statistics of birth data over the years, and data files dating from as early as the 1970s have been released publicly. There are about 3 million new births each year, and every birth is a record in the data set described by hundreds of variables. In total, the data cover more than half of the current US population, making them an invaluable resource for studying birth epidemiology. With such big data, researchers can ask interesting questions and study longitudinal patterns, for example, the impact of a mother's drinking status on infertility in metropolitan areas over the last decade, or of the biological father's education level on cesarean-section rates over the years. However, the existing published data sets cannot directly support such research questions, because adjustments to the variables and their categories have left the individually published data files fragmented. The information contained in the published files is highly diverse, comprising hundreds of variables each year. Beyond minor adjustments such as renaming variables and adding categories, some major updates significantly changed the statistical fields (including removal, addition, and modification of variables), making the published data disconnected and ambiguous to use across multiple years. Researchers have previously reconstructed features to study temporal patterns, but only at a limited scale (focusing on a few variables of interest). Many have reinvented the wheel, and such reconstructions lack consistency because different researchers may use different criteria to harmonize variables, leading to inconsistent findings and limiting the reproducibility of research. 
There has been no systematic effort to combine roughly five decades of data files into a database that includes every variable ever released by NCHS. OBJECTIVE To use machine learning techniques to combine United States (US) natality data for the last five decades, with its changing variables and factors, into a consistent database. METHODS We developed a feasible and efficient deep-learning-based framework to harmonize data sets of live births in the US from 1970 to 2018. We constructed a graph from the properties and elements of the databases, including their variables, and trained a graph convolutional network (GCN) on it to learn node embeddings, where similarity between the learned embeddings reflects similarity between variables. We devised a novel loss function with a slack margin and a banlist mechanism (for the random walk) to learn the desired structure (two nodes sharing more information are more similar to each other). We developed an active learning mechanism to conduct the harmonization. RESULTS We harmonized the historical US birth data and resolved conflicts in ambiguous terms. From a total of 9,321 variables (i.e., 783 stemmed variables, 1970 to 2018), applying our model iteratively together with human review yielded 323 hyperchains of variables. The hyperchains for harmonization comprised 201 stemmed-variable pairs when considering pairs of different stemmed variables that changed over the years. In the first round, the model proposed 305 candidate stemmed-variable pairs (based on the top-20 most similar variables for each variable, according to the learned embeddings) and achieved recall and precision of 87.56% and 57.70%, respectively. CONCLUSIONS Our harmonized graph neural network (HGNN) method provides a feasible and efficient way to connect relevant databases at a meta-level. 
By adapting to the properties and characteristics of the databases, HGNN can learn patterns and search for relations globally, which makes it powerful for discovering similarity between variables across databases. Smart use of machine learning can significantly reduce the manual effort needed to harmonize databases and integrate fragmented data into useful databases for future research.
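The full HGNN training pipeline is beyond a short sketch, but the candidate-retrieval step the abstract describes (proposing, for each variable, its top-20 most similar variables under the learned embeddings) can be illustrated as follows; the function, names, and embeddings here are illustrative assumptions, not the authors' code:

```python
import numpy as np

def top_k_candidates(embeddings, names, k=20):
    """For each variable, return the k most similar other variables by
    cosine similarity of their learned embeddings - the candidate lists
    from which harmonization pairs would then be proposed for review."""
    x = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = x @ x.T
    np.fill_diagonal(sims, -np.inf)   # never propose a variable as its own match
    order = np.argsort(-sims, axis=1)[:, :k]
    return {names[i]: [names[j] for j in order[i]] for i in range(len(names))}
```

In the paper's workflow, lists like these would feed the active-learning loop, with human reviewers confirming or rejecting the proposed pairs, which is consistent with the reported high recall (87.56%) but moderate precision (57.70%) of the first-round candidates.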


2020 ◽  
Vol 494 (1) ◽  
pp. 1387-1394 ◽  
Author(s):  
M Damasso ◽  
F Del Sordo

ABSTRACT Proxima c, a candidate second planet orbiting Proxima Centauri, was detected with the radial velocity method. The long announced orbital period (5.21$^{+0.26}_{-0.22}$ yr) and small semi-amplitude of the induced Doppler signal (1.2 ± 0.4 $\rm{\,m\,s^{-1}}$) make this a challenging detection and a target worthy of follow-up in the years to come. We evaluate the impact of future data on the statistical significance of the detection by adding realistic simulated radial velocities to the published data set, spanning up to one orbital period of Proxima c in the time range 2019–2023. We find that the detection significance of Proxima c increases depending not only on the amount of data collected, but also on the number of instruments used and, especially, on the time span covered by the observational campaign. However, on average we do not obtain strong statistical evidence, and we predict that, in the best-case scenario, the detection of Proxima c can become significant at the 4-σ level within the next five years. If instead Proxima c does not exist, the significance of the detected signal may drop to the 2-σ level.
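As a toy illustration (not the authors' Bayesian significance analysis): a circular-orbit radial-velocity signal with the announced period and semi-amplitude of Proxima c can be simulated and its semi-amplitude recovered by linear least squares at the known period; the sampling, noise level, and seed are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
P = 5.21 * 365.25   # announced orbital period, in days
K = 1.2             # announced semi-amplitude, in m/s
t = np.sort(rng.uniform(0.0, 2.0 * P, 400))                         # epochs
rv = K * np.sin(2.0 * np.pi * t / P) + rng.normal(0.0, 1.0, t.size) # noisy RVs

# Fit rv = A sin(wt) + B cos(wt) at the known period; K_hat = sqrt(A^2 + B^2).
w = 2.0 * np.pi / P
X = np.column_stack([np.sin(w * t), np.cos(w * t)])
(A, B), *_ = np.linalg.lstsq(X, rv, rcond=None)
K_hat = float(np.hypot(A, B))
```

Even this crude fit shows why time span matters: with per-point noise comparable to the signal amplitude, recovering a ~1 m/s semi-amplitude reliably requires sampling that covers a substantial fraction of the 5.21-yr orbit.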


Crisis ◽  
2018 ◽  
Vol 39 (1) ◽  
pp. 27-36 ◽  
Author(s):  
Kuan-Ying Lee ◽  
Chung-Yi Li ◽  
Kun-Chia Chang ◽  
Tsung-Hsueh Lu ◽  
Ying-Yeh Chen

Abstract. Background: We investigated the age at exposure to parental suicide and the risk of subsequent suicide completion in young people. The impact of parental and offspring sex was also examined. Method: Using a cohort study design, we linked Taiwan's Birth Registry (1978–1997) with Taiwan's Death Registry (1985–2009) and identified 40,249 children who had experienced maternal suicide (n = 14,431), paternal suicide (n = 26,887), or the suicide of both parents (n = 281). Each exposed child was matched to 10 children of the same sex and birth year whose parents were still alive, yielding a total of 398,081 children for our non-exposed cohort. A Cox proportional hazards model was used to compare the suicide risk of the exposed and non-exposed groups. Results: Compared with the non-exposed group, offspring who were exposed to parental suicide were 3.91 times (95% confidence interval [CI] = 3.10–4.92) more likely to die by suicide after adjusting for baseline characteristics. The risk of suicide seemed to be lower in older male offspring (HR = 3.94, 95% CI = 2.57–6.06) but higher in older female offspring (HR = 5.30, 95% CI = 3.05–9.22). Stratified analyses based on parental sex revealed patterns similar to those of the combined analysis. Limitations: As only register-based data were used, we were not able to explore the impact of variables not contained in the data set, such as the role of mental illness. Conclusion: Our findings suggest a prominent elevation in the risk of suicide among offspring who lost their parents to suicide. The risk elevation differed according to the sex of the afflicted offspring as well as their age at exposure.


2013 ◽  
Vol 99 (4) ◽  
pp. 40-45 ◽  
Author(s):  
Aaron Young ◽  
Philip Davignon ◽  
Margaret B. Hansen ◽  
Mark A. Eggen

ABSTRACT Recent media coverage has focused on the supply of physicians in the United States, especially in light of a growing physician shortage and the Affordable Care Act. State medical boards and other entities maintain data on physician licensure and discipline, as well as some biographical data describing their physician populations. However, these sources leave gaps in workforce information. The Federation of State Medical Boards' (FSMB) Census of Licensed Physicians and the AMA Masterfile, for example, offer valuable information, but they provide a limited picture of the physician workforce. Furthermore, they are unable to shed light on some of the nuances of physician availability, such as how much time physicians spend providing direct patient care. In response to these gaps, policymakers and regulators have in recent years discussed the creation of a physician minimum data set (MDS), which would be gathered periodically and would provide key physician workforce information. While proponents of an MDS believe it would benefit a variety of stakeholders, no effort has been made to determine whether state medical boards consider it important to collect physician workforce data and whether they currently collect workforce information from licensed physicians. To learn more, the FSMB surveyed the executive directors of state medical boards to determine their perceptions of collecting workforce data and their current practices regarding the collection of such data. The purpose of this article is to convey the results of this effort. Survey findings indicate that the vast majority of boards view physician workforce information as valuable in determining health care needs within their state, and that various boards are already collecting some data elements. 
Analysis of the data confirms the potential benefits of a physician MDS and why state medical boards are in a unique position to collect MDS information from physicians.


2019 ◽  
Vol 11 (1) ◽  
pp. 156-173
Author(s):  
Spenser Robinson ◽  
A.J. Singh

This paper shows that Leadership in Energy and Environmental Design (LEED)-certified hospitality properties exhibit increased expenses and earn lower net operating income (NOI) than non-certified buildings. ENERGY STAR-certified properties demonstrate lower overall expenses than non-certified buildings, with statistically neutral NOI effects. Using a custom sample of all green buildings and their competitive data set as of 2013, provided by Smith Travel Research (STR), the paper documents potential reasons for this result, including increased operational expenses, potential confusion between certified and registered LEED projects in the data, and qualitative input from a small-sample survey of five industry professionals. The paper provides one of the few analyses of operating efficiencies in LEED and ENERGY STAR hospitality properties.

