Development of Graphitization of μg-Sized Samples at Lund University

Radiocarbon ◽  
2010 ◽  
Vol 52 (3) ◽  
pp. 1270-1276 ◽  
Author(s):  
J Genberg ◽  
K Stenström ◽  
M Elfman ◽  
M Olsson

To be able to successfully measure radiocarbon with accelerator mass spectrometry (AMS) in atmospheric aerosol samples, graphitization of small sample sizes (<50 μg carbon) must provide reproducible results. At Lund University, a graphitization line optimized for small samples has been constructed. Attention has been given to minimize the reduction reactor volume and each reactor is equipped with a very small pressure transducer that enables constant monitoring of the reaction. Samples as small as 25 μg of carbon have been successfully analyzed, and the mass detection limit of the system has probably not been reached.

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Florent Le Borgne ◽  
Arthur Chatton ◽  
Maxime Léger ◽  
Rémi Lenain ◽  
Yohann Foucher

In clinical research, there is a growing interest in the use of propensity score-based methods to estimate causal effects. G-computation is an alternative because of its high statistical power. Machine learning is also increasingly used because of its possible robustness to model misspecification. In this paper, we aimed to propose an approach that combines machine learning and G-computation when both the outcome and the exposure status are binary, and that is able to deal with small samples. We evaluated the performances of several methods, including penalized logistic regressions, a neural network, a support vector machine, boosted classification and regression trees, and a super learner through simulations. We proposed six different scenarios characterised by various sample sizes, numbers of covariates and relationships between covariates, exposure statuses, and outcomes. We also illustrated the application of these methods by using them to estimate the efficacy of barbiturates prescribed during the first 24 h of an episode of intracranial hypertension. In the context of G-computation, for estimating the individual outcome probabilities in two counterfactual worlds, we found that the super learner tended to outperform the other approaches in terms of both bias and variance, especially for small sample sizes. The support vector machine also performed well, but its mean bias was slightly higher than that of the super learner. In the investigated scenarios, G-computation combined with the super learner was a performant method for drawing causal inferences, even from small sample sizes.
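The G-computation procedure the abstract describes can be sketched in a few lines: fit an outcome model on exposure and covariates, predict each subject's outcome probability in the two counterfactual worlds (everyone exposed, everyone unexposed), and average the contrast. This is a minimal illustration with simulated data and a plain logistic regression standing in for the super learner; all variable names and numbers are illustrative assumptions, not the paper's setup.

```python
# Minimal G-computation sketch for a binary exposure A and binary outcome Y.
# A simple logistic regression stands in for the super learner here.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 150                                             # deliberately small sample
X = rng.normal(size=(n, 3))                         # covariates
A = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))     # exposure depends on X
Y = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * A + X[:, 1]))))  # binary outcome

# Fit the outcome model Q(A, X) = P(Y = 1 | A, X)
model = LogisticRegression().fit(np.column_stack([A, X]), Y)

# Predict each subject's outcome in the two counterfactual worlds
p1 = model.predict_proba(np.column_stack([np.ones(n), X]))[:, 1]   # all exposed
p0 = model.predict_proba(np.column_stack([np.zeros(n), X]))[:, 1]  # all unexposed

# Marginal causal contrasts: risk difference and marginal odds ratio
rd = p1.mean() - p0.mean()
or_ = (p1.mean() / (1 - p1.mean())) / (p0.mean() / (1 - p0.mean()))
print(f"risk difference: {rd:.3f}, marginal odds ratio: {or_:.3f}")
```

In the paper's approach, the logistic fit would be replaced by the super learner, and the whole procedure bootstrapped to quantify variance.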


2016 ◽  
Vol 41 (5) ◽  
pp. 472-505 ◽  
Author(s):  
Elizabeth Tipton ◽  
Kelly Hallberg ◽  
Larry V. Hedges ◽  
Wendy Chan

Background: Policy makers and researchers are frequently interested in understanding how effective a particular intervention may be for a specific population. One approach is to assess the degree of similarity between the sample in an experiment and the population. Another approach is to combine information from the experiment and the population to estimate the population average treatment effect (PATE). Method: Several methods for assessing the similarity between a sample and population currently exist, as well as methods for estimating the PATE. In this article, we investigate properties of six of these methods and statistics in the small sample sizes common in education research (i.e., 10–70 sites), evaluating the utility of rules of thumb developed from observational studies in the generalization case. Result: In small random samples, large differences between the sample and population can arise simply by chance, and many of the statistics commonly used in generalization are a function of both sample size and the number of covariates being compared. The rules of thumb developed in observational studies (which are commonly applied in generalization) are much too conservative given the small sample sizes found in generalization. Conclusion: This article implies that sharp inferences to large populations from small experiments are difficult even with probability sampling. Features of random samples should be kept in mind when evaluating the extent to which results from experiments conducted on nonrandom samples might generalize.
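One common similarity statistic of the kind the abstract evaluates is the absolute standardized mean difference (SMD) per covariate. The simulation below (an illustration, not the paper's design) shows the phenomenon the authors report: even for a genuinely random sample of 30 sites, some covariate SMDs exceed the 0.25 rule-of-thumb threshold purely by chance.

```python
# Illustrative check: SMDs between a small random sample and its population.
import numpy as np

rng = np.random.default_rng(1)
population = rng.normal(0.0, 1.0, size=(10_000, 4))            # 4 covariates
sample = population[rng.choice(10_000, size=30, replace=False)]  # 30 "sites"

def smd(sample_col, pop_col):
    """Absolute standardized mean difference between sample and population."""
    pooled_sd = np.sqrt((sample_col.var(ddof=1) + pop_col.var(ddof=1)) / 2)
    return abs(sample_col.mean() - pop_col.mean()) / pooled_sd

smds = [smd(sample[:, j], population[:, j]) for j in range(4)]
print([round(v, 3) for v in smds])
# With n = 30, the sampling SD of each mean is ~1/sqrt(30) ≈ 0.18, so SMDs
# near or above 0.25 occur by chance even under true random sampling.
```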


Radiocarbon ◽  
1987 ◽  
Vol 29 (2) ◽  
pp. 169-175 ◽  
Author(s):  
Michael Andree

When single species of foraminifera picked from marine sediments are 14C dated with Accelerator Mass Spectrometry (AMS), bioturbation puts limits on the minimal sample size to be used, as uncertainty is added to the result by statistics of the picking process. The model presented here simulates the additional statistical uncertainty introduced into the measurement by the coupling of bioturbation and small sample amounts. As there is no general solution for this problem, we present two simple cases only. The model can also be used to simulate more complicated situations occurring in sediments.
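The coupling of bioturbation and small picked samples can be mimicked with a toy Monte Carlo model: treat the mixed layer as a two-component age mixture and simulate the spread of the mean age of n picked specimens. All numbers below are illustrative assumptions, not values from the paper.

```python
# Toy simulation of picking statistics in a bioturbated layer modelled as a
# mixture of two age populations (ages and mixing fractions are made up).
import numpy as np

rng = np.random.default_rng(2)
ages = np.array([9000.0, 12000.0])   # two 14C age populations mixed together
mix = np.array([0.7, 0.3])           # fraction of each population in the layer

def picked_age_sd(n_picked, n_trials=5000):
    """Std dev of the mean age of n randomly picked specimens."""
    picks = rng.choice(ages, size=(n_trials, n_picked), p=mix)
    return picks.mean(axis=1).std()

for n in (5, 20, 100):
    print(n, round(float(picked_age_sd(n)), 1))
# The added uncertainty shrinks roughly as 1/sqrt(n_picked), which is why
# bioturbation sets a practical lower limit on usable sample size.
```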


Paleobiology ◽  
2003 ◽  
Vol 29 (1) ◽  
pp. 52-70 ◽  
Author(s):  
Anna K. Behrensmeyer ◽  
C. Tristan Stayton ◽  
Ralph E. Chapman

Avian skeletal remains occur in many fossil assemblages, and in spite of small sample sizes and incomplete preservation, they may be a source of valuable paleoecological information. In this paper, we examine the taphonomy of a modern avian bone assemblage and test the relationship between ecological data based on avifaunal skeletal remains and known ecological attributes of a living bird community. A total of 54 modern skeletal occurrences and a sample of 126 identifiable bones from Amboseli Park, Kenya, were analyzed for weathering features and skeletal part preservation in order to characterize preservation features and taphonomic biases. Avian remains, with the exception of ostrich, decay more rapidly than adult mammal bones and rarely reach advanced stages of weathering. Breakage and the percentage of anterior limb elements serve as indicators of taphonomic overprinting that may affect paleoecological signals. Using ecomorphic categories including body weight, diet, and habitat, we compared species in the bone assemblage with the living Amboseli avifauna. The documented bone sample is biased toward large body size, representation of open grassland habitats, and grazing or scavenging diets. In spite of this, multidimensional scaling analysis shows that the small faunal sample (16 out of 364 species) in the pre-fossil bone assemblage accurately represents general features of avian ecospace in Amboseli. This provides a measure of the potential fidelity of paleoecological reconstructions based on small samples of avian remains. In the Cenozoic, the utility of avian fossils is enhanced because bird ecomorphology is relatively well known and conservative through time, allowing back-extrapolations of habitat preferences, diet, etc. based on modern taxa.


Radiocarbon ◽  
1986 ◽  
Vol 28 (2A) ◽  
pp. 556-560 ◽  
Author(s):  
N J Conard ◽  
David Elmore ◽  
P W Kubik ◽  
H E Gove ◽  
L E Tubbs ◽  
...  

A method of chemical separation and purification of chloride from relatively small samples (500 to 2100g) of glacial ice is presented. With this procedure the first successful measurements of pre-bomb levels of 36Cl in Greenland ice have been made. Emphasis is placed on methods of reducing sulfur, which causes interference in the accelerator mass spectrometry, and on maximizing the yield. Data regarding the selection of materials for sample holders and the use of metal powders for extending the lifetime of the sample are also presented.


Radiocarbon ◽  
1987 ◽  
Vol 29 (3) ◽  
pp. 323-333 ◽  
Author(s):  
J S Vogel ◽  
D E Nelson ◽  
J R Southon

The levels and sources of the measurement background in an AMS 14C dating system have been studied in detail. The relative contributions to the total background from combustion, graphitization, storage, handling, and from the accelerator were determined by measuring the 14C concentrations in samples of anthracite coal ranging in size from 15μg to 20mg. The results show that, for the present system, the uncertainty in the background is greater than that due to measurement precision alone for very old or for very small samples. While samples containing 100μg of carbon can yield useful 14C dates throughout the Holocene, 200 to 500μg are required for dating late Pleistocene materials. With the identification of the procedures that introduce contamination, the level and uncertainty of the total system background should both be reducible to the point that 100μg of carbon would be sufficient for dating most materials.
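The reason background dominates for small samples can be seen in the standard mass-balance blank correction that studies like this underpin. The sketch below applies that correction for a 100 μg sample; the blank mass and fraction-modern values are illustrative assumptions, not the paper's measured background.

```python
# Mass-balance blank correction for a small AMS 14C sample (toy numbers).
import math

F_meas, m = 0.250, 100e-6     # measured fraction modern; sample mass (g C)
F_b, m_b = 0.500, 1e-6        # assumed blank fraction modern and blank mass (g C)

# Mass balance: F_meas * m = F_true * (m - m_b) + F_b * m_b
F_true = (F_meas * m - F_b * m_b) / (m - m_b)

# Conventional 14C age, using the Libby mean life of 8033 yr
age = -8033 * math.log(F_true)
print(round(F_true, 4), round(age))
```

Because the blank term F_b * m_b is a fixed absolute contribution, its relative effect (and the uncertainty it carries) grows as the sample mass m shrinks, which is exactly the behaviour the abstract reports for very old or very small samples.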


Radiocarbon ◽  
2001 ◽  
Vol 43 (2A) ◽  
pp. 275-282 ◽  
Author(s):  
Q Hua ◽  
G E Jacobsen ◽  
U Zoppi ◽  
E M Lawson ◽  
A A Williams ◽  
...  

We present routine methods of target preparation for radiocarbon analysis at the ANTARES Accelerator Mass Spectrometry (AMS) Centre, as well as recent developments which have decreased our procedural blank level and improved our ability to process small samples containing less than 200 μg of carbon. Routine methods of 14C sample preparation include sample pretreatment, CO2 extraction (combustion, hydrolysis and water stripping) and conversion to graphite (graphitization). A new method of cleaning glassware and reagents used in sample processing, by baking them under a stream of oxygen, is described. The results show significant improvements in our procedural blanks. In addition, a new graphitization system dedicated to small samples, using H2/Fe reduction of CO2, has been commissioned. The technical details of this system, the graphite yield and the level of fractionation of the targets are discussed.


2003 ◽  
Vol 791 ◽  
Author(s):  
Carl C. Koch ◽  
Ronald O. Scattergood ◽  
K. Linga Murty ◽  
Ramesh K. Guduru ◽  
Gopinath Trichy ◽  
...  

Testing methods are reviewed that can be applied to the small sample sizes which result from many of the processing routes for preparation of nanocrystalline materials. These include the measurement of elastic properties on small samples; hardness, with emphasis on nanoindentation methods; the miniaturized disk bend test (MDBT); the automated ball indentation test (ABI); the shear punch test; and the use of subsize compression and tensile samples.


2005 ◽  
Vol 28 (3) ◽  
pp. 283-294 ◽  
Author(s):  
Jin-Shei Lai ◽  
Jeanne Teresi ◽  
Richard Gershon

An item with differential item functioning (DIF) displays different statistical properties, conditional on a matching variable. The presence of DIF in measures can invalidate the conclusions of medical outcome studies. Numerous approaches have been developed to examine DIF in many areas, including education and health-related quality of life. There is little consensus in the research community regarding selection of one best method, and most methods require large sample sizes. This article describes some approaches to examine DIF with small samples (e.g., less than 200).
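One widely used DIF procedure of the kind this abstract surveys is the logistic-regression test for uniform DIF: fit the item response on the matching variable alone, then add group membership, and compare the fits with a likelihood-ratio statistic. The sketch below uses simulated data; the sample size, effect size, and variable names are illustrative assumptions.

```python
# Logistic-regression test for uniform DIF on one dichotomous item (toy data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 180                                          # "small sample" per the abstract
group = rng.binomial(1, 0.5, n)                  # focal (1) vs reference (0)
theta = rng.normal(size=n)                       # latent trait
score = theta + rng.normal(scale=0.5, size=n)    # matching variable (score proxy)
p = 1 / (1 + np.exp(-(theta + 0.8 * group)))     # item with uniform DIF built in
item = rng.binomial(1, p)

def loglik(X, y):
    """Log-likelihood of an (effectively) unpenalized logistic fit."""
    m = LogisticRegression(C=1e6).fit(X, y)
    prob = m.predict_proba(X)[:, 1]
    return float(np.sum(y * np.log(prob) + (1 - y) * np.log(1 - prob)))

lr = 2 * (loglik(np.column_stack([score, group]), item)
          - loglik(score.reshape(-1, 1), item))
# Compare against the chi-square(1 df) 5% critical value, 3.84
print(f"LR statistic: {lr:.2f}  (uniform DIF flagged if > 3.84)")
```

With samples under 200, as the abstract notes, such tests lose power, so small-sample adaptations (e.g., exact or Bayesian variants) are typically preferred.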


1988 ◽  
Vol 13 (3) ◽  
pp. 142-146 ◽  
Author(s):  
David A. Cole

In the area of severe-profound retardation, researchers are faced with small sample sizes. The question of statistical power is critical. In this article, three commonly used tests for treatment-control group differences are compared with respect to their relative power: the posttest-only approach, the change-score approach, and an analysis of covariance (ANCOVA) approach. In almost all cases, the ANCOVA approach is more powerful than the other two, even when very small samples are involved. Finally, a fourth approach involving ANCOVA plus alternate rank assignments is examined and found to be superior even to the ANCOVA approach, especially in small-sample cases. Use of slightly more sophisticated statistics in small-sample research is recommended.
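The power ordering the article reports can be reproduced with a quick simulation: generate correlated pretest/posttest scores, apply a treatment effect, and count rejections under each analysis. The effect size, correlation, and the residualization used as a crude stand-in for full ANCOVA are illustrative assumptions, not the article's design.

```python
# Power comparison by simulation: posttest-only vs change-score vs a
# simplified ANCOVA (t-test on pre-adjusted residuals). Toy parameters.
import numpy as np

rng = np.random.default_rng(4)

def tstat(x1, x0):
    """Two-sample t statistic with pooled variance."""
    n1, n0 = len(x1), len(x0)
    sp2 = ((n1 - 1) * x1.var(ddof=1) + (n0 - 1) * x0.var(ddof=1)) / (n1 + n0 - 2)
    return (x1.mean() - x0.mean()) / np.sqrt(sp2 * (1 / n1 + 1 / n0))

def power(method, n=15, rho=0.7, effect=0.6, trials=2000, crit=2.05):
    hits = 0
    for _ in range(trials):
        pre = rng.normal(size=2 * n)
        post = rho * pre + np.sqrt(1 - rho**2) * rng.normal(size=2 * n)
        post[:n] += effect                      # first n units are treated
        if method == "posttest":
            stat = tstat(post[:n], post[n:])
        elif method == "change":
            stat = tstat(post[:n] - pre[:n], post[n:] - pre[n:])
        else:  # "ancova": residualize post on pre, then t-test the residuals
            b = np.cov(pre, post)[0, 1] / pre.var(ddof=1)
            resid = post - b * pre
            stat = tstat(resid[:n], resid[n:])
        hits += abs(stat) > crit
    return hits / trials

for method in ("posttest", "change", "ancova"):
    print(method, power(method))
```

With a pre-post correlation of 0.7, the ANCOVA-style adjustment shrinks the error variance toward 1 - rho² ≈ 0.51, so it rejects more often than the posttest-only test at the same small n, matching the article's conclusion.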

