Calibration data set
Recently Published Documents

Total documents: 24 (five years: 1)
H-index: 14 (five years: 0)

PLoS ONE, 2021, Vol 16 (5), pp. e0250636
Author(s): Marylène Delcey, Pierre Bour, Valéry Ozenne, Wadie Ben Hassen, Bruno Quesson

Purpose To propose an MR-thermometry method and associated data processing technique to predict the maximal RF-induced temperature increase near an implanted wire for any other MRI sequence. Methods A dynamic single-shot echo planar imaging sequence was implemented that interleaves acquisition of several slices every second with an energy deposition module with adjustable parameters. Temperature images were processed in real time and compared to invasive fiber-optic measurements to assess the accuracy of the method. The standard deviation of temperature was measured in gel and in vivo in the human brain of a volunteer. Temperature increases were measured for different RF exposure levels in a phantom containing an inserted wire and then an MR-conditional pacemaker lead. This calibration data set was fitted to a semi-empirical model allowing estimation of the temperature increase for other acquisition sequences. Results The precision of the measurement obtained after filtering, with a 1.6 × 1.6 mm² in-plane resolution, was 0.2°C in gel as well as in the human brain. A high correspondence was observed with invasive temperature measurements during RF-induced heating (0.5°C RMSE for an 11.5°C temperature increase). Temperature rises of 32.4°C and 6.5°C were reached at the tip of a wire and of a pacemaker lead, respectively. After successful fitting of the temperature curves of the calibration data set, the temperature rise predicted by the model was in good agreement (around 5% difference) with the temperature measured by a fiber-optic probe for three other MRI sequences. Conclusion This method provides a rapid and reliable quantification of the temperature rise near an implanted wire. The calibration data set and resulting fitting coefficients can be used to estimate the temperature increase for any MRI sequence as a function of its power and duration.
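
The prediction step described above (estimating the rise for a new sequence from its power and duration) can be sketched with a simple curve fit. The model form, parameter names and data below are assumptions for illustration only, not the authors' semi-empirical model: a first-order heating response dT(t) = a · P · (1 − exp(−t/τ)) is fitted to a synthetic calibration curve and then evaluated for another (assumed) power and duration.

```python
# Illustrative sketch only; the heating model, powers and temperatures are
# hypothetical stand-ins for the calibration data set described above.
import numpy as np
from scipy.optimize import curve_fit

def heating_model(tp, a, tau):
    """First-order heating response; tp = (time in s, power in W/kg)."""
    t, p = tp
    return a * p * (1.0 - np.exp(-t / tau))

# Synthetic calibration curve measured near the wire tip at a fixed power.
t = np.linspace(0.0, 120.0, 121)
p = np.full_like(t, 2.0)
measured = 10.0 * (1.0 - np.exp(-t / 35.0)) + np.random.normal(0.0, 0.2, t.size)

(a_fit, tau_fit), _ = curve_fit(heating_model, (t, p), measured, p0=(1.0, 30.0))

# Predicted temperature rise for another sequence (assumed 3.2 W/kg for 60 s).
print(heating_model((np.array([60.0]), np.array([3.2])), a_fit, tau_fit))
```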


SOIL, 2020, Vol 6 (2), pp. 565-578
Author(s): Wartini Ng, Budiman Minasny, Wanderson de Sousa Mendes, José Alexandre Melo Demattê

Abstract. The number of samples used in the calibration data set affects the quality of the predictive models generated with visible, near- and shortwave-infrared (VIS–NIR–SWIR) spectroscopy for soil attributes. Recently, the convolutional neural network (CNN) has been regarded as a highly accurate model for predicting soil properties on a large database. However, it has not yet been ascertained how large the sample size should be for a CNN model to be effective. This paper investigates the effect of the training sample size on the accuracy of deep learning and machine learning models. It aims to provide an estimate of how many calibration samples are needed to improve the performance of soil property predictions with a CNN compared to conventional machine learning models. In addition, this paper also looks at a way to interpret CNN models, which are commonly labelled as a black box. It is hypothesised that the performance of machine learning models will increase with an increasing number of training samples but will plateau once a certain number is reached, while the performance of the CNN will keep improving. The performances of two machine learning models (partial least squares regression, PLSR; Cubist) are compared against the CNN model. A VIS–NIR–SWIR spectral library from Brazil, containing 4251 unique sites with averages of two to three samples per depth (a total of 12 044 samples), was divided into calibration (3188 sites) and validation (1063 sites) sets. Subsets of the calibration data set were then created to represent smaller calibration data sets of 125, 300, 500, 1000, 1500, 2000, 2500 and 2700 unique sites, equivalent to sample sizes of approximately 350, 840, 1400, 2800, 4200, 5600, 7000 and 7650. All three models (PLSR, Cubist and CNN) were generated for each sample size for the prediction of five different soil properties, i.e. cation exchange capacity, organic carbon, sand, silt and clay content. These calibration subset sampling and modelling processes were repeated 10 times to provide a better representation of the model performances. Learning curves showed that the accuracy increased with an increasing number of training samples. At lower sample sizes (< 1000), PLSR and Cubist performed better than the CNN. The CNN outperformed the PLSR and Cubist models at sample sizes of 1500 and 1800, respectively. Deep learning can therefore be recommended as most efficient for spectral modelling at sample sizes above 2000. The accuracy of the PLSR and Cubist models appears to plateau above sample sizes of 4200 and 5000, respectively, while the accuracy of the CNN has not plateaued. A sensitivity analysis of the CNN model demonstrated its ability to identify important wavelength regions that affected the predictions of various soil attributes.
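
As a rough illustration of the learning-curve experiment described above, the sketch below trains only the simplest of the three models (PLSR) on calibration subsets of increasing size and evaluates RMSE on a fixed held-out set. The spectra, the soil property and the subset sizes are synthetic placeholders, not the Brazilian library.

```python
# Learning-curve sketch with PLSR and synthetic "spectra"; all data and
# sizes are placeholders for the experiment described in the abstract.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n_samples, n_bands = 8000, 200
X = rng.normal(size=(n_samples, n_bands))              # stand-in spectra
y = X @ rng.normal(size=n_bands) + rng.normal(scale=5.0, size=n_samples)

X_cal, y_cal = X[:6000], y[:6000]                      # calibration pool
X_val, y_val = X[6000:], y[6000:]                      # fixed validation set

for n in (350, 840, 1400, 2800, 4200, 5600):           # calibration sizes
    idx = rng.choice(len(X_cal), size=n, replace=False)
    model = PLSRegression(n_components=20).fit(X_cal[idx], y_cal[idx])
    rmse = np.sqrt(mean_squared_error(y_val, model.predict(X_val)))
    print(f"n={n:5d}  RMSE={rmse:.2f}")
```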


2019
Author(s): Yvette L. Eley, William Thompson, Sarah E. Greene, Ilya Mandel, Kirsty Edgar, ...

Abstract. In the modern oceans, the relative abundances of glycerol dialkyl glycerol tetraether (GDGT) compounds produced by marine archaeal communities show a significant dependence on the local sea surface temperature at the site of formation. When preserved in ancient marine sediments, the measured abundances of these fossil lipid biomarkers thus have the potential to provide a geological record of long-term variability in planetary surface temperatures. Several empirical calibrations have been made between observed GDGT relative abundances in late Holocene core-top sediments and modern upper ocean temperatures. These calibrations form the basis of the widely used TEX86 palaeothermometer. There are, however, two outstanding problems with this approach: first, the appropriate assignment of uncertainty to estimates of ancient sea surface temperatures based on the relationship of the ancient GDGT assemblage to the modern calibration data set; and second, the problem of making temperature estimates beyond the range of the modern empirical calibrations (> 30 °C). Here we apply modern machine-learning tools, including Gaussian process emulators and forward modelling, to develop a new mathematical approach we call OPTiMAL (Optimised Palaeothermometry from Tetraethers via MAchine Learning) to improve temperature estimation and the representation of uncertainty based on the relationship between ancient GDGT assemblage data and the structure of the modern calibration data set. We reduce the root mean square uncertainty on temperature predictions (validated using the modern data set) from ~±6 °C using TEX86-based estimators to ±3.6 °C using Gaussian process estimators for temperatures below 30 °C. We also provide a new but simple quantitative measure of the distance between an ancient GDGT assemblage and the nearest neighbour within the modern calibration data set, as a test for significant non-analogue behaviour. Finally, we advocate against the use of temperature estimates beyond the range of the modern empirical calibration data set, given the absence – to date – of a robust predictive biological model or extensive and reproducible mesocosm experimental data in this elevated temperature range.
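
To make the Gaussian process idea concrete, the sketch below fits a GP regressor mapping GDGT relative abundances to temperature and computes a nearest-neighbour distance to the calibration set as a crude non-analogue screen. The kernel, the synthetic assemblages and the distance metric are assumptions for illustration, not the published OPTiMAL implementation.

```python
# Illustrative GP-based palaeothermometer sketch; data and kernel are assumed.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
X_cal = rng.dirichlet(np.ones(6), size=300)            # stand-in GDGT fractions
sst_cal = 30.0 * X_cal[:, 0] + rng.normal(0.0, 1.0, 300)  # stand-in modern SSTs

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2) + WhiteKernel(1.0),
                              normalize_y=True).fit(X_cal, sst_cal)

X_fossil = rng.dirichlet(np.ones(6), size=5)           # ancient assemblages
mean, std = gp.predict(X_fossil, return_std=True)      # estimate + uncertainty

# Distance to the nearest modern analogue as a simple screening statistic.
d_nearest = np.min(np.linalg.norm(X_fossil[:, None, :] - X_cal[None, :, :],
                                  axis=-1), axis=1)
print(mean, std, d_nearest)
```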


2018, Vol 18 (16), pp. 11813-11829
Author(s): Anina Gilgen, Carole Adolf, Sandra O. Brugger, Luisa Ickes, Margit Schwikowski, ...

Abstract. Microscopic charcoal particles are fire-specific tracers, which are ubiquitous in natural archives such as lake sediments or ice cores. Thus, charcoal records from lake sediments have become the primary source for reconstructing past fire activity. Microscopic charcoal particles are generated during forest and grassland fires and can be transported over large distances before being deposited into natural archives. In this paper, we implement microscopic charcoal particles into a global aerosol–climate model to better understand the transport of charcoal on a large scale. Atmospheric transport and interactions with other aerosol particles, clouds, and radiation are explicitly simulated. To estimate the emissions of the microscopic charcoal particles, we use recent European charcoal observations from lake sediments as a calibration data set. We found that scaling black carbon fire emissions from the Global Fire Assimilation System (a satellite-based emission inventory) by approximately 2 orders of magnitude matches the calibration data set best. The charcoal validation data set, for which we collected charcoal observations from all over the globe, generally supports this scaling factor. In the validation data set, we included charcoal particles from lake sediments, peats, and ice cores. While only the Spearman rank correlation coefficient is significant for the calibration data set (0.67), both the Pearson and the Spearman rank correlation coefficients are positive and significantly different from zero for the validation data set (0.59 and 0.48, respectively). Overall, the model captures a significant portion of the spatial variability, but it fails to reproduce the extreme spatial variability observed in the charcoal data. This can mainly be explained by the coarse spatial resolution of the model and uncertainties concerning fire emissions. Furthermore, charcoal fluxes derived from ice core sites are much lower than the simulated fluxes, which can be explained by the location properties (high altitude and steep topography, which are not well represented in the model) of most of the investigated ice cores. Global modelling of charcoal can improve our understanding of the representativeness of this fire proxy. Furthermore, it might allow past fire emissions provided by fire models to be quantitatively validated. This might deepen our understanding of the processes driving global fire activity.
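
The model–data comparison step reported above boils down to correlating simulated and observed charcoal fluxes at the record sites. A minimal sketch, using placeholder numbers rather than the study's data:

```python
# Pearson and Spearman correlations between simulated and observed charcoal
# fluxes; the values below are placeholders, not the study's data.
import numpy as np
from scipy.stats import pearsonr, spearmanr

observed = np.array([1.2, 0.4, 8.9, 0.1, 2.3, 5.0])    # flux at record sites
simulated = np.array([0.9, 0.7, 6.1, 0.3, 1.8, 4.2])   # modelled flux, same sites

r_p, p_p = pearsonr(observed, simulated)
rho, p_s = spearmanr(observed, simulated)
print(f"Pearson r = {r_p:.2f} (p = {p_p:.3f}), Spearman rho = {rho:.2f} (p = {p_s:.3f})")
```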


Author(s): Amr Mohamed, Alexander Y. Bigazzi

With an increasing focus on bicycling as a mode of urban transportation, there is a pressing need for improved tools for bicycle travel analysis and modeling. This paper introduces “biking schedules” to represent archetypal urban cycling dynamics, analogous to driving schedules used in vehicle emissions analysis. Three different methods of constructing biking schedules with both speed and road grade attributes are developed from the driving schedule literature. The methods are applied and compared using a demonstration data set of 55 h of 1-Hz on-road GPS data from three cyclists. Biking schedules are evaluated based on their ability to represent the speed dynamics, power output, and breathing rates of a calibration data set and then validated for different riders. The impact of using coarser 3, 5, and 10 s GPS logging intervals on the accuracy of the schedules is also evaluated. Results indicate that the best biking schedule construction method depends on the volume and resolution of the calibration data set. Overall, the biking schedules successfully represent most of the assessed characteristics of cycling dynamics in the calibration data set (speed, acceleration, grade, power, and breathing) within 5%. Future work will examine the precision of biking schedules constructed from larger data sets in more diverse cycling conditions and explore additional refinements to the construction methods. This research is considered a first step toward adopting biking schedules in bicycle travel analysis and modeling, and potential applications are discussed.
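
The evaluation criterion described above (whether a schedule reproduces the calibration data set's characteristics within 5%) can be illustrated with a small check. The construction method itself is not shown; the statistics, the 1 Hz placeholder data and the candidate schedule below are assumptions.

```python
# Representativeness check for a candidate biking schedule; all data are
# synthetic 1 Hz placeholders, and the statistics are illustrative only.
import numpy as np

def summarise(speed_mps, grade_pct):
    accel = np.diff(speed_mps)                          # 1 Hz, so diff ~ m/s^2
    return {"mean speed": speed_mps.mean(),
            "mean |accel|": np.abs(accel).mean(),
            "mean |grade|": np.abs(grade_pct).mean()}

rng = np.random.default_rng(2)
cal_speed = rng.gamma(5.0, 1.2, 55 * 3600)              # ~55 h calibration data
cal_grade = rng.normal(0.0, 2.0, 55 * 3600)
sched_speed, sched_grade = cal_speed[:1800], cal_grade[:1800]  # 30 min schedule

targets = summarise(cal_speed, cal_grade)
candidate = summarise(sched_speed, sched_grade)
for name, target in targets.items():
    rel_err = abs(candidate[name] - target) / target
    print(f"{name}: {rel_err:.1%} ({'within' if rel_err <= 0.05 else 'outside'} 5%)")
```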


Author(s): N. Börlin, A. Murtiyoso, P. Grussenmeyer, F. Menna, E. Nocerino

In this paper we investigate how the residuals in bundle adjustment can be split into a composition of simple functions. According to the chain rule, the Jacobian (linearisation) of the residual can be formed as a product of the Jacobians of the individual steps. When implemented, this enables a modularisation of the computation of the bundle adjustment residuals and Jacobians where each component has limited responsibility. This enables simple replacement of components, e.g. to implement different projection or rotation models by exchanging a module.

The technique has previously been used to implement bundle adjustment in the open-source package DBAT (Börlin and Grussenmeyer, 2013) based on the Photogrammetric and Computer Vision interpretations of the Brown (1971) lens distortion model. In this paper, we applied the technique to investigate how affine distortions can be used to model the projection of a tilt-shift lens. Two extended distortion models were implemented to test the hypothesis that the ordering of the affine and lens distortion steps can be changed to reduce the size of the residuals of a tilt-shift lens calibration.

Results on synthetic data confirm that the ordering of the affine and lens distortion steps matters and is detectable by DBAT. However, when applied to a real camera calibration data set of a tilt-shift lens, no difference between the extended models was seen. This suggests that the tested hypothesis is false and that other effects need to be modelled to better explain the projection. The relatively low implementation effort that was needed to generate the models suggests that the technique can be used to investigate other novel projection models in photogrammetry, including modelling changes in the 3D geometry to better understand the tilt-shift lens.
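
A toy illustration of the modularisation idea (not DBAT code): each step returns its value and its Jacobian, and the chain rule combines them, so reordering the affine and lens distortion steps only changes how the modules are composed. The specific distortion functions below are simplified assumptions.

```python
# Chain-rule composition of per-step Jacobians; simplified distortion steps.
import numpy as np

def affine(x, k):
    """Simple affine (aspect) distortion; returns value and Jacobian wrt x."""
    A = np.array([[1.0 + k, 0.0], [0.0, 1.0]])
    return A @ x, A

def radial(x, c):
    """One-term radial lens distortion; returns value and Jacobian wrt x."""
    r2 = x @ x
    J = (1.0 + c * r2) * np.eye(2) + 2.0 * c * np.outer(x, x)
    return (1.0 + c * r2) * x, J

x = np.array([0.3, -0.1])                               # point in image coords

# Ordering A: affine first, then radial lens distortion.
y1, J1 = affine(x, k=0.02)
y2, J2 = radial(y1, c=-0.05)
print(y2, J2 @ J1)                                      # value and chained Jacobian

# Ordering B: swap the two modules without changing their implementations.
z1, K1 = radial(x, c=-0.05)
z2, K2 = affine(z1, k=0.02)
print(z2, K2 @ K1)
```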


2016, Vol 12 (5), pp. 1263-1280
Author(s): Frazer Matthews-Bird, Stephen J. Brooks, Philip B. Holden, Encarni Montoya, William D. Gosling

Abstract. Presented here is the first chironomid calibration data set for tropical South America. Surface sediments were collected from 59 lakes across Bolivia (15 lakes), Peru (32 lakes), and Ecuador (12 lakes) between 2004 and 2013, over an altitudinal gradient from 150 m above sea level (a.s.l.) to 4655 m a.s.l. and between 0–17° S and 64–78° W. The study sites cover a mean annual temperature (MAT) gradient of 25 °C. In total, 55 chironomid taxa were identified in the 59 calibration data set lakes. When used as a single explanatory variable, MAT explains 12.9 % of the variance (λ1/λ2 = 1.431). Two inference models were developed using weighted averaging (WA) and Bayesian methods. The best-performing model using conventional statistical methods was a WA (inverse) model (R²jack = 0.890; RMSEPjack = 2.404 °C, where RMSEP is the root mean squared error of prediction; mean biasjack = −0.017 °C; max biasjack = 4.665 °C). The Bayesian method produced a model with R²jack = 0.909, RMSEPjack = 2.373 °C, mean biasjack = 0.598 °C, and max biasjack = 3.158 °C. Both models were used to infer past temperatures from a ca. 3000-year record from the tropical Andes of Ecuador, Laguna Pindo. Inferred temperatures fluctuated around modern-day conditions but showed significant departures at certain intervals (ca. 1600 cal yr BP; ca. 3000–2500 cal yr BP). Both methods (WA and Bayesian) showed similar patterns of temperature variability; however, the magnitude of the fluctuations differed. In general, the WA method was more variable and often underestimated Holocene temperatures (by ca. −7 ± 2.5 °C relative to the modern period). The Bayesian method provided temperature anomaly estimates for cool periods that lay within the expected range of the Holocene (ca. −3 ± 3.4 °C). The error associated with both reconstructions is consistent with a constant temperature of 20 °C for the past 3000 years. We would caution, however, against over-interpretation at this stage. The reconstruction can currently only be deemed qualitative and requires more research before quantitative estimates can be generated with confidence. Increasing the number, and spread, of lakes in the calibration data set would enable the detection of smaller climate signals.
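
Of the two inference approaches above, the weighted-averaging (WA, inverse deshrinking) model is simple enough to sketch. The taxon abundances and lake temperatures below are synthetic placeholders, not the published calibration data set.

```python
# Weighted-averaging transfer function with inverse deshrinking; synthetic data.
import numpy as np

rng = np.random.default_rng(3)
n_lakes, n_taxa = 59, 55
abund = rng.dirichlet(np.ones(n_taxa), size=n_lakes)    # taxon proportions per lake
mat = rng.uniform(0.0, 25.0, n_lakes)                   # mean annual temperature

# Step 1: taxon optima = abundance-weighted mean of lake temperatures.
optima = (abund * mat[:, None]).sum(axis=0) / abund.sum(axis=0)

# Step 2: initial inferences for the calibration lakes, then inverse
# deshrinking (regress observed on inferred and use that line to rescale).
initial = (abund * optima).sum(axis=1) / abund.sum(axis=1)
slope, intercept = np.polyfit(initial, mat, 1)

def infer(fossil_abund):
    """Infer temperature for one fossil chironomid assemblage."""
    x0 = (fossil_abund * optima).sum() / fossil_abund.sum()
    return intercept + slope * x0

print(infer(rng.dirichlet(np.ones(n_taxa))))
```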


The Holocene, 2013, Vol 23 (11), pp. 1650-1654
Author(s): J Sakari Salonen, Heikki Seppä, H John B Birks

Radiocarbon, 2013, Vol 55 (4), pp. 2049-2058
Author(s): Richard A Staff, Gordon Schlolaut, Christopher Bronk Ramsey, Fiona Brock, Charlotte L Bryant, ...

The varved sediment profile of Lake Suigetsu, central Japan, offers an ideal opportunity from which to derive a terrestrial record of atmospheric radiocarbon across the entire range of the 14C dating method. Previous work by Kitagawa and van der Plicht (1998a, b, 2000) provided such a data set; however, problems with the varve-based age scale of their SG93 sediment core precluded the use of this data set for 14C calibration purposes. Lake Suigetsu was re-cored in summer 2006, with the retrieval of overlapping sediment cores from four parallel boreholes enabling complete recovery of the sediment profile for the present “Suigetsu Varves 2006” project (Nakagawa et al. 2012). Over 550 14C determinations have been obtained from terrestrial plant macrofossils picked from the latter SG06 composite sediment core, which, coupled with the core's independent varve chronology, provides the only non-reservoir-corrected 14C calibration data set across the 14C dating range.

Here, physical matching of archive U-channel sediment from SG93 to the continuous SG06 sediment profile is presented. We show the excellent agreement between the respective projects' 14C data sets, allowing the integration of 243 14C determinations from the original SG93 project into a composite Lake Suigetsu 14C calibration data set comprising 808 individual 14C determinations and spanning the last 52,800 cal yr.


The Holocene, 2011, Vol 22 (4), pp. 439-449
Author(s): Alan Hogg, David J. Lowe, Jonathan Palmer, Gretel Boswijk, Christopher Bronk Ramsey
