Ten Priority Science Gaps in Assessing Climate Data Record Quality

2019 ◽  
Vol 11 (8) ◽  
pp. 986 ◽  
Author(s):  
Joanne Nightingale ◽  
Jonathan P.D. Mittaz ◽  
Sarah Douglas ◽  
Dick Dee ◽  
James Ryder ◽  
...  

Decision makers need accessible, robust evidence to introduce new policies to mitigate and adapt to climate change. There is an increasing amount of environmental information available to policy makers concerning observations and trends relating to the climate. However, these data are hosted across a multitude of websites, often with inconsistent metadata and sparse information relating to the quality, accuracy and validity of the data. Consequently, the task of comparing datasets to decide which is the most appropriate for a given purpose is complex and often infeasible. In support of the European Union’s Copernicus Climate Change Service (C3S) mission to provide authoritative information about the past, present and future climate in Europe and the rest of the world, each dataset to be provided through this service must undergo an evaluation of its climate relevance and scientific quality to enable such comparisons. This paper presents the framework for Evaluation and Quality Control (EQC) of climate data products derived from satellite and in situ observations to be catalogued within the C3S Climate Data Store (CDS). The EQC framework will be implemented by C3S as part of its operational quality assurance programme. It builds on past and present international investment in Quality Assurance for Earth Observation initiatives and extensive user requirements gathering exercises, as well as a broad evaluation of over 250 data products and a more in-depth evaluation of a selection of 24 individual data products derived from satellite and in situ observations across the land, ocean and atmosphere Essential Climate Variable (ECV) domains. A prototype Content Management System (CMS) to facilitate the process of collating, evaluating and presenting the quality aspects and status of each data product to data users is also described. 
The development of the EQC framework has highlighted cross-domain as well as ECV specific science knowledge gaps in relation to addressing the quality of climate data sets derived from satellite and in situ observations. We discuss 10 common priority science knowledge gaps that will require further research investment to ensure all quality aspects of climate data sets can be ascertained and provide users with the range of information necessary to confidently select relevant products for their specific application.



2018 ◽  
Vol 10 (8) ◽  
pp. 1254 ◽  
Author(s):  
Joanne Nightingale ◽  
Klaas Boersma ◽  
Jan-Peter Muller ◽  
Steven Compernolle ◽  
Jean-Christopher Lambert ◽  
...  

Data from Earth observation (EO) satellites are increasingly used to monitor the environment, understand variability and change, inform evaluations of climate model forecasts, and manage natural resources. Policymakers are progressively relying on the information derived from these datasets to make decisions on mitigating and adapting to climate change. These decisions should be evidence based, which requires confidence in derived products, as well as the reference measurements used to calibrate, validate, or inform product development. In support of the European Union’s Earth Observation Programmes Copernicus Climate Change Service (C3S), the Quality Assurance for Essential Climate Variables (QA4ECV) project filled a gap in the delivery of climate quality satellite-derived datasets by prototyping a generic system for the implementation and evaluation of quality assurance (QA) measures for satellite-derived ECV climate data record products. The project demonstrated the QA system on six new long-term, climate quality ECV data records for surface albedo, leaf area index (LAI), fraction of absorbed photosynthetically active radiation (FAPAR), nitrogen dioxide (NO2), formaldehyde (HCHO), and carbon monoxide (CO). The provision of standardised QA information provides data users with evidence-based confidence in the products and enables judgement on the fitness-for-purpose of various ECV data products and their specific applications.


2021 ◽  
Author(s):  
Jouke de Baar ◽  
Gerard van der Schrier ◽  
Irene Garcia-Marti ◽  
Else van den Besselaar

<p><strong>Objective</strong></p><p>The purpose of the European Copernicus Climate Change Service (C3S) is to support society by providing information about the past, present and future climate. For the service related to <em>in-situ</em> observations, one of the objectives is to provide high-resolution (0.1x0.1 and 0.25x0.25 degrees) gridded wind speed fields. The gridded wind fields are based on ECA&D daily average station observations for the period 1970-2020.</p><p><strong>Research question</strong> </p><p>We address the following research questions: [1] How efficiently can we provide the gridded wind fields as a statistically reliable ensemble, in order to represent the uncertainty of the gridding? [2] How efficiently can we exploit high-resolution geographical auxiliary variables (e.g. digital elevation model, terrain roughness) to augment the station data from a sparse network, in order to provide gridded wind fields with high-resolution local features?</p><p><strong>Approach</strong></p><p>In our analysis, we apply greedy forward selection linear regression (FSLR) to include the high-resolution effects of the auxiliary variables on monthly-mean data. These data provide a ‘background’ for the daily estimates. We apply cross-validation to avoid FSLR over-fitting and use full-cycle bootstrapping to create FSLR ensemble members. Then, we apply Gaussian process regression (GPR) to regress the daily anomalies. We consider the effect of the spatial distribution of station locations on the GPR gridding uncertainty.</p><p>The goal of this work is to produce several decades of daily gridded wind fields, hence, computational efficiency is of utmost importance. We alleviate the computational cost of the FSLR and GPR analyses by incorporating greedy algorithms and sparse matrix algebra in the analyses.</p><p><strong>Novelty</strong>   </p><p>The gridded wind fields are calculated as a statistical ensemble of realizations. 
In the present analysis, the ensemble spread is based on uncertainties arising from the auxiliary variables as well as from the spatial distribution of stations.</p><p>Cross-validation is used to tune the GPR hyperparameters. Where conventional GPR hyperparameter tuning aims at an optimal prediction of the gridded mean, we instead tune the hyperparameters for optimal prediction of the gridded ensemble spread.</p><p>Building on our experience with providing similar gridded climate data sets, this set of gridded wind fields is a novel addition to the E-OBS climate data sets.</p>
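The two-stage approach described above (greedy FSLR on auxiliary variables for a monthly background, then GPR on daily anomalies) can be sketched as follows. This is a minimal illustration on synthetic data: the bootstrap ensemble, hyperparameter tuning for spread, and sparse-matrix optimizations mentioned in the abstract are omitted, and all variable names are illustrative.

```python
# Sketch: forward selection linear regression (FSLR) background + Gaussian
# process regression (GPR) on anomalies, as in the gridding workflow above.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
n_stations = 80
# Auxiliary predictors per station, e.g. elevation, terrain roughness
X_aux = rng.normal(size=(n_stations, 5))
coords = rng.uniform(0, 10, size=(n_stations, 2))  # station lon/lat
monthly_mean = 3.0 + 1.5 * X_aux[:, 0] - 0.8 * X_aux[:, 2] \
    + rng.normal(0, 0.3, n_stations)

# --- Stage 1: greedy forward selection, cross-validated to avoid over-fitting
selected, remaining, best_score = [], list(range(X_aux.shape[1])), -np.inf
while remaining:
    scores = {j: cross_val_score(LinearRegression(),
                                 X_aux[:, selected + [j]], monthly_mean,
                                 cv=5).mean() for j in remaining}
    j_best = max(scores, key=scores.get)
    if scores[j_best] <= best_score:   # stop when CV score stops improving
        break
    best_score = scores[j_best]
    selected.append(j_best)
    remaining.remove(j_best)

background_model = LinearRegression().fit(X_aux[:, selected], monthly_mean)
background = background_model.predict(X_aux[:, selected])

# --- Stage 2: GPR on the daily anomaly (daily obs minus monthly background)
daily_obs = monthly_mean + rng.normal(0, 0.5, n_stations)
anomaly = daily_obs - background
kernel = RBF(length_scale=2.0) + WhiteKernel(noise_level=0.25)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(coords, anomaly)

# Predicted anomaly field and per-point gridding uncertainty at new locations
grid = rng.uniform(0, 10, size=(4, 2))
mean, std = gpr.predict(grid, return_std=True)
print(selected, mean.shape, std.shape)
```

The GPR predictive standard deviation grows with distance from the nearest stations, which is how the spatial distribution of the network feeds into the gridding uncertainty.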


2018 ◽  
Author(s):  
Athanasia Iona ◽  
Athanasios Theodorou ◽  
Sarantis Sofianos ◽  
Sylvain Watelet ◽  
Charles Troupin ◽  
...  

Abstract. We present a new product composed of a set of thermohaline climatic indices for the Mediterranean Sea from 1950 to 2015, such as decadal temperature and salinity anomalies, their mean values over selected depths, and decadal ocean heat and salt content anomalies at selected depth layers, as well as their long time series. It is produced from a new high-resolution climatology of temperature and salinity on a 1/8° regular grid, based on historical high-quality in situ observations. Ocean heat and salt content differences between 1980–2015 and 1950–1979 are compared to evaluate the climate shift in the Mediterranean Sea. The spatial patterns of heat and salt content shifts demonstrate in greater detail than ever before that the climate changes differently across the regions of the basin. Long time series of heat and salt content for the period 1950 to 2015 are also provided, which indicate that the Mediterranean Sea has experienced a net mean volume warming and salting since 1950, with an acceleration during the last two decades. The time series also show that the ocean heat content seems to fluctuate on a cycle of about 40 years and to follow the Atlantic Multidecadal Oscillation climate cycle, indicating that natural large-scale atmospheric variability could be superimposed onto the warming trend. This product is an observation-based estimation of the Mediterranean climatic indices. It relies solely on spatially interpolated data produced from in situ observations averaged over decades, in order to smooth the decadal variability and reveal the long-term trends with more accuracy. It can provide a valuable contribution to the modelling community, next to the satellite-based products, and serve as a baseline for the evaluation of climate-change model simulations, thus contributing to a better understanding of the complex response of the Mediterranean Sea to the ongoing global climate change. 
The product is available here: https://doi.org/10.5281/zenodo.1210100.
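To make the heat content indices above concrete, here is a minimal sketch of deriving an ocean heat content (OHC) shift between two periods from a gridded temperature product. The grid values, layer depths, and decade split are synthetic and purely illustrative, not taken from the data set.

```python
# Sketch: per-unit-area ocean heat content from gridded temperature,
# OHC = rho * cp * integral(T dz), then a late-minus-early period shift.
import numpy as np

RHO = 1025.0   # seawater density, kg m^-3 (nominal value)
CP = 3985.0    # specific heat capacity, J kg^-1 K^-1 (nominal value)

rng = np.random.default_rng(1)
# Temperature on a (decade, depth_layer, lat, lon) grid, degrees C
temp = 15.0 + rng.normal(0, 0.2, size=(6, 4, 8, 16))
layer_thickness = np.array([10.0, 20.0, 50.0, 100.0])  # metres per layer

# Contract the depth axis: sum over layers of T (in K) times layer thickness
ohc = RHO * CP * np.tensordot(temp + 273.15, layer_thickness,
                              axes=([1], [0]))          # J m^-2

# Shift of the later period relative to the earlier one (here: last three
# decades minus first three), analogous to the 1980-2015 vs 1950-1979 split
ohc_shift = ohc[3:].mean(axis=0) - ohc[:3].mean(axis=0)
print(ohc.shape, ohc_shift.shape)
```

The reference temperature cancels in the shift, so the anomaly map depends only on the temperature change integrated over the chosen depth layers.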


2020 ◽  
Vol 12 (9) ◽  
pp. 1414
Author(s):  
Victoria M. Scholl ◽  
Megan E. Cattau ◽  
Maxwell B. Joseph ◽  
Jennifer K. Balch

Accurately mapping tree species composition and diversity is a critical step towards spatially explicit and species-specific ecological understanding. The National Ecological Observatory Network (NEON) is a valuable source of open ecological data across the United States. Freely available NEON data include in-situ measurements of individual trees, including stem locations, species, and crown diameter, along with the NEON Airborne Observation Platform (AOP) airborne remote sensing imagery, including hyperspectral, multispectral, and light detection and ranging (LiDAR) data products. An important aspect of predicting species using remote sensing data is creating high-quality training sets for optimal classification purposes. However, manually creating training data is an expensive and time-consuming task that relies on human analyst decisions and may require external data sets or information. We combine in-situ and airborne remote sensing NEON data to evaluate the impact of automated training set preparation and a novel data preprocessing workflow on classifying the four dominant subalpine coniferous tree species at the Niwot Ridge Mountain Research Station forested NEON site in Colorado, USA. We trained pixel-based Random Forest (RF) machine learning models using a series of training data sets along with remote sensing raster data as descriptive features. The highest classification accuracies, 69% and 60% based on internal RF error assessment and an independent validation set, respectively, were obtained using circular tree crown polygons created with half the maximum crown diameter per tree. LiDAR-derived data products were the most important features for species classification, followed by vegetation indices. This work contributes to the open development of well-labeled training data sets for forest composition mapping using openly available NEON data without requiring external data collection, manual delineation steps, or site-specific parameters.
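The pixel-based RF workflow above can be sketched in a few lines. This is a hedged illustration on synthetic data: the crown-polygon extraction step is reduced to pre-labelled pixels, the feature names are illustrative, and the out-of-bag score stands in for the "internal RF error assessment" the abstract mentions.

```python
# Sketch: Random Forest species classification from raster-derived features,
# with out-of-bag (OOB) error as the internal accuracy estimate and a
# held-out set as the independent validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)
n_pixels = 600
# Per-pixel features, e.g. LiDAR canopy height, vegetation indices, bands
features = rng.normal(size=(n_pixels, 6))
species = rng.integers(0, 4, size=n_pixels)  # four conifer species classes
features[:, 0] += species                    # make feature 0 informative

X_train, X_test, y_train, y_test = train_test_split(
    features, species, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
rf.fit(X_train, y_train)

print("OOB accuracy:", round(rf.oob_score_, 2))
print("Validation accuracy:",
      round(accuracy_score(y_test, rf.predict(X_test)), 2))
# feature_importances_ ranks the descriptive features, analogous to the
# LiDAR-products-first ranking reported in the abstract
print("Top feature index:", int(np.argmax(rf.feature_importances_)))
```

As in the study, comparing the OOB score with the independent validation accuracy gives a quick check on how optimistic the internal estimate is.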


2020 ◽  
Author(s):  
Sara Moutia

<p>The main advantage of remote sensing products is that they offer reasonably good temporal and spatial coverage and are available in near real time. An understanding of the strengths and weaknesses of satellite data is therefore useful when choosing it as an alternative source of information with acceptable accuracy. On the one hand, this study assesses an inter-comparison between CMSAF Sunshine Duration (SD) data records and ground observations from 30 data sets spanning 1983 to 2015; the correlation is very significant and the satellite data fit the in situ observations closely. On the other hand, trend analysis is applied to the SD and Surface Incoming Direct radiation (SID) data: a number of stations show a statistically significant decreasing trend in SD, and SID also shows a decreasing trend over Morocco in most regions, especially in summer. The results indicate a general tendency of decreasing incoming solar radiation, mostly during summer, which could be of some concern for solar energy.</p>
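The abstract reports statistically significant decreasing trends without naming the test. As a simplified stand-in (not necessarily the study's method), here is a minimal trend-significance check on a synthetic annual sunshine-duration series using ordinary least squares.

```python
# Sketch: test whether an annual series has a statistically significant
# linear trend, using OLS slope and p-value. Series values are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
years = np.arange(1983, 2016)
# Synthetic SD anomaly with an imposed decline of -2 h/yr plus noise
sd_anomaly = -2.0 * (years - years[0]) + rng.normal(0, 5, size=years.size)

res = stats.linregress(years, sd_anomaly)
significant = res.pvalue < 0.05
print(f"slope = {res.slope:.2f} h/yr, p = {res.pvalue:.3g}, "
      f"significant = {significant}")
```

Running the same test per station, as the study does across Morocco, yields a map of which locations show a robust decline rather than noise.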


2018 ◽  
Vol 22 (1) ◽  
pp. 241-263 ◽  
Author(s):  
Yu Zhang ◽  
Ming Pan ◽  
Justin Sheffield ◽  
Amanda L. Siemann ◽  
Colby K. Fisher ◽  
...  

Abstract. Closing the terrestrial water budget is necessary to provide consistent estimates of budget components for understanding water resources and changes over time. Given the lack of in situ observations of budget components at anything but local scale, merging information from multiple data sources (e.g., in situ observation, satellite remote sensing, land surface model, and reanalysis) through data assimilation techniques that optimize the estimation of fluxes is a promising approach. Conditioned on the current limited data availability, a systematic method is developed to optimally combine multiple available data sources for precipitation (P), evapotranspiration (ET), runoff (R), and the total water storage change (TWSC) at 0.5° spatial resolution globally and to obtain water budget closure (i.e., to enforce P − ET − R − TWSC = 0) through a constrained Kalman filter (CKF) data assimilation technique, under the assumption that the deviation from the ensemble mean of all data sources for the same budget variable can serve as a proxy for the uncertainty in individual water budget variables. The resulting long-term (1984–2010), monthly, 0.5° resolution global terrestrial water cycle Climate Data Record (CDR) data set is developed under the auspices of the National Aeronautics and Space Administration (NASA) Earth System Data Records (ESDRs) program. This data set serves to bridge the gap between sparsely gauged regions and regions with sufficient in situ observations in investigating the temporal and spatial variability in terrestrial hydrology at multiple scales. The CDR created in this study is validated against in situ measurements such as river discharge from the Global Runoff Data Centre (GRDC) and the United States Geological Survey (USGS), and ET from FLUXNET. 
The data set is shown to be reliable and can serve the scientific community in understanding historical climate variability in water cycle fluxes and stores, benchmarking the current climate, and validating models.
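The closure step above can be illustrated with the standard linear-constraint projection that underlies a constrained Kalman filter: adjust the merged estimates so that P − ET − R − TWSC = 0 exactly, with each component's adjustment weighted by its uncertainty. The values and variances below are invented for demonstration.

```python
# Sketch: enforce water budget closure P - ET - R - TWSC = 0 on merged
# estimates via a variance-weighted projection onto the constraint.
import numpy as np

# Merged estimates for one grid cell and month (mm): P, ET, R, TWSC
x = np.array([100.0, 55.0, 30.0, 10.0])
# Variance of each component, e.g. from the spread across data sources
# (named P_cov to avoid clashing with precipitation P)
P_cov = np.diag([25.0, 16.0, 9.0, 4.0])

# Constraint a^T x = 0, with a encoding P - ET - R - TWSC
a = np.array([1.0, -1.0, -1.0, -1.0])
residual = a @ x            # closure error before adjustment (5 mm here)

# Project onto the constraint surface: components with larger variance
# absorb proportionally more of the correction
x_closed = x - P_cov @ a * (residual / (a @ P_cov @ a))

print("residual before:", residual)
print("residual after:", a @ x_closed)   # ~0 up to floating point
print("adjusted:", np.round(x_closed, 2))
```

With diagonal covariance this reduces to distributing the 5 mm closure error across P, ET, R and TWSC in proportion to their variances, leaving the best-constrained components nearly untouched.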


2021 ◽  
Author(s):  
Tai-Long He ◽  
Dylan Jones ◽  
Kazuyuki Miyazaki ◽  
Kevin Bowman ◽  
Zhe Jiang ◽  
...  

<p>The COVID-19 pandemic led to the lockdown of over one-third of Chinese cities in early 2020. Observations have shown significant reductions of atmospheric abundances of NO<sub>2</sub> over China during this period. This change in atmospheric NO<sub>2</sub> implies a dramatic change in emission of NO<sub>x</sub>, which provides a unique opportunity to study the response of the chemistry of the atmospheric to large reductions in anthropogenic emissions. We use a deep learning (DL) model to quantify the change in surface emissions of NO<sub>x</sub> in China that are associated with the observed changes in atmospheric NO<sub>2</sub> during the lockdown period. Compared to conventional data assimilation systems, deep neural networks are free of the potential errors associated with parameterized subgrid-scale processes. Furthermore, they are not susceptible to the chemical errors typically found in atmospheric chemical transport models. The neural-network-based approach also offers a more computationally efficient means of inverse modeling of NO<sub>x</sub> emissions at high spatial resolutions. Our DL model is trained using meteorological predictors and reanalysis data of surface NO<sub>2</sub> from 2005 to 2017. The evaluation is conducted using in-situ measurements of NO<sub>2</sub> in 2019 and 2020. The Baidu 'Qianxi' migration data sets are used to evaluate the model's performance in capturing the typical variation in Chinese NOx emissions during the Chinese New Year holidays. The TROPOMI-derived TCR-2 chemical reanalysis is used to evaluate the DL analysis in 2020. We show that the DL-based approach is able to better reproduce the variation in anthropogenic NO<sub>x</sub> emissions and capture the reduction in Chinese NO<sub>x</sub> emissions during the period of the COVID-19 pandemic.</p>
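The core learning task described above, mapping meteorological predictors to surface NO2 concentrations, can be sketched with a small neural network. This is not the authors' architecture; it is an illustrative stand-in on synthetic data showing the predictor-to-concentration relationship such a model would learn before being used to infer emission changes.

```python
# Sketch: a small neural network regressing synthetic "surface NO2" on
# meteorological-style predictors (illustrative, not the study's model).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 2000
# Predictors: e.g. wind speed, boundary-layer height, temperature, an
# emission proxy (all synthetic standard-normal draws here)
X = rng.normal(size=(n, 4))
# Synthetic target, nonlinear in the predictors plus observation noise
y = np.exp(-0.5 * X[:, 0]) + 0.3 * X[:, 3] ** 2 + rng.normal(0, 0.1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     random_state=0).fit(X_tr, y_tr)
r2 = model.score(X_te, y_te)
print("held-out R^2:", round(r2, 2))
```

In the actual application, a trained model of this kind is evaluated against years held out of training (2019–2020 here), which is what allows lockdown-period departures from the learned relationship to be attributed to emission changes.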


2015 ◽  
Vol 12 (2) ◽  
pp. 1907-1973 ◽  
Author(s):  
T. J. Bohn ◽  
J. R. Melton ◽  
A. Ito ◽  
T. Kleinen ◽  
R. Spahni ◽  
...  

Abstract. Wetlands are the world's largest natural source of methane, a powerful greenhouse gas. The strong sensitivity of methane emissions to environmental factors such as soil temperature and moisture has led to concerns about potential positive feedbacks to climate change. This risk is particularly relevant at high latitudes, which have experienced pronounced warming and where thawing permafrost could potentially liberate large amounts of labile carbon over the next 100 years. However, global models disagree as to the magnitude and spatial distribution of emissions, due to uncertainties in wetland area and emissions per unit area and a scarcity of in situ observations. Recent intensive field campaigns across the West Siberian Lowland (WSL) make this an ideal region over which to assess the performance of large-scale process-based wetland models in a high-latitude environment. Here we present the results of a follow-up to the Wetland and Wetland CH4 Intercomparison of Models Project (WETCHIMP), focused on the West Siberian Lowland (WETCHIMP-WSL). We assessed 21 models and 5 inversions over this domain in terms of total CH4 emissions, simulated wetland areas, and CH4 fluxes per unit wetland area and compared these results to an intensive in situ CH4 flux dataset, several wetland maps, and two satellite inundation products. 
We found that: (a) despite the large scatter of individual estimates, 12-year mean estimates of annual total emissions over the WSL from forward models (5.34 ± 0.54 Tg CH4 y-1), inversions (6.06 ± 1.22 Tg CH4 y-1), and in situ observations (3.91 ± 1.29 Tg CH4 y-1) largely agreed; (b) forward models using inundation products alone to estimate wetland areas suffered from severe biases in CH4 emissions; (c) the interannual time series of models that lacked either soil thermal physics appropriate to the high latitudes or realistic emissions from unsaturated peatlands tended to be dominated by a single environmental driver (inundation or air temperature), unlike those of inversions and more sophisticated forward models; (d) differences in biogeochemical schemes across models had a relatively smaller influence on performance; and (e) multi-year or multi-decade observational records are crucial for evaluating models' responses to long-term climate change.


Author(s):  
Ibrahima Hathie ◽  
Dilys MacCarthy ◽  
Bright Freduah ◽  
Mouhamed Ly ◽  
Ahmadou Ly ◽  
...  

The Agricultural Model Intercomparison and Improvement Project (AgMIP) developed protocol-based methods for Regional Integrated Assessment (RIA) of agricultural systems. These methods have been applied by teams of scientists working with regional and national stakeholders across Sub-Saharan Africa and South Asia. This paper describes the data sets that were used to implement the AgMIP RIA methods for the Nioro region of Senegal. The goal of the RIA is to assess the potential impacts of climate change on the principal agricultural system in the Senegal peanut basin, comprising peanut, millet, maize and other minor crops and livestock, and to assess adaptations of that system to climate change under current as well as future climate and socio-economic conditions. The data sets include: the Representative Agricultural Pathways (RAPs) developed for Nioro from 2000 to 2050; climate data used to implement crop yield simulations; the data used to parameterize the Agricultural Production Systems sIMulator (APSIM) and the Decision Support System for Agrotechnology Transfer (DSSAT) crop models, which include historical climate data and future climate scenarios; and the data used to parameterize the Tradeoff Analysis Model for Multi-dimensional Impact Assessment (TOA-MD) economic simulation model. The analysis is structured around four AgMIP “core questions” of climate impact assessment.

