Uncertainty Assessment of the Vertically-Resolved Cloud Amount for Joint CloudSat–CALIPSO Radar–Lidar Observations

2021 ◽  
Vol 13 (4) ◽  
pp. 807
Author(s):  
Andrzej Z. Kotarba ◽  
Mateusz Solecki

The joint CloudSat–Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) climatology remains the only dataset that provides a global, vertically-resolved cloud amount statistic. However, the data are affected by uncertainty that results from a combination of infrequent sampling and a very narrow, pencil-like swath. This study provides the first global assessment of these uncertainties, which are quantified using bootstrapped confidence intervals. Rather than focusing on a purely theoretical discussion, we investigate empirical data that span a five-year period between 2006 and 2011. We examine the 2B-Geometric Profiling (GEOPROF)-LIDAR cloud product, at typical spatial resolutions found in global grids (1.0°, 2.5°, 5.0°, and 10.0°), four confidence levels (0.85, 0.90, 0.95, and 0.99), and three time scales (annual, seasonal, and monthly). Our results demonstrate that it is impossible to estimate, for every location, a five-year mean cloud amount based on CloudSat–CALIPSO data, assuming an accuracy of 1% or 5%, a high confidence level (>0.95), and a fine spatial resolution (1°–2.5°). In fact, the 1% requirement was only met by ~6.5% of atmospheric volumes at 1° and 2.5°, while the more tolerant criterion (5%) was met by 22.5% of volumes at 1°, or 48.9% at 2.5° resolution. In order for at least 99% of volumes to meet an accuracy criterion, the criterion itself would have to be lowered to ~20% for 1° data, or to ~8% for 2.5° data. Our study also showed that the average confidence interval decreased fourfold when the spatial resolution increased from 1° to 10°, doubled when the confidence level increased from 0.85 to 0.99, and tripled when the number of data-months increased from one (monthly mean) to twelve (annual mean). The cloud regime (mean cloud amount and its standard deviation) arguably had the most impact on the width of the confidence interval.
Our findings suggest that existing uncertainties in the CloudSat–CALIPSO five-year climatology are primarily the result of climate-specific factors, rather than the sampling scheme. Results that are presented in the form of statistics or maps, as in this study, can help the scientific community to improve accuracy assessments (which are frequently omitted), when analyzing existing and future CloudSat–CALIPSO cloud climatologies.
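The percentile-bootstrap confidence interval underlying the assessment above can be sketched as follows. This is a minimal illustration on synthetic cloud-amount fractions; the function name, the sample values, and the resample count are our own assumptions, not the actual 2B-GEOPROF-LIDAR processing:

```python
import random
import statistics

def bootstrap_ci(samples, confidence=0.95, n_resamples=2000, seed=42):
    """Percentile-bootstrap confidence interval for the mean.

    Resamples the data with replacement, computes the mean of each
    resample, and returns the empirical lower/upper quantiles.
    """
    rng = random.Random(seed)
    means = sorted(
        statistics.fmean(rng.choices(samples, k=len(samples)))
        for _ in range(n_resamples)
    )
    alpha = (1.0 - confidence) / 2.0
    lo = means[int(alpha * n_resamples)]
    hi = means[min(int((1.0 - alpha) * n_resamples), n_resamples - 1)]
    return lo, hi

# Hypothetical cloud-amount fractions (%) sampled in one grid volume.
cloud_amounts = [12.0, 35.0, 8.0, 60.0, 22.0, 41.0, 15.0, 30.0, 5.0, 55.0]
low, high = bootstrap_ci(cloud_amounts, confidence=0.95)
print(f"95% CI for mean cloud amount: [{low:.1f}%, {high:.1f}%]")
```

Raising the confidence level from 0.85 to 0.99, or shrinking the number of samples per volume, widens the interval in exactly the way the abstract quantifies.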

2021 ◽  
Author(s):  
Andrzej Kotarba ◽  
Mateusz Solecki

<p>Vertically-resolved cloud amount is essential for understanding the Earth’s radiation budget. The joint CloudSat–CALIPSO lidar–radar cloud climatology remains the only dataset providing such information globally. However, a specific sampling scheme (pencil-like swath, 16-day revisit) introduces an uncertainty into CloudSat–CALIPSO cloud amounts. In this research we assess those uncertainties in terms of bootstrap confidence intervals. Five years (2006–2011) of the 2B-GEOPROF-LIDAR (version P2_R05) cloud product were examined, accounting for typical spatial resolutions of global grids (1.0°, 2.5°, 5.0°, 10.0°), four confidence levels (0.85, 0.90, 0.95, 0.99), and three time scales of mean cloud amount (annual, seasonal, monthly). Results proved that a cloud amount accuracy of 1%, or 5%, is not achievable with the dataset, assuming a 5-year mean cloud amount, a high (>0.95) confidence level, and a fine spatial resolution (1°–2.5°). The 1% requirement was only met by ~6.5% of atmospheric volumes at 1° and 2.5°, while the more tolerant criterion (5%) was met by 22.5% of volumes at 1°, or 48.9% at 2.5° resolution. In order to have at least 99% of volumes meeting an accuracy criterion, the criterion itself would have to be lowered to ~20% for 1° data, or to ~8% for 2.5° data. The study also quantified the relation between confidence interval width and spatial resolution, confidence level, and the number of observations. The cloud regime (mean cloud amount and standard deviation of cloud amount) was found to be the most important factor impacting the width of the confidence interval. The research has been funded by the National Science Institute of Poland grant no. UMO-2017/25/B/ST10/01787. This research has been supported in part by PL-Grid Infrastructure (computing resources).</p>


2017 ◽  
Vol 928 (10) ◽  
pp. 58-63 ◽  
Author(s):  
V.I. Salnikov

The initial subject of study is the sums of measurement errors. It is assumed that the errors follow the normal law, but with a limitation on the value of the marginal error Δpred = 2m. It is known that there is a number ni of terms, corresponding to a confidence interval, at which the value of the sum is equal to zero. The paradox is that the probability of such an event is zero; therefore, it is impossible to determine the value of ni at which the sum becomes zero. The article proposes to consider instead the event in which a sum of errors stays within the 2m limits with a confidence level of 0.954. Within the group, all the sums have a limit error. It is proposed to use these tolerances for the discrepancies in geodesy instead of 2m·√ni. The concept of “the law of the truncated normal distribution with Δpred = 2m” is suggested to be introduced.
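The effect of truncating the error law at Δpred = 2m can be illustrated with a small Monte Carlo sketch. This is our own illustrative simulation, not the author's computation: errors are drawn from a normal law rejected beyond 2m, a sum of n of them is formed, and the empirical 0.954-quantile of the absolute sum is compared with the classical 2m·√n tolerance.

```python
import math
import random

def truncated_normal_error(rng, m, limit):
    """Draw from N(0, m), rejecting values beyond the marginal error."""
    while True:
        e = rng.gauss(0.0, m)
        if abs(e) <= limit:
            return e

def tolerance_comparison(n, m=1.0, trials=20000, seed=1):
    """Empirical 0.954-quantile of |sum of n truncated errors|
    versus the classical tolerance 2m*sqrt(n)."""
    rng = random.Random(seed)
    limit = 2.0 * m  # Δpred = 2m
    sums = sorted(
        abs(sum(truncated_normal_error(rng, m, limit) for _ in range(n)))
        for _ in range(trials)
    )
    empirical = sums[int(0.954 * trials)]
    classical = 2.0 * m * math.sqrt(n)
    return empirical, classical

emp, cls = tolerance_comparison(n=9)
print(f"empirical 0.954 tolerance ~ {emp:.2f}, classical 2m*sqrt(n) = {cls:.2f}")
```

Because truncation removes the tails, the empirical tolerance comes out somewhat below 2m·√n, which is the direction of the tightening the article argues for.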


1998 ◽  
Vol 16 (3) ◽  
pp. 331-341 ◽  
Author(s):  
J. Massons ◽  
D. Domingo ◽  
J. Lorente

Abstract. A cloud-detection method was used to retrieve cloudy pixels from Meteosat images. High spatial resolution (one pixel), monthly averaged cloud-cover distribution was obtained for a 1-year period. The seasonal cycle of cloud amount was analyzed. Cloud parameters obtained include the total cloud amount and the percentage of occurrence of clouds at three altitudes. Hourly variations of cloud cover are also analyzed. Cloud properties determined are coherent with those obtained in previous studies.
Key words. Cloud cover · Meteosat


2016 ◽  
Vol 29 (17) ◽  
pp. 6065-6083 ◽  
Author(s):  
Yinghui Liu ◽  
Jeffrey R. Key

Abstract Cloud cover is one of the largest uncertainties in model predictions of the future Arctic climate. Previous studies have shown that cloud amounts in global climate models and atmospheric reanalyses vary widely and may have large biases. However, many climate studies are based on anomalies rather than absolute values, for which biases are less important. This study examines the performance of five atmospheric reanalysis products—ERA-Interim, MERRA, MERRA-2, NCEP R1, and NCEP R2—in depicting monthly mean Arctic cloud amount anomalies against Moderate Resolution Imaging Spectroradiometer (MODIS) satellite observations from 2000 to 2014 and against Cloud–Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) observations from 2006 to 2014. All five reanalysis products exhibit biases in the mean cloud amount, especially in winter. The Gerrity skill score (GSS) and correlation analysis are used to quantify their performance in terms of interannual variations. Results show that ERA-Interim, MERRA, MERRA-2, and NCEP R2 perform similarly, with annual mean GSSs of 0.36/0.22, 0.31/0.24, 0.32/0.23, and 0.32/0.23 and annual mean correlation coefficients of 0.50/0.51, 0.43/0.54, 0.44/0.53, and 0.50/0.52 against MODIS/CALIPSO, indicating that the reanalysis datasets do exhibit some capability for depicting the monthly mean cloud amount anomalies. There are no significant differences in the overall performance of reanalysis products. They all perform best in July, August, and September and worst in November, December, and January. All reanalysis datasets have better performance over land than over ocean. This study identifies the magnitudes of errors in Arctic mean cloud amounts and anomalies and provides a useful tool for evaluating future improvements in the cloud schemes of reanalysis products.


2017 ◽  
Vol 4 (1) ◽  
pp. 27 ◽  
Author(s):  
Pramaditya Wicaksono ◽  
Faza Adhimah

The image-sharpening process integrates lower spatial resolution multispectral bands with a higher spatial resolution panchromatic band to produce multispectral bands with finer spatial detail, called a pan-sharpened image. Although the pan-sharpened image can greatly assist the process of information extraction using visual interpretation, the benefits and setbacks of using a pan-sharpened image for the accuracy of digital classification for mapping remain unclear. This research aimed at 1) highlighting the issues of using a pan-sharpened image to perform benthic habitat mapping and 2) comparing the accuracy of benthic habitat mapping using original and pan-sharpened bands. In this study, a QuickBird image was used and Kemujan Island was selected as the study area. Two levels of a hierarchical classification scheme of benthic habitats were constructed based on the composition of in situ benthic habitats. The PC Spectral sharpening method was applied to the QuickBird image. Image radiometric corrections, PCA transformation, and image classifications were performed on both the original and pan-sharpened images. The results showed that the accuracy of benthic habitat classification of the pan-sharpened image (maximum overall accuracy of 64.28% and 73.30% for per-pixel and OBIA, respectively) was lower than that of the original image (73.46% and 73.10%, respectively). The main setback of using a pan-sharpened image is the inability to correct the sunglint, which adversely affects the process of water column correction, PCA transformation, and image classification. This is mainly because sunglint affects not only an object’s spectral response but also the texture of the object. Nevertheless, the pan-sharpened image can still be used to map benthic habitats using visual interpretation and digital image processing. A pan-sharpened image will deliver better classification accuracy and visual appearance especially when the sunglint is low.


2018 ◽  
Vol 44 ◽  
pp. 00121
Author(s):  
Sara Nicpoń ◽  
Paula Iliaszewicz ◽  
Maciej Leoniak ◽  
Agnieszka Trusz-Zdybek

For proper enumeration of protozoa in activated sludge, a good methodology is required. In this paper we present some remarks on the microscopic methodology of protozoa enumeration. These remarks concern the number of repetitions from one sample required to obtain good statistical results, as well as the influence of sample aeration on the number of protozoa found. The presented data show that at least 10 repetitions are required from each sample to obtain a low average confidence interval. A lower number of repetitions leads to a sharp increase in the average confidence interval and a loss of statistical significance, while a higher number does not decrease the average confidence interval substantially. As measurements last for a few hours, lack of sample aeration during the measurement leads to the detection of 27% fewer protozoa.
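The dependence of confidence-interval width on the number of repetitions follows from the 1/√n behaviour of the standard error of the mean. A hedged sketch with synthetic counts (the values and function name are our own illustration, not the paper's data):

```python
import statistics

def mean_ci_halfwidth(counts, z=1.96):
    """Approximate 95% CI half-width for the mean of repeated counts:
    z * s / sqrt(n), where s is the sample standard deviation."""
    n = len(counts)
    return z * statistics.stdev(counts) / n ** 0.5

# Hypothetical protozoa counts from repeated examinations of one sample.
counts = [48, 55, 42, 51, 47, 60, 44, 53, 49, 46]

for n in (3, 5, 10):
    print(n, round(mean_ci_halfwidth(counts[:n]), 2))
```

With these numbers the half-width shrinks steadily as repetitions accumulate, but the gain from adding repetitions beyond ten flattens out, consistent with the recommendation above.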


2017 ◽  
Vol 5 (2) ◽  
pp. 121
Author(s):  
Mazen Doumani ◽  
Adnan Habib ◽  
Abrar Alhababi ◽  
Ahmad Bashnakli ◽  
Enass Shamsy ◽  
...  

Self-confidence level assessment in newly graduated students is very important for evaluating undergraduate endodontic courses. Objective: The aim of this study was to get information from internship dentists in Alfarabi dental college related to their confidence levels during root canal treatment procedures. Methods: Anonymous survey forms were sent to 150 internship dentists in Alfarabi dental college. They were asked to indicate their self-confidence level by Likert’s scoring system ranging between 1 and 5. Results: Removal of broken instruments was determined as a procedure that was not experienced by 25.2% of the dentists. 44.6% of dentists felt confident about taking radiographs during root canal treatment. 1.9% of them reported having very little confidence during retreatment. Irrigation was a procedure about which they felt very confident (59.2%). Conclusion: Non-practiced endodontic procedures were clearly related to levels of self-confidence among internship dentists; this means that further studies in dental schools should be performed to determine the weak points or gaps in undergraduate endodontic courses.


Wood Research ◽  
2021 ◽  
Vol 66 (4) ◽  
pp. 582-594
Author(s):  
FRANCISCO ANTONIO ROCCO LAHR ◽  
VINICIUS BORGES DE MOURA AQUINO ◽  
FELIPE NASCIMENTO ARROYO ◽  
HERISSON FERREIRA DOS SANTOS ◽  
SERGIO AUGUSTO MELLO SILVA ◽  
...  

The Brazilian standard ABNT 7190 (1997) establishes the strength classes C20, C30, C40 and C60 for the proper framing of the different wood types in the group of hardwoods. Associated with the strength class, which is based on the characteristic value of compressive strength parallel to the fibers (fc0,k), the standard stipulates the respective values representing the stiffness (Ec0), with 19500 MPa being the reference value for class C40; these are essential variables in structural design. Because the C40 class is the one with the greatest amplitude (20 MPa), it is possible that the value 19500 MPa is not the best representation of stiffness. This work aimed to verify the representativeness of the stiffness value established by the Brazilian standard for C40 wood. The result obtained from the average confidence interval indicates the value of 14110 MPa as the most representative, which may imply structures that are supposedly more rigid than they really are.


2019 ◽  
Vol 11 (21) ◽  
pp. 116-126
Author(s):  
Israa Jameel Muhsin

DEMs are, thus, simply regular grids of elevation measurements over the land surface. The aim of the present work is to produce a high resolution DEM for a certain investigated region (i.e. the Baghdad University campus, College of Science). The easting and northing of 90 locations, including the ground base and buildings of the studied area, have been obtained by field survey using the global positioning system (GPS). The image of the investigated area has been extracted from the QuickBird satellite sensor (with a spatial resolution of 0.6 m). It has been geo-referenced and rectified using a 1st order polynomial transformation. Many interpolation methods have been used to estimate the elevation, such as ordinary Kriging, inverse distance weighted (IDW) and natural neighbor methods. The mosaic algorithm has then been applied between the base and building layers of the studied area in order to produce the final DEM. The accuracy assessments of the interpolation methods have been calculated using the root-mean-square-error (RMSE) criterion. Finally, the estimated DEMs have been used to construct 3-D views of the original image.
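Inverse distance weighting, one of the interpolation methods mentioned above, can be sketched as follows. This is a generic illustration; the function name, the power parameter, and the survey values are our own assumptions, not the paper's implementation:

```python
import math

def idw_interpolate(points, query, power=2.0):
    """Inverse-distance-weighted elevation estimate at a query location.

    points: iterable of (x, y, elevation) survey observations.
    query:  (x, y) location where the elevation is estimated.
    """
    num = 0.0
    den = 0.0
    for x, y, z in points:
        d = math.hypot(x - query[0], y - query[1])
        if d == 0.0:
            return z  # query coincides with a survey point
        w = 1.0 / d ** power
        num += w * z
        den += w
    return num / den

# Hypothetical GPS survey points: (easting, northing, elevation in m).
survey = [(0, 0, 34.0), (10, 0, 36.0), (0, 10, 35.0), (10, 10, 37.0)]
print(round(idw_interpolate(survey, (5, 5)), 2))  # prints 35.5
```

At the centre all four weights are equal, so the estimate reduces to the plain mean; near a survey point the estimate converges to that point's elevation, which is the defining property of IDW.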


2021 ◽  
Author(s):  
Omar Torres ◽  
Hiren Jethva ◽  
Changwoo Ahn ◽  
Glen Jaross ◽  
Diego Loyola

<p>The NASA-TROPOMI aerosol algorithm (TropOMAER) is an adaptation of the currently operational OMI near-UV (OMAERUV & OMACA) inversion schemes that takes advantage of TROPOMI’s unprecedentedly fine spatial resolution at UV wavelengths, and of the availability of ancillary aerosol-related information, to derive aerosol loading in cloud-free and above-cloud aerosol scenes. In this presentation we will introduce the NASA TROPOMI aerosol algorithm and discuss initial evaluation results of retrieved aerosol optical depth (AOD) and single scattering albedo (SSA) by direct comparison to AERONET AOD direct measurements and SSA inversions. We will also demonstrate TropOMAER retrieval capabilities in the context of recent continental-scale aerosol events.</p>

