GEOID90: High‐resolution geoid height model for the conterminous United States

1992 ◽  
Author(s):  
Dennis G. Milbert
2018 ◽  
Author(s):  
Kenneth Belitz ◽  
Richard B. Moore ◽  
T.L. Arnold ◽  
J.B. Sharpe ◽  
...  

2021 ◽  
Vol 13 (12) ◽  
pp. 2239
Author(s):  
Ying Quan ◽  
Mingze Li ◽  
Yuanshuo Hao ◽  
Bin Wang

As a common form of light detection and ranging (LiDAR) product in forestry applications, the canopy height model (CHM) provides the elevation distribution of aboveground vegetation. A CHM is traditionally generated by interpolating all the first LiDAR echoes. However, the first echo cannot accurately represent the canopy surface, and the resulting large amount of noise (data pits) also reduces CHM quality. Although previous studies have concentrated on many pit-filling methods, the applicability of these methods to high-resolution unmanned aerial vehicle laser scanning (UAVLS)-derived CHMs has not been assessed. This study selected eight widely used, recently developed, representative pit-filling methods, namely first-echo interpolation, smooth filtering (mean, median and Gaussian), highest point interpolation, the pit-free algorithm, the spike-free algorithm and graph-based progressive morphological filtering (GPMF). A comprehensive evaluation framework was implemented, including a quantitative evaluation using simulated data and an additional application evaluation using UAVLS data. The results indicated that the spike-free algorithm and GPMF had excellent visual performance and were closest to the real canopy surface (root mean square errors (RMSEs) for the simulated data were 0.1578 m and 0.1093 m, respectively; RMSEs for the UAVLS data were 0.3179 m and 0.4379 m, respectively). Compared with the first-echo method, the accuracies of the spike-free algorithm and GPMF improved by approximately 23% and 22%, respectively. The pit-free algorithm and the highest point interpolation method also have advantages in high-resolution CHM generation. The global smooth filter method based on the first-echo CHM reduced the average canopy height by approximately 7.73%. Coniferous forests require more pit-filling than broad-leaved and mixed forests. Although the results of individual tree applications indicated no significant difference among these methods except for the median filter, pit-filling is still of great significance for generating high-resolution CHMs. This study provides guidance for using high-resolution UAVLS in forestry applications.
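The data-pit problem the abstract describes can be illustrated with a minimal neighborhood-median filler. This is a generic sketch for intuition only, not any of the eight evaluated algorithms; the window size and pit-depth threshold are invented for illustration:

```python
import numpy as np

def fill_pits_median(chm, window=3, pit_depth=2.0):
    """Fill data pits in a CHM grid: a cell sitting far below its
    neighborhood median (by more than `pit_depth` meters, an illustrative
    threshold) is treated as a pit and replaced with that median."""
    filled = chm.copy()
    r = window // 2
    padded = np.pad(chm, r, mode="edge")  # replicate edges so borders keep full windows
    rows, cols = chm.shape
    for i in range(rows):
        for j in range(cols):
            neigh = padded[i:i + window, j:j + window]
            med = np.median(neigh)
            if med - chm[i, j] > pit_depth:  # cell much lower than neighbors -> pit
                filled[i, j] = med
    return filled
```

Unlike a global smooth filter, which the abstract notes lowers the average canopy height, this kind of conditional replacement only touches cells flagged as pits and leaves the rest of the surface intact.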


2016 ◽  
Vol 55 (10) ◽  
pp. 2247-2262 ◽  
Author(s):  
Rebecca V. Cumbie-Ward ◽  
Ryan P. Boyles

Abstract. A standardized precipitation index (SPI) that uses high-resolution, daily estimates of precipitation from the National Weather Service over the contiguous United States has been developed and is referred to as HRD SPI. There are two different historical distributions computed in the HRD SPI dataset, each with a different combination of normals period (1971–2000 or 1981–2010) and clustering solution of gauge stations. For each historical distribution, the SPI is computed using the NCEP Stage IV and Advanced Hydrologic Prediction Service (AHPS) gridded precipitation datasets for a total of four different HRD SPI products. HRD SPIs are found to correlate strongly with independently produced SPIs over the 10-yr period from 2005 to 2015. The drought-monitoring utility of the HRD SPIs is assessed with case studies of drought in the central and southern United States during 2012 and over the Carolinas during 2007–08. A monthly comparison between HRD SPIs and independently produced SPIs reveals generally strong agreement during both events but weak agreement in areas where radar coverage is poor. For both study regions, HRD SPI is compared with the U.S. Drought Monitor (USDM) to assess the best combination of precipitation input, normals period, and station clustering solution. SPI generated with AHPS precipitation and the 1981–2010 PRISM normals and associated cluster solution is found to best capture the spatial extent and severity of drought conditions indicated by the USDM. This SPI is also able to resolve local variations in drought conditions that are not shown by either the USDM or comparison SPI datasets.
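The SPI itself is a probability transform of accumulated precipitation: a total is located within a historical distribution for a normals period, and that probability is mapped to a standard normal z-score. Operational products such as HRD SPI fit a gamma distribution; the sketch below substitutes an empirical CDF to stay self-contained, and the history values are invented:

```python
from statistics import NormalDist

def spi_empirical(history, value):
    """SPI-style index sketch: rank `value` against a historical record of
    precipitation totals (empirical CDF in place of the operational gamma
    fit), then map that probability to a standard normal z-score.
    Negative values indicate drier than normal, positive wetter."""
    n = len(history)
    rank = sum(1 for x in history if x <= value)
    p = (rank + 0.5) / (n + 1)  # plotting position keeps p strictly inside (0, 1)
    return NormalDist().inv_cdf(p)
```

A real implementation also needs special handling for zero-precipitation months and a parametric fit so that extreme totals outside the historical record still get finite, meaningful scores.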


2017 ◽  
Author(s):  
Paul W. Miller ◽  
Thomas L. Mote

Abstract. Weakly forced thunderstorms (WFTs), short-lived convection forming in synoptically quiescent regimes, are a contemporary forecasting challenge. The convective environments that support severe WFTs are often similar to those that yield only nonsevere WFTs, and additionally, only a small proportion of individual WFTs will ultimately produce severe weather. The purpose of this study is to better characterize the relative severe weather potential in these settings as a function of the convective environment. Thirty near-storm convective parameters for > 200 000 WFTs in the Southeast United States are calculated from a high-resolution numerical forecasting model, the Rapid Refresh (RAP). For each parameter, the relative likelihood of WFT days with at least one severe weather event is assessed along a moving threshold. Parameters (and their values) that reliably separate severe-weather-supporting from nonsevere WFT days are highlighted. Only two convective parameters, vertical totals (VT) and total totals (TT), appreciably differentiate severe-wind-supporting and severe-hail-supporting days from nonsevere WFT days. When VTs exceeded values between 24.6–25.1 °C or TTs between 46.5–47.3 °C, severe-wind days were roughly 5 × more likely. Meanwhile, severe-hail days became roughly 10 × more likely when VTs exceeded 24.4–26.0 °C or TTs exceeded 46.3–49.2 °C. The stronger performance of VT and TT is partly attributed to the more accurate representation of these parameters in the numerical model. Under-reporting of severe weather and model error are posited to exacerbate the forecasting challenge by obscuring the subtle convective environmental differences enhancing storm severity.
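The two skill-bearing parameters are simple stability indices built from mandatory-level temperatures: VT = T(850 hPa) − T(500 hPa), and TT adds the cross totals, Td(850 hPa) − T(500 hPa). A minimal sketch, with station values invented for illustration:

```python
def vertical_totals(t850_c, t500_c):
    """Vertical totals (VT): 850 hPa temperature minus 500 hPa temperature, degC."""
    return t850_c - t500_c

def total_totals(t850_c, td850_c, t500_c):
    """Total totals (TT): VT plus the cross totals (850 hPa dewpoint minus
    500 hPa temperature), degC."""
    return (t850_c - t500_c) + (td850_c - t500_c)

# Invented sounding values: T850 = 12 degC, Td850 = 8 degC, T500 = -14 degC
vt = vertical_totals(12.0, -14.0)        # -> 26.0
tt = total_totals(12.0, 8.0, -14.0)      # -> 48.0
```

Against the thresholds reported in the abstract, this invented sounding (VT = 26.0 °C, TT = 48.0 °C) would fall on the severe-supporting side of both the severe-wind and severe-hail ranges.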


2017 ◽  
Vol 449 ◽  
pp. 1-11 ◽  
Author(s):  
Li Gao ◽  
Yongsong Huang ◽  
Bryan Shuman ◽  
W. Wyatt Oswald ◽  
David Foster

2020 ◽  
Author(s):  
Ben Orsburn

Abstract. The production of hemp and products derived from these plants that contain zero to trace amounts of the psychoactive cannabinoid tetrahydrocannabinol (THC) is a rapidly growing new market in the United States. The most common products today contain relatively high concentrations of the compound cannabidiol (CBD). Recent studies have investigated commercial CBD products using targeted assays and have found varying degrees of misrepresentation and contamination of these products. To expand on previous studies, we demonstrate the application of non-targeted screening by high-resolution accurate mass spectrometry to more comprehensively identify potential adulterants and contaminants. We find evidence to support previous conclusions that CBD products are commonly misrepresented in terms of the cannabinoid concentrations present. Specifically, we observe a wide variation in relative THC concentrations across the products tested, with some products containing 10-fold more relative signal than others. In addition, we find that several products appear to be purposely adulterated with over-the-counter drugs such as caffeine and melatonin. We also observe multiple small molecule contaminants that are typically linked to improper production or packaging methods in food or pharmaceutical production. Finally, we present high-resolution accurate mass spectrometry data and tandem MS/MS fragments supporting the presence of trace amounts of fluorofentanyl in a single mail order CBD product. We conclude that the CBD industry would benefit from more robust testing regulations and that the cannabis testing industry, in general, would benefit from the use of non-targeted screening technologies.
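Non-targeted screening by accurate mass spectrometry rests on matching each measured m/z against candidate formulas within a tight mass-accuracy tolerance, usually expressed in parts per million. A minimal sketch; the 5 ppm tolerance and the theoretical m/z values are illustrative assumptions, not values from the study:

```python
def ppm_error(measured_mz, theoretical_mz):
    """Mass accuracy in parts per million (ppm) between a measured m/z and
    a candidate formula's theoretical m/z."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

def matches(measured_mz, theoretical_mz, tol_ppm=5.0):
    """Retain a candidate identification when the absolute mass error falls
    within the tolerance (tol_ppm is an illustrative default)."""
    return abs(ppm_error(measured_mz, theoretical_mz)) <= tol_ppm
```

In a non-targeted workflow this filter is applied to every detected feature against a library of candidate masses, which is what lets unexpected adulterants surface without a predefined target list.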

