Developing causal interpretations for high and low level light used in quantum remote sensing

Author(s):  
ChandraSekhar Roychoudhuri
2019 ◽  
Vol 11 (13) ◽  
pp. 1617 ◽  
Author(s):  
Jicheng Wang ◽  
Li Shen ◽  
Wenfan Qiao ◽  
Yanshuai Dai ◽  
Zhilin Li

The classification of very-high-resolution (VHR) remote sensing images is essential in many applications. However, high intraclass and low interclass variations in these kinds of images pose serious challenges. Fully convolutional network (FCN) models, which benefit from a powerful feature learning ability, have shown impressive performance and great potential. Nevertheless, only classification results with coarse resolution can be obtained from the original FCN method. Deep feature fusion is often employed to improve the resolution of outputs. Existing fusion strategies neither properly exploit low-level features nor account for the importance of features at different scales. This paper proposes a novel, end-to-end, fully convolutional network that integrates a multiconnection ResNet model and a class-specific attention model into a unified framework to overcome these problems. The former fuses multilevel deep features without introducing any redundant information from low-level features. The latter learns the contribution of each geo-object's features at each scale. Extensive experiments on two open datasets indicate that the proposed method achieves class-specific, scale-adaptive classification results and outperforms other state-of-the-art methods. The results were submitted to the International Society for Photogrammetry and Remote Sensing (ISPRS) online contest for comparison with more than 50 other methods. The results indicate that the proposed method (ID: SWJ_2) ranks #1 in overall accuracy, even though the additional digital surface model (DSM) data offered by ISPRS were not used and no postprocessing was applied.
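The class-specific attention idea described above, learning a separate weight for each (scale, class) pair and using it to fuse multi-scale score maps, can be illustrated with a minimal NumPy sketch. The shapes, the softmax normalization over scales, and the function names here are illustrative assumptions, not the authors' actual implementation:

```python
import numpy as np

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def class_specific_fusion(score_maps, attn_logits):
    """Fuse per-scale, per-class score maps with class-specific scale weights.

    score_maps:  (S, C, H, W) class scores predicted at S scales
    attn_logits: (S, C) learned attention logits, one weight per (scale, class)
    returns:     (C, H, W) fused class scores
    """
    w = softmax(attn_logits, axis=0)                 # weights sum to 1 over scales
    return np.einsum('schw,sc->chw', score_maps, w)  # per-class weighted sum

rng = np.random.default_rng(0)
fused = class_specific_fusion(rng.random((3, 6, 8, 8)), rng.random((3, 6)))
print(fused.shape)
```

Because the weights are normalized per class rather than globally, each geo-object class can favor a different scale, which is what makes the fusion "class-specific" and "scale-adaptive".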


2021 ◽  
Vol 14 (4) ◽  
pp. 2673-2697
Author(s):  
Hong Chen ◽  
Sebastian Schmidt ◽  
Michael D. King ◽  
Galina Wind ◽  
Anthony Bucholtz ◽  
...  

Abstract. Cloud optical properties such as optical thickness along with surface albedo are important inputs for deriving the shortwave radiative effects of clouds from spaceborne remote sensing. Owing to insufficient knowledge about the snow or ice surface in the Arctic, cloud detection and the retrieval products derived from passive remote sensing, such as from the Moderate Resolution Imaging Spectroradiometer (MODIS), are difficult to obtain with adequate accuracy – especially for low-level thin clouds, which are ubiquitous in the Arctic. This study aims at evaluating the spectral and broadband irradiance calculated from MODIS-derived cloud properties in the Arctic using aircraft measurements collected during the Arctic Radiation-IceBridge Sea and Ice Experiment (ARISE), specifically using the upwelling and downwelling shortwave spectral and broadband irradiance measured by the Solar Spectral Flux Radiometer (SSFR) and the BroadBand Radiometer system (BBR). This starts with the derivation of surface albedo from SSFR and BBR, accounting for the heterogeneous surface in the marginal ice zone (MIZ) with aircraft camera imagery, followed by subsequent intercomparisons of irradiance measurements and radiative transfer calculations in the presence of thin clouds. It ends with an attribution of any biases we found to causes, based on the spectral dependence and the variations in the measured and calculated irradiance along the flight track. The spectral surface albedo derived from the airborne radiometers is consistent with prior ground-based and airborne measurements and adequately represents the surface variability for the study region and time period. Somewhat surprisingly, the primary error in MODIS-derived irradiance fields for this study stems from undetected clouds, rather than from the retrieved cloud properties. In our case study, about 27 % of clouds remained undetected, which is attributable to clouds with an optical thickness of less than 0.5. 
We conclude that passive imagery has the potential to accurately predict shortwave irradiances in the region if the detection of thin clouds is improved. Of at least equal importance, however, is the need for an operational imagery-based surface albedo product for the polar regions that adequately captures its temporal, spatial, and spectral variability to estimate cloud radiative effects from spaceborne remote sensing.
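The albedo-derivation step described above, taking the ratio of upwelling to downwelling spectral irradiance measured by the airborne radiometers, can be sketched as follows. The `min_down` threshold and the NaN masking of weak-signal wavelengths are illustrative assumptions, not the SSFR/BBR processing chain:

```python
import numpy as np

def spectral_albedo(up, down, min_down=1.0):
    """Per-wavelength surface albedo from irradiance pairs.

    up, down: upwelling / downwelling spectral irradiance (e.g. W m^-2 nm^-1).
    Wavelengths where the downwelling signal is below min_down are set to NaN,
    since the ratio is unreliable there.
    """
    up = np.asarray(up, dtype=float)
    down = np.asarray(down, dtype=float)
    alb = np.full_like(up, np.nan)
    mask = down > min_down
    alb[mask] = np.clip(up[mask] / down[mask], 0.0, 1.0)  # physical range [0, 1]
    return alb

# Example: bright snow at a short wavelength, darker surface at a longer one,
# and a near-zero-signal channel that gets masked.
alb = spectral_albedo([450.0, 200.0, 0.1], [500.0, 400.0, 0.2])
print(alb)
```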


1992 ◽  
Vol 108 (1) ◽  
pp. 185-191 ◽  
Author(s):  
F. G. Davies ◽  
E. Kilelu ◽  
K. J. Linthicum ◽  
R. G. Pegram

SUMMARY A hypothesis that there was an annual emergence of Rift Valley fever virus in Zambia, during or after the seasonal rains, was examined with the aid of sentinel cattle. Serum samples taken during 1974 and 1978 showed evidence of epizootic Rift Valley fever in Zambia, with more than 80% of samples positive. A sentinel herd exposed from 1982 to 1986 showed that some Rift Valley fever activity occurred each year. This was usually at a low level, with 3–8% of the susceptible cattle seroconverting. In 1985–6, more than 20% of the animals seroconverted; this greater activity was associated with vegetational changes, detectable by remote-sensing satellite imagery, which have also been associated with greater virus activity in Kenya.


2018 ◽  
Vol 176 ◽  
pp. 11001
Author(s):  
Livius Buzdugan ◽  
Denisa Urlea ◽  
Paul Bugeac ◽  
Sabina Stefan

This paper studies the atmospheric conditions that determine low vertical visibility over Henri Coanda airport. A network of ceilometers and a sodar were used to detect fog and low-level cloud layers. Vertical visibility from the ceilometers and acoustic reflectivity from the sodar for November 2016 were used to estimate fog depth and the top of the fog layers, respectively. The correlation of fog and low-cloud occurrence with wind direction and speed is also investigated.


2021 ◽  
Vol 13 (18) ◽  
pp. 3617
Author(s):  
Xudong Yao ◽  
Qing Guo ◽  
An Li

Clouds in optical remote sensing images cause spectral information to change or be lost, which affects image analysis and application. Cloud detection is therefore of great significance. However, current methods have shortcomings: limited extendibility when they depend on information from many spectral bands, poor robustness when they rely on manually determined thresholds, and limited accuracy, especially for thin clouds or complex scenes, when they are built on low-level manual features. Considering these shortcomings and the efficiency requirements of practical applications, we propose a light-weight deep learning cloud detection network based on the DeeplabV3+ architecture and a channel attention module (CD-AttDLV3+), using only the most common red–green–blue and near-infrared bands. In the CD-AttDLV3+ architecture, an optimized backbone network, MobileNetV2, is used to reduce the number of parameters and calculations. Atrous spatial pyramid pooling effectively reduces the information loss caused by multiple down-samplings while extracting multi-scale features. CD-AttDLV3+ concatenates more low-level features than DeeplabV3+ to improve the cloud boundary quality. The channel attention module is introduced to strengthen the learning of important channels and improve the training efficiency. Moreover, the loss function is improved to alleviate the imbalance of samples. On the Landsat-8 Biome set, CD-AttDLV3+ achieves the highest accuracy in comparison with other methods, including Fmask, SVM, and SegNet, especially for distinguishing clouds from bright surfaces and detecting light-transmitting thin clouds. It also performs well on other Landsat-8 and Sentinel-2 images. Experimental results indicate that CD-AttDLV3+ is robust, with high accuracy and extendibility.
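A channel attention module of the kind described, reweighting feature channels by gates computed from globally pooled statistics, can be sketched in NumPy. The squeeze-and-excitation form, the reduction ratio, and the weight shapes are assumptions for illustration, not the CD-AttDLV3+ code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w_reduce, w_expand):
    """Channel attention on a (C, H, W) feature map.

    w_reduce: (C // r, C) bottleneck weights; w_expand: (C, C // r).
    Each channel is scaled by a gate in (0, 1) computed from the
    globally average-pooled channel descriptor.
    """
    squeeze = feat.mean(axis=(1, 2))                                  # (C,)
    excite = sigmoid(w_expand @ np.maximum(w_reduce @ squeeze, 0.0))  # (C,) gates
    return feat * excite[:, None, None]                               # reweight channels

rng = np.random.default_rng(1)
C, r = 8, 4
feat = rng.random((C, 16, 16))
out = channel_attention(feat,
                        rng.standard_normal((C // r, C)),
                        rng.standard_normal((C, C // r)))
print(out.shape)
```

The gates let training emphasize informative channels (e.g. near-infrared responses useful for thin-cloud detection) and suppress the rest, at negligible parameter cost.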


2021 ◽  
Vol 10 (10) ◽  
pp. 672
Author(s):  
Suting Chen ◽  
Chaoqun Wu ◽  
Mithun Mukherjee ◽  
Yujie Zheng

Semantic segmentation of remote sensing images (RSI) plays a significant role in urban management and land cover classification. Due to the rich spatial information in RSI, existing convolutional neural network (CNN)-based methods cannot segment images accurately and lose some edge information of objects. In addition, recent studies have shown that leveraging additional 3D geometric data alongside 2D appearance is beneficial for distinguishing pixel categories. However, most such methods require height maps as additional inputs, which severely limits their applications. To alleviate these issues, we propose a height-aware multi-path parallel network (HA-MPPNet). HA-MPPNet first obtains multi-level semantic features while maintaining the spatial resolution in each path to preserve detailed image information. Afterward, gated high-low level feature fusion is used to complement the lack of low-level semantics. We then design a height feature decode branch that learns height features under the supervision of digital surface model (DSM) images and uses the learned embeddings to improve the semantic context via height-feature-guided propagation. Note that the network is end-to-end and does not require a DSM image as an additional input after training. Our method outperformed other state-of-the-art methods for semantic segmentation on publicly available remote sensing image datasets.
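The gated high-low level feature fusion step can be sketched as follows, with the gate computed from the high-level features and applied to the low-level ones. The single-channel gate, the additive fusion, and the assumption that both maps are already spatially aligned are illustrative choices, not the HA-MPPNet implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(low, high, w_gate):
    """Fuse low-level detail into high-level semantics through a learned gate.

    low, high: (C, H, W) aligned feature maps; w_gate: (1, C) gate projection.
    The gate, predicted from the high-level features, decides per pixel how
    much low-level detail to pass through, suppressing noisy responses.
    """
    gate = sigmoid(np.einsum('oc,chw->ohw', w_gate, high))  # (1, H, W) in (0, 1)
    return high + gate * low

rng = np.random.default_rng(2)
low, high = rng.random((4, 8, 8)), rng.random((4, 8, 8))
fused = gated_fusion(low, high, rng.standard_normal((1, 4)))
print(fused.shape)
```

Gating low-level features this way keeps object edges (carried by the low-level path) without flooding the high-level semantics with texture noise, which is the motivation the abstract gives for the fusion step.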

