Cloud Detection Algorithm for Multi-Satellite Remote Sensing Imagery Based on a Spectral Library and 1D Convolutional Neural Network

2021 ◽  
Vol 13 (16) ◽  
pp. 3319
Author(s):  
Nan Ma ◽  
Lin Sun ◽  
Chenghu Zhou ◽  
Yawen He

Automatic cloud detection in remote sensing images is of great significance. Deep-learning-based methods can achieve cloud detection with high accuracy; however, network training relies heavily on a large number of labels, and manually producing pixel-level cloud and non-cloud annotations for many remote sensing images is laborious and requires expert knowledge. Moreover, different types of satellite images cannot share a single training set, owing to differences in spectral range and spatial resolution, so labelled samples are required for each new satellite sensor to train a new deep-learning-based model. To overcome this limitation, a novel cloud detection algorithm based on a spectral library and a convolutional neural network (CD-SLCNN) is proposed in this paper. In this method, a one-dimensional residual CNN (Res-1D-CNN) accurately captures the spectral information of each pixel based on a prior spectral library, effectively preventing errors caused by the uncertainties of thin clouds, broken clouds, and clear-sky pixels during remote sensing interpretation. Benefiting from data simulation, the method is suitable for cloud detection on different types of multispectral data. A total of 62 Landsat-8 Operational Land Imager (OLI), 25 Moderate Resolution Imaging Spectroradiometer (MODIS), and 20 Sentinel-2 images, acquired at different times and over different underlying surfaces such as dense vegetation, urban areas, bare soil, water, and mountains, were used for cloud detection validation and quantitative analysis, and the results were compared with those of Fmask, the MODIS cloud mask, a support vector machine, and a random forest. The comparison revealed that the CD-SLCNN method achieved the best performance, with a higher overall accuracy (95.6%, 95.36%, and 94.27%) and mean intersection over union (77.82%, 77.94%, and 77.23%) on the Landsat-8 OLI, MODIS, and Sentinel-2 data, respectively. The CD-SLCNN algorithm produced consistent results with more accurate cloud contours on thick, thin, and broken clouds over diverse underlying surfaces, and performed stably over bright surfaces such as buildings, ice, and snow.
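As a rough illustration of the per-pixel spectral classification the abstract describes, the sketch below applies a 1-D residual CNN to a single pixel's spectrum. The layer widths, band count, and two-class head are illustrative assumptions, not the authors' exact Res-1D-CNN configuration.

```python
# Hypothetical sketch of a per-pixel 1-D residual CNN for cloud detection;
# hyperparameters are assumptions, not the paper's exact design.
import torch
import torch.nn as nn

class ResBlock1D(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels),
            nn.ReLU(inplace=True),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels),
        )

    def forward(self, x):
        return torch.relu(self.body(x) + x)  # residual (identity) connection

class Res1DCNN(nn.Module):
    """Classifies a single pixel's spectrum as cloud / non-cloud."""
    def __init__(self, n_bands: int = 7, n_classes: int = 2):
        super().__init__()
        self.stem = nn.Conv1d(1, 32, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(ResBlock1D(32), ResBlock1D(32))
        self.head = nn.Linear(32 * n_bands, n_classes)

    def forward(self, spectra):          # spectra: (batch, n_bands)
        x = spectra.unsqueeze(1)         # -> (batch, 1, n_bands) for Conv1d
        x = torch.relu(self.stem(x))
        x = self.blocks(x)
        return self.head(x.flatten(1))   # logits over {cloud, clear}

logits = Res1DCNN()(torch.rand(4, 7))    # 4 pixels, 7 spectral bands
```

Operating on the spectral dimension alone is what lets one trained model, fed with simulated band responses, transfer across sensors with different band sets.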

2021 ◽  
Vol 9 (1) ◽  
pp. 47-70
Author(s):  
Kumar Gaurav ◽  
François Métivier ◽  
Rajiv Sinha ◽  
Amit Kumar ◽  
Sampat Kumar Tandon ◽  
...  

Abstract. We propose an innovative methodology to estimate the formative discharge of alluvial rivers from remote sensing images. The procedure involves automatic extraction of channel width from Landsat Thematic Mapper, Landsat 8, and Sentinel-1 satellite images. We translate the channel width extracted from satellite images into discharge using a width–discharge regime curve we previously established for Himalayan rivers. This regime curve is based on threshold theory, a simple physical force balance that explains the first-order geometry of alluvial channels. Using this procedure, we estimate the formative discharge of six major rivers of the Himalayan foreland: the Brahmaputra, Chenab, Ganga, Indus, Kosi, and Teesta. Except for the highly regulated rivers (Indus and Chenab), our discharge estimates from satellite images compare well with the mean annual discharge obtained from historical gauging-station records. We show that this procedure applies to both braided and single-thread rivers over a large territory. Furthermore, our methodology for estimating discharge from remote sensing images does not rely on continuous ground calibration.
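A regime curve of this kind takes the power-law form W = aQ^b, with threshold theory predicting b ≈ 0.5, so the width measured on an image can be inverted for discharge. The sketch below shows that inversion; the coefficients are placeholders, not the calibrated Himalayan values.

```python
# Illustrative inversion of a power-law width–discharge regime curve,
# W = a * Q**b; a and b below are placeholder values, not the paper's fit.
import numpy as np

def discharge_from_width(width_m: np.ndarray, a: float = 3.0, b: float = 0.5) -> np.ndarray:
    """Invert W = a * Q**b to estimate formative discharge Q (m^3/s)."""
    return (width_m / a) ** (1.0 / b)

# Widths measured on a satellite image, one value per cross-section;
# for a braided river the widths of individual threads would be summed first.
widths = np.array([220.0, 305.0, 260.0])
print(discharge_from_width(widths))  # discharge estimate per cross-section
```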


Sensors ◽  
2020 ◽  
Vol 20 (5) ◽  
pp. 1465 ◽  
Author(s):  
Lili Zhang ◽  
Jisen Wu ◽  
Yu Fan ◽  
Hongmin Gao ◽  
Yehong Shao

In this paper, we consider building extraction from high-spatial-resolution remote sensing images. At present, most building extraction methods are based on hand-crafted features; however, the diversity and complexity of buildings mean that building extraction still faces great challenges, so methods based on deep learning have recently been proposed. In this paper, a building extraction framework based on a convolutional neural network and an edge detection algorithm, called Mask R-CNN Fusion Sobel, is proposed. Because of Mask R-CNN's outstanding achievements in image segmentation, we improve it and apply it to building extraction from remote sensing images. Our method consists of three parts. First, the convolutional neural network performs rough localization and pixel-level classification, mitigating false and missed extractions by automatically discovering semantic features. Second, the Sobel edge detection algorithm segments building edges accurately, addressing the imprecise boundaries and incomplete objects that deep convolutional neural networks produce in semantic segmentation. Third, buildings are extracted by a fusion algorithm. We use the proposed framework to extract buildings from high-resolution images of the Chinese satellite GF-2; the experiments show that the proposed method achieved an average IOU (intersection over union) of 88.7% and an average Kappa of 87.8%. Our method can therefore be applied to the recognition and segmentation of complex buildings and is superior to classical methods in accuracy.
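To make the edge-refinement idea concrete, the sketch below computes a Sobel gradient magnitude and uses it to trim a coarse CNN mask's boundary. The fusion rule here (drop boundary pixels lacking edge support) is a simplified assumption, not the paper's exact fusion algorithm.

```python
# Sketch: Sobel edges refine a coarse CNN building mask. The fusion rule
# is a simplifying assumption, not the Mask R-CNN Fusion Sobel algorithm.
import cv2
import numpy as np

def sobel_magnitude(gray: np.ndarray) -> np.ndarray:
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-9)      # normalise to [0, 1]

def refine_mask(cnn_mask: np.ndarray, gray: np.ndarray,
                edge_thresh: float = 0.3) -> np.ndarray:
    """Keep mask interior, but trim boundary pixels that lack edge support."""
    edges = sobel_magnitude(gray) > edge_thresh
    eroded = cv2.erode(cnn_mask.astype(np.uint8), np.ones((3, 3), np.uint8))
    boundary = cnn_mask.astype(bool) & ~eroded.astype(bool)
    refined = cnn_mask.astype(bool).copy()
    refined[boundary & ~edges] = False   # drop unsupported boundary pixels
    return refined.astype(np.uint8)

gray = np.random.rand(64, 64).astype(np.float32)        # stand-in image
mask = (np.random.rand(64, 64) > 0.5).astype(np.uint8)  # stand-in CNN mask
print(refine_mask(mask, gray).sum())
```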


2021 ◽  
Vol 17 (3) ◽  
pp. 235-247
Author(s):  
Jun Zhang ◽  
Junjun Liu

Remote sensing is an indispensable technique for monitoring earth resources and environmental change. However, optical remote sensing images often contain large amounts of cloud, especially over tropical rain forest areas, making it difficult to obtain completely cloud-free images. Accurate cloud detection is therefore of great research value for optical remote sensing applications. In this paper, we propose a saliency-model-oriented convolutional neural network for cloud detection in remote sensing images. First, we adopt Kernel Principal Component Analysis (KPCA) for unsupervised pre-training of the network. Second, small labelled samples are used to fine-tune the network, and the images are segmented with a super-pixel approach before cloud detection to eliminate irrelevant background and non-cloud objects. Third, the image blocks are fed into the trained convolutional neural network (CNN) for cloud detection, and the segmented image is recovered. Fourth, we fuse the detection result with the saliency map of the raw image to further improve accuracy. Experiments show that the proposed method detects clouds accurately and is more robust than other state-of-the-art cloud detection methods.
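The sketch below illustrates two of the preprocessing steps mentioned above: KPCA over pixel spectra and super-pixel segmentation. The parameter values and the 4-band stand-in image are illustrative assumptions, not those of the paper.

```python
# Minimal sketch of KPCA feature extraction and super-pixel segmentation;
# parameters and the random stand-in image are assumptions.
import numpy as np
from sklearn.decomposition import KernelPCA
from skimage.segmentation import slic

image = np.random.rand(128, 128, 4)              # stand-in 4-band image

# 1) Unsupervised KPCA over a subsample of pixel spectra.
spectra = image.reshape(-1, image.shape[-1])
kpca = KernelPCA(n_components=3, kernel="rbf")
components = kpca.fit_transform(spectra[::50])   # subsample for tractability

# 2) Super-pixel segmentation to suppress irrelevant background regions.
segments = slic(image, n_segments=300, compactness=10, channel_axis=-1)
print(components.shape, segments.max() + 1)      # features, #superpixels
```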


2021 ◽  
Vol 13 (18) ◽  
pp. 3617
Author(s):  
Xudong Yao ◽  
Qing Guo ◽  
An Li

Clouds in optical remote sensing images cause spectral information to change or be lost, which affects image analysis and application; cloud detection is therefore of great significance. However, current methods have shortcomings: limited extendibility when they depend on the information of many spectral bands or on manually determined thresholds, and limited accuracy, especially for thin clouds or complex scenes, when they rely on low-level manual features. Considering these shortcomings and the efficiency required in practical applications, we propose a light-weight deep learning cloud detection network based on the DeeplabV3+ architecture and a channel attention module (CD-AttDLV3+), using only the common red, green, blue, and near-infrared bands. In the CD-AttDLV3+ architecture, an optimized MobileNetV2 backbone reduces the number of parameters and calculations. Atrous spatial pyramid pooling extracts multi-scale features while effectively reducing the information loss caused by repeated down-sampling. CD-AttDLV3+ concatenates more low-level features than DeeplabV3+ to improve cloud boundary quality, and the channel attention module strengthens the learning of important channels and improves training efficiency. Moreover, the loss function is improved to alleviate sample imbalance. On the Landsat-8 Biome dataset, CD-AttDLV3+ achieves the highest accuracy in comparison with other methods, including Fmask, SVM, and SegNet, especially for distinguishing clouds from bright surfaces and detecting light-transmitting thin clouds. It also performs well on other Landsat-8 and Sentinel-2 images. Experimental results indicate that CD-AttDLV3+ is robust, with high accuracy and extendibility.
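A common realisation of such a channel attention module is the squeeze-and-excitation pattern sketched below: globally pool each channel, pass the result through a small bottleneck MLP, and reweight the feature maps. The reduction ratio is an assumed hyperparameter, and this is one plausible form, not necessarily the paper's exact module.

```python
# Squeeze-and-excitation-style channel attention; one plausible form of
# the module the abstract mentions, with an assumed reduction ratio.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                            # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))                       # squeeze: global avg pool
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)   # excitation: weights
        return x * w                                 # reweight channels

feats = torch.rand(2, 64, 32, 32)
print(ChannelAttention(64)(feats).shape)             # torch.Size([2, 64, 32, 32])
```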


Author(s):  
F. Wen ◽  
Y. Zhang ◽  
B. Zhang

Abstract. Cloud detection is a vital preprocessing step for remote sensing image applications and has been widely studied with Convolutional Neural Networks (CNNs) in recent years. However, existing CNN-based works extract only local or non-local features through stacked convolution and pooling layers, ignoring the global contextual information of the input scene. In this paper, a novel segmentation-based network is proposed for cloud detection in remote sensing images. We add a multi-class classification branch to a U-shaped semantic segmentation network. Through the encoder-decoder architecture, pixel-wise classification of cloud, shadow, and landcover is obtained. The multi-class classification branch is built on top of the encoder module and extracts global context by identifying which classes exist in the input scene. The added branch learns a linear representation that encodes this global contextual information; combined with the feature maps of the decoder, it selectively strengthens class-related features and weakens class-unrelated features at different scales. The whole network is trained and tested end-to-end. Experiments on two Landsat-8 cloud detection datasets show better performance than other deep learning methods, achieving 90.82% overall accuracy and 0.6992 mIoU on the SPARCS dataset and demonstrating the effectiveness of the proposed framework for cloud detection in remote sensing images.
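The sketch below shows one way such a branch could be wired: the encoder bottleneck is pooled to a global vector that yields scene-level class logits, while a learned embedding of that vector gates the decoder's feature maps. Shapes and the gating-by-multiplication fusion are simplifying assumptions, not the paper's exact mechanism.

```python
# Sketch of a global-context classification branch fused with decoder
# features; the multiplicative gating is an assumed fusion rule.
import torch
import torch.nn as nn

class GlobalContextBranch(nn.Module):
    def __init__(self, enc_channels: int, n_classes: int, dec_channels: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.embed = nn.Linear(enc_channels, dec_channels)  # linear representation
        self.cls = nn.Linear(enc_channels, n_classes)       # scene-level labels

    def forward(self, enc_feat, dec_feat):
        g = self.pool(enc_feat).flatten(1)             # (B, enc_channels)
        class_logits = self.cls(g)                     # which classes are present
        gate = torch.sigmoid(self.embed(g))[..., None, None]
        return dec_feat * gate, class_logits           # context-gated features

enc = torch.rand(2, 256, 8, 8)      # encoder bottleneck features
dec = torch.rand(2, 64, 64, 64)     # one decoder stage's feature maps
fused, logits = GlobalContextBranch(256, 3, 64)(enc, dec)
print(fused.shape, logits.shape)
```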


2019 ◽  
Vol 11 (13) ◽  
pp. 1588 ◽  
Author(s):  
Tao Lu ◽  
Jiaming Wang ◽  
Yanduo Zhang ◽  
Zhongyuan Wang ◽  
Junjun Jiang

Recently, the application of satellite remote sensing images has become increasingly popular, but the images observed by satellite sensors are frequently of low resolution (LR), so they cannot fully meet the requirements of object identification and analysis. To fully utilize the multi-scale characteristics of objects in remote sensing images, this paper presents a multi-scale residual neural network (MRNN). MRNN exploits the multi-scale nature of satellite images to accurately reconstruct high-frequency information for super-resolution (SR) satellite imagery. Patches of different sizes are first extracted from LR satellite images to fit objects of different scales. Large-, middle-, and small-scale deep residual neural networks are designed to simulate differently sized receptive fields, acquiring relatively global, contextual, and local information for prior representation. A fusion network then refines the information from the different scales: MRNN fuses the complementary high-frequency information from the differently scaled networks to reconstruct the desired high-resolution satellite object image, in line with human visual experience ("look in multi-scale to see better"). Experimental results on the SpaceNet satellite image and NWPU-RESISC45 databases show that the proposed approach outperforms several state-of-the-art SR algorithms in terms of both objective and subjective image quality.
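As a toy version of the multi-scale idea, the sketch below runs parallel convolutional branches with small, middle, and large kernels (approximating differently sized receptive fields), fuses them with a 1x1 convolution, and upsamples with a sub-pixel layer. Depths and widths are illustrative, not the MRNN configuration.

```python
# Toy multi-scale SR network: parallel branches with different receptive
# fields, fused then upsampled. Not the MRNN architecture itself.
import torch
import torch.nn as nn

def branch(k: int) -> nn.Module:
    return nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=k, padding=k // 2),
        nn.ReLU(inplace=True),
        nn.Conv2d(32, 32, kernel_size=k, padding=k // 2),
    )

class MultiScaleSR(nn.Module):
    def __init__(self, scale: int = 2):
        super().__init__()
        self.small, self.mid, self.large = branch(3), branch(5), branch(7)
        self.fuse = nn.Conv2d(96, 3 * scale**2, kernel_size=1)
        self.up = nn.PixelShuffle(scale)          # sub-pixel upsampling

    def forward(self, lr):                        # lr: (B, 3, H, W)
        feats = torch.cat([self.small(lr), self.mid(lr), self.large(lr)], dim=1)
        return self.up(self.fuse(feats))          # (B, 3, 2H, 2W)

sr = MultiScaleSR()(torch.rand(1, 3, 48, 48))
print(sr.shape)                                   # torch.Size([1, 3, 96, 96])
```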


Author(s):  
B. UMA SHANKAR ◽  
SAROJ K. MEHER ◽  
ASHISH GHOSH

A neuro-wavelet supervised classifier is proposed for land cover classification of multispectral remote sensing images. Features extracted from the original pixel information using the wavelet transform (WT) are fed as input to a feed-forward multi-layer perceptron (MLP). The WT provides the spatial and spectral features of a pixel along with its neighbors, and these features are used for improved classification. To test the performance of the proposed method, we used two IRS-1A satellite images and one SPOT satellite image. Results are compared with those of classifiers based on the original spectral features and found to be consistently better. A simulation study revealed that the Biorthogonal 3.3 (Bior3.3) wavelet in combination with the MLP outperformed all other wavelets. Results are evaluated visually and quantitatively with two measures: the β index of homogeneity and the Davies–Bouldin (DB) index of compactness and separability of classes. We also suggest a modified β index for assessing the percentage accuracy (PAβ) of the classified images.
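The pipeline can be sketched as below: a single-level bior3.3 decomposition of each band supplies per-pixel subband features that feed an MLP. The wavelet choice follows the abstract; the upsampling scheme, MLP layout, and stand-in data are assumptions.

```python
# Sketch of wavelet-feature extraction feeding an MLP classifier; the
# bior3.3 wavelet follows the paper, everything else is illustrative.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wavelet_features(band: np.ndarray) -> np.ndarray:
    """Single-level bior3.3 decomposition; stack subbands per pixel."""
    cA, (cH, cV, cD) = pywt.dwt2(band, "bior3.3")
    subbands = [np.kron(c, np.ones((2, 2)))[: band.shape[0], : band.shape[1]]
                for c in (cA, cH, cV, cD)]       # upsample back to image size
    return np.stack(subbands, axis=-1)           # (H, W, 4) per input band

image = np.random.rand(64, 64)                    # stand-in single band
feats = wavelet_features(image).reshape(-1, 4)
labels = np.random.randint(0, 3, feats.shape[0])  # stand-in land-cover labels

mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=50)
mlp.fit(feats, labels)
print(mlp.score(feats, labels))
```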

