Cloud Detection of SuperView-1 Remote Sensing Images Based on Genetic Reinforcement Learning

2020 ◽  
Vol 12 (19) ◽  
pp. 3190
Author(s):  
Xiaolong Li ◽  
Hong Zheng ◽  
Chuanzhao Han ◽  
Haibo Wang ◽  
Kaihan Dong ◽  
...  

Cloud pixels greatly reduce the utilization of optical remote sensing images, highlighting the importance of cloud detection. According to the current remote sensing literature, methods such as the threshold method, statistical methods and deep learning (DL) have been applied to cloud detection tasks. Because some cloud areas are translucent, the regions they cover still retain partial ground feature information; this blurs the spectral and spatial characteristics of those regions and makes it difficult for existing methods to detect them accurately. To solve this problem, this study presents a cloud detection method based on genetic reinforcement learning. Firstly, the factors that directly affect the classification of pixels in remote sensing images are analyzed, and the concept of the pixel environmental state (PES) is proposed. Then, PES information and the algorithm’s marking action are integrated into the “PES-action” data set. Subsequently, a “reward–penalty” rule is introduced, and the “PES-action” strategy with the highest cumulative return is learned by a genetic algorithm (GA). Clouds can then be detected accurately through the learned “PES-action” strategy. By virtue of the strong adaptability of reinforcement learning (RL) to the environment and the global optimization ability of the GA, cloud regions are detected accurately. In the experiment, multi-spectral remote sensing images of SuperView-1 were collected to build the data set. The overall accuracy (OA) of the proposed method on the test set reached 97.15%, and satisfactory cloud masks were obtained. Compared with the best disclosed DL method and the random forest (RF) method, the proposed method is superior in precision, recall, false positive rate (FPR) and OA for cloud detection. This study aims to improve the detection of cloud regions, providing a reference for researchers interested in cloud detection in remote sensing images.
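The reward–penalty learning the abstract describes can be sketched as a genetic algorithm over a lookup table that maps each discretized PES to a marking action. Everything below (the number of states, the synthetic pixels, the GA settings) is an illustrative assumption, not the paper's actual formulation:

```python
import numpy as np

# Hypothetical sketch: PES is discretized into a few states; a "strategy" is
# one 0/1 marking action per state; fitness is the cumulative return under a
# +1 reward / -1 penalty rule, maximized by a simple genetic algorithm.
rng = np.random.default_rng(0)

N_STATES = 8          # discretized pixel-environment states (assumption)
POP, GENS = 30, 60    # GA population size and number of generations

def fitness(strategy, states, labels):
    """Cumulative return: +1 reward for a correct mark, -1 penalty otherwise."""
    actions = strategy[states]            # look up the action for each pixel's PES
    return int(np.sum(np.where(actions == labels, 1, -1)))

# Synthetic training pixels: state i is "cloudy" (label 1) when i >= 4.
states = rng.integers(0, N_STATES, size=500)
labels = (states >= 4).astype(int)

# Population of candidate strategies: one binary action per state.
pop = rng.integers(0, 2, size=(POP, N_STATES))
for _ in range(GENS):
    scores = np.array([fitness(s, states, labels) for s in pop])
    elite = pop[np.argsort(scores)[::-1][:POP // 2]]     # selection
    children = elite.copy()
    cut = rng.integers(1, N_STATES, size=len(children))
    mates = elite[rng.permutation(len(elite))]
    for c, k, mate in zip(children, cut, mates):
        c[k:] = mate[k:]                                 # one-point crossover
    flip = rng.random(children.shape) < 0.05             # mutation
    children[flip] ^= 1
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(s, states, labels) for s in pop])]
```

Because the elite half is carried over unchanged each generation, the best strategy found so far is never lost; the learned table is then applied pixel-by-pixel at detection time.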

2021 ◽  
Vol 17 (3) ◽  
pp. 235-247
Author(s):  
Jun Zhang ◽  
Junjun Liu

Remote sensing is an indispensable technique for monitoring earth resources and environmental changes. However, optical remote sensing images often contain large amounts of cloud, especially in tropical rain forest areas, making it difficult to obtain completely cloud-free remote sensing images. Therefore, accurate cloud detection is of great research value for optical remote sensing applications. In this paper, we propose a saliency model-oriented convolutional neural network for cloud detection in remote sensing images. Firstly, we adopt Kernel Principal Component Analysis (KPCA) to pre-train the network in an unsupervised manner. Secondly, small labeled samples are used to fine-tune the network structure, and the remote sensing images are segmented with a super-pixel approach before cloud detection to eliminate irrelevant backgrounds and non-cloud objects. Thirdly, the image blocks are input into the trained convolutional neural network (CNN) for cloud detection, and the segmented image is recovered. Fourthly, we fuse the detection result with the saliency map of the raw image to further improve its accuracy. Experiments show that the proposed method can detect clouds accurately. Compared to other state-of-the-art cloud detection methods, the new method is more robust.
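The final fusion step can be sketched as a weighted combination of the CNN's soft cloud-probability map with a normalized saliency map, followed by thresholding. The weight and threshold below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical sketch of the saliency fusion step: saliency suppresses false
# positives on non-salient background before the final binarization.
def fuse_with_saliency(cnn_prob, saliency, alpha=0.7, thresh=0.5):
    """Weighted fusion of the CNN output and a saliency map, then binarize."""
    saliency = (saliency - saliency.min()) / (np.ptp(saliency) + 1e-8)
    fused = alpha * cnn_prob + (1 - alpha) * saliency
    return (fused >= thresh).astype(np.uint8)

cnn_prob = np.array([[0.9, 0.2], [0.6, 0.1]])   # soft CNN cloud probabilities
saliency = np.array([[1.0, 0.0], [0.8, 0.0]])   # saliency of the raw image
mask = fuse_with_saliency(cnn_prob, saliency)   # 1 = cloud pixel
```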


2021 ◽  
Vol 13 (18) ◽  
pp. 3617
Author(s):  
Xudong Yao ◽  
Qing Guo ◽  
An Li

Clouds in optical remote sensing images cause spectral information to change or be lost, which affects image analysis and application. Therefore, cloud detection is of great significance. However, current methods have shortcomings, such as insufficient extendibility caused by using information from multiple bands or by relying on manually determined thresholds, and limited accuracy, especially for thin clouds or complex scenes, caused by low-level manual features. Considering these shortcomings and the efficiency requirements of practical applications, we propose a light-weight deep learning cloud detection network based on the DeeplabV3+ architecture and a channel attention module (CD-AttDLV3+), using only the most common red–green–blue and near-infrared bands. In the CD-AttDLV3+ architecture, an optimized backbone network, MobileNetV2, is used to reduce the number of parameters and calculations. Atrous spatial pyramid pooling effectively reduces the information loss caused by multiple down-samplings while extracting multi-scale features. CD-AttDLV3+ concatenates more low-level features than DeeplabV3+ to improve the cloud boundary quality. The channel attention module is introduced to strengthen the learning of important channels and improve the training efficiency. Moreover, the loss function is improved to alleviate the imbalance of samples. For the Landsat-8 Biome set, CD-AttDLV3+ achieves the highest accuracy in comparison with other methods, including Fmask, SVM, and SegNet, especially for distinguishing clouds from bright surfaces and detecting light-transmitting thin clouds. It also performs well on other Landsat-8 and Sentinel-2 images. Experimental results indicate that CD-AttDLV3+ is robust, with high accuracy and extendibility.
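A channel attention module of the kind described typically follows the squeeze-and-excitation pattern: global average pooling per channel, a bottleneck of two fully connected layers, and sigmoid weights that rescale each channel. The numpy sketch below illustrates that mechanism with random weights; the sizes and reduction ratio are assumptions, not CD-AttDLV3+'s actual configuration:

```python
import numpy as np

# Minimal sketch of a squeeze-and-excitation style channel attention module
# (illustrative shapes and weights; not the paper's exact implementation).
def channel_attention(x, w1, w2):
    """x: (C, H, W) feature map -> same shape with channels re-weighted."""
    squeeze = x.mean(axis=(1, 2))                 # global average pool -> (C,)
    hidden = np.maximum(0, w1 @ squeeze)          # FC + ReLU, reduce to C // r
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))  # FC + sigmoid -> (C,)
    return x * weights[:, None, None]             # scale each channel

rng = np.random.default_rng(0)
C, r = 8, 2                                       # channels, reduction ratio
x = rng.standard_normal((C, 4, 4))
w1 = rng.standard_normal((C // r, C))
w2 = rng.standard_normal((C, C // r))
out = channel_attention(x, w1, w2)
```

Because the sigmoid weights lie in (0, 1), the module can only attenuate channels, letting training emphasize informative ones.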


Author(s):  
J. Li ◽  
Z. Wu ◽  
Z. Hu ◽  
Y. Zhang ◽  
M. Molinier

Abstract. Clouds in optical remote sensing images seriously affect the visibility of background pixels and greatly reduce the availability of the images, so clouds must be detected before the images are processed. In this paper, a novel cloud detection method based on an attentive generative adversarial network (Auto-GAN) is proposed. Our main idea is to inject visual attention into the domain transformation so that clouds are detected automatically. First, we use a discriminator (D) to distinguish between cloudy and cloud-free images. Then, a segmentation network is used to detect the difference between cloudy and cloud-free images (i.e. the clouds). Last, a generator (G) is used to fill in the detected regions of the cloudy image in order to confuse the discriminator. In the training phase, Auto-GAN only requires images and their image-level labels (1 for a cloud-free image, 0 for a cloudy image), which are far less costly to acquire than the pixel-level labels required by existing CNN-based methods. Auto-GAN is applied to cloud detection in Sentinel-2A Level 1C imagery. The results indicate that the Auto-GAN method performs well in cloud detection over different land surfaces.
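The generator's filling step can be sketched as a soft composite: the segmentation network's mask decides, per pixel, whether the discriminator sees the original image or the generator's fill. The toy arrays below are assumptions purely for illustration:

```python
import numpy as np

# Sketch of the Auto-GAN compositing idea: G's output replaces only the
# regions the segmentation network marks as cloud, and the blended result is
# what the discriminator judges (toy 2x2 single-band "images").
def composite(cloudy, generated, mask):
    """Keep the background where mask ~ 0; use the generated fill where mask ~ 1."""
    return mask * generated + (1.0 - mask) * cloudy

cloudy = np.array([[0.9, 0.3], [0.8, 0.2]])      # bright values stand in for cloud
generated = np.array([[0.4, 0.4], [0.4, 0.4]])   # generator's guess at the ground
mask = np.array([[1.0, 0.0], [1.0, 0.0]])        # segmentation network's output
fake_clear = composite(cloudy, generated, mask)
```

Training pressure from the discriminator then flows back through the mask, which is what lets pixel-level clouds emerge from image-level labels alone.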


2011 ◽  
Vol 271-273 ◽  
pp. 205-210
Author(s):  
Ying Zhao Ma ◽  
Wei Li Jiao ◽  
Wang Wei

Cloud is an important factor affecting the quality of optical remote sensing images. Automatically detecting the cloud cover of an image reduces the transmission of useless data and greatly increases the usefulness of the data that are delivered. This paper presents a method based on Landsat-5 data that automatically marks the location of cloud regions in each image, effectively calculates the cloud cover of each image, and removes useless remote sensing images.
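A classical threshold test of the kind this abstract implies marks pixels that are both bright in the reflective bands and cold in the thermal band, then reports the mask ratio as cloud cover. The band choice and threshold values below are illustrative assumptions, not the paper's:

```python
import numpy as np

# Hypothetical threshold sketch for Landsat-5 style data: cloud pixels are
# bright (high reflectance) and cold (low thermal brightness temperature).
def cloud_cover(reflectance, thermal, refl_thresh=0.35, temp_thresh=290.0):
    """Return a cloud mask and the image's cloud-cover fraction."""
    mask = (reflectance > refl_thresh) & (thermal < temp_thresh)
    return mask, float(mask.mean())

refl = np.array([[0.6, 0.1], [0.5, 0.2]])        # visible-band reflectance
temp = np.array([[275.0, 295.0], [280.0, 300.0]])  # thermal band, kelvin
mask, cover = cloud_cover(refl, temp)
```

Images whose cover fraction exceeds some operational limit could then be dropped before transmission, which is the data-reduction use case the abstract describes.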


2021 ◽  
Vol 13 (15) ◽  
pp. 2910
Author(s):  
Xiaolong Li ◽  
Hong Zheng ◽  
Chuanzhao Han ◽  
Wentao Zheng ◽  
Hao Chen ◽  
...  

Clouds constitute a major obstacle to the application of optical remote-sensing images, as they destroy the continuity of the ground information in the images and reduce their utilization rate. Therefore, cloud detection has become an important preprocessing step for optical remote-sensing image applications. Because the cloud features used in current cloud-detection methods are mostly manually designed and the information in remote-sensing images is complex, the accuracy and generalization of current cloud-detection methods are unsatisfactory. As cloud detection aims to extract cloud regions from the background, it can be regarded as a semantic segmentation problem. A cloud-detection method based on deep convolutional neural networks (DCNN)—that is, a spatial folding–unfolding remote-sensing network (SFRS-Net)—is introduced in this paper, and the reason for the inaccuracy of DCNNs during cloud region segmentation and the concept of space folding/unfolding are presented. The backbone network of the proposed method adopts an encoder–decoder structure, in which the pooling operation in the encoder is replaced by a folding operation, and the upsampling operation in the decoder is replaced by an unfolding operation. As a result, the accuracy of cloud detection is improved, while generalization is guaranteed. In the experiment, multispectral data of the GaoFen-1 (GF-1) satellite were collected to form a dataset, and the overall accuracy (OA) of this method reaches 96.98%, which is a satisfactory result. This study aims to develop a method that is suitable for cloud detection and can complement other cloud-detection methods, providing a reference for researchers interested in cloud detection of remote-sensing images.
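The key contrast with pooling is that folding is lossless: instead of discarding three of every four values as 2×2 max pooling does, a folding operation can rearrange each 2×2 spatial block into four channels, so that unfolding in the decoder restores the exact pixels. The implementation below is an assumption based on that description (akin to pixel unshuffle/shuffle), not the paper's code:

```python
import numpy as np

# Sketch of space folding/unfolding as a lossless space-to-depth rearrangement.
def fold(x):
    """(C, H, W) -> (4C, H/2, W/2): each 2x2 block becomes 4 channels."""
    c, h, w = x.shape
    x = x.reshape(c, h // 2, 2, w // 2, 2)
    return x.transpose(0, 2, 4, 1, 3).reshape(4 * c, h // 2, w // 2)

def unfold(x):
    """(4C, H/2, W/2) -> (C, H, W): exact inverse of fold."""
    c4, h2, w2 = x.shape
    c = c4 // 4
    x = x.reshape(c, 2, 2, h2, w2)
    return x.transpose(0, 3, 1, 4, 2).reshape(c, 2 * h2, 2 * w2)

x = np.arange(16.0).reshape(1, 4, 4)
assert np.array_equal(unfold(fold(x)), x)   # folding loses no information
```

Halving the spatial resolution while quadrupling the channels keeps the receptive-field growth of pooling without its information loss, which matches the accuracy argument made above.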


2020 ◽  
Vol 12 (24) ◽  
pp. 4162
Author(s):  
Anna Hu ◽  
Zhong Xie ◽  
Yongyang Xu ◽  
Mingyu Xie ◽  
Liang Wu ◽  
...  

One major limitation of remote-sensing images is bad weather conditions, such as haze. Haze significantly reduces the accuracy of satellite image interpretation. To solve this problem, this paper proposes a novel unsupervised method to remove haze from high-resolution optical remote-sensing images. The proposed method, based on cycle generative adversarial networks, is called the edge-sharpening cycle-consistent adversarial network (ES-CCGAN). Most importantly, unlike existing methods, this approach does not require prior information; training requires no paired data, which greatly eases the preparation of the training data set. To enhance the ability to extract ground-object information, the generative network replaces a residual neural network (ResNet) with a dense convolutional network (DenseNet). The edge-sharpening loss function of the deep-learning model is designed to recover clear ground-object edges and obtain more detailed information from hazy images. In the high-frequency information extraction model, this study re-trained the Visual Geometry Group (VGG) network using remote-sensing images. Experimental results reveal that the proposed method can successfully recover different kinds of scenes from hazy images and obtain excellent color consistency. Moreover, the ability of the proposed method to obtain clear edges and rich texture feature information makes it superior to existing methods.
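An edge-sharpening term of the kind described penalizes differences between the image gradients of the dehazed output and a reference, so that ground-object edges stay crisp. Simple finite differences stand in for a gradient operator below; this is a rough sketch, not the exact loss used in ES-CCGAN:

```python
import numpy as np

# Illustrative edge loss: L1 distance between horizontal and vertical image
# gradients of a prediction and a reference (single-band toy images).
def edge_loss(pred, ref):
    gx = lambda im: im[:, 1:] - im[:, :-1]   # horizontal gradient
    gy = lambda im: im[1:, :] - im[:-1, :]   # vertical gradient
    return float(np.abs(gx(pred) - gx(ref)).mean()
                 + np.abs(gy(pred) - gy(ref)).mean())

ref = np.array([[0.0, 1.0], [0.0, 1.0]])     # sharp vertical edge
blurry = np.array([[0.4, 0.6], [0.4, 0.6]])  # the same edge, softened by haze
sharp = ref.copy()
```

A softened edge yields a positive loss while a faithful reconstruction yields zero, which is the training pressure that keeps edges sharp during dehazing.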


2020 ◽  
Vol 38 (4A) ◽  
pp. 510-514
Author(s):  
Tay H. Shihab ◽  
Amjed N. Al-Hameedawi ◽  
Ammar M. Hamza

In this paper, to make use of the complementary potential of optical data in LULC mapping, spatial data were acquired from Landsat 8 OLI sensor images taken in 2019. The images were rectified, enhanced and then classified using the random forest (RF) and artificial neural network (ANN) methods. The optical remote sensing images were used to obtain information on the status of the LULC classification and to extract details; the required image processing, including geometric correction and image enhancement, was carried out before classification. The classification of the satellite images was used to extract features and to analyse the LULC of the study area. The results showed that the artificial neural network method outperforms the random forest method: for the training data set, the overall accuracy of the ANN method was 0.91 and the kappa accuracy 0.89, while the overall accuracy and kappa accuracy for the test data set were 0.89 and 0.87, respectively.
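The overall and kappa accuracies reported above are standard functions of a confusion matrix, and can be computed as sketched below (the matrix here is synthetic, not the paper's data):

```python
import numpy as np

# Overall accuracy is the observed agreement; Cohen's kappa corrects it for
# the agreement expected by chance from the row and column marginals.
def overall_and_kappa(cm):
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    po = np.trace(cm) / total                        # observed agreement (OA)
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2  # chance agreement
    return po, (po - pe) / (1.0 - pe)                # OA and kappa

cm = [[45, 5], [5, 45]]   # synthetic two-class confusion matrix
oa, kappa = overall_and_kappa(cm)
```

Kappa is always at most the overall accuracy, which matches the pattern in the reported figures (0.91 vs 0.89, and 0.89 vs 0.87).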


2021 ◽  
Vol 13 (3) ◽  
pp. 441
Author(s):  
Han Fu ◽  
Bihong Fu ◽  
Pilong Shi

The South China Karst, a United Nations Educational, Scientific and Cultural Organization (UNESCO) natural heritage site, is one of the world’s most spectacular examples of humid tropical to subtropical karst landscapes. The Libo cone karst in southern Guizhou Province is considered the world reference site for these types of karst, forming a distinctive and beautiful landscape. Geomorphic information and the spatial distribution of cone karst are essential for the conservation and management of the Libo heritage site. In this study, a deep learning (DL) method based on the DeepLab V3+ network was proposed to document the cone karst landscape in Libo from multi-source data, including optical remote sensing images and digital elevation model (DEM) data. The training samples were generated using Landsat remote sensing images and their combination with satellite-derived DEM data. Each group of training data contains 898 samples. The input module of the DeepLab V3+ network was improved to accept four-channel input data, i.e., a combination of Landsat RGB images and DEM data. Our results suggest that the mean intersection over union (MIoU) obtained when using the four-channel data as training samples with the new DL-based pixel-level image segmentation approach is the highest, reaching 95.5%. The proposed method can accomplish automatic extraction of the cone karst landscape through the self-learning of a deep neural network, and it can therefore also provide a powerful and automatic tool for documenting other types of geological landscapes worldwide.
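The two mechanical pieces of this pipeline—stacking RGB imagery with DEM data into a four-channel input, and scoring a segmentation with mean IoU—can be sketched as follows. All arrays are toy data and the channel ordering is an assumption:

```python
import numpy as np

# Four-channel input: append the DEM as a fourth band behind R, G, B.
def stack_rgb_dem(rgb, dem):
    """rgb: (H, W, 3), dem: (H, W) -> (H, W, 4) network input."""
    return np.concatenate([rgb, dem[..., None]], axis=-1)

# Mean intersection over union across the classes present in the union.
def mean_iou(pred, target, n_classes):
    ious = []
    for c in range(n_classes):
        inter = np.sum((pred == c) & (target == c))
        union = np.sum((pred == c) | (target == c))
        if union:
            ious.append(inter / union)
    return float(np.mean(ious))

x = stack_rgb_dem(np.zeros((2, 2, 3)), np.ones((2, 2)))
pred = np.array([[0, 1], [0, 1]])     # toy segmentation output
target = np.array([[0, 1], [1, 1]])   # toy ground truth
```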

