Utilizing Multilevel Features for Cloud Detection on Satellite Imagery

2018 ◽  
Vol 10 (11) ◽  
pp. 1853 ◽  
Author(s):  
Xi Wu ◽  
Zhenwei Shi

Cloud detection, defined here as pixel-wise binary classification, is significant in satellite imagery processing. In the current remote sensing literature, cloud detection methods rely on relationships between imagery bands or on simple image feature analysis. These methods, which focus only on low-level features, are not robust enough on images with difficult land covers, because clouds share image features such as color and texture with those land covers. To solve this problem, we propose a novel deep learning method for cloud detection on satellite imagery that utilizes multilevel image features in two major steps. The first step obtains a cloud probability map from a designed deep convolutional neural network that concatenates features from low level to high level. The second step produces refined cloud masks through a composite image filtering technique, in which the filter captures multilevel features of cloud structures and their surroundings in the input imagery. In the experiments, the proposed method achieves 85.38% intersection over union for cloud on a testing set of 100 Gaofen-1 wide-field-of-view images and produces satisfactory visual cloud masks, especially for hard images. The experimental results show that utilizing multilevel features, by combining the feature-concatenation network with the particular filter, tackles the cloud detection problem and yields improved cloud masks.
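The network and filter details are not given in the abstract, but the headline number is intersection over union (IoU) of the predicted cloud mask against ground truth. As a minimal, paper-agnostic sketch (function and variable names are illustrative, not from the paper):

```python
# Intersection over union (IoU) of two binary cloud masks,
# the metric reported in the abstract. Masks are flat lists of 0/1.
def iou(pred, truth):
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union if union else 1.0  # both empty: perfect match

pred  = [1, 1, 0, 0, 1, 0]   # predicted cloud pixels
truth = [1, 0, 0, 1, 1, 0]   # reference cloud pixels
print(iou(pred, truth))  # 2 intersecting / 4 in union -> 0.5
```

An IoU of 85.38%, as reported, means the predicted and reference cloud regions overlap in about 85% of their combined area.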

Author(s):  
Yifei Kang ◽  
Li Pan ◽  
Qi Chen ◽  
Tong Zhang ◽  
Shasha Zhang ◽  
...  

With the rapid development of high-resolution remote sensing for earth observation, satellite imagery is widely used in resource investigation, environmental protection, and agricultural research. Image mosaicking is an important part of satellite imagery production. However, clouds cause two main problems for automatic image mosaicking: 1) image blurring may be introduced during image dodging, and 2) automatically generated seamlines may pass through cloudy areas. To address these problems, an automatic mosaicking method for cloudy satellite imagery is proposed in this paper. Firstly, modified Otsu thresholding and morphological processing are employed to extract cloudy areas and obtain the percentage of cloud cover. Then, the cloud detection results are used to optimize the dodging and mosaicking processes, so that the mosaic image is composed of more clear-sky areas instead of cloudy areas, and the clear-sky areas remain sharp and undistorted. Chinese GF-1 wide-field-of-view orthoimages are employed as experimental data. The proposed approach is evaluated in four aspects: the effect of cloud detection, the sharpness of clear-sky areas, the rationality of seamlines, and efficiency. The evaluation results demonstrate that the mosaic image obtained by our method has fewer clouds, better internal color consistency, and better visual clarity than that obtained by the traditional method. The time consumed by the proposed method for 17 scenes of GF-1 orthoimages is within 4 hours on a desktop computer, an efficiency that meets general production requirements for massive satellite imagery.
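The paper's specific modification to Otsu thresholding is not described in the abstract. The standard Otsu method it builds on picks the gray level that maximizes between-class variance of the histogram, which is what separates bright cloud pixels from darker ground. A minimal pure-Python sketch (names illustrative):

```python
# Standard Otsu thresholding: choose the threshold that maximizes
# between-class variance over an 8-bit grayscale histogram.
def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(levels):
        w0 += hist[t]               # pixel count at or below t
        if w0 == 0:
            continue
        w1 = total - w0             # pixel count above t
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0             # mean of the dark class
        mu1 = (sum_all - sum0) / w1 # mean of the bright class
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy image: dark ground pixels vs. bright cloud pixels.
pixels = [10] * 50 + [20] * 50 + [200] * 30 + [220] * 30
t = otsu_threshold(pixels)
print(t)  # lands between the two modes: 20 <= t < 200
```

Pixels above the threshold would then be treated as cloud candidates and cleaned up by the morphological processing the abstract mentions.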


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Tianming Song ◽  
Xiaoyang Yu ◽  
Shuang Yu ◽  
Zhe Ren ◽  
Yawei Qu

Medical image technology is becoming increasingly important in the medical field. It not only provides important information about the internal organs of the body for clinical analysis and medical treatment but also assists doctors in diagnosing and treating various diseases. However, medical image feature extraction suffers from problems such as inconspicuous extracted features and a low feature preparation rate. Drawing on the learning ideas of convolutional neural networks, the multifeature vectors of an image are quantized at a deeper level, making the image features more abstract; this not only compensates for the one-sidedness of single-feature description but also improves the robustness of the feature descriptors. This paper presents a medical image processing method based on multifeature fusion that extracts features effectively from medical images of the chest, lung, brain, and liver and better expresses the feature relationships of medical images. Experimental results show that the accuracy of the proposed method is more than 5% higher than that of the compared methods, demonstrating its better performance.
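The abstract does not specify how the multifeature vectors are fused. One common baseline, shown here purely as an assumed illustration (not the paper's method), is to L2-normalize each descriptor before concatenation so that no single feature dominates the fused vector:

```python
# Hedged sketch of multifeature fusion by normalized concatenation.
def l2_normalize(v):
    norm = sum(x * x for x in v) ** 0.5
    return [x / norm for x in v] if norm else v

def fuse(*features):
    fused = []
    for f in features:
        fused.extend(l2_normalize(f))  # each descriptor scaled to unit length
    return fused

color   = [3.0, 4.0]       # e.g. a 2-bin color histogram (toy values)
texture = [0.0, 5.0, 0.0]  # e.g. a 3-bin texture response (toy values)
print(fuse(color, texture))  # [0.6, 0.8, 0.0, 1.0, 0.0]
```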


Author(s):  
T. Kemper ◽  
N. Mudau ◽  
P. Mangara ◽  
M. Pesaresi

Urban areas in sub-Saharan Africa are growing at an unprecedented pace. Much of this growth is taking place in informal settlements. In South Africa, more than 10% of the population live in urban informal settlements. South Africa has established the National Informal Settlement Development Programme (NUSP) to respond to these challenges. This programme is designed to support the National Department of Human Settlement (NDHS) in its implementation of the Upgrading Informal Settlements Programme (UISP), with the objective of eventually upgrading all informal settlements in the country. Currently, the NDHS does not have access to an up-to-date national dataset, captured at the same scale from common source data, that can be used to understand the status of informal settlements in the country.

This pilot study is developing a fully automated workflow for the wall-to-wall processing of SPOT-5 satellite imagery of South Africa. The workflow includes automatic image information extraction based on multiscale textural and morphological image features. Advanced image feature compression and optimization, together with innovative learning and classification techniques, allow the SPOT-5 images to be processed using the Landsat-based National Land Cover (NLC) of South Africa from the year 2000 as a low-resolution thematic reference layer. The workflow was tested on 42 SPOT scenes selected by stratified sampling. The derived building information was validated against a visually interpreted building point dataset and produced an accuracy of 97 per cent. Given this positive result, it is planned to process the most recent wall-to-wall coverage, as well as the archived imagery available since 2007, in the near future.
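The morphological features in the workflow are not detailed in the abstract. As a hypothetical illustration of one elementary building block of morphological image analysis, binary erosion with a 3x3 structuring element shrinks a region to pixels whose whole neighborhood is foreground:

```python
# Binary erosion with a 3x3 structuring element on a small mask
# (borders are left as background for simplicity).
def erode(mask):
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = int(all(mask[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
print(erode(mask)[2])  # only the centre pixel survives: [0, 0, 1, 0, 0]
```

Differences between an image and its eroded or dilated versions at several element sizes yield the kind of multiscale morphological features the workflow refers to.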


2021 ◽  
Vol 303 ◽  
pp. 01058
Author(s):  
Meng-Di Deng ◽  
Rui-Sheng Jia ◽  
Hong-Mei Sun ◽  
Xing-Li Zhang

The resolution of seismic section images directly affects the subsequent interpretation of seismic data. To improve the spatial resolution of low-resolution seismic section images, a super-resolution reconstruction method based on multi-scale convolution is proposed. The method designs a multi-scale convolutional neural network to learn pairs of high- and low-resolution image features, realizing a learned mapping from low-resolution to high-resolution seismic section images. The network consists of four convolutional layers and a sub-pixel convolutional layer: the convolution operations learn rich seismic section image features, and the sub-pixel convolution layer reconstructs the high-resolution image. The experimental results show that the proposed method outperforms the compared methods in peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). Its total training and reconstruction time is about 22% less than that of FSRCNN and about 18% less than that of ESPCN.
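A sub-pixel convolution layer, as in ESPCN-style super-resolution, ends with a pixel-shuffle step: it rearranges r*r feature channels of an HxW map into one (r*H)x(r*W) map. A pure-Python sketch of just that rearrangement (lists instead of tensors; names illustrative):

```python
# Pixel shuffle: interleave r*r channels into an upscaled single map.
def pixel_shuffle(channels, r):
    # channels: list of r*r maps, each H x W
    h, w = len(channels[0]), len(channels[0][0])
    out = [[0] * (w * r) for _ in range(h * r)]
    for c, ch in enumerate(channels):
        dy, dx = c // r, c % r          # sub-pixel offset for this channel
        for y in range(h):
            for x in range(w):
                out[y * r + dy][x * r + dx] = ch[y][x]
    return out

# Four 1x1 channels upscale a single pixel into a 2x2 patch.
chans = [[[1]], [[2]], [[3]], [[4]]]
print(pixel_shuffle(chans, 2))  # [[1, 2], [3, 4]]
```

Doing the upscaling only at the last layer, as this design does, keeps all convolutions in low-resolution space, which is the main reason for the reduced training and reconstruction time.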


2021 ◽  
Author(s):  
Rohit Raja ◽  
Sandeep Kumar ◽  
Shilpa Choudhary ◽  
Hemlata Dalmia

Abstract The number of images on digital platforms and in digital image databases is increasing rapidly day by day. Users require image retrieval, and searching such enormous databases effectively is a challenging task. Content-based image retrieval (CBIR) algorithms mainly consider visual image features such as color, texture, and shape. Non-visual features also play a significant role in image retrieval, particularly for security concerns, and the selection of image features is an essential issue in CBIR. According to current CBIR studies, performance remains one of the main challenges in image retrieval. To close this gap, a new CBIR method is proposed using histogram of oriented gradients (HOG), dominant color descriptor (DCD), and hue moment (HM) features. This work makes in-depth use of color, shape, and texture features for CBIR: HOG is used to extract texture features, while DCD on the RGB and HSV color spaces is used to improve efficiency and computation. A neural network (NN) is used to extract image features, which improves computation on the Corel dataset. Experimental results on the standard Corel-1k and Corel-5k benchmark datasets illustrate that the proposed CBIR method is efficient compared with other state-of-the-art image retrieval methods. Intensive analysis shows that the proposed work achieves better precision, recall, and accuracy.
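Whatever descriptors a CBIR system extracts, retrieval itself reduces to ranking database images by descriptor distance to the query. A minimal sketch with toy color-histogram descriptors (file names and values are illustrative, not from the paper):

```python
# Rank database images by L1 histogram distance to the query descriptor.
def l1_distance(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def retrieve(query, database, top_k=2):
    ranked = sorted(database, key=lambda item: l1_distance(query, item[1]))
    return [name for name, _ in ranked[:top_k]]

database = [
    ("beach.jpg",  [0.8, 0.1, 0.1]),
    ("forest.jpg", [0.1, 0.8, 0.1]),
    ("sunset.jpg", [0.7, 0.2, 0.1]),
]
query = [0.78, 0.12, 0.1]
print(retrieve(query, database))  # ['beach.jpg', 'sunset.jpg']
```

Precision and recall, the metrics the abstract cites, are then computed over how many of the top-k returned images share the query's class.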


Author(s):  
Guobing Yan ◽  
◽  
Qiang Sun ◽  
Jianying Huang ◽  
Yonghong Chen

Image recognition is one of the key technologies for workers' helmet detection using an unmanned aerial vehicle (UAV). By analyzing image feature extraction methods for workers' helmet detection based on convolutional neural networks (CNNs), a double-channel convolutional neural network (DCNN) model is proposed to improve on traditional image processing methods. On the basis of the AlexNet model, the image features of the worker are extracted by two independent CNNs, so that the essential image features are better reflected given the abstraction degree of the features. Combining this with a traditional machine learning method, random forest (RF), an intelligent recognition algorithm based on DCNN and RF is proposed for workers' helmet detection. The experimental results show that deep learning (DL) is closely related to traditional machine learning methods, and that adding a DL module to a traditional machine learning framework can improve recognition accuracy.
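The hand-off from DCNN to RF is not specified beyond "combining". As an assumed illustration only: features from the two CNN channels are concatenated, and an ensemble of simple threshold learners (standing in for the random forest) votes on the label. All names and values below are hypothetical:

```python
# Concatenate two feature channels, then majority-vote an ensemble of
# decision stumps as a stand-in for the random-forest stage.
def majority_vote(predictions):
    return max(set(predictions), key=predictions.count)

def ensemble_predict(features, stumps):
    # Each stump: (feature_index, threshold, label_if_above, label_if_below)
    votes = [above if features[i] > t else below
             for i, t, above, below in stumps]
    return majority_vote(votes)

channel_a = [0.9, 0.2]  # toy features from the first CNN branch
channel_b = [0.7]       # toy features from the second CNN branch
features = channel_a + channel_b

stumps = [
    (0, 0.5, "helmet", "no_helmet"),
    (1, 0.5, "helmet", "no_helmet"),
    (2, 0.5, "helmet", "no_helmet"),
]
print(ensemble_predict(features, stumps))  # 'helmet' (2 of 3 votes)
```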


2019 ◽  
Vol 2019 ◽  
pp. 1-17 ◽  
Author(s):  
Mingyong Li ◽  
Ziye An ◽  
Qinmin Wei ◽  
Kaiyue Xiang ◽  
Yan Ma

In recent years, with the explosion of multimedia data from search engines, social media, and e-commerce platforms, there is an urgent need for fast retrieval methods for massive big data. Hashing is widely used in large-scale and high-dimensional data search because of its low storage cost and fast query speed. Thanks to the great success of deep learning in many fields, deep learning has been introduced into hashing retrieval, using a deep neural network to learn image features and hash codes simultaneously; compared with traditional hashing methods, it performs better. However, existing deep hashing methods have limitations; for example, most consider only one kind of supervised loss, which leads to insufficient utilization of the supervised information. To address this issue, we propose JLTDH, a triplet deep hashing method with a joint supervised loss based on a convolutional neural network. JLTDH combines triplet likelihood loss and linear classification loss and adopts the triplet supervised label, which contains richer supervised information than pointwise and pairwise labels. At the same time, to overcome the cubic growth in the number of triplets and make triplet training more effective, we adopt a novel triplet selection method. The whole process is divided into two stages. In the first stage, the triplets generated by the triplet selection method are fed into three CNNs with shared weights for image feature learning, and the last layer of the network outputs a preliminary hash code. In the second stage, relying on the hash code of the first stage and the joint loss function, the neural network model is further optimized so that the generated hash code has higher query precision. We perform extensive experiments on three public benchmark datasets: CIFAR-10, NUS-WIDE, and MS-COCO. Experimental results demonstrate that the proposed method outperforms the compared methods and is superior to all previous deep hashing methods based on the triplet label.
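The paper's triplet likelihood loss is not reproduced in the abstract. The classical triplet margin loss conveys the same intuition behind any triplet supervision: pull the anchor toward the positive and push it away from the negative. A toy sketch on plain vectors (the margin value is illustrative):

```python
# Triplet margin loss on toy embedding vectors.
def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Zero loss once the negative is at least `margin` farther than
    # the positive (in squared distance).
    return max(0.0, sq_dist(anchor, positive) - sq_dist(anchor, negative) + margin)

anchor, positive = [0.0, 0.0], [0.1, 0.0]
print(triplet_loss(anchor, positive, [2.0, 0.0]))  # 0.0: negative already far
print(triplet_loss(anchor, positive, [0.2, 0.0]))  # positive loss: ~0.97
```

The cubic growth the abstract mentions comes from the number of possible (anchor, positive, negative) combinations, which is why a triplet selection strategy is needed.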


2020 ◽  
Vol 12 (21) ◽  
pp. 3547 ◽  
Author(s):  
Yuanyuan Ren ◽  
Xianfeng Zhang ◽  
Yongjian Ma ◽  
Qiyuan Yang ◽  
Chuanjian Wang ◽  
...  

Remote sensing image segmentation with sample imbalance is one of the most important issues in the field. Typically, a high-resolution remote sensing image has high spatial resolution but low spectral resolution, complex large-scale land covers, small inter-class differences for some land covers, vague foreground, and an imbalanced distribution of samples. Traditional machine learning algorithms, however, are limited in deep image feature extraction and in dealing with the sample imbalance issue. In this paper, we propose an improved fully convolutional neural network based on DeepLab V3+, with a loss-function-based solution to sample imbalance. We select Sentinel-2 remote sensing images covering Yuli County, Bayingolin Mongol Autonomous Prefecture, Xinjiang Uygur Autonomous Region, China as data sources, and build a typical-region image dataset by data augmentation. The experimental results show that the improved DeepLab V3+ model can not only utilize the spectral information of high-resolution remote sensing images but also exploit their rich spatial information. The classification accuracy of the proposed method on the test dataset reaches 97.97%, the mean intersection-over-union reaches 87.74%, and the Kappa coefficient is 0.9587. This work provides methodological guidance for sample imbalance correction, and the established data resource can serve as a reference for future studies.
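The exact loss used in the paper is not stated in the abstract. One standard loss-function remedy for sample imbalance, shown here only to illustrate the general idea, weights each class's cross-entropy term inversely to its frequency so rare land-cover classes contribute more to the gradient:

```python
# Inverse-frequency class weights for a weighted cross-entropy loss.
from math import log

def class_weights(counts):
    total = sum(counts.values())
    return {c: total / (len(counts) * n) for c, n in counts.items()}

def weighted_nll(prob_of_true_class, true_class, weights):
    # Negative log-likelihood of the true class, scaled by its weight.
    return -weights[true_class] * log(prob_of_true_class)

counts = {"water": 900, "cotton": 100}  # imbalanced toy class counts
w = class_weights(counts)
print(round(w["cotton"], 2))  # rare class weighted up: 5.0
print(round(w["water"], 2))   # common class weighted down: 0.56
```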


Cancers ◽  
2019 ◽  
Vol 11 (12) ◽  
pp. 1901 ◽  
Author(s):  
Hongdou Yao ◽  
Xuejie Zhang ◽  
Xiaobing Zhou ◽  
Shengyan Liu

In this paper, we present a new deep learning model to classify hematoxylin–eosin-stained breast biopsy images into four classes (normal tissue, benign lesion, in situ carcinoma, and invasive carcinoma). Our model uses a parallel structure consisting of a convolutional neural network (CNN) and a recurrent neural network (RNN) for image feature extraction, which differs greatly from the common serial approach of extracting image features with a CNN and then feeding them into an RNN. We then introduce a special perceptron attention mechanism, derived from the natural language processing (NLP) field, to unify the features extracted by the two different network structures. In the convolutional layers, standard batch normalization is replaced by the newer switchable normalization method, and a recent regularization technique, targeted dropout, replaces standard dropout in the last three fully connected layers. In the testing phase, we use model fusion and test-time augmentation on three different datasets of hematoxylin–eosin-stained breast biopsy images. The results demonstrate that our model significantly outperforms state-of-the-art methods.
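The perceptron attention mechanism itself is not specified in the abstract. As an assumed illustration of the general attention-fusion idea: a softmax over per-branch scores weights the CNN and RNN feature vectors before combining them. Scores and vectors below are toy values, not from the paper:

```python
# Softmax-weighted fusion of two feature branches.
from math import exp

def softmax(scores):
    m = max(scores)                      # subtract max for stability
    exps = [exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse(branches, scores):
    weights = softmax(scores)
    dim = len(branches[0])
    return [sum(w * b[i] for w, b in zip(weights, branches))
            for i in range(dim)]

cnn_feat = [1.0, 0.0]   # toy feature vector from the CNN branch
rnn_feat = [0.0, 1.0]   # toy feature vector from the RNN branch
print(fuse([cnn_feat, rnn_feat], scores=[0.0, 0.0]))
# equal scores -> simple average: [0.5, 0.5]
```

In a trained model, the scores would be produced by a small learned layer, letting the network decide per image how much to trust each branch.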

