Mapping Forested Wetland Inundation in the Delmarva Peninsula, USA Using Deep Convolutional Neural Networks

2020 ◽  
Vol 12 (4) ◽  
pp. 644 ◽  
Author(s):  
Ling Du ◽  
Gregory W. McCarty ◽  
Xin Zhang ◽  
Megan W. Lang ◽  
Melanie K. Vanderhoof ◽  
...  

The Delmarva Peninsula in the eastern United States is partially characterized by thousands of small, forested, depressional wetlands that are highly sensitive to weather variability and climate change, but provide critical ecosystem services. Due to the relatively small size of these depressional wetlands and their occurrence under forest canopy cover, it is very challenging to map their inundation status based on existing remote sensing data and traditional classification approaches. In this study, we applied a state-of-the-art U-Net semantic segmentation network to map forested wetland inundation in the Delmarva area by integrating leaf-off WorldView-3 (WV3) multispectral data with fine spatial resolution light detection and ranging (lidar) intensity and topographic data, including a digital elevation model (DEM) and topographic wetness index (TWI). Wetland inundation labels generated from lidar intensity were used for model training and validation. The wetland inundation map results were also validated using field data, and compared to the U.S. Fish and Wildlife Service National Wetlands Inventory (NWI) geospatial dataset and a random forest output from a previous study. Our results demonstrate that our deep learning model can accurately determine inundation status with an overall accuracy of 95% (Kappa = 0.90) compared to field data and high overlap (IoU = 70%) with lidar intensity-derived inundation labels. The integration of topographic metrics in deep learning models can improve the classification accuracy for depressional wetlands. This study highlights the great potential of deep learning models to improve the accuracy of wetland inundation maps through use of high-resolution optical and lidar remote sensing datasets.
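The abstract reports three agreement metrics against reference data: overall accuracy (95%), Cohen's Kappa (0.90), and IoU (70%). As a minimal sketch of how such metrics are derived from a predicted binary inundation mask and reference labels (the function name and interface are illustrative, not from the paper):

```python
import numpy as np

def inundation_metrics(pred: np.ndarray, ref: np.ndarray):
    """Overall accuracy, IoU, and Cohen's kappa for a binary inundation map."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    tp = np.sum(pred & ref)        # inundated, correctly mapped
    tn = np.sum(~pred & ~ref)      # dry, correctly mapped
    fp = np.sum(pred & ~ref)
    fn = np.sum(~pred & ref)
    n = tp + tn + fp + fn
    oa = (tp + tn) / n                          # overall accuracy
    iou = tp / (tp + fp + fn)                   # intersection over union, inundated class
    pe = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n**2  # chance agreement
    kappa = (oa - pe) / (1 - pe)                # Cohen's kappa
    return oa, iou, kappa
```

In practice `pred` would be the U-Net output raster and `ref` the lidar-intensity-derived labels or rasterized field observations.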

2021 ◽  
Vol 13 (13) ◽  
pp. 2524
Author(s):  
Ziyi Chen ◽  
Dilong Li ◽  
Wentao Fan ◽  
Haiyan Guan ◽  
Cheng Wang ◽  
...  

Deep learning models have brought great breakthroughs in building extraction from high-resolution optical remote-sensing images. In recent research, the self-attention module has attracted wide interest across many fields, including building extraction. However, most current deep learning models that incorporate a self-attention module still overlook the effectiveness of reconstruction bias. By tipping the balance between encoding and decoding abilities, i.e., making the decoding network considerably more complex than the encoding network, the semantic segmentation ability can be reinforced. To remedy the lack of research combining self-attention and reconstruction-bias modules for building extraction, this paper presents a U-Net architecture that combines both. In the encoding part, a self-attention module is added to learn attention weights over the inputs, so that the network pays more attention to positions where salient regions may occur. In the decoding part, multiple large convolutional up-sampling operations are used to increase the reconstruction ability. We test our model on two openly available datasets: the WHU and Massachusetts Building datasets, achieving IoU scores of 89.39% and 73.49%, respectively. Compared with several recent well-known semantic segmentation methods and representative building extraction methods, our method produces satisfactory results.
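The self-attention module described above learns, for each position, a weighting over all other positions. A minimal numpy sketch of the standard scaled dot-product formulation (the paper's exact variant may differ; projection matrices here stand in for learned weights):

```python
import numpy as np

def self_attention(x: np.ndarray, wq: np.ndarray, wk: np.ndarray, wv: np.ndarray):
    """Scaled dot-product self-attention over a sequence of feature vectors.

    x: (n, d) input features; wq/wk/wv: (d, d) learned projections.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])            # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ v                                 # attention-weighted values
```

Each output row is a convex combination of the value vectors, which is what lets the network emphasize positions likely to contain buildings.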


2020 ◽  
Vol 12 (9) ◽  
pp. 1519 ◽  
Author(s):  
Sujit Madhab Ghosh ◽  
Mukunda Dev Behera ◽  
Somnath Paramanik

Canopy height serves as a good indicator of forest carbon content. Remote sensing-based direct estimations of canopy height are usually based on Light Detection and Ranging (LiDAR) or Synthetic Aperture Radar (SAR) interferometric data. LiDAR data is scarcely available for the Indian tropics, while interferometric SAR data from commercial satellites are costly. High temporal decorrelation makes freely available Sentinel-1 interferometric data mostly unsuitable for tropical forests. Alternatively, other remote sensing and biophysical parameters have shown good correlation with forest canopy height. The study objective was to establish and validate a methodology by which forest canopy height can be estimated from SAR and optical remote sensing data using machine learning models, i.e., Random Forest (RF) and Symbolic Regression (SR). Here, we analysed the potential of Sentinel-1 interferometric coherence and Sentinel-2 biophysical parameters to propose a new method for estimating canopy height in the study site of the Bhitarkanika wildlife sanctuary, which has mangrove forests. The results showed that interferometric coherence and biophysical variables (Leaf Area Index (LAI) and Fraction of Vegetation Cover (FVC)) have reasonable correlation with canopy height. The RF model showed a Root Mean Squared Error (RMSE) of 1.57 m and an R2 value of 0.60 between observed and predicted canopy heights, whereas the SR model, through genetic programming, demonstrated better RMSE and R2 values of 1.48 m and 0.62, respectively. The SR approach also produced an interpretable model, which is not possible with most other machine learning algorithms. The FVC was found to be an essential variable for predicting forest canopy height. The canopy height maps correlated with ICESat-2 estimated canopy height, albeit modestly. The study demonstrated the effectiveness of Sentinel series data and the machine learning models in predicting canopy height. Therefore, in the absence of costly or scarce data sources, the methodology demonstrated here offers a plausible alternative for forest canopy height estimation.
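The two evaluation statistics reported for the RF and SR models (RMSE in metres and the unitless R2) can be computed from paired observed/predicted heights as follows; this is a generic sketch, not code from the study:

```python
import numpy as np

def rmse_r2(observed, predicted):
    """RMSE and coefficient of determination between observed and predicted canopy heights."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    resid = observed - predicted
    rmse = np.sqrt(np.mean(resid ** 2))              # same units as the heights (m)
    ss_res = np.sum(resid ** 2)                      # residual sum of squares
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return rmse, 1.0 - ss_res / ss_tot               # R2 is unitless
```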


2020 ◽  
Vol 12 (3) ◽  
pp. 500 ◽  
Author(s):  
Mehrnoush Soroush ◽  
Alireza Mehrtash ◽  
Emad Khazraee ◽  
Jason A. Ur

In this paper, we report the results of our work on automated detection of qanat shafts in Cold War-era CORONA satellite imagery. The increasing quantity of air- and space-borne imagery available to archaeologists and the advances in computational science have created an emerging interest in automated archaeological detection. Traditional pattern recognition methods have proved to have limited applicability for archaeological prospection, for a variety of reasons, including a high rate of false positives. Since 2012, however, a breakthrough has been made in the field of image recognition through deep learning. We have tested the application of deep convolutional neural networks (CNNs) for automated remote sensing detection of archaeological features. Our case study is the qanat systems of the Erbil Plain in the Kurdistan Region of Iraq. The signature of the underground qanat systems in the remote sensing data is the semi-circular openings of their vertical shafts. We chose to focus on qanat shafts because they are promising targets for pattern recognition and because the richness and extent of the qanat landscapes cannot be properly captured across vast territories without automated techniques. Our project is the first effort to use automated techniques on historic satellite imagery that benefits from neither multispectral resolution nor very high (sub-meter) spatial resolution.
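Detecting small, repeated features such as shaft openings across large scenes is typically done by scoring every local window of the image with a trained classifier. A minimal sketch of the sliding-window enumeration step (the window size, stride, and per-patch CNN scoring are assumptions, not details from the paper):

```python
import numpy as np

def sliding_patches(image: np.ndarray, size: int, stride: int):
    """Yield (row, col, patch) windows for per-patch classification.

    In a detection pipeline, each patch would be passed to a trained CNN
    to score it as shaft / non-shaft; here we only enumerate the windows.
    """
    rows, cols = image.shape[:2]
    for r in range(0, rows - size + 1, stride):
        for c in range(0, cols - size + 1, stride):
            yield r, c, image[r:r + size, c:c + size]
```

Overlapping strides (stride < size) help ensure no shaft opening straddles a window boundary undetected, at the cost of more CNN evaluations.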


2021 ◽  
Vol 13 (6) ◽  
pp. 1132
Author(s):  
Zhibao Wang ◽  
Lu Bai ◽  
Guangfu Song ◽  
Jie Zhang ◽  
Jinhua Tao ◽  
...  

Estimation of the number and geo-location of oil wells is important for policy makers considering their impact on energy resource planning. With the recent development of optical remote sensing, it is possible to identify oil wells from satellite images. Moreover, recent advances in deep learning frameworks for object detection in remote sensing make it possible to detect oil wells automatically from remote sensing images. In this paper, we collected a dataset named Northeast Petroleum University–Oil Well Object Detection Version 1.0 (NEPU–OWOD V1.0) based on high-resolution remote sensing images from Google Earth Imagery. Our database includes 1192 oil wells in 432 images from Daqing City, which has the largest oilfield in China. In this study, we compared nine different state-of-the-art deep learning object detection models on optical remote sensing images. Experimental results show that the state-of-the-art deep learning models achieve high precision on our collected dataset, demonstrating the great potential of deep learning for oil well detection in remote sensing.
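Object-detection precision of the kind reported here is usually computed by matching predicted bounding boxes to ground-truth boxes at an IoU threshold. A minimal sketch (the 0.5 threshold and the simple greedy matching are common conventions, assumed rather than taken from the paper):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    return inter / (area(a) + area(b) - inter)

def precision(preds, gts, thr=0.5):
    """Fraction of predicted boxes that match some ground-truth box at IoU >= thr."""
    hits = sum(any(box_iou(p, g) >= thr for g in gts) for p in preds)
    return hits / len(preds) if preds else 0.0
```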


2021 ◽  
Vol 13 (24) ◽  
pp. 5109
Author(s):  
Kaimeng Ding ◽  
Shiping Chen ◽  
Yu Wang ◽  
Yueming Liu ◽  
Yue Zeng ◽  
...  

The prerequisite for the use of remote sensing images is that their security must be guaranteed. As a special subset of perceptual hashing, subject-sensitive hashing overcomes a shortcoming of existing perceptual hashing: the inability to distinguish between “subject-related tampering” and “subject-unrelated tampering” of remote sensing images. However, existing subject-sensitive hashing still has a large deficiency in robustness. In this paper, we propose a novel attention-based asymmetric U-Net (AAU-Net) for the subject-sensitive hashing of remote sensing (RS) images. Our AAU-Net has a distinctly asymmetric structure, which, combined with the attention mechanism and the characteristics of subject-sensitive hashing, is important for improving the robustness of features. On the basis of AAU-Net, a subject-sensitive hashing algorithm is developed to integrate the features of the various bands of RS images. Our experimental results show that our AAU-Net-based subject-sensitive hashing algorithm is more robust than existing deep learning models such as Attention U-Net and MUM-Net, while its tampering sensitivity remains at the same level as that of Attention U-Net and MUM-Net.
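At verification time, a perceptual-hash scheme typically compares the stored hash with a freshly computed one via normalized Hamming distance, flagging tampering when the distance exceeds a threshold; robustness means the distance stays small for content-preserving changes. A minimal sketch of that comparison step (the string encoding and 0.1 threshold are illustrative assumptions):

```python
def hamming_distance(h1: str, h2: str) -> float:
    """Normalized Hamming distance between two equal-length binary hash strings."""
    assert len(h1) == len(h2)
    return sum(a != b for a, b in zip(h1, h2)) / len(h1)

def is_tampered(h1: str, h2: str, threshold: float = 0.1) -> bool:
    """Flag tampering when the hash distance exceeds a threshold.

    A robust subject-sensitive hash keeps the distance below the threshold
    for subject-unrelated changes (e.g. lossless format conversion) while
    exceeding it for subject-related tampering.
    """
    return hamming_distance(h1, h2) > threshold
```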


2021 ◽  
Vol 11 ◽  
Author(s):  
Nam Nhut Phan ◽  
Chi-Cheng Huang ◽  
Ling-Ming Tseng ◽  
Eric Y. Chuang

We proposed a highly versatile two-step transfer learning pipeline for predicting the gene signature defining the intrinsic breast cancer subtypes using unannotated pathological images. Deciphering breast cancer molecular subtypes with deep learning approaches could provide a convenient and efficient method for the diagnosis of breast cancer patients. It could reduce the costs associated with transcriptional profiling and the subtyping discrepancy between IHC assays and mRNA expression. Four pretrained models (VGG16, ResNet50, ResNet101, and Xception) were trained with our in-house pathological images from breast cancer patients with known recurrence status in the first transfer learning step, and with the TCGA-BRCA dataset in the second transfer learning step. Furthermore, we also trained a ResNet101 model with ImageNet weights for comparison with the aforementioned models. The two-step deep learning models showed promising classification results for the four breast cancer intrinsic subtypes, with accuracy ranging from 0.68 (ResNet50) to 0.78 (ResNet101) in both validation and testing sets. Additionally, slide-wise prediction showed an even higher average accuracy of 0.913 with the ResNet101 model. The micro- and macro-average area under the curve (AUC) for these models ranged from 0.88 (ResNet50) to 0.94 (ResNet101), whereas ResNet101_imgnet, weighted with ImageNet, achieved an AUC of 0.92. We also show that the deep learning models' prediction performance is significantly improved relative to the common Genefu tool for breast cancer classification. Our study demonstrated the capability of deep learning models to classify breast cancer intrinsic subtypes without region-of-interest annotation, which will facilitate the clinical applicability of the proposed models.
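The slide-wise accuracy reported above implies that per-tile predictions are aggregated into a single call per slide. One common aggregation rule is a majority vote over tiles; the sketch below assumes that rule (the paper may use a different aggregation, e.g. averaging softmax scores):

```python
from collections import Counter

def slide_prediction(tile_labels):
    """Aggregate per-tile subtype predictions into one slide-level call by majority vote.

    tile_labels: predicted subtype names for every tile of one slide.
    """
    return Counter(tile_labels).most_common(1)[0][0]
```

Slide-level aggregation tends to outperform tile-level accuracy because independent tile errors are averaged out across the slide.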


2019 ◽  
Vol 9 (8) ◽  
pp. 1663-1672
Author(s):  
Yane Li ◽  
Ming Fan ◽  
Shichen Liu ◽  
Bin Zheng ◽  
Lihua Li

This work investigated a novel framework for predicting short-term breast cancer risk using a deep learning approach in mammography. A dataset of 675 negative screening cases was used: 333 cases were diagnosed with cancer at the next screening, while 342 cases remained negative. In order to stratify these patients into high- and low-risk groups, we first used an automated method to segment bilateral matched central regions from the right and left mammograms, respectively. Then, three deep learning models based on AlexNet, GoogLeNet, and ResNet were built with ten-fold cross-validation, for both the difference image of the bilateral matched central regions and the two whole regions of the bilateral breasts. Using the AlexNet-, GoogLeNet-, and ResNet-based risk models, areas under the ROC curve (AUC) were 0.56, 0.62, and 0.64 for central regions and 0.59, 0.57, and 0.65 for whole regions, respectively. When combining the prediction scores of the three deep learning models with a multi-agent fusion algorithm, AUCs were 0.67 for both central regions and whole regions. When fusing the scores of the central region-based and whole region-based risk models, the AUC significantly increased to 0.71 (p < 0.01). Dividing the 675 cases into five subgroups by sorted risk score, the odds ratios showed a significant increasing trend as the scores increased (p = 0.003). This study demonstrates the feasibility of applying deep learning technology to investigate novel markers in mammography, helping to assess short-term breast cancer risk and improve the efficiency of breast cancer screening in the future.
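The fusion-then-AUC evaluation described above can be sketched generically: fuse per-model risk scores per case (simple averaging here; the paper's multi-agent fusion algorithm is more elaborate), then compute the empirical AUC as the probability that a cancer case outranks a negative case. Function names and the averaging rule are illustrative assumptions:

```python
def fuse_scores(score_lists, weights=None):
    """Fuse per-model risk scores for each case by (weighted) averaging.

    score_lists: one list of per-case scores per model, all the same length.
    """
    n_models = len(score_lists)
    weights = weights or [1.0 / n_models] * n_models
    return [sum(w * scores[i] for w, scores in zip(weights, score_lists))
            for i in range(len(score_lists[0]))]

def auc(pos_scores, neg_scores):
    """Empirical AUC: probability a positive case outranks a negative one, ties = 0.5."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```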

