Object Recognition in Remote Sensing Images Based on Modified Backpropagation Neural Network

2021 ◽  
Vol 38 (2) ◽  
pp. 451-459
Author(s):  
Manthena Narasimha Raju ◽  
Kumaran Natarajan ◽  
Chandra Sekhar Vasamsetty

In remote sensing, one key problem is how to automatically categorize and classify high-quality remote sensing images. Many alternatives have been suggested; among these, approaches based on low-level and mid-level visual characteristics have drawbacks. This article therefore adopts a deep learning method to learn semantic knowledge for classifying high-resolution remote sensing scene images. Most existing convolutional neural network approaches focus on transfer-learning models or, like hidden Markov models and linear fitting methods, on building new neural networks from recent high-resolution remote sensing image datasets. In this paper, by contrast, a modified backpropagation neural network is proposed to detect objects in images. To test the performance of the proposed model, benchmark tests were conducted on two remote sensing datasets. On the Assist dataset, the accuracy, precision, recall, and F1 score are all satisfactory; on the SIRI-WHU dataset, these metrics are all improved. The proposed system has better precision and robustness than current approaches, including conventional methods and several deep learning methods, for scene classification of high-resolution remote sensing images.
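The abstract does not detail the paper's specific modification, but the baseline it builds on is standard backpropagation. The following is a minimal sketch of one-hidden-layer backpropagation on toy data (all names and shapes here are illustrative assumptions, not the authors' code):

```python
import numpy as np

# Minimal one-hidden-layer network trained with plain backpropagation.
# This illustrates only the baseline algorithm the paper modifies.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

X = rng.normal(size=(64, 8))                      # toy feature vectors
y = (X.sum(axis=1) > 0).astype(float)[:, None]    # toy binary labels

W1 = rng.normal(scale=0.5, size=(8, 16))
W2 = rng.normal(scale=0.5, size=(16, 1))
lr = 0.3

def forward():
    h = sigmoid(X @ W1)
    return h, sigmoid(h @ W2)

_, out0 = forward()
loss_init = ((out0 - y) ** 2).mean()              # mean squared error

for _ in range(200):
    h, out = forward()
    # Backward pass: chain rule through output and hidden layers.
    grad_out = (out - y) * out * (1 - out)
    grad_W2 = h.T @ grad_out
    grad_h = grad_out @ W2.T * h * (1 - h)
    grad_W1 = X.T @ grad_h
    W2 -= lr * grad_W2 / len(X)                   # gradient descent step
    W1 -= lr * grad_W1 / len(X)

_, out_final = forward()
loss_final = ((out_final - y) ** 2).mean()
```

Full-batch gradient descent with a modest learning rate steadily reduces the training loss on this separable toy problem.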

2020 ◽  
Vol 12 (18) ◽  
pp. 2985 ◽  
Author(s):  
Yeneng Lin ◽  
Dongyun Xu ◽  
Nan Wang ◽  
Zhou Shi ◽  
Qiuxiao Chen

Automatic road extraction from very-high-resolution remote sensing images has become a popular topic in a wide range of fields. Convolutional neural networks are often used for this purpose. However, many network models do not achieve satisfactory extraction results because of the elongated nature and varying sizes of roads in images. To improve the accuracy of road extraction, this paper proposes a deep learning model based on the structure of Deeplab v3. It incorporates a squeeze-and-excitation (SE) module to apply weights to different feature channels, and performs multi-scale upsampling to preserve and fuse shallow and deep information. To address the problems associated with unbalanced road samples in images, different loss functions and backbone network modules are tested during the model’s training process. Compared with cross-entropy, dice loss improves the performance of the model during training and prediction. The SE module is superior to ResNeXt and ResNet in improving the integrity of the extracted roads. Experimental results obtained using the Massachusetts Roads Dataset show that the proposed model (Nested SE-Deeplab) improves F1-Score by 2.4% and Intersection over Union by 2.0% compared with FC-DenseNet. The proposed model also achieves better segmentation accuracy in road extraction than other mainstream deep-learning models, including Deeplab v3, SegNet, and UNet.
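The advantage of dice loss over cross-entropy for thin, class-imbalanced structures like roads can be illustrated with a minimal sketch (this is the standard soft Dice formulation, not the paper's exact implementation):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for binary segmentation masks.

    pred and target hold values in [0, 1]. The loss is 0 for a
    perfect match and approaches 1 when there is no overlap,
    regardless of how much background dominates the image.
    """
    intersection = (pred * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# A thin elongated "road" occupying ~3% of the image: dice loss stays
# informative even though background pixels vastly outnumber road pixels.
target = np.zeros((64, 64))
target[30:32, :] = 1.0                       # a 2-pixel-wide horizontal road
perfect = dice_loss(target, target)          # exact match
empty = dice_loss(np.zeros_like(target), target)  # predicts no road at all
```

An all-background prediction scores near the maximum loss here, whereas under plain pixel-wise cross-entropy it would look deceptively good on a 97%-background image.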


2022 ◽  
Author(s):  
Md. Sarkar Hasanuzzaman

Abstract Hyperspectral imaging is a versatile and powerful technology for gathering geo-data. Planes and satellites equipped with hyperspectral cameras are currently the leading contenders for large-scale imaging projects. Addressing the shortcomings of traditional methods for sparse representation of multispectral images, this paper proposes a wireless sensor network (WSN)-based single-image hyperspectral super-resolution method built on deep residual convolutional neural networks. We propose a different strategy that merges cheaper multispectral sensors to achieve hyperspectral-like spectral resolution while maintaining the WSN's spatial resolution. The method studies and mines the nonlinear relationship between low-resolution and high-resolution remote sensing images, constructs a deep residual convolutional neural network, connects multiple residual blocks in series, and removes unnecessary modules. For this purpose, a decision support system provides the outcome to the next layer. Finally, this paper fully explores the similarities between natural images and hyperspectral images, uses natural image samples to train the convolutional neural networks, and further uses transfer learning to apply the trained network model to the super-resolution problem of high-resolution remote sensing images, addressing the lack of training samples. The proposed approach is evaluated by comparing different algorithms on datasets collected in situ and via remote sensing. The experimental results show that the method performs well and obtains better super-resolution effects.
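The core building block mentioned here, residual blocks connected in series, computes y = x + F(x), so each block only has to learn a correction to its input. A minimal sketch (with a toy per-pixel linear map standing in for real convolutions; all shapes and names are illustrative assumptions):

```python
import numpy as np

def conv1x1(x, w):
    # Toy stand-in for a convolution: a per-pixel linear map over channels.
    return np.einsum('hwc,cd->hwd', x, w)

def residual_block(x, w1, w2):
    """y = x + F(x): the identity skip connection means the block
    learns only a residual correction. Stacking such blocks in series
    lets a super-resolution network progressively refine its input."""
    h = np.maximum(conv1x1(x, w1), 0.0)   # "conv" + ReLU
    return x + conv1x1(h, w2)             # add the skip connection

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8, 4))            # toy 8x8 feature map, 4 channels
w1 = rng.normal(scale=0.1, size=(4, 4))
w2 = np.zeros((4, 4))                     # zero-initialised second layer
y = residual_block(x, w1, w2)
```

With the second layer initialised to zero, the block is exactly the identity, which is why deep stacks of residual blocks train stably: each block starts as a no-op and only learns a correction.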


2020 ◽  
Vol 9 (6) ◽  
pp. 370
Author(s):  
Atakan Körez ◽  
Necaattin Barışçı ◽  
Aydın Çetin ◽  
Uçman Ergün

The detection of objects in very high-resolution (VHR) remote sensing images has become increasingly popular with the enhancement of remote sensing technologies. High-resolution images from aircraft or satellites contain highly detailed and mixed backgrounds that decrease the success of object detection in remote sensing images. In this study, a model that performs weighted ensemble object detection using optimized coefficients is proposed. This model uses the outputs of three different object detection models trained on the same dataset. The model’s structure takes two or more object detection methods as its input and provides an output with an optimized coefficient-weighted ensemble. The Northwestern Polytechnical University Very High Resolution 10 (NWPU-VHR-10) and Remote Sensing Object Detection (RSOD) datasets were used to measure the object detection success of the proposed model. Our experiments reveal that the proposed model improved the Mean Average Precision (mAP) performance by 0.78%–16.5% compared to stand-alone models and presents better mean average precision than other state-of-the-art methods (3.55% higher on the NWPU-VHR-10 dataset and 1.49% higher when using the RSOD dataset).
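The idea of an optimized coefficient-weighted ensemble can be sketched on toy data. The paper optimizes the coefficients; here we simply grid-search them on a hypothetical validation set (the detectors, noise levels, and threshold below are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
truth = rng.integers(0, 2, size=200).astype(float)   # toy ground-truth labels
# Three "detectors" producing confidence scores with different noise levels.
preds = [truth * 0.8 + rng.normal(scale=s, size=200) for s in (0.2, 0.4, 0.6)]

def accuracy(w):
    """Accuracy of the coefficient-weighted ensemble under weights w."""
    combined = sum(wi * p for wi, p in zip(w, preds))
    return ((combined > 0.4 * sum(w)) == (truth > 0.5)).mean()

# Coarse grid search over the ensemble coefficients.
best_w, best_acc = None, -1.0
grid = np.linspace(0.0, 1.0, 6)
for w1 in grid:
    for w2 in grid:
        for w3 in grid:
            if w1 + w2 + w3 == 0:
                continue
            acc = accuracy((w1, w2, w3))
            if acc > best_acc:
                best_w, best_acc = (w1, w2, w3), acc

equal_acc = accuracy((1.0, 1.0, 1.0))    # naive unweighted ensemble baseline
```

Since equal weighting lies inside the search space, the optimized coefficients can never do worse than the naive average, which is the basic appeal of learning the ensemble weights.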


2019 ◽  
Vol 15 (5) ◽  
pp. 155014771985203 ◽  
Author(s):  
Shoulin Yin ◽  
Ye Zhang ◽  
Shahid Karim

Currently, big data is a new and hot issue. In particular, the rapid growth of the Internet of Things causes a sharp growth of data: enormous numbers of networked sensors continuously collect and transmit data to be stored and processed in the cloud, including remote sensing data, environmental data, and geographical data. The region is a very important object in remote sensing data and is the main subject of this article. Region search is a crucial task in remote sensing processing, especially for military and civilian applications. It is difficult to search regions quickly and accurately and to generalize region features, owing to complex background information and the small size of regions. In particular, when performing region search in large-scale remote sensing images, detailed information can be extracted within regions as features. To overcome these difficulties, we propose an accurate and fast region search method for optical remote sensing images in a cloud computing environment, based on a hybrid convolutional neural network. The proposed region search method is partitioned into four processes. First, a fully convolutional network is adopted to produce all candidate regions that may contain object regions, avoiding an exhaustive search of input images. Second, the features of all candidate regions are extracted by a fast region-based convolutional neural network structure. Third, we design a new hard-sample mining method for the training process. Finally, to improve region search precision, we use an iterative bounding box regression algorithm to refine the detected bounding boxes containing candidate objects. The proposed algorithm is evaluated on optical remote sensing images acquired from Google Earth. The experimental results show that the proposed region search method consistently achieves better results regardless of the type of images tested. Compared with traditional region search methods, such as region-based convolutional neural networks and recent feature extraction frameworks, our proposed method shows better robustness to complex contextual semantic information and backgrounds.
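Candidate-region pipelines like this one lean on Intersection over Union (IoU) both to score box overlap and to prune duplicate candidates before any regression step. A minimal sketch of IoU plus greedy non-maximum suppression (standard textbook versions, not the paper's exact post-processing):

```python
def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression over candidate region boxes:
    keep the highest-scoring box, drop any box overlapping it too much,
    and repeat on the remainder."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)
    return keep

# Two heavily overlapping candidates plus one separate candidate.
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
keep = nms(boxes, scores)
```

The two near-duplicate candidates overlap with IoU ≈ 0.68, so only the higher-scoring one survives alongside the disjoint box.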


2021 ◽  
Vol 13 (2) ◽  
pp. 239
Author(s):  
Zhenfeng Shao ◽  
Zifan Zhou ◽  
Xiao Huang ◽  
Ya Zhang

Automatic extraction of the road surface and road centerline from very high-resolution (VHR) remote sensing images has always been a challenging task in the field of feature extraction. Most existing road datasets are based on data with simple and clear backgrounds under ideal conditions, such as images derived from Google Earth. Therefore, studies on road surface extraction and road centerline extraction under complex scenes are insufficient. Meanwhile, most existing efforts addressed these two tasks separately, without considering the possible joint extraction of road surface and centerline. With the introduction of multitask convolutional neural network models, it is possible to carry out these two tasks simultaneously by facilitating information sharing within a multitask deep learning model. In this study, we first design a challenging dataset using remote sensing images from the GF-2 satellite. The dataset contains complex road scenes with manually annotated images. We then propose a two-task, end-to-end convolutional neural network, termed Multitask Road-related Extraction Network (MRENet), for road surface extraction and road centerline extraction. We take features extracted from the road as the condition of centerline extraction, and the information transmission and parameter sharing between the two tasks compensate for the potential problem of insufficient road centerline samples. In the network design, we use atrous convolutions and a pyramid scene parsing pooling module (PSP pooling), aiming to expand the network receptive field, integrate multilevel features, and obtain more abundant information. In addition, we use a weighted binary cross-entropy function to alleviate the background imbalance problem. Experimental results show that the proposed algorithm outperforms several comparative methods in terms of classification precision and visual interpretation.
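The weighted binary cross-entropy mentioned here counteracts background imbalance by up-weighting the rare positive (road) class. A minimal sketch of the standard formulation (the specific weight value and toy data are assumptions, not the paper's settings):

```python
import numpy as np

def weighted_bce(pred, target, pos_weight):
    """Binary cross-entropy with a weight on the positive class.

    pos_weight > 1 amplifies the penalty for missing positive (road)
    pixels, counteracting a background-dominated label distribution.
    """
    pred = np.clip(pred, 1e-7, 1 - 1e-7)       # numerical stability
    loss = -(pos_weight * target * np.log(pred)
             + (1 - target) * np.log(1 - pred))
    return loss.mean()

target = np.zeros(1000)
target[:20] = 1.0                              # 2% road pixels
all_background = np.full(1000, 0.01)           # degenerate "predict nothing"
plain = weighted_bce(all_background, target, pos_weight=1.0)
weighted = weighted_bce(all_background, target, pos_weight=10.0)
```

Under the weighted loss, the degenerate all-background prediction is penalized much more heavily, pushing training away from the trivial solution the imbalance otherwise encourages.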


2021 ◽  
Vol 13 (21) ◽  
pp. 4237
Author(s):  
Xiaoping Zhang ◽  
Bo Cheng ◽  
Jinfen Chen ◽  
Chenbin Liang

Agricultural greenhouses (AGs) are an important component of modern facility agriculture, and accurately mapping and dynamically monitoring their distribution are necessary for agricultural scientific management and planning. Semantic segmentation can be adopted for AG extraction from remote sensing images. However, the feature maps obtained by traditional deep convolutional neural network (DCNN)-based segmentation algorithms blur spatial details, and insufficient attention is usually paid to contextual representation. Meanwhile, maintaining the original morphological characteristics, especially the boundaries, remains a challenge for precise identification of AGs. To alleviate these problems, this paper proposes a novel network called the high-resolution boundary refined network (HBRNet). In this method, we design a new backbone with multiple paths based on HRNetV2, aiming to preserve high spatial resolution and improve feature extraction capability, in which the Pyramid Cross Channel Attention (PCCA) module is embedded in residual blocks to strengthen the interaction of multiscale information. Moreover, the Spatial Enhancement (SE) module is employed to integrate the contextual information of different scales. In addition, we introduce the Spatial Gradient Variation (SGV) unit in the Boundary Refined (BR) module to couple the segmentation task and the boundary learning task, so that they can share latent high-level semantics and interact with each other, and combine this with a joint loss to refine the boundary. In our study, GaoFen-2 remote sensing images of Shouguang City, Shandong Province, China are selected to build the AG dataset. The experimental results show that HBRNet achieves a significant improvement in segmentation performance, reaching an IoU score of 94.89%, implying that this approach has advantages and potential for precise identification of AGs.
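The coupling of segmentation and boundary learning rests on a simple observation: the spatial gradient of a segmentation mask highlights exactly the object boundaries. A crude sketch of that idea (a plain finite-difference gradient; the paper's SGV unit is not reproduced here and this is only an illustrative stand-in):

```python
import numpy as np

def boundary_map(mask):
    """Spatial-gradient boundary map of a binary mask.

    The mask's gradient magnitude is nonzero only near class
    transitions, so it yields a boundary supervision signal
    directly from the segmentation labels.
    """
    gy, gx = np.gradient(mask.astype(float))   # finite differences
    return (np.hypot(gx, gy) > 0).astype(float)

mask = np.zeros((16, 16))
mask[4:12, 4:12] = 1.0                 # a square "greenhouse" footprint
edges = boundary_map(mask)
interior = edges[8, 8]                 # deep inside the region: no boundary
border = edges[4, 8]                   # on the region edge: boundary
```

Because the boundary target is derived from the mask itself, a boundary branch trained this way needs no extra annotation, which is what makes jointly learning the two tasks attractive.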

