MCCRNet: A Multi-Level Change Contextual Refinement Network for Remote Sensing Image Change Detection

2021
Vol 10 (9)
pp. 591
Author(s):
Qingtian Ke
Peng Zhang

Change detection based on bi-temporal remote sensing images has made significant progress in recent years, aiming to identify the changed and unchanged pixels between a registered pair of images. However, most learning-based change detection methods use only the fused high-level features from the feature encoder and thus miss the detailed representations contained in low-level feature pairs. Here we propose a multi-level change contextual refinement network (MCCRNet) to strengthen the multi-level change representations of feature pairs. To capture the dependencies of feature pairs effectively while avoiding fusing them, our atrous spatial pyramid cross attention (ASPCA) module introduces a crossed spatial attention module and a crossed channel attention module to emphasize the positional and channel importance of each feature while keeping the input and output scales identical. This module can be plugged into any feature extraction layer of a Siamese change detection network. Furthermore, we propose a change contextual representations (CCR) module built on the relationship between changed pixels and their contextual representation, which we term change region contextual representations. The CCR module aims to correct changed pixels mistakenly predicted as unchanged, using a class attention mechanism. Finally, we introduce an adaptively weighted loss based on the effective number of samples to address the class imbalance of change detection datasets. Overall, compared with attention modules that use only fused features from the highest-level feature pairs, our method captures multi-level spatial, channel, and class context for change discrimination. The experiments are performed on four public change detection datasets of various image resolutions.
Compared to state-of-the-art methods, our MCCRNet achieved superior performance on all datasets (i.e., LEVIR, Season-Varying Change Detection Dataset, Google Data GZ, and DSIFN) with improvements of 0.47%, 0.11%, 2.62%, and 3.99%, respectively.
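The "effective sample number" weighting mentioned above follows the class-balanced weighting idea, in which each class weight is proportional to (1 − β)/(1 − β^n_c) for class pixel count n_c. A minimal sketch of such weighting (the β value and pixel counts below are illustrative assumptions, not figures from the paper):

```python
import numpy as np

def effective_number_weights(class_counts, beta=0.9999):
    """Per-class weights proportional to (1 - beta) / (1 - beta**n_c),
    normalized so the weights sum to the number of classes."""
    counts = np.asarray(class_counts, dtype=np.float64)
    effective_num = 1.0 - np.power(beta, counts)
    weights = (1.0 - beta) / effective_num
    return weights / weights.sum() * len(counts)

# Change detection is binary and heavily imbalanced:
# many unchanged pixels, few changed ones.
w_unchanged, w_changed = effective_number_weights([990_000, 10_000])
# The rarer "changed" class receives the larger weight.
```

The weights would then scale the per-class terms of a cross-entropy loss so the rare "changed" class is not swamped by the background.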

Author(s):
Wenqing Feng
Haigang Sui
Jihui Tu

In object-oriented change detection, determining the optimal segmentation scale directly affects the subsequent change information extraction and analysis. To address this problem, this paper presents a novel object-level change detection method based on multi-scale segmentation and fusion. First, fine-to-coarse segmentation is used to obtain initial objects of different sizes; then, according to the features of the objects, Change Vector Analysis is used to obtain change detection results at various scales. Furthermore, to improve the accuracy of change detection, this paper introduces fuzzy fusion and two kinds of decision-level fusion methods to obtain multi-scale fusion results. Experiments with these methods are conducted on SPOT5 multi-spectral remote sensing imagery. Compared with pixel-level change detection methods, the overall accuracy of our method improves by nearly 10%, and the experimental results demonstrate the feasibility and effectiveness of the fusion strategies.
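Change Vector Analysis, as used above, reduces to computing a per-pixel magnitude of the spectral difference vector and thresholding it. A minimal sketch under that reading (the threshold and toy images are illustrative assumptions):

```python
import numpy as np

def change_vector_analysis(img_t1, img_t2, threshold):
    """Per-pixel change magnitude between two co-registered
    multi-spectral images of shape (H, W, bands)."""
    diff = img_t2.astype(np.float64) - img_t1.astype(np.float64)
    magnitude = np.linalg.norm(diff, axis=-1)  # Euclidean norm over bands
    return magnitude > threshold               # boolean change map

# Toy 2x2, 3-band example: only the top-left pixel changes strongly.
t1 = np.zeros((2, 2, 3))
t2 = np.zeros((2, 2, 3))
t2[0, 0] = [10, 10, 10]  # large spectral shift -> change
change_map = change_vector_analysis(t1, t2, threshold=5.0)
```

In the object-level method above, this magnitude would be aggregated per segmented object rather than per pixel before thresholding.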



Author(s):
Haocong Rao
Shihao Xu
Xiping Hu
Jun Cheng
Bin Hu

Skeleton-based person re-identification (Re-ID) is an emerging open topic providing great value for safety-critical applications. Existing methods typically extract hand-crafted features or model skeleton dynamics from the trajectory of body joints, while they rarely explore valuable relation information contained in body structure or motion. To fully explore body relations, we construct graphs to model human skeletons from different levels, and for the first time propose a Multi-level Graph encoding approach with Structural-Collaborative Relation learning (MG-SCR) to encode discriminative graph features for person Re-ID. Specifically, considering that structurally-connected body components are highly correlated in a skeleton, we first propose a multi-head structural relation layer to learn different relations of neighbor body-component nodes in graphs, which helps aggregate key correlative features for effective node representations. Second, inspired by the fact that body-component collaboration in walking usually carries recognizable patterns, we propose a cross-level collaborative relation layer to infer collaboration between different level components, so as to capture more discriminative skeleton graph features. Finally, to enhance graph dynamics encoding, we propose a novel self-supervised sparse sequential prediction task for model pre-training, which facilitates encoding high-level graph semantics for person Re-ID. MG-SCR outperforms state-of-the-art skeleton-based methods, and it achieves superior performance to many multi-modal methods that utilize extra RGB or depth features. Our codes are available at https://github.com/Kali-Hac/MG-SCR.
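The structural relation layer described above can be pictured as attention restricted to structurally connected joints. A single-head toy sketch (the adjacency, features, and dot-product scoring are illustrative assumptions, not the MG-SCR implementation; see the linked repository for the actual code):

```python
import numpy as np

def structural_relation_layer(x, adj):
    """Aggregate each node's neighbors with attention weights
    masked by the skeleton adjacency (x: nodes x features)."""
    scores = x @ x.T                              # pairwise similarity
    scores = np.where(adj > 0, scores, -np.inf)   # keep connected pairs only
    scores -= scores.max(axis=1, keepdims=True)   # numerically stable softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ x                               # relation-weighted features

# 3-joint toy skeleton: a 0-1-2 chain with self-loops.
adj = np.array([[1, 1, 0],
                [1, 1, 1],
                [0, 1, 1]])
feats = np.eye(3)
out = structural_relation_layer(feats, adj)
```

Because node 0 and node 2 are not adjacent, node 0's output receives no contribution from node 2's features.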


2020
Vol 12 (15)
pp. 2460
Author(s):
Yanan You
Jingyi Cao
Wenli Zhou

Large quantities of multi-temporal remote sensing (RS) images create favorable conditions for studying long-term urban change. However, diverse multi-source features and change patterns pose challenges for change detection in urban settings. To trace the development of urban change detection, we survey the literature of the last five years, focusing on disparate multi-source RS images and multi-objective scenarios determined by scene category. Based on this survey, we summarize a general change detection framework comprising change information extraction, data fusion, and multi-objective scenario analysis modules. Because the attributes of the input RS images affect the technical choices in each module, we first discuss data characteristics and application domains across different categories of RS images. On this basis, we elaborate the evolution and relationships of representative solutions within each module and, by emphasizing the feasibility of fusing diverse data and the manifold application scenarios, advocate a complete change detection pipeline. Finally, we summarize the current state of development and propose possible research directions for urban change detection, in the hope of providing insights for future research.


2002
Author(s):
Farid Melgani
Gabriele Moser
Sebastiano B. Serpico

2021
Vol 13 (11)
pp. 2163
Author(s):
Zhou Huang
Huaixin Chen
Biyuan Liu
Zhixi Wang

Although remarkable progress has been made in salient object detection (SOD) for natural scene images (NSI), SOD for optical remote sensing images (RSI) still faces significant challenges due to varying spatial resolutions, cluttered backgrounds, and complex imaging conditions, mainly for two reasons: (1) the accurate location of salient objects; and (2) the subtle boundaries of salient objects. This paper exploits the inherent properties of multi-level features to develop a novel semantic-guided attention refinement network (SARNet) for SOD of RSI. Specifically, the proposed semantic-guided decoder (SGD) coarsely but accurately locates multi-scale objects by aggregating multiple high-level features, and this global semantic information then guides the integration of subsequent features in a step-by-step feedback manner, making full use of the deep multi-level features. Simultaneously, the proposed parallel attention fusion (PAF) module combines cross-level features and semantic-guided information to refine the object's boundary and gradually highlight the entire object area. Finally, the proposed network is trained end-to-end in a fully supervised manner. Quantitative and qualitative evaluations on two public RSI datasets and additional NSI datasets across five metrics show that our SARNet is superior to 14 state-of-the-art (SOTA) methods without any post-processing.
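The semantic guidance described above, in which coarse high-level semantics steer higher-resolution low-level features, can be sketched roughly as follows (the 2x nearest-neighbour upsampling, sigmoid gating, and toy sizes are illustrative assumptions, not the SARNet architecture):

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of a (H, W) map."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def semantic_guided_refine(low_feat, high_sem):
    """Refine a high-resolution low-level feature map with an
    upsampled, sigmoid-gated high-level semantic map."""
    gate = 1.0 / (1.0 + np.exp(-upsample2x(high_sem)))
    return low_feat * (1.0 + gate)  # emphasize semantically salient areas

sem = np.array([[4.0, -4.0],
                [-4.0, 4.0]])       # coarse 2x2 semantic logits
low = np.ones((4, 4))               # fine 4x4 low-level features
refined = semantic_guided_refine(low, sem)
```

Repeating this step across decoder stages gives the step-by-step feedback the abstract refers to.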


2021
Author(s):
RG Negri
Alejandro Frery

© 2021, The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature. The Earth’s environment is continually changing due to both human and natural factors. Timely identification of the location and kind of change is of paramount importance in several application areas. Because of that, remote sensing change detection is a topic of great interest, and the development of precise change detection methods is a constant challenge. This study introduces a novel unsupervised change detection method based on data clustering and optimization. The proposal is less dependent on radiometric normalization than classical approaches. We carried out experiments with remote sensing images and simulated datasets to compare the proposed method with other well-known unsupervised techniques. At its best, the proposal improves accuracy by 50% over the second-best technique, an improvement that is most noticeable with uncalibrated data. Experiments with simulated data reveal that the proposal outperforms all other compared methods at any practical significance level. The results show the potential of the proposed method.
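The paper's specific clustering and optimization scheme is not detailed here, but a generic unsupervised two-cluster change map of the kind this family of methods produces can be sketched as follows (the 1-D k-means on change magnitudes is an illustrative stand-in, not the authors' method):

```python
import numpy as np

def two_cluster_change_map(mag, iters=20):
    """Simple 1-D k-means (k=2) on change magnitudes: the cluster
    with the larger centre is labelled 'changed'."""
    c = np.array([mag.min(), mag.max()], dtype=np.float64)  # initial centres
    for _ in range(iters):
        labels = np.abs(mag[..., None] - c).argmin(axis=-1)
        for k in (0, 1):
            if np.any(labels == k):
                c[k] = mag[labels == k].mean()
    return labels == int(c.argmax())  # True where changed

# Synthetic magnitudes: 900 unchanged pixels, 100 clearly changed ones.
rng = np.random.default_rng(0)
mag = np.concatenate([rng.normal(0.1, 0.02, 900),
                      rng.normal(0.9, 0.02, 100)])
change = two_cluster_change_map(mag)
```

Because it needs no labels and no absolute radiometric reference, such clustering is naturally less sensitive to calibration than threshold-based comparisons.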



2018
Vol 27 (08)
pp. 1850031
Author(s):
Md. Abdul Alim Sheikh
Alok Kole
Tanmoy Maity

In this paper, a novel technique for building change detection from remote sensing imagery is presented. It includes two main stages: (1) object-specific discriminative features are extracted using the Morphological Building Index (MBI) to automatically detect the presence of buildings in remote sensing images; and (2) pixel-based image matching is measured on the basis of the Mutual Information (MI) of the images via Normalized Mutual Information (NMI). The MBI feature values are computed for each image of a pair taken over the same region at two different times, and the changes between the two MBI images are measured to indicate building change. MI is estimated locally at every pixel for image matching, and thresholding is then applied to eliminate the pixels that exhibit strong similarity. Finally, after obtaining the MBI and NMI images, the two are fused to refine the change result. For evaluation, experiments are carried out on QuickBird and IKONOS images and on images taken from Google Earth. The results show that the proposed technique attains acceptable correctness rates above 90% with an Overall Accuracy (OA) of 89.52%.
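The NMI matching in stage (2) can be sketched from a joint histogram of two co-registered patches (the bin count and the NMI = (H(A)+H(B))/H(A,B) normalization are common choices assumed here, not necessarily the paper's exact formulation):

```python
import numpy as np

def normalized_mutual_information(a, b, bins=16):
    """NMI = (H(A) + H(B)) / H(A, B) from a joint histogram of two
    equally sized image patches; identical patches give NMI = 2."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)   # marginal of patch a
    py = pxy.sum(axis=0)   # marginal of patch b

    def entropy(p):
        p = p[p > 0]
        return -(p * np.log(p)).sum()

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

rng = np.random.default_rng(1)
patch = rng.random((32, 32))
nmi_same = normalized_mutual_information(patch, patch)       # strong similarity
nmi_noise = normalized_mutual_information(patch, rng.random((32, 32)))
```

Thresholding such local NMI values discards strongly similar (unchanged) locations, leaving candidate change pixels.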

