Target Recognition of SAR Images via Matching Attributed Scattering Centers with Binary Target Region

Sensors ◽  
2018 ◽  
Vol 18 (9) ◽  
pp. 3019 ◽  
Author(s):  
Jian Tan ◽  
Xiangtao Fan ◽  
Shenghua Wang ◽  
Yingchao Ren

A target recognition method for synthetic aperture radar (SAR) images is proposed that matches attributed scattering centers (ASCs) to binary target regions. The ASCs extracted from the test image are predicted as binary regions. In detail, each ASC is first transformed to the image domain based on the ASC model. Afterwards, the resulting image is converted to a binary region segmented by a global threshold. All the predicted binary regions of individual ASCs from the test sample are matched to the binary target regions of the corresponding templates. Then, the matched regions are evaluated by three scores, which are combined into a similarity measure via score-level fusion. In the classification stage, the target label of the test sample is determined according to the fused similarities. The proposed region matching method avoids the conventional ASC matching problem, which involves the assignment of ASC sets. In addition, the predicted regions are more robust than point features. The Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset is used for performance evaluation in the experiments. According to the experimental results, the method in this study outperforms some traditional methods reported in the literature under several different operating conditions. Under the standard operating condition (SOC), the proposed method achieves very good performance, with an average recognition rate of 98.34%, which is higher than that of the traditional methods. Moreover, the robustness of the proposed method is also superior to that of the traditional methods under different extended operating conditions (EOCs), including configuration variants, large depression angle variation, noise contamination, and partial occlusion.
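The classification stage above can be sketched as follows. This is a minimal illustration of score-level fusion and the argmax decision rule; the score values and the equal weights are hypothetical, not those of the paper:

```python
import numpy as np

# Hypothetical per-template scores from the three region-matching criteria
# (rows: candidate target classes, columns: the three scores).
scores = np.array([
    [0.91, 0.88, 0.90],   # class 0
    [0.55, 0.60, 0.52],   # class 1
    [0.48, 0.45, 0.50],   # class 2
])

# Score-level fusion: a weighted sum of the three scores per class
# (equal weights assumed here for illustration).
weights = np.array([1/3, 1/3, 1/3])
fused = scores @ weights

# The test sample is assigned the label with the highest fused similarity.
predicted_label = int(np.argmax(fused))
print(predicted_label)   # -> 0
```

In practice the three scores would come from evaluating the overlap between each predicted ASC region and the template's binary target region.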

2020 ◽  
Vol 2020 ◽  
pp. 1-10 ◽  
Author(s):  
Chenyu Li ◽  
Guohua Liu

This paper applies block sparse Bayesian learning (BSBL) to synthetic aperture radar (SAR) target recognition. The traditional sparse representation-based classification (SRC) operates on a global dictionary collaborated by the different classes, and the similarities between the test sample and the various classes are then evaluated by the reconstruction errors. This paper instead reconstructs the test sample on local dictionaries formed by the individual classes. Considering the azimuthal sensitivity of SAR images, the linear coefficients on a local dictionary are sparse with a block structure; therefore, BSBL is employed to solve for the sparse coefficients. The proposed method can better exploit the representation capability of each class, thus benefiting the recognition performance. Based on the experimental results on the moving and stationary target acquisition and recognition (MSTAR) dataset, the effectiveness and robustness of the proposed method are confirmed.
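The per-class reconstruction decision rule can be sketched as below. Ordinary least squares stands in for BSBL here (so the block structure of the coefficients is not modeled), and the dictionaries and test sample are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical local dictionaries: one per class, columns are training samples.
dictionaries = [rng.normal(size=(64, 10)) for _ in range(3)]

# Build a test sample that truly lies in class 1's span.
true_coef = rng.normal(size=10)
test = dictionaries[1] @ true_coef

# Reconstruct the test sample on each local dictionary and record the
# residual; least squares replaces the BSBL solver for illustration.
residuals = []
for D in dictionaries:
    coef, *_ = np.linalg.lstsq(D, test, rcond=None)
    residuals.append(np.linalg.norm(test - D @ coef))

# Decision rule: the class whose local dictionary reconstructs the
# test sample with the smallest error.
predicted = int(np.argmin(residuals))
print(predicted)   # -> 1
```

The actual method would replace the least-squares fit with BSBL so that the block-sparsity induced by azimuthal sensitivity is exploited.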


2021 ◽  
Vol 30 (13) ◽  
Author(s):  
Zhichao Liu ◽  
Baida Qu

For the problem of target recognition in synthetic aperture radar (SAR) images, a method based on the combination of bidimensional empirical mode decomposition (BEMD) and the extreme learning machine (ELM) is proposed. BEMD performs feature extraction on SAR images, producing multi-layer bidimensional intrinsic mode functions (BIMFs). These BIMFs convey the discriminative content of the original target while effectively suppressing noise. ELM conducts the classification of each BIMF with high efficiency and robustness. Finally, the decisions from the different BIMFs are fused using a linear weighting strategy to reach a reliable decision on the target label. The proposed method compensates for the relatively low adaptivity of ELM to noise corruption by means of BEMD feature extraction. Moreover, the multi-layer BIMFs provide more discriminative information for a correct decision, so the overall recognition performance can be improved. As an efficient recognition algorithm, the proposed method can be used in embedded systems for wide applications. Experiments are designed and implemented on the moving and stationary target acquisition and recognition (MSTAR) dataset. The proposed method is tested under both the standard operating condition (SOC) and extended operating conditions (EOCs), and the results reflect its effectiveness and robustness via quantitative comparisons.
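The linear-weighting decision fusion can be sketched as follows. The per-BIMF posteriors and the weights are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical class posteriors from classifying each BIMF layer with ELM
# (rows: BIMF layers, columns: candidate classes).
bimf_posteriors = np.array([
    [0.70, 0.20, 0.10],
    [0.55, 0.30, 0.15],
    [0.40, 0.45, 0.15],
])

# Linear weighting over the layers; here the lower-order (less noisy)
# layers are given larger weights purely for illustration.
weights = np.array([0.5, 0.3, 0.2])

fused = weights @ bimf_posteriors   # weighted sum of per-layer decisions
label = int(np.argmax(fused))
print(label)   # -> 0
```

Because the weights sum to one and each row is a distribution, the fused vector is again a valid class distribution.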


2019 ◽  
Vol 11 (22) ◽  
pp. 2676 ◽  
Author(s):  
Meiting Yu ◽  
Sinong Quan ◽  
Gangyao Kuang ◽  
Shaojie Ni

Synthetic aperture radar (SAR) target recognition under extended operating conditions (EOCs) is a challenging problem due to the complex application environment, especially when the training samples offer insufficient target variations or contain corrupted SAR images. This paper proposes a new strategy to solve these problems for target recognition. The SAR images are first characterized by multi-scale components of the monogenic signal. The generated monogenic features are decomposed to learn a class dictionary and a shared dictionary, which represent the possible intraclass variation information and the common information, respectively. Moreover, a sparse representation on the class dictionary and a dense representation on the shared dictionary are jointly employed to represent a query sample for classification. The validity of the proposed strategy is demonstrated with multiple comparative experiments on the moving and stationary target acquisition and recognition (MSTAR) database.
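The joint class-plus-shared representation can be sketched as below. Plain least squares on the concatenated dictionary replaces the paper's sparse/dense split, so this only illustrates the residual-based decision rule; all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

d, k_c, k_s = 32, 5, 4
class_dicts = [rng.normal(size=(d, k_c)) for _ in range(3)]   # per-class atoms
shared_dict = rng.normal(size=(d, k_s))                        # common atoms

# Query drawn from class 2 plus a shared (common-information) component.
query = class_dicts[2] @ rng.normal(size=k_c) + shared_dict @ rng.normal(size=k_s)

# For each class, jointly fit the query on [class dictionary | shared
# dictionary] and record the reconstruction residual.
residuals = []
for D in class_dicts:
    A = np.hstack([D, shared_dict])
    coef, *_ = np.linalg.lstsq(A, query, rcond=None)
    residuals.append(np.linalg.norm(query - A @ coef))

# The class whose joint dictionary explains the query best wins.
predicted = int(np.argmin(residuals))
print(predicted)   # -> 2
```

In the actual method, the coefficients on the class dictionary would be constrained to be sparse while those on the shared dictionary stay dense.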


2019 ◽  
Vol 11 (11) ◽  
pp. 1316 ◽  
Author(s):  
Li Wang ◽  
Xueru Bai ◽  
Feng Zhou

In recent studies, synthetic aperture radar (SAR) automatic target recognition (ATR) algorithms that are based on the convolutional neural network (CNN) have achieved high recognition rates on the moving and stationary target acquisition and recognition (MSTAR) dataset. However, in a SAR ATR task, feature maps automatically learned by the CNN that carry little information can disturb the classifier. We design a new enhanced squeeze-and-excitation (enhanced-SE) module to solve this problem, and then propose a new SAR ATR network, the enhanced squeeze-and-excitation network (ESENet). Compared to the available CNN structures designed for SAR ATR, the ESENet can extract more effective features from SAR images and obtain better generalization performance. On the MSTAR dataset containing pure targets, the proposed method achieves a recognition rate of 97.32%, exceeding the available CNN-based SAR ATR algorithms. Additionally, it has shown robustness to large depression angle variation, configuration variants, and version variants.
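The channel recalibration that SE-style modules perform can be sketched in plain numpy. This is the standard squeeze-and-excitation operation, not the paper's enhanced-SE variant, and the weights are random placeholders for the two learned fully connected layers:

```python
import numpy as np

rng = np.random.default_rng(0)

def se_block(x, w1, w2):
    """Standard squeeze-and-excitation on a (C, H, W) feature map."""
    s = x.mean(axis=(1, 2))                  # squeeze: global average pooling
    z = np.maximum(w1 @ s, 0.0)              # excitation: FC + ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ z)))   # FC + sigmoid -> channel gates
    return x * gate[:, None, None]           # rescale each channel by its gate

channels, reduced = 8, 2
x = rng.normal(size=(channels, 5, 5))        # a dummy feature map
w1 = rng.normal(size=(reduced, channels))    # reduction layer (placeholder)
w2 = rng.normal(size=(channels, reduced))    # expansion layer (placeholder)

y = se_block(x, w1, w2)
print(y.shape)   # -> (8, 5, 5)
```

Because each gate lies in (0, 1), uninformative channels can be attenuated toward zero, which is the mechanism the ESENet builds on.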


Author(s):  
Zhenyu Zhang ◽  

This paper proposes a method using joint classification of monogenic components with discrimination analysis for target recognition in synthetic aperture radar (SAR) images. Three monogenic components, namely phase, amplitude, and orientation, are extracted from the original image and classified by joint sparse representation for target recognition. Considering that the three components may have different discrimination capabilities under different operating conditions, discrimination analysis is incorporated into the classification scheme. Components with low discriminability are not used in the joint classification; the remaining discriminative components for a given condition are then classified to determine the target type. Experiments are conducted on the moving and stationary target acquisition and recognition (MSTAR) dataset to evaluate the performance of the proposed method.
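The component selection and joint decision can be sketched as below. The discriminability scores, threshold, and per-component class scores are hypothetical, and a simple score sum stands in for the joint sparse representation:

```python
import numpy as np

# Hypothetical discriminability of the three monogenic components
# (phase, amplitude, orientation) under a given operating condition.
components = ["phase", "amplitude", "orientation"]
discriminability = np.array([0.82, 0.75, 0.40])
threshold = 0.5

# Discrimination analysis: drop components with low discriminability.
selected = [c for c, d in zip(components, discriminability) if d >= threshold]

# Joint classification over the selected components only; summing
# per-component class scores stands in for the joint sparse solver.
class_scores = {
    "phase":       np.array([0.6, 0.3, 0.1]),
    "amplitude":   np.array([0.5, 0.4, 0.1]),
    "orientation": np.array([0.2, 0.2, 0.6]),   # excluded by the analysis
}
joint = sum(class_scores[c] for c in selected)
label = int(np.argmax(joint))
print(selected, label)   # -> ['phase', 'amplitude'] 0
```

Excluding the weakly discriminative orientation component changes the decision only when that component would have dominated the vote.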


2020 ◽  
Vol 2020 ◽  
pp. 1-9 ◽  
Author(s):  
Xiaojing Tan ◽  
Ming Zou ◽  
Xiqin He

This study proposes a synthetic aperture radar (SAR) target recognition method based on features fused from multiresolution representations by two-dimensional canonical correlation analysis (2DCCA). The multiresolution representations have been demonstrated to be more discriminative than the original image alone, so joint classification of the multiresolution representations benefits SAR target recognition performance. 2DCCA is capable of exploiting the inner correlations of the multiresolution representations while significantly reducing the redundancy. Therefore, the fused features can effectively convey the discrimination capability of the multiresolution representations while relieving the storage and computational burdens caused by the original high dimensionality. In the classification stage, sparse representation-based classification (SRC) is employed to classify the fused features; SRC is an effective and robust classifier that has been extensively validated in previous works. The moving and stationary target acquisition and recognition (MSTAR) data set is employed to evaluate the proposed method. According to the experimental results, the proposed method achieves a high recognition rate of 97.63% for the 10 classes of targets under the standard operating condition (SOC). The robustness of the proposed method is also quantitatively validated under extended operating conditions (EOCs) such as configuration variance and depression angle variance. In comparison with some other SAR target recognition methods, the superiority of the proposed method is effectively demonstrated.
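The correlation-based fusion can be sketched with classical (vector) CCA, a 1D stand-in for the paper's 2DCCA; the two feature sets below are synthetic and deliberately correlated:

```python
import numpy as np

rng = np.random.default_rng(0)

def cca(X, Y, k):
    """Classical CCA via SVD of the whitened cross-covariance.

    Returns k pairs of projection directions that maximize the
    correlation between the two feature sets (small ridge for stability).
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + 1e-6 * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + 1e-6 * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):
        w, V = np.linalg.eigh(C)
        return V @ np.diag(w ** -0.5) @ V.T

    U, s, Vt = np.linalg.svd(inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy))
    return inv_sqrt(Cxx) @ U[:, :k], inv_sqrt(Cyy) @ Vt.T[:, :k]

# Two hypothetical multiresolution feature sets for the same 50 samples.
X = rng.normal(size=(50, 8))
Y = X @ rng.normal(size=(8, 6)) + 0.1 * rng.normal(size=(50, 6))

Wx, Wy = cca(X, Y, k=3)
fused = np.hstack([X @ Wx, Y @ Wy])   # fused low-dimensional feature
print(fused.shape)   # -> (50, 6)
```

The fused feature vector (here 6-dimensional) would then be passed to SRC in place of the original high-dimensional representations.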


2021 ◽  
Vol 13 (8) ◽  
pp. 1455
Author(s):  
Jifang Pei ◽  
Weibo Huo ◽  
Chenwei Wang ◽  
Yulin Huang ◽  
Yin Zhang ◽  
...  

Multiview synthetic aperture radar (SAR) images contain much richer information for automatic target recognition (ATR) than a single view. It is desirable to establish a reasonable multiview ATR scheme and design an effective ATR algorithm to thoroughly learn and extract that classification information, so that superior SAR ATR performance can be achieved. Hence, a general processing framework applicable to a multiview SAR ATR pattern is first given in this paper, which provides an effective approach to ATR system design. Then, a new ATR method using a multiview deep feature learning network is designed based on the proposed multiview ATR framework. The proposed neural network has a multiple-input parallel topology and several distinct deep feature learning modules, with which the significant classification features, namely the intra-view and inter-view features existing in the input multiview SAR images, are learned simultaneously and thoroughly. Therefore, the proposed multiview deep feature learning network can achieve excellent SAR ATR performance. Experimental results have shown the superiority of the proposed multiview SAR ATR method under various operating conditions.
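The multiple-input parallel topology can be sketched as below. Single random linear layers stand in for the deep per-view branches and the shared classification head; all shapes and data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Three hypothetical views of the same target, each a flattened SAR chip.
views = [rng.normal(size=32) for _ in range(3)]

# Parallel branches: each view passes through its own feature extractor
# (one random linear layer + ReLU stands in for a deep branch).
branch_weights = [rng.normal(size=(16, 32)) for _ in range(3)]
intra_view = [np.maximum(W @ v, 0.0) for W, v in zip(branch_weights, views)]

# Inter-view fusion: concatenate the per-view features and map them to
# class scores with a shared head (again a single placeholder layer).
fused = np.concatenate(intra_view)       # joint intra/inter-view feature
head = rng.normal(size=(3, fused.size))
scores = head @ fused
label = int(np.argmax(scores))
print(fused.shape, label)
```

In the actual network the branches and head are trained jointly, so the intra-view and inter-view features are learned together rather than fixed.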


2016 ◽  
Vol 2016 ◽  
pp. 1-11 ◽  
Author(s):  
Hongqiao Wang ◽  
Yanning Cai ◽  
Guangyuan Fu ◽  
Shicheng Wang

Aiming at the multiple-target recognition problem in large-scene SAR images with strong speckle, a robust full-process method covering target detection, feature extraction, and target recognition is studied in this paper. By introducing a simple 8-neighborhood orthogonal basis, a local multiscale decomposition method centered on the target's center of gravity is presented. Using this method, an image can be processed with a multilevel sampling filter, and the target's multiscale features in eight directions, plus one low-frequency filtering feature, can be derived directly by sampling the key pixels. At the same time, a recognition algorithm organically integrating the local multiscale features and a multiscale wavelet kernel classifier is studied, which realizes quick classification with robustness and high accuracy for multiclass image targets. The results of classification and speckle adaptability analysis show that the robust algorithm is effective not only for the MSTAR (Moving and Stationary Target Acquisition and Recognition) target chips but also for the automatic target recognition of multiple classes and multiple targets in large-scene SAR images with strong speckle; meanwhile, the method has good robustness to target rotation and scale transformation.
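The idea of sampling key pixels in eight directions at multiple scales from the center of gravity can be sketched as below. This is an illustrative sampling scheme on a dummy image, not the paper's orthogonal-basis decomposition; the scale set is assumed:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((33, 33))   # dummy target chip

# Intensity-weighted center of gravity of the chip.
ys, xs = np.mgrid[0:33, 0:33]
total = image.sum()
cy = int(round((ys * image).sum() / total))
cx = int(round((xs * image).sum() / total))

# Eight neighborhood directions and a few dyadic scales; sampling the key
# pixel along each direction gives one feature per (direction, scale).
directions = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]
scales = [1, 2, 4, 8]

features = np.array([[image[cy + s * dy, cx + s * dx]
                      for s in scales] for dy, dx in directions])
print(features.shape)   # -> (8, 4)
```

Centering the sampling on the center of gravity is what gives the resulting feature grid its robustness to target translation.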


Sensors ◽  
2020 ◽  
Vol 20 (6) ◽  
pp. 1724
Author(s):  
Zilu Ying ◽  
Chen Xuan ◽  
Yikui Zhai ◽  
Bing Sun ◽  
Jingwen Li ◽  
...  

Since Synthetic Aperture Radar (SAR) targets are corrupted by coherent speckle noise, traditional deep learning models struggle to effectively extract key target features and suffer from high computational complexity. To solve this problem, an effective lightweight Convolutional Neural Network (CNN) model incorporating transfer learning is proposed for better handling SAR target recognition tasks. In this work, we first propose the Atrous-Inception module, which combines atrous convolution and the Inception module to obtain rich global receptive fields while strictly controlling the parameter count, realizing a lightweight network architecture. Secondly, a transfer learning strategy is used to effectively transfer prior knowledge from the optical, non-optical, and hybrid optical and non-optical domains to SAR target recognition tasks, thereby improving the model's recognition performance on small-sample SAR target datasets. Finally, the proposed model achieves a recognition rate of 97.97% on the ten-class MSTAR dataset under standard operating conditions, on par with mainstream target recognition rates. Meanwhile, the presented method shows strong robustness and generalization performance on small, randomly sampled SAR target datasets.
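The core property of atrous convolution, namely a larger receptive field with no extra parameters, can be sketched in 1D. This is a generic dilated convolution, not the paper's Atrous-Inception module:

```python
import numpy as np

def atrous_conv1d(x, kernel, dilation):
    """1D atrous (dilated) convolution, 'valid' mode.

    Inserting dilation-1 gaps between kernel taps enlarges the effective
    receptive field without adding parameters.
    """
    k = len(kernel)
    span = (k - 1) * dilation + 1           # effective receptive field
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * dilation] for j in range(k))
    return out

x = np.arange(10, dtype=float)
y = atrous_conv1d(x, np.array([1.0, 1.0, 1.0]), dilation=2)
print(y)   # each output sums x[i], x[i+2], x[i+4] -> [6. 9. 12. 15. 18. 21.]
```

With dilation 2, a 3-tap kernel covers a span of 5 samples, which is how the module widens its receptive field while staying lightweight.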


2017 ◽  
Vol 2017 ◽  
pp. 1-18 ◽  
Author(s):  
Xiaohui Zhao ◽  
Yicheng Jiang ◽  
Tania Stathaki

A strategy is introduced for achieving high accuracy in synthetic aperture radar (SAR) automatic target recognition (ATR) tasks. Initially, a novel pose rectification process and an image normalization process are sequentially applied to produce images with fewer variations prior to the feature processing stage. Then, feature sets rich in texture and edge information are extracted using wavelet coefficients, and more effective and compact feature sets are acquired by reducing the redundancy and dimensionality of the extracted features. Finally, a group of discrimination trees is learned and combined into a final classifier in the framework of Real AdaBoost. The proposed method is evaluated on the public release database for moving and stationary target acquisition and recognition (MSTAR). Several comparative studies are conducted to evaluate the effectiveness of the proposed algorithm. Experimental results show the distinctive superiority of the proposed method under both standard operating conditions (SOCs) and extended operating conditions (EOCs). Moreover, our additional tests suggest that good recognition accuracy can be achieved even with a limited number of training images, as long as they are captured at appropriately incremental steps in target pose.
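The final classifier combination can be sketched as below. Real AdaBoost sums the real-valued scores of its weak learners and takes the sign; the weak scores here are hypothetical outputs of already-learned discrimination trees:

```python
import numpy as np

# Hypothetical real-valued scores h_t(x) from three learned discrimination
# trees, evaluated on two test samples (rows: trees, columns: samples).
weak_scores = np.array([
    [0.8, -0.3],    # tree 1
    [0.4,  0.2],    # tree 2
    [-0.1, -0.6],   # tree 3
])

# Real AdaBoost combination: additive score, sign gives the binary label.
final_score = weak_scores.sum(axis=0)
decisions = np.sign(final_score)
print(final_score, decisions)   # -> [ 1.1 -0.7] [ 1. -1.]
```

A multiclass MSTAR classifier would typically be built from several such binary combinations (e.g., one-vs-all), with each tree's score derived from its class-conditional weight distributions.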

