Moving and stationary target acquisition and recognition (MSTAR) model-based automatic target recognition: search technology for a robust ATR

Author(s):
Joseph R. Diemunsch
John Wissinger

2016
Vol 2016
pp. 1-11
Author(s):
Hongqiao Wang
Yanning Cai
Guangyuan Fu
Shicheng Wang

To address the problem of recognizing multiple targets in large-scene SAR images with strong speckle, this paper studies a robust full-process method spanning target detection, feature extraction, and target recognition. By introducing a simple 8-neighborhood orthogonal basis, a local multiscale decomposition method centered on the target's center of gravity is presented. With this method, an image can be processed by a multilevel sampling filter, and the target's multiscale features in eight directions, together with one low-frequency filtering feature, can be derived directly by sampling key pixels. In addition, a recognition algorithm that organically integrates the local multiscale features with a multiscale wavelet kernel classifier is studied, achieving fast, robust, and highly accurate classification of multiclass image targets. The results of classification and speckle-adaptability analysis show that the robust algorithm is effective not only on the MSTAR (Moving and Stationary Target Acquisition and Recognition) target chips but also for automatic recognition of multiple targets of multiple classes in large-scene SAR images with strong speckle; moreover, the method is robust to target rotation and scale transformations.
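The directional sampling idea in this abstract can be illustrated with a minimal sketch. This is our own assumption-laden reading, not the paper's exact filter design: from the target's intensity-weighted center of gravity, key pixels are sampled along the eight neighborhood directions at dyadic distances (scales 1, 2, 4, ...). All function names and the step schedule are hypothetical.

```python
def center_of_gravity(img):
    """Intensity-weighted centroid of a 2D image (list of lists)."""
    total = sx = sy = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            total += v
            sx += x * v
            sy += y * v
    return (round(sx / total), round(sy / total))

# The eight neighborhood directions (E, NE, N, NW, W, SW, S, SE).
DIRS = [(1, 0), (1, -1), (0, -1), (-1, -1),
        (-1, 0), (-1, 1), (0, 1), (1, 1)]

def directional_features(img, levels=3):
    """Sample key pixels in 8 directions at dyadic distances from the centroid."""
    cx, cy = center_of_gravity(img)
    h, w = len(img), len(img[0])
    feats = []
    for dx, dy in DIRS:
        for k in range(levels):
            step = 2 ** k                      # multiscale: 1, 2, 4, ...
            x, y = cx + dx * step, cy + dy * step
            # treat pixels outside the image as zero
            feats.append(img[y][x] if 0 <= x < w and 0 <= y < h else 0.0)
    return feats
```

On a uniform image the centroid falls at the geometric center and every sampled pixel has the same value, which makes the bookkeeping easy to verify.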


2017
Vol 2017
pp. 1-18
Author(s):
Xiaohui Zhao
Yicheng Jiang
Tania Stathaki

A strategy is introduced for achieving high accuracy in synthetic aperture radar (SAR) automatic target recognition (ATR) tasks. First, a novel pose rectification process and an image normalization process are applied sequentially to produce images with fewer variations before the feature processing stage. Then, feature sets rich in texture and edge information are extracted using wavelet coefficients, and more effective and compact feature sets are obtained by reducing the redundancy and dimensionality of the extracted features. Finally, a group of discrimination trees is learned and combined into a final classifier in the Real-AdaBoost framework. The proposed method is evaluated on the public release database for moving and stationary target acquisition and recognition (MSTAR). Several comparative studies are conducted to evaluate the effectiveness of the proposed algorithm. Experimental results show the distinctive superiority of the proposed method under both standard operating conditions (SOCs) and extended operating conditions (EOCs). Moreover, our additional tests suggest that good recognition accuracy can be achieved even with a limited number of training images, as long as these are captured with an appropriately incremental sampling step over target poses.
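The wavelet-based feature extraction step can be sketched concretely. Assuming Haar wavelets for illustration (the abstract does not name the wavelet family), one level of a 2D Haar transform splits an image into a low-pass approximation (LL) and three detail subbands (LH, HL, HH), and the detail subbands carry the edge and texture information the abstract refers to.

```python
def haar2d(img):
    """One-level 2D Haar transform of an even-sized 2D list.
    Returns the LL, LH, HL, HH subbands, each half the input size."""
    h, w = len(img) // 2, len(img[0]) // 2
    LL = [[0.0] * w for _ in range(h)]
    LH = [[0.0] * w for _ in range(h)]
    HL = [[0.0] * w for _ in range(h)]
    HH = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            a = img[2 * i][2 * j]
            b = img[2 * i][2 * j + 1]
            c = img[2 * i + 1][2 * j]
            d = img[2 * i + 1][2 * j + 1]
            LL[i][j] = (a + b + c + d) / 4.0   # approximation
            LH[i][j] = (a - b + c - d) / 4.0   # horizontal (vertical-edge) detail
            HL[i][j] = (a + b - c - d) / 4.0   # vertical (horizontal-edge) detail
            HH[i][j] = (a - b - c + d) / 4.0   # diagonal detail
    return LL, LH, HL, HH
```

A flat patch produces zero detail coefficients, while a vertical edge shows up only in the LH subband, which is the behavior that makes these coefficients useful edge features.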


2019
Vol 2019
pp. 1-9
Author(s):
Yinjie Xie
Wenxin Dai
Zhenxin Hu
Yijing Liu
Chuan Li
...

Among the many improved convolutional neural network (CNN) architectures for optical image classification, only a few have been applied to synthetic aperture radar (SAR) automatic target recognition (ATR). One main reason is that directly transferring these advanced architectures from optical to SAR images easily leads to overfitting, owing to the limited size of SAR datasets and the fewer features of SAR images relative to optical images. Thus, based on the characteristics of the SAR image, we propose a novel deep convolutional neural network architecture named umbrella. Its framework consists of two alternating CNN-layer blocks. One block is a fusion of six 3-layer paths, used to extract diverse-level features from different convolution layers. The other block, composed of convolution layers and pooling layers, is mainly used to reduce dimensions and extract hierarchical feature information. The combination of the two blocks extracts rich features at different spatial scales while alleviating overfitting. The performance of the umbrella model was validated on the Moving and Stationary Target Acquisition and Recognition (MSTAR) benchmark dataset. The architecture achieves higher than 99% accuracy in classifying 10 target classes and higher than 96% accuracy in classifying 8 variants of the T72 tank, even when targets appear at diverse positions in the image. The accuracy of our umbrella model is superior to that of current networks applied to MSTAR classification. The results show that the umbrella architecture possesses a very robust generalization capability and holds promise for SAR ATR.
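The two roles the abstract assigns to the blocks, dimension reduction via pooling and fusion of parallel paths, can be sketched in miniature. This is an illustrative toy, not the paper's actual layer configuration: a 2x2 max-pooling pass halves each spatial dimension, and an umbrella-style fusion simply concatenates the flattened outputs of several parallel paths into one feature vector.

```python
def max_pool2x2(fmap):
    """Halve each spatial dimension of a feature map by 2x2 max pooling."""
    h, w = len(fmap) // 2, len(fmap[0]) // 2
    return [[max(fmap[2 * i][2 * j], fmap[2 * i][2 * j + 1],
                 fmap[2 * i + 1][2 * j], fmap[2 * i + 1][2 * j + 1])
             for j in range(w)] for i in range(h)]

def fuse_paths(paths):
    """Fusion-block sketch: concatenate the flattened outputs of
    several parallel paths into one feature vector."""
    return [v for fmap in paths for row in fmap for v in row]
```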


1998
Author(s):
Stephen A. Stanhope
Eric R. Keydel
Wayne D. Williams
Vasik G. Rajlich
Russell Sieron

2021
Vol 13 (17)
pp. 3493
Author(s):
Jifang Pei
Zhiyong Wang
Xueping Sun
Weibo Huo
Yin Zhang
...

Synthetic aperture radar (SAR) is an advanced microwave imaging system of great importance. The recognition of real-world targets from SAR images, i.e., automatic target recognition (ATR), is an attractive but challenging issue. The majority of existing SAR ATR methods are designed for single-view SAR images. However, multiview SAR images contain more abundant classification information than single-view SAR images, which benefits automatic target classification and recognition. This paper proposes an end-to-end deep feature extraction and fusion network (FEF-Net) that can effectively exploit recognition information from multiview SAR images and can boost the target recognition performance. The proposed FEF-Net is based on a multiple-input network structure with some distinct and useful learning modules, such as deformable convolution and squeeze-and-excitation (SE). Multiview recognition information can be effectively extracted and fused with these modules. Therefore, excellent multiview SAR target recognition performance can be achieved by the proposed FEF-Net. The superiority of the proposed FEF-Net was validated based on experiments with the moving and stationary target acquisition and recognition (MSTAR) dataset.
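The squeeze-and-excitation (SE) module mentioned in the abstract can be sketched as follows. This is a simplified reading, not FEF-Net's implementation: each channel is squeezed to a scalar by global average pooling, passed through a gating nonlinearity, and used to rescale that channel. A real SE block learns two fully connected layers around the sigmoid; here the gate is just the sigmoid of the channel mean.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def se_rescale(channels):
    """channels: list of 2D feature maps (lists of lists).
    Returns the channel-rescaled maps (simplified SE, no learned weights)."""
    scaled = []
    for fmap in channels:
        n = sum(len(row) for row in fmap)
        mean = sum(v for row in fmap for v in row) / n    # squeeze
        gate = sigmoid(mean)                              # excitation gate
        scaled.append([[v * gate for v in row] for row in fmap])
    return scaled
```

A strongly activated channel receives a gate near 1 and passes almost unchanged, while a weakly activated channel is attenuated, which is the channel-attention effect SE is designed to provide.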


1991
Author(s):
Jacques G. Verly
Richard L. Delanoy
Dan E. Dudgeon

2011
Vol 187
pp. 319-325
Author(s):
Wen Ming Cao
Xiong Feng Li
Li Juan Pu

Biometric pattern recognition aims to find the best coverage of each class's sample distribution in the feature space. This paper employs geometric algebra to determine the local continuum (connected) directions and connected paths of same-class SAR image targets, treated as complex geometric bodies in a high-dimensional space. We study the properties of the GA neuron of the coverage body in high-dimensional space and develop a SAR ATR (SAR automatic target recognition) technique that achieves a high recognition rate from a small amount of data. Finally, we verify the algorithm on the MSTAR (Moving and Stationary Target Acquisition and Recognition) [1] dataset.


Author(s):
Yongpeng Tao
Yu Jing
Cong Xu

Background: A synthetic aperture radar (SAR) automatic target recognition (ATR) method is proposed in this paper via the joint classification of the target region and shadow. Methods: Elliptical Fourier descriptors (EFDs) are used to describe the target region and shadow extracted from the original SAR image. In addition, the relative positions of the target region and shadow are represented by a constructed feature vector. The three feature vectors complement each other to provide a more comprehensive description of the target's physical properties, e.g., size and shape. In the classification stage, the three feature vectors are jointly classified based on joint sparse representation (JSR). JSR is a multi-task learning algorithm that can not only represent each component properly but also exploit the inner correlations between components. Finally, the target type is determined as the class with the minimum reconstruction error. Results: Experiments were conducted on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset. The proposed method achieves a high recognition accuracy of 96.86% on the 10-class recognition problem under the standard operating condition (SOC). Moreover, the robustness of the proposed method is superior to that of the reference methods under extended operating conditions (EOCs) such as configuration variance, depression angle variance, and noise corruption. Conclusion: The effectiveness and robustness of the proposed method are thus quantitatively demonstrated by the experimental results.
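The final decision rule described here, assigning the target to the class with the minimum reconstruction error, can be sketched directly. The class labels and error values below are illustrative placeholders, not results from the paper; in the actual method each per-class error would come from the JSR reconstruction of the three feature components (target region, shadow, relative position).

```python
def classify_by_reconstruction_error(errors):
    """errors: dict mapping class label -> list of per-component
    reconstruction errors. Returns the label with the minimal total error."""
    return min(errors, key=lambda label: sum(errors[label]))

# Hypothetical per-class errors for the three components.
decision = classify_by_reconstruction_error({
    "T72":   [0.12, 0.30, 0.05],   # total 0.47
    "BMP2":  [0.40, 0.25, 0.20],   # total 0.85
    "BTR70": [0.22, 0.28, 0.15],   # total 0.65
})
# decision == "T72", the class with the smallest total error
```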

