Information-theoretic bounds on target recognition performance

Author(s):  
Avinash Jain ◽  
Pierre Moulin ◽  
Michael I. Miller ◽  
Kannan Ramchandran
2021 ◽  
Vol 13 (10) ◽  
pp. 265
Author(s):  
Jie Chen ◽  
Bing Han ◽  
Xufeng Ma ◽  
Jian Zhang

Underwater target recognition is an important supporting technology for the development of marine resources, but it is mainly limited by the purity of feature extraction and the generality of recognition schemes. The low-frequency analysis and recording (LOFAR) spectrum is one of the key features of an underwater target and can be used for feature extraction. However, complex underwater environmental noise and the extremely low signal-to-noise ratio of the target signal cause breakpoints in the LOFAR spectrum, which seriously hinder underwater target recognition. To overcome this issue and further improve recognition performance, we adopt a deep-learning approach and propose a novel LOFAR spectrum enhancement (LSE)-based underwater target-recognition scheme, which consists of preprocessing, offline training, and online testing. In preprocessing, we design a multi-step-decision LOFAR spectrum enhancement algorithm to recover the breakpoints in the LOFAR spectrum. In offline training, the enhanced LOFAR spectrum is used as the input of a convolutional neural network (CNN), and a LOFAR-based CNN (LOFAR-CNN) for online recognition is developed. Taking advantage of the powerful feature-extraction capability of CNNs, the proposed LOFAR-CNN further improves recognition accuracy. Finally, extensive simulation results demonstrate that the LOFAR-CNN achieves a recognition accuracy of 95.22%, outperforming state-of-the-art methods.
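The abstract does not specify the multi-step decision algorithm itself; as a minimal sketch of the preprocessing goal (recovering breakpoint bins in a LOFAR spectrogram), the example below fills dropped time-frequency bins by linear interpolation along the time axis. The function name, threshold parameter, and interpolation choice are all hypothetical stand-ins, not the paper's method.

```python
import numpy as np

def recover_breakpoints(lofar, min_valid=1e-6):
    """Fill near-zero 'breakpoint' bins in a LOFAR spectrogram
    (time x frequency) by linear interpolation along the time axis.
    Illustrative stand-in for the paper's multi-step decision algorithm."""
    out = lofar.astype(float).copy()
    t = np.arange(out.shape[0])
    for f in range(out.shape[1]):
        col = out[:, f]
        valid = col > min_valid          # bins that survived the noise
        if 0 < valid.sum() < len(col):   # some gaps, some anchors
            col[~valid] = np.interp(t[~valid], t[valid], col[valid])
            out[:, f] = col
    return out

# A 5-frame, single-bin frequency track with a dropout at frame 2:
track = np.array([[1.0], [2.0], [0.0], [4.0], [5.0]])
print(recover_breakpoints(track).ravel())  # [1. 2. 3. 4. 5.]
```

Real LOFAR enhancement must also distinguish genuine signal gaps from noise-only regions, which is where a multi-step decision rule would come in.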


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Junhua Wang ◽  
Yuan Jiang

For the problem of synthetic aperture radar (SAR) image target recognition, a method based on combining multilevel deep features is proposed. A residual network (ResNet) is used to learn multilevel deep features of SAR images. Based on a similarity measure, the multilevel deep features are clustered into several feature sets. Each feature set is then characterized and classified by joint sparse representation (JSR), producing a per-set output. Finally, the results of the different feature sets are combined by weighted fusion to obtain the target recognition result. The proposed method effectively combines the advantages of ResNet and JSR in feature extraction and classification, improving overall recognition performance. Experiments and analysis are carried out on the sample-rich MSTAR dataset. The results show that the proposed method achieves superior performance on 10 types of target samples under the standard operating condition (SOC) as well as under noise interference and occlusion, which verifies its effectiveness.
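The final fusion step can be sketched in isolation: assuming each clustered feature set has already been scored by its JSR classifier, the per-class scores are combined by a weighted sum and the fused maximum decides the label. The score values and weights below are hypothetical; feature clustering and the JSR solver itself are out of scope.

```python
import numpy as np

def weighted_fusion(score_sets, weights):
    """Combine per-class score vectors from several feature sets by a
    weighted sum and return the winning class index plus fused scores.
    `score_sets`: list of 1-D arrays (one per set, higher = more likely);
    `weights`: per-set reliability weights, assumed to sum to 1."""
    fused = sum(w * np.asarray(s) for w, s in zip(weights, score_sets))
    return int(np.argmax(fused)), fused

# Three hypothetical feature sets scoring four target classes:
scores = [np.array([0.1, 0.6, 0.2, 0.1]),
          np.array([0.2, 0.5, 0.2, 0.1]),
          np.array([0.4, 0.3, 0.2, 0.1])]
label, fused = weighted_fusion(scores, [0.5, 0.3, 0.2])
print(label)  # 1
```

In a JSR setting the natural score is a negative reconstruction residual per class, so that smaller residuals map to larger fused scores.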


Author(s):  
Corwin A. Bennett ◽  
Samuel H. Winterstein ◽  
Robert E. Kent

The terminology and literature in the area of image quality and target recognition are reviewed. An experiment is described in which subjects recognized strategic and tactical targets in aerial photographs with controlled image degradations. Some findings are: recognition performance is only moderate under representative conditions; target types differ widely in their recognizability; knowledge of a target's presence (briefing) greatly aids recognition; better resolution means better performance; enlarging the image so that a line of resolution subtends more than three minutes of arc hinders recognition; and grain size should be kept below 20 seconds of arc. It is suggested that eventual application of the modulation transfer function approach to the measurement of image quality and target characteristics will enable a quantitative subsuming of the various quality-size relationships. More attention needs to be paid in recognition research to suitable task definition, target description, and subject selection.
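The three-arcminute and 20-arcsecond limits are angular sizes at the observer's eye, so whether a given display respects them follows from feature size and viewing distance. The sketch below computes that visual angle; the 0.5 mm line width and 400 mm viewing distance are hypothetical numbers chosen for illustration, not values from the experiment.

```python
import math

def subtense_arcmin(size_mm, distance_mm):
    """Visual angle, in minutes of arc, subtended by a feature of the
    given size viewed from the given distance (exact small-angle form)."""
    return math.degrees(2 * math.atan(size_mm / (2 * distance_mm))) * 60

# A 0.5 mm resolution line viewed from 400 mm:
angle = subtense_arcmin(0.5, 400)
print(round(angle, 2))  # ~4.3 arcmin, past the 3-arcmin enlargement limit
```

The same function with sizes in seconds of arc (multiply by 60) covers the grain-size criterion.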


Author(s):  
Sehchang Hah ◽  
Deborah A. Reisweber ◽  
Jose A. Picart ◽  
Harry Zwick

Sensors ◽  
2020 ◽  
Vol 20 (6) ◽  
pp. 1724
Author(s):  
Zilu Ying ◽  
Chen Xuan ◽  
Yikui Zhai ◽  
Bing Sun ◽  
Jingwen Li ◽  
...  

Since Synthetic Aperture Radar (SAR) targets are corrupted by coherent speckle noise, traditional deep-learning models struggle to extract their key features and incur high computational complexity. To solve this problem, an effective lightweight Convolutional Neural Network (CNN) model incorporating transfer learning is proposed for SAR target recognition. In this work, we first propose the Atrous-Inception module, which combines atrous (dilated) convolution with the Inception module to obtain rich global receptive fields while strictly controlling the parameter count, yielding a lightweight network architecture. Second, a transfer-learning strategy is used to transfer prior knowledge from optical, non-optical, and hybrid optical/non-optical domains to the SAR target recognition task, thereby improving the model's performance on small-sample SAR target datasets. Finally, the proposed model achieves a recognition accuracy of 97.97% on the ten-class MSTAR dataset under standard operating conditions, reaching a mainstream target recognition rate. Meanwhile, the method shows strong robustness and generalization on small, randomly sampled SAR target datasets.
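The receptive-field benefit of atrous convolution can be shown numerically: stacking small kernels with growing dilation widens coverage without adding parameters. The sketch below computes the receptive field of stacked stride-1 1-D convolutions; the three-layer configuration is a hypothetical illustration, not the paper's Atrous-Inception module.

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field (in input samples) of a stack of stride-1
    1-D convolutions: each layer with kernel k and dilation d
    adds (k - 1) * d samples to the field."""
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d
    return rf

# Three 3-tap layers: ordinary vs. atrous with dilations 1, 2, 4.
# Same parameter count in both stacks, more than double the coverage.
plain  = receptive_field([3, 3, 3], [1, 1, 1])
atrous = receptive_field([3, 3, 3], [1, 2, 4])
print(plain, atrous)  # 7 15
```

An Inception-style module would run several such dilated branches in parallel and concatenate them, mixing local and global context at a fixed parameter budget.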

