Target identification in the time-frequency domain

Author(s):  
G. C. Gaunaurd ◽  
H. C. Strifors
1997 ◽  
Vol 50 (3) ◽  
pp. 131-148

The article presents an overview of transient resonance scattering, emphasizing one of its most important applications: the active classification of sonar and radar targets. It discusses classical techniques such as the Watson-Sommerfeld method (WSM), which transforms the classical, slowly convergent normal-mode series in the frequency domain into rapidly convergent series in the domain of the complex generalization, λ, of the mode order, n. Owing to its analytical complexity, and with the advent of computers that can overcome slow-convergence difficulties, the WSM is not as popular today as it once was; its main advantage remains its ability to extract physical interpretations from the mathematical results. Resonance scattering focuses on the resonance spectral region of targets. Of these, the penetrable (i.e., elastic or dielectric) ones are the subjects of main interest here, particularly those insonified/illuminated by (finite) pulses of various types. The authors describe the exact isolation and extraction of the resonances contained within the scattering cross-section of a penetrable target by subtraction of suitable geometrical background contributions. These backgrounds are often given by the solution for an identical but impenetrable target; this seems to be the main usefulness of impenetrable-target solutions in underwater acoustics, where they are generally physically unrealistic idealizations. The resonances identify the target as its fingerprint. Examples are shown to illustrate various transient scattering phenomena in acoustics and electromagnetism. The article shows exactly how the broadband pulses emitted by an impulse sonar (or radar) extract a substantial number of resonances from the echoes of penetrable targets, and how these resonances are actually used to identify the physical characteristics of the various analyzed targets, thus, indeed, identifying them.
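The background-subtraction idea can be illustrated with a toy numerical sketch. This is not the authors' exact procedure: here a hypothetical smooth "geometrical" background and two invented Lorentzian resonances stand in for the impenetrable-target solution and the penetrable target's modal response, and subtracting the former from the total spectrum leaves the resonance fingerprint.

```python
import numpy as np

def isolate_resonances(total_spectrum, background_spectrum):
    """Toy resonance isolation: subtract the background contribution
    (e.g., that of an identical but impenetrable target) from the total
    response of the penetrable target, leaving the resonance part."""
    return total_spectrum - background_spectrum

def lorentzian(x, x0, gamma):
    # Simple resonance line shape centered at x0 with half-width gamma.
    return gamma**2 / ((x - x0)**2 + gamma**2)

# Synthetic spectrum over a dimensionless frequency axis (ka).
ka = np.linspace(0.1, 10.0, 1000)
background = 1.0 / (1.0 + 0.1 * ka)                     # invented smooth background
resonances = 0.5 * lorentzian(ka, 3.0, 0.05) + 0.3 * lorentzian(ka, 7.2, 0.08)
total = background + resonances

residual = isolate_resonances(total, background)
dominant = ka[np.argmax(residual)]   # location of the strongest resonance
```

After subtraction, the residual spectrum peaks at the invented resonance locations (here near ka = 3.0 and 7.2), which is the sense in which the resonances serve as a fingerprint.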
The application of a novel signal-processing technique that analyzes the echoes in the joint time-frequency domain is also examined and shows much promise for target identification. The authors used several Wigner-type distributions to generate simulated and experimental echo displays in time-frequency that demonstrate the advantages of the approach. The present overview supplements two earlier ones [23, 48] on closely related subjects. The article includes 101 references.
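As a minimal sketch of the kind of Wigner-type display the abstract refers to (the signal and parameters below are illustrative, not taken from the article), a discrete pseudo Wigner-Ville distribution can be computed by Fourier-transforming the instantaneous autocorrelation at each time instant; a linear chirp then appears as a tilted ridge in the time-frequency plane.

```python
import numpy as np

def wigner_ville(x):
    """Discrete pseudo Wigner-Ville distribution of an analytic signal x:
    for each time n, form the instantaneous autocorrelation
    x[n+m] * conj(x[n-m]) over the available lags m, then FFT over m."""
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        m_max = min(n, N - 1 - n)          # lags limited by the signal edges
        acf = np.zeros(N, dtype=complex)
        for m in range(-m_max, m_max + 1):
            acf[m % N] = x[n + m] * np.conj(x[n - m])
        W[n] = np.real(np.fft.fft(acf))    # Hermitian in m, so real-valued
    return W

# Illustrative analytic linear chirp: instantaneous frequency rises with time.
N = 128
t = np.arange(N)
chirp = np.exp(1j * 2 * np.pi * (0.05 * t + 0.001 * t**2))
W = wigner_ville(chirp)
ridge = np.argmax(W, axis=1)   # dominant frequency bin at each instant
```

The ridge climbs with time, tracking the chirp's instantaneous frequency; an echo display of this kind is what makes resonance features visible jointly in time and frequency. (Note the factor-of-two frequency scaling inherent to the Wigner-Ville lag kernel.)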


1996 ◽  
Author(s):  
Ismail I. Jouny ◽  
Passant V. Karunaratne ◽  
Moeness G. Amin

Electronics ◽  
2019 ◽  
Vol 8 (5) ◽  
pp. 535 ◽  
Author(s):  
Fei Gao ◽  
Teng Huang ◽  
Jun Wang ◽  
Jinping Sun ◽  
Amir Hussain ◽  
...  

Radars, as active detection sensors, are known to play an important role in various intelligent devices. Target recognition based on high-resolution range profiles (HRRPs) is an important way for radars to monitor targets of interest. Traditional recognition algorithms usually rely on a single feature, which makes it difficult to maintain recognition performance. In this paper, 2-D sequence features are extracted from the HRRP in several data domains: the time domain, the frequency domain, and the time-frequency domain. A novel target identification method is then proposed that combines a bidirectional Long Short-Term Memory (BLSTM) network and a Hidden Markov Model (HMM) to learn these multi-domain sequence features. Specifically, the multi-domain HRRP sequences are first extracted. Next, a new multi-input BLSTM is proposed to learn these sequences, which are then fed to a standard HMM classifier to learn multi-aspect features. Finally, the trained HMM is used to perform the recognition task. Extensive experiments are carried out on the publicly accessible, benchmark MSTAR database. The proposed algorithm achieves an identification accuracy of over 91%, with a lower false-alarm rate and higher identification confidence than several state-of-the-art techniques.
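The multi-domain sequence idea can be sketched as follows. This is our own hedged illustration, not the paper's feature pipeline: one HRRP vector is framed into overlapping segments, and three aligned sequences are produced from the time, frequency, and time-frequency domains, sharing a common time axis so they could feed parallel recurrent branches such as a multi-input BLSTM. The frame and hop sizes are arbitrary choices.

```python
import numpy as np

def multi_domain_features(hrrp, frame=16, hop=8):
    """Illustrative multi-domain sequence extraction from one HRRP vector
    (names and framing parameters are ours, not the paper's)."""
    # Time domain: overlapping frames of the magnitude range profile.
    frames = np.stack([hrrp[i:i + frame]
                       for i in range(0, len(hrrp) - frame + 1, hop)])
    time_seq = np.abs(frames)
    # Frequency domain: magnitude spectrum of each frame.
    freq_seq = np.abs(np.fft.rfft(frames, axis=1))
    # Time-frequency domain: log-spectrogram under the same framing,
    # so all three sequences are aligned along the time axis.
    tf_seq = np.log1p(freq_seq)
    return time_seq, freq_seq, tf_seq

hrrp = np.random.default_rng(0).random(128)   # stand-in range profile
t_seq, f_seq, tf_seq = multi_domain_features(hrrp)
```

Each sequence is a 2-D array (frames x features), which is the shape a per-domain recurrent branch would consume before the branch outputs are merged for the downstream HMM classifier.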


Author(s):  
Wentao Xie ◽  
Qian Zhang ◽  
Jin Zhang

Smart eyewear (e.g., AR glasses) is considered to be the next big breakthrough for wearable devices. Interaction with state-of-the-art smart eyewear mostly relies on a touchpad, which is obtrusive and not user-friendly. In this work, we propose a novel acoustic-based upper facial action (UFA) recognition system that serves as a hands-free interaction mechanism for smart eyewear. The proposed system is a glass-mounted acoustic sensing system with several pairs of commercial speakers and microphones that sense UFAs. There are two main challenges in designing the system. The first is that the system operates in a severe multipath environment, and the received signal can be strongly attenuated by frequency-selective fading, which degrades the system's performance. To overcome this challenge, we design an Orthogonal Frequency Division Multiplexing (OFDM)-based channel state information (CSI) estimation scheme that measures the phase changes caused by a facial action while mitigating the frequency-selective fading. The second challenge is that, because the skin deformation caused by a facial action is tiny, the received signal has very small variations, so it is hard to derive useful information from it directly. To resolve this challenge, we apply time-frequency analysis to derive a time-frequency domain signal from the CSI, and we show that this signal contains distinct patterns for different UFAs. Furthermore, we design a Convolutional Neural Network (CNN) to extract high-level features from the time-frequency patterns and classify them into six UFAs, namely cheek-raiser, brow-raiser, brow-lower, wink, blink, and neutral. We evaluate the performance of our system through experiments on data collected from 26 subjects. The experimental results show that our system can recognize the six UFAs with an average F1-score of 0.92.
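The core of an OFDM-based CSI estimate can be sketched in a few lines. This is a generic least-squares pilot estimator under an invented channel, not the paper's scheme: dividing the received block's subcarrier values by the known transmitted symbols yields a per-subcarrier channel estimate, and its phase is what a facial action would perturb. Because each subcarrier is narrow, a deep frequency-selective fade corrupts only a few of them while the phase can still be tracked on the rest.

```python
import numpy as np

def estimate_csi_phase(tx_symbols, rx_samples, n_sub=64):
    """Generic least-squares OFDM CSI estimation (illustrative, not the
    paper's exact scheme): FFT the received block, divide by the known
    pilot symbols, and return the per-subcarrier channel phase."""
    rx_freq = np.fft.fft(rx_samples, n_sub)
    csi = rx_freq / tx_symbols          # per-subcarrier channel estimate
    return np.angle(csi)

# Simulated link: unit-power pilots through an invented phase-only channel.
n_sub = 64
tx = np.exp(1j * 2 * np.pi * np.random.default_rng(1).random(n_sub))
true_phase = np.linspace(-1.0, 1.0, n_sub)   # hypothetical channel phase
rx_time = np.fft.ifft(tx * np.exp(1j * true_phase))
est = estimate_csi_phase(tx, rx_time, n_sub)
```

Tracking `est` over successive OFDM blocks would expose the small phase changes that skin deformation induces, which the time-frequency analysis then turns into discriminative UFA patterns.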

