Underwater Target Recognition Based on Multi-Decision LOFAR Spectrum Enhancement: A Deep-Learning Approach

2021 ◽  
Vol 13 (10) ◽  
pp. 265
Author(s):  
Jie Chen ◽  
Bing Han ◽  
Xufeng Ma ◽  
Jian Zhang

Underwater target recognition is an important supporting technology for the development of marine resources; its performance is mainly limited by the purity of feature extraction and the universality of recognition schemes. The low-frequency analysis and recording (LOFAR) spectrum is one of the key features of an underwater target and can be used for feature extraction. However, complex underwater environmental noise and the extremely low signal-to-noise ratio of the target signal lead to breakpoints in the LOFAR spectrum, which seriously hinder underwater target recognition. To overcome this issue and further improve recognition performance, we adopted a deep-learning approach and propose a novel LOFAR spectrum enhancement (LSE)-based underwater target-recognition scheme, consisting of preprocessing, offline training, and online testing. In preprocessing, we design a multi-step decision algorithm for LOFAR spectrum enhancement that recovers the breakpoints in the spectrum. In offline training, the enhanced LOFAR spectrum is used as the input of a convolutional neural network (CNN), and a LOFAR-based CNN (LOFAR-CNN) for online recognition is developed. Taking advantage of the powerful feature-extraction capability of CNNs, the proposed LOFAR-CNN further improves recognition accuracy. Finally, extensive simulation results demonstrate that the LOFAR-CNN can achieve a recognition accuracy of 95.22%, outperforming state-of-the-art methods.
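As an illustrative sketch of the kind of LOFAR-style time-frequency input such a network consumes (not the authors' pipeline — the frame length, hop size, window, and dB scaling below are assumptions), a short-time FFT in NumPy:

```python
import numpy as np

def lofar_spectrum(signal, frame_len=256, hop=128):
    """LOFAR-style time-frequency spectrum: short-time FFT
    magnitudes of a 1-D signal, expressed in dB."""
    window = np.hanning(frame_len)
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * window
        mag = np.abs(np.fft.rfft(frame))
        frames.append(20 * np.log10(mag + 1e-12))  # dB scale
    return np.array(frames)  # shape: (n_frames, frame_len // 2 + 1)

# Example: a 50 Hz tone buried in noise, sampled at 1 kHz
fs = 1000
t = np.arange(fs * 2) / fs
x = np.sin(2 * np.pi * 50 * t) \
    + 0.5 * np.random.default_rng(0).standard_normal(len(t))
spec = lofar_spectrum(x)
```

Breakpoint recovery and the CNN classifier would operate on arrays of this shape; the enhancement step itself is specific to the paper and not reproduced here.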

Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1429
Author(s):  
Gang Hu ◽  
Kejun Wang ◽  
Liangliang Liu

Facing the complex marine environment, it is extremely challenging to perform underwater acoustic target feature extraction and recognition from ship-radiated noise. In this paper, firstly, taking the one-dimensional time-domain raw signal of the ship as the model input, a new deep neural network model for underwater target recognition is proposed. Depthwise separable convolution and time-dilated convolution are used for passive underwater acoustic target recognition for the first time. The proposed model performs automatic feature extraction from the raw ship-radiated noise and applies temporal attention during recognition. Secondly, measured data are used to evaluate the model, and cluster analysis and visualization are performed on the features it extracts. The results show that the extracted features exhibit good intra-class aggregation and inter-class separation. Furthermore, cross-fold validation is used to verify that the model does not overfit, improving its generalization ability. Finally, compared with traditional underwater acoustic target-recognition methods, the model's accuracy is significantly improved, by 6.8%.
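The depthwise separable and time-dilated convolutions this model builds on can be sketched in plain NumPy (kernel sizes, dilation rates, and channel counts below are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def depthwise_separable_conv1d(x, depth_k, point_w, dilation=1):
    """x: (channels, length). depth_k: (channels, k) per-channel kernels.
    point_w: (out_channels, channels) 1x1 mixing weights.
    The depthwise step filters each channel independently (optionally
    with time dilation); the pointwise step mixes channels."""
    c, length = x.shape
    k = depth_k.shape[1]
    span = (k - 1) * dilation + 1          # receptive field of one output
    out_len = length - span + 1
    depth_out = np.zeros((c, out_len))
    for ch in range(c):
        for i in range(out_len):
            # dilated taps: i, i + dilation, ..., i + (k - 1) * dilation
            depth_out[ch, i] = np.dot(x[ch, i:i + span:dilation],
                                      depth_k[ch])
    return point_w @ depth_out             # (out_channels, out_len)
```

Compared with a full convolution, the factorization into depthwise + pointwise steps cuts the parameter count roughly by a factor of the kernel size, which is why it suits raw-waveform inputs.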


2021 ◽  
Vol 2083 (4) ◽  
pp. 042007
Author(s):  
Xiaowen Liu ◽  
Juncheng Lei

Abstract Image recognition technology mainly comprises image feature extraction and classification. Feature extraction is the key link and largely determines recognition performance. Deep learning builds a hierarchical model structure, loosely modeled on the human brain, that extracts features from the data layer by layer; applying it to image recognition can further improve accuracy. Based on the idea of clustering, this article establishes a multi-mixture Gaussian model for engineering image information in RGB color space through offline learning and the expectation-maximization algorithm, obtaining a multi-mixture cluster representation of the image information. A sparse Gaussian machine-learning model in the YCrCb color space is then used to quickly learn the distribution of engineering images online, and an engineering image recognizer based on multi-color-space information is designed.
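A minimal sketch of the EM fit for such a mixture model, assuming diagonal covariances and a deterministic initialization for brevity (the paper's exact model, sparsity handling, and color-space processing may differ):

```python
import numpy as np

def gmm_em(pixels, n_comp=2, n_iter=30):
    """Fit a diagonal-covariance Gaussian mixture to (N, 3) color pixels
    with expectation-maximization; returns weights, means, variances."""
    n, d = pixels.shape
    # deterministic init: components spread along overall pixel intensity
    order = np.argsort(pixels.sum(axis=1))
    idx = np.linspace(0, n - 1, n_comp).astype(int)
    means = pixels[order[idx]].astype(float)
    var = np.tile(pixels.var(axis=0) + 1e-6, (n_comp, 1))
    w = np.full(n_comp, 1.0 / n_comp)
    for _ in range(n_iter):
        # E-step: per-pixel responsibilities under each diagonal Gaussian
        logp = (-0.5 * (((pixels[:, None, :] - means) ** 2) / var
                        + np.log(2 * np.pi * var)).sum(-1) + np.log(w))
        logp -= logp.max(axis=1, keepdims=True)   # numerical stability
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means, variances from responsibilities
        nk = r.sum(axis=0) + 1e-12
        w = nk / n
        means = (r.T @ pixels) / nk[:, None]
        var = (r.T @ pixels ** 2) / nk[:, None] - means ** 2 + 1e-6
    return w, means, var
```

The fitted means act as the cluster representation of the image's color content; classification would compare new pixels against these components.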


2014 ◽  
Vol 989-994 ◽  
pp. 4187-4190 ◽  
Author(s):  
Lin Zhang

An adaptive gender recognition method is proposed in this paper. First, a multiwavelet transform is applied to the face image to obtain its low-frequency information; features are then extracted from this low-frequency information using compressive sensing (CS), and an extreme learning machine (ELM) finally performs the gender recognition. During feature extraction, a genetic algorithm (GA) selects the number of CS measurements that yields the highest recognition rate, so the method adaptively attains optimal performance. Experimental results show that, compared with PCA and LDA, the new method improves recognition accuracy substantially.
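The ELM classifier at the end of this pipeline has a simple closed form: a fixed random hidden layer followed by a least-squares readout. A hedged sketch (hidden-layer size and activation are assumptions; the multiwavelet, CS, and GA stages are omitted):

```python
import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    """Extreme learning machine: random hidden projection, then a
    pseudo-inverse (least-squares) readout.
    X: (N, d) feature vectors, y: (N,) labels in {0, 1}."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # fixed, never trained
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                 # random nonlinear features
    beta = np.linalg.pinv(H) @ y           # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return (np.tanh(X @ W + b) @ beta > 0.5).astype(int)
```

Because only `beta` is solved for, training is a single pseudo-inverse rather than iterative gradient descent, which is the appeal of ELMs in this kind of pipeline.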


Geophysics ◽  
2019 ◽  
Vol 84 (6) ◽  
pp. R989-R1001 ◽  
Author(s):  
Oleg Ovcharenko ◽  
Vladimir Kazei ◽  
Mahesh Kalita ◽  
Daniel Peter ◽  
Tariq Alkhalifah

Low-frequency seismic data are crucial for convergence of full-waveform inversion (FWI) to reliable subsurface properties. However, it is challenging to acquire field data with an appropriate signal-to-noise ratio in the low-frequency part of the spectrum. We have extrapolated low-frequency data from the respective higher frequency components of the seismic wavefield by using deep learning. Through wavenumber analysis, we find that extrapolation per shot gather has broader applicability than per-trace extrapolation. We numerically simulate marine seismic surveys for random subsurface models and train a deep convolutional neural network to derive a mapping between high and low frequencies. The trained network is then tested on sections from the BP and SEAM Phase I benchmark models. Our results indicate that we are able to recover 0.25 Hz data from the 2 to 4.5 Hz frequencies. We also determine that the extrapolated data are accurate enough for FWI application.
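The low/high band split that defines the network's input-target pair can be illustrated with FFT masking (the 4.5 Hz cutoff follows the frequencies quoted above; the extrapolation network itself is not reproduced here):

```python
import numpy as np

def band_split(trace, fs, low_cut=4.5):
    """Split a seismic trace into low-band (<= low_cut Hz) and high-band
    components via FFT masking -- the kind of target/input pair a
    frequency-extrapolation network would be trained on."""
    spec = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fs)
    low = spec * (freqs <= low_cut)
    high = spec * (freqs > low_cut)
    return np.fft.irfft(low, len(trace)), np.fft.irfft(high, len(trace))
```

In training, the network sees only the high-band component of each simulated shot gather and learns to predict the withheld low band.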


2012 ◽  
Vol 2012 ◽  
pp. 1-15 ◽  
Author(s):  
Huadong Meng ◽  
Yimin Wei ◽  
Xuhua Gong ◽  
Yimin Liu ◽  
Xiqin Wang

We address the problem of radar phase-coded waveform design for extended target recognition in the presence of colored Gaussian disturbance. Phase-coded waveforms are selected since they can fully exploit the transmit power with sufficient variability. An important constraint, target detection performance, is considered to meet the practical requirements. The waveform is designed to achieve maximum recognition performance under a control on the achievable signal-to-noise ratio (SNR) of every possible target hypothesis. We formulate the code design in terms of a nonconvex, NP-hard quadratic optimization problem in the cases of both continuous and discrete phases. Techniques based on semidefinite relaxation (SDR) and randomization are proposed to approximate the optimal solutions. Simulation results show that the recognition performance and the detection requirements are well balanced and accurate approximations are achieved.
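A simplified stand-in for the SDR-plus-randomization idea can be sketched by using the dominant eigenvector of the quadratic form as a rank-1 surrogate for the SDR solution, then rounding randomized perturbations onto the unit-modulus (constant-envelope) constraint. A genuine SDR solve would use an SDP solver, and the noise scale and trial count below are assumptions:

```python
import numpy as np

def randomized_phase_code(A, n_trials=200, seed=0):
    """Approximately maximize x^H A x over unit-modulus codes x
    (A Hermitian). Eigenvector surrogate + randomized rounding."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    # dominant eigenvector as a rank-1 surrogate for the SDR solution
    vals, vecs = np.linalg.eigh((A + A.conj().T) / 2)
    v = vecs[:, -1]
    best, best_val = None, -np.inf
    for _ in range(n_trials):
        z = v + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
        x = z / np.abs(z)                  # project onto unit modulus
        val = np.real(x.conj() @ A @ x)
        if val > best_val:
            best, best_val = x, val
    return best, best_val
```

The detection-performance (SNR) constraints of the paper would enter as additional conditions on the admissible codes; they are omitted in this sketch.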


Author(s):  
Akey Sungheetha ◽  
Rajesh Sharma R

Over the last decade, remote sensing technology has advanced dramatically, resulting in significant improvements in image quality, data volume, and application usage. These images have essential applications since they support quick and easy interpretation. Many standard detection algorithms fail to accurately categorize a scene from a remote sensing image recorded from the earth. A method using bilinear convolutional neural networks produces a lighter-weight set of models that achieves better visual recognition in remote sensing images through fine-grained techniques. The proposed hybrid method extracts scene feature information twice from remote sensing images for improved recognition. In layman's terms, these features are raw and have only a single defined frame, so they allow only basic recognition from remote sensing images. This research work proposes a double-feature-extraction hybrid deep-learning approach to classify remotely sensed image scenes based on feature-abstraction techniques. The proposed algorithm is also applied to the feature values to convert them, after many product operations, into feature vectors with pure black-and-white values. The next stage, pooling and normalization, follows the CNN feature-extraction process. This research work develops a novel hybrid framework that attains a better level of accuracy and recognition rate than prior models.
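Bilinear pooling — the per-location outer product of two CNN feature streams, sum-pooled over locations and then normalized — can be sketched as follows (the feature-map shapes and the signed-square-root normalization are common conventions assumed here, not details taken from this work):

```python
import numpy as np

def bilinear_pool(feat_a, feat_b):
    """Bilinear pooling of two feature maps from parallel CNN streams.
    feat_a: (ca, h, w), feat_b: (cb, h, w). At each spatial location,
    take the outer product of the channel vectors, then sum over
    locations; finish with signed-sqrt and L2 normalization."""
    ca, h, w = feat_a.shape
    cb = feat_b.shape[0]
    A = feat_a.reshape(ca, h * w)
    B = feat_b.reshape(cb, h * w)
    pooled = A @ B.T                  # (ca, cb): sum of outer products
    vec = pooled.flatten()
    vec = np.sign(vec) * np.sqrt(np.abs(vec))
    return vec / (np.linalg.norm(vec) + 1e-12)
```

The resulting vector captures pairwise channel interactions between the two streams, which is what makes bilinear models effective for fine-grained scene recognition.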

