Deep Learning-Based Ultrasonic Testing to Evaluate the Porosity of Additively Manufactured Parts with Rough Surfaces

Metals ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. 290
Author(s):  
Seong-Hyun Park ◽  
Jung-Yean Hong ◽  
Taeho Ha ◽  
Sungho Choi ◽  
Kyung-Young Jhang

Ultrasonic testing (UT) has been actively studied to evaluate the porosity of additively manufactured parts. Currently, ultrasonic measurements of as-deposited parts with a rough surface remain problematic because the surface lowers the signal-to-noise ratio (SNR) of the ultrasonic signals, which degrades UT performance. In this study, various deep learning (DL) techniques that can effectively extract the features of defects, even from signals with a low SNR, were applied to UT, and their performance in evaluating the porosity of additively manufactured parts with rough surfaces was investigated. Experimentally, the effects of additive-manufacturing processing conditions on the resulting porosity were first analyzed using both optical and scanning acoustic microscopy. Second, convolutional neural network (CNN), deep neural network, and multi-layer perceptron models were trained using time-domain ultrasonic signals obtained from additively manufactured specimens with various levels of porosity and surface roughness. The experimental results showed that all the models could evaluate porosity accurately, even for the as-deposited specimens, whereas conventional UT could not be applied because of the low SNR. In particular, the CNN delivered the best performance, with an accuracy of 94.5%. The generalization performance on newly manufactured as-deposited specimens was also high, at 90%.
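The way a 1-D CNN extracts defect features from noisy time-domain ultrasonic signals can be illustrated with a minimal sketch; the signal model, filter count, and layer sizes below are illustrative assumptions, not the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic time-domain A-scan: an echo buried in surface-roughness
# noise (low SNR), 256 samples. Purely illustrative.
t = np.arange(256)
echo = np.exp(-((t - 128) ** 2) / 50.0) * np.sin(0.8 * t)
signal = echo + 0.5 * rng.standard_normal(256)

def conv1d(x, kernels):
    """Valid-mode 1-D convolution with each kernel (a CNN feature layer)."""
    return np.stack([np.convolve(x, k, mode="valid") for k in kernels])

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(fmaps, size=4):
    """Non-overlapping max pooling along the time axis."""
    n = fmaps.shape[1] // size
    return fmaps[:, : n * size].reshape(fmaps.shape[0], n, size).max(axis=2)

# Random filters stand in for learned kernels.
kernels = rng.standard_normal((8, 9))
features = max_pool(relu(conv1d(signal, kernels)))
print(features.shape)  # 8 feature maps, downsampled in time
```

A classifier head on top of such pooled features is what maps the signal to a porosity level; pooling is what gives the features some robustness to the noise the rough surface introduces.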

2014 ◽  
Vol 687-691 ◽  
pp. 3822-3827
Author(s):  
Bin Liang ◽  
Yan Ping Bai

This paper introduces the basic mathematical model of the fuzzy neural network and the T-S model. The fuzzy neural network is applied to signal-noise separation of simulated signals, and evaluation indexes of its denoising effect are presented: the mean square error (MSE), signal-to-noise ratio (SNR), SNR gain, and the similarity between the denoised signal and the theoretical reference signal. The simulation results show that the algorithm separates signal from noise effectively in both high- and low-SNR environments. Finally, experiments on the second lake of the Fenhe River also validated the superiority and effectiveness of the algorithm.
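The evaluation indexes listed above (MSE, SNR, SNR gain, and similarity) can be computed directly. In this sketch a simple moving-average filter stands in for the fuzzy-neural-network denoiser, purely for illustration:

```python
import numpy as np

def mse(x, ref):
    """Mean square error against the clean reference signal."""
    return np.mean((x - ref) ** 2)

def snr_db(x, ref):
    """SNR of x relative to the clean reference, in dB."""
    noise = x - ref
    return 10 * np.log10(np.sum(ref ** 2) / np.sum(noise ** 2))

def similarity(x, ref):
    """Normalized correlation between x and the reference."""
    return np.dot(x, ref) / (np.linalg.norm(x) * np.linalg.norm(ref))

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1000)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.5 * rng.standard_normal(t.size)

# A moving-average filter stands in for the denoiser under evaluation.
denoised = np.convolve(noisy, np.ones(15) / 15, mode="same")

gain = snr_db(denoised, clean) - snr_db(noisy, clean)  # the SNR-gain index
print(round(gain, 2), round(similarity(denoised, clean), 3))
```

A positive SNR gain together with a similarity close to 1 is exactly the pattern the paper's indexes are meant to capture for a successful separation.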


2021 ◽  
Vol 11 (21) ◽  
pp. 9948
Author(s):  
Amira Echtioui ◽  
Ayoub Mlaouah ◽  
Wassim Zouch ◽  
Mohamed Ghorbel ◽  
Chokri Mhiri ◽  
...  

Recently, electroencephalography (EEG) motor imagery (MI) signals have received increasing attention because they can encode a person's intention to perform an action. Researchers have used MI signals to help people with partial or total paralysis control devices such as exoskeletons, wheelchairs, and prostheses, and even to drive independently. Therefore, classifying the motor imagery tasks in these signals is important for a Brain-Computer Interface (BCI) system. Building a good decoder for MI tasks from EEG signals is difficult due to the dynamic nature of the signal, its low signal-to-noise ratio, its complexity, and its dependence on sensor positions. In this paper, we investigate five multilayer methods for classifying MI tasks: proposed methods based on an Artificial Neural Network, Convolutional Neural Network 1 (CNN1), CNN2, CNN1 merged with CNN2, and a modified CNN1 merged with CNN2. These proposed methods use different spatial and temporal characteristics extracted from raw EEG data. We demonstrate that our proposed CNN1-based method, which uses spatial and frequency characteristics, outperforms state-of-the-art machine/deep learning techniques for EEG classification with an accuracy of 68.77% on the BCI Competition IV-2a dataset, which includes nine subjects performing four MI tasks (left/right hand, feet, and tongue). The experimental results demonstrate the feasibility of the proposed method for the classification of MI-EEG signals, and it can be applied successfully to BCI systems where the amount of data is large due to daily recording.
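A minimal sketch of extracting temporal and frequency characteristics from a multichannel EEG trial; the sampling rate, channel count, and band edges are illustrative assumptions (mu and beta rhythms are typical MI bands), not the paper's exact configuration:

```python
import numpy as np

def eeg_features(trial, fs=250, bands=((8, 12), (13, 30))):
    """Per-channel temporal (variance) and frequency (band-power) features.

    trial: array of shape (n_channels, n_samples).
    """
    n_ch, n = trial.shape
    var = trial.var(axis=1)                            # temporal feature
    spec = np.abs(np.fft.rfft(trial, axis=1)) ** 2     # power spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    powers = [spec[:, (freqs >= lo) & (freqs < hi)].sum(axis=1)
              for lo, hi in bands]                     # frequency features
    return np.concatenate([var] + powers)

rng = np.random.default_rng(2)
trial = rng.standard_normal((22, 1000))  # 22 channels, 4 s at 250 Hz
fv = eeg_features(trial)
print(fv.shape)  # (22 * 3,) = 66 features per trial
```

Feature vectors of this kind, one per trial, are what a CNN or ANN classifier would then map to one of the four MI task labels.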


Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6482
Author(s):  
Merlin M. Mendoza ◽  
Yu-Chi Chang ◽  
Alexei V. Dmitriev ◽  
Chia-Hsien Lin ◽  
Lung-Chih Tsai ◽  
...  

The technique of active ionospheric sounding by ionosondes requires sophisticated methods for the recovery of experimental data on ionograms. In this work, we applied an advanced deep learning algorithm for the identification and classification of signals from different ionospheric layers. We collected a dataset of 6131 manually labeled ionograms acquired from low-latitude ionosondes in Taiwan. In the ionograms, we distinguished 11 different classes of signals according to their ionospheric layers. We developed an artificial neural network, FC-DenseNet24, based on the FC-DenseNet convolutional neural network. We also developed a double-filtering algorithm to reduce incorrectly classified signals. This made it possible to successfully recover the sporadic E layer and the F2 layer from highly noise-contaminated ionograms whose mean signal-to-noise ratio was low (SNR = 1.43). The Intersection over Union (IoU) of the recovery of these two signal classes was greater than 0.6, higher than that of previously reported models. We also identified three factors that can lower the recovery accuracy: (1) smaller sample statistics; (2) mixing and overlapping of different signals; and (3) the compact shape of signals.
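The IoU score used to rate the recovery can be illustrated on toy segmentation masks (the masks here are hypothetical rectangles, not real ionogram traces):

```python
import numpy as np

def iou(pred, target):
    """Intersection over Union of two boolean segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return np.logical_and(pred, target).sum() / union

# Toy masks standing in for a predicted and a labeled signal trace.
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True   # 16 px
true = np.zeros((8, 8), dtype=bool); true[3:7, 3:7] = True   # 16 px
print(iou(pred, true))  # intersection 9 px, union 23 px -> ~0.391
```

An IoU above 0.6, as reported for the E-sporadic and F2 classes, means the predicted trace overlaps the labeled trace far more than these toy masks do.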


Author(s):  
Lutao Liu ◽  
Xinyu Li

Recently, due to the wide application of low probability of intercept (LPI) radar, many recognition approaches for LPI radar signal modulations have been proposed. However, facing an increasingly complex electromagnetic environment, most existing methods perform poorly at identifying different modulation types at low signal-to-noise ratio (SNR). This paper proposes an automatic recognition method for different LPI radar signal modulations. First, time-domain signals are converted to time-frequency images (TFIs) by the smooth pseudo-Wigner–Ville distribution. Then, these TFIs are fed into a designed triplet convolutional neural network (TCNN) to obtain high-dimensional feature vectors. In essence, the TCNN is a CNN whose parameters are optimized with a triplet loss during training. The triplet loss ensures that the distance between samples of different classes is greater than the distance between samples with the same label, improving the discriminability of the TCNN. Finally, a fully connected neural network is employed as the classifier to recognize the different modulation types. Simulations show that the overall recognition success rate reaches 94% at −10 dB, which proves that the proposed method has a strong discriminating capability for the recognition of different LPI radar signal modulations, even at low SNR.
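The triplet-loss constraint described above can be sketched on toy embedding vectors (the margin and the embeddings are illustrative, not TCNN outputs):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss: pull same-class embeddings together and push
    different-class embeddings at least `margin` further away."""
    d_pos = np.linalg.norm(anchor - positive)   # same-label distance
    d_neg = np.linalg.norm(anchor - negative)   # different-label distance
    return max(d_pos - d_neg + margin, 0.0)

# Toy embedding vectors, as a trained network might output for three TFIs.
anchor   = np.array([1.0, 0.0])
positive = np.array([1.1, 0.1])   # same modulation class as the anchor
negative = np.array([-1.0, 0.0])  # different modulation class

print(triplet_loss(anchor, positive, negative))  # 0.0: constraint satisfied
```

During training the loss is zero only when every negative is at least the margin farther from the anchor than the positive, which is what gives the learned features their class separability.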


2021 ◽  
Vol 9 (11) ◽  
pp. 1252
Author(s):  
Yufei Liu ◽  
Feng Zhou ◽  
Gang Qiao ◽  
Yunjiang Zhao ◽  
Guang Yang ◽  
...  

A deep learning-based cyclic shift keying spread spectrum (CSK-SS) underwater acoustic (UWA) communication system is proposed to improve on the performance of the conventional system at low signal-to-noise ratios and under multipath effects. The proposed system uses a long short-term memory (LSTM) neural network model as the receiving module. In the offline stage, the neural network is fed with communication signals passed through known channel impulse responses; in the online stage, it is used directly to demodulate the received signal, reducing the influence of the above factors. Numerical simulations and experimental data suggest that the deep learning-based CSK-SS UWA communication system communicates more reliably than a conventional system. In particular, the collected experimental data show that after preprocessing, when the communication rate is less than 180 bps, a bit error rate below 10⁻³ can be obtained at a signal-to-noise ratio of −8 dB.
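The LSTM recurrence at the heart of such a receiving module can be sketched as a single NumPy cell step; the layer sizes and random weights are illustrative assumptions, not the proposed model:

```python
import numpy as np

def lstm_step(x, h, c, W, b):
    """One LSTM cell step: four gates computed from input x and hidden h."""
    z = W @ np.concatenate([x, h]) + b
    i, f, g, o = np.split(z, 4)
    i, f, o = 1/(1+np.exp(-i)), 1/(1+np.exp(-f)), 1/(1+np.exp(-o))
    c_new = f * c + i * np.tanh(g)   # cell state carries long-term memory
    h_new = o * np.tanh(c_new)       # hidden state is the step's output
    return h_new, c_new

rng = np.random.default_rng(3)
n_in, n_hid = 4, 8                   # illustrative sizes
W = 0.1 * rng.standard_normal((4 * n_hid, n_in + n_hid))
b = np.zeros(4 * n_hid)

# Run a short sequence of received-signal samples through the cell.
h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.standard_normal((20, n_in)):
    h, c = lstm_step(x, h, c, W, b)
print(h.shape)  # (8,)
```

It is this memory across time steps that lets an LSTM receiver absorb multipath spread: echoes of earlier symbols remain visible in the cell state when later samples arrive.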


2019 ◽  
Author(s):  
Rami Cohen ◽  
Dima Ruinskiy ◽  
Janis Zickfeld ◽  
Hans IJzerman ◽  
Yizhar Lavner

In this chapter, we compare deep learning and classical approaches for detection of baby cry sounds in various domestic environments under challenging signal-to-noise ratio conditions. Automatic cry detection has applications in commercial products (such as baby remote monitors) as well as in medical and psycho-social research. We design and evaluate several convolutional neural network (CNN) architectures for baby cry detection, and compare their performance to that of classical machine-learning approaches, such as logistic regression and support vector machines. In addition to feed-forward CNNs, we analyze the performance of recurrent neural network (RNN) architectures, which are able to capture temporal behavior of acoustic events. We show that by carefully designing CNN architectures with specialized non-symmetric kernels, better results are obtained compared to common CNN architectures.
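The effect of a non-symmetric kernel can be sketched with a plain 2-D convolution over a spectrogram-shaped array; the kernel shape and input are illustrative assumptions, not the chapter's architecture:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2-D cross-correlation (one CNN filter, no padding)."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# A spectrogram-shaped input: 40 frequency bins x 100 time frames.
spectrogram = np.random.default_rng(4).random((40, 100))

# A tall, narrow kernel spans many frequency bins but few time frames,
# matching broadband, short-lived acoustic structure such as a cry burst.
tall_kernel = np.ones((9, 3)) / 27.0
out = conv2d(spectrogram, tall_kernel)
print(out.shape)  # (32, 98)
```

Choosing the kernel's aspect ratio to match the time-frequency shape of the target event is the design idea behind the specialized non-symmetric kernels mentioned above.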


Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7474
Author(s):  
Yongjiang Mao ◽  
Wenjuan Ren ◽  
Zhanpeng Yang

With the development of signal processing technology and the use of new radar systems, signal aliasing and electronic interference have occurred in space. Electromagnetic signals have become extremely complicated in their current applications in space, making it difficult to accurately identify radar-modulated signals in low signal-to-noise ratio (SNR) environments. To address this problem, we propose in this paper an intelligent recognition method that combines time–frequency (T–F) analysis and a deep neural network to identify radar modulation signals. The complex Morlet wavelet transform (CMWT) method of T–F analysis is used to extract the characteristics of signals and obtain T–F images. Adaptive filtering and morphological processing are used in T–F image enhancement to reduce the interference of noise with signal characteristics. A deep neural network with a channel-separable ResNet (Sep-ResNet) is used to classify the enhanced T–F images. The proposed method achieves high-accuracy intelligent recognition of radar-modulated signals in low-SNR environments: when the SNR is −10 dB, the probability of successful recognition (PSR) is 93.44%.
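A minimal complex Morlet transform that turns a 1-D signal into a T–F image can be sketched as follows; the analysis frequencies and cycle count are illustrative choices, not the paper's settings:

```python
import numpy as np

def cmwt(signal, fs, freqs, n_cycles=6):
    """Complex Morlet wavelet transform: one row of |coefficients| per
    analysis frequency, giving a time-frequency image of the signal."""
    tfi = np.empty((len(freqs), len(signal)))
    for k, f in enumerate(freqs):
        sigma = n_cycles / (2 * np.pi * f)         # Gaussian envelope width
        t = np.arange(-3 * sigma, 3 * sigma, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma**2))
        wavelet /= np.sum(np.abs(wavelet))         # L1-normalize the wavelet
        tfi[k] = np.abs(np.convolve(signal, wavelet, mode="same"))
    return tfi

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 80 * t)          # 80 Hz test tone
freqs = np.array([20.0, 40.0, 80.0, 160.0])
tfi = cmwt(signal, fs, freqs)
row_energy = tfi.mean(axis=1)
print(freqs[np.argmax(row_energy)])  # the 80 Hz row dominates
```

For a modulated radar pulse the dominant row would shift over time, and it is that trajectory in the T–F image that the Sep-ResNet classifier learns to recognize.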


2020 ◽  
Vol 8 (2) ◽  
pp. 138
Author(s):  
Ari Peryanto ◽  
Anton Yudhana ◽  
Rusydi Umar

With the rapid development of technology today, deep learning has become one of the most popular machine learning methods. GPU acceleration is one of the reasons for the rapid progress of deep learning. Deep learning is very well suited to solving a classic problem in computer vision, namely image classification. One deep learning method often used in image processing is the Convolutional Neural Network, a development of the Multi-Layer Perceptron. In this study, the method was implemented using the Keras library with the Python programming language. In the training process with the Convolutional Neural Network, the number of epochs was tuned and the training data size was enlarged to increase the accuracy of image classification. The sizes used were 32x32, 64x64 and 128x128. Training with 40 epochs and a size of 32x32 achieved the highest accuracy of 98.02%, the highest average accuracy of 97.56%, and a system accuracy of 96.64%.


2019 ◽  
pp. 8-31
Author(s):  
Volodymyr Turchenko ◽  
Eric Chalmers ◽  
Artur Luczak

This paper presents the development of several models of a deep convolutional auto-encoder in the Caffe deep learning framework and their experimental evaluation on the MNIST dataset. We have created five models of a convolutional auto-encoder which differ architecturally by the presence or absence of pooling and unpooling layers in the auto-encoder's encoder and decoder parts. Our results show that the developed models provide very good results in dimensionality reduction and unsupervised clustering tasks, and small classification errors when the learned internal code is used as the input of a supervised linear classifier and a multi-layer perceptron. The best results were provided by a model whose encoder part contains convolutional and pooling layers, followed by an analogous decoder part with deconvolution and unpooling layers, without the use of switch variables in the decoder part. The paper also discusses practical details of creating a deep convolutional auto-encoder in the very popular Caffe deep learning framework. We believe that the approach and results presented in this paper could help other researchers build efficient deep neural network architectures in the future.
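The pooling/unpooling data flow of the best-performing model (unpooling in the decoder without switch variables) can be sketched as follows; the convolution and deconvolution layers are omitted for brevity, and the 28x28 image size is the standard MNIST shape:

```python
import numpy as np

def pool2(x):
    """2x2 max pooling (encoder downsampling stage)."""
    H, W = x.shape
    return x[:H - H % 2, :W - W % 2].reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

def unpool2(x):
    """2x2 nearest-neighbour unpooling (decoder upsampling, no switches:
    each code value is simply repeated into a 2x2 block)."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

img = np.random.default_rng(5).random((28, 28))   # one MNIST-sized image

code = pool2(pool2(img))          # encoder: two pooling stages -> 7x7 code
recon = unpool2(unpool2(code))    # decoder: two unpooling stages -> 28x28

print(code.shape, recon.shape)
```

Without switch variables the decoder cannot restore where each maximum came from, so the deconvolution layers (omitted here) must learn to smooth these repeated blocks back into a plausible reconstruction.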

