Addressing Overfitting Problem in Deep Learning-Based Solutions for Next Generation Data-Driven Networks

2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Mansheng Xiao ◽  
Yuezhong Wu ◽  
Guocai Zuo ◽  
Shuangnan Fan ◽  
Huijun Yu ◽  
...  

Next-generation networks are data-driven by design but face uncertainty due to changing user-group patterns and the hybrid nature of the infrastructures running these systems. Meanwhile, the amount of data gathered by computer systems keeps increasing, and how to classify and process this massive data so as to reduce the volume of transmission in the network is a problem well worth studying. Recent research applies deep learning to these and related issues. However, deep learning faces problems such as overfitting that may undermine its effectiveness in solving different network problems. This paper considers the overfitting problem of convolutional neural network (CNN) models in practical applications and proposes an algorithm combining max-pooling dropout and weight attenuation to avoid it. First, max-pooling dropout is designed into the pooling layer of the model to sparsify the neurons; then, regularization based on weight attenuation (weight decay) is introduced to reduce the complexity of the model when the gradient of the loss function is computed by backpropagation. Theoretical analysis and experiments show that the proposed method effectively avoids overfitting and reduces the classification error rate by more than 10% on average compared with other methods. The proposed method can improve the quality of different deep learning-based solutions designed for data management and processing in next-generation networks.
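The two ingredients of the abstract above can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not the authors' implementation: max_pool_dropout randomly zeroes activations inside each pooling window before taking the max, and sgd_step_weight_decay adds an L2 penalty gradient (weight attenuation) to a plain gradient step. Pool size, drop probability, and decay coefficient are arbitrary illustrative choices.

```python
import numpy as np

def max_pool_dropout(x, pool=2, p_drop=0.5, rng=None):
    """Max-pooling dropout: randomly zero activations inside each
    pooling window before taking the max, so the pooled output is a
    stochastic sample of the window's surviving values."""
    rng = np.random.default_rng(rng)
    h, w = x.shape
    out = np.zeros((h // pool, w // pool))
    for i in range(0, h, pool):
        for j in range(0, w, pool):
            window = x[i:i + pool, j:j + pool]
            mask = rng.random(window.shape) >= p_drop  # keep with prob 1 - p_drop
            out[i // pool, j // pool] = (window * mask).max()
    return out

def sgd_step_weight_decay(w, grad, lr=0.01, lam=1e-4):
    """One gradient step with L2 weight decay (weight attenuation):
    w <- w - lr * (grad + lam * w)."""
    return w - lr * (grad + lam * w)
```

With p_drop=0 the pooling reduces to ordinary max pooling; with lam=0 the update reduces to plain SGD, which makes the two regularizers easy to toggle in experiments.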

2020 ◽  
Vol 71 (7) ◽  
pp. 868-880
Author(s):  
Nguyen Hong-Quan ◽  
Nguyen Thuy-Binh ◽  
Tran Duc-Long ◽  
Le Thi-Lan

Along with the strong development of camera networks, video analysis systems have become more and more popular and are applied in various practical settings. In this paper, we focus on the person re-identification (person ReID) task, a crucial step in video analysis systems. The purpose of person ReID is to associate multiple images of a given person moving through a non-overlapping camera network. Many efforts have been devoted to person ReID; however, most studies deal only with well-aligned bounding boxes that are detected manually and considered perfect inputs for person ReID. In fact, when building a fully automated person ReID system, the quality of the two preceding steps, person detection and tracking, may strongly affect ReID performance. The contributions of this paper are twofold. First, a unified framework for person ReID based on deep learning models is proposed. In this framework, a deep neural network for person detection is coupled with a deep-learning-based tracking method. In addition, features extracted from an improved ResNet architecture are proposed for person representation to achieve higher ReID accuracy. Second, our self-built dataset is introduced and employed to evaluate all three steps of the fully automated person ReID framework.
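As a toy illustration of the final matching step in such a pipeline (not the paper's actual framework), a query descriptor can be ranked against gallery descriptors by cosine similarity; the tiny 2-D feature vectors below are placeholders for real ResNet embeddings:

```python
import numpy as np

def reid_match(query_feat, gallery_feats):
    """Rank gallery identities by cosine similarity to a query
    descriptor: the matching step at the end of a ReID pipeline.
    query_feat: (d,), gallery_feats: (n, d). Returns indices,
    best match first."""
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = g @ q                  # cosine similarity per gallery entry
    return np.argsort(-sims)      # descending similarity
```

In a fully automated system the gallery entries would themselves come from detected and tracked bounding boxes, which is exactly why detection and tracking quality propagates into ReID accuracy.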


Author(s):  
Shaoqiang Wang ◽  
Shudong Wang ◽  
Song Zhang ◽  
Yifan Wang

Abstract: The aim is to detect dynamic EEG signals automatically so as to reduce the time cost of epilepsy diagnosis. In recognizing epileptic electroencephalogram (EEG) signals, traditional machine learning and statistical methods require manual feature engineering to show excellent results on a single data set, and the manually selected features may carry a bias whose validity and generalizability on real-world data cannot be guaranteed. In practical applications, deep learning methods can free people from feature engineering to a certain extent: as long as the focus is on expanding data quality and quantity, the model can learn automatically and keep improving. In addition, deep learning can extract many features that are difficult for humans to perceive, making the algorithm more robust. Based on the design of the ResNeXt deep neural network, this paper proposes a Time-ResNeXt network structure suited to time-series EEG epilepsy detection. The accuracy of Time-ResNeXt in detecting EEG epilepsy reaches 91.50%. The Time-ResNeXt network structure achieves highly advanced performance on the benchmark Bern-Barcelona dataset and has great potential for improving clinical practice.
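The core idea Time-ResNeXt inherits from ResNeXt is cardinality: channels are split into groups that are transformed independently and then aggregated. The following NumPy sketch of a grouped 1-D convolution is only an illustration of that grouping (one output channel per group for brevity), not the paper's architecture:

```python
import numpy as np

def grouped_conv1d(x, kernels):
    """Grouped 1-D convolution, the core operation of a ResNeXt block
    applied to time series. Channels are split into len(kernels)
    groups; each group is convolved with its own kernel and the
    group outputs are stacked.
    x: (channels, time); each kernel: (channels_per_group, k)."""
    groups = len(kernels)
    ch = x.shape[0] // groups
    outs = []
    for g, k in enumerate(kernels):
        seg = x[g * ch:(g + 1) * ch]            # this group's channels
        t = seg.shape[1] - k.shape[1] + 1       # valid-mode output length
        out = np.array([(seg[:, i:i + k.shape[1]] * k).sum()
                        for i in range(t)])
        outs.append(out)
    return np.stack(outs)
```

Grouping keeps the parameter count low while letting each branch specialize, which is what makes the cardinality dimension cheap to scale for long EEG sequences.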


Author(s):  
Yves-Gael Billet ◽  
Christophe Gravier ◽  
Jacques Fayolle

This paper surveys the state of the art and offers hints on how to lay the foundations of an adaptive QoS approach in Next Generation Networks (NGN). The key idea is to provide a model that offers one application version or another depending on the Quality of Service (QoS) negotiated at session establishment in an NGN. The stake of this research is a better-balanced usage of the network, maximizing the service offered to users given their network capacities. It encompasses the model for such an implementation in an NGN such as the IP Multimedia Subsystem (IMS).
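The selection logic at the heart of this idea, picking an application version from the QoS negotiated at session setup, can be sketched as follows; the version names and bandwidth thresholds are invented purely for illustration and do not come from the paper:

```python
def select_app_version(negotiated_kbps, versions):
    """Pick the richest application version whose bandwidth
    requirement fits within the QoS negotiated at session
    establishment. versions: list of (name, required_kbps)."""
    feasible = [v for v in versions if v[1] <= negotiated_kbps]
    if not feasible:
        return None                      # no version fits the session QoS
    return max(feasible, key=lambda v: v[1])[0]
```

In an IMS setting this decision would be driven by the SDP parameters agreed during SIP session negotiation rather than a raw bandwidth number.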


2020 ◽  
Vol 39 (10) ◽  
pp. 734-741
Author(s):  
Sébastien Guillon ◽  
Frédéric Joncour ◽  
Pierre-Emmanuel Barrallon ◽  
Laurent Castanié

We propose new metrics to measure the performance of a deep learning model applied to seismic interpretation tasks such as fault and horizon extraction. Faults and horizons are thin geologic boundaries (1 pixel thick on the image) for which a small prediction error could lead to inappropriately large variations in common metrics (precision, recall, and intersection over union). Through two examples, we show how classical metrics could fail to indicate the true quality of fault or horizon extraction. Measuring the accuracy of reconstruction of thin objects or boundaries requires introducing a tolerance distance between ground truth and prediction images to manage the uncertainties inherent in their delineation. We therefore adapt our metrics by introducing a tolerance function and illustrate their ability to manage uncertainties in seismic interpretation. We compare classical and new metrics through different examples and demonstrate the robustness of our metrics. Finally, we show on a 3D West African data set how our metrics are used to tune an optimal deep learning model.
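One plausible way to introduce such a tolerance (an illustrative sketch, not the authors' exact metric definitions) is to count a predicted boundary pixel as correct when a ground-truth pixel lies within a tolerance distance, and symmetrically for recall:

```python
import numpy as np

def tolerant_precision_recall(pred, truth, tol=2):
    """Precision/recall for 1-pixel-thick objects with a tolerance
    distance: a predicted pixel is a hit if some ground-truth pixel
    lies within `tol` pixels (Chebyshev distance), and vice versa."""
    def near(a, b):
        ya, xa = np.nonzero(a)
        yb, xb = np.nonzero(b)
        if len(ya) == 0 or len(yb) == 0:
            return np.zeros(len(ya), dtype=bool)
        d = np.maximum(np.abs(ya[:, None] - yb[None, :]),
                       np.abs(xa[:, None] - xb[None, :]))
        return d.min(axis=1) <= tol       # nearest counterpart within tol?
    hit_p = near(pred, truth)
    hit_t = near(truth, pred)
    precision = hit_p.mean() if hit_p.size else 0.0
    recall = hit_t.mean() if hit_t.size else 0.0
    return precision, recall
```

A prediction shifted by one pixel from a thin ground-truth line scores zero under strict pixel-wise metrics but perfectly under a one-pixel tolerance, which is exactly the failure mode of classical metrics the abstract describes.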


2013 ◽  
Vol 11 (26) ◽  
pp. 81
Author(s):  
Alexander Suarez R. ◽  
Zeida María Solarte A. ◽  
Juan Carlos Cuéllar Q.

Geophysics ◽  
2009 ◽  
Vol 74 (2) ◽  
pp. SI27-SI36 ◽  
Author(s):  
Joost van der Neut ◽  
Andrey Bakulin

In the virtual source (VS) method we crosscorrelate seismic recordings at two receivers to create a new data set as if one of these receivers were a virtual source and the other a receiver. We focus on the amplitudes and kinematics of VS data, generated by an array of active sources at the surface and recorded by an array of receivers in a borehole. The quality of the VS data depends on the radiation pattern of the virtual source, which in turn is controlled by the spatial aperture of the surface source distribution. Theory suggests that when the receivers are surrounded by multicomponent sources completely filling a closed surface, the virtual source has an isotropic radiation pattern and VS data possess true amplitudes. In practical applications, limited source aperture and the deployment of a single source type create an anisotropic radiation pattern of the virtual source, leading to distorted amplitudes. This pattern can be estimated by autocorrelating the spatial Fourier transform of the downgoing wavefield in the special case of a laterally invariant medium. The VS data can be improved by deconvolving them with the estimated amplitude radiation pattern in the frequency-wavenumber domain. This operation alters the amplitude spectrum but not the phase of the data. We can also steer the virtual source by assigning it a new desired amplitude radiation pattern, provided sufficient illumination exists in the desired directions. Alternatively, time-gating the downgoing wavefield before crosscorrelation, already common practice when implementing the VS method, can improve the radiation characteristics of a virtual source.
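The basic VS operation, crosscorrelating the recordings at two receivers and summing over the surface shots, can be sketched as follows. This is illustrative only; practical implementations time-gate the downgoing wavefield before correlation and work with multicomponent data:

```python
import numpy as np

def virtual_source_trace(rec_a, rec_b):
    """Virtual-source method, simplest form: crosscorrelate the
    recordings at receivers A and B for each surface shot and sum,
    so that A acts as a virtual source for B.
    rec_a, rec_b: (n_shots, n_samples). Returns a trace on a lag
    axis running from -(n_samples-1) to +(n_samples-1)."""
    n = rec_a.shape[1]
    vs = np.zeros(2 * n - 1)
    for a, b in zip(rec_a, rec_b):          # sum over surface shots
        vs += np.correlate(b, a, mode="full")
    return vs
```

The peak lag of the summed correlation approximates the traveltime from the virtual source position to the second receiver, which is why the VS kinematics survive even when the amplitudes are distorted by limited source aperture.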

