Transfer Learning Strategies for Deep Learning-based PHM Algorithms

2020 ◽  
Vol 10 (7) ◽  
pp. 2361
Author(s):  
Fan Yang ◽  
Wenjin Zhang ◽  
Laifa Tao ◽  
Jian Ma

As we enter the era of big data, we must face data generated by industrial systems that are massive, diverse, high-speed, and variable. Deep learning technology has been widely used to deal effectively with big data possessing these characteristics. However, existing methods require extensive human involvement that depends heavily on domain expertise, and the resulting designs may be non-representative and biased when carried from one task to a similar task. For the wide variety of prognostic and health management (PHM) tasks, how to apply developed deep learning algorithms to similar tasks, reducing development effort and data collection costs, has therefore become an urgent problem. Based on the idea of transfer learning and the structures of deep learning PHM algorithms, this paper proposes two transfer strategies that transfer different elements of deep learning PHM algorithms, analyzes the transfer scenarios that may arise in practical applications, and proposes a transfer strategy applicable to each scenario. Finally, a deep learning algorithm for bearing fault diagnosis based on convolutional neural networks (CNN) is transferred using the proposed method, under different working conditions and to different objects, respectively. The experiments verify the value and effectiveness of the proposed method and indicate the best choice of transfer strategy.
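Transferring "different elements" of a deep learning PHM algorithm amounts to choosing which trained parameters move to the new task. A minimal sketch of parameter-level transfer, assuming a hypothetical model represented as a mapping from layer names to NumPy weight arrays (the layer names and shapes are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def init_model():
    """Toy CNN-like model: weights stored as a name -> array mapping."""
    return {
        "conv1": rng.standard_normal((3, 3, 8)),   # feature extractor
        "conv2": rng.standard_normal((3, 3, 16)),  # feature extractor
        "fc":    rng.standard_normal((16, 4)),     # task-specific head
    }

def transfer(source, feature_layers=("conv1", "conv2")):
    """One transfer strategy: copy the feature-extractor weights from
    the source-task model and leave the task-specific head freshly
    initialized, to be retrained on the target-task data."""
    target = init_model()                    # fresh random initialization
    for name in feature_layers:
        target[name] = source[name].copy()   # transferred element
    return target

source_model = init_model()                  # trained on working condition A
target_model = transfer(source_model)        # reused for working condition B
```

Under the alternative strategy one would transfer all layers and fine-tune the whole network on the target data; the sketch only makes explicit which elements move.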

2020 ◽  
Vol 10 (4) ◽  
pp. 213 ◽  
Author(s):  
Ki-Sun Lee ◽  
Jae Young Kim ◽  
Eun-tae Jeon ◽  
Won Suk Choi ◽  
Nan Hee Kim ◽  
...  

According to recent studies, patients with COVID-19 show different feature characteristics on chest X-ray (CXR) than those with other lung diseases. This study aimed at evaluating the effect of layer depth and degree of fine-tuning on transfer learning with deep convolutional neural network (CNN)-based COVID-19 screening on CXR, in order to identify efficient transfer learning strategies. The CXR images used in this study were collected from publicly available repositories and classified into three classes: COVID-19, pneumonia, and normal. To evaluate the effect of layer depth within the same CNN family, VGG-16 and VGG-19 were used as backbone networks. Each backbone network was then trained with different degrees of fine-tuning and comparatively evaluated. The experimental results showed the highest AUC value for COVID-19 classification, 0.950, in the group in which only 2 of the 5 convolutional blocks of the VGG-16 backbone network were fine-tuned. In conclusion, when classifying medical images with a limited amount of data, a deeper network may not guarantee better results. In addition, even when the same pre-trained CNN architecture is used, an appropriate degree of fine-tuning can help to build an efficient deep learning model.
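The "degree of fine-tuning" above can be expressed as a block-freezing policy: only the last few convolutional blocks are left trainable, while earlier blocks keep their pre-trained weights. A minimal sketch, assuming hypothetical layer names that follow the VGG `blockN_convM` convention (no actual VGG weights are loaded here):

```python
# VGG-16 convolutional layers grouped into 5 blocks, named as in Keras.
VGG16_LAYERS = [
    "block1_conv1", "block1_conv2",
    "block2_conv1", "block2_conv2",
    "block3_conv1", "block3_conv2", "block3_conv3",
    "block4_conv1", "block4_conv2", "block4_conv3",
    "block5_conv1", "block5_conv2", "block5_conv3",
]

def trainable_flags(layers, n_tuned_blocks, n_blocks=5):
    """Mark only the last n_tuned_blocks blocks as trainable; earlier
    blocks keep their pre-trained (e.g. ImageNet) weights frozen."""
    first_tuned = n_blocks - n_tuned_blocks + 1
    return {
        name: int(name.split("_")[0][len("block"):]) >= first_tuned
        for name in layers
    }

# Fine-tune only 2 of the 5 blocks, as in the best-performing group.
flags = trainable_flags(VGG16_LAYERS, n_tuned_blocks=2)
```

With a real Keras backbone the same dictionary would be applied by setting each layer's `trainable` attribute before compiling the model.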


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Qian Huang ◽  
Xue Wen Li

Big data is a massive and diverse form of unstructured data that requires proper analysis and management; it represents another great technological revolution after the Internet, the Internet of Things, and cloud computing. This paper first studies the related concepts and basic theories as the starting point of the research. Second, it analyzes in depth the problems and challenges faced by Chinese government management under the impact of big data. It then explores the opportunities big data brings to government management in terms of management efficiency, administrative capacity, and public services, and argues that governments should seize these opportunities to make changes. Brain-like computing attempts to simulate the structure and information-processing process of biological neural networks. The paper analyzes the development status of e-government at home and abroad, studies service-oriented architecture (SOA) and web services technology, examines e-government and SOA theory in depth, and discusses these topics in light of the development status of e-government in a particular region. Finally, a deep learning algorithm is used to construct a monitoring platform that monitors government behavior in real time and performs in-depth mining to analyze the government's intended behavior.


2021 ◽  
Vol 2021 ◽  
pp. 1-6
Author(s):  
Huiying Zhang ◽  
Jinjin Guo ◽  
Guie Sun

High-dimensional deep learning is now applied in all walks of life; one of the most representative applications is logistics path optimization that combines multimedia with high-dimensional deep learning. Using multimedia logistics to explore and operate the best path can drive innovation and a leap forward across the logistics industry. How to use high-dimensional deep learning for visual logistics operation management is both an opportunity and a problem currently facing the entire logistics industry. Applying high-dimensional deep learning technology can help logistics enterprises improve their management level, realize intelligent decision-making, and enable accurate prediction. Starting from the total volume of logistics, regional layout, enterprise scale, and the high-dimensional deep learning algorithm, this paper analyzes the current state of China's logistics development through multi-weight analysis and explores the best path for multimedia logistics.


CONVERTER ◽  
2021 ◽  
pp. 598-605
Author(s):  
Zhao Jianchao

Behind the rapid development of the Internet industry, Internet security has become a hidden danger. In recent years, the outstanding performance of deep learning in classification and behavior prediction on massive data has prompted research into how to apply deep learning technology. This paper therefore applies deep learning to intrusion detection in order to learn and classify network attacks. On the NSL-KDD dataset, the paper first performs classification with traditional methods and with several different deep learning algorithms. It analyzes in depth the relationships among the dataset, algorithm characteristics, and experimental classification results, and identifies the deep learning algorithm that performs relatively well. A normalized coding algorithm is then proposed. The experimental results show that the algorithm can improve detection accuracy and reduce the false alarm rate.
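The normalized-coding idea — numeric features rescaled to a common range and symbolic features one-hot encoded before they reach the network — can be sketched as follows. The feature layout is a toy stand-in, not the real 41-feature NSL-KDD schema:

```python
import numpy as np

def min_max(col):
    """Rescale a numeric column to [0, 1]."""
    lo, hi = col.min(), col.max()
    if hi == lo:
        return np.zeros_like(col, dtype=float)
    return (col - lo) / (hi - lo)

def one_hot(col, vocab):
    """Encode a symbolic column as 0/1 indicator vectors."""
    return np.array([[float(v == c) for v in vocab] for c in col])

# Toy connection records: (duration, src_bytes, protocol_type)
duration  = np.array([0.0, 2.0, 10.0])
src_bytes = np.array([100.0, 500.0, 300.0])
protocol  = ["tcp", "udp", "tcp"]

# Normalized, coded feature matrix ready for a neural network.
X = np.hstack([
    min_max(duration)[:, None],
    min_max(src_bytes)[:, None],
    one_hot(protocol, vocab=["tcp", "udp", "icmp"]),
])
```

In practice the min/max statistics would be computed on the training split only and reused for the test split, so that the two are coded consistently.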


Information ◽  
2020 ◽  
Vol 11 (5) ◽  
pp. 279 ◽  
Author(s):  
Bambang Susilo ◽  
Riri Fitri Sari

The internet has become an inseparable part of human life, and the number of devices connected to the internet is increasing sharply. In particular, Internet of Things (IoT) devices have become a part of everyday human life. However, some challenges are growing, and their solutions are not well defined. More and more security challenges concerning the IoT are arising. Many methods have been developed to secure IoT networks, but many more can still be developed. One proposed way to improve IoT security is to use machine learning. This research discusses several machine-learning and deep-learning strategies, as well as standard datasets, for improving the security performance of the IoT. We developed an algorithm for detecting denial-of-service (DoS) attacks using a deep-learning algorithm. This research used the Python programming language with packages such as scikit-learn, TensorFlow, and Seaborn. We found that a deep-learning model could increase accuracy, so that attacks occurring on an IoT network can be mitigated as effectively as possible.
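A DoS detector of the kind described can be sketched with scikit-learn alone. The synthetic traffic below stands in for a real dataset, and the two features — packet rate and payload entropy — are illustrative assumptions, not features from the paper:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)

# Synthetic flows: benign traffic has a low packet rate and varied
# payloads; DoS floods have a high packet rate and repetitive payloads.
n = 200
benign = rng.normal(loc=[10.0, 0.5], scale=1.0, size=(n, 2))
attack = rng.normal(loc=[80.0, 0.1], scale=1.0, size=(n, 2))
X = np.vstack([benign, attack])
y = np.array([0] * n + [1] * n)   # 1 = denial-of-service

# Small feed-forward network as a stand-in for a deeper model.
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500,
                    random_state=0)
clf.fit(X, y)
accuracy = clf.score(X, y)
```

On real IoT traffic one would hold out a test split and report precision and recall as well, since false alarms are as costly as missed attacks in intrusion detection.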


Electronics ◽  
2021 ◽  
Vol 10 (10) ◽  
pp. 1183
Author(s):  
Jae-Eun Lee ◽  
Ji-Won Kang ◽  
Woo-Suk Kim ◽  
Jin-Kyum Kim ◽  
Young-Ho Seo ◽  
...  

Much research and development effort has gone into implementing deep neural networks in hardware for various purposes. We implement a deep learning algorithm on a dedicated processor. Watermarking technology for ultra-high-resolution digital images and videos needs to be implemented in hardware for real-time or high-speed operation. We propose an optimization methodology for implementing a deep learning-based watermarking algorithm in hardware, covering both algorithm and memory optimization. Next, we analyze a fixed-point number system suitable for implementing the watermarking neural networks in hardware. Using these, a hardware architecture for a dedicated processor for deep learning-based watermarking is proposed and implemented as an application-specific integrated circuit (ASIC).
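The fixed-point analysis can be illustrated with a simple signed Qm.n quantizer. The word and fraction widths below are assumptions for illustration, not the widths chosen in the paper:

```python
import numpy as np

def to_fixed_point(w, total_bits=16, frac_bits=12):
    """Quantize float weights to signed Qm.n fixed point by
    round-to-nearest, then map back to the representable float value
    the hardware would actually compute with."""
    scale = 2.0 ** frac_bits
    lo = -(2 ** (total_bits - 1))        # most negative integer code
    hi = 2 ** (total_bits - 1) - 1       # most positive integer code
    q = np.clip(np.round(w * scale), lo, hi)
    return q / scale

weights = np.array([0.7071, -0.3333, 1.5, -1.99])
quantized = to_fixed_point(weights)
```

For values inside the representable range, round-to-nearest bounds the per-weight error by half an LSB, i.e. 2^-(frac_bits+1); choosing the split between integer and fraction bits trades dynamic range against that precision, which is the core of the fixed-point design decision.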

