Human Resources Balanced Allocation Method Based on Deep Learning Algorithm

2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Weiwei Shi ◽  
Qiuzuo Li

At present, economic and social development is increasingly diversified, and the focus of enterprise management is shifting toward the allocation of human resources. Human resource allocation is the appropriate assignment and reasonable placement of personnel: under scientific guidance, human resources are kept in the best possible combination with other resources at all times. Nevertheless, irregularities in management teams and imbalances in talent quality strongly affect the balanced development of an enterprise. On this basis, this paper establishes a recurrent neural network (RNN) model to realize the allocation of human resources and the balanced development of enterprise management. First, a deep learning model based on the recurrent neural network is established. Then, the human resources data are analyzed to calculate the matching degree between personnel and posts. Finally, personnel scheduling is carried out according to the person-post matching score to obtain the optimal balanced allocation of human resources. Experimental results show that our method brings significant improvements to person-post matching and effectively enhances the efficiency of human resource allocation in a cloud environment.
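The abstract does not publish the model's architecture, so the following is only a minimal sketch of the core idea it describes: encode a person's record sequence with a simple RNN and score its match against each post. All names, dimensions, and the cosine-based scoring are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_encode(seq, Wx, Wh, b):
    """Run a simple tanh RNN over a sequence of feature vectors,
    returning the final hidden state as the person's embedding."""
    h = np.zeros(Wh.shape[0])
    for x in seq:
        h = np.tanh(Wx @ x + Wh @ h + b)
    return h

def match_score(person_seq, post_vec, params):
    """Cosine similarity between the RNN-encoded person history and a
    post's requirement vector, mapped to [0, 1] as a matching degree."""
    h = rnn_encode(person_seq, *params)
    cos = h @ post_vec / (np.linalg.norm(h) * np.linalg.norm(post_vec) + 1e-9)
    return (cos + 1) / 2

d_in, d_h = 4, 8  # hypothetical feature and hidden sizes
params = (rng.normal(size=(d_h, d_in)) * 0.5,  # input weights
          rng.normal(size=(d_h, d_h)) * 0.5,   # recurrent weights
          np.zeros(d_h))                       # bias

person = [rng.normal(size=d_in) for _ in range(5)]  # e.g. monthly skill records
posts = {p: rng.normal(size=d_h) for p in ["analyst", "engineer"]}

scores = {p: match_score(person, v, params) for p, v in posts.items()}
best = max(scores, key=scores.get)  # schedule the person to the best-matching post
```

Scheduling then amounts to ranking person-post pairs by this matching degree, as the abstract's final step describes.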

2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Weihuang Dai ◽  
Yi Hu ◽  
Zijiang Zhu ◽  
Xiaofang Liao

The reasonable allocation and use of human resources is an important part of complex system analysis and design. This paper studies a Petri-net-based human resource allocation model built on artificial intelligence and neural networks. Combining the characteristics of human resource scheduling, namely mobility, concurrency, and clear classification, the paper implements a human resource allocation model based on Petri nets. The model is trained in Python on a human resource analysis data set, with 100 training iterations, an error threshold of 0.001, and a learning rate of 0.01. First, the coding rules for human resource data are established. Then, the parameters are input into the model and the human resource data are trained. Finally, the results of the model's output layer are analyzed. The study shows that the average prediction accuracy of the model is 78.85%. To improve the accuracy of predicting dynamic human resource data, roughly 25 neurons must be added for every 0.01 gain in accuracy. Once the accuracy rate exceeds 75%, adding further neurons is no longer repaid by a corresponding gain in accuracy, and the model is most efficient when the amount of human resource scheduling data is between 2000 and 4000 records. The system can therefore allocate small- and medium-scale human resources effectively and with high accuracy.
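To make the quoted hyperparameters concrete, here is a toy training loop using the three values the abstract states (100 iterations, error threshold 0.001, learning rate 0.01). The data, the linear model, and the loss are placeholder assumptions standing in for the paper's encoded HR records and network.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hyperparameters taken from the abstract; everything else is illustrative.
EPOCHS, ERR_THRESHOLD, LR = 100, 0.001, 0.01

# Hypothetical encoded HR records: 6 features -> suitability score.
X = rng.normal(size=(200, 6))
true_w = rng.normal(size=6)
y = X @ true_w

w = np.zeros(6)
for epoch in range(EPOCHS):
    pred = X @ w
    err = np.mean((pred - y) ** 2)   # mean squared error
    if err < ERR_THRESHOLD:          # stop once the error threshold is met
        break
    grad = 2 * X.T @ (pred - y) / len(X)
    w -= LR * grad                   # gradient step at the stated learning rate

final_err = np.mean((X @ w - y) ** 2)
```

The loop shows how the error threshold and the iteration budget interact: training ends at whichever limit is reached first.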


2022 ◽  
pp. 513-525
Author(s):  
Jing Xu ◽  
Bo Wang ◽  
Gihong Min

Amid fierce market competition, the allocation of human resources within enterprises faces multiple risks. This article takes human resource configuration management as its research object and establishes a human resource configuration model based on a self-organizing map (SOM) neural network. The model is trained, validated, and tested, and then applied to human resources management so that an enterprise can adjust its allocation of human resources in a timely manner. It provides a detailed basis for proposing coping strategies and has great application value.
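The abstract names the SOM but gives no details, so the following is a minimal generic SOM sketch, not the authors' model: unsupervised clustering of hypothetical HR-configuration indicators (workload, skill level, etc.) onto a small grid of units.

```python
import numpy as np

rng = np.random.default_rng(2)

def train_som(data, grid=(3, 3), epochs=50, lr0=0.5, sigma0=1.0):
    """Train a minimal self-organizing map: for each sample, find the
    best-matching unit (BMU) and pull it and its grid neighbors toward
    the sample, with learning rate and neighborhood shrinking over time."""
    n_units = grid[0] * grid[1]
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    weights = rng.normal(size=(n_units, data.shape[1]))
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        sigma = sigma0 * (1 - t / epochs) + 1e-3
        for x in data:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))  # neighborhood function
            weights += lr * h[:, None] * (x - weights)
    return weights

def assign(data, weights):
    """Map each record to its best-matching unit (an allocation cluster)."""
    return np.argmin(np.linalg.norm(data[:, None] - weights[None], axis=2), axis=1)

data = rng.normal(size=(60, 4))      # hypothetical HR indicator vectors
weights = train_som(data)
clusters = assign(data, weights)     # one cluster label per record
```

Records landing in the same unit can then be treated as one configuration category when adjusting allocations.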



Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 4953
Author(s):  
Sara Al-Emadi ◽  
Abdulla Al-Ali ◽  
Abdulaziz Al-Ali

Drones are becoming increasingly popular not only for recreation but also in day-to-day applications in engineering, medicine, logistics, security, and other fields. Alongside these useful applications, their potential use in malicious activities has raised alarming concerns about physical infrastructure security, safety, and privacy. To address this problem, we propose a novel solution that automates drone detection and identification from a drone's acoustic features using different deep learning algorithms. However, the lack of acoustic drone datasets hinders the implementation of an effective solution. In this paper, we aim to fill this gap by introducing a hybrid drone acoustic dataset composed of recorded drone audio clips and artificially generated drone audio samples produced with a state-of-the-art deep learning technique, the Generative Adversarial Network (GAN). Furthermore, we examine the effectiveness of using drone audio with different deep learning algorithms, namely the Convolutional Neural Network, the Recurrent Neural Network, and the Convolutional Recurrent Neural Network, in drone detection and identification. Moreover, we investigate the impact of our proposed hybrid dataset on drone detection. Our findings demonstrate the advantage of deep learning techniques for drone detection and identification while confirming our hypothesis on the benefits of using GANs to generate realistic drone audio clips, with the aim of enhancing the detection of new and unfamiliar drones.
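Acoustic drone detectors of this kind typically consume time-frequency features. The sketch below shows one common front end, a log-magnitude spectrogram, plus a crude spectral-peakiness score standing in for the trained CNN/RNN classifier; the signals, sample rate, and score are all illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)

def log_spectrogram(signal, frame=256, hop=128):
    """Frame the waveform, apply a Hann window and real FFT, and return
    log-magnitude features (frames x frequency bins)."""
    frames = [signal[i:i + frame] * np.hanning(frame)
              for i in range(0, len(signal) - frame, hop)]
    mags = np.abs(np.fft.rfft(np.array(frames), axis=1))
    return np.log1p(mags)

# Hypothetical one-second clips at 8 kHz: a 'drone' clip with a strong
# tonal rotor-hum component versus broadband background noise.
t = np.arange(8000) / 8000.0
drone = np.sin(2 * np.pi * 180 * t) + 0.3 * rng.normal(size=t.size)
noise = rng.normal(size=t.size)

feat_drone = log_spectrogram(drone)
feat_noise = log_spectrogram(noise)

def peakiness(feat):
    """Ratio of per-frame spectral peak to spectral mean; tonal rotor
    noise scores higher than broadband noise."""
    return float(np.mean(feat.max(axis=1) / (feat.mean(axis=1) + 1e-9)))
```

A trained network would replace `peakiness` and learn such discriminative structure directly from the (frames x bins) features.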


Electronics ◽  
2021 ◽  
Vol 10 (1) ◽  
pp. 81
Author(s):  
Jianbin Xiong ◽  
Dezheng Yu ◽  
Shuangyin Liu ◽  
Lei Shu ◽  
Xiaochan Wang ◽  
...  

Plant phenotypic image recognition (PPIR) is an important branch of smart agriculture. In recent years, deep learning has achieved significant breakthroughs in image recognition. Consequently, PPIR technology that is based on deep learning is becoming increasingly popular. First, this paper introduces the development and application of PPIR technology, followed by its classification and analysis. Second, it presents the theory of four types of deep learning methods and their applications in PPIR. These methods include the convolutional neural network, deep belief network, recurrent neural network, and stacked autoencoder, and they are applied to identify plant species, diagnose plant diseases, etc. Finally, the difficulties and challenges of deep learning in PPIR are discussed.
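Of the four method families the survey covers, the CNN is the one most widely applied to plant images; its core operation is the 2-D convolution sketched below. The toy image and the Sobel kernel are illustrative, showing how a filter responds to an edge such as a leaf boundary.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D convolution (implemented as cross-correlation,
    as in most deep learning frameworks)."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 6x6 image: dark left half, bright right half (a vertical boundary).
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# Sobel kernel responding to vertical edges.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], float)

edges = conv2d(image, sobel_x)  # strong response only at the boundary
```

A CNN stacks many such learned filters with nonlinearities and pooling, rather than using hand-designed kernels like Sobel.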


Symmetry ◽  
2021 ◽  
Vol 13 (6) ◽  
pp. 931
Author(s):  
Kecheng Peng ◽  
Xiaoqun Cao ◽  
Bainian Liu ◽  
Yanan Guo ◽  
Wenlong Tian

The intensity variation of the South Asian high (SAH) plays an important role in the formation and extinction of many kinds of mesoscale systems, including tropical cyclones and southwest vortices in the Asian summer monsoon (ASM) region, as well as in precipitation across the whole Asia-Europe region; the SAH has a symmetric vortex structure, and its dynamic field likewise exhibits symmetry. Few previous studies have focused on the variation of the SAH's daily intensity. The purpose of this study is to establish a day-to-day prediction model of SAH intensity that can accurately predict not only its interannual variation but also its day-to-day variation. Focusing on the summer period, when the SAH is strongest, this paper uses geopotential height data from NCEP between 1948 and 2020 to construct SAH intensity datasets. After comparison with various classical and efficient deep learning time series prediction models, we ultimately combine the Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) method, which can handle nonlinear and unstable single systems, with the Permutation Entropy (PE) method, which extracts the SAH intensity features of the intrinsic mode functions (IMFs) decomposed by CEEMDAN; a Convolution-based Gated Recurrent Neural Network (ConvGRU) model is then used to train, test, and predict the intensity of the SAH. The prediction results show that the combination of CEEMDAN and ConvGRU achieves higher accuracy and more stable predictions than traditional deep learning models. After removing the redundant features in the time series, the prediction accuracy of SAH intensity is higher than that of the classical models, which shows that the method applies well to the prediction of nonlinear atmospheric systems.
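Permutation entropy, the feature-screening step in this pipeline, is simple enough to sketch exactly: it is the Shannon entropy of the ordinal patterns in a time series (Bandt-Pompe), normalized to [0, 1]. The two example series are illustrative; a monotone IMF scores 0, an irregular one scores high, which is how redundant or noisy components can be flagged.

```python
import math
from collections import Counter

def permutation_entropy(series, m=3, delay=1):
    """Normalized permutation entropy: Shannon entropy of length-m
    ordinal patterns, scaled to [0, 1] by log(m!)."""
    patterns = Counter()
    for i in range(len(series) - (m - 1) * delay):
        window = series[i:i + m * delay:delay]
        # Rank the window's values to get its ordinal pattern.
        patterns[tuple(sorted(range(m), key=window.__getitem__))] += 1
    total = sum(patterns.values())
    probs = [c / total for c in patterns.values()]
    H = -sum(p * math.log(p) for p in probs)
    return H / math.log(math.factorial(m))

# A monotone series uses a single ordinal pattern (entropy 0);
# an irregular series spreads over many patterns (entropy closer to 1).
pe_trend = permutation_entropy(list(range(100)))
pe_noisy = permutation_entropy([(i * 7919) % 101 for i in range(100)])
```

In the paper's setup, each CEEMDAN-derived IMF would be scored this way before the ConvGRU is trained on the retained components.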


Sensors ◽  
2020 ◽  
Vol 20 (22) ◽  
pp. 6460
Author(s):  
Dae-Yeon Kim ◽  
Dong-Sik Choi ◽  
Jaeyun Kim ◽  
Sung Wan Chun ◽  
Hyo-Wook Gil ◽  
...  

In this study, we propose a personalized glucose prediction model using deep learning for hospitalized patients with Type-2 diabetes. We aim for the model to assist the medical personnel who check blood glucose and control insulin doses. We employ a deep learning algorithm, specifically a recurrent neural network (RNN), consisting of a sequence processing layer and a classification layer for glucose prediction. We tested a simple RNN, a gated recurrent unit (GRU), and long short-term memory (LSTM), and varied the architectures to determine the best-performing one. For this, we collected one week of data using a continuous glucose monitoring device. Type-2 inpatients usually experience poor health conditions and high glucose variability; however, whereas many studies have addressed Type-1 glucose prediction, few have addressed Type-2. This work contributes a model whose performance is comparable to previous works on Type-1 patients. For 20 in-hospital patients, we achieved an average root mean squared error (RMSE) of 21.5 and a mean absolute percentage error (MAPE) of 11.1%. A GRU with a single RNN layer and two dense layers was found to be sufficient to predict the glucose level. Moreover, to build a personalized model, at most 50% of the data are required for training.
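The two reported metrics, RMSE and MAPE, are standard and easy to state precisely. The glucose values below are invented for illustration, not the study's data.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error (same units as the readings, e.g. mg/dL)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

# Hypothetical predictions vs. continuous glucose monitor readings.
actual    = [110.0, 150.0, 180.0, 140.0]
predicted = [100.0, 160.0, 170.0, 150.0]

print(rmse(actual, predicted))  # 10.0
print(mape(actual, predicted))  # ≈ 7.11
```

Note that MAPE weights errors at low glucose values more heavily than RMSE does, which matters clinically since hypoglycemia is the riskier regime.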


Kursor ◽  
2020 ◽  
Vol 10 (4) ◽  
Author(s):  
Felisia Handayani ◽  
Metty Mustikasari

Sentiment analysis is computational research into the opinions that many people express textually about a particular topic. Twitter is today the most popular communication tool among Internet users for expressing opinions. Deep learning allows computers to learn from experience and understand the world in terms of a hierarchy of concepts, replacing manual feature engineering with learning; its development has produced a set of algorithms that focus on learning data representations. The recurrent neural network (RNN) is a machine learning method within deep learning because data are processed through multiple layers. The RNN can also recall inputs through internal memory, which makes it suitable for machine learning problems involving sequential data. This study aims to test models built from tweets with positive, negative, and neutral sentiment to determine their accuracy. The models were created with a recurrent neural network applied to tweet classification, labeling the sentiment classes of Indonesian-language tweet data. The experiments show that the best test results for the tweet data with the RNN method, evaluated with a confusion matrix, are a precision of 0.618, a recall of 0.507, and an accuracy of 0.722 on 3000 tweets, with a training-to-testing split of 80:20.
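For a three-class task like this (positive/negative/neutral), a single precision and recall figure usually means macro-averaging over the classes. The sketch below shows how such figures come out of a confusion matrix; the counts are invented, not the study's results.

```python
import numpy as np

def metrics_from_confusion(cm):
    """Accuracy plus macro-averaged precision and recall for a
    multi-class confusion matrix (rows = actual, cols = predicted)."""
    cm = np.asarray(cm, float)
    tp = np.diag(cm)
    precision = float(np.mean(tp / np.maximum(cm.sum(axis=0), 1)))  # per predicted class
    recall = float(np.mean(tp / np.maximum(cm.sum(axis=1), 1)))     # per actual class
    accuracy = float(tp.sum() / cm.sum())
    return accuracy, precision, recall

# Hypothetical counts for positive / negative / neutral tweets.
cm = [[50,  5,  5],
      [10, 30, 10],
      [ 5,  5, 40]]

acc, prec, rec = metrics_from_confusion(cm)
```

Accuracy is the diagonal mass over the total, while macro precision and recall average the per-class ratios, so a weak class (here, negative) drags them down even when overall accuracy looks good.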


Memory management is an essential task for large-scale storage systems; on mobile platforms, insufficient memory and additional task overhead cause storage errors. Many existing systems address such issues with solutions such as load balancing and load rebalancing. Unused applications installed on a mobile device, which the user rarely or never accesses, still occupy storage space on the device. In this work, we describe dynamic resource allocation for mobile platforms using a deep learning approach. In real-world mobile systems, users install various applications on an ad-hoc basis; such applications can affect the system's execution performance and space complexity, and sometimes the performance of other running applications. To eliminate these issues, we propose an approach that allocates runtime resources for data storage on mobile platforms. When the system is connected to a cloud data server, the complete file system is stored on a remote virtual machine (VM); whenever a single application is required, it is immediately installed from the remote server to the local device. The proposed system is implemented with a deep-learning-based convolutional neural network (CNN) in a TensorFlow environment, which reduces the time complexity of both data storage and retrieval.
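The abstract's core policy, moving rarely used apps to remote VM storage until enough local space is free, can be sketched without the CNN. The threshold, app data, and greedy selection below are illustrative assumptions standing in for the learned usage model.

```python
from dataclasses import dataclass

@dataclass
class App:
    name: str
    size_mb: int
    launches_per_month: int  # in the paper, a learned model predicts usage

def plan_offload(apps, free_mb, needed_mb, max_launches=2):
    """Greedily offload the least-used apps (<= max_launches/month) to
    remote VM storage until at least needed_mb is free locally."""
    offloaded = []
    candidates = sorted((a for a in apps if a.launches_per_month <= max_launches),
                        key=lambda a: a.launches_per_month)
    for app in candidates:
        if free_mb >= needed_mb:
            break
        free_mb += app.size_mb       # space reclaimed by offloading
        offloaded.append(app.name)   # app now lives on the remote VM
    return offloaded, free_mb

apps = [App("maps", 300, 40), App("old_game", 900, 0),
        App("scanner", 200, 1), App("chat", 150, 60)]
offloaded, free_mb = plan_offload(apps, free_mb=500, needed_mb=1500)
```

An offloaded app would be reinstalled from the remote server on its next launch, trading install latency for local storage headroom.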

