Hybrid Approach for Resource Allocation in Cloud Infrastructure Using Random Forest and Genetic Algorithm

2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Madhusudhan H S ◽  
Satish Kumar T ◽  
S.M.F D Syed Mustapha ◽  
Punit Gupta ◽  
Rajan Prasad Tripathi

In cloud computing, virtualization is a key technology for optimizing the power consumption of cloud data centers. As more services move to the cloud, the load on data centers increases; data centers grow in size and consequently consume more energy. Resolving this issue requires an efficient optimization algorithm for resource allocation. This work proposes a hybrid approach to virtual machine allocation based on a genetic algorithm (GA) and a random forest (RF), a supervised machine learning technique. The aim is to minimize power consumption while maintaining good load balance across the available resources and maximizing resource utilization. The proposed model uses a genetic algorithm to generate a training dataset for the random forest, which is then trained on it. Real workload traces from PlanetLab are used to evaluate the approach. The results show that the proposed GA-RF model improves the energy consumption, execution time, and resource utilization of the data center and its hosts compared with existing models. Power consumption, execution time, resource utilization, average start time, and average finish time serve as performance metrics.
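The GA stage described above can be sketched as follows. The fitness terms (active-host count plus load spread) and all workload numbers are assumptions, since the abstract does not specify them; each VM-to-host gene of the best chromosome then becomes one labelled row for training the random forest.

```python
import random

def fitness(assign, vm_cpu, n_hosts, cap=1.0):
    load = [0.0] * n_hosts
    for vm, host in enumerate(assign):
        load[host] += vm_cpu[vm]
    if any(l > cap for l in load):
        return float("-inf")                 # over-capacity host: infeasible
    active = sum(1 for l in load if l > 0)
    spread = max(load) - min(l for l in load if l > 0)
    return -(active + spread)                # fewer active hosts, better balance

def ga_allocate(vm_cpu, n_hosts, pop=40, gens=60, seed=0):
    rng = random.Random(seed)
    n = len(vm_cpu)
    popn = [[rng.randrange(n_hosts) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda a: fitness(a, vm_cpu, n_hosts), reverse=True)
        popn = popn[:pop // 2]               # elitist selection
        while len(popn) < pop:
            a, b = rng.sample(popn[:pop // 4], 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]        # single-point crossover
            if rng.random() < 0.2:
                child[rng.randrange(n)] = rng.randrange(n_hosts)  # mutation
            popn.append(child)
    return max(popn, key=lambda a: fitness(a, vm_cpu, n_hosts))

vm_cpu = [0.3, 0.2, 0.4, 0.1, 0.25, 0.15]    # normalized CPU demands (assumed)
best = ga_allocate(vm_cpu, n_hosts=4)
# each VM's features and its GA-chosen host form one labelled training row
dataset = [([vm_cpu[i]], best[i]) for i in range(len(vm_cpu))]
```

The RF would then be trained on `dataset` (for instance with a standard random forest implementation) so that future placements can be predicted without rerunning the GA.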

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Jayati Athavale ◽  
Minami Yoda ◽  
Yogendra Joshi

Purpose: This study presents the development of a genetic algorithm (GA)-based framework aimed at minimizing data center cooling energy consumption by optimizing the cooling set-points while ensuring that thermal management criteria are satisfied.

Design/methodology/approach: The three key components of the framework are an artificial neural network-based model for rapid temperature prediction (Athavale et al., 2018a, 2019), a thermodynamic model for cooling energy estimation, and a GA-based optimization process. The static optimization framework informs the IT load distribution and cooling set-points in the data center room to simultaneously minimize cooling power consumption and maximize IT load. The dynamic framework aims to minimize cooling power consumption during operation by determining the most energy-efficient set-points for the cooling infrastructure while preventing temperature overshoots.

Findings: Results from the static optimization framework indicate that, among the three levels of IT load distribution granularity (room, row, and rack), rack-level distribution consumes the least cooling power. A 7.5 h test case implementing dynamic optimization demonstrated a reduction in cooling energy consumption of 21%-50%, depending on the current operation of the data center.

Research limitations/implications: Because the temperature prediction model is data-driven, it is specific to the lab configuration considered in this study and cannot be applied directly to other scenarios. The overall framework, however, can be generalized.

Practical implications: The framework can be implemented in data centers to optimize the operation of the cooling infrastructure and reduce energy consumption.

Originality/value: This paper presents a holistic framework for improving the energy efficiency of data centers, which is of critical value given the high and increasing energy consumption of these facilities.
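The set-point optimization can be illustrated with toy stand-ins for the ANN temperature model and the thermodynamic power model (both functions below are assumptions, not the paper's models); a simple grid search stands in for the GA:

```python
def predicted_inlet(t_set, it_load_kw):
    # stand-in for the ANN model: rack inlet temp rises with set-point and IT load
    return t_set + 0.05 * it_load_kw

def cooling_power(t_set):
    # stand-in for the thermodynamic model: chiller power falls as set-point rises
    return 120.0 - 4.0 * (t_set - 15.0)

def best_setpoint(it_load_kw, t_limit=27.0):
    # grid search over candidate set-points 15..25 degC (GA stand-in):
    # pick the cheapest set-point that keeps inlet temps under the limit
    candidates = [15.0 + 0.5 * k for k in range(21)]
    feasible = [t for t in candidates if predicted_inlet(t, it_load_kw) <= t_limit]
    return min(feasible, key=cooling_power) if feasible else None

t = best_setpoint(100.0)   # 100 kW IT load (illustrative)
```

Because the power model is monotone, the optimum is simply the highest set-point that still satisfies the thermal constraint; the GA earns its keep when set-points and IT load distribution are optimized jointly.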


Electronics ◽  
2020 ◽  
Vol 9 (3) ◽  
pp. 444 ◽  
Author(s):  
Valerio Morfino ◽  
Salvatore Rampone

In the field of Internet of Things (IoT) infrastructures, attack and anomaly detection are rising concerns. With the increased use of IoT infrastructure in every domain, threats and attacks on these infrastructures grow proportionally. In this paper, the performance of several machine learning algorithms in identifying cyber-attacks (namely SYN-DOS attacks) on IoT systems is compared, both in terms of detection performance and in training/application times. We use supervised machine learning algorithms included in the MLlib library of Apache Spark, a fast and general engine for big data processing. We show the implementation details and the performance of those algorithms on public datasets, using a training set of up to 2 million instances. We adopt a Cloud environment, emphasizing the importance of scalability and elasticity of use. Results show that all the Spark algorithms used achieve very good identification accuracy (>99%); one of them, Random Forest, achieves an accuracy of 1. We also report a very short training time (23.22 s for Decision Tree with 2 million rows) and a very low application time (0.13 s for over 600,000 instances with Random Forest) using Apache Spark in the Cloud. Furthermore, the explicit model generated by Random Forest is easy to implement in high- or low-level programming languages. In light of the results obtained, in terms of both computation times and identification performance, a hybrid approach for detecting SYN-DOS cyber-attacks on IoT devices is proposed: an explicit Random Forest model is applied directly on the IoT device, while a second-level analysis (training) is performed in the Cloud.
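The "explicit model" observation can be illustrated with a hand-written miniature forest: each tree is just a chain of comparisons, so a trained model ports easily to C on a constrained device. All feature names and split thresholds below are hypothetical, not taken from the paper.

```python
# three toy decision trees over assumed features:
#   syn_rate  - SYN packets per second
#   half_open - count of half-open TCP connections
#   pkt_len   - average packet length in bytes
def tree_1(syn_rate, half_open, pkt_len):
    return 1 if syn_rate > 120.0 else (1 if half_open > 40 else 0)

def tree_2(syn_rate, half_open, pkt_len):
    return 1 if pkt_len < 60 and syn_rate > 80.0 else 0

def tree_3(syn_rate, half_open, pkt_len):
    return 1 if half_open > 25 and syn_rate > 100.0 else 0

def is_syn_dos(syn_rate, half_open, pkt_len):
    votes = sum(t(syn_rate, half_open, pkt_len) for t in (tree_1, tree_2, tree_3))
    return votes >= 2                       # majority vote over the forest
```

In the proposed hybrid approach, a model of this shape runs on the device for first-level filtering, while retraining on fresh traffic happens in the Cloud with Spark MLlib.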


2013 ◽  
Vol 760-762 ◽  
pp. 1343-1347
Author(s):  
Tao Wan

Power electronic transformation systems are widely applied in industrial control, and their application environments are complex. The power consumption of large, medium, and small systems keeps rising, so reducing system energy consumption is urgent. This paper proposes a way to reduce the energy consumption of power electronic transformation systems based on a genetic algorithm. Working-frequency regulation and working-voltage measurement techniques are applied in the industrial control system, and the contribution of voltage and frequency to system power consumption is calculated. A genetic algorithm is then used to find the optimal operating point, thereby reducing energy consumption. Experimental results show that this control algorithm effectively reduces the power consumption of power electronic transformation systems in industrial control.
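A minimal sketch of a GA search over a voltage/frequency working point, assuming a dynamic switching-loss power model (P proportional to V²f) and a made-up feasibility constraint; the abstract does not give the actual models, so every constant below is illustrative:

```python
import random

def power(v, f):
    # assumed dynamic switching-loss model: P ~ C * V^2 * f
    return 1e-9 * v * v * f

def feasible(v, f, f_req=50e3, k=1e-4):
    # assumed constraints: frequency must meet demand, voltage must sustain it
    return f >= f_req and v >= k * f

def ga_min_power(f_req=50e3, pop=30, gens=80, seed=1):
    rng = random.Random(seed)

    def cost(ind):
        v, f = ind
        return power(v, f) if feasible(v, f, f_req) else 1e9  # penalty

    popn = [(rng.uniform(1.0, 40.0), rng.uniform(10e3, 200e3)) for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=cost)
        popn = popn[:pop // 2]                      # keep the fitter half
        while len(popn) < pop:
            (v1, f1), (v2, f2) = rng.sample(popn[:10], 2)
            v = (v1 + v2) / 2 + rng.gauss(0, 0.5)   # arithmetic crossover + mutation
            f = (f1 + f2) / 2 + rng.gauss(0, 2e3)
            popn.append((v, f))
    return min(popn, key=cost)

v, f = ga_min_power()
```

Under this model the GA should drive the working point toward the lowest feasible voltage and frequency, which is exactly the energy-reduction mechanism the paper describes.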


2012 ◽  
Vol 2012 ◽  
pp. 1-10 ◽  
Author(s):  
Tosmate Cheocherngngarn ◽  
Jean Andrian ◽  
Deng Pan

Recently, energy efficiency, or green IT, has become a pressing issue for many IT infrastructures as they adopt energy-efficient strategies in their enterprise IT systems to minimize operational costs. Networking devices are shared resources connecting important IT infrastructures; in a data center network in particular, they operate 24/7 and consume a huge amount of energy, and it has been clearly shown that this energy consumption is largely independent of the traffic through the devices. As a result, power consumption in networking devices is becoming a critical problem, of interest to both the research community and the general public. Multicast benefits group communications by saving link bandwidth and improving application throughput, both of which are important for a green data center. In this paper, we study the deployment strategy of multicast switches in hybrid mode in an energy-aware data center network, using the well-known fat-tree topology as a case study. The objective is to find the best locations to deploy multicast switches so as to achieve optimal bandwidth utilization while minimizing power consumption. We show that nearly 50% energy savings can readily be achieved by applying our proposed algorithm.
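The bandwidth (and hence energy) saving of multicast can be illustrated on a plain binary tree, a deliberate simplification of the paper's fat-tree: unicast counts each receiver's path links with multiplicity, while multicast counts the union of links once.

```python
def path_to_root(leaf, depth):
    node = leaf + (1 << depth)       # heap-style index of the leaf
    path = []
    while node > 1:
        path.append(node)            # node id stands for the edge to its parent
        node //= 2
    return path

def unicast_links(src, dsts, depth):
    # one copy per receiver: shared links are counted with multiplicity
    s = set(path_to_root(src, depth))
    return sum(len(s ^ set(path_to_root(d, depth))) for d in dsts)

def multicast_links(src, dsts, depth):
    # one multicast tree: each link carries the flow once
    s = set(path_to_root(src, depth))
    edges = set()
    for d in dsts:
        edges |= s ^ set(path_to_root(d, depth))
    return len(edges)
```

For example, with 8 leaves (depth 3), sending from host 0 to hosts 1-3 uses 10 link traversals with unicast but only 6 distinct links with multicast; idle links freed this way are what an energy-aware network can power down.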


Efficient resource utilization plays a vital role in cloud computing, since the shared computational power of the resources is offered on demand. During dynamic resource allocation, a server may become over-utilized or under-utilized, leading to excess energy consumption in the data center. The proposed system therefore detects over-utilization and under-utilization of CPU and RAM, and also considers network bandwidth usage, to reduce power consumption in the cloud data center. Hence, a novel method is used to minimize power consumption in the data center.
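A minimal sketch of the detection step over the three metrics the text names; the 20%/80% thresholds are illustrative, since no specific values are given:

```python
def utilization_state(cpu, ram, bw, low=0.2, high=0.8):
    # cpu, ram, bw are utilization fractions in [0, 1]
    metrics = (cpu, ram, bw)
    if any(m > high for m in metrics):
        return "overutilized"        # candidate source for VM migration
    if all(m < low for m in metrics):
        return "underutilized"       # candidate for consolidation and switch-off
    return "normal"
```

Over-utilized servers shed VMs to avoid SLA violations, while under-utilized servers are drained and powered down, which is where the energy saving comes from.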


Author(s):  
Sivaranjani Balakrishnan ◽  
Surendran Doraiswamy

Data centers are becoming the main backbone of and centralized repository for all cloud-accessible services in on-demand cloud computing environments. In particular, virtual data centers (VDCs) facilitate the virtualization of all data center resources such as computing, memory, storage, and networking equipment as a single unit. It is necessary to use the data center efficiently to improve its profitability. The essential factor that significantly influences efficiency is the average number of VDC requests serviced by the infrastructure provider, and the optimal allocation of requests improves the acceptance rate. In existing VDC request embedding algorithms, data center performance factors such as resource utilization rate and energy consumption are not taken into consideration. This motivated us to design a strategy for improving the resource utilization rate without increasing the energy consumption. We propose novel VDC embedding methods based on row-epitaxial and batched greedy algorithms inspired by bioinformatics. These algorithms embed new requests into the VDC while reembedding previously allocated requests. Reembedding is done to consolidate the available resources in the VDC resource pool. The experimental testbed results show that our algorithms boost the data center objectives of high resource utilization (by improving the request acceptance rate), low energy consumption, and short VDC request scheduling delay, leading to an appreciable return on investment.
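The consolidation effect of re-embedding can be sketched with a first-fit-decreasing repack, a deliberately simple stand-in for the paper's row-epitaxial and batched-greedy algorithms (single-dimension demands, purely illustrative):

```python
def repack(requests, capacity, n_hosts):
    # greedy re-embedding: place the largest demands first so fragments stay small
    free = [capacity] * n_hosts
    placement = {}
    for rid, demand in sorted(requests.items(), key=lambda kv: -kv[1]):
        for h in range(n_hosts):
            if free[h] >= demand:
                free[h] -= demand
                placement[rid] = h
                break                # first host that fits
    return placement, free

# four VDC requests (resource units) repacked onto two hosts of capacity 10
placement, free = repack({"a": 6, "b": 5, "c": 4, "d": 3}, capacity=10, n_hosts=2)
```

Re-running such a repack when new requests arrive consolidates the fragmented free capacity into contiguous blocks, which is what raises the acceptance rate without adding hosts.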


2021 ◽  
Vol 39 (1B) ◽  
pp. 203-208
Author(s):  
Haider A. Ghanem ◽  
Rana F. Ghani ◽  
Maha J. Abbas

Data centers are the main nerve of the Internet because of their hosting, storage, cloud computing, and other services. All these services require substantial resources, such as energy and cooling. The main problem is how to improve the operation of data centers by increasing resource utilization, using virtualization to exploit all server resources. In this paper, we consider memory resources: virtual machines are distributed to hosts by comparing each virtual machine's memory requirement with the hosts' available memory and placing the virtual machine on the appropriate host. This reduces the number of host machines in the data center and improves data center performance in terms of power consumption, the number of servers used, and cost.
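The memory-based matching can be sketched as a best-fit placement; the best-fit rule and the sizes below are assumptions, since the abstract only describes comparing VM memory against host memory:

```python
def place_by_memory(vm_mem, host_free):
    # best-fit: choose the host whose free memory leaves the smallest remainder,
    # so lightly loaded hosts stay empty and can be switched off
    placements = []
    for mem in vm_mem:
        fits = [(free - mem, h) for h, free in enumerate(host_free) if free >= mem]
        if not fits:
            placements.append(None)          # no host has enough memory
            continue
        _, h = min(fits)
        host_free[h] -= mem
        placements.append(h)
    return placements

hosts = [8, 16]                              # free memory in GB (illustrative)
result = place_by_memory([6, 6, 4], hosts)
```

Packing VMs tightly by memory leaves whole hosts idle, which directly reduces the server count, power consumption, and cost the paper reports.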


Author(s):  
Li Mao ◽  
De Yu Qi ◽  
Wei Wei Lin ◽  
Bo Liu ◽  
Ye Da Li

With the rapid growth of energy consumption in global data centers and IT systems, energy optimization has become an important problem in cloud data centers. By introducing the heterogeneous energy constraints of heterogeneous physical servers in cloud computing, an energy-efficient resource scheduling model for heterogeneous physical servers based on constraint satisfaction problems is presented. A model-solving method based on resource equivalence optimization is proposed, in which resources of the same class are pruned during allocation so as to reduce the solution space of the resource allocation model and speed up its solution. Experimental results show that, compared with DynamicPower and MinPM, the proposed algorithm (EqPower) not only improves the performance of resource allocation but also reduces the energy consumption of the cloud data center.
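The equivalence pruning can be sketched by collapsing identical server specifications into classes, so a constraint solver branches once per class instead of once per host; the spec tuples below are illustrative:

```python
from collections import Counter

# hosts with identical (cores, mem_gb, max_watts) specs are interchangeable
# from the solver's point of view, so placements onto any of them are
# symmetric and only one representative per class needs to be explored
hosts = [(8, 32, 200), (8, 32, 200), (16, 64, 350), (8, 32, 200)]
classes = Counter(hosts)             # 4 hosts collapse into 2 classes
```

With many identical racks, this symmetry breaking shrinks the solution space exponentially, which is the speed-up mechanism attributed to EqPower.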


2015 ◽  
Vol 4 (1) ◽  
pp. 78
Author(s):  
Cristian Tudoran ◽  
Stefan Albert ◽  
Dorin N. Dadarlat ◽  
Carmen Tripon ◽  
Sorin Dan Anghel

Improving the energy efficiency of our Institute's data center is an ambitious challenge for our research teams. Understanding how energy is consumed in each segment of the system is fundamental to minimizing the overall energy consumed by the system itself. In this paper, we propose an experimentally driven approach to developing a simple and accurate power consumption and temperature monitoring system. We focus on monitoring and measuring the energy consumption patterns of our data center at INCDTIM Cluj-Napoca, Romania.


2014 ◽  
Vol 513-517 ◽  
pp. 2031-2034
Author(s):  
Hui Zhang ◽  
Yong Liu

Virtual machine migration is an effective method to improve the resource utilization of a cloud data center. Common migration methods use heuristic algorithms to allocate virtual machines, and their solutions easily fall into local optima. Therefore, this paper introduces a Migrating algorithm based on Genetic Algorithm (MGA), which draws on genetic evolution theory to perform a global optimal search over the mapping of virtual machines to target nodes, and improves the objective function of the genetic algorithm by including the resource utilization of the virtual machines and target nodes as input factors in the calculation. MGA is compared with Single Threshold (ST) and Double Threshold (DT) in simulation experiments; the results show that MGA can effectively reduce the number of migrations and the number of host machines used.
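The utilization-aware objective can be sketched as a fitness function over a VM-to-node mapping; the exact form is an assumption, since the abstract only states that resource utilization enters the objective:

```python
def mga_fitness(mapping, vm_load, node_cap):
    # mapping[i] is the target node of VM i; loads and capacities in the same units
    used = {}
    for vm, node in enumerate(mapping):
        used[node] = used.get(node, 0.0) + vm_load[vm]
    if any(u > node_cap[n] for n, u in used.items()):
        return 0.0                   # over-capacity node: reject the mapping
    # mean utilization of the nodes actually in use: favours full nodes,
    # hence fewer active hosts and fewer future migrations
    return sum(u / node_cap[n] for n, u in used.items()) / len(used)
```

For two VMs of load 0.5 on unit-capacity nodes, consolidating them on one node scores 1.0 while spreading them scores 0.5, so the GA's selection pressure pushes toward consolidation.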

