An Enhanced Discrete Symbiotic Organism Search Algorithm for Optimal Task Scheduling in the Cloud

Algorithms ◽  
2021 ◽  
Vol 14 (7) ◽  
pp. 200
Author(s):  
Suleiman Sa’ad ◽  
Abdullah Muhammed ◽  
Mohammed Abdullahi ◽  
Azizol Abdullah ◽  
Fahrul Hakim Ayob

Recently, cloud computing has experienced tremendous growth as government agencies and private organisations migrate to the cloud environment. Hence, an efficient task scheduling strategy is paramount for improving the prospects of cloud computing. Typically, a set of tasks is scheduled onto diverse resources (virtual machines) to minimise the makespan and achieve optimum utilisation of the system by reducing the response time within the cloud environment. The task scheduling problem is NP-complete; as such, obtaining a precise solution is difficult, particularly for large-scale tasks. Therefore, in this paper, we propose a metaheuristic enhanced discrete symbiotic organism search (eDSOS) algorithm for optimal task scheduling in the cloud computing setting. Our proposed algorithm extends the standard symbiotic organism search (SOS), a nature-inspired algorithm that has been applied to various numerical optimisation problems. The algorithm imitates the symbiotic associations (mutualism, commensalism, and parasitism) displayed by organisms in an ecosystem. Despite the improvements made by the discrete symbiotic organism search (DSOS) algorithm, it still becomes trapped in local optima when the makespan and response-time values are large. The eDSOS diversifies the local search space of the DSOS by substituting the best value with any candidate in the population at the mutualism phase, which makes it well suited to task scheduling problems in the cloud. Thus, the eDSOS strategy converges faster when the search space is large, owing to this diversification. The experiment was conducted with the CloudSim simulator, and the simulation results show that the proposed eDSOS produces solutions of better quality than those of the DSOS. Lastly, we analysed the proposed strategy using a two-sample t-test, which revealed that eDSOS significantly outperformed the benchmark strategy (DSOS), particularly for large search spaces. The percentage improvements were 26.23% for the makespan and 63.34% for the response time.
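The mutualism-phase diversification described above can be sketched as follows. This is a minimal illustrative sketch assuming a continuous fitness function and greedy acceptance; the population size, benefit factors, and the random-candidate reference are illustrative choices, not the authors' exact eDSOS implementation.

```python
import random

def mutualism_phase(population, fitness, rng=random.Random(0)):
    """One mutualism pass of an SOS-style search (minimization).

    Each organism X_i interacts with a random partner X_j and both
    move relative to a reference organism and their mutual vector.
    In standard SOS the reference is the best organism; the eDSOS
    idea is to diversify by substituting a random candidate for the
    best. Greedy acceptance keeps only improving moves.
    """
    n = len(population)
    for i in range(n):
        j = rng.randrange(n)
        while j == i:
            j = rng.randrange(n)
        x_i, x_j = population[i], population[j]
        # eDSOS-style diversification: the reference is any candidate
        # in the population, not always the current best organism.
        ref = population[rng.randrange(n)]
        mutual = [(a + b) / 2.0 for a, b in zip(x_i, x_j)]
        bf1, bf2 = rng.choice([1, 2]), rng.choice([1, 2])
        new_i = [a + rng.random() * (r - m * bf1)
                 for a, r, m in zip(x_i, ref, mutual)]
        new_j = [b + rng.random() * (r - m * bf2)
                 for b, r, m in zip(x_j, ref, mutual)]
        if fitness(new_i) < fitness(x_i):
            population[i] = new_i
        if fitness(new_j) < fitness(x_j):
            population[j] = new_j
    return population
```

Because acceptance is greedy, the best fitness in the population never worsens from one pass to the next.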

2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Jitendra Kumar Samriya ◽  
Subhash Chandra Patel ◽  
Manju Khurana ◽  
Pradeep Kumar Tiwari ◽  
Omar Cheikhrouhou

Cloud computing is the most prominent established framework; it offers access to resources and services based on large-scale distributed processing. The cloud environment requires an intensive management system that gathers information about all phases of task processing and ensures fair resource provisioning at the required levels of Quality of Service (QoS). Virtual machine allocation is a major issue in the cloud environment that affects energy consumption and asset utilization in distributed cloud computing. Accordingly, in this paper, a multiobjective Emperor Penguin Optimization (EPO) algorithm is proposed to allocate virtual machines with efficient power utilization in a heterogeneous cloud environment. The proposed method is compared against the Binary Gravity Search Algorithm (BGSA), Ant Colony Optimization (ACO), and Particle Swarm Optimization (PSO) for virtual machine allocation in the data center. Compared with these strategies, EPO is more energy-efficient, with significant differences. The proposed system was evaluated on a Java simulation platform. The experimental results show that the proposed EPO-based system is very effective in limiting energy consumption and SLA violations (SLAV) while improving QoS for delivering capable cloud service.
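The trade-off such an allocator must score, energy consumption versus SLA violations, can be illustrated with a simple scalarized cost function. The linear power model, wattage constants, weights, and SLA penalty below are assumptions for illustration, not the paper's formulation.

```python
def placement_cost(host_utils, sla_violations, w_energy=0.5, w_sla=0.5):
    """Scalarized multiobjective cost of a candidate VM placement.

    Uses the common linear host power model P = P_idle + (P_max -
    P_idle) * u for each host's CPU utilization u in [0, 1], plus a
    penalty per SLA violation. All constants are assumed values.
    """
    p_idle, p_max = 70.0, 250.0  # watts; typical host figures (assumed)
    power = sum(p_idle + (p_max - p_idle) * u for u in host_utils)
    return w_energy * power + w_sla * 100.0 * sla_violations
```

A metaheuristic such as EPO would search placements minimizing this cost; lowering either the aggregate power draw or the violation count lowers the score.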


Author(s):  
Dinkan Patel ◽  
Anjuman Ranavadiya

Cloud computing is an Internet-based model that enables convenient, on-demand access to resources that can be provisioned rapidly and with minimal effort. Cloud services are delivered as IaaS, PaaS, or SaaS. Scheduling tasks efficiently is important so that resources are utilized with minimum time, which in turn yields better performance. Real-time tasks require dynamic scheduling, as tasks cannot be known in advance as in the static scheduling approach. Different task scheduling algorithms can be used to increase real-time performance, and executing them on virtual machines can prove useful. This paper reviews various task scheduling algorithms that can be used to schedule tasks and allocate resources so that performance is increased.


2021 ◽  
Author(s):  
Jianying Miao

This thesis describes an innovative task scheduling and resource allocation strategy that uses thresholds with attributes and amount (TAA) to improve the quality of service of cloud computing. In the strategy, attribute-oriented thresholds are set to decide on the acceptance of cloudlets (tasks) and the provisioning of accepted cloudlets on suitable resources represented by virtual machines (VMs). Experiments are performed in a simulation environment based on CloudSim, modified for the experiments. Experimental results indicate that TAA can significantly improve attribute matching between cloudlets and VMs, with average execution time reduced by 30 to 50% compared to a typical non-filtering policy. Moreover, the tradeoff between acceptance rate and task delay, as well as between prioritized and non-prioritized cloudlets, may be adjusted as desired. The filtering type and range and the positioning of thresholds may also be adjusted to adapt to the dynamically changing cloud environment.


2020 ◽  
Vol 17 (4) ◽  
pp. 1990-1998
Author(s):  
R. Valarmathi ◽  
T. Sheela

Cloud computing is a powerful computing technology that renders flexible services anywhere to the user. Resource management and task scheduling are essential aspects of cloud computing, and task scheduling is one of its main problems. Task scheduling and resource management in the cloud become difficult optimization problems when quality-of-service needs are considered. Much of the existing work on task scheduling focuses only on deadline issues and cost optimization, neglecting the significance of availability, robustness, and reliability. The main purpose of this study is to develop an optimized algorithm for efficient resource allocation and scheduling in a cloud environment. This study uses PSO and the R-factor algorithm. With the PSO algorithm, tasks are scheduled to virtual machines (VMs) to reduce waiting time and improve system throughput. PSO is a technique inspired by the social and collective behavior of animal swarms in nature, wherein particles search the problem space to find near-optimal or optimal solutions. A hybrid algorithm combining PSO and R-factor has been developed with the purpose of simultaneously reducing the processing time, makespan, and cost of task execution. The test results and simulations reveal that the proposed method offers better efficiency than previously prevalent approaches.
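The PSO update that drives such a scheduler can be sketched as follows. The inertia and acceleration coefficients are common defaults, not the paper's tuned values, and the fitness function stands in for the makespan/cost objective.

```python
import random

def pso_step(positions, velocities, pbest, gbest, fitness,
             w=0.7, c1=1.5, c2=1.5, rng=random.Random(1)):
    """One standard PSO velocity/position update (minimization).

    v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x);  x <- x + v.
    pbest/gbest are updated greedily; returns the (possibly new)
    global best position.
    """
    for i, x in enumerate(positions):
        v = velocities[i]
        for d in range(len(x)):
            r1, r2 = rng.random(), rng.random()
            v[d] = (w * v[d]
                    + c1 * r1 * (pbest[i][d] - x[d])
                    + c2 * r2 * (gbest[d] - x[d]))
            x[d] += v[d]
        if fitness(x) < fitness(pbest[i]):
            pbest[i] = list(x)
            if fitness(x) < fitness(gbest):
                gbest = list(x)
    return gbest
```

In a scheduling setting, a particle's position encodes a task-to-VM mapping and the fitness would combine waiting time, makespan, and cost as the hybrid PSO/R-factor method describes.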


Cloud computing is becoming one of the most advanced and promising technologies of the information technology era. It has also helped to reduce costs for small and medium enterprises through cloud provider services. Resource scheduling with load balancing is one of the primary and most important goals of the cloud computing scheduling process. Resource scheduling in the cloud is a non-deterministic problem: it assigns tasks to virtual machines (VMs) through servers or service providers in a way that increases resource utilization and performance, reduces response time, and keeps the whole system balanced. In this paper, we present a deep-learning-based resource scheduling and load balancing model for the cloud environment using the multidimensional queuing load optimization (MQLO) algorithm; a Multidimensional Resource Scheduling and Queuing Network (MRSQN) is used to detect overloaded servers and migrate their tasks to other VMs. An artificial neural network (ANN) serves as the deep learning classifier that identifies overloaded or underloaded servers or VMs and balances them based on basic parameters such as CPU, memory, and bandwidth. In particular, the proposed ANN-based MQLO algorithm improves response time as well as success rate. The simulation results show that the proposed ANN-based MQLO algorithm improves on existing algorithms in terms of average success rate, resource scheduling efficiency, energy consumption, and response time.
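The decision that classifier makes, labeling a server or VM as overloaded or underloaded from CPU, memory, and bandwidth utilization, can be illustrated with a rule-based stand-in. The thresholds are assumptions; the paper's actual classifier is a trained neural network, not fixed rules.

```python
def classify_load(cpu, mem, bw, upper=0.8, lower=0.2):
    """Label a VM/host from normalized utilizations in [0, 1].

    Averages the three resource dimensions and compares against
    assumed upper/lower thresholds. A trained ANN would learn a more
    nuanced decision boundary over the same inputs.
    """
    avg = (cpu + mem + bw) / 3.0
    if avg > upper:
        return "overloaded"
    if avg < lower:
        return "underloaded"
    return "balanced"
```

Hosts labeled "overloaded" would then become migration sources and "underloaded" hosts migration targets for the MRSQN step.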



2020 ◽  
Author(s):  
M Gokuldhev ◽  
G Singaravel

Nowadays, cloud computing is a new computing model in the field of information technology and research. Generally, the cloud environment aims to provide resources according to the user's needs. A major problem in cloud computing is task scheduling. However, previous scheduling methods concentrate only on resource needs, memory, execution time, and cost. In this paper, we introduce an optimal task-scheduling algorithm, the local pollination-based moth search algorithm (LPMSA), which hybridizes the moth search algorithm (MSA) and the flower pollination algorithm (FPA). The proposed LPMSA chooses an optimal solution for proper task scheduling in the cloud. Moreover, the exploitation capacity of MSA is improved by using the local search of the FPA. In this work, we use a two-fold simulation process implemented on the Java platform. The task-scheduling performance of the proposed LPMSA is evaluated using low- and high-heterogeneity machines with uniform and non-uniform parameters. The experimental analysis demonstrates that the proposed LPMSA approach is well suited for cloud task scheduling, reducing the makespan and energy consumption during task scheduling.


2014 ◽  
Vol 13 (11) ◽  
pp. 5142-5154
Author(s):  
Hamdy M. Mousa ◽  
Gamal F. Elhady

Nowadays, cloud computing is an expanding area in research and industry, involving virtualization, distributed computing, the Internet, software, security, and web services. A cloud consists of several elements, such as clients, data centers, and distributed servers, and offers fault tolerance, high availability, effectiveness, scalability, flexibility, reduced overhead for users, reduced cost of ownership, and on-demand services. Increasingly important factors are the cost of virtual machines in data centers and response time. This paper therefore develops a fuzzy-logic-based trust model for cloud computing that explores the coordination between data centers and users, to optimize application performance, the cost of virtual machines in data centers, and response time, using Cloud Computing Analyst.


2021 ◽  
Vol 11 (14) ◽  
pp. 6244
Author(s):  
Rohail Gulbaz ◽  
Abdul Basit Siddiqui ◽  
Nadeem Anjum ◽  
Abdullah Alhumaidi Alotaibi ◽  
Turke Althobaiti ◽  
...  

Task scheduling is one of the core issues in cloud computing. Tasks are heterogeneous and have intensive computational requirements. Tasks must be scheduled on Virtual Machines (VMs), which are the resources in a cloud environment. Due to the immensity of the search space of possible mappings of tasks to VMs, meta-heuristics are used for task scheduling. Makespan and load balancing are crucial Quality of Service (QoS) parameters in scheduling. This research contributes a novel load balancing scheduler, the Balancer Genetic Algorithm (BGA), which improves makespan and load balancing. Insufficient load balancing causes overhead in resource utilization, as some resources remain idle. BGA incorporates a load balancing mechanism in which the actual load, in terms of million instructions assigned to VMs, is considered. The need for multi-objective optimization to improve load balancing and makespan is also emphasized. Skewed, normal, and uniform distributions of workload and different batch sizes are used in the experimentation. BGA exhibits significant improvement over various state-of-the-art approaches in makespan, throughput, and load balancing.
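The two objectives BGA balances can be illustrated by evaluating a candidate task-to-VM mapping. The load model (million instructions divided by VM speed in MIPS) follows the abstract; the particular imbalance measure and the idea of minimizing both together are illustrative assumptions.

```python
def makespan_and_imbalance(assignment, task_mi, vm_mips):
    """Evaluate a task->VM mapping.

    assignment[t] is the VM index for task t; task_mi gives each
    task's length in million instructions; vm_mips each VM's speed.
    A VM's completion time is its assigned MI divided by its MIPS.
    Makespan is the slowest VM's completion time; imbalance is the
    spread between the most and least loaded VMs. A GA-based
    scheduler would minimize some combination of the two.
    """
    loads = [0.0] * len(vm_mips)
    for task, vm in enumerate(assignment):
        loads[vm] += task_mi[task] / vm_mips[vm]
    return max(loads), max(loads) - min(loads)
```

A chromosome in such a GA is simply the `assignment` list, so crossover and mutation operate on VM indices while this function supplies the fitness terms.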


Author(s):  
Shailendra Raghuvanshi ◽  
Priyanka Dubey

Load balancing of non-preemptive independent tasks on virtual machines (VMs) is an important aspect of task scheduling in clouds. Whenever certain VMs are overloaded and the remaining VMs are underloaded, the load has to be balanced to achieve optimal machine utilization. In this paper, we propose an algorithm named honey bee behavior inspired load balancing, which aims to achieve a well balanced load across virtual machines to maximize throughput. The proposed algorithm also balances the priorities of tasks on the machines in such a way that the waiting time of tasks in the queue is minimal. We have compared the proposed algorithm with existing load balancing and scheduling algorithms. The experimental results, obtained with the WorkflowSim simulator in Java, show that the algorithm is effective compared with existing algorithms, with a significant improvement in average execution time and a reduction in the waiting time of tasks in the queue.
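The honey-bee idea, resubmitting tasks removed from overloaded VMs to lightly loaded ones the way returning foragers recruit others to richer food sources, can be sketched greedily. This simplified version omits the authors' priority handling and treats each task's load contribution as its size divided by the target VM's capacity.

```python
def honey_bee_balance(vm_loads, tasks, capacity):
    """Greedy honey-bee-inspired placement of removed tasks.

    Each task (a returning 'bee') is resubmitted to the currently
    least-loaded VM, which plays the role of the richest food
    source. Returns the placement and the resulting VM loads.
    """
    loads = list(vm_loads)
    placement = []
    for t in tasks:
        target = min(range(len(loads)), key=lambda i: loads[i])
        loads[target] += t / capacity[target]
        placement.append(target)
    return placement, loads
```

With equal-capacity VMs and equal tasks, this degenerates to round-robin; the bee analogy earns its keep when loads and capacities are heterogeneous, since each placement reacts to the current load picture.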

