GANA-VDC: Application-Aware with Bandwidth Guarantee in Cloud Datacenters

Electronics ◽  
2019 ◽  
Vol 8 (3) ◽  
pp. 258
Author(s):  
Saleem Karmoshi ◽  
Shuo Wang ◽  
Naji Alhusaini ◽  
Jing Li ◽  
Ming Zhu ◽  
...  

Allocating bandwidth guarantees to applications in the cloud has become increasingly essential as applications compete for shared cloud network resources. However, cloud-computing providers offer no network bandwidth guarantees in a cloud environment, which can prevent tenants from running their applications predictably. Existing schemes offer tenants cluster-abstraction solutions that emulate the underlying physical network resources, but these have proven impractical; nevertheless, providing virtual network abstractions remains an essential step in the right direction. In this paper, we consider the requirements for enabling an application-aware network with bandwidth guarantees in a Virtual Data Center (VDC). We design GANA-VDC, a network virtualization framework supporting VDC application-aware networking with bandwidth guarantees in a cloud datacenter. GANA-VDC achieves scalability by using an interceptor to translate OpenFlow features into fine-grained Quality of Service (QoS). To facilitate the expression of diverse network resource demands, we also propose a new Virtual Network (VN) to Physical Network (PN) mapping approach, the Graph Abstraction Network Architecture (GANA), which we introduce in this paper; it allows tenants to provide applications with a cloud networking environment and thereby improves performance. Our results show that GANA-VDC can provide bandwidth guarantees and achieve low time complexity, yielding higher network utility.
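Fine-grained QoS enforcement of the kind GANA-VDC's interceptor prompts is typically built on a rate-limiting primitive such as a token bucket. The sketch below illustrates only that generic primitive; the class and parameter names are hypothetical and not taken from the paper:

```python
import time

class TokenBucket:
    """Rate limiter guaranteeing an average rate with bounded bursts."""

    def __init__(self, rate_bps, burst_bits, now=None):
        self.rate = float(rate_bps)        # token refill rate (bits/s)
        self.capacity = float(burst_bits)  # maximum burst size (bits)
        self.tokens = float(burst_bits)
        self.last = time.monotonic() if now is None else now

    def allow(self, packet_bits, now=None):
        """Admit the packet if enough tokens have accumulated."""
        now = time.monotonic() if now is None else now
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True
        return False
```

A per-application bucket of this shape gives each flow a guaranteed average rate while tolerating short bursts, which is the usual mechanism behind per-flow QoS rules.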

2020 ◽  
Vol 10 (21) ◽  
pp. 7874
Author(s):  
Shuo Wang ◽  
Zhiqiang Zhou ◽  
Hongjie Zhang ◽  
Jing Li

In the cloud datacenter, under the multi-tenant model, network resources should be fairly allocated among VDCs (virtual datacenters). Conventionally, cloud network resources are allocated on a best-effort basis, so the specifics of network resource allocation are unclear. Previous research has either aimed to provide minimum bandwidth guarantees, or focused on realizing work conservation according to a VM-to-VM (virtual machine to virtual machine) flow policy, a per-source policy, or both. However, it failed to consider allocating redundant bandwidth among VDCs in a fair way. This paper presents a bandwidth-guarantee enforcement framework, NXT-Freedom, which allocates network resources on the basis of per-VDC fairness and can achieve work conservation. To guarantee per-VDC fair allocation, a hierarchical max–min fairness algorithm is put forward in this paper. To ensure that the framework can be applied to a non-congestion-free network core and achieve scalability, NXT-Freedom decouples the computation of per-VDC allocation from the execution of allocation, though this brings some CPU overhead resulting from bandwidth enforcement. We observe that there is no need to enforce a non-blocking virtual network. Leveraging this observation, we distinguish the virtual network type of each VDC to eliminate part of the CPU overhead. The evaluation results of a prototype prove that NXT-Freedom can achieve per-VDC performance isolation and adapts quickly to flow variation in the cloud datacenter.
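Hierarchical max–min fairness of the kind described can be sketched as classic water-filling applied first across VDCs and then within each VDC. This is a minimal illustration under unit weights, not the paper's actual algorithm:

```python
def max_min_fair(capacity, demands):
    """Water-filling max-min fair split of `capacity` among `demands`."""
    alloc = {k: 0.0 for k in demands}
    remaining = dict(demands)
    cap = float(capacity)
    while remaining and cap > 1e-9:
        share = cap / len(remaining)
        satisfied = {k: d for k, d in remaining.items() if d <= share}
        if not satisfied:
            # Everyone wants more than an equal share: split evenly.
            for k in remaining:
                alloc[k] += share
            break
        for k, d in satisfied.items():
            alloc[k] += d          # small demands are fully met...
            cap -= d               # ...and their leftover is redistributed
            del remaining[k]
    return alloc

def hierarchical_max_min(capacity, vdc_demands):
    """Two levels: fair split across VDCs, then across flows inside each."""
    totals = {v: sum(f.values()) for v, f in vdc_demands.items()}
    per_vdc = max_min_fair(capacity, totals)
    return {v: max_min_fair(per_vdc[v], vdc_demands[v]) for v in vdc_demands}
```

For example, with capacity 10 and demands {2, 4, 10}, water-filling yields {2, 4, 4}: the small demands are satisfied and the surplus goes to the bottlenecked flow.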


2021 ◽  
Author(s):  
Ze Xi Xu ◽  
Lei Zhuang ◽  
Meng Yang He ◽  
Si Jin Yang ◽  
Yu Song ◽  
...  

Virtualization and resource isolation techniques have enabled the efficient sharing of networked resources. Due to growing user demands, how to control network resource allocation accurately and flexibly has gradually become a research hotspot. This paper therefore presents a new edge-based virtual network embedding approach to this problem that employs a graph edit distance method to accurately control resource usage. In particular, to manage network resources efficiently, we restrict the use conditions of network resources and constrain the structure based on common-substructure isomorphism, and an improved spider monkey optimization algorithm is employed to prune redundant information from the substrate network. Experimental results showed that the proposed method achieves better performance than existing algorithms in terms of resource management capacity, including energy savings and the revenue-cost ratio.
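Graph edit distance, the similarity measure at the core of this approach, counts the node and edge insertions/deletions needed to turn one graph into another. A brute-force version for tiny undirected graphs can be sketched as below; this is illustrative only, since the paper relies on substructure isomorphism and spider monkey optimization precisely because exhaustive search does not scale:

```python
from itertools import permutations

def ged(nodes1, edges1, nodes2, edges2):
    """Exact edit distance for tiny undirected graphs under unit costs
    (node insert/delete = 1, edge insert/delete = 1); exponential in size."""
    # Pad the smaller node set with None placeholders (insert/delete slots).
    a = list(nodes1) + [None] * max(0, len(nodes2) - len(nodes1))
    b = list(nodes2) + [None] * max(0, len(nodes1) - len(nodes2))
    idx = {v: i for i, v in enumerate(a) if v is not None}
    e2 = {frozenset(e) for e in edges2}
    best = float("inf")
    for perm in permutations(range(len(b))):
        # Position i of the padded list `a` maps to b[perm[i]].
        node_cost = sum(1 for i in range(len(a))
                        if (a[i] is None) != (b[perm[i]] is None))
        mapped, dropped = set(), 0
        for x, y in edges1:
            mx, my = b[perm[idx[x]]], b[perm[idx[y]]]
            if mx is None or my is None:
                dropped += 1            # edge deleted with its endpoint
            else:
                mapped.add(frozenset({mx, my}))
        edge_cost = dropped + len(mapped - e2) + len(e2 - mapped)
        best = min(best, node_cost + edge_cost)
    return best
```

A triangle and a three-node path, for instance, differ by exactly one edge deletion, so their edit distance is 1.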


Game Theory ◽  
2017 ◽  
pp. 383-399
Author(s):  
Sungwook Kim

Computer network bandwidth can be viewed as a limited resource. The users on the network compete for that resource, and their competition can be simulated using game theory models. No centralized regulation of network usage is possible because of the diverse ownership of network resources; the problem is therefore one of ensuring the fair sharing of network resources. If a centralized system could be developed to govern the use of the shared resources, each user would get an assigned network usage time or bandwidth, limiting each person's usage of network resources to his or her fair share. As of yet, however, such a system remains an impossibility, making the sharing of network resources a competitive game between the users of the network and decreasing everyone's utility. This chapter explores this competitive game.
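The inefficiency the chapter analyzes can be illustrated with a toy bandwidth game: each of n users picks a send rate x_i on a unit-capacity link and earns x_i * (1 - sum_j x_j), so more aggressive sending congests everyone. This is a hedged toy model, not the chapter's specific formulation:

```python
def best_response(others_sum):
    # Maximizing u(x) = x * (1 - x - others_sum) gives x = (1 - others_sum) / 2.
    return max(0.0, (1.0 - others_sum) / 2.0)

def nash_by_iteration(n, rounds=200):
    """Sequential best-response dynamics for n identical users."""
    x = [0.0] * n
    for _ in range(rounds):
        for i in range(n):
            x[i] = best_response(sum(x) - x[i])
    return x
```

At equilibrium each user sends 1/(n+1), so the total load n/(n+1) exceeds the socially optimal 1/2: the selfish outcome overloads the link and lowers everyone's utility, which is exactly the competitive-game effect the chapter studies.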


Author(s):  
Emilia Rosa Jimson ◽  
Kashif Nisar ◽  
Mohd Hanafi Ahmad Hijazi

The complex design of the current network architecture, which has inevitably resulted in poor network resources management, has triggered researchers to propose a Software Defined Networking (SDN)-based network model to simplify the management of the limited bandwidth of a network. The key idea of the SDN-based model is to simplify network management by introducing a centralized control through which the dynamic update of forwarding rules, the simplification of network devices tasks, and flow abstractions can be realized. This proposed model utilizes the limited network bandwidth systematically by giving real-time traffic higher priority than non-real-time traffic to access limited resources. The experimental results showed that the proposed model helped ensure real-time traffic would be given greater priority to access the limited bandwidth, where the major portion of the limited bandwidth was allocated to the real-time traffic.
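The prioritization the model describes amounts to strict-priority queueing: a real-time packet is always dequeued before any waiting non-real-time packet. A minimal sketch with hypothetical names, not the paper's controller code:

```python
import heapq

class PriorityScheduler:
    """Strict-priority dequeue: real-time packets always leave first."""
    REAL_TIME, BEST_EFFORT = 0, 1

    def __init__(self):
        self._q = []
        self._seq = 0  # FIFO tiebreak within a priority class

    def enqueue(self, packet, prio):
        heapq.heappush(self._q, (prio, self._seq, packet))
        self._seq += 1

    def dequeue(self):
        # Raises IndexError when the queue is empty.
        return heapq.heappop(self._q)[2]
```

Under load, best-effort traffic only receives whatever bandwidth the real-time class leaves unused, matching the allocation behavior reported in the experiments.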


2019 ◽  
Vol 9 (1) ◽  
pp. 137
Author(s):  
Zhiyong Ye ◽  
Yuanchang Zhong ◽  
Yingying Wei

The workload of a data center is complex and its requirements are variable. In practice, however, the attributes of network workloads are rarely used by resource schedulers. Failure to dynamically schedule network resources according to workload changes inevitably prevents optimal throughput and performance when allocating network resources. There is therefore an urgent need for a scheduling framework that is workload-aware and allocates network resources on demand based on network I/O virtualization. However, current mainstream I/O virtualization methods cannot provide workload-aware functions while meeting the performance requirements of virtual machines (VMs). We therefore propose a method that dynamically senses the VM workload to allocate network resources on demand, ensuring the scalability of the VMs while improving system performance. We combine the advantages of I/O para-virtualization and SR-IOV technology, using a limited number of virtual functions (VFs) to guarantee the performance of network-intensive VMs and thereby improve the overall network performance of the system. For non-network-intensive VMs, system scalability is guaranteed by using para-virtualized Network Interface Cards (NICs), which are not limited in number. Furthermore, to allocate bandwidth according to each VM's network workload, we hierarchically divide the VF network bandwidth and dynamically switch between VF and para-virtualized NICs through the active-backup strategy of the Linux bonding driver and ACPI hotplug technology, ensuring the dynamic allocation of VFs. Experiments show that the allocation framework effectively improves system network performance: average request delay can be reduced by more than 26%, and system bandwidth throughput can be improved by about 5%.
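The core scheduling decision, giving the scarce SR-IOV VFs to the most network-intensive VMs and falling back to para-virtualized NICs for the rest, can be sketched as below. Names, the throughput metric, and the threshold are all hypothetical illustrations, not the paper's actual policy:

```python
def assign_nics(vm_throughput_mbps, num_vfs, threshold_mbps=500):
    """Grant scarce SR-IOV VFs to the most network-intensive VMs;
    everyone else gets a para-virtualized (virtio) NIC, assumed unlimited."""
    ranked = sorted(vm_throughput_mbps, key=vm_throughput_mbps.get, reverse=True)
    plan, vfs_left = {}, num_vfs
    for vm in ranked:
        if vfs_left > 0 and vm_throughput_mbps[vm] >= threshold_mbps:
            plan[vm] = "vf"        # hardware passthrough for heavy flows
            vfs_left -= 1
        else:
            plan[vm] = "virtio"    # software NIC preserves scalability
    return plan
```

Re-running such a classifier as workloads shift, combined with an active-backup bond between the two NIC types, lets a VM migrate between VF and virtio without losing connectivity.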


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3444 ◽  
Author(s):  
Cheol-Ho Hong ◽  
Kyungwoon Lee ◽  
Minkoo Kang ◽  
Chuck Yoo

Fog computing is a new computing paradigm that employs computation and network resources at the edge of a network to build small clouds, which perform as small data centers. In fog computing, lightweight virtualization (e.g., containers) has been widely used to achieve low overhead on performance-limited fog devices such as WiFi access points (APs) and set-top boxes. Unfortunately, containers have a weakness in the control of network bandwidth for outbound traffic, which poses a challenge to fog computing. Existing solutions for containers fail to achieve desirable network bandwidth control, causing bandwidth-sensitive applications to suffer unacceptable network performance. In this paper, we propose qCon, a QoS-aware network resource management framework for containers that limits the rate of outbound traffic in fog computing. qCon aims to provide both proportional share scheduling and bandwidth shaping to satisfy various performance demands from containers while remaining lightweight. For this purpose, qCon supports three scheduling policies that can be applied to containers simultaneously: proportional share scheduling, minimum bandwidth reservation, and maximum bandwidth limitation. For a lightweight implementation, qCon builds its own scheduling framework on the Linux bridge by interposing qCon's scheduling interface on the frame processing function of the bridge. To show qCon's effectiveness in a real fog computing environment, we implement qCon in a Docker container infrastructure on a performance-limited fog device, a Raspberry Pi 3 Model B board.
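The three policies can be combined in one allocation pass: reserve each container's minimum, then distribute the remainder in proportion to weights while respecting maximum caps, recycling any capped surplus. A minimal sketch assuming positive weights and feasible reservations, not qCon's in-kernel scheduler:

```python
def allocate(link_bw, containers):
    """containers: {name: (weight, min_bw, max_bw)} -> {name: allocated_bw}.
    Weights must be positive and reservations must fit in link_bw."""
    alloc = {c: mn for c, (w, mn, mx) in containers.items()}  # reservations
    left = link_bw - sum(alloc.values())
    assert left >= -1e-9, "minimum reservations exceed link capacity"
    active = set(containers)
    while left > 1e-9 and active:
        total_w = sum(containers[c][0] for c in active)
        grants = {c: left * containers[c][0] / total_w for c in active}
        left = 0.0
        for c, g in grants.items():
            w, mn, mx = containers[c]
            take = min(g, mx - alloc[c])   # respect the maximum cap
            alloc[c] += take
            left += g - take               # capped surplus is recycled
            if alloc[c] >= mx - 1e-12:
                active.discard(c)
    return alloc
```

On a 100 Mbps link with containers A (weight 2, min 10, max 100), B (weight 1, min 10, max 20), and C (weight 1, min 0, max 100), B hits its 20 Mbps cap and its surplus is re-split between A and C by weight, while the full link stays utilized.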


Sensors ◽  
2020 ◽  
Vol 20 (9) ◽  
pp. 2655 ◽  
Author(s):  
Yue Zong ◽  
Chuan Feng ◽  
Yingying Guan ◽  
Yejun Liu ◽  
Lei Guo

The emerging 5G applications and the connectivity of billions of devices have driven the investigation of multi-domain heterogeneous converged optical networks. To support emerging applications with their diverse quality-of-service requirements, network slicing has been proposed as a promising technology. Network virtualization is an enabler for network slicing, whereby the physical network can be partitioned into different configurable slices in multi-domain heterogeneous converged optical networks. An efficient resource allocation mechanism for multiple virtual networks in network virtualization is one of the main challenges, referred to as virtual network embedding (VNE). This paper surveys the state-of-the-art work on the VNE problem in multi-domain heterogeneous converged optical networks and discusses future research issues and challenges. We describe VNE in multi-domain heterogeneous converged optical networks along with enabling network orchestration technologies, and analyze the literature on VNE algorithms with various network considerations for each network domain. The basic VNE problem, with various motivations and performance metrics for general scenarios, is discussed. A VNE algorithm taxonomy is presented, classifying the major VNE algorithms into three categories according to the existing literature. We analyze and compare the attributes of the algorithms, such as node and link embedding methods, objectives, and network architecture, which can provide a selection guide or baseline for future VNE work. Finally, we explore broader perspectives on future research issues and challenges in 5G scenarios, field trial deployment, and machine learning-based algorithms.
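The two-stage greedy heuristic that many surveyed VNE algorithms share, rank-based node mapping followed by shortest-path link mapping, can be sketched as follows. This is a generic illustration with hypothetical inputs, not any specific surveyed algorithm:

```python
from collections import deque

def greedy_embed(s_cpu, s_bw, v_cpu, v_bw):
    """Greedy two-stage VNE. s_cpu: {node: cpu}, s_bw: {(u, v): bw}
    (undirected), v_cpu: {vnode: cpu}, v_bw: {(a, b): bw}.
    Returns (node_map, link_map), or None if the request cannot embed."""
    s_cpu = dict(s_cpu)
    bw = {}
    for (u, v), cap in s_bw.items():
        bw[(u, v)] = bw[(v, u)] = cap
    node_map = {}
    # Stage 1: map the most demanding virtual nodes onto the roomiest hosts.
    for vn in sorted(v_cpu, key=v_cpu.get, reverse=True):
        cands = [n for n in s_cpu
                 if s_cpu[n] >= v_cpu[vn] and n not in node_map.values()]
        if not cands:
            return None
        host = max(cands, key=s_cpu.get)
        node_map[vn] = host
        s_cpu[host] -= v_cpu[vn]
    link_map = {}
    # Stage 2: route each virtual link over a BFS shortest path whose
    # substrate links all have enough residual bandwidth.
    for (a, c), need in v_bw.items():
        src, dst = node_map[a], node_map[c]
        parent = {src: None}
        q = deque([src])
        while q and dst not in parent:
            u = q.popleft()
            for (x, y), r in bw.items():
                if x == u and y not in parent and r >= need:
                    parent[y] = u
                    q.append(y)
        if dst not in parent:
            return None
        path, n = [], dst
        while parent[n] is not None:
            path.append((parent[n], n))
            n = parent[n]
        path.reverse()
        for u, v in path:
            bw[(u, v)] -= need
            bw[(v, u)] -= need
        link_map[(a, c)] = path
    return node_map, link_map
```

Most of the taxonomy's variation lies in replacing the ranking heuristic (here, residual CPU) or the routing step with coordinated, exact, or learning-based alternatives.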


2019 ◽  
Vol 10 (3) ◽  
pp. 33-48 ◽  
Author(s):  
Emilia Rosa Jimson ◽  
Kashif Nisar ◽  
Mohd Hanafi Ahmad Hijazi

Software defined networking (SDN) architecture has been shown to make the management of the current network architecture simpler and more flexible. The key idea of SDN is to simplify network management by introducing centralized control, through which dynamic updates of forwarding rules, simplification of network device tasks, and flow abstractions can be realized. In this article, the researchers discuss the complex design of the current network architecture, which has inevitably resulted in poor management of network resources such as bandwidth. An SDN-based network model has been proposed to simplify the management of the limited bandwidth of a network. The proposed network model utilizes the limited network bandwidth systematically by giving real-time traffic higher priority than non-real-time traffic to access the limited resource. The experimental results showed that the proposed model helped ensure real-time traffic would be given greater priority to access the limited bandwidth, with the major portion of the limited bandwidth allocated to the real-time traffic.

