A Virtual Network Resource Allocation Framework Based on SR-IOV

2019 ◽  
Vol 9 (1) ◽  
pp. 137
Author(s):  
Zhiyong Ye ◽  
Yuanchang Zhong ◽  
Yingying Wei

Data center workloads are complex and highly variable. In practice, however, resource schedulers rarely exploit the attributes of network workloads, and failing to schedule network resources dynamically in response to workload changes inevitably prevents optimal throughput and performance. There is therefore an urgent need for a scheduling framework that is workload-aware and allocates network resources on demand on top of network I/O virtualization. Current mainstream I/O virtualization methods, however, cannot provide workload awareness while still meeting the performance requirements of virtual machines (VMs). We therefore propose a method that dynamically senses the VM workload to allocate network resources on demand, ensuring VM scalability while improving system performance. We combine the advantages of I/O para-virtualization and SR-IOV, using a limited number of virtual functions (VFs) to guarantee the performance of network-intensive VMs and thereby improve the overall network performance of the system. For non-network-intensive VMs, system scalability is guaranteed by using para-virtualized Network Interface Cards (NICs), which are unlimited in number. Furthermore, to allocate bandwidth matching each VM's network workload, we hierarchically divide the VFs' network bandwidth and switch dynamically between VF and para-virtualized NICs through the active-backup mode of the Linux bonding driver and ACPI hotplug, ensuring dynamic allocation of VFs. Experiments show that the allocation framework effectively improves system network performance: average request delay is reduced by more than 26%, and system throughput improves by about 5%.
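The core allocation policy can be sketched as a small scheduler: a fixed pool of VFs goes to the most network-intensive VMs, and everyone else keeps a para-virtualized NIC. This is a hypothetical illustration (the class, method, and threshold names are invented), not the authors' implementation:

```python
class VfPool:
    """Grant a limited number of SR-IOV VFs to the busiest VMs;
    the rest fall back to para-virtualized NICs (illustrative sketch)."""

    def __init__(self, num_vfs, threshold_mbps):
        self.num_vfs = num_vfs          # hardware limit on VFs
        self.threshold = threshold_mbps  # "network-intensive" cutoff

    def assign(self, workloads):
        """workloads: dict vm_name -> observed throughput (Mbps).
        Returns dict vm_name -> 'vf' or 'paravirt'."""
        # VMs above the threshold compete for VFs, busiest first.
        intensive = sorted(
            (vm for vm, load in workloads.items() if load >= self.threshold),
            key=lambda vm: workloads[vm], reverse=True)
        vf_vms = set(intensive[:self.num_vfs])
        return {vm: ('vf' if vm in vf_vms else 'paravirt')
                for vm in workloads}
```

In the actual framework, switching a VM between the two NIC types would additionally require the bonding-driver failover and hotplug steps the abstract describes; the sketch only captures the assignment decision.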

Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3444 ◽  
Author(s):  
Cheol-Ho Hong ◽  
Kyungwoon Lee ◽  
Minkoo Kang ◽  
Chuck Yoo

Fog computing is a new computing paradigm that employs computation and network resources at the edge of a network to build small clouds, which perform as small data centers. In fog computing, lightweight virtualization (e.g., containers) has been widely used to achieve low overhead for performance-limited fog devices such as WiFi access points (APs) and set-top boxes. Unfortunately, containers are weak at controlling network bandwidth for outbound traffic, which poses a challenge to fog computing. Existing solutions for containers fail to achieve desirable network bandwidth control, causing bandwidth-sensitive applications to suffer unacceptable network performance. In this paper, we propose qCon, a QoS-aware network resource management framework that limits the rate of containers' outbound traffic in fog computing. qCon aims to provide both proportional share scheduling and bandwidth shaping to satisfy various performance demands from containers while remaining a lightweight framework. For this purpose, qCon supports the following three scheduling policies, which can be applied to containers simultaneously: proportional share scheduling, minimum bandwidth reservation, and maximum bandwidth limitation. For a lightweight implementation, qCon builds its own scheduling framework on the Linux bridge by interposing its scheduling interface on the bridge's frame processing function. To show qCon's effectiveness in a real fog computing environment, we implement qCon in a Docker container infrastructure on a performance-limited fog device: a Raspberry Pi 3 Model B board.
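Of the three policies, maximum bandwidth limitation is the easiest to sketch: a per-container token bucket consulted for each outbound frame. The class below is an illustrative stand-in (names and parameters invented), not qCon's actual code:

```python
class TokenBucket:
    """Per-container rate limiter: a frame may be sent only if enough
    byte-tokens have accumulated (tokens refill at the configured rate)."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0     # refill rate in bytes per second
        self.capacity = burst_bytes    # maximum burst size in bytes
        self.tokens = burst_bytes      # bucket starts full
        self.last = 0.0                # timestamp of the previous check

    def allow(self, frame_bytes, now):
        """Return True if a frame of `frame_bytes` may be sent at time `now`."""
        # Refill tokens for the elapsed interval, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= frame_bytes:
            self.tokens -= frame_bytes
            return True
        return False    # frame must be delayed or dropped
```

A bridge-level framework like the one described would invoke such a check inside the frame processing path, one bucket per container interface.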


Game Theory ◽  
2017 ◽  
pp. 383-399
Author(s):  
Sungwook Kim

Computer network bandwidth can be viewed as a limited resource. The users on the network compete for that resource, and their competition can be simulated using game theory models. No centralized regulation of network usage is possible because of the diverse ownership of network resources; the problem is therefore one of ensuring the fair sharing of network resources. If a centralized system could be developed to govern the use of the shared resources, each user would be assigned a network usage time or bandwidth, limiting each person's usage of network resources to his or her fair share. As yet, however, such a system remains an impossibility, turning the sharing of network resources into a competitive game among the network's users, one that decreases everyone's utility. This chapter explores this competitive game.
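The loss of utility can be made concrete with a toy bandwidth game (invented for illustration, not taken from the chapter): each user i picks a rate x_i and receives utility x_i(C − Σx), where C is the link capacity. Iterated best response converges to the Nash equilibrium x_i = C/(n+1), so total usage nC/(n+1) overshoots the socially optimal total of C/2 whenever n ≥ 2:

```python
def best_response_rates(n_users, capacity, rounds=200):
    """Each user i sends at rate x_i and gets utility x_i * (capacity - sum(x)).
    Best response to the others' total S is x_i = (capacity - S) / 2.
    Sequential best-response updates converge to the symmetric Nash
    equilibrium x_i = capacity / (n_users + 1), whose total exceeds the
    socially optimal capacity / 2: the utility loss the chapter describes."""
    x = [0.0] * n_users
    for _ in range(rounds):
        for i in range(n_users):
            others = sum(x) - x[i]
            x[i] = max(0.0, (capacity - others) / 2.0)
    return x
```

For example, with three users on a capacity-4 link, each converges to rate 1, for a total of 3 rather than the socially optimal 2.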



2021 ◽  
Vol 11 (19) ◽  
pp. 9163
Author(s):  
Mateusz Żotkiewicz ◽  
Wiktor Szałyga ◽  
Jaroslaw Domaszewicz ◽  
Andrzej Bąk ◽  
Zbigniew Kopertowski ◽  
...  

The new generation of programmable networks allows mechanisms to be deployed for the efficient control of dynamic bandwidth allocation and for ensuring Quality of Service (QoS), in terms of Key Performance Indicators (KPIs), for delay- or loss-sensitive Internet of Things (IoT) services. To achieve flexible, dynamic and automated network resource management in Software-Defined Networking (SDN), Artificial Intelligence (AI) algorithms can provide an effective solution. In this paper, we propose a solution for network resource allocation in which an AI algorithm is responsible for controlling intent-based routing in SDN. The paper focuses on the problem of optimally switching intents between two designated paths using a Deep Q-Learning approach based on an artificial neural network; the proposed algorithm is the main novelty of this paper. The developed Networked Application Emulation System (NAPES) allows the AI solution to be tested with different traffic patterns to evaluate its performance. The AI algorithm was trained to maximize total network throughput and effective network utilization. The results presented confirm the validity of the applied AI approach to improving network performance in next-generation networks, and the usefulness of the NAPES traffic generator for efficient economical and technical evaluation of IoT networking system deployments.
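A tabular toy version of the path-switching idea can illustrate the mechanics (the paper uses a neural network in place of the Q-table; the states, loads, and reward below are invented for illustration):

```python
import random

def train(episodes=2000, alpha=0.1, eps=0.1, seed=0):
    """Toy Q-learning for intent routing: observe coarse load on two paths,
    choose one (action 0 = path A, 1 = path B), and receive the chosen
    path's residual capacity as reward. Epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = {}  # (state, action) -> estimated value
    for _ in range(episodes):
        load = [rng.randint(0, 9), rng.randint(0, 9)]
        # Bucket each path's load into a coarse two-level state.
        state = tuple('high' if l >= 5 else 'low' for l in load)
        if rng.random() < eps:
            action = rng.randrange(2)                     # explore
        else:
            action = max((0, 1), key=lambda a: q.get((state, a), 0.0))
        reward = 10 - load[action]   # residual capacity of the chosen path
        old = q.get((state, action), 0.0)
        q[(state, action)] = old + alpha * (reward - old)  # one-step update
    return q

q = train()
```

After training, the learned values favor the less-loaded path in each state, which is the behavior the intent-switching controller needs.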


Monitoring the behavior of computer networks is essential for problem identification and optimal management. Part of the behavior to be monitored is the utilization of network bandwidth. Several techniques are used to model and forecast network traffic, including time series models, modern data mining techniques, soft computing approaches, and neural networks. Efficient bandwidth utilization and optimization are important research issues, because bandwidth is one of the most in-demand and expensive Internet components today. It is generally known that the higher the available bandwidth, the better the network performance; traffic models that can capture these characteristics are therefore an essential aid for network design and for controlling bandwidth wastage. In this paper, time series prediction models are proposed for bandwidth utilization on an office LAN. The proposed prediction models are tested using evaluation metrics common in time series analysis, such as MSE and performance evaluation plots. Testing results show that the proposed models can enhance the detection of bandwidth traffic and provide an efficient tool for bandwidth utilization.
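A baseline of the kind such studies evaluate can be sketched with a moving-average forecaster scored by MSE (the usage numbers below are invented example data, not the paper's):

```python
def moving_average_forecast(series, window=3):
    """One-step-ahead baseline: predict each point as the mean of the
    previous `window` observations."""
    preds = []
    for t in range(window, len(series)):
        preds.append(sum(series[t - window:t]) / window)
    return preds

def mse(actual, predicted):
    """Mean squared error between aligned actual and predicted values."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(predicted)

# Hypothetical hourly bandwidth utilization (Mbps) of an office LAN:
usage = [40, 42, 41, 45, 60, 62, 61, 58, 44, 43]
preds = moving_average_forecast(usage, window=3)
error = mse(usage[3:], preds)   # score only the forecast horizon
```

More elaborate models (ARIMA, neural networks) would replace the forecaster while keeping the same MSE-based evaluation loop.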


Author(s):  
Yaser Jararweh ◽  
Mahmoud Al-Ayyoub ◽  
Ahmad Doulat ◽  
Ahmad Al Abed Al Aziz ◽  
Haythem A. Bany Salameh ◽  
...  

Software defined networking (SDN) provides a novel framework that overcomes several challenges of traditional network resource management. Cognitive Radio (CR) technology, in turn, is a promising paradigm for addressing the spectrum scarcity problem through efficient dynamic spectrum access (DSA). In this paper, the authors introduce a virtualization-based SDN resource management framework for cognitive radio networks (CRNs). The framework uses the concept of multilayer hypervisors for efficient resource allocation. It also introduces a semi-decentralized control scheme that allows the CRN Base Station (BS) to delegate some of the management responsibilities to the network users. The main objective of the proposed framework is to reduce CR users' reliance on the CRN BS and on physical network resources while improving network performance by reducing the control overhead.


2016 ◽  
Vol 2016 ◽  
pp. 1-10 ◽  
Author(s):  
Cheol-Ho Hong ◽  
Kyungwoon Lee ◽  
Hyunchan Park ◽  
Chuck Yoo

To meet the various requirements of cloud computing users, research on guaranteeing Quality of Service (QoS) is gaining widespread attention in the field of cloud computing. However, as cloud computing platforms adopt virtualization as an enabling technology, it becomes challenging to distribute system resources to each user according to their diverse requirements. Although ample research has been conducted on meeting QoS requirements, the proposed solutions lack simultaneous support for multiple policies, degrade the aggregated throughput of network resources, and incur CPU overhead. In this paper, we propose a new mechanism, called ANCS (Advanced Network Credit Scheduler), to guarantee QoS through dynamic allocation of network resources in virtualized environments. To meet the various network demands of cloud users, ANCS aims to provide multiple performance policies concurrently: weight-based proportional sharing, minimum bandwidth reservation, and maximum bandwidth limitation. In addition, ANCS develops an efficient work-conserving scheduling method for maximizing network resource utilization. Finally, ANCS achieves low CPU overhead via its lightweight design, which is important for practical deployment.
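The interplay of weights, reservations, and limits can be sketched as a simple water-filling allocator: start every VM at its reserved minimum, then hand out the remainder in proportion to weight, capping each VM at its maximum. This is a hedged illustration of the policy combination, not ANCS's credit mechanism; it assumes positive weights and that the minimums are jointly feasible:

```python
def allocate(total, vms):
    """vms: dict name -> (weight, min_bw, max_bw), bandwidths in Mbps.
    Returns dict name -> allocated bandwidth honoring min/max bounds,
    sharing the surplus in proportion to weight (iterative water-filling)."""
    alloc = {n: v[1] for n, v in vms.items()}      # start at minimums
    remaining = total - sum(alloc.values())        # surplus to distribute
    active = {n for n, v in vms.items() if alloc[n] < v[2]}
    while remaining > 1e-9 and active:
        wsum = sum(vms[n][0] for n in active)
        spent = 0.0
        for n in list(active):
            share = remaining * vms[n][0] / wsum   # weighted fair share
            give = min(share, vms[n][2] - alloc[n])  # respect the cap
            alloc[n] += give
            spent += give
            if alloc[n] >= vms[n][2] - 1e-9:
                active.discard(n)                  # capped VM drops out
        remaining -= spent
        if spent < 1e-9:                           # nothing moved: stop
            break
    return alloc
```

The loop is work-conserving in spirit: bandwidth a capped VM cannot absorb is re-shared among the others instead of being wasted.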


2020 ◽  
Vol 10 (6) ◽  
pp. 1984
Author(s):  
Omran Ayoub ◽  
Davide Andreoletti ◽  
Francesco Musumeci ◽  
Massimo Tornatore ◽  
Achille Pattavina

Network operators must continuously explore new network architectures to satisfy increasing traffic demand driven by bandwidth-hungry services such as video-on-demand (VoD). A promising solution for offloading traffic is to terminate VoD requests locally by deploying caches at the network edge. However, deciding the number of caches to deploy, their locations in the network and their dimensions in terms of storage capacity is not trivial; these choices must be jointly optimized to reduce costs and utilize network resources efficiently. In this paper, we aim to find the optimal deployment of caches in a hierarchical metro network that minimizes the overall network resource occupation for VoD services, in terms of the number of caches deployed across the various network levels, their locations and their dimensions (i.e., storage capacity), under limited storage capacity. We first propose an analytical model that serves as a tool to find the optimal deployment as a function of various parameters, such as the popularity distribution and the location of the metro cache. Then, we present a discrete-event simulator for dynamic VoD provisioning to verify the correctness of the analytical model and to measure the performance of different cache deployment strategies in terms of overall network resource occupation. We prove that, to minimize resource occupation given a fixed budget in terms of storage capacity, storage capacity must be distributed among caches at different layers of the metro network. Moreover, we provide guidelines for the optimal cache deployment strategy when the available storage capacity is limited. We further show how the optimal deployment of caches across the various metro network levels varies depending on the popularity distribution, the metro network topology and the amount of available storage capacity (i.e., the budget invested in terms of storage capacity).
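The role the popularity distribution plays in cache dimensioning can be illustrated with a standard Zipf calculation: the hit ratio of a cache holding the C most popular of N items is simply the share of requests those items attract. The parameters are example values, not the paper's model:

```python
def zipf_hit_ratio(num_contents, cache_size, alpha):
    """Hit ratio of a cache storing the `cache_size` most popular of
    `num_contents` items when request popularity follows Zipf(alpha):
    item at rank r is requested proportionally to 1 / r**alpha."""
    weights = [1.0 / (rank ** alpha) for rank in range(1, num_contents + 1)]
    total = sum(weights)
    # Requests for the cached (most popular) items are hits.
    return sum(weights[:cache_size]) / total
```

Because a larger Zipf exponent concentrates requests on fewer items, the same storage budget yields a higher hit ratio, which is why the optimal placement of capacity across metro levels shifts with the popularity distribution.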


2019 ◽  
Vol 10 (1) ◽  
pp. 78-95 ◽  
Author(s):  
Hindol Bhattacharya ◽  
Samiran Chattopadhyay ◽  
Matangini Chattopadhyay ◽  
Avishek Banerjee

The distributed storage allocation problem is an important optimization problem in reliable distributed storage: it aims to minimize storage cost while maximizing the probability of error recovery through optimal placement of data across distributed storage nodes. A key characteristic of distributed storage is that data are stored in remote servers across a network; network resources, especially communication links, are thus an expensive and non-trivial resource that should be optimized as well. In this article, the authors present a simulation-based study of the network characteristics of a distributed storage network under several allocation patterns. By varying the allocation patterns, the authors demonstrate the interdependence between network bandwidth, defined in terms of link capacity, and allocation pattern, using network throughput as a metric. Motivated by the importance of network resources as a cost metric, the authors formalize an optimization problem that jointly minimizes both the storage cost and the cost of network resources. A hybrid metaheuristic algorithm is employed to solve this optimization problem by allocating data in a distributed storage system. Experimental results validate the efficacy of the algorithm.
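The reliability side of the allocation trade-off can be computed exactly for small systems: enumerate which nodes are reachable and check whether the accessible shares suffice to recover the object. The normalized-budget formulation and all numbers below are illustrative, not the article's exact model:

```python
import itertools

def recovery_probability(allocation, p_access):
    """Exact probability that recovery succeeds when each node is
    independently accessible with probability `p_access` and the object
    is recoverable iff the accessible shares sum to at least 1
    (allocation[i] is the normalized amount stored on node i)."""
    n = len(allocation)
    prob = 0.0
    for mask in itertools.product([0, 1], repeat=n):  # all access outcomes
        weight = 1.0
        total = 0.0
        for avail, share in zip(mask, allocation):
            weight *= p_access if avail else (1 - p_access)
            total += share * avail
        if total >= 1.0 - 1e-12:
            prob += weight
    return prob
```

With a budget of 2 spread over four nodes at 70% availability, spreading evenly ([0.5, 0.5, 0.5, 0.5], needing any two nodes) slightly beats concentrating on two nodes ([1, 1, 0, 0], needing either full copy): roughly 0.916 versus 0.910, which is the kind of allocation-pattern effect the simulations explore.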


2015 ◽  
Vol 17 (2) ◽  
pp. 113-120 ◽  
Author(s):  
Seokmo Gu ◽  
Aria Seo ◽  
Yei-chang Kim

Purpose – The purpose of this paper is to propose a transcoding system based on a virtual machine in a cloud computing environment. There are many studies on transmitting realistic media through a network. Because realistic media data are very large, they are difficult to transmit within current network bandwidth, so a method of compressing the data with a new encoding technique is necessary. The next-generation encoding technique High Efficiency Video Coding (HEVC) can encode video at a high compression rate compared to the existing encoding techniques MPEG-2 and H.264; yet encoding takes at least ten times longer than with those techniques. Design/methodology/approach – This paper attempts to solve this time problem using a virtual machine in a cloud computing environment. Findings – By calculating the transcoding time of the proposed technique, it was found that the time was reduced compared to existing techniques. Originality/value – To this end, this paper proposes transcoding appropriate for the transmission of realistic media by dynamically allocating the resources of the virtual machine.

