Study of System Throughput and Fairness Issues in Cooperative Transmissions

Author(s):  
Sung-Yeon Kim ◽  
Jang-Won Lee


Algorithms ◽
2021 ◽  
Vol 14 (3) ◽  
pp. 80
Author(s):  
Qiuqi Han ◽  
Guangyuan Zheng ◽  
Chen Xu

Device-to-Device (D2D) communications, which enable direct communication between nearby user devices over the licensed spectrum, have been considered a key technique for improving spectral efficiency and system throughput in cellular networks (CNs). However, the limited licensed spectrum resources are insufficient to support the growing numbers of cellular users (CUs) and D2D users as traffic demand rises in future wireless networks. Therefore, Long-Term Evolution-Unlicensed (LTE-U) and D2D-Unlicensed (D2D-U) technologies have been proposed to further enhance system capacity by allowing CUs and D2D users to communicate on the unlicensed spectrum. In this paper, we consider an LTE network where the CUs and D2D users are allowed to share the unlicensed spectrum with Wi-Fi users. To maximize the sum rate of all users while guaranteeing each user’s quality of service (QoS), we jointly consider user access and resource allocation. To tackle the formulated problem, we propose a matching-iteration-based joint user access and resource allocation algorithm. Simulation results show that the proposed algorithm significantly improves system throughput compared to the other benchmark algorithms.
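The abstract does not spell out the matching-iteration algorithm itself; as a hedged illustration of the underlying assignment problem, the sketch below greedily pairs users with channels to maximize sum rate under a per-user QoS floor. The rate matrix, QoS thresholds, and the greedy rule are all illustrative assumptions, not the authors' method.

```python
# Greedy one-to-one user/channel assignment maximizing sum rate under a
# per-user QoS floor. Toy stand-in for the paper's matching-based method;
# the rate matrix and QoS thresholds below are illustrative assumptions.

def greedy_assign(rates, qos_min):
    """rates[u][c]: achievable rate of user u on channel c (Mbps).
    Returns {user: channel} covering only pairs that meet the QoS floor."""
    pairs = sorted(
        ((r, u, c) for u, row in enumerate(rates) for c, r in enumerate(row)),
        reverse=True,
    )
    assign, used_users, used_chans = {}, set(), set()
    for r, u, c in pairs:
        if r < qos_min[u]:
            continue  # would violate user u's QoS guarantee
        if u in used_users or c in used_chans:
            continue  # one channel per user, one user per channel
        assign[u] = c
        used_users.add(u)
        used_chans.add(c)
    return assign

rates = [[5.0, 2.0, 1.0],
         [4.0, 4.5, 0.5],
         [1.0, 3.0, 3.5]]
qos_min = [2.0, 2.0, 2.0]
match = greedy_assign(rates, qos_min)
sum_rate = sum(rates[u][c] for u, c in match.items())
print(match, sum_rate)
```

A greedy pass like this is only a baseline; the paper's matching-iteration approach would additionally re-match users across iterations to escape locally poor assignments.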


2021 ◽  
Vol 48 (3) ◽  
pp. 128-129
Author(s):  
Sounak Kar ◽  
Robin Rehrmann ◽  
Arpan Mukhopadhyay ◽  
Bastian Alt ◽  
Florin Ciucu ◽  
...  

We analyze a data-processing system with n clients producing jobs which are processed in batches by m parallel servers; the system throughput critically depends on the batch size and a corresponding sub-additive speedup function that arises due to overhead amortization. In practice, throughput optimization relies on numerical searches for the optimal batch size, which is computationally cumbersome. In this paper, we model this system in terms of a closed queueing network assuming certain forms of service speedup; a standard Markovian analysis yields the optimal throughput in ω(n⁴) time. Our main contribution is a mean-field model that has a unique, globally attractive stationary point, derivable in closed form. This point characterizes the asymptotic throughput as a function of the batch size, which can be calculated in O(1) time. Numerical settings from a large commercial system reveal that this asymptotic optimum is accurate in practical finite regimes.


Author(s):  
Umar Ibrahim Minhas ◽  
Roger Woods ◽  
Georgios Karakonstantis

Abstract Whilst FPGAs have been used in cloud ecosystems, it remains extremely challenging to achieve high compute density when mapping heterogeneous multi-tasks onto shared resources at runtime. This work addresses this by treating the FPGA resource as a service and employing multi-task processing at the high level, design space exploration, and static off-line partitioning in order to allow more efficient mapping of heterogeneous tasks onto the FPGA. In addition, a new, comprehensive runtime functional simulator is used to evaluate the effect of various spatial and temporal constraints on both the existing and new approaches when varying system design parameters. A comprehensive suite of real high-performance computing tasks was implemented on a Nallatech 385 FPGA card; the results show that our approach can provide on average 2.9× and 2.3× higher system throughput for compute- and mixed-intensity tasks, but 0.2× lower throughput for memory-intensive tasks due to external memory access latency and bandwidth limitations. The work has been extended by introducing a novel scheduling scheme to enhance temporal utilization of resources when using the proposed approach. Additional results for large queues of mixed-intensity tasks (compute and memory) show that the proposed partitioning and scheduling approach can provide more than 3× system speedup over previous schemes.


2021 ◽  
Vol 48 (4) ◽  
pp. 3-3
Author(s):  
Ingo Weber

Blockchain is a novel distributed ledger technology. Through its features and smart contract capabilities, a wide range of application areas has opened up for blockchain-based innovation [5]. In order to analyse how concrete blockchain systems as well as blockchain applications are used, data must be extracted from these systems. Due to various complexities inherent in blockchain, the question of how to interpret such data is non-trivial. Such interpretation should often be shared among parties, e.g., if they collaborate via a blockchain. To this end, we devised an approach to codify the interpretation of blockchain data, to extract data from blockchains accordingly, and to output it in suitable formats [1, 2]. This work will be the main topic of the keynote. In addition, application developers and users of blockchain applications may want to estimate the cost of using or operating a blockchain application. In the keynote, I will also discuss our cost estimation method [3, 4]. This method was designed for the Ethereum blockchain platform, where cost also relates to transaction complexity, and therefore also to system throughput.


2021 ◽  
Vol 13 (3) ◽  
pp. 78
Author(s):  
Chuanhong Li ◽  
Lei Song ◽  
Xuewen Zeng

The continuous increase in network traffic has sharply increased the demand for high-performance packet processing systems. For a high-performance packet processing system based on multi-core processors, the packet scheduling algorithm is critical because of the significant role it plays in load distribution, which directly affects system throughput, and it has therefore attracted intensive research attention. However, this is not an easy task, since the canonical flow-level packet scheduling algorithm is vulnerable to traffic locality, while the packet-level packet scheduling algorithm fails to maintain cache affinity. In this paper, we propose an adaptive throughput-first packet scheduling algorithm for DPDK-based packet processing systems. Exploiting DPDK's burst-oriented packet receiving and transmitting, we propose using the Subflow as both the scheduling unit and the adjustment unit, so that the proposed algorithm not only retains the advantages of flow-level packet scheduling algorithms when no adjustment occurs but also avoids packet loss as much as possible when the target core may be overloaded. Experimental results show that the proposed method outperforms Round-Robin, HRW (Highest Random Weight), and CRC32 in system throughput and packet loss rate.
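Two of the benchmark schedulers mentioned above are easy to sketch. The snippet below (an illustrative sketch, not the paper's Subflow algorithm) contrasts a plain CRC32-modulo flow-to-core mapping with Highest Random Weight hashing, whose main virtue is that changing the core count only remaps the flows owned by the affected core.

```python
import zlib

def crc32_core(flow_key, n_cores):
    # plain CRC32-modulo mapping: cheap, but most flows move
    # whenever n_cores changes
    return zlib.crc32(flow_key) % n_cores

def hrw_core(flow_key, n_cores):
    # Highest Random Weight: every core scores the flow; the top scorer
    # wins. Removing a core only remaps the flows that scored it highest.
    return max(range(n_cores),
               key=lambda c: zlib.crc32(flow_key + c.to_bytes(4, "big")))

# Illustrative synthetic flow keys (5-tuple stand-ins)
flows = [f"10.0.0.{i}:5000".encode() for i in range(1000)]
moved = sum(hrw_core(f, 4) != hrw_core(f, 3) for f in flows)
print(moved / len(flows))  # only roughly 1/4 of flows remap under HRW
```

Neither mapping tracks per-core load, which is why the paper's adaptive Subflow-level adjustment can outperform both when traffic locality skews the load.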


Author(s):  
XIANGBIN YU ◽  
GUANGGUO BI

Space-time block (STB) coding has recently become an effective transmit diversity technique for combating fading. In this paper, a full-rate, low-complexity STB coding scheme with a complex orthogonal design for multiple antennas is proposed, and a turbo code is employed as the channel code to further improve the performance of the proposed scheme. Compared with full-diversity multiple-antenna STB coding schemes, the proposed scheme achieves the full data rate with partial diversity at lower complexity, and it carries more spatial redundancy information. Moreover, the proposed scheme forms efficient spatial interleaving, so the performance loss due to partial diversity is effectively compensated by the concatenation with turbo coding. Simulation results show that, at the same system throughput and with the same turbo-code concatenation, the proposed scheme achieves a lower bit error rate (BER) than low-rate, full-diversity multiple-antenna STB coding schemes.
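Complex orthogonal designs of the kind the abstract builds on trace back to the classic 2×1 Alamouti code. The sketch below shows that canonical code (not the paper's full-rate partial-diversity scheme) with illustrative channel gains, assumed constant over the two symbol periods and noiseless for clarity.

```python
# Classic 2x1 Alamouti space-time block code, the canonical complex
# orthogonal design, sketched with plain Python complex numbers.
# Channel gains h1, h2 are illustrative assumptions.

def alamouti_tx(s1, s2):
    # slot 1: antennas send (s1, s2); slot 2: antennas send (-s2*, s1*)
    return (s1, s2), (-s2.conjugate(), s1.conjugate())

def alamouti_rx(r1, r2, h1, h2):
    # linear combining yields (|h1|^2 + |h2|^2)-scaled symbol estimates,
    # which orthogonality makes exact in the noiseless case
    g = abs(h1) ** 2 + abs(h2) ** 2
    s1_hat = (h1.conjugate() * r1 + h2 * r2.conjugate()) / g
    s2_hat = (h2.conjugate() * r1 - h1 * r2.conjugate()) / g
    return s1_hat, s2_hat

h1, h2 = 0.8 + 0.3j, -0.2 + 0.9j   # flat-fading channel gains (assumed)
s1, s2 = 1 + 1j, -1 + 1j           # two QPSK symbols
(t11, t12), (t21, t22) = alamouti_tx(s1, s2)
r1 = h1 * t11 + h2 * t12           # noiseless receive, slot 1
r2 = h1 * t21 + h2 * t22           # noiseless receive, slot 2
print(alamouti_rx(r1, r2, h1, h2))
```

The combining step recovers (s1, s2) exactly here because the design is orthogonal; the paper's scheme instead trades some of this diversity for full rate and recovers the loss through turbo-code concatenation.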


2015 ◽  
Vol 6 (4) ◽  
pp. 60-69 ◽  
Author(s):  
Sławomir Kłos ◽  
Peter Trebuna

Abstract This paper proposes the application of computer simulation methods to support decision making regarding intermediate buffer allocation in a series-parallel production line. The simulation model of the production system is based on a real example of a manufacturing company working in the automotive industry. Simulation experiments were conducted for different allocations of buffer capacities and different numbers of employees. The production system consists of three technological operations with intermediate buffers between each operation. The technological operations are carried out using machines, and every machine can be operated by one worker. Multi-machine work is possible in the production system (one operator can operate several machines). On the basis of the simulation experiments, the relationship between system throughput, buffer allocation, and the number of employees is analyzed. Increasing the buffer capacity results in an increase in the average product life span. Therefore, a new index is proposed in the article that combines the throughput of the manufacturing system with product life span. Simulation experiments were performed for different configurations of technological operations.
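As a minimal illustration of the kind of experiment described above, the following time-stepped sketch simulates a three-machine serial line with finite intermediate buffers and counts completed parts. Processing times and buffer capacities are illustrative assumptions, not the company's data, and the deterministic model is far simpler than a commercial simulation tool.

```python
def simulate_line(proc_times, buffer_caps, horizon):
    """Deterministic serial line with finite intermediate buffers.
    proc_times: steps each machine needs per part; buffer_caps: capacities
    of the intermediate buffers. Returns parts finished within `horizon`."""
    n = len(proc_times)
    remaining = [0] * n      # steps left on the part in each machine (0 = empty)
    blocked = [False] * n    # finished part waiting for downstream space
    buffers = [0] * (n - 1)  # current fill of intermediate buffers
    done = 0
    for _ in range(horizon):
        # unload finished parts, downstream machine first so space frees up
        for i in reversed(range(n)):
            if remaining[i] == 0 and blocked[i]:
                if i == n - 1:
                    done += 1
                    blocked[i] = False
                elif buffers[i] < buffer_caps[i]:
                    buffers[i] += 1
                    blocked[i] = False
        # load idle machines; the first machine has unlimited raw material
        for i in range(n):
            if remaining[i] == 0 and not blocked[i]:
                if i == 0 or buffers[i - 1] > 0:
                    if i > 0:
                        buffers[i - 1] -= 1
                    remaining[i] = proc_times[i]
        # advance one time step; a machine that finishes holds its part
        for i in range(n):
            if remaining[i] > 0:
                remaining[i] -= 1
                if remaining[i] == 0:
                    blocked[i] = True
    return done

# Illustrative run: the middle machine (3 steps/part) is the bottleneck,
# so long-run throughput approaches one part per 3 steps.
print(simulate_line([2, 3, 2], [2, 2], 3000))
```

Sweeping `buffer_caps` and comparing `done` across runs mirrors, in miniature, the buffer-allocation experiments the paper reports; note that larger buffers also lengthen the time a part spends in the system, which motivates the paper's combined throughput/life-span index.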


2021 ◽  
Author(s):  
Brett Alan Hathaway ◽  
Seyed Morteza Emadi ◽  
Vinayak Deshpande

To increase revenue or improve customer service, companies are increasingly personalizing their product or service offerings based on their customers' history of interactions. In this paper, we show how call centers can improve customer service by implementing personalized priority policies. Under personalized priority policies, managers use customer contact history to predict individual-level caller abandonment and redialing behavior and prioritize callers based on these predictions to improve operational performance. We provide a framework for how companies can use individual-level customer history data to capture the idiosyncratic preferences and beliefs that drive caller abandonment and redialing behavior, and we quantify the operational improvements of these policies by applying our framework to caller history data from a real-world call center. We achieve this by formulating a structural model that uses a Bayesian learning framework to capture how callers' past waiting times and abandonment/redialing decisions affect their current abandonment and redialing behavior, and we use our data to impute the callers' underlying primitives, such as their rewards for service, waiting costs, and redialing costs. These primitives allow us to simulate caller behavior under a variety of personalized priority policies and, hence, to collect relevant operational performance measures. We find that, relative to the first-come, first-served policy, our proposed personalized priority policies have the potential to decrease average waiting times by up to 29% or to increase system throughput by reducing the percentage of service requests lost to abandonment by up to 6.3%. This paper was accepted by Vishal Gaur, operations management.


Author(s):  
Jiashen Li ◽  
◽  
Yun Pan ◽  

The improvement of chip integration has increased the power density of system chips, which leads to overheating. When scheduling for power density, some working modules are selectively shut down so that not all modules on the chip are active at the same time, mitigating overheating. Taking the 2D mesh network-on-chip as the research object, an optimal power-density scheduling algorithm for systems-on-chip based on the network-on-chip (NoC) is proposed. Under the thermal design power (TDP) and system constraints, a bottom-up dynamic programming algorithm solves for the throughput-optimal allocation, choosing the number of processors and the frequency level for each application in a given application set on the NoC. On this basis, a simulated annealing algorithm completes the application mapping with respect to heat dissipation and communication delay, determining which processors are switched on or off. After the layout is obtained, the TDP is adjusted: the maximum feasible TDP constraint is found iteratively through a feedback loop driven by the system's hot spots, and the power-density scheduling performance is maximized under this constraint, protecting the system cores while sustaining chip throughput and effectively avoiding overheating. The experimental results show that the proposed algorithm increases system chip throughput by about 11%, reduces throughput loss, and achieves a balance between chip power consumption and scheduling time.
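The bottom-up throughput allocation described above has the shape of a multiple-choice knapsack: each application picks exactly one processor-count/frequency configuration, and total power must stay within the TDP budget. The sketch below is a hedged stand-in with illustrative power and throughput numbers, not the paper's algorithm.

```python
# Multiple-choice knapsack sketch of the bottom-up DP described above:
# each application picks one (power, throughput) configuration, and the
# chosen configurations' total power must not exceed the TDP budget.
# All numbers below are illustrative assumptions.

def allocate(apps, tdp):
    """apps: one list of (power, throughput) configs per application.
    Returns (best total throughput, chosen config index per app)."""
    NEG = float("-inf")
    best = [NEG] * (tdp + 1)   # best[w]: max throughput using exactly power w
    best[0] = 0.0
    choice = [[None] * (tdp + 1) for _ in apps]
    for a, configs in enumerate(apps):
        new = [NEG] * (tdp + 1)
        for budget in range(tdp + 1):
            for k, (p, t) in enumerate(configs):
                if p <= budget and best[budget - p] > NEG:
                    cand = best[budget - p] + t
                    if cand > new[budget]:
                        new[budget] = cand
                        choice[a][budget] = (k, budget - p)
        best = new
    # every app must run: pick the best reachable power level, backtrack
    budget = max(range(tdp + 1), key=lambda w: best[w])
    picks, total = [None] * len(apps), best[budget]
    for a in reversed(range(len(apps))):
        k, budget = choice[a][budget]
        picks[a] = k
    return total, picks

apps = [
    [(3, 2.0), (5, 3.5), (8, 4.0)],  # app 0: (power units, throughput)
    [(2, 1.0), (4, 2.5)],            # app 1
    [(4, 3.0), (6, 3.8)],            # app 2
]
print(allocate(apps, tdp=12))
```

The paper layers a simulated annealing mapping stage and a hot-spot-driven TDP feedback loop on top of a DP of roughly this form; this sketch covers only the allocation step.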

