An Empirical Evaluation of Design Abstraction and Performance of Thrust Framework

Author(s):  
Ajai V. George ◽  
Sankar Manoj ◽  
Sanket Rajan Gupte ◽  
Santonu Sarkar

Author(s):  
Latif Al-Hakim ◽  
Melissa Johnson Morgan ◽  
Roberta Chau

This study investigates cross-border collaboration between beef organisations in Australia and Singapore. It aims to identify the factors affecting trust and technology diffusion by gauging the gaps between the expected importance and the perceived performance ratings of those factors. The research presents the results of a survey of 69 beef organisations from Australia and Singapore, and identifies critical gaps using two methods of analysis: validity analysis and performance gap analysis. Each method comprises two types of tests; the WarpPLS software is used to perform the validity analysis. Results indicate gaps in the level of responsiveness. The research concludes that successful cross-border collaboration between organisations in Australia and Singapore is better achieved by establishing information exchange relationships, rather than through the use of technology alone, and by ensuring compatibility between business partners.
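The performance gap analysis described above can be sketched as follows; the factor names, rating values, and criticality threshold here are hypothetical illustrations, not the survey's actual data or scales:

```python
# Illustrative performance gap analysis: gap = expected importance minus
# perceived performance; a factor is "critical" when its gap exceeds a
# threshold. All numbers below are made up for illustration.
factors = {
    # factor: (expected importance, perceived performance), e.g. on a 1-7 scale
    "responsiveness": (6.4, 4.1),
    "information exchange": (6.0, 5.2),
    "technology use": (5.1, 5.0),
}

def gap_analysis(ratings, threshold=1.0):
    """Return all gaps, plus the factors whose gap exceeds the threshold."""
    gaps = {name: imp - perf for name, (imp, perf) in ratings.items()}
    critical = {name: g for name, g in gaps.items() if g > threshold}
    return gaps, critical

gaps, critical = gap_analysis(factors)
```

With these illustrative numbers, only responsiveness would be flagged as a critical gap, mirroring the paper's finding that responsiveness is where expectations and perceived performance diverge most.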


Author(s):  
T. Ooya ◽  
H. Yamada ◽  
T. Ishimori ◽  
Y. Shibata ◽  
Y. Osana ◽  
...  

2021 ◽  
Author(s):  
Muhammad Zakarya ◽  
Lee Gillam ◽  
Khaled Salah ◽  
Omer F. Rana ◽  
Santosh Tirunagari ◽  
...  

In many production clouds, with the notable exception of Google, aggregation-based VM placement policies are used to provision datacenter resources in an energy- and performance-efficient manner. However, if VMs with similar workloads are placed onto the same machines, they may suffer from contention, particularly when they compete for the same resources. High levels of resource contention can degrade VM performance and therefore potentially increase users' costs and the infrastructure's energy consumption. Conversely, segregation-based methods result in stranded resources and, therefore, worse economics. The recent industrial interest in segregating workloads opens new directions for research. In this paper, we demonstrate how aggregation- and segregation-based VM placement policies lead to variability in energy efficiency, workload performance, and users' costs. We then propose several approaches to aggregation-based placement and migration. Through a number of experiments using Microsoft Azure and Google workload traces covering more than twelve thousand hosts and a million VMs, we investigate the impact of placement decisions on energy, performance, and costs. Our extensive simulations and empirical evaluation demonstrate that, for certain workloads, aggregation-based allocation and consolidation is approximately 9.61% more energy efficient and 20.0% more performance efficient than segregation-based policies. Moreover, aggregation metrics such as runtimes and workload types produce variations in energy consumption and performance, and therefore in users' costs.
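The aggregation/segregation trade-off can be sketched with a toy placement routine; the host and VM model below is hypothetical and is not the paper's simulator or the Azure/Google trace format:

```python
# Minimal sketch of aggregation- vs. segregation-based VM placement.
# Hosts have a fixed slot capacity; VMs are (id, workload_type) pairs.
def place(vms, hosts, capacity, segregate):
    """Assign each VM to a host; segregation avoids co-locating same-type VMs."""
    placement = {h: [] for h in hosts}
    for vm_id, workload in vms:
        candidates = [h for h in hosts if len(placement[h]) < capacity]
        if segregate:
            # Prefer hosts with no VM of the same workload type, to avoid
            # contention between VMs competing for similar resources.
            preferred = [h for h in candidates
                         if all(w != workload for _, w in placement[h])]
            candidates = preferred or candidates
        # Consolidate onto the most-loaded feasible host, so fewer hosts
        # stay active (the energy-efficiency rationale for aggregation).
        host = max(candidates, key=lambda h: len(placement[h]))
        placement[host].append((vm_id, workload))
    return placement

vms = [("v1", "cpu"), ("v2", "cpu"), ("v3", "io"), ("v4", "io")]
hosts = ["h1", "h2", "h3"]
agg = place(vms, hosts, capacity=2, segregate=False)
seg = place(vms, hosts, capacity=2, segregate=True)
```

In the aggregation run, same-type VMs end up co-located (the contention risk the paper measures); in the segregation run, every host holds at most one VM of each workload type.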


Author(s):  
UMUT A. ACAR ◽  
ARTHUR CHARGUÉRAUD ◽  
MIKE RAINEY

A classic problem in parallel computing is determining whether to execute a thread in parallel or sequentially. If small threads are executed in parallel, the overheads of thread creation can overwhelm the benefits of parallelism, resulting in suboptimal efficiency and performance. If large threads are executed sequentially, processors may sit idle, again resulting in suboptimal efficiency and performance. This "granularity problem" is especially important in implicitly parallel languages, where the programmer expresses all potential parallelism, leaving it to the system to exploit it by creating threads as necessary. Although the problem has been identified as important, it is not well understood, and broadly applicable solutions remain elusive. In this paper, we propose techniques for automatically controlling granularity in implicitly parallel programming languages to achieve parallel efficiency and performance. To this end, we first extend a classic result, Brent's theorem (a.k.a. the work-time principle), to include thread-creation overheads. Using a cost semantics for a general-purpose language in the style of the lambda calculus with parallel tuples, we then present a precise accounting of thread-creation overheads and bound their impact on efficiency and performance. To reduce these overheads, we propose an oracle-guided semantics that uses estimates of the sizes of parallel threads. We show that, if the oracle provides accurate estimates in constant time, the oracle-guided semantics reduces the thread-creation overheads for a reasonably large class of parallel computations. We describe how to approximate the oracle-guided semantics in practice by combining static and dynamic techniques: we require the programmer to provide the asymptotic complexity cost of each parallel thread and use runtime profiling to determine hardware-specific constant factors. 
We present an implementation of the proposed approach as an extension of the Manticore compiler for Parallel ML. Our empirical evaluation shows that our techniques can reduce thread-creation overheads, leading to good efficiency and performance.
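The oracle-guided idea sketched above (predict a task's cost from a programmer-supplied asymptotic complexity times a profiled constant, and fork only above a grain size) can be illustrated as follows. This is not the Manticore/Parallel ML implementation; the constants are invented, and Python threads only demonstrate the control structure (the GIL prevents real CPU parallelism):

```python
import concurrent.futures

# Hypothetical oracle-guided granularity control for a parallel sum.
GRAIN_US = 50.0          # spawn a thread only above this predicted cost
COST_PER_ITEM_US = 0.5   # hardware-specific constant; profiled at runtime in practice

def predicted_cost_us(n):
    # The programmer supplies the asymptotic complexity; here the task is O(n).
    return COST_PER_ITEM_US * n

def psum(xs, pool):
    """Sum xs, forking a parallel branch only when the oracle predicts it pays off."""
    if predicted_cost_us(len(xs)) <= GRAIN_US:
        return sum(xs)  # too small: thread-creation overhead would dominate
    mid = len(xs) // 2
    left = pool.submit(psum, xs[:mid], pool)  # parallel branch
    right = psum(xs[mid:], pool)              # other half computed inline
    return left.result() + right

with concurrent.futures.ThreadPoolExecutor(max_workers=16) as pool:
    total = psum(list(range(1000)), pool)
```

Subtrees whose predicted cost falls below the grain run sequentially, so thread creation is confined to the few large splits near the root of the recursion.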


Electronics ◽  
2021 ◽  
Vol 10 (16) ◽  
pp. 1977
Author(s):  
Guangyu Zhu ◽  
Jaehyun Han ◽  
Sangjin Lee ◽  
Yongseok Son

The emergence of non-volatile memory (NVM) brings new opportunities and challenges to data management system design. As an important part of data management systems, several new file systems have been developed to take advantage of the characteristics of NVM. However, these NVM-aware file systems are usually designed and evaluated using simulations or emulations. To explore the performance and characteristics of these file systems on real hardware, in this article we provide an empirical evaluation of NVM-aware file systems on the first commercially available byte-addressable NVM, the Intel Optane DC Persistent Memory Module (DCPMM). First, to compare traditional file systems with NVM-aware file systems, we evaluate the performance of Ext4, XFS, F2FS, Ext4-DAX, XFS-DAX, and NOVA on DCPMMs. To compare DCPMMs with other secondary storage devices, we also run the same evaluations on Optane SSDs and NAND-flash SSDs. Second, we observe how remote NUMA-node access and device-mapper striping affect the performance of DCPMMs. Finally, we evaluate the performance of a database (MySQL) on DCPMMs with the Ext4 and Ext4-DAX file systems. We summarize several observations from the evaluation results and performance analysis, and anticipate that these observations will provide implications for various memory and storage systems.
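A microbenchmark in the spirit of this evaluation can be sketched as below: time small synchronous appends on a given mount point and compare across devices. The mount paths, block size, and iteration count are illustrative assumptions, not the paper's actual workload or configuration:

```python
import os
import tempfile
import time

# Hypothetical write-latency microbenchmark for comparing mount points
# (e.g. an Ext4-DAX or NOVA mount backed by a DCPMM vs. an NVMe SSD mount).
def write_latency_us(mount_point, block=4096, iters=100):
    """Average latency, in microseconds, of `iters` fsync'd 4 KiB appends."""
    buf = os.urandom(block)
    fd, name = tempfile.mkstemp(dir=mount_point)
    try:
        start = time.perf_counter()
        for _ in range(iters):
            os.write(fd, buf)
            os.fsync(fd)  # force each write through to the device
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
        os.unlink(name)
    return elapsed / iters * 1e6

# e.g. compare write_latency_us("/mnt/ext4-dax") with write_latency_us("/mnt/nvme")
lat = write_latency_us(tempfile.gettempdir())
```

Running the same routine against DAX and non-DAX mounts of the same file system is one simple way to expose the page-cache bypass that DAX provides.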


Liberalization and deregulation have attracted many overseas banks to India, establishing financial supermarkets, new products, and new delivery channels in the banking industry. The banking sector plays an essential and vital role in the development of the Indian economy. The use of technology has increased penetration, productivity, and efficiency; it has not only improved cost effectiveness but has also made small-value transactions feasible. It also enhances choice, creates new markets, and improves productivity and efficiency. Financial markets in India have turned into buyer's markets, and commercial banks are now becoming one-stop supermarkets: the focus is shifting from mass banking to class banking with the introduction of value-added and customised products. Technology allows banks to create what looks like a branch in a business building's lobby without having to hire staff for manual operations. Branches run on a 24x7 basis, made feasible by telebanking, ATMs, Internet banking, mobile banking, and e-banking. These technology-driven delivery channels are used to reach the greatest number of customers at lower cost and in the most efficient manner. The beauty of these banking innovations is that they put both banker and customer in a win-win situation. Effective use of technology has a multiplier effect on growth and development. Given that technological improvements in the banking sector in industrialized nations have been shown to increase the productivity of this industry around the world, why did India shy away from adopting this technology until the 1990s?
Why has India been a late adopter of technology in the banking industry, when it could have reaped the benefits of the existing R&D knowledge developed by innovators and early adopters? This article charts the path of technological innovation in the Indian banking industry after economic liberalization (1991-92) and identifies initial conditions, in terms of competitive environment and regulatory pressures, that have contributed to the diffusion of these innovations. The article highlights the role of labour unions in public sector banks and their initial opposition to technological adoption. The empirical evaluation demonstrates the superior performance of the early adopters of technology (private sector and foreign banks), as measured by productivity, returns on equity, and market share, compared to the late or passive adopters (public sector banks).


Author(s):  
Kenneth M. Alvares ◽  
Charles L. Hulin

Two models that explain both temporal changes in behavior during training and temporal decreases in correlations between ability measures and performance measures are presented. It is argued that both phenomena depend on the same process and that each of the models presented adequately accounts for the experimental data. The changing-task model, originally proposed by Woodrow and later elaborated by Fleishman, assumes that the abilities contributing to task performance change systematically over time. A second model, the changing-subject model, assumes that practice on the criterion task systematically and significantly affects the ability levels of the subjects. The changed conception of the ability-skill distinction necessitated by the second model is discussed. The psychological and organizational implications of the two models are considered, and the near impossibility of an empirical evaluation is pointed out.

