EEffSim: A discrete event simulator for energy efficiency in large-scale storage systems

Author(s): Ramya Prabhakar, Erik Kruus, Guanlin Lu, Cristian Ungureanu
2021, Vol 13 (4), pp. 83
Author(s): Malika Bendechache, Sergej Svorobej, Patricia Takako Endo, Adrian Mihai, Theo Lynn

Simulation has become an indispensable technique for modelling and evaluating the performance of large-scale systems efficiently and at relatively low cost. Elasticsearch (ES) is one of the most popular open-source large-scale distributed data indexing systems worldwide. In this paper, we use the RECAP Discrete Event Simulator (DES), an extension of CloudSimPlus, to model and evaluate the performance of a real-world cloud-based ES deployment by an Irish small and medium-sized enterprise (SME), Opening.io. Following simulation experiments that explored how much query traffic the existing Opening.io architecture could cater for before performance degraded, a revised architecture was proposed, adding a new virtual machine to resolve the bottleneck. The simulation results suggest that the proposed architecture can handle significantly more query traffic (about 71% more) than the current architecture used by Opening.io. The results also suggest that the RECAP DES is suitable for simulating ES systems and can help companies understand their infrastructure bottlenecks under various traffic scenarios, informing optimisation and scalability decisions.
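The bottleneck-and-extra-VM idea in this abstract can be illustrated with a minimal event-driven sketch. This is not the RECAP DES or the Opening.io model; the arrival rate, service time, and FIFO dispatch policy are all illustrative assumptions chosen to show how one additional VM can dissolve a saturation point.

```python
def simulate(num_queries, interarrival, service_time, num_vms):
    """Return the mean per-query waiting time: queries arrive at a fixed
    interval and are served FIFO by whichever VM becomes free earliest."""
    free_at = [0.0] * num_vms          # time at which each VM next becomes idle
    total_wait = 0.0
    for i in range(num_queries):
        arrival = i * interarrival
        vm = min(range(num_vms), key=lambda v: free_at[v])  # earliest-free VM
        start = max(arrival, free_at[vm])
        total_wait += start - arrival
        free_at[vm] = start + service_time
    return total_wait / num_queries

# With a query arriving every 1.0 s and 2.5 s of service per query, two VMs
# cannot keep up (waits grow as the queue backs up), while a third VM gives
# each VM a query only every 3.0 s and the queue never forms.
congested = simulate(1000, 1.0, 2.5, 2)
relieved = simulate(1000, 1.0, 2.5, 3)
```

The same pattern — sweep offered load against cluster size and watch where waiting time departs from zero — is what the full simulator does at far greater fidelity.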


Author(s): Mark Endrei, Chao Jin, Minh Ngoc Dinh, David Abramson, Heidi Poxon, ...

Rising power costs and constraints are driving a growing focus on the energy efficiency of high performance computing systems. The unique characteristics of a particular system and workload, and their effect on performance and energy efficiency, are typically difficult for application users to assess and control. Settings for optimum performance and energy efficiency can also diverge, so we need to identify trade-off options that guide a suitable balance between energy use and performance. We present statistical and machine learning models that require only a small number of runs to make accurate Pareto-optimal trade-off predictions using parameters that users can control. We study model training and validation using several parallel kernels and more complex workloads, including Algebraic Multigrid (AMG), the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), and Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics (LULESH). We demonstrate that we can train the models using as few as 12 runs, with prediction error of less than 10%. Our AMG results identify trade-off options that provide up to 45% improvement in energy efficiency for around 10% performance loss. We reduce the sample measurement time required for AMG by 90%, from 13 h to 74 min.
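The Pareto-optimal trade-off notion used above can be sketched independently of the paper's models: once (energy, runtime) pairs have been measured or predicted for candidate configurations, the trade-off options are exactly the non-dominated points. A minimal sketch, with hypothetical measurements:

```python
def pareto_front(points):
    """Return the non-dominated (energy, runtime) points: a point is
    dominated if some other point is no worse on both objectives
    (and is a different point)."""
    front = []
    for p in points:
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical (energy in J, runtime in s) measurements for five configs;
# (9, 9) and (11, 7) are both dominated by (8, 6), so three options remain.
configs = [(10.0, 5.0), (8.0, 6.0), (12.0, 4.0), (9.0, 9.0), (11.0, 7.0)]
trade_offs = pareto_front(configs)
```

Each surviving point represents a distinct balance, e.g. spending 20% more energy to cut runtime, which is the kind of option the paper's models predict without exhaustive runs.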


Energies, 2021, Vol 14 (11), pp. 3296
Author(s): Carlos García-Santacruz, Luis Galván, Juan M. Carrasco, Eduardo Galván

Energy storage systems are expected to play a fundamental part in integrating increasing renewable energy sources into the electric system. They are already used in power plants for different purposes, such as absorbing the effect of intermittent energy sources or providing ancillary services. For this reason, it is imperative to research management and sizing methods that make power plants with storage viable and profitable projects. In this paper, a management method is presented in which particle swarm optimisation is used to reach maximum profits. This method is compared to expert systems, showing that the former achieves better results while respecting similar rules. The paper further presents a sizing method that uses the management method to make the power plant as profitable as possible. Finally, both methods are tested through simulations to show their potential.
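The particle swarm optimisation mechanics behind such a management method are easy to sketch. Below is a generic one-dimensional PSO maximiser; the toy concave profit curve, bounds, and coefficient values (inertia 0.7, cognitive/social pulls 1.5) are illustrative assumptions standing in for the paper's actual plant model:

```python
import random

def pso_maximise(profit, lo, hi, n_particles=20, iters=100, seed=1):
    """Generic 1-D particle swarm maximiser with standard velocity updates."""
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                       # each particle's best-known position
    gbest = max(pos, key=profit)         # swarm's best-known position
    for _ in range(iters):
        for i in range(n_particles):
            # inertia + pull towards personal best + pull towards global best
            vel[i] = (0.7 * vel[i]
                      + 1.5 * rng.random() * (pbest[i] - pos[i])
                      + 1.5 * rng.random() * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))  # clamp to bounds
            if profit(pos[i]) > profit(pbest[i]):
                pbest[i] = pos[i]
            if profit(pos[i]) > profit(gbest):
                gbest = pos[i]
    return gbest

# Toy profit curve peaking at a dispatch setting of 3.0 (illustrative only):
best = pso_maximise(lambda x: 10.0 - (x - 3.0) ** 2, 0.0, 8.0)
```

In the paper's setting the decision variables would be a charge/discharge schedule and the objective the plant's market profit; the swarm update rule itself is unchanged.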


Author(s): Peisheng Guo, Gongzheng Yang, Chengxin Wang

Aqueous zinc-ion batteries (AZIBs) have been regarded as alternative and promising large-scale energy storage systems due to their low cost, convenient manufacturing processes, and high safety. However, their development was...


2018, Vol 8 (4), pp. 34
Author(s): Vishal Saxena, Xinyu Wu, Ira Srivastava, Kehan Zhu

The ongoing revolution in Deep Learning is redefining the nature of computing, driven by the increasing volume of pattern classification and cognitive tasks. Specialized digital hardware for deep learning still holds its predominance due to the flexibility offered by software implementation and the maturity of algorithms. However, it is increasingly desired that cognitive computing occur at the edge, i.e., on hand-held devices that are energy constrained, which is energy prohibitive when employing digital von Neumann architectures. Recent explorations in digital neuromorphic hardware have shown promise, but offer a neurosynaptic density too low for scaling to applications such as intelligent cognitive assistants (ICA). Large-scale integration of nanoscale emerging memory devices with Complementary Metal Oxide Semiconductor (CMOS) mixed-signal integrated circuits can herald a new generation of Neuromorphic computers that will transcend the von Neumann bottleneck for cognitive computing tasks. Such hybrid Neuromorphic System-on-a-Chip (NeuSoC) architectures promise machine learning capability at chip-scale form factor and several orders of magnitude improvement in energy efficiency. Practical demonstration of such architectures has been limited, as the performance of emerging memory devices falls short of the idealized behavior expected of memristor-based analog synapses, or weights, and novel machine learning algorithms are needed to take advantage of the actual device behavior. In this article, we review the challenges involved and present a pathway to realize large-scale mixed-signal NeuSoCs, from device arrays and circuits to spike-based deep learning algorithms with ‘brain-like’ energy efficiency.
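The analog compute idea behind memristor synapses can be captured in a few lines: an idealized crossbar performs a matrix-vector multiply in place, with Ohm's law giving each device's current and Kirchhoff's current law summing each column. This is the textbook idealisation the abstract says real devices fall short of, not a model of any particular hardware:

```python
def crossbar_mvm(voltages, conductances):
    """Ideal memristor crossbar: the current out of column j is
    sum_i V_i * G_ij (Ohm's law per device, Kirchhoff summation
    along each column wire)."""
    num_cols = len(conductances[0])
    return [
        sum(v * row[j] for v, row in zip(voltages, conductances))
        for j in range(num_cols)
    ]

# Two input rows driving two output columns (illustrative values):
currents = crossbar_mvm([1.0, 2.0], [[0.5, 1.0], [1.0, 0.0]])  # [2.5, 1.0]
```

The appeal for energy efficiency is that this multiply-accumulate happens in the analog domain in one step; the challenge the article reviews is that real device conductances are noisy, nonlinear, and drift from these ideal weights.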

