A Critical Comparison of Reduced and Conventional EOS Algorithms

SPE Journal ◽  
2013 ◽  
Vol 18 (02) ◽  
pp. 378-388 ◽  
Author(s):  
Kjetil B. Haugen ◽  
Bret L. Beckner

Summary: Phase-equilibrium calculations can be a time-consuming part of process simulators and compositional reservoir simulations. Various authors have presented encouraging speed improvements based on reduced methods, which lower the computational cost by reducing the number of independent variables and thus generating a smaller system of equations to solve. This paper presents a careful comparison of conventional and reduced algorithms, showing that they can be expressed as linear transformations of each other. Consequently, the two sets of algorithms exhibit identical convergence behavior, and the performance gain of the reduced methods is entirely caused by the reduced cost of linear-algebra operations. Performance benchmarks show much smaller speed-up numbers than previously published. Highly optimized linear-algebra operations significantly limit the opportunity for further speed improvement from reduced methods. Only a marginal speed-up potential is observed for mixtures with 15 components or fewer. This suggests that reduced methods may be less attractive for reservoir simulation than previously thought.
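The paper's central observation, that the reduced and conventional algorithms are linear transformations of each other, mirrors a classical fact: Newton-type iterations are invariant under a linear change of variables. A brief sketch in the invertible case, written here for illustration and not taken from the paper:

    % Let y = T x with T invertible, and define g(y) = f(T^{-1} y).
    % Gradient and Hessian transform as
    \[
      \nabla_y g = T^{-\top}\,\nabla_x f, \qquad
      \nabla_y^2 g = T^{-\top}\,\bigl(\nabla_x^2 f\bigr)\,T^{-1},
    \]
    % so the Newton step in the transformed variables maps exactly onto
    % the step in the original variables:
    \[
      y_{k+1} = y_k - \bigl(\nabla_y^2 g\bigr)^{-1}\nabla_y g
              = T x_k - T\bigl(\nabla_x^2 f\bigr)^{-1}\nabla_x f
              = T x_{k+1}.
    \]

The iterates correspond one-to-one, so convergence behavior is identical; any performance difference must come from the per-iteration linear algebra, which is exactly what the benchmarks above indicate.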

2021 ◽  
Vol 10 (10) ◽  
pp. e166101018871
Author(s):  
Heictor Alves de Oliveira Costa ◽  
Larissa Luz Gomes ◽  
Denis Carlos Lima Costa

This paper aims to present and run a composite model using a Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), with the assistance of parallel computing methods, to optimize the electrical distribution in a power grid based on an IEEE 14-bus system. The mathematical-computational model uses the objective function to analyze cost with respect to power or voltage as independent variables; this objective function is the bridge connecting the two implemented algorithms. The results presented in this article demonstrate that the methodology was implemented successfully: it achieved low computational cost, complied with the physical constraints of network security, and reached global solutions in its optimization.
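As a rough illustration of the GA-to-PSO pipeline described above, the sketch below runs a genetic stage to seed a particle-swarm stage on a toy 14-variable quadratic cost. The IEEE 14-bus objective, the parallelization, and all parameter values are illustrative assumptions, not the authors' implementation.

    # Hypothetical GA -> PSO pipeline; the objective function is the
    # "bridge" shared by both stages. The toy cost stands in for the
    # real power/voltage objective of the IEEE 14-bus model.
    import numpy as np

    rng = np.random.default_rng(0)

    def cost(x):
        # Stand-in objective; the real one evaluates grid cost.
        return np.sum((x - 0.7) ** 2, axis=-1)

    def ga_stage(pop, generations=50, mutation=0.05):
        for _ in range(generations):
            parents = pop[np.argsort(cost(pop))[: len(pop) // 2]]
            children = parents + mutation * rng.standard_normal(parents.shape)
            pop = np.vstack([parents, children])
        return pop

    def pso_stage(swarm, iters=100, w=0.7, c1=1.5, c2=1.5):
        vel = np.zeros_like(swarm)
        best = swarm.copy()
        gbest = swarm[np.argmin(cost(swarm))]
        for _ in range(iters):
            r1, r2 = rng.random(swarm.shape), rng.random(swarm.shape)
            vel = w * vel + c1 * r1 * (best - swarm) + c2 * r2 * (gbest - swarm)
            swarm = swarm + vel
            improved = cost(swarm) < cost(best)
            best[improved] = swarm[improved]
            gbest = best[np.argmin(cost(best))]
        return gbest

    pop = rng.random((40, 14))   # 14 decision variables, e.g. bus voltages
    print(cost(pso_stage(ga_stage(pop))))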


Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4321
Author(s):  
Paola Soto ◽  
Miguel Camelo ◽  
Kevin Mets ◽  
Francesc Wilhelmi ◽  
David Góez ◽  
...  

IEEE 802.11 (Wi-Fi) is one of the technologies that provides high performance with a high density of connected devices to support emerging demanding services, such as virtual and augmented reality. However, in highly dense deployments, Wi-Fi performance is severely affected by interference. This problem is even worse in newer standards, such as 802.11n/ac, where features such as Channel Bonding (CB) are introduced to increase network capacity at the cost of using wider spectrum channels. Finding the best channel assignment in dense deployments under dynamic environments with CB is challenging, given its combinatorial nature. Therefore, the use of analytical or system models to predict Wi-Fi performance after potential changes (e.g., dynamic channel selection with CB, or the deployment of new devices) is not suitable, due to either low accuracy or high computational cost. This paper presents a novel, data-driven approach to speed up this process, using a Graph Neural Network (GNN) model that exploits the information carried in the deployment's topology and the intricate wireless interactions to predict Wi-Fi performance with high accuracy. The evaluation results show that preserving the graph structure in the learning process yields a 64% improvement over a naive approach, and around 55% over other Machine Learning (ML) approaches, when using all training features.
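For intuition, here is a minimal sketch of one message-passing step over a deployment graph, with access points as nodes and interference relations as edges. The layer definition, feature sizes, and readout are illustrative assumptions, not the GNN architecture evaluated in the paper.

    # One mean-aggregation message-passing layer over an interference graph.
    import numpy as np

    def mp_layer(H, A, W_self, W_nbr):
        # Average neighbor features, then combine with self features.
        deg = np.maximum(A.sum(axis=1, keepdims=True), 1)
        msg = (A @ H) / deg
        return np.tanh(H @ W_self + msg @ W_nbr)

    rng = np.random.default_rng(1)
    A = np.array([[0, 1, 1, 0],   # adjacency: which APs interfere
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    H = rng.random((4, 8))        # per-AP features (channel, power, load, ...)
    W1, W2 = rng.standard_normal((8, 8)), rng.standard_normal((8, 8))
    H = mp_layer(H, A, W1, W2)    # graph structure is preserved in the update
    throughput = H @ rng.standard_normal(8)   # toy per-AP performance readout
    print(throughput)

Because the update aggregates only over actual neighbors, the learned function respects the deployment's topology, which is the property credited above for the accuracy gain.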


Author(s):  
Jimmy Ming-Tai Wu ◽  
Qian Teng ◽  
Shahab Tayeb ◽  
Jerry Chun-Wei Lin

Abstract: High average-utility itemset mining (HAUIM) was established to provide a fairer measure than generic high-utility itemset mining (HUIM) for revealing interesting patterns. In practical applications, the database changes dynamically as insertion/deletion operations are performed on it. Several works were designed to handle the insertion process, but fewer studies have focused on the deletion process for knowledge maintenance. In this paper, we develop a PRE-HAUI-DEL algorithm that utilizes the pre-large concept in HAUIM to handle transaction deletion in dynamic databases. The pre-large concept serves as a buffer in HAUIM that reduces the number of database scans when the database is updated, particularly under transaction deletion. Two upper-bound values are also established to prune unpromising candidates early, which reduces the computational cost. The experimental results show that the designed PRE-HAUI-DEL algorithm performs well compared to the Apriori-like model in terms of runtime, memory, and scalability on dynamic databases.
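A minimal sketch of the pre-large idea as it applies to deletion follows. The thresholds, the classification rule, and the safety bound are written in simplified support-style form and are assumptions for illustration, not the PRE-HAUI-DEL algorithm itself.

    # Two thresholds keep a buffer of "pre-large" itemsets so that small
    # batches of deletions can be processed without a full database rescan.
    UPPER = 0.30   # high average-utility threshold
    LOWER = 0.20   # pre-large (buffer) threshold

    def classify(avg_utility_ratio):
        if avg_utility_ratio >= UPPER:
            return "large"        # kept in the result set
        if avg_utility_ratio >= LOWER:
            return "pre-large"    # buffered; may change status after updates
        return "small"            # pruned early by the upper bounds

    def safe_deletion_bound(db_size):
        # Rough safety bound: how many transactions may be deleted before
        # a rescan is required (adapted notation, illustrative only).
        return int((UPPER - LOWER) * db_size / UPPER)

    print(classify(0.25), safe_deletion_bound(10_000))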


2014 ◽  
Vol 665 ◽  
pp. 643-646
Author(s):  
Ying Liu ◽  
Yan Ye ◽  
Chun Guang Li

A metalearning algorithm learns the base learning algorithm, with the aim of improving the performance of the learning system. The incremental delta-bar-delta (IDBD) algorithm is such a metalearning algorithm. On the other hand, sparse algorithms are gaining popularity due to their good performance and wide applications. In this paper, we propose a sparse IDBD algorithm by taking the sparsity of the system into account. An ℓ1-norm penalty is included in the cost function of the standard IDBD, which is equivalent to adding a zero attractor to the iterations and thus can speed up convergence if the system of interest is indeed sparse. Simulations demonstrate that the proposed algorithm is superior to the competing algorithms in sparse system identification.
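The sketch below conveys the flavor of an IDBD-style update with a zero attractor appended, applied to sparse system identification. The update rules follow Sutton's IDBD as commonly stated; the attractor placement and all constants are assumptions for illustration, not the exact algorithm proposed in the paper.

    # IDBD with a zero attractor: per-weight step sizes are meta-learned,
    # and the sign term pulls inactive weights toward zero.
    import numpy as np

    def sparse_idbd_step(w, h, beta, x, d, theta=0.01, rho=1e-4):
        err = d - w @ x                      # prediction error
        beta += theta * err * x * h          # meta-learning of log step sizes
        alpha = np.exp(beta)                 # per-weight step sizes
        w += alpha * err * x - rho * np.sign(w)   # LMS step + zero attractor
        h = h * np.maximum(1 - alpha * x**2, 0) + alpha * err * x
        return w, h, beta

    # Identify a sparse 20-tap system from noisy observations.
    rng = np.random.default_rng(2)
    true_w = np.zeros(20); true_w[[3, 11]] = [1.0, -0.5]
    w, h, beta = np.zeros(20), np.zeros(20), np.full(20, np.log(0.01))
    for _ in range(5000):
        x = rng.standard_normal(20)
        d = true_w @ x + 0.01 * rng.standard_normal()
        w, h, beta = sparse_idbd_step(w, h, beta, x, d)
    print(np.round(w, 2))   # close to true_w, with inactive taps near zero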


2017 ◽  
Vol 2017 ◽  
pp. 1-10
Author(s):  
Hsuan-Ming Huang ◽  
Ing-Tsung Hsiao

Background and Objective. Over the past decade, image quality in low-dose computed tomography has been greatly improved by various compressive sensing- (CS-) based reconstruction methods. However, these methods have some disadvantages, including high computational cost and slow convergence rate. Many different speed-up techniques for CS-based reconstruction algorithms have been developed. The purpose of this paper is to propose a fast reconstruction framework that combines a CS-based reconstruction algorithm with several speed-up techniques. Methods. First, total difference minimization (TDM) was implemented using soft-threshold filtering (STF). Second, we combined TDM-STF with the ordered subsets transmission (OSTR) algorithm to accelerate convergence. To further speed up the convergence of the proposed method, we applied the power factor and the fast iterative shrinkage thresholding algorithm to OSTR and TDM-STF, respectively. Results. Results obtained from simulation and phantom studies showed that many speed-up techniques can be combined to greatly improve the convergence speed of a CS-based reconstruction algorithm. More importantly, the increase in computation time (≤10%) was minor compared to the acceleration provided by the proposed method. Conclusions. In this paper, we have presented a CS-based reconstruction framework that combines several acceleration techniques. Both simulation and phantom studies provide evidence that the proposed method has the potential to satisfy the requirement of fast image reconstruction in practical CT.
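The soft-threshold operator at the heart of STF/FISTA-type steps is simple enough to show directly; this minimal sketch covers only that operator, not the full TDM-STF/OSTR pipeline.

    # Soft-thresholding: the proximal operator of lam * ||.||_1. It shrinks
    # coefficient magnitudes by lam and zeroes out anything smaller, which
    # is what enforces sparsity in CS-based reconstruction.
    import numpy as np

    def soft_threshold(v, lam):
        return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

    v = np.array([-1.5, -0.2, 0.0, 0.4, 2.0])
    print(soft_threshold(v, 0.5))   # -> [-1., -0., 0., 0., 1.5]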


2018 ◽  
Vol 2018 ◽  
pp. 1-12
Author(s):  
Yun-Hua Wu ◽  
Lin-Lin Ge ◽  
Feng Wang ◽  
Bing Hua ◽  
Zhi-Ming Chen ◽  
...  

To satisfy the real-time requirement of spacecraft autonomous navigation using natural landmarks, a novel algorithm called CSA-SURF (chessboard segmentation algorithm and speeded-up robust features) is proposed to improve the speed of the image registration process without loss of repeatability performance. It is a combination of the chessboard segmentation algorithm (CSA) and SURF. Here, SURF is used to extract features from satellite images because of its scale- and rotation-invariant properties and low computational cost. CSA is based on image segmentation technology and aims to find representative blocks, which are allocated to different tasks to speed up the image registration process. To illustrate the advantages of the proposed algorithm, PCA-SURF, the combination of principal component analysis and SURF, is also analyzed in this paper for comparison. Furthermore, the random sample consensus (RANSAC) algorithm is applied to eliminate false matches for a further accuracy improvement. The simulation results show that the proposed strategy obtains good results, especially under scaling and rotation variation. Moreover, CSA-SURF reduces extraction time by 50% and matching time by 90% without losing repeatability performance compared with the standard SURF algorithm. The proposed method has been demonstrated as an alternative way to perform image registration for spacecraft autonomous navigation using natural landmarks.
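The chessboard-segmentation idea can be sketched as follows: split the image into a grid of blocks, score each block, and keep only the most representative ones for feature extraction. The gradient-energy score below is an assumed stand-in; the paper's actual block-selection criterion is not reproduced here.

    # Split an image into a chessboard of blocks and return the grid
    # coordinates of the highest-scoring blocks (those worth feeding to
    # SURF, possibly in parallel tasks).
    import numpy as np

    def chessboard_blocks(img, rows=4, cols=4, keep=6):
        h, w = img.shape
        bh, bw = h // rows, w // cols
        scored = []
        for r in range(rows):
            for c in range(cols):
                blk = img[r*bh:(r+1)*bh, c*bw:(c+1)*bw].astype(float)
                gy, gx = np.gradient(blk)
                scored.append(((r, c), np.mean(gx**2 + gy**2)))
        scored.sort(key=lambda item: item[1], reverse=True)
        return [pos for pos, _ in scored[:keep]]

    img = (np.random.default_rng(3).random((256, 256)) * 255).astype(np.uint8)
    print(chessboard_blocks(img))   # representative blocks for extraction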


2018 ◽  
Vol 12 (3) ◽  
pp. 143-157 ◽  
Author(s):  
Håvard Raddum ◽  
Pavol Zajac

Abstract: We show how to build a binary matrix from the MRHS representation of a symmetric-key cipher. The matrix contains the cipher represented as an equation system and can be used to assess a cipher's resistance against algebraic attacks. We give an algorithm for solving the system and compute its complexity. The complexity is normally close to exhaustive search on the variables representing the user-selected key. Finally, we show that for some variants of LowMC, the joined MRHS matrix representation can be used to speed up regular encryption in addition to exhaustive key search.
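For context, working with such a binary matrix means doing linear algebra over GF(2), where row addition is XOR. The toy elimination routine below illustrates this on arbitrary demo data unrelated to any cipher.

    # Gaussian elimination over GF(2): row swaps plus XOR row additions.
    import numpy as np

    def gf2_eliminate(M):
        M = M.copy() % 2
        rank, (rows, cols) = 0, M.shape
        for col in range(cols):
            pivot = next((r for r in range(rank, rows) if M[r, col]), None)
            if pivot is None:
                continue
            M[[rank, pivot]] = M[[pivot, rank]]   # move pivot row up
            for r in range(rows):
                if r != rank and M[r, col]:
                    M[r] ^= M[rank]               # XOR = addition in GF(2)
            rank += 1
        return M, rank

    M = np.array([[1, 0, 1, 1],
                  [0, 1, 1, 0],
                  [1, 1, 0, 1]], dtype=np.uint8)
    reduced, rank = gf2_eliminate(M)
    print(rank)       # 2: the third row is the XOR of the first two
    print(reduced)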


Author(s):  
Franz Pichler ◽  
Gundolf Haase

A finite element code is developed in which all of the computationally expensive steps are performed on a graphics processing unit via the THRUST and PARALUTION libraries. The code focuses on the simulation of transient problems, where the repeated computations per time step dominate the computational cost. It is used to solve partial and ordinary differential equations as they arise in thermal-runaway simulations of automotive batteries. The speed-up obtained by utilizing the graphics processing unit for every critical step is compared against single-core and multi-threaded solutions, which are also supported by the chosen libraries. This way, a high total speed-up on the graphics processing unit is achieved without the need to program a single classical Compute Unified Device Architecture kernel.


2021 ◽  
Vol 28 (2) ◽  
pp. 163-182
Author(s):  
José L. Simancas-García ◽  
Kemel George-González

Shannon's sampling theorem is one of the most important results of modern signal theory. It describes the reconstruction of any band-limited signal from its samples, provided they are taken at a sufficient rate. On the other hand, although less well known, there is the discrete sampling theorem, proved by Cooley while he was working on the development of an algorithm to speed up the calculation of the discrete Fourier transform. Cooley showed that a sampled signal can be resampled by selecting a smaller number of samples, which reduces computational cost. It is then possible to reconstruct the original sampled signal using a reverse process. In principle, the two theorems are not related. However, in this paper we show that in the context of Non-Standard Mathematical Analysis (NSA) and the hyperreal number system *R, the two theorems are equivalent. The difference between them becomes a matter of scale. With the scale changes that the hyperreal number system allows, the discrete variables and functions become continuous, and Shannon's sampling theorem emerges from the discrete sampling theorem.
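For reference, the band-limited reconstruction guaranteed by the continuous theorem, with sampling period T, is the Whittaker–Shannon interpolation formula (standard form, not specific to this paper):

    % A signal band-limited to B Hz is exactly recovered from its samples
    % x(nT) whenever T <= 1/(2B):
    \[
      x(t) = \sum_{n=-\infty}^{\infty} x(nT)\,
             \operatorname{sinc}\!\left(\frac{t - nT}{T}\right),
      \qquad \operatorname{sinc}(u) = \frac{\sin(\pi u)}{\pi u}.
    \]

In the paper's nonstandard setting, letting T be infinitesimal is what allows the discrete theorem to pass over into this continuous form.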


2018 ◽  
Vol 15 (2) ◽  
pp. 294-301
Author(s):  
Reddy Sreenivasulu ◽  
Chalamalasetti SrinivasaRao

Drilling is a hole-making process carried out on machine components at the time of assembly work and is encountered everywhere. In precision applications, quality and accuracy play a vital role. Nowadays, industries suffer from the cost incurred during deburring, especially in precision assemblies such as aerospace/aircraft body structures, marine works, and automobile industries. Burrs produced during drilling cause dimensional errors, jamming of parts, and misalignment; a deburring operation after drilling is therefore often required, and reducing burr size is a serious topic. In this study, experiments were conducted with various input parameters chosen from previous research. The effect of altering the drill geometry on the thrust force and the burr size of the drilled hole was investigated using the Taguchi design of experiments, and an optimum combination of the most significant input parameters, identified by ANOVA, was found to minimize burr size using the Design-Expert software. Drill thrust has the strongest influence on burr size, and the clearance angle of the drill bit causes the variation in thrust; burr height is the response observed in this study. These output results were compared with predictions from the neural network software EasyNN-plus. Finally, it is concluded that increasing the number of nodes increases the computational cost and decreases the neural-network error. Good agreement was shown between the predictive model results and the experimental responses.

