Falcon: A Production Quality Distributed Memory Reservoir Simulator

1998 ◽  
Vol 1 (05) ◽  
pp. 400-407 ◽  
Author(s):  
G.S. Shiralkar ◽  
R.E. Stephenson ◽  
Wayne Joubert ◽  
Olaf Lubeck ◽  
Bart van Bloemen Waanders

This paper (SPE 51969) was revised for publication from paper SPE 37975, first presented at the 1997 SPE Reservoir Simulation Symposium, Dallas, 8-11 June. Original manuscript received for review 30 June 1997. Revised manuscript received 30 March 1998. Paper peer approved 6 July 1998.

Summary
We describe a new production model, Falcon, that has achieved speeds on parallel computers 100 times faster on real-world problems than current production models on a vector computer. Falcon has been used to conduct the largest geostatistical reservoir study ever conducted within Amoco. In this paper we discuss the following: Falcon's data-parallel paradigm with Fortran 90 and High Performance Fortran (HPF); its single-program, multiple-data (SPMD) paradigm with message passing; efficient memory management that enables simulation of enormous studies; and a numerical formulation that reconciles the generalized compositional approach (based on component masses and pressure) with earlier approaches (based on pressures and saturations) in a more general and more efficient way. We also discuss Falcon's scalability up to 512 processor nodes and the performance (timings and memory) achieved on a number of parallel platforms, including Cray Research's T3D and T3E, SGI's Power Challenge and Origin 2000, Thinking Machines' CM5, and IBM's SP2. Falcon also runs on single-processor computers such as PCs and IBM's RS6000. We discuss a new parallel linear-solver technology based on a fully parallel, scalable implementation of incomplete lower-upper (ILU) preconditioning coupled with a GMRES or Orthomin iteration. This naturally ordered global ILU preconditioner is scalable to hundreds of processors, efficiently solving the matrix problems arising from large-scale simulations. The techniques described in this paper have enabled us to run problems of up to 16.5 million gridblocks.
Falcon was used to simulate fifty geostatistically derived realizations of a large black-oil waterflood system. The realizations, each with 2.3 million cells and 1,039 wells, took an average of 4.2 hours to execute on a 128-node CM5, enabling the simulation study to finish in less than a month. In this field study, we bypassed upscaling through the use of fine vertical-resolution gridding. Our focus has been on the applicability of Falcon to real-world problems. Falcon can be used to model both small and very large reservoirs, including reservoirs characterized by geostatistics. It can simulate black-oil, gas/water, and dry-gas reservoirs, and a fully compositional capability is under development.
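Falcon's solver pairs incomplete LU (ILU) preconditioning with a GMRES or Orthomin iteration. As a serial illustration only (not Falcon's distributed, naturally ordered implementation), SciPy's `spilu` and `gmres` can sketch the same ILU+GMRES pairing on a small model problem standing in for a reservoir pressure system:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# A small 2-D model problem: the 5-point Laplacian on an n-by-n grid,
# a stand-in for the pressure matrix of a reservoir simulator.
n = 10
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = sp.csc_matrix(sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n)))
b = np.ones(A.shape[0])

# Incomplete LU factorization, used as a preconditioner approximating A^-1.
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator(A.shape, ilu.solve)

# GMRES iteration accelerated by the ILU preconditioner.
x, info = spla.gmres(A, b, M=M)
```

Here `info == 0` signals convergence. In Falcon, the analogous factorization and triangular solves are performed fully in parallel across processor nodes, which is the hard part this serial sketch omits.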

2019 ◽  
Vol 2 (1) ◽  
pp. 61-73
Author(s):  
Pankaj Lathar ◽  
K. G. Srinivasa

With the advancements in science and technology, data is being generated at a staggering rate. The raw data generated is generally of high value and may conceal important information with the potential to solve several real-world problems. To extract this information, the available raw data must be processed and analysed efficiently. It has, however, been observed that such raw data is generated at a rate faster than it can be processed by traditional methods. This has led to the emergence of the popular parallel-processing programming model, MapReduce. In this study, the authors perform a comparative analysis of two popular data-processing engines, Apache Flink and Hadoop MapReduce. The analysis is based on the parameters of scalability, reliability and efficiency. The results reveal that Flink unambiguously outperforms Hadoop's MapReduce. Flink's edge over MapReduce can be attributed to the following features: Active Memory Management, Dataflow Pipelining and an Inline Optimizer. It can be concluded that as the complexity and magnitude of real-time raw data continuously increase, it is essential to explore newer platforms that are adequately and efficiently capable of processing such data.
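The MapReduce model compared here splits work into a map phase over independent input chunks, followed by a reduce phase that merges the partial results. A minimal serial Python sketch of the canonical word-count example (chunk contents invented; real engines such as Hadoop and Flink run the map calls in parallel across a cluster):

```python
from collections import Counter
from functools import reduce

def map_phase(chunk):
    # Map: count words within one input chunk independently of all others.
    return Counter(chunk.split())

def reduce_phase(left, right):
    # Reduce: merge two partial counts into one.
    return left + right

chunks = ["flink flink hadoop", "hadoop mapreduce flink"]
partials = [map_phase(c) for c in chunks]   # the parallelizable step
totals = reduce(reduce_phase, partials, Counter())
```

The map calls share no state, which is what makes the phase embarrassingly parallel; the engines differ mainly in how they schedule these phases (batch rounds in MapReduce versus pipelined dataflow in Flink).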


2017 ◽  
Vol 2017 (3) ◽  
pp. 147-167 ◽  
Author(s):  
Gilad Asharov ◽  
Daniel Demmler ◽  
Michael Schapira ◽  
Thomas Schneider ◽  
Gil Segev ◽  
...  

Abstract The Border Gateway Protocol (BGP) computes routes between the organizational networks that make up today’s Internet. Unfortunately, BGP suffers from deficiencies, including slow convergence, security problems, a lack of innovation, and the leakage of sensitive information about domains’ routing preferences. To overcome some of these problems, we revisit the idea of centralizing and using secure multi-party computation (MPC) for interdomain routing, which was proposed by Gupta et al. (ACM HotNets’12). We implement two algorithms for interdomain routing with state-of-the-art MPC protocols. On an empirically derived dataset that approximates the topology of today’s Internet (55 809 nodes), our protocols take as little as 6 s of topology-independent precomputation and only 3 s of online time. We show, moreover, that when our MPC approach is applied at country/region-level scale, runtimes can be as low as 0.17 s of online time and 0.20 s of precomputation time. Our results motivate the MPC approach for interdomain routing and furthermore demonstrate that current MPC techniques are capable of efficiently tackling real-world problems at a large scale.
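MPC lets several parties jointly compute a function of their inputs without revealing those inputs to each other. The protocols benchmarked in the paper are far more sophisticated, but the core idea can be illustrated with a toy additive secret-sharing sketch in Python (all values invented; this shows only the principle, not a secure implementation):

```python
import random

P = 2**61 - 1  # a public prime modulus; all arithmetic is done mod P

def share(secret, n=3):
    # Split `secret` into n additive shares that sum to it mod P.
    # Any n-1 shares look uniformly random and reveal nothing.
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Two domains' private inputs (e.g. routing preferences encoded as integers).
a_shares = share(10)
b_shares = share(32)

# Each compute party adds the shares it holds, locally: the sum is computed
# without any single party ever seeing either input.
sum_shares = [(x + y) % P for x, y in zip(a_shares, b_shares)]
result = reconstruct(sum_shares)
```

Addition of shared values needs no communication at all; it is multiplications and comparisons (which the paper's route-selection algorithms require) that make real MPC protocols expensive and interactive.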


Diabetes ◽  
2020 ◽  
Vol 69 (Supplement 1) ◽  
pp. 1588-P ◽  
Author(s):  
ROMIK GHOSH ◽  
ASHOK K. DAS ◽  
AMBRISH MITHAL ◽  
SHASHANK JOSHI ◽  
K.M. PRASANNA KUMAR ◽  
...  

Diabetes ◽  
2020 ◽  
Vol 69 (Supplement 1) ◽  
pp. 2258-PUB
Author(s):  
ROMIK GHOSH ◽  
ASHOK K. DAS ◽  
SHASHANK JOSHI ◽  
AMBRISH MITHAL ◽  
K.M. PRASANNA KUMAR ◽  
...  
