Calculation of Direct Exchange Areas for Nonuniform Zones Using a Reduced Integration Scheme

2003 ◽  
Vol 125 (5) ◽  
pp. 839-844 ◽  
Author(s):  
Weixue Tian ◽  
Wilson K. S. Chiu

In the zonal method, considerable computational resources are needed to calculate the direct exchange areas (DEA) among the isothermal zones due to integrals with up to six dimensions, while strong singularities occur in the integrands when two zones are adjacent or overlapping (self-irradiation). A special transformation of variables that reduces a double integral to several single integrals is discussed in this paper. This technique was originally presented by Erkku (1959) for calculation of DEA using a uniform zone system in a cylindrical enclosure. However, nonuniform zones are needed for applications with large thermal gradients. We therefore extend this technique to calculate the DEA for nonuniform zones in an axisymmetric cylindrical system. A six-fold reduction in computational time was observed in calculating DEA compared with cases without the variable transformation. It is shown that the accuracy and efficiency of the estimated radiative heat flux improve when a nonuniform zone system is used. All DEA are calculated with reasonable accuracy without resorting to the conservation equations. Results compare well with analytical solutions and the numerical results of previous researchers. The technique can be readily extended to rectangular enclosures, with a similar reduction in computation time expected.
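The core idea behind this family of reductions can be sketched numerically. The snippet below is an illustrative stand-in, not Erkku's actual transformation: for a kernel that depends only on the separation u = x - y (the kernel function, zone extents, and values are all hypothetical), a double integral over two zone intervals collapses to a single integral weighted by the overlap length w(u), which is where the saving in quadrature points comes from.

```python
# Illustrative sketch: reducing a separation-kernel double integral
# to a single integral via the substitution u = x - y.
import numpy as np
from scipy.integrate import dblquad, quad

def kernel(u):
    # hypothetical separation-dependent kernel, a stand-in for the
    # radiative kernel between two zones
    return np.exp(-abs(u)) / (1.0 + u * u)

a, b = 2.0, 3.0  # zone extents (illustrative values)

# brute-force double integral over x in [0, a], y in [0, b]
direct, _ = dblquad(lambda y, x: kernel(x - y), 0.0, a, 0.0, b)

# reduced form: for fixed u = x - y, the inner integral is just the
# overlap length w(u) = max(0, min(a, u + b) - max(0, u))
def overlap(u):
    return max(0.0, min(a, u + b) - max(0.0, u))

reduced, _ = quad(lambda u: kernel(u) * overlap(u), -b, a)

print(direct, reduced)  # the two values agree to quadrature tolerance
```

The reduced form evaluates the kernel along a single axis instead of over a 2-D grid, mirroring (in miniature) the six-fold speed-up reported for the full DEA integrals.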

2003 ◽  
Author(s):  
Weixue Tian ◽  
Wilson K. S. Chiu

This paper presents a special transformation of variables that reduces a double integral to three single integrals, and its use for calculating direct exchange areas (DEA) in the zonal method. The technique was originally presented for calculation of DEA using a uniform zone system in a cylindrical enclosure. However, nonuniform zones are needed for applications with large thermal gradients. We therefore extend this technique to calculate the DEA for nonuniform zones in an axisymmetric cylindrical system. At least a six-fold saving in computational time was observed in calculating DEA compared with cases without the variable transformation. It is shown that the accuracy and efficiency of the estimated radiative heat flux improve when a nonuniform zone system is used. All DEA are calculated with reasonable accuracy without resorting to the conservation equations. Results compare well with analytical solutions and the numerical results of previous researchers. A brief discussion of the technique's application to calculating DEA in a 3-D rectangular enclosure is also provided.


2021 ◽  
Author(s):  
Giorgio Fighera ◽  
Ernesto Della Rossa ◽  
Patrizia Anastasi ◽  
Mohammed Amr Aly ◽  
Tiziano Diamanti

Abstract Improvements in reservoir simulation computational time, thanks to GPU-based simulators and the increasing computational power of modern HPC systems, are paving the way for massive employment of Ensemble History Matching (EHM) techniques, which are intrinsically parallel. Here we present the results of a comparative study between a newly developed EHM tool that leverages GPU parallelism and a commercial third-party EHM software used as a benchmark. Both are tested on a real case. The reservoir chosen for the comparison has a production history of 3 years with 15 wells comprising oil producers and water and gas injectors. The EHM algorithm used is the Ensemble Smoother with Multiple Data Assimilations (ESMDA), and both tools have access to the same computational resources. The EHM problem was stated in the same way for both tools. The objective function considers well oil productions, water cuts, bottom-hole pressures, and gas-oil ratios. Porosity and horizontal permeability are used as 3D grid parameters in the update algorithm, along with nine scalar parameters for anisotropy ratios, Corey exponents, and fault transmissibility multipliers. Both the presented tool and the benchmark obtained a satisfactory history match quality. The benchmark tool took around 11.2 hours to complete, while the proposed tool took only 1.5 hours. The two tools performed similar updates on the scalar parameters, with only minor discrepancies. Updates on the 3D grid properties, however, show significant local differences. The updated ensemble for the benchmark reached extreme values of porosity and permeability, distributed in a heterogeneous way; these distributions are quite unlikely in some model regions given the initial geological characterization of the reservoir. The updated ensemble for the presented tool did not reach extreme values in either porosity or permeability.
The resulting property distributions do not deviate far from those of the initial ensemble, so we conclude that we were able to successfully update the ensemble while preserving the geological characterization of the reservoir. Analysis suggests that this discrepancy is due to the different way in which our EHM code considers inactive cells in the grid update calculations compared to the benchmark, highlighting the fact that statistics including inactive cells must be carefully managed to correctly preserve the geological distribution represented in the initial ensemble. The presented EHM tool was developed from scratch to be fully parallel and to leverage the abundantly available computational resources. Moreover, the ESMDA implementation was tweaked to improve the reservoir update by carefully managing inactive cells. A comparison against a benchmark showed that the proposed EHM tool achieved similar history match quality while improving the computation time and the geological realism of the updated ensemble.
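The ESMDA update named in the abstract admits a compact sketch. The toy linear forward model, parameter dimensions, and inflation schedule below are illustrative stand-ins for the reservoir model, not the paper's setup; the structure of the multi-step assimilation loop is the standard one, with inflation factors whose reciprocals sum to one.

```python
# Minimal ESMDA sketch on a toy linear forward model.
import numpy as np

rng = np.random.default_rng(0)

G = np.array([[1.0, 0.5], [0.2, 1.0]])   # toy linear forward operator
m_true = np.array([1.0, -0.5])
obs_std = 0.05
d_obs = G @ m_true + rng.normal(0.0, obs_std, size=2)

Ne = 200                                  # ensemble size
M = rng.normal(0.0, 1.0, size=(2, Ne))    # prior parameter ensemble
alphas = [4.0, 4.0, 4.0, 4.0]             # inflation factors, sum(1/a) = 1

def misfit(M):
    D = G @ M
    return np.mean(np.sum((D - d_obs[:, None]) ** 2, axis=0))

misfit0 = misfit(M)

for alpha in alphas:
    D = G @ M                                        # predicted data
    E = rng.normal(0.0, np.sqrt(alpha) * obs_std, size=D.shape)
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    C_md = dM @ dD.T / (Ne - 1)                      # cross-covariance
    C_dd = dD @ dD.T / (Ne - 1)                      # data covariance
    C_e = alpha * obs_std ** 2 * np.eye(2)           # inflated obs noise
    K = C_md @ np.linalg.inv(C_dd + C_e)             # Kalman-type gain
    M = M + K @ (d_obs[:, None] + E - D)             # assimilation step

misfit1 = misfit(M)
print(misfit0, misfit1)  # data misfit shrinks across assimilations
```

The paper's tweak for inactive cells would enter where the ensemble statistics (`dM`, `C_md`) are computed: masking inactive cells out of those statistics is what keeps the update from distorting the geological prior.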


Author(s):  
G. Malikov ◽  
V. Lisienko ◽  
A. Titaev ◽  
R. Viskanta

A new method based on the discrete transfer modeling technique for calculating the direct exchange areas (DEA) in zonal methods of radiative heat transfer is presented. The key feature of this method is a fast DEA matrix evaluation. The computational time was found to be short in comparison to other DEA calculation methods based on numerical quadrature integration. The accuracy of the procedure is assessed by comparing its predictions with those based on numerical integration for a test case (the IFRF furnace).


2015 ◽  
Vol 2015 ◽  
pp. 1-11 ◽  
Author(s):  
Guo-Jun Li ◽  
Ben-Wen Li ◽  
Ya-Song Sun

The evaluation of direct exchange areas (DEA) in the zonal method is the most important task due to the heavy computational cost of multidimensional integrals together with the existence of singularities. A variable-transformation technique that reduces the order of the integrals, developed originally by Erkku (1959) to calculate the DEAs of a uniformly divided cylindrical zone system, was extended by Tian and Chiu (2003) to nonuniformly divided cylindrical systems with large thermal gradients. In this paper, we further extend the reduced integration scheme (RIS) to calculate the DEAs in a three-dimensional rectangular system. The detailed reductions of six-, five-, and fourfold integrals to threefold ones are presented; the DEAs in a rectangular system, under the assumption of a gray medium, are computed comparatively by Gaussian quadrature integration (GQI) and by the RIS. The comparisons reveal that the RIS provides remarkably higher accuracy and efficiency than GQI. More interestingly and practically, the singularities of the DEAs can be decomposed and noticeably weakened by the RIS.


2021 ◽  
Vol 11 (5) ◽  
pp. 2177
Author(s):  
Zuo Xiang ◽  
Patrick Seeling ◽  
Frank H. P. Fitzek

With increasing numbers of computer vision and object detection application scenarios, those requiring ultra-low service latency have become increasingly prominent, e.g., for autonomous and connected vehicles or smart city applications. The incorporation of machine learning through the application of trained models in these scenarios can pose a computational challenge. The softwarization of networks provides opportunities to incorporate computing into the network, increasing flexibility by distributing workloads through offloading from client and edge nodes over in-network nodes to servers. In this article, we present an example of splitting the inference component of the YOLOv2 trained machine learning model between client, network, and server-side processing to reduce the overall service latency. Assuming a client has 20% of the server's computational resources, we observe a more than 12-fold reduction of service latency when incorporating our service split compared to on-client processing, and an increase in speed of more than 25% compared to performing everything on the server. Our approach is not only applicable to object detection but can also be applied in a broad variety of machine learning-based applications and services.
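The trade-off behind such a split can be sketched as a simple search over cut points. The per-layer costs, tensor sizes, and link speed below are made-up illustrative numbers, not YOLOv2's real profile; only the structure (client compute + intermediate-tensor transfer + server compute) reflects the abstract's setup, including the assumption that the client runs at 20% of the server's speed.

```python
# Toy split-point search over a 5-layer model (illustrative numbers).
layer_cost_server = [1.0, 1.0, 6.0, 8.0, 9.0]   # ms per layer on the server
tensor_mb = [2.5, 0.2, 0.1, 0.05, 0.02]         # activation size after each layer
input_mb = 10.0                                  # raw input frame size
bandwidth_mb_per_ms = 1.0                        # hypothetical link speed
client_slowdown = 5.0                            # client has 20% of server power

def latency(split):
    # layers [0, split) run on the client, the rest on the server
    client = client_slowdown * sum(layer_cost_server[:split])
    transfer = (tensor_mb[split - 1] if split > 0 else input_mb) / bandwidth_mb_per_ms
    server = sum(layer_cost_server[split:])
    return client + transfer + server

lat = [latency(s) for s in range(len(layer_cost_server) + 1)]
best = min(range(len(lat)), key=lat.__getitem__)
print(best, lat)  # an intermediate split beats both all-client and all-server
```

With these numbers the cheapest cut is after the first layer, where early layers have already shrunk the activation well below the raw input size: the same mechanism that makes the split in the article beat both endpoints.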


2022 ◽  
Vol 16 (1) ◽  
pp. 0-0

A secure and efficient authentication mechanism is a major concern in cloud computing due to data sharing between cloud server and user over the internet. This paper proposes an efficient Hashing, Encryption and Chebyshev (HEC)-based authentication scheme in order to provide security in data communication. Formal and informal security analyses demonstrate that the proposed HEC-based authentication approach provides data security in the cloud more efficiently. The proposed approach addresses the security issues and ensures the privacy and data security of the cloud user. Moreover, the proposed HEC-based authentication approach makes the system more robust and secure, and has been verified with multiple scenarios. The proposed authentication approach also requires less computational time and memory than existing authentication techniques: its performance, measured in terms of computation time and memory for a 100 Kb data size, is 26 ms and 1878 bytes, respectively.
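The abstract does not detail the Chebyshev construction, so the following is only an assumed illustration: authentication schemes in this family typically rest on the semigroup property of Chebyshev polynomials, T_r(T_s(x)) = T_rs(x), which enables a Diffie-Hellman-style key agreement. The base point and exponents below are toy values.

```python
# Semigroup property of Chebyshev polynomials, the usual basis of
# Chebyshev-map key agreement (illustrative, not the paper's protocol).
import math

def cheb(n, x):
    # trigonometric form T_n(x) = cos(n * arccos(x)), valid on [-1, 1]
    return math.cos(n * math.acos(x))

x = 0.3            # public base point in [-1, 1]
r, s = 7, 11       # private keys of the two parties (toy values)

# each party applies its own map to the other's public value
key_a = cheb(r, cheb(s, x))
key_b = cheb(s, cheb(r, x))

print(key_a, key_b)  # both equal T_{r*s}(x), so the shared key matches
```

Because T_r(T_s(x)) = T_s(T_r(x)) = T_rs(x), both parties derive the same key without revealing r or s, the same commutativity that exponentiation provides in classical Diffie-Hellman.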


2010 ◽  
Vol 3 (6) ◽  
pp. 1555-1568 ◽  
Author(s):  
B. Mijling ◽  
O. N. E. Tuinder ◽  
R. F. van Oss ◽  
R. J. van der A

Abstract. The Ozone Profile Algorithm (OPERA), developed at KNMI, retrieves the vertical ozone distribution from nadir spectral satellite measurements of backscattered sunlight in the ultraviolet and visible wavelength range. To produce consistent global datasets the algorithm needs good global performance, while a short computation time facilitates the use of the algorithm in near-real-time applications. To test the global performance of the algorithm we examine the convergence behaviour, as a diagnostic tool, of the ozone profile retrievals from the GOME instrument (on board ERS-2) for February and October 1998. In this way, we uncover different classes of retrieval problems, related to the South Atlantic Anomaly, low cloud fractions over deserts, desert dust outflow over the ocean, and the intertropical convergence zone. The influence of the first guess and of the external input data, including the ozone cross-sections and the ozone climatologies, on the retrieval performance is also investigated. By using a priori ozone profiles selected on the expected total ozone column, retrieval problems due to anomalous ozone distributions (such as in the ozone hole) can be avoided. By applying the algorithm adaptations the convergence statistics improve considerably, not only increasing the number of successful retrievals but also reducing the average computation time, owing to fewer iteration steps per retrieval. For February 1998, non-convergence was brought down from 10.7% to 2.1%, while the mean number of iteration steps (which dominates the computational time) dropped 26%, from 5.11 to 3.79.


Geophysics ◽  
2013 ◽  
Vol 78 (1) ◽  
pp. V1-V9 ◽  
Author(s):  
Zhonghuan Chen ◽  
Sergey Fomel ◽  
Wenkai Lu

When plane-wave destruction (PWD) is implemented by implicit finite differences, the local slope is estimated by an iterative algorithm. We propose an analytical estimator of the local slope that is based on convergence analysis of the iterative algorithm. Using the analytical estimator, we design a noniterative method to estimate slopes by a three-point PWD filter. Compared with the iterative estimation, the proposed method needs only one regularization step, which reduces computation time significantly. With directional decoupling of the plane-wave filter, the proposed algorithm is also applicable to 3D slope estimation. We present synthetic and field experiments to demonstrate that the proposed algorithm can yield a correct estimation result with shorter computational time.


Author(s):  
Jérôme Limido ◽  
Mohamed Trabia ◽  
Shawoon Roy ◽  
Brendan O’Toole ◽  
Richard Jennings ◽  
...  

A series of experiments were performed to study plastic deformation of metallic plates under hypervelocity impact at the University of Nevada, Las Vegas (UNLV) Center for Materials and Structures using a two-stage light gas gun. In these experiments, cylindrical Lexan projectiles were fired at A36 steel target plates at velocities ranging from 4.5 to 6.0 km/s. Experiments were designed to produce a front-side impact crater and a permanent bulging deformation on the back surface of the target without inducing complete perforation of the plates. Free-surface velocities on the back surface of the target plate were measured using the newly developed Multiplexed Photonic Doppler Velocimetry (MPDV) system. To simulate such experiments, a Lagrangian-based smoothed particle hydrodynamics (SPH) method is typically used to avoid the problems associated with mesh instability. Despite their intrinsic capability for simulation of violent impacts, particle methods have a few drawbacks that may considerably affect their accuracy and performance, including lack of interpolation completeness, tensile instability, and the existence of spurious pressures. Moreover, computational time is a strong limitation that often necessitates the use of reduced 2D axisymmetric models. To address these shortcomings, the IMPETUS Afea Solver® implements a newly developed SPH formulation that solves the problems of spurious pressures and tensile instability. The algorithm takes full advantage of GPU technology to parallelize the computation and opens the door to running large 3D models (20,000,000 particles). The combination of accurate algorithms and drastically reduced computation time now makes it possible to run a high-fidelity hypervelocity impact model.


Jurnal INKOM ◽  
2014 ◽  
Vol 8 (1) ◽  
pp. 29 ◽  
Author(s):  
Arnida Lailatul Latifah ◽  
Adi Nurhadiyatna

This paper proposes parallel algorithms for the precipitation input of flood modelling, in particular the spatial rainfall distribution. As an important input to flood modelling, the spatial distribution of rainfall is always needed as a precondition of the model. In this paper two interpolation methods, inverse distance weighting (IDW) and ordinary kriging (OK), are discussed. Both are developed as parallel algorithms in order to reduce the computational time. To measure the computational efficiency, the performance of the parallel algorithms is compared to that of the serial algorithms for both methods. Findings indicate that: (1) the computation time of the OK algorithm is up to 23% longer than that of IDW; (2) the computation time of the OK and IDW algorithms increases linearly with the number of cells/points; (3) the computation time of the parallel algorithms for both methods decays exponentially with the number of processors, with the parallel IDW algorithm giving a decay factor of 0.52 and OK giving 0.53; (4) the parallel algorithms achieve near-ideal speed-up.
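IDW itself is compact enough to sketch, and the sketch shows why the method parallelizes so well: each target point is interpolated independently, so processors can take disjoint slices of the target grid. The station coordinates and rainfall values below are illustrative, not the paper's data.

```python
# Vectorized IDW sketch (illustrative, not the paper's implementation).
import numpy as np

def idw(targets, stations, values, power=2.0, eps=1e-12):
    # distances between every target point and every rain gauge;
    # rows are independent, so this loop-free form splits trivially
    # across processors by slicing `targets`
    d = np.linalg.norm(targets[:, None, :] - stations[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w @ values) / w.sum(axis=1)

stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
rain_mm = np.array([10.0, 20.0, 30.0])

targets = np.array([[0.5, 0.5], [0.0, 0.0]])
est = idw(targets, stations, rain_mm)
print(est)  # estimates stay within the range of observed values
```

A target coinciding with a gauge recovers (to numerical precision) that gauge's value, since its weight dominates; a target equidistant from all gauges gets their plain average. Ordinary kriging, by contrast, requires solving a linear system per target, which is consistent with the roughly 23% longer computation time reported above.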

