Sub-triangle opacity masks for faster ray tracing of transparent objects

Author(s):  
Holger Gruen ◽  
Carsten Benthin ◽  
Sven Woop

We propose a simple, easy-to-integrate approach to accelerate ray tracing of alpha-tested transparent geometry, with a focus on the Microsoft® DirectX® and Vulkan® ray tracing extensions. Pre-computed bit masks are used to quickly identify fully transparent and fully opaque regions of triangles, thereby skipping the more expensive alpha-test operation. These bit masks allow us to skip up to 86% of all transparency tests, yielding up to a 40% speedup in a proof-of-concept, software-only DirectX® implementation.
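The core idea of the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: each triangle is subdivided into a grid of sub-triangles, and two pre-computed bit masks record which sub-triangles are known fully opaque or fully transparent, so the expensive alpha-texture lookup runs only for "mixed" sub-triangles. The grid size, indexing scheme, and function names are assumptions.

```python
N = 4  # assumed sub-triangle grid resolution per barycentric axis

def subtriangle_index(u, v, n=N):
    """Map barycentric hit coordinates (u, v) to a sub-triangle bit index.
    Each grid cell splits into a 'lower' and an 'upper' sub-triangle; the
    mapping is simple rather than dense, so a mask uses up to 2*n*n bits."""
    iu, iv = min(int(u * n), n - 1), min(int(v * n), n - 1)
    fu, fv = u * n - iu, v * n - iv
    upper = 1 if fu + fv > 1.0 else 0   # which half of the cell was hit
    return 2 * (iv * n + iu) + upper

def classify_hit(u, v, opaque_mask, transparent_mask):
    """Use the pre-computed masks to avoid the alpha test where possible."""
    bit = 1 << subtriangle_index(u, v)
    if opaque_mask & bit:
        return "accept"       # known fully opaque: accept hit, no alpha test
    if transparent_mask & bit:
        return "ignore"       # known fully transparent: skip hit entirely
    return "alpha_test"       # mixed region: fall back to the texture lookup
```

Only hits classified as `"alpha_test"` pay for the texture fetch, which is where the reported skip rate of up to 86% of transparency tests would come from.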

Author(s):  
A. G. Jackson ◽  
M. Rowe

Diffraction intensities from intermetallic compounds are, in the kinematic approximation, proportional to the scattering amplitude of the element doing the scattering. More detailed calculations have shown that site symmetry and occupation by various atom species also affect the intensity in a diffracted beam [1]. Hence, by measuring the intensities of beams, or their ratios, the occupancy can be estimated. Measurement of the intensity values also allows structure calculations to be made to determine the spatial distribution of the potentials doing the scattering. Thermal effects are also present as a background contribution. Inelastic effects such as loss or absorption/excitation complicate the intensity behavior, and dynamical theory is required to estimate the intensity value.

The dynamic range of currents in diffracted beams can be 10^4 or 10^5:1. Hence, detecting such information requires a means of collecting the intensity over a signal-to-noise range beyond that obtainable with a single film plate, which has an S/N of about 10^3:1. Although such a collection system is not currently available, a simple system consisting of instrumentation on an existing STEM can be used as a proof of concept; it has an S/N of about 255:1, limited by the 8-bit pixel attributes used in the electronics. Use of 24-bit pixel attributes would easily allow the desired noise range to be attained in the processing instrumentation. The S/N of the scintillator used by the photoelectron sensor is about 10^6:1, well beyond the S/N goal. The trade-off that must be made is acquisition time: a pattern can be obtained in seconds using film plates, compared to 10 to 20 minutes for a pattern acquired using the digital scan. Parallel acquisition would, of course, speed up this process immensely.
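The bit-depth figures quoted above follow from simple arithmetic: an n-bit pixel can distinguish at most 2^n − 1 intensity levels above zero, so its best-case dynamic range is (2^n − 1):1. A quick check of the abstract's numbers:

```python
def pixel_dynamic_range(bits):
    """Best-case S/N (dynamic range) of an n-bit pixel, as (2**n - 1):1."""
    return 2 ** bits - 1

goal = 10 ** 5  # desired 10^5:1 dynamic range of the diffracted-beam currents

print(pixel_dynamic_range(8))           # 255 -> the 255:1 limit quoted for 8-bit
print(pixel_dynamic_range(24))          # 16777215 -> well above 10^5:1
print(pixel_dynamic_range(24) > goal)   # True
```

So 8-bit electronics fall short of even a film plate (~10^3:1), while 24 bits comfortably exceed the 10^5:1 goal, as the abstract states.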


Author(s):  
Christian Rauch ◽  
Thomas Hörmann ◽  
Sebastian Jagsch ◽  
Raimund Almbauer

Research and development engineers have recently paid much attention to performing multi-physics calculations. One way to do this is to couple commercial tools for examining complex systems. Since the proposal of a software architecture for coupling programs in a previous paper, significant changes have led to improved performance for large-scale industrial applications. This architecture is described, and as a proof of concept a simulation is conducted by coupling two commercial solvers. The speed-up of the new system is presented. The simulation results are then compared with measurements of surface temperatures of the exhaust system of an actual sport utility vehicle (SUV), and conclusions are drawn. The proposed architecture is easily adaptable to various programs, as it is implemented in C++ and the changes for a specific code can be restricted to a few classes.


Author(s):  
D. Ye ◽  
L. Veen ◽  
A. Nikishova ◽  
J. Lakhlili ◽  
W. Edeling ◽  
...  

Uncertainty quantification (UQ) is a key component when using computational models that involve uncertainties, e.g. in decision-making scenarios. In this work, we present uncertainty quantification patterns (UQPs) designed to support the analysis of uncertainty in coupled multi-scale and multi-domain applications. UQPs provide the basic building blocks for creating tailored UQ for multiscale models. The UQPs are implemented as generic templates, which can then be customized and aggregated to create a dedicated UQ procedure for multiscale applications. We present the implementation of the UQPs with the multiscale coupling toolkit Multiscale Coupling Library and Environment 3. Potential speed-ups for UQPs are derived as well. As a proof of concept, two examples of multiscale applications using UQPs are presented. This article is part of the theme issue 'Reliability and reproducibility in computational science: implementing verification, validation and uncertainty quantification in silico'.
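The "generic template" idea can be illustrated with a minimal non-intrusive pattern. This sketch is not the MUSCLE3 API; the function names and the Monte Carlo choice are assumptions, showing only how one UQ pattern can wrap an arbitrary black-box model and be reused across applications:

```python
import random
import statistics

def monte_carlo_uqp(model, sampler, n_samples=1000, seed=0):
    """Generic UQ pattern (illustrative): run `model` on `n_samples` draws
    from `sampler` and estimate mean and standard deviation of the output."""
    rng = random.Random(seed)
    outputs = [model(sampler(rng)) for _ in range(n_samples)]
    return statistics.mean(outputs), statistics.stdev(outputs)

# Usage with a toy single-scale model; in a multiscale application the same
# pattern would wrap the coupled simulation instead.
mean, std = monte_carlo_uqp(
    model=lambda x: x * x,               # stand-in for an expensive simulation
    sampler=lambda rng: rng.gauss(0.0, 1.0),  # uncertain input distribution
)
```

Because the pattern only needs a callable and a sampler, it can be customized (different samplers, surrogate models) and aggregated (patterns nested per sub-model) without touching the model code, which is the spirit of the UQPs described above.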


2013 ◽  
Vol 2013 ◽  
pp. 1-12 ◽  
Author(s):  
Darío Guerrero-Fernández ◽  
Juan Falgueras ◽  
M. Gonzalo Claros

Current genomic analyses often require the management and comparison of big data using desktop bioinformatic software that was not developed with multicore distribution in mind. The task-farm SCBI_MAPREDUCE is intended to simplify the trivial parallelisation and distribution of new and legacy software and scripts for biologists who are interested in using computers but are not skilled programmers. In the case of legacy applications, there is no need to modify or rewrite the source code. It can be used on anything from multicore workstations to heterogeneous grids. Tests have demonstrated that the speed-up scales almost linearly and that distribution in small chunks increases it. It is also shown that SCBI_MAPREDUCE takes advantage of shared storage when necessary, is fault-tolerant, allows aborted jobs to be resumed, does not need special hardware or virtual-machine support, and provides the same results as the parallelised legacy software; the same is true for interrupted and relaunched jobs. As a proof of concept, distribution of a compiled version of BLAST+ in the SCBI_DISTRIBUTED_BLAST gem is given, indicating that other BLAST binaries can be used while maintaining the same SCBI_DISTRIBUTED_BLAST code. Therefore, SCBI_MAPREDUCE suits most parallelisation and distribution needs in, for example, gene and genome studies.
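The task-farm-with-small-chunks pattern can be sketched in a few lines. This is not the SCBI_MAPREDUCE Ruby API, only an illustration of the scheme it describes: split the input into small chunks, hand chunks to workers, and reassemble results in order, with the legacy tool wrapped as a plain function so its source is never modified. Threads are used here for a portable single-machine demo; the real system distributes chunks across processes, workstations, or grid nodes.

```python
from concurrent.futures import ThreadPoolExecutor

def chunks(items, size):
    """Split the input into small chunks, the unit of distribution."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def worker(chunk):
    # Stand-in for an unmodified legacy tool invoked once per chunk
    # (e.g. a BLAST+ call on a slice of the query sequences).
    return [x * x for x in chunk]

def task_farm(items, chunk_size=4, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as ex:
        results = list(ex.map(worker, chunks(items, chunk_size)))
    return [y for part in results for y in part]  # reassemble in input order
```

Because `Executor.map` preserves chunk order, the farmed run yields the same results as a serial run of the wrapped tool, mirroring the "same results as the legacy software" property claimed above.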


Geophysics ◽  
2006 ◽  
Vol 71 (3) ◽  
pp. T41-T51 ◽  
Author(s):  
Tao Xu ◽  
Guoming Xu ◽  
Ergen Gao ◽  
Yingchun Li ◽  
Xianyi Jiang ◽  
...  

We propose using a set of blocks to approximate geologically complex media that cannot be well described by layered models. Interfaces between blocks are triangulated to prevent overlaps or gaps often produced by other techniques, such as B-splines, and to speed up the calculation of intersection points between a ray and block interfaces. We also use a smoothing algorithm to make the normal vector of each triangle continuous at the boundary, so that ray tracing can be performed with stability and accuracy. Based on Fermat’s principle, we perturb an initial raypath between two points, generally obtained by shooting, with a segmentally iterative ray-tracing (SIRT) method. Intersection points on a ray are updated in sequence, instead of simultaneously, because the number of new intersection points may be increased or decreased during the iteration process. To improve convergence speed, we update the intersection points by a first-order explicit formula instead of traditional iterative methods. Only transmitted and reflected waves are considered. Numerical tests demonstrate that the combination of block modeling and segmentally iterative ray tracing is effective in implementing kinematic two-point ray tracing in complex 3D media.
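The travel-time perturbation idea can be illustrated on the simplest possible block model: a single flat interface between two constant-velocity layers, with one intersection point x to update. The explicit Newton-type step on dT/dx below is illustrative of minimizing travel time under Fermat's principle, not the paper's exact SIRT update formula:

```python
import math

def travel_time(x, h1, h2, d, v1, v2):
    """Two-segment travel time: source at (0, h1) above the interface,
    receiver at (d, -h2) below it, crossing the interface at (x, 0)."""
    return math.hypot(x, h1) / v1 + math.hypot(d - x, h2) / v2

def refine_intersection(x, h1, h2, d, v1, v2, iters=20):
    """Explicitly update the intersection point x to minimize travel time."""
    for _ in range(iters):
        l1, l2 = math.hypot(x, h1), math.hypot(d - x, h2)
        g = x / (l1 * v1) - (d - x) / (l2 * v2)                  # dT/dx
        h = h1 * h1 / (l1 ** 3 * v1) + h2 * h2 / (l2 ** 3 * v2)  # d2T/dx2 > 0
        x -= g / h                                               # explicit step
    return x

# At the converged point, Snell's law sin(t1)/v1 == sin(t2)/v2 holds.
x = refine_intersection(1.0, h1=1.0, h2=1.0, d=4.0, v1=1.0, v2=2.0)
```

In the full method each intersection point on the raypath is updated in sequence this way, with points added or removed as the ray enters or leaves blocks.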


2009 ◽  
Vol 57 (5) ◽  
pp. 1469-1480 ◽  
Author(s):  
Vittorio Degli-Esposti ◽  
Franco Fuschini ◽  
Enrico M. Vitucci ◽  
Gabriele Falciasecca
