An Adaptive Multiscaling Approach for Reducing Computation Time in Simulations of Articulated Biopolymers

2019 ◽  
Vol 14 (5) ◽  
Author(s):  
Ashley Guy ◽  
Alan Bowling

Microscale dynamic simulations can require significant computational resources to generate desired time evolutions. Microscale phenomena are often driven by even smaller scale dynamics, requiring multiscale system definitions to combine these effects. At the smallest scale, large active forces lead to large resultant accelerations, requiring small integration time steps to fully capture the motion and dictating the integration time for the entire model. Multiscale modeling techniques aim to reduce this computational cost, often by separating the system into subsystems or coarse graining to simplify calculations. A multiscale method has been previously shown to greatly reduce the time required to simulate systems in the continuum regime while generating equivalent time histories. This method identifies a portion of the active and dissipative forces that cancel and contribute little to the overall motion. The forces are then scaled to eliminate these noncontributing portions. This work extends that method to include an adaptive scaling method for forces that have large changes in magnitude across the time history. Results show that the adaptive formulation generates time histories similar to those of the unscaled truth model. Computation time reduction is consistent with the existing method.
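The core idea of scaling away a canceling force pair can be illustrated with a toy model. The sketch below (all parameters assumed, and the scaling law simplified rather than taken from the paper) uses a 1-D overdamped bead whose active force and drag nearly cancel near steady state; scaling the net force caps the accelerations while leaving the steady velocity f_a/c intact, so a much larger explicit time step stays stable:

```python
def adaptive_scale(f_net, m, a_cap=100.0, s_max=0.1):
    # Shrink forces so accelerations stay below a_cap; the s_max ceiling keeps
    # the scaled dynamics stable at the chosen time step (both values assumed).
    if f_net == 0.0:
        return s_max
    return min(s_max, a_cap * m / abs(f_net))

def simulate(dt, t_end):
    m, c, f_a = 1e-4, 1.0, 2.0       # tiny mass makes the unscaled system stiff
    x = v = t = 0.0
    while t < t_end:
        f_net = f_a - c * v          # active force and drag nearly cancel near steady state
        s = adaptive_scale(f_net, m)
        v += s * f_net / m * dt      # scaling leaves the steady velocity f_a/c intact
        x += v * dt
        t += dt
    return v

v_final = simulate(dt=1e-3, t_end=0.5)
print(round(v_final, 6))             # steady velocity f_a / c = 2.0
```

Unscaled, the same explicit Euler step is unstable for any dt above roughly 2m/c = 2e-4 s; the adaptive scale factor lets the toy run at five times that step and still settle at the correct steady velocity.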

Author(s):  
Ashley Guy ◽  
Alan Bowling

Molecular dynamics simulations require significant computational resources to generate modest time evolutions. Large active forces lead to large accelerations, requiring subfemtosecond integration time steps to capture the resultant high-frequency vibrations. It is often necessary to combine these fast dynamics with larger scale phenomena, creating a multiscale problem. A multiscale method has been previously shown to greatly reduce the time required to simulate systems in the continuum regime. A new multiscale formulation is proposed to extend the continuum formulation to the atomistic scale. A canonical ensemble model is defined using a modified Nosé–Hoover thermostat to maintain the constant-temperature constraint. Results show a significant reduction in computation time mediated by larger allowable integration time steps.
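A minimal sketch of a Nosé–Hoover-thermostatted system (a single harmonic oscillator with assumed unit parameters, integrated by semi-implicit Euler for clarity; the paper's modified thermostat is not reproduced here) shows the essential feedback: the thermostat variable ξ acts as a friction that grows while the kinetic energy exceeds its equipartition value:

```python
# Nose-Hoover thermostat on one harmonic oscillator (all parameters assumed).
m, k, kT, Q = 1.0, 1.0, 1.0, 1.0   # mass, spring, target kT, thermostat inertia
x, v, xi = 1.0, 0.0, 0.0           # position, velocity, thermostat variable
dt, steps = 0.002, 50_000
kin = []
for _ in range(steps):
    v += (-k * x / m - xi * v) * dt      # thermostat enters as friction -xi*v
    x += v * dt
    xi += (m * v * v - kT) / Q * dt      # xi grows while m*v^2 exceeds kT
    kin.append(m * v * v)
avg_mv2 = sum(kin) / len(kin)
print(round(avg_mv2, 2))                 # fluctuates around the target kT = 1
```

The time-averaged m v² hovers near the target kT; a single harmonic oscillator is a well-known non-ergodic corner case for this thermostat, so the average should be read qualitatively.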


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
S. Agnes Shifani ◽  
M. S. Godwin Premi

Strain measurement with contact techniques has drawbacks: it is less accurate, and locating each subpixel takes considerable computation time. Thus, a faster noncontact Digital Image Correlation (DIC) mechanism is used alongside the traditional techniques to measure strain. The Newton-Raphson (NR) technique is an accepted mechanism for accurately tracking intensity displacements. The main issue with the DIC mechanism is its computational cost. In this paper, an interpolation technique is used to achieve a high precision rate and faster image correlation; it reduces the computation time required to find the matched pixel and handles the repeated correlation process effectively. The proposed mechanism therefore provides better efficiency along with a reduced number of iterations required to find the match. The number of iterations can be reduced using the Sum of Square of Subset Intensity Gradients (SSSIG) method. The projected scheme is evaluated on different images using various parameters. Finally, the outcome indicates that the projected mechanism takes only a few milliseconds to find the best matching location, whereas the prevailing techniques require 16 seconds for the same operation with the same step size. This demonstrates the effectiveness of the proposed scheme.
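The subpixel search can be illustrated with a simplified 1-D stand-in (synthetic signals, and parabolic peak interpolation in place of the full NR iteration): a coarse integer-shift SSD search followed by a three-point parabola fit recovers a subpixel displacement:

```python
import numpy as np

def subpixel_shift(ref, cur, max_shift=5):
    # Integer-pixel SSD search, then parabolic interpolation of the cost
    # around the minimum (a simplified stand-in for NR subpixel refinement).
    n = len(ref)
    costs = []
    for s in range(-max_shift, max_shift + 1):
        d = ref[max_shift:n - max_shift] - cur[max_shift + s:n - max_shift + s]
        costs.append(float(np.dot(d, d)))              # SSD at integer shift s
    i = int(np.argmin(costs))
    c0, c1, c2 = costs[i - 1], costs[i], costs[i + 1]  # three points around the minimum
    frac = 0.5 * (c0 - c2) / (c0 - 2.0 * c1 + c2)      # parabola vertex offset
    return (i - max_shift) + frac

x = np.arange(100, dtype=float)
ref = np.sin(0.3 * x)
cur = np.sin(0.3 * (x - 1.6))      # the same signal displaced by 1.6 samples
est = subpixel_shift(ref, cur)
print(round(est, 2))               # close to the true shift of 1.6
```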


2021 ◽  
Vol 7 ◽  
Author(s):  
P. Hong ◽  
H. P. Hong

Time history analysis is used to estimate the peak responses of structures subjected to stationary and nonstationary winds. The time histories of fluctuating wind processes at multiple points can be simulated with the spectral representation method for given target auto- and cross-power spectral density (PSD) functions. As the number of processes of interest increases, the computation time for the simulation increases drastically. For stationary homogeneous or nonhomogeneous wind fields, this problem can be overcome by using the frequency-wavenumber PSD function to simulate stochastic propagating waves or fields. In the present study, a technique to simulate amplitude-modulated and frequency-modulated nonstationary and nonhomogeneous stochastic propagating wind fields is presented. The technique relies on representing the nonstationary wind velocity by amplitude modulating a process that is time transformed from a stationary process, and it is based on established relations between the PSD functions of the nonstationary and the stationary wind velocity. Simple-to-implement equations for simulating one-dimensional line wind velocity fields and two-dimensional nonstationary and nonhomogeneous wind velocity fields are presented. The use of the developed technique and its adequacy are illustrated through numerical examples.
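The underlying spectral representation step can be sketched for the simplest case (one point, stationary process, a toy PSD; the paper's amplitude modulation and time transform are omitted): a sum of cosines with independent random phases, whose amplitudes are set by the target one-sided PSD:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_wind(psd, t, w_max=10.0, n_freq=400):
    # Basic spectral representation: X(t) = sum_k sqrt(2 S(w_k) dw) cos(w_k t + phi_k)
    w = (np.arange(n_freq) + 0.5) * (w_max / n_freq)   # midpoint frequency grid
    dw = w_max / n_freq
    amp = np.sqrt(2.0 * psd(w) * dw)                   # component amplitudes from the PSD
    phi = rng.uniform(0.0, 2.0 * np.pi, n_freq)        # independent random phases
    return (amp[:, None] * np.cos(w[:, None] * t[None, :] + phi[:, None])).sum(axis=0)

psd = lambda w: 1.0 / (1.0 + w ** 2)                   # toy one-sided target PSD
t = np.linspace(0.0, 600.0, 12_000)
u = simulate_wind(psd, t)
var_u = float(np.var(u))
print(round(var_u, 2))   # near the PSD integral over the band, arctan(10) ~ 1.47
```

The sample variance of the generated record approaches the integral of the PSD over the simulated band, which is the standard check on a spectral representation simulator.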


Electronics ◽  
2021 ◽  
Vol 10 (13) ◽  
pp. 1511
Author(s):  
Taylor Simons ◽  
Dah-Jye Lee

There has been a recent surge in publications related to binarized neural networks (BNNs), which use binary values to represent both the weights and activations in deep neural networks (DNNs). Due to the bitwise nature of BNNs, there have been many efforts to implement BNNs on ASICs and FPGAs. While BNNs are excellent candidates for these kinds of resource-limited systems, most implementations still require very large FPGAs or CPU-FPGA co-processing systems. Our work focuses on reducing the computational cost of BNNs even further, making them more efficient to implement on FPGAs. We target embedded visual inspection tasks, like quality inspection sorting on manufactured parts and agricultural produce sorting. We propose a new binarized convolutional layer, called the neural jet features layer, that learns well-known classic computer vision kernels that are efficient to calculate as a group. We show that on visual inspection tasks, neural jet features perform comparably to standard BNN convolutional layers while using less computational resources. We also show that neural jet features tend to be more stable than BNN convolution layers when training small models.
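The arithmetic that makes BNNs attractive on FPGAs can be shown directly (a single dot product; the neural jet features layer itself is not reproduced here): with weights and activations constrained to {−1, +1}, a dot product collapses to XNOR plus popcount, which needs no multipliers at all:

```python
import numpy as np

rng = np.random.default_rng(1)
a = np.sign(rng.standard_normal(64))        # binarized activations in {-1, +1}
w = np.sign(rng.standard_normal(64))        # binarized weights in {-1, +1}

dot = int(a @ w)                            # ordinary +/-1 dot product

a_bits = (a > 0)                            # encode +1 -> 1, -1 -> 0
w_bits = (w > 0)
matches = int(np.sum(a_bits == w_bits))     # XNOR followed by popcount
xnor_dot = 2 * matches - len(a)             # same value, integer arithmetic only

print(dot == xnor_dot)                      # True
```

Each matching bit pair contributes +1 and each mismatch −1, so the dot product equals 2·(popcount of the XNOR output) minus the vector length.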


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Daiji Ichishima ◽  
Yuya Matsumura

Large scale computation by the molecular dynamics (MD) method is often challenging or even impractical due to its computational cost, despite its wide application in a variety of fields. Although recent advances in parallel computing and the introduction of coarse-graining methods have enabled large scale calculations, macroscopic analyses are still not realizable. Here, we present renormalized molecular dynamics (RMD), a renormalization group of MD in thermal equilibrium derived using the Migdal–Kadanoff approximation. The RMD method improves computational efficiency drastically while retaining the advantages of MD. The computational efficiency is improved by a factor of $$2^{n(D+1)}$$ over conventional MD, where D is the spatial dimension and n is the number of applied renormalization transforms. We verify RMD with two simulations: melting of an aluminum slab and collision of aluminum spheres. Both problems show that the expectation values of physical quantities are in good agreement after renormalization, while the computation time is reduced as expected. To observe the behavior of RMD near the critical point, the critical exponent of the Lennard-Jones potential is extracted by calculating the specific heat on the mesoscale. The critical exponent is obtained as $$\nu = 0.63 \pm 0.01$$. In addition, the renormalization group of dissipative particle dynamics (DPD) is derived. Renormalized DPD is equivalent to RMD in isothermal systems under the condition that the Deborah number $$De \ll 1$$.
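For concreteness, the quoted speedup factor $$2^{n(D+1)}$$ works out as follows for a three-dimensional system:

```python
# Speedup factor 2**(n*(D+1)) for D spatial dimensions and n renormalization
# transforms, tabulated for a 3-D system.
D = 3
factors = {n: 2 ** (n * (D + 1)) for n in (1, 2, 3)}
print(factors)   # {1: 16, 2: 256, 3: 4096}
```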


2014 ◽  
Vol 2014 ◽  
pp. 1-16 ◽  
Author(s):  
Juan-Ignacio Latorre-Biel ◽  
Emilio Jiménez-Macías ◽  
Mercedes Pérez de la Parte ◽  
Julio Blanco-Fernández ◽  
Eduardo Martínez-Cámara

Artificial intelligence methodologies, as the core of discrete control and decision support systems, have been extensively applied in the industrial production sector. The resulting tools produce excellent results in certain cases; however, the NP-hard nature of many discrete control or decision-making problems in the manufacturing area may require unaffordable computational resources, given the limited time available to obtain a solution. To improve the efficiency of a control methodology for discrete systems based on simulation-based optimization and a Petri net (PN) model of the real discrete event dynamic system (DEDS), this paper presents a strategy in which a transformation applied to the model removes redundant information, yielding a smaller model that contains the same useful information. As a result, faster discrete optimizations can be implemented. The methodology is based on a formalism belonging to the PN paradigm for describing DEDS, the disjunctive colored PN. Furthermore, the metaheuristic of genetic algorithms is applied to search the solution space for the best solutions. To illustrate the proposal, its performance is compared with the classic approach on a case study, where it obtains the optimal solution faster.
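The genetic algorithm layer can be sketched generically (a toy bitstring fitness stands in for the simulation-based objective evaluated on the reduced Petri net model; all parameters are assumed):

```python
import random

random.seed(0)
L, POP, GENS = 32, 40, 80
fitness = lambda g: sum(g)       # toy objective: maximize the number of ones

def pick():
    # Tournament selection of size 3 from the current population.
    return max(random.sample(pop, 3), key=fitness)

pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
for _ in range(GENS):
    nxt = []
    while len(nxt) < POP:
        a, b = pick(), pick()
        cut = random.randrange(1, L)                                # one-point crossover
        child = a[:cut] + b[cut:]
        child = [bit ^ (random.random() < 1 / L) for bit in child]  # bit-flip mutation
        nxt.append(child)
    pop = nxt
best = max(map(fitness, pop))
print(best)   # climbs toward the optimum of 32
```

In the paper's setting, evaluating `fitness` would mean simulating the (reduced) PN model, which is exactly why shrinking the model speeds up the whole search.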


Geophysics ◽  
2018 ◽  
Vol 83 (2) ◽  
pp. V99-V113 ◽  
Author(s):  
Zhong-Xiao Li ◽  
Zhen-Chun Li

After multiple prediction, adaptive multiple subtraction is essential for the success of multiple removal. The 3D blind separation of convolved mixtures (3D BSCM) method, which is effective in conducting adaptive multiple subtraction, needs to solve an optimization problem containing L1-norm minimization constraints on primaries by the iterative reweighted least-squares (IRLS) algorithm. The 3D BSCM method can better separate primaries and multiples than the 1D/2D BSCM method and the method with energy minimization constraints on primaries. However, the 3D BSCM method has a high computational cost because the IRLS algorithm achieves nonquadratic optimization by solving an LS optimization problem in each iteration. A faster 3D BSCM method is therefore desirable. To improve the adaptability of field data processing, the fast iterative shrinkage thresholding algorithm (FISTA) is introduced into the 3D BSCM method. The proximity operator of FISTA solves the L1-norm minimization problem efficiently. We demonstrate that our FISTA-based 3D BSCM method achieves accuracy in estimating primaries similar to that of the reference IRLS-based 3D BSCM method, while reducing computation time by approximately 60% in the synthetic and field data examples.
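The two FISTA ingredients named here, the L1 proximity operator (soft thresholding) and the momentum step, can be sketched on a generic LASSO problem (random synthetic data; the 3D BSCM mixing operators are not reproduced):

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximity operator of the L1 norm: shrink toward zero by tau.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def fista(A, b, lam, n_iter=200):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = z = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        g = A.T @ (A @ z - b)              # gradient of the smooth least-squares part
        x_new = soft_threshold(z - g / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10); x_true[[2, 7]] = [1.5, -2.0]   # sparse ground truth
b = A @ x_true
x_hat = fista(A, b, lam=0.1)
print(np.round(x_hat, 1))                  # recovers the two nonzero entries
```

The momentum step is what upgrades plain ISTA's O(1/k) rate to FISTA's O(1/k²), which is the source of the reported computation-time reduction.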


1995 ◽  
Vol 384 ◽  
Author(s):  
J. B. Restorff ◽  
M. Wun-Fogle ◽  
S. F. Cheng ◽  
K. B. Hathaway

ABSTRACT We have observed time-dependent magnetic switching in spin-valve sandwich structures of Cu/Co/Cu/Fe films grown on silicon and Kapton substrates and Permalloy/Co/Cu/Co films grown on NiO or NiO/CoO coated Si substrates. The giant magnetoresistance (MR) values ranged from 1 to 3 percent at room temperature. The films were grown by DC magnetron sputter deposition. Measurements were made of the time required for the MR to stabilize to about 1 part in 10⁴ after the applied field was incremented. This time depends almost linearly on the amplitude of the time-dependent MR change, with a slope (time/ΔMR) of 20 000 to 30 000 s. Some samples took as long as 70 s to stabilize. The time-dependent effects may be important for devices operating in these regions of the magnetoresistance curve. In addition, measurements were made of the time history of the MR value for a period of 75 s following a step change in the field from saturation. We observed that the time-dependent behavior of the MR values in both experiments produced an excellent fit to a function of the form ΔMR(t) = α + β ln(t), where α and β are constants. This time dependence was consistent with the behavior of the magnetic aftereffect.
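The reported fit is a linear least-squares problem in disguise: regressing ΔMR against ln(t) recovers the two constants. A sketch with synthetic data and assumed values of α and β:

```python
import numpy as np

t = np.linspace(1.0, 75.0, 200)            # seconds after the field step
alpha_true, beta_true = 0.5, 0.12          # assumed illustrative constants
dmr = alpha_true + beta_true * np.log(t)   # noise-free synthetic aftereffect

# dMR(t) = alpha + beta*ln(t) is linear in [1, ln(t)], so ordinary least
# squares recovers both constants directly.
X = np.column_stack([np.ones_like(t), np.log(t)])
(alpha, beta), *_ = np.linalg.lstsq(X, dmr, rcond=None)
print(round(float(alpha), 3), round(float(beta), 3))  # 0.5 0.12
```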


Author(s):  
O. Mathieu ◽  
C. R. Mulvihill ◽  
E. L. Petersen ◽  
Y. Zhang ◽  
H. J. Curran

Methane and ethane are the two main components of natural gas, typically constituting more than 95% of it. In this study, a mixture of 90% CH₄/10% C₂H₆ diluted in 99% Ar was studied at fuel-lean (equivalence ratio = 0.5) conditions, at pressures near 1, 4, and 10 atm. Using laser absorption diagnostics, the time histories of CO and H₂O were recorded between 1400 and 1800 K. Water is a final combustion product, and its formation is a good marker of the completion of the combustion process. Carbon monoxide is an intermediate combustion species, a good marker of incomplete or inefficient combustion, and a regulated pollutant for the gas turbine industry. Measurements of species time histories such as these are important for validating and assessing chemical kinetics models beyond ignition delay times and laminar flame speeds. The time-history profiles for these two molecules were compared with a state-of-the-art detailed kinetics mechanism as well as with the well-established GRI 3.0 mechanism. Results show that the H₂O profile is accurately reproduced by both models. However, discrepancies are observed for the CO profiles. Under the conditions of this study, the CO profiles typically increase rapidly after an induction time, reach a maximum, and then decrease. This maximum CO mole fraction is often largely over-predicted by the models, whereas the depletion rate of CO past this peak is often over-estimated at pressures above 1 atm.

