Metaheuristic to Optimize Computational Convergence in Convection-Diffusion and Driven-Cavity Problems

Mathematics ◽  
2021 ◽  
Vol 9 (7) ◽  
pp. 748
Author(s):  
Juana Enríquez-Urbano ◽  
Marco Antonio Cruz-Chávez ◽  
Rafael Rivera-López ◽  
Martín H. Cruz-Rosales ◽  
Yainier Labrada-Nueva ◽  
...  

This work presents an optimization proposal to improve the computational convergence time in convection-diffusion and driven-cavity problems by applying a simulated annealing (SA) metaheuristic, obtaining relaxation factor (RF) values that accelerate problem convergence during numerical execution. These relaxation factors are tested in numerical models to verify that they shorten computational convergence. The experimental results show that the relaxation factors obtained by the SA algorithm improve the convergence time of the problem regardless of the user's experience in proposing the initial, possibly low-quality, RF values.
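The tuning loop described in the abstract can be sketched as a generic simulated annealing search over a single relaxation factor. The cost function below is an assumed stand-in (the real objective would be the solver's measured convergence time), and all parameter values are illustrative:

```python
import math
import random

random.seed(0)

# Hypothetical stand-in for the solver: iterations to converge as a
# function of the relaxation factor, with an assumed optimum near 1.7.
def iterations_to_converge(rf):
    return (rf - 1.7) ** 2 * 100 + 10

def simulated_annealing(cost, lo=0.0, hi=2.0, t0=1.0, cooling=0.95, steps=500):
    rf = random.uniform(lo, hi)            # possibly low-quality initial RF
    best, best_cost = rf, cost(rf)
    t = t0
    for _ in range(steps):
        # propose a neighbouring RF, clipped to the admissible range
        cand = min(hi, max(lo, rf + random.gauss(0, 0.1)))
        delta = cost(cand) - cost(rf)
        # accept improvements always, worsenings with Boltzmann probability
        if delta < 0 or random.random() < math.exp(-delta / t):
            rf = cand
            if cost(rf) < best_cost:
                best, best_cost = rf, cost(rf)
        t *= cooling                       # geometric cooling schedule
    return best

rf_opt = simulated_annealing(iterations_to_converge)
```

Because acceptance of worse moves decays with temperature, the search escapes poor initial RF guesses early and settles near the cost minimum late, which is what makes the result insensitive to the user's starting proposal.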

Author(s):  
Poonam Rani ◽  
MPS Bhatia ◽  
Devendra K Tayal

The paper presents an intelligent approach for comparing social networks through a cone model using the fuzzy k-medoids clustering method. It makes use of a geometrical three-dimensional conical model that represents user-experience views, and it uses both the static and the dynamic parameters of social networks. We propose an algorithm that investigates which social network is more fruitful. For the experimental results, the proposed method is applied to data collected through Google Forms from students at different universities, who were asked to rate their experience of using different social networks on different scales.
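A minimal sketch of the fuzzy k-medoids step, on one-dimensional toy ratings rather than the paper's three-dimensional cone coordinates; the data, the fixed initialization, and the fuzzifier value are illustrative assumptions:

```python
# Toy 1-D ratings with two clear groups, standing in for survey scores.
data = [1.0, 1.2, 0.9, 1.1, 8.0, 8.2, 7.9, 8.1]

def fuzzy_k_medoids(points, k=2, m=2.0, iters=20):
    # deterministic init for the sketch; real implementations randomize
    medoids = [points[0], points[-1]]
    for _ in range(iters):
        # fuzzy membership of each point in each medoid's cluster
        u = []
        for p in points:
            d = [abs(p - c) + 1e-9 for c in medoids]
            inv = [(1.0 / dj) ** (2.0 / (m - 1.0)) for dj in d]
            s = sum(inv)
            u.append([v / s for v in inv])
        # each medoid becomes the point minimizing membership-weighted cost
        medoids = [
            min(points, key=lambda c, j=j: sum(
                u[i][j] ** m * abs(points[i] - c)
                for i in range(len(points))))
            for j in range(k)
        ]
    return sorted(medoids)

meds = fuzzy_k_medoids(data)
```

Unlike k-means, the cluster representatives stay restricted to actual data points, which is what makes medoids usable for survey ratings measured on discrete scales.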


2011 ◽  
Vol 274 ◽  
pp. 101-111 ◽  
Author(s):  
Norelislam Elhami ◽  
Rachid Ellaia ◽  
Mhamed Itmi

This paper presents a new methodology for reliability-based particle swarm optimization with simulated annealing. The reliability analysis procedure couples traditional and modified first- and second-order reliability methods for rectangular plates modelled by an Assumed Modes approach. Both reliability methods are applicable to implicit limit state functions defined through numerical models, such as those based on the Assumed Modes method. In the traditional reliability approaches, the FORM and SORM algorithms use a Newton-Raphson procedure to estimate the design point. In the modified approaches, the algorithms are based on heuristic optimization methods, namely Particle Swarm Optimization and Simulated Annealing. Numerical applications in static, dynamic, and stability problems illustrate the applicability and effectiveness of the proposed methodology. These examples consist of rectangular plates subjected to in-plane external loads, whose material and geometrical parameters are considered as random variables. The results show that the predicted reliability levels are accurate when simultaneously evaluating various implicit limit state functions with respect to static, dynamic, and stability criteria.
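The Newton-Raphson design-point search that FORM relies on can be sketched with the standard HL-RF iteration. The limit state below is an assumed linear toy example in standard normal space, not the paper's implicit plate model:

```python
import math

# Assumed toy limit state g(u) = 3 - u1 - u2 in standard normal space;
# its exact reliability index is 3 / sqrt(2).
def g(u):
    return 3.0 - u[0] - u[1]

def grad_g(u):
    return [-1.0, -1.0]

def form_design_point(u=(0.0, 0.0), iters=20, tol=1e-10):
    u = list(u)
    for _ in range(iters):
        gr = grad_g(u)
        norm2 = sum(c * c for c in gr)
        # HL-RF update: project onto the linearized limit state surface
        scale = (sum(gi * ui for gi, ui in zip(gr, u)) - g(u)) / norm2
        u_new = [scale * gi for gi in gr]
        if max(abs(a - b) for a, b in zip(u_new, u)) < tol:
            u = u_new
            break
        u = u_new
    beta = math.sqrt(sum(c * c for c in u))  # reliability index ||u*||
    return u, beta

u_star, beta = form_design_point()
```

For implicit limit states the gradient must come from finite differences of the numerical model, which is exactly where gradient-free heuristics such as PSO and SA become attractive replacements.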


2017 ◽  
Vol 10 (2) ◽  
pp. 477-508 ◽  
Author(s):  
C. F. R. Santos ◽  
R. C. S. S. Alvarenga ◽  
J. C. L. Ribeiro ◽  
L. O. Castro ◽  
R. M. Silva ◽  
...  

Abstract: This work developed experimental tests and numerical models able to represent the mechanical behavior of prisms made of ordinary and high-strength concrete blocks. Experimental tests of prisms were performed, and a detailed micro-modeling strategy was adopted for the numerical analysis. In this modeling technique, each material (block and mortar) was represented by its own mechanical properties. The validation of the numerical models was based on the experimental results. It was found that the obtained numerical values of compressive strength and modulus of elasticity differ by 5% from the experimentally observed values. Moreover, the mechanisms responsible for the rupture of the prisms were evaluated and compared to the behaviors observed in the tests and those described in the literature. From the experimental results, it is possible to conclude that the numerical models are able to represent both the mechanical properties and the mechanisms responsible for failure.


2007 ◽  
Vol 129 (4) ◽  
pp. 677-689 ◽  
Author(s):  
Lapo F. Mori ◽  
Neil Krishnan ◽  
Jian Cao ◽  
Horacio D. Espinosa

In this paper, the results of experiments conducted to investigate the friction coefficient existing at a brass-steel interface are presented. The research discussed here is the second of a two-part study on the size effects in friction conditions that exist during microextrusion. In the regime of dimensions on the order of a few hundred microns, these size effects tend to play a significant role in affecting the characteristics of microforming processes. Experimental results presented in the previous companion paper have already shown that the friction conditions obtained from comparisons of experimental results and numerical models exhibit a size effect related to the overall dimensions of the extruded part, assuming the material response is homogeneous. Another interesting observation was made when extrusion experiments were performed to produce submillimeter-sized pins. It was noted that pins fabricated from large-grain-size material (211 μm) showed a tendency to curve, whereas those fabricated from billets having a small grain size (32 μm) did not. In order to further investigate these phenomena, it was necessary to segregate the individual influences of material response and interfacial behavior on the microextrusion process; therefore, a series of frictional experiments was conducted using a stored-energy Kolsky bar. The advantage of the Kolsky bar method is that it provides a direct measurement of the existing interfacial conditions and does not depend on material deformation behavior like other methods of measuring friction. The method also provides both static and dynamic coefficients of friction, and these values could prove relevant for microextrusion tests performed at high strain rates. Tests were conducted using brass samples of a small grain size (32 μm) and a large grain size (211 μm) at low contact pressure (22 MPa) and high contact pressure (250 MPa) to see whether there was any change in the friction conditions due to these parameters. Another parameter that was varied was the area of contact. Static and dynamic coefficients of friction are reported for all cases. The main conclusion of these experiments was that the friction coefficient did not show any significant dependence on material grain size, interface pressure, or area of contact.


Author(s):  
Rapeepan Promyoo ◽  
Hazim El-Mounayri ◽  
Kody Varahramyan

Atomic force microscopy (AFM) has been widely used for nanomachining and the fabrication of micro/nanodevices. This paper describes the development and validation of computational models for AFM-based nanomachining. The Molecular Dynamics (MD) technique is used to model and simulate mechanical indentation at the nanoscale for different types of materials, including gold, copper, aluminum, and silicon. The simulation allows for the prediction of indentation forces at the interface between an indenter and a substrate. The effects of the tip material on the machined surface are investigated. The material deformation and indentation geometry are extracted from the final locations of the atoms displaced by the rigid tool. In addition to the modeling, an AFM was used to conduct actual indentation at the nanoscale and to provide measurements to which the MD simulation predictions can be compared. The MD simulation results show that the surface and subsurface deformation found in the cases of gold, copper, and aluminum have the same pattern; however, aluminum shows more surface deformation than the other materials. Two indenter tip materials, diamond and silicon, were used in the model. More surface and subsurface deformation can be observed in the case of nanoindentation with the diamond tip. The indentation forces at various depths of indentation were obtained; it can be concluded that the indentation force increases as the depth of indentation increases. Due to limitations on computational time, the quantitative values of the indentation force obtained from the MD simulation are not comparable to the experimental results; however, the increasing trends of the indentation force are the same for both the simulation and the experimental results.


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Bing Tang ◽  
Linyao Kang ◽  
Li Zhang ◽  
Feiyan Guo ◽  
Haiwu He

Nonnegative matrix factorization (NMF) has been introduced as an efficient way to reduce the complexity of data compression, with the capability of extracting highly interpretable parts from data sets, and it has been applied to various fields, such as recommendation, image analysis, and text clustering. However, as the size of the matrix increases, the processing speed of nonnegative matrix factorization becomes very slow. To solve this problem, this paper proposes a GPU-based parallel algorithm for NMF on the Spark platform, which makes full use of the advantages of the in-memory computation mode and GPU acceleration. The new GPU-accelerated NMF on the Spark platform is evaluated in a 4-node heterogeneous Spark cluster on Google Compute Engine, with each node configured with an NVIDIA K80 CUDA device; experimental results indicate that it is competitive in computational time against existing solutions over a variety of matrix orders. Furthermore, a GPU-accelerated NMF-based parallel collaborative filtering (CF) algorithm is also proposed, utilizing the advantages of NMF for data dimensionality reduction and feature extraction, as well as the multicore parallel computing mode of CUDA. Using real MovieLens data sets, experimental results show that the parallelization of NMF-based collaborative filtering on the Spark platform effectively outperforms traditional user-based and item-based CF, with a higher processing speed and higher recommendation accuracy.
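The dense updates that the paper offloads to CUDA kernels are, in their classic serial form, the Lee-Seung multiplicative updates. A minimal single-machine sketch (NumPy on CPU, with illustrative matrix sizes, standing in for the GPU/Spark implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Lee–Seung multiplicative updates for V ≈ W @ H with W, H >= 0.
# Each update is a pair of dense matrix products, which is exactly the
# work a GPU kernel would parallelize.
def nmf(V, k=2, iters=200, eps=1e-9):
    n, m = V.shape
    W = rng.random((n, k)) + eps
    H = rng.random((k, m)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H, stays nonnegative
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W, stays nonnegative
    return W, H

V = rng.random((6, 5))          # toy ratings-like matrix
W, H = nmf(V)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

In the CF setting, the rows of W act as user feature vectors and the columns of H as item feature vectors, so W @ H fills in the unobserved ratings.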


2020 ◽  
Vol 4 (1) ◽  
pp. 35-46
Author(s):  
Winarno (Universitas Singaperbangsa Karawang) ◽  
A. A. N. Perwira Redi (Universitas Pertamina)

Abstract: The two-echelon location routing problem (2E-LRP) considers the distribution problem in a two-level/echelon transport system. The first echelon considers trips from a main depot to a set of selected satellites. The second echelon considers routes serving customers from the selected satellites. This study proposes two metaheuristic algorithms to solve the 2E-LRP: Simulated Annealing (SA) and Large Neighborhood Search (LNS). The neighborhood/operator moves of both algorithms are modified specifically for the 2E-LRP. The proposed SA uses swap, insert, and reverse operators, while the proposed LNS uses five destructive operators (random route removal, worst removal, route removal, related node removal, and not-related node removal) and two constructive operators (greedy insertion and modified greedy insertion). A previously known dataset is used to test the performance of both algorithms. Numerical experiments show that SA performs better than LNS: the average objective function values for SA and LNS are 176.125 and 181.478, respectively, and their average computational times are 119.02 s and 352.17 s, respectively.
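The three SA neighbourhood moves named in the abstract can be sketched on a plain permutation standing in for a single route; the real 2E-LRP solution additionally encodes satellite selection and the two route levels:

```python
import random

random.seed(42)

# Swap: exchange two customers in the route.
def swap(route):
    r = route[:]
    i, j = random.sample(range(len(r)), 2)
    r[i], r[j] = r[j], r[i]
    return r

# Insert: remove one customer and reinsert it at another position.
def insert(route):
    r = route[:]
    i, j = random.sample(range(len(r)), 2)
    r.insert(j, r.pop(i))
    return r

# Reverse: invert the visiting order of a route segment (2-opt style).
def reverse(route):
    r = route[:]
    i, j = sorted(random.sample(range(len(r)), 2))
    r[i:j + 1] = reversed(r[i:j + 1])
    return r

route = list(range(8))                       # toy route of 8 customers
neighbour = random.choice([swap, insert, reverse])(route)
```

Each move returns a new permutation of the same customers, so feasibility checks in the full problem reduce to re-evaluating capacities and costs rather than repairing the tour.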


2013 ◽  
Vol 750 ◽  
pp. 64-67
Author(s):  
Wen Yu Zhang ◽  
Dong Ying Ju ◽  
Yao Yao ◽  
Hong Yang Zhao ◽  
Xiao Dong Hu ◽  
...  

In this paper, the control system established for a new twin-roll strip caster developed by the authors, and its control algorithm, are presented. The roll-gap control strategy of the twin-roll strip caster, based on a feedforward-feedback system, is illustrated. The experimental results show that the control convergence time, stability, and accuracy are at a higher level than with the traditional control strategy.
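The feedforward-feedback structure can be sketched on an assumed first-order gap model: a feedforward term inverts the nominal plant gain, and a proportional feedback term corrects the residual error. The plant and gains below are illustrative, not the authors' identified caster dynamics:

```python
# Roll-gap control sketch: feedforward (inverse static gain) plus
# proportional feedback on a first-order plant, integrated with Euler.
def simulate(setpoint=2.0, steps=200, dt=0.01, tau=0.2, k_plant=1.0, kp=4.0):
    gap = 0.0
    for _ in range(steps):
        u_ff = setpoint / k_plant            # feedforward: invert nominal gain
        u_fb = kp * (setpoint - gap)         # feedback: correct model error
        u = u_ff + u_fb
        gap += dt * (k_plant * u - gap) / tau  # first-order plant response
    return gap

final_gap = simulate()
```

The feedforward term does the bulk of the actuation instantly, while feedback shrinks the closed-loop time constant from tau to tau / (1 + k_plant * kp), which is the mechanism behind the faster convergence reported in the abstract.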


Author(s):  
Anahita Emami ◽  
Seyedmeysam Khaleghian ◽  
Chuang Su ◽  
Saied Taheri

A good understanding of friction in the tire-road interaction is of critical importance for vehicle dynamic control systems. Most of the friction models proposed to describe the friction coefficient between tire treads and road surfaces have been developed from empirical or semi-empirical relations that cannot include many of the effective parameters involved in tire-road interactions. Therefore, these models are useful only in conditions similar to the experiments, and they do not accurately represent tire-road traction in numerical tire models. However, in the last two decades, a few theoretical models have been developed to calculate the tire-road friction coefficient by considering both the viscoelastic behavior of tire tread compounds and the multi-scale interactions between tire treads and rough road surfaces. In this article, a physics-based model proposed by Persson has been investigated and used to develop computer algorithms for the calculation of the sliding friction coefficient between a tire tread compound and a rough substrate. The viscoelastic behavior of the tread compound and the surface profile of the rough counter-surface are the inputs of this physics-based theoretical model. The numerical results of the model have been compared with experimental results obtained from a dynamic friction tester designed and built at the Center for Tire Research (CenTire). Good agreement between the numerical results of the theoretical model and the experimental results has been found in the intermediate range of slip velocities when the effects of adhesion and shearing in the real contact area are considered in addition to the hysteresis friction due to internal energy dissipation in the tire tread compound.


Author(s):  
Steven D. Andreen ◽  
Brad G. Davis

Abstract: Many analytical and numerical models exist that can describe the effect of single projectile impacts on steel targets. These models are not adequate for the evaluation of live-fire shoot house containment systems, which are subjected to repeated impact loading from small-caliber projectiles over the lifetime of the structure. Models assuming perfectly rigid projectiles over-predict penetration depths, models assuming rigid targets cannot predict any penetration, and hydrodynamic models are best suited to high-velocity impacts well above the ranges of conventional ordnance. Development of sufficient analytical or numerical tools using traditional techniques would be intractable, empirically based and unique to a given scenario, require unique material properties that are not commonly available, or require significant computational effort. Due to the limited amount of empirical data on multiple-impact failure, classical reliability methods are not suitable for assessing the probability of containment system perforation. Using existing experimental results of .223 caliber ammunition against AR500 steel panels with 2-inch ballistic rubber, a protective system commonly found in these facilities, the cumulative effects of multiple projectiles were quantified to estimate the number of impacts required to perforate the target material. Impact points were simulated from normal distributions of the x and y coordinates on a Cartesian coordinate plane. The impact resistance of the steel was also simulated, from a triangular distribution, to account for the variability of the experimental results. Monte Carlo simulation was then used to estimate the expected number of impacts to cause failure at a single point on the target. Using this collective model, it was possible to determine that the distribution of the number of rounds to cause target failure approached a normal distribution. The results indicated that the mean number of impacts at failure was 11,800 with a standard deviation of 800 impacts. Finally, applying the allowable risk level for structural failure from the JCSS probabilistic model code to the simulated normal distribution, it was determined that the safe number of impacts was approximately 7996. Decision makers can utilize the safe number of impacts to inform training guidance for the future use of facilities and to develop effective inspection requirements. This model can also be adapted to evaluate similar training facilities and to assess how other small-caliber projectile impacts would affect live-fire shoot house containment systems, providing a useful tool for the design of future facilities and the assessment of existing ones for use with ammunition that did not exist during their design.
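The simulation procedure described above (normally distributed impact coordinates, triangularly distributed resistance, damage accumulated until one location fails) can be sketched as follows. All numeric parameters here are illustrative assumptions, not the study's calibrated values:

```python
import random

random.seed(7)

# Monte Carlo sketch of multi-impact perforation: impact points drawn
# from normal x/y distributions, per-trial resistance from a triangular
# distribution, hits accumulated on a coarse grid until one cell fails.
def rounds_to_perforation(cell=0.5, sigma=3.0,
                          capacity_range=(8.0, 12.0, 10.0)):
    # (low, high, mode) for the triangular resistance distribution
    capacity = random.triangular(*capacity_range)
    hits = {}
    n = 0
    while True:
        n += 1
        x, y = random.gauss(0, sigma), random.gauss(0, sigma)
        key = (int(x // cell), int(y // cell))   # grid cell of the impact
        hits[key] = hits.get(key, 0) + 1
        if hits[key] >= capacity:                # cumulative damage exceeded
            return n

samples = [rounds_to_perforation() for _ in range(50)]
mean_rounds = sum(samples) / len(samples)
```

Averaging many such trials yields the empirical distribution of rounds-to-failure, from which a mean, standard deviation, and an allowable-risk quantile can be read off as in the study.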

