Improve Performance by a Fuzzy-Based Dynamic Replication Algorithm in Grid, Cloud, and Fog

2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Mahsa Beigrezaei ◽  
Abolfazel Toroghi Haghighat ◽  
Seyedeh Leili Mirtaheri

The efficiency of data-intensive applications in distributed environments such as Cloud, Fog, and Grid is directly related to data access delay. Delays caused by queue workload and by failures can decrease data access efficiency, and data replication is a critical technique for reducing access latency. In this paper, a fuzzy-based replication algorithm is proposed that avoids these delays by considering a comprehensive set of significant parameters to improve performance. The proposed algorithm selects the appropriate replica using a hierarchical method, taking into account the transmission cost, queue delay, and failure probability. It determines the best place for replication using a fuzzy inference system that considers the queue workload, number of future accesses, last access time, and communication capacity, and it uses the Simple Exponential Smoothing method to predict future file popularity. The proposed algorithm is evaluated with the OptorSim simulator under different access patterns. The results show that the algorithm improves performance in terms of the number of replications, the percentage of storage filled, and the mean job execution time. The proposed algorithm is most efficient under random access patterns, especially random Zipf access patterns, and it also performs well when the number of jobs and the file size are increased.
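The popularity predictor named in the abstract, Simple Exponential Smoothing, can be sketched in a few lines. The access counts and the smoothing factor alpha below are illustrative, not values from the paper:

```python
def ses_forecast(history, alpha=0.5):
    """One-step-ahead Simple Exponential Smoothing forecast.

    Each new observation is blended with the running forecast:
    forecast = alpha * observed + (1 - alpha) * forecast.
    """
    forecast = history[0]          # initialize with the first observation
    for observed in history[1:]:
        forecast = alpha * observed + (1 - alpha) * forecast
    return forecast

# Hypothetical per-epoch access counts for one file
accesses = [4, 6, 5, 9, 8]
print(ses_forecast(accesses))      # prints 7.5
```

A larger alpha weights recent accesses more heavily, which matters for files whose popularity shifts quickly.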

2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Bakare K. Ayeni ◽  
Junaidu B. Sahalu ◽  
Kolawole R. Adeyanju

With improvements in computing and technological advancement, web-based applications are now ubiquitous on the Internet. However, these web applications are increasingly prone to vulnerabilities, which have led to theft of confidential information, data loss, and denial of data access during information transmission. Cross-site scripting (XSS) is a form of web security attack that involves the injection of malicious code into web applications from untrusted sources. Interestingly, recent research on web application security centres on attack prevention and mechanisms for secure coding; recent methods for detecting those attacks not only generate high false positives but also give little consideration to the users, who are often the victims of malicious attacks. Motivated by this problem, this paper describes an “intelligent” tool for detecting cross-site scripting flaws in web applications. It describes a method, implemented using fuzzy logic, for detecting classic XSS weaknesses and provides experimental results. Our detection framework recorded a 15% improvement in accuracy and a 0.01% reduction in the false-positive rate, which is considerably lower than that found in the existing work by Koli et al. Our approach also serves as a decision-making tool for the users.
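A toy scorer in the spirit of the abstract's fuzzy-logic XSS detector: crisp inputs (counts of suspicious constructs in a payload) are fuzzified into a membership degree for a "malicious" set. The token list and membership breakpoints are invented for illustration; the paper's actual rule base is not reproduced here:

```python
# Hypothetical indicators of script injection (not the paper's feature set)
SUSPICIOUS = ("<script", "onerror=", "javascript:", "eval(", "document.cookie")

def fuzzify(count, low=1, high=4):
    """Triangular-style membership: 0 at or below `low`, 1 at or above `high`,
    linear in between."""
    if count <= low:
        return 0.0
    if count >= high:
        return 1.0
    return (count - low) / (high - low)

def xss_risk(payload):
    """Degree of membership of `payload` in the fuzzy 'malicious' set."""
    hits = sum(payload.lower().count(tok) for tok in SUSPICIOUS)
    return fuzzify(hits)

print(xss_risk('<a href="javascript:eval(document.cookie)">x</a>'))  # ~0.67
print(xss_risk("hello world"))                                       # 0.0
```

A real fuzzy system would combine several such memberships through a rule base and defuzzify the result; this sketch shows only the fuzzification step.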


2020 ◽  
Vol 2020 (1) ◽  
pp. 216-234
Author(s):  
Anrin Chakraborti ◽  
Radu Sion

Oblivious RAMs (ORAMs) allow a client to access data from an untrusted storage device without revealing the access patterns. Typically, the ORAM adversary can observe both read and write accesses. Write-only ORAMs target a more practical, multi-snapshot adversary that monitors only client writes – typical for plausible deniability and censorship-resilient systems. This allows write-only ORAMs to achieve significantly better asymptotic performance. However, these apparent gains do not materialize in real deployments, primarily due to the random data placement strategies used to break correlations between logical and physical namespaces, a required property for write access privacy. Random access performs poorly on both rotational disks and SSDs (often increasing wear significantly and interfering with wear-leveling mechanisms).

In this work, we introduce SqORAM, a new locality-preserving write-only ORAM that preserves write access privacy without requiring random data access. Data blocks close to each other in the logical domain land in close proximity on the physical media. Importantly, SqORAM maintains this data locality property over time, significantly increasing read throughput.

A full Linux kernel-level implementation of SqORAM is 100x faster than non-locality-preserving solutions for standard workloads and is 60-100% faster than the state of the art for typical file system workloads.
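The locality problem the abstract describes can be made concrete with a toy model (ours, not SqORAM's actual construction): under a random logical-to-physical mapping, logically adjacent blocks land far apart on the device, so a sequential logical read becomes random I/O:

```python
import random

def avg_physical_gap(mapping, n):
    """Mean physical distance between logically adjacent blocks, where
    mapping[i] is the physical slot of logical block i."""
    return sum(abs(mapping[i + 1] - mapping[i]) for i in range(n - 1)) / (n - 1)

n = 1000
identity = list(range(n))            # perfectly locality-preserving layout
shuffled = identity[:]
random.seed(0)
random.shuffle(shuffled)             # oblivious random placement

print(avg_physical_gap(identity, n))       # 1.0: neighbors stay adjacent
print(avg_physical_gap(shuffled, n))       # hundreds of slots apart on average
```

The expected gap under a random permutation grows linearly with the device size (roughly n/3), which is why random placement interacts so badly with rotational seek times and SSD wear-leveling.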


2021 ◽  
Vol 13 (7) ◽  
pp. 1
Author(s):  
Farnaz Ghashami ◽  
Kamyar Kamyar

A model of an Adaptive Neuro-Fuzzy Inference System (ANFIS) trained with an evolutionary algorithm, namely a Genetic Algorithm (GA), is presented in this paper. The model is tested on the NASDAQ stock market indices, which are among the most widely followed indices in the United States. Empirical results show that by determining the parameters of ANFIS (premise and consequent parameters) using a GA, we can improve performance in terms of Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and the coefficient of determination (R-squared), compared with using ANFIS alone.
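The core idea, using a GA to tune model parameters by minimizing MSE, can be sketched on a toy problem. Here a two-parameter linear model stands in for the ANFIS premise/consequent parameters that would form the chromosome in the paper's setting; population size, mutation scale, and all data are illustrative:

```python
import random

random.seed(1)
xs = [0, 1, 2, 3, 4]
ys = [2 * x + 1 for x in xs]               # toy target: y = 2x + 1

def mse(params):
    """Mean squared error of the model y = a*x + b on the toy data."""
    a, b = params
    return sum((a * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def ga(pop_size=30, generations=200, sigma=0.3):
    """Truncation-selection GA: keep the better half, breed the rest."""
    pop = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=mse)
        elite = pop[: pop_size // 2]       # elitism: best half survives
        children = []
        for _ in range(pop_size - len(elite)):
            p, q = random.sample(elite, 2)
            # average crossover plus Gaussian mutation
            child = [(gp + gq) / 2 + random.gauss(0, sigma)
                     for gp, gq in zip(p, q)]
            children.append(child)
        pop = elite + children
    return min(pop, key=mse)

best = ga()
print(best, mse(best))                     # parameters near (2, 1), small MSE
```

In the paper's setup the fitness function would instead evaluate the full ANFIS on NASDAQ training data, but the select-crossover-mutate loop is the same.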


2014 ◽  
Vol 2014 ◽  
pp. 1-10 ◽  
Author(s):  
Priyanka Vashisht ◽  
Rajesh Kumar ◽  
Anju Sharma

In data grids, scientific and business applications produce huge volumes of data that must be transferred among the distributed, heterogeneous nodes of the grid. Data replication provides a solution for managing data files efficiently in large grids: it enhances data availability, which reduces the overall access time of a file. In this paper, an agent-based algorithm for data grids, named EDRA, is proposed and implemented. EDRA performs dynamic replication over a hierarchical structure and selects the best replica based on scheduling parameters: bandwidth, load gauge, and the computing capacity of the node. Scheduling in the data grid helps reduce data access time, and the load is distributed evenly across nodes by considering these parameters. EDRA is implemented in the OptorSim data grid simulator using the European Data Grid CMS test bed topology. The simulation results compare BHR, LRU, No Replication, and EDRA, and show the efficiency of the EDRA algorithm in terms of mean job execution time, network usage, and node storage usage.
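Best-replica selection over the scheduling parameters the abstract lists (bandwidth, load, computing capacity) reduces to scoring candidate nodes and picking the maximum. The weights and node data below are invented; the paper's actual decision function is not reproduced here:

```python
def replica_score(node, w_bw=0.5, w_load=0.3, w_cpu=0.2):
    """Hypothetical weighted score: higher bandwidth and computing capacity
    are better, higher load is worse."""
    return (w_bw * node["bandwidth"]
            - w_load * node["load"]
            + w_cpu * node["capacity"])

# Illustrative candidate replica sites
nodes = [
    {"name": "A", "bandwidth": 100, "load": 0.8, "capacity": 16},
    {"name": "B", "bandwidth": 60,  "load": 0.1, "capacity": 32},
    {"name": "C", "bandwidth": 80,  "load": 0.4, "capacity": 8},
]

best = max(nodes, key=replica_score)
print(best["name"])   # prints "A"
```

A production scorer would normalize each parameter to a common scale before weighting, so that no single unit (here, raw bandwidth) dominates the decision.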


2017 ◽  
Vol 3 (1) ◽  
pp. 36-48
Author(s):  
Erwan Ahmad Ardiansyah ◽  
Rina Mardiati ◽  
Afaf Fadhil

Electric load forecasting is needed to determine how much electricity should be generated: it helps avoid both an excess load, which causes waste, and a shortfall, which causes an electricity crisis for consumers. Accurate forecasting is therefore required for electricity generation. Soft-computing technology can be used as an alternative method for short-term load prediction; this study uses the Adaptive Neuro-Fuzzy Inference System (ANFIS). The data supporting this research come from APD PLN JAWA BARAT, consisting of monthly peak-load reports for the feeders of the Majalaya substation area from January 2011 to December 2014 as reference data, along with actual data from January to December 2015. The data were then trained using the ANFIS method in MATLAB (version b2010). The ANFIS training results were compared with the actual data and with a regression method: ANFIS versus actual, regression versus actual, and ANFIS versus regression versus actual. The comparison shows that the ANFIS results are closer to the actual data, with an average error of 1.4%, indicating that ANFIS prediction can serve as a reference for future electric load forecasting.
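The comparison the abstract describes comes down to an average percentage error between forecast and actual monthly peak loads. A minimal helper, with invented load figures standing in for the APD PLN data:

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)

# Hypothetical monthly peak loads (MW), not the Majalaya substation data
actual   = [120.0, 125.0, 130.0]
forecast = [118.8, 126.0, 131.3]
print(round(mape(actual, forecast), 2))   # prints 0.93
```

An average error of 1.4%, as reported for ANFIS against the 2015 actuals, corresponds to a MAPE of 1.4 on this scale.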


2009 ◽  
Vol 8 (3) ◽  
pp. 887-897
Author(s):  
Vishal Paika ◽  
Er. Pankaj Bhambri

The face is the feature that distinguishes a person, and facial appearance is vital for human recognition. It has certain features such as the forehead, skin, eyes, ears, nose, cheeks, mouth, lips, and teeth, which help us humans recognize a particular face among millions of faces, even after a large span of time and despite large changes in appearance due to ageing, expression, viewing conditions, and distractions such as disfigurement of the face, scars, a beard, or hair style. A face is not merely a set of facial features but is rather something meaningful in its form. In this paper, a system is designed to recognize faces based on their various facial features. Different edge detection techniques are used to reveal the outlines of the face, eyes, ears, nose, teeth, etc. These features are extracted in terms of distances between important feature points. The feature set obtained is then normalized and fed to artificial neural networks to train them for recognition of facial images.
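The preprocessing step the abstract describes, extracting inter-feature-point distances and normalizing them before the neural network, can be sketched as follows. The landmark names and coordinates are made up for illustration:

```python
import math

def feature_vector(points, pairs):
    """Euclidean distances between chosen landmark pairs, normalized so the
    largest distance is 1, making the vector scale-invariant."""
    dists = [math.dist(points[a], points[b]) for a, b in pairs]
    peak = max(dists)
    return [d / peak for d in dists]

# Hypothetical 2D landmark positions detected from edge maps
landmarks = {"eye_l": (30, 40), "eye_r": (70, 40),
             "nose": (50, 60), "mouth": (50, 80)}
pairs = [("eye_l", "eye_r"), ("eye_l", "nose"), ("nose", "mouth")]

print(feature_vector(landmarks, pairs))   # [1.0, ~0.707, 0.5]
```

Normalizing by the largest distance removes dependence on image resolution and camera distance, which is one reason distance ratios are a common input encoding for neural-network face recognizers.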

