Resource Management Scheme Based on Ubiquitous Data Analysis

2014 ◽  
Vol 2014 ◽  
pp. 1-11
Author(s):  
Heung Ki Lee ◽  
Jaehee Jung ◽  
Gangman Yi

Resource management of main memory and the process handler is critical to enhancing the performance of a web server system. Owing to the transaction delay that affects incoming requests from web clients, web server systems pregenerate several web processes in anticipation of future requests. This approach decreases page generation time, because enough processes are available to handle the incoming requests from web browsers. However, inefficient process management results in low service quality for the web server system, so a proper mechanism for pregenerating processes is required to deal with clients' requests. Unfortunately, it is difficult to predict how many requests a web server system is going to receive: if it builds too many web processes, it wastes a considerable amount of memory space, and performance is reduced. We propose an adaptive web process manager scheme based on web log mining. In the proposed scheme, the number of web processes is controlled through prediction of incoming requests, so that the web process management scheme consumes the fewest possible web transaction resources. In experiments, real web trace data were used to demonstrate the improved performance of the proposed scheme.
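The prediction-driven pool sizing described above can be sketched roughly as follows. This is a minimal illustration, not the paper's actual log-mining predictor: the class name, the moving-average forecast, and the capacity parameters are all assumptions.

```python
import math
from collections import deque

class AdaptiveProcessManager:
    """Sizes a pool of pregenerated web processes from recent request counts.

    Hypothetical sketch: a simple moving average over the last few log
    intervals stands in for the paper's web-log-mining predictor.
    """

    def __init__(self, window=3, per_process_capacity=10,
                 min_procs=1, max_procs=64):
        self.history = deque(maxlen=window)   # recent per-interval counts
        self.per_process_capacity = per_process_capacity
        self.min_procs = min_procs
        self.max_procs = max_procs

    def observe(self, requests_in_interval):
        """Record the request count parsed from the latest log interval."""
        self.history.append(requests_in_interval)

    def target_processes(self):
        """Predict the next interval's load and size the pool to match."""
        if not self.history:
            return self.min_procs
        predicted = sum(self.history) / len(self.history)
        needed = math.ceil(predicted / self.per_process_capacity)
        return max(self.min_procs, min(self.max_procs, needed))
```

A supervisor loop would call `observe` once per log interval and then fork or reap worker processes until the pool matches `target_processes()`.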

2019 ◽  
Vol 2 (3) ◽  
pp. 266
Author(s):  
Nongki Angsar

The growth in web traffic, together with network bandwidth that develops relatively faster than microprocessor technology, means that a single-server platform is no longer sufficient to meet the scalability requirements of web server systems. Multiple-server platforms are the answer, and one known solution is the cluster-based web server system. In this study, a cluster-based web server system was designed with the Never Queue scheduling algorithm, and the distribution of web workload across the system was then tested. The tests were carried out by generating HTTP workloads statically (a fixed HTTP request rate per second) and dynamically (a request rate that rises steadily) from the client to the web server pool, followed by analysis of the packet traffic. Static testing showed that the Never Queue algorithm distributed HTTP requests across the web server pool properly and produced HTTP replies that tended to be stable, averaging 1031.8 replies/s. The TCP connection rate, response times, and errors increased as the generated HTTP request rate rose. The average throughput was 2.983 Mbps.
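The Never Queue policy (as defined for Linux Virtual Server scheduling) can be sketched as follows: if any server is idle, the request is sent there immediately so that it never waits; otherwise the scheduler falls back to shortest expected delay, (active + 1) / weight. The class and method names here are hypothetical, and the study's actual implementation details are not given in the abstract.

```python
class NeverQueueScheduler:
    """Sketch of Never Queue dispatching over a pool of weighted servers."""

    def __init__(self, servers):
        # servers: dict mapping server name -> weight
        self.weights = dict(servers)
        self.active = {name: 0 for name in servers}  # open connections

    def pick(self):
        """Choose a server for the next request and count the connection."""
        # Rule 1: an idle server is used immediately -- requests never queue.
        for name, count in self.active.items():
            if count == 0:
                self.active[name] += 1
                return name
        # Rule 2: otherwise, shortest expected delay (active + 1) / weight.
        name = min(self.active,
                   key=lambda n: (self.active[n] + 1) / self.weights[n])
        self.active[name] += 1
        return name

    def release(self, name):
        """Called when a connection to `name` closes."""
        self.active[name] -= 1
```

With two busy servers of weights 1 and 2, the heavier server wins the tie on expected delay, which matches the intuition of sending overflow load to faster machines.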


2016 ◽  
pp. 607-623
Author(s):  
Hemant Kumar Mehta

This chapter presents a toolkit for the evaluation of resource management algorithms developed for Grid computing. The simulator, named EcoGrid, is devised to support a large number of resources (computing nodes) and processes. Grid simulators generally represent each resource by a thread, which occupies a large amount of space on the thread stack in main memory; EcoGrid instead models each node as an object. Since an object uses far less memory than a thread, EcoGrid is highly scalable compared to state-of-the-art simulators. EcoGrid is dynamically configurable and works with real as well as synthetic workloads, and it is bundled with a synthetic load generator that produces workloads using appropriate statistical distributions.
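The object-per-node idea can be illustrated with a tiny event-driven sketch: each resource is a plain object holding only its state, and a single loop advances simulated time, so no per-node thread stack is allocated. The class names and the greedy dispatch rule below are assumptions for illustration, not EcoGrid's actual design.

```python
class Node:
    """A grid resource modeled as a lightweight object (no per-node thread)."""

    def __init__(self, name, speed):
        self.name = name
        self.speed = speed     # work units processed per time unit
        self.free_at = 0.0     # simulated time at which this node is idle

class ObjectPerNodeSimulator:
    """Minimal single-threaded sketch of object-per-node simulation."""

    def __init__(self, nodes):
        self.nodes = nodes

    def submit(self, jobs):
        """jobs: list of (arrival_time, work) tuples; returns finish times."""
        finish_times = []
        for arrival, work in jobs:
            # Greedy placement: pick the node that finishes the job earliest.
            best = min(self.nodes,
                       key=lambda n: max(n.free_at, arrival) + work / n.speed)
            start = max(best.free_at, arrival)
            best.free_at = start + work / best.speed
            finish_times.append(best.free_at)
        return finish_times
```

Because each node is only an object with two fields, millions of nodes fit in memory where thread-per-node designs would exhaust the thread stack space.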




2019 ◽  
Vol 8 (1) ◽  
pp. 1-5
Author(s):  
Marvin Chandra Wijaya

The performance of web processing needs to increase to keep up with the growth of internet usage; one approach is to use a cache on the web proxy server. This study examines the implementation of a proxy cache replacement algorithm to increase cache hits on the proxy server. The study was conducted by building a clustered (distributed) web server system with eight web server nodes. The system improved latency by 90% and increased throughput by a factor of 5.33.
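The abstract does not say which replacement policy the study implemented, so the sketch below uses plain LRU purely as an illustration of how a proxy cache serves hits locally and evicts on overflow; the class name and hit/miss counters are assumptions.

```python
from collections import OrderedDict

class ProxyCache:
    """LRU-replacement sketch of a web proxy cache (illustrative only)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # url -> cached body, oldest first
        self.hits = 0
        self.misses = 0

    def get(self, url, fetch):
        """Return the body for `url`, consulting the origin on a miss."""
        if url in self.store:
            self.store.move_to_end(url)     # mark as most recently used
            self.hits += 1
            return self.store[url]
        self.misses += 1
        body = fetch(url)                   # fetch from the origin server
        self.store[url] = body
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used
        return body
```

In a clustered deployment such as the eight-node system described, each node would typically run its own cache instance behind a request distributor.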


Author(s):  
Amjad Mahmood ◽  
Taher S.K. Homeed

Object replication is a well-known technique for improving the performance of a distributed Web server system. This paper first presents an algorithm that groups correlated Web objects, those most likely to be requested by a given client in a single session, so that they can be replicated together, preferably on the same server. A centralized object replication algorithm is then proposed to replicate the object groups across a cluster-based Web server system so as to minimize user-perceived latency, subject to certain constraints. Due to the dynamic nature of Web contents and users' access patterns, a distributed object replication algorithm is also proposed, in which each site locally replicates the object groups based on local access patterns. The performance of the proposed algorithms is compared with three well-known algorithms, and the reported results demonstrate the superiority of the proposed algorithms.
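The grouping step can be approximated by counting how often pairs of objects co-occur in the same client session and keeping pairs above a support threshold. This is a simplified stand-in for the paper's grouping algorithm; the function name and threshold are hypothetical.

```python
from collections import Counter
from itertools import combinations

def correlated_groups(sessions, min_support=2):
    """Find pairs of Web objects that co-occur in client sessions.

    sessions: list of sets, each the object ids requested in one session.
    Returns the set of (sorted) pairs whose co-occurrence count reaches
    min_support; such pairs are candidates to replicate on the same server.
    """
    pair_counts = Counter()
    for session in sessions:
        for pair in combinations(sorted(session), 2):
            pair_counts[pair] += 1
    return {pair for pair, count in pair_counts.items()
            if count >= min_support}
```

Frequent pairs can then be merged transitively into larger groups before the replication-placement step decides which servers host each group.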


SIMULATION ◽  
1997 ◽  
Vol 68 (1) ◽  
pp. 23-33 ◽  
Author(s):  
Martin F. Arlitt ◽  
Carey L. Williamson

Given the continued growth of the World-Wide Web, performance of Web servers is becoming increasingly important. File caching can be used to reduce the time that it takes a Web server to respond to client requests, by storing the most popular files in the main memory of the Web server, and by reducing the volume of data that must be transferred between secondary storage and the Web server. In this paper, we use trace-driven simulation to evaluate the effects of various replacement, threshold, and partitioning policies on the performance of a Web server. The workload traces for the simulations come from Web server access logs, from six different Internet Web servers. The traces represent three different orders of magnitude in server activity and two different orders of magnitude in time duration. The results from our simulation study show that frequency-based caching strategies, using a variation of the Least Frequently Used (LFU) replacement policy, perform the best for the Web server workload traces considered. Thresholding policies and cache partitioning policies for Internet Web servers do not appear to be effective.
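The paper's winning strategy is a variation of LFU; the sketch below shows only the plain LFU baseline (evict the file with the lowest reference count), with file sizes ignored even though the paper also studies size thresholds. Class and method names are hypothetical.

```python
from collections import Counter

class LFUFileCache:
    """Plain LFU file cache: on overflow, evict the least-referenced file."""

    def __init__(self, capacity):
        self.capacity = capacity   # maximum number of cached files
        self.files = {}            # name -> cached contents
        self.freq = Counter()      # name -> reference count (kept on eviction)

    def access(self, name, read_file):
        """Serve `name` from cache, loading it from disk on a miss."""
        self.freq[name] += 1
        if name in self.files:
            return self.files[name]              # cache hit
        if len(self.files) >= self.capacity:
            victim = min(self.files, key=lambda f: self.freq[f])
            del self.files[victim]               # evict least frequently used
        self.files[name] = read_file(name)       # miss: read from disk
        return self.files[name]
```

Keeping counts for evicted files (as done here via `freq`) matches the frequency-based spirit of the study: popularity is a property of the workload, not of current cache residency.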


2013 ◽  
Vol 1 (2) ◽  
pp. 28
Author(s):  
Dite Ardian ◽  
Adian Fatchur Rochim ◽  
Eko Didik Widianto

The development of internet technology has led many organizations to expand their website services. Initially, a single web server accessible to everyone through the Internet is used, but when very many users access it, the traffic load on the web server grows accordingly. Optimization of the web server is therefore necessary to cope with the overload it receives when traffic is high. The methodology of this final-project research includes a literature study, system design, and testing of the system; references were drawn from related books as well as several internet sources. The design uses Haproxy and Pound Links for web server load balancing. The research concludes with testing of the network system, so as to create a web server system that is reliable and safe. The result is a web server system that can be accessed by many users simultaneously and rapidly, since the Haproxy and Pound Links load-balancing system set up as the front end improves web server performance, yielding a web server with high performance and high availability.
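A minimal HAProxy front-end configuration in this spirit might look as follows; the backend name, server names, and addresses are hypothetical, since the study's actual configuration is not given in the abstract.

```
frontend web_in
    bind *:80
    default_backend web_pool

backend web_pool
    balance roundrobin
    server web1 192.168.1.11:80 check
    server web2 192.168.1.12:80 check
```

The `check` keyword enables health checking, which is what provides the high availability the abstract describes: a failed backend is removed from rotation until it recovers.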


2012 ◽  
Vol 1 (2) ◽  
pp. 15-27
Author(s):  
Harikesh Singh ◽  
Shishir Kumar

Increasing network traffic creates congestion when bulk transfers of data occur. Performance evaluation and high availability of servers are important factors in resolving this problem using cluster-based systems. Several low-cost servers using a load-sharing cluster system are connected to high-speed networks and apply load-balancing techniques between servers, offering high computing power and high availability. A distributed web server system can provide the scalability and flexibility needed to cope with growing client demands. The efficiency of a replicated web server system depends on how incoming requests are distributed among the replicas. Distributed Web-server architectures schedule client requests among the multiple server nodes in a user-transparent way, which affects scalability and availability. The aim of this paper is the development of load balancing techniques for distributed Web-server systems.
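One widely used way to distribute requests among replicas in proportion to their capacity is smooth weighted round-robin; this is offered purely as an illustration of the class of techniques the paper surveys, not as its specific proposal.

```python
def weighted_round_robin(servers, n_requests):
    """Dispatch n_requests across replicas in proportion to their weights.

    servers: dict mapping server name -> positive integer weight.
    Returns the dispatch order as a list of server names, interleaved
    smoothly rather than in long runs per server.
    """
    order = []
    current = {name: 0 for name in servers}  # running priority per server
    total = sum(servers.values())
    for _ in range(n_requests):
        for name in current:
            current[name] += servers[name]   # credit each server its weight
        best = max(current, key=current.get) # highest priority wins
        current[best] -= total               # charge the winner
        order.append(best)
    return order
```

With weights {a: 2, b: 1}, three requests are dispatched a, b, a, so each server receives load proportional to its weight while consecutive requests still alternate.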


2016 ◽  
Vol 1 (1) ◽  
pp. 001
Author(s):  
Harry Setya Hadi

String searching is a common computational process because text is the main form of data storage. Boyer-Moore, which matches the string from right to left, is considered the most efficient method in practice, and as a string-matching algorithm scanning in that direction it has the best theoretical results. A web server connected to a computer network is accessed by multiple users in different places, with both good and bad aims. Every activity performed by a user is stored in the web server logs. The log reports contained on the web server can help a web server administrator search for web request errors. A web server log is a record of a web site's activity containing data associated with the IP address, access time, pages opened, activities, and access methods. From the large amount of data contained in the logs, useful information can be extracted.
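A right-to-left scan with a bad-character shift table can be sketched as below. Note this is the simplified variant usually called Boyer-Moore-Horspool rather than the full Boyer-Moore algorithm (which also uses a good-suffix rule); the function name is hypothetical.

```python
def boyer_moore_search(text, pattern):
    """Find the first occurrence of pattern in text, or return -1.

    Uses the bad-character rule only (Horspool simplification): compare
    the pattern right to left, and on a mismatch shift by the distance
    from the text character under the pattern's end to that character's
    last occurrence in the pattern.
    """
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return -1 if m > n else 0
    # Shift table over all pattern characters except the last one.
    shift = {c: m - i - 1 for i, c in enumerate(pattern[:-1])}
    i = m - 1                       # index in text aligned with pattern end
    while i < n:
        j, k = m - 1, i
        while j >= 0 and text[k] == pattern[j]:
            j -= 1
            k -= 1
        if j < 0:
            return k + 1            # full match found
        i += shift.get(text[i], m)  # bad-character shift (default: whole m)
    return -1
```

An administrator scanning access logs for a status string, e.g. searching each log line for `" 404 "`, would call this once per line; the right-to-left scan lets most alignments be rejected after a single character comparison.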

