Fast Optimal Replica Placement with Exhaustive Search Using Dynamically Reconfigurable Processor

2011 ◽  
Vol 2011 ◽  
pp. 1-11
Author(s):  
Hidetoshi Takeshita ◽  
Sho Shimizu ◽  
Hiroyuki Ishikawa ◽  
Akifumi Watanabe ◽  
Yutaka Arakawa ◽  
...  

This paper proposes a new replica placement algorithm that expands the exhaustive search limit within reasonable calculation time. It combines a new type of parallel data-flow processor with an architecture tuned for fast calculation. The replica placement problem is to find a replica-server set satisfying service constraints in a content delivery network (CDN). It is derived from the set cover problem, which is known to be NP-hard. Exhaustive search for optimal replica placement is impractical in large-scale networks because the calculation time grows with the number of candidate combinations. To reduce calculation time, heuristic algorithms have been proposed, but no heuristic algorithm is guaranteed to find the optimal solution. The proposed algorithm suits parallel processing and pipeline execution and is implemented on DAPDNA-2, a dynamically reconfigurable processor. Experiments show that the proposed algorithm expands the exhaustive search limit by a factor of 18.8 compared to the conventional algorithm's search limit running on a von Neumann-type processor.
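As a rough illustration of the search the paper accelerates, the sketch below enumerates candidate replica-server sets in increasing size and returns the first set that covers all clients, which is optimal in cardinality. The coverage model and all names are illustrative assumptions, not the paper's implementation (which runs in parallel on the DAPDNA-2).

```python
# Minimal sketch of exhaustive replica placement as set cover,
# assuming each candidate site covers a known set of clients.
# Names and the coverage model are illustrative, not from the paper.
from itertools import combinations

def optimal_placement(sites, clients):
    """sites: dict site -> set of clients it can serve within constraints.
    Returns the smallest site set covering all clients, or None."""
    for k in range(1, len(sites) + 1):
        for combo in combinations(sites, k):
            covered = set().union(*(sites[s] for s in combo))
            if clients <= covered:
                return combo  # first hit at size k is optimal in cardinality
    return None

sites = {"A": {1, 2}, "B": {2, 3}, "C": {3, 4}, "D": {1, 4}}
print(optimal_placement(sites, {1, 2, 3, 4}))  # e.g. ('A', 'C')
```

The inner loop over size-k subsets is exactly the combinatorial blow-up the abstract refers to; the paper's contribution is pushing this limit outward via parallel, pipelined hardware rather than changing the search itself.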

2020 ◽  
Author(s):  
Anusha Ampavathi ◽  
Vijaya Saradhi T

Big data approaches are broadly helpful in the healthcare and biomedical sectors for predicting disease. For trivial symptoms, the difficulty is meeting a doctor in the hospital at any time; thus, big data provides essential information about diseases on the basis of a patient's symptoms. For many medical organizations, disease prediction is important for making the best feasible health care decisions. Conversely, the conventional medical care model offers structured input, which requires more accurate and consistent prediction. This paper develops multi-disease prediction using an improved deep learning concept. Datasets pertaining to "Diabetes, Hepatitis, lung cancer, liver tumor, heart disease, Parkinson's disease, and Alzheimer's disease" are gathered from the benchmark UCI repository for conducting the experiment. The proposed model involves three phases: (a) data normalization, (b) weighted normalized feature extraction, and (c) prediction. Initially, each dataset is normalized in order to bring every attribute into a common range. Further, weighted feature extraction is performed, in which each attribute value is multiplied by a weight function to enlarge meaningful deviations. Here, the weight function is optimized using a combination of two meta-heuristic algorithms, termed the Jaya Algorithm-based Multi-Verse Optimization (JA-MVO) algorithm. The optimally extracted features are fed to hybrid deep learning algorithms, namely a "Deep Belief Network (DBN) and Recurrent Neural Network (RNN)". As a modification to the hybrid deep learning architecture, the weights of both the DBN and the RNN are optimized using the same hybrid optimization algorithm. Further, comparative evaluation of the proposed prediction against existing models certifies its effectiveness through various performance measures.
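For concreteness, here is a minimal sketch of what the first two phases might look like, assuming min-max normalization and a multiplicative per-attribute weight; the function names and placeholder weights are illustrative assumptions, and in the paper the weights would be produced by the JA-MVO optimizer rather than set by hand.

```python
# Sketch of normalization plus weighted feature extraction, assuming
# min-max scaling and one multiplicative weight per attribute.
import numpy as np

def min_max_normalize(X):
    """Scale each attribute (column) into [0, 1]."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / np.where(hi > lo, hi - lo, 1.0)  # guard constant columns

def weighted_features(X, w):
    """Multiply each normalized attribute by its weight."""
    return min_max_normalize(X) * w

X = np.array([[1.0, 200.0], [3.0, 400.0], [5.0, 300.0]])
w = np.array([0.8, 1.6])  # placeholder weights; JA-MVO would optimize these
print(weighted_features(X, w))
```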


Electronics ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 423
Author(s):  
Márk Szalay ◽  
Péter Mátray ◽  
László Toka

The stateless cloud-native design improves the elasticity and reliability of applications running in the cloud. The design decouples the life-cycle of application states from that of application instances; states are written to and read from cloud databases and are deployed close to the application code to ensure low latency bounds on state access. However, the scalability of applications brings the well-known limitations of the distributed databases in which the states are stored. In this paper, we propose a full-fledged state layer that supports the stateless cloud application design. To minimize the inter-host communication caused by state externalization, we propose, on the one hand, a system design jointly with a data placement algorithm that places functions' states across the hosts of a data center. On the other hand, we design a dynamic replication module that decides the proper number of copies for each state to strike a sweet spot between short state-access time and low network traffic. We evaluate the proposed methods across realistic scenarios. We show that our solution yields state-access delays close to optimal and ensures fast replica placement decisions in large-scale settings.
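The trade-off the replication module navigates can be illustrated with a toy cost model (not the paper's algorithm): more copies reduce remote-read delay, but each write must be synchronized to every copy. Every name and both cost terms below are assumptions made for illustration only.

```python
# Toy model of the replication trade-off: pick the copy count that
# minimizes (remote-read penalty) + (write-synchronization traffic).
# Rates, costs, and the locality proxy are illustrative assumptions.
def replica_count(read_rate, write_rate, max_copies, sync_cost=1.0, miss_cost=1.0):
    """Return the copy count minimizing a toy cost model."""
    def cost(k):
        local_hit = k / max_copies                    # crude locality proxy
        miss_delay = read_rate * (1.0 - local_hit) * miss_cost
        sync_traffic = write_rate * (k - 1) * sync_cost
        return miss_delay + sync_traffic
    return min(range(1, max_copies + 1), key=cost)

# Read-heavy state: replicate widely. Write-heavy state: keep few copies.
print(replica_count(read_rate=100.0, write_rate=5.0, max_copies=8))   # -> 8
print(replica_count(read_rate=5.0, write_rate=100.0, max_copies=8))   # -> 1
```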


Author(s):  
Rui Qiu ◽  
Yongtu Liang

Currently, unmanned aerial vehicles (UAVs) offer the possibility of comprehensive coverage and multi-dimensional visualization in pipeline monitoring. Encouraged by industry policy, research on UAV path planning for pipeline network inspection has emerged. The difficulties of this issue lie in strict operational requirements, variable flight missions, and the unified optimization of UAV deployment and real-time path planning. Meanwhile, the intricate structure and large scale of the pipeline network further complicate the issue. At present, there is still room to improve the practicality and applicability of the mathematical model and solution strategy. Aiming at this problem, this paper proposes a novel two-stage optimization approach for UAV path planning in pipeline network inspection. The first stage is conventional pre-flight planning, where the requirement for optimality outweighs calculation time. Therefore, a mixed integer linear programming (MILP) model is established and solved by a commercial solver to obtain the optimal number of UAVs, the take-off locations, and the detailed flight paths. The second stage is re-planning during the flight, taking into account frequent pipeline accidents (e.g., leaks and cracks). In this stage, the flight path must be rescheduled promptly to visit the specific hazardous locations, so the requirement for calculation time outweighs optimality, and a genetic algorithm is used to satisfy the timeliness of decision-making. Finally, the proposed method is applied to the UAV inspection of a branched oil and gas transmission pipeline network with 36 nodes, and the results are analyzed in detail in terms of computational performance. In the first stage, compared to manpower inspection, the total cost and time of UAV inspection are decreased by 54% and 56%, respectively. In the second stage, it takes less than 1 minute to obtain a suboptimal solution, verifying the applicability and superiority of the method.
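A compact sketch of the kind of genetic algorithm used in the second stage is shown below, treating the re-planned flight path as a permutation of pipeline nodes to revisit. The operators, rates, and toy distance matrix are illustrative assumptions rather than the paper's exact configuration (which must also respect take-off locations and operational constraints).

```python
# Genetic-algorithm sketch for fast in-flight re-planning: evolve a
# permutation of nodes minimizing total path length. All parameters
# and operators here are illustrative assumptions.
import random

def route_length(route, dist):
    return sum(dist[route[i]][route[i + 1]] for i in range(len(route) - 1))

def ga_replan(nodes, dist, pop=60, gens=200, mut=0.2):
    population = [random.sample(nodes, len(nodes)) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda r: route_length(r, dist))
        survivors = population[: pop // 2]            # elitist selection
        children = []
        while len(children) < pop - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(nodes))
            child = a[:cut] + [n for n in b if n not in a[:cut]]  # order crossover
            if random.random() < mut:                              # swap mutation
                i, j = random.sample(range(len(nodes)), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        population = survivors + children
    return min(population, key=lambda r: route_length(r, dist))

random.seed(0)
nodes = list(range(6))
dist = [[abs(i - j) for j in nodes] for i in nodes]  # toy distances
print(ga_replan(nodes, dist))
```

Because the GA returns a good-enough permutation after a fixed number of generations, its runtime is bounded and predictable, which is what makes it suitable for the sub-minute re-planning requirement described above.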


Author(s):  
Mustafa C. Camur ◽  
Thomas Sharkey ◽  
Chrysafis Vogiatzis

We consider the problem of identifying the induced star with the largest cardinality open neighborhood in a graph. This problem, also known as the star degree centrality (SDC) problem, is shown to be NP-complete. In this work, we first propose a new integer programming (IP) formulation, which has a smaller number of constraints and nonzero coefficients than the existing formulation in the literature. We present classes of networks in which the problem is solvable in polynomial time and offer a new proof of NP-completeness that shows the problem remains NP-complete for both bipartite and split graphs. In addition, we propose a decomposition framework that is suitable for both the existing formulation and ours. We implement several acceleration techniques in this framework, motivated by techniques used in Benders decomposition. We test our approaches on networks generated according to the Barabási–Albert, Erdős–Rényi, and Watts–Strogatz models. Our decomposition approach outperforms solving the IP formulations in most instances in terms of both solution time and quality; this is especially true for larger and denser graphs. We then test the decomposition algorithm on large-scale protein–protein interaction networks, for which SDC is shown to be an important centrality metric. Summary of Contribution: In this study, we first introduce a new integer programming (NIP) formulation for the star degree centrality (SDC) problem, in which the goal is to identify the induced star with the largest open neighborhood. We then show that, although SDC can be solved efficiently on tree graphs, it remains NP-complete on both split and bipartite graphs via a reduction from the set cover problem. In addition, we apply a decomposition algorithm motivated by Benders decomposition, together with several acceleration techniques, to both the NIP formulation and the existing formulation in the literature. Our experimental results indicate that the decomposition implementation on the NIP is the best solution method in terms of both solution time and quality.
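To make the objective concrete, the brute-force sketch below evaluates SDC on a tiny graph: an induced star is a center plus a pairwise non-adjacent subset of its neighbors, and the score is the size of the star's open neighborhood. This enumeration is exponential and serves only as an illustration of the definition; the IP and decomposition methods above are what scale. The graph encoding is an assumption.

```python
# Brute-force SDC for tiny graphs: maximize |N(star)| over all induced
# stars (center v plus a pairwise non-adjacent subset of its neighbors).
from itertools import combinations

def sdc_brute_force(adj):
    """adj: dict node -> set of neighbors. Returns (best star, |N(star)|)."""
    best, best_size = None, -1
    for v in adj:
        nbrs = sorted(adj[v])
        for k in range(len(nbrs) + 1):
            for leaves in combinations(nbrs, k):
                # induced star: no edge may exist between any two leaves
                if any(b in adj[a] for a, b in combinations(leaves, 2)):
                    continue
                star = {v, *leaves}
                open_nbhd = set().union(*(adj[u] for u in star)) - star
                if len(open_nbhd) > best_size:
                    best, best_size = star, len(open_nbhd)
    return best, best_size

adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 4}, 3: {0}, 4: {2, 5}, 5: {4}}
print(sdc_brute_force(adj))  # ({0, 2, 4}, 3): open neighborhood {1, 3, 5}
```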


2017 ◽  
Vol 59 ◽  
pp. 463-494 ◽  
Author(s):  
Shaowei Cai ◽  
Jinkun Lin ◽  
Chuan Luo

The problem of finding a minimum vertex cover (MinVC) in a graph is a well-known NP-hard combinatorial optimization problem of great importance in theory and practice. Due to its NP-hardness, there has been much interest in developing heuristic algorithms for finding a small vertex cover in reasonable time. Previously, heuristic algorithms for MinVC have focused on solving graphs of relatively small size, and they are not suitable for massive graphs because they usually rely on high-complexity heuristics. This paper explores techniques for solving MinVC in very large-scale real-world graphs, including a construction algorithm, a local search algorithm, and a preprocessing algorithm. Both the construction and search algorithms are based on low-complexity heuristics, and we combine them to develop a heuristic algorithm for MinVC called FastVC. Experimental results on a broad range of real-world massive graphs show that our algorithms are very fast and perform better than previous heuristic algorithms for MinVC. We also develop a preprocessing algorithm to simplify graphs for MinVC algorithms. By applying the preprocessing algorithm to local search algorithms, we obtain two efficient MinVC solvers, called NuMVC2+p and FastVC2+p, which show further improvement on the massive graphs.
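In the spirit of FastVC's low-complexity construction phase, the sketch below scans the edge list once and, for each uncovered edge, adds the higher-degree endpoint to the cover. This is a simplified assumption about the heuristic, omitting details such as the redundancy-removal pass in the published algorithm.

```python
# Linear-time construction of a (non-minimum) vertex cover: for each
# uncovered edge, greedily take the endpoint of higher degree.
from collections import defaultdict

def greedy_cover(edges):
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:   # edge still uncovered
            cover.add(u if degree[u] >= degree[v] else v)
    return cover

edges = [(0, 1), (1, 2), (2, 3), (1, 3)]
print(greedy_cover(edges))  # e.g. {1, 2}
```

The point of such a heuristic is its cost profile: one pass over the edges with O(1) work per edge, which is what makes construction feasible on graphs with hundreds of millions of edges before local search takes over.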


Author(s):  
Ghalem Belalem

Data grids have become an interesting and popular domain in the grid community (Foster and Kesselmann, 2004). Grids are generally proposed as solutions for large-scale systems, where data replication is a well-known technique used to reduce access latency and bandwidth consumption and to increase availability. In spite of the advantages of replication, many problems must be solved, such as:

• replica placement, which determines the optimal locations of replicated data in order to reduce storage and data access costs (Xu et al., 2002);
• determining which replica should be accessed, in terms of consistency, when a read or write operation must be executed (Ranganathan and Foster, 2001);
• the degree of replication, which consists in finding a minimal number of replicas without reducing the performance of user applications;
• replica consistency, which concerns keeping a set of replicated data consistent so that a user sees a completely coherent view of all the replicas (Gray et al., 1996).

Our principal aim in this article is to integrate into the consistency management service an approach based on an economic model for resolving conflicts detected in the data grid.
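As a generic illustration of the conflicts such a consistency service must resolve (not the article's economic model), the sketch below uses version vectors: two replica versions conflict exactly when neither vector dominates the other, and that is the point where a resolution policy, economic or otherwise, must step in. All names are illustrative.

```python
# Generic conflict detection between two replica versions using
# version vectors; the resolution policy itself is out of scope here.
def dominates(a, b):
    """True if version vector a has seen every update that b has."""
    keys = set(a) | set(b)
    return all(a.get(k, 0) >= b.get(k, 0) for k in keys)

def classify(a, b):
    if dominates(a, b):
        return "a is newer or equal"
    if dominates(b, a):
        return "b is newer"
    return "conflict: resolution policy required"

# Concurrent updates on sites s1 and s2 produce an unresolvable order:
print(classify({"s1": 2, "s2": 1}, {"s1": 1, "s2": 3}))  # conflict
```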

