An Adaptive-Grained Consistency Maintenance Scheme for Shared Data on Emergency and Rescue Applications

2013 ◽  
Vol 5 (2) ◽  
pp. 54-71 ◽  
Author(s):  
Jyh-Biau Chang ◽  
Po-Cheng Chen ◽  
Ce-Kuen Shieh ◽  
Jia-Hao Yang ◽  
Sheng-Hung Hsieh

Efficient information sharing is difficult to achieve in emergency and rescue operations because there is no communication infrastructure at the disaster sites. In general, the network condition is relatively reliable within a site but relatively unreliable between sites, and network partitioning may occur between two sites. Although the replication technique used in data grids can improve information availability in emergency and rescue applications, it introduces a data consistency problem between replicas. In this paper, the authors propose a middleware called “Seagull” that transparently manages the data availability and consistency of emergency and rescue applications. Seagull adopts an optimistic replication scheme to provide higher data availability in the inter-site environment, and a pessimistic replication scheme to provide a stronger data consistency guarantee in the intra-site environment. Moreover, it adopts an adaptive consistency granularity strategy that improves the performance of consistency management by providing higher parallelism when false sharing occurs. Lastly, because Seagull manages data consistency transparently, users do not need to modify their source code to run on it.
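
A minimal sketch of the adaptive-granularity idea described above, assuming illustrative structures (`Page`, `record_write`, and the site names are not Seagull's actual API): when distinct nodes write disjoint objects within one consistency unit (false sharing), the unit is split so the objects can be synchronized in parallel.

```python
# Illustrative sketch of adaptive-grained consistency: when two writers
# repeatedly touch disjoint objects inside the same page (false sharing),
# the page is split so each object can be synchronized independently.

class Page:
    def __init__(self, objects):
        self.objects = objects            # object id -> data
        self.writers = {}                 # object id -> set of writer nodes

    def record_write(self, node, obj_id):
        self.writers.setdefault(obj_id, set()).add(node)

    def false_sharing(self):
        # False sharing: different nodes write different objects of one page.
        writer_sets = [w for w in self.writers.values() if w]
        nodes = set().union(*writer_sets) if writer_sets else set()
        return len(writer_sets) > 1 and len(nodes) > 1

def consistency_units(page):
    """Return the granules to synchronize: the whole page normally,
    or one granule per object when false sharing is detected."""
    if page.false_sharing():
        return [{oid: data} for oid, data in page.objects.items()]
    return [page.objects]

page = Page({"victim_list": [], "supply_log": []})
page.record_write("site_A", "victim_list")
page.record_write("site_B", "supply_log")
print(consistency_units(page))  # two independent granules -> higher parallelism
```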

2020 ◽  
Vol 63 (8) ◽  
pp. 1216-1230 ◽  
Author(s):  
Wei Guo ◽  
Sujuan Qin ◽  
Jun Lu ◽  
Fei Gao ◽  
Zhengping Jin ◽  
...  

Abstract For a high level of data availability and reliability, a common strategy for cloud service providers is to rely on replication, i.e. storing several replicas on different servers. To give cloud users a strong guarantee that all the replicas they require are actually stored, many multi-replica integrity auditing schemes have been proposed. However, most existing solutions are not resource economical, since users must create and upload the replicas of their files themselves. A multi-replica solution called Mirror was presented to overcome these problems, but we find that it is vulnerable to a storage saving attack, by which a dishonest provider can considerably reduce storage costs compared to honestly storing all the replicas while still passing any challenge successfully. In addition, we find that Mirror is also subject to a substitution attack and a forgery attack, which pose new security risks for cloud users. To address these problems, we propose some simple yet effective countermeasures and an improved proofs of retrievability and replication scheme, which resists the aforesaid attacks while maintaining the advantages of Mirror, such as economical bandwidth and efficient verification. Experimental results show that our scheme exhibits performance comparable to Mirror's while achieving high security.
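
The storage saving attack is easiest to see against a toy audit protocol. The sketch below is not Mirror's actual construction; it only illustrates, with assumed helper names, why each replica must be stored under a distinct, expensive-to-recompute encoding for a challenge to prove real replication.

```python
# Toy challenge-response audit over distinct replica encodings (not Mirror's
# actual protocol): each replica is stored under a different masking key, so
# a provider keeping only one copy cannot answer challenges for the others.

import hashlib, os, random

def encode_replica(data: bytes, replica_key: bytes) -> bytes:
    # Stand-in for a replica-specific encoding; real schemes use slow,
    # incompressible transformations so re-deriving replicas on the fly
    # costs more than storing them (defeating storage saving attacks).
    return hashlib.sha256(replica_key + data).digest() + data

def prove(stored_replica: bytes, nonce: bytes) -> bytes:
    return hashlib.sha256(nonce + stored_replica).digest()

def verify(data, replica_key, nonce, proof) -> bool:
    # Simplified: a real verifier checks precomputed tags instead of
    # re-deriving the replica from the original data.
    return proof == prove(encode_replica(data, replica_key), nonce)

data = b"user file block"
keys = [os.urandom(16) for _ in range(3)]          # one key per replica
replicas = [encode_replica(data, k) for k in keys]

nonce = os.urandom(16)
i = random.randrange(3)                            # challenge a random replica
assert verify(data, keys[i], nonce, prove(replicas[i], nonce))
```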


Author(s):  
Shinichi Fukushige ◽  
Yuki Matsuyama ◽  
Eisuke Kunii ◽  
Yasushi Umeda

Within the framework of sustainability in the manufacturing industry, product lifecycle design is a key approach for constructing resource circulation systems of industrial products that drastically reduce environmental loads, resource consumption, and waste generation. In such design, designers should consider both a product and its lifecycle from a holistic viewpoint, because the product’s structure, geometry, and other attributes are closely coupled with the characteristics of the lifecycle. Although product lifecycle management (PLM) systems integrate product data across the lifecycle into one data architecture, they do not focus on supporting the lifecycle design process. In other words, PLM does not provide explicit models for designing product lifecycles. This paper proposes an integrated model of a product and its lifecycle, together with a method for managing consistency between the two. For consistency management, three levels of consistency (i.e., topological, geometric, and semantic) are defined. Based on this management scheme, the product lifecycle model allows designers to evaluate the environmental, economic, and other performance of the designed lifecycle using lifecycle simulation.
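
A small sketch of how the three consistency levels could be checked between the two models; the data structures and attribute choices here are assumptions for illustration, not the paper's actual schema.

```python
# Illustrative three-level consistency check between a product model and its
# lifecycle model (the level names follow the paper; the structures are
# assumed for the sketch).

from dataclasses import dataclass, field

@dataclass
class ProductModel:
    components: set = field(default_factory=set)   # component names
    volumes: dict = field(default_factory=dict)    # component -> volume (cm^3)
    materials: dict = field(default_factory=dict)  # component -> material

@dataclass
class LifecycleModel:
    flows: set = field(default_factory=set)        # components with a lifecycle flow
    capacities: dict = field(default_factory=dict) # component -> process capacity (cm^3)
    recyclable: dict = field(default_factory=dict) # material -> recyclable?

def check_consistency(p: ProductModel, lc: LifecycleModel) -> dict:
    return {
        # Topological: every component appears in some lifecycle flow.
        "topological": p.components <= lc.flows,
        # Geometric: each component's geometry fits its process capacity.
        "geometric": all(p.volumes[c] <= lc.capacities.get(c, 0)
                         for c in p.components),
        # Semantic: attributes agree, e.g. a recycling flow needs a
        # recyclable material.
        "semantic": all(lc.recyclable.get(p.materials[c], False)
                        for c in p.components),
    }

p = ProductModel({"case"}, {"case": 120.0}, {"case": "ABS"})
lc = LifecycleModel({"case"}, {"case": 150.0}, {"ABS": True})
print(check_consistency(p, lc))  # all three levels consistent
```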


Author(s):  
Suyud Widiono

A database server, i.e. a Database Management System (DBMS), that relates tables in a database is called a Relational Database Management System (RDBMS). A DBMS/RDBMS is a computer program that provides data services to computers or other computer programs. MariaDB is one RDBMS-type database server (hereinafter referred to as a database server). The database server is in charge of managing and providing data, so the data must always be ready, quickly presented, accurate, and safe; it must not be damaged, let alone lost. One way to provide such data is to install several database servers using the concept of replication in a multiple-server database system. Replication in a database server cluster is a method of installing several database server nodes that copy from each other and distribute data from one node to another, then synchronize the data between nodes to maintain consistency. This study looks for the minimal number of database server nodes that is optimal for providing accurate, fast, and safe data on a MariaDB cluster RDBMS. From the results of the replication tests on the cluster, it can be concluded that with 3 (three) server nodes the nodes always synchronize and keep their data consistent, so 3 (three) is the minimum number of database nodes with the MariaDB RDBMS.
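
The three-node minimum is consistent with ordinary majority-quorum reasoning, which synchronously replicating MariaDB clusters (e.g., Galera-based setups) commonly rely on; the arithmetic below is offered as an illustration of that reasoning, not as the study's own analysis.

```python
# Why three nodes is the practical minimum for a synchronously replicating
# cluster: with majority quorum, a 3-node cluster survives one node failure,
# while a 2-node cluster loses quorum (split-brain risk) as soon as one
# node fails or the nodes are partitioned.

def has_quorum(alive: int, cluster_size: int) -> bool:
    return alive > cluster_size / 2        # strict majority

for size in (2, 3):
    for failed in range(size):
        alive = size - failed
        print(f"{size}-node cluster, {failed} failed: "
              f"{'writable' if has_quorum(alive, size) else 'blocked'}")
# 2 nodes: any single failure blocks writes; 3 nodes: one failure tolerated.
```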


Author(s):  
Neng Huang ◽  
Junxing Zhu ◽  
Chaonian Guo ◽  
Shuhan Cheng ◽  
Xiaoyong Li

With the rapid development of the mobile Internet, there is a higher demand for the real-time behavior, reliability, and availability of information systems. To prevent possible systemic risks, various business continuity standards and regulatory guidelines have been published, defining indicators such as the Recovery Time Objective (RTO) and the Recovery Point Objective (RPO). Some current research focuses on the standards, methods, management tools, and technical frameworks of business continuity, while other work studies data consistency algorithms for big data, cloud computing, and distributed storage. However, few researchers have studied how to monitor the data consistency and RPO of production-disaster recovery, or what architecture and technology should be applied in such monitoring. Moreover, in some information systems, the complex structure and distribution of the data make it difficult for traditional methods to quickly detect and accurately locate the first erroneous data item. Besides, because the production data center (PDC) and the disaster recovery data center (DRDC) are separated, it is difficult to calculate the data difference and the RPO between the two centers. This paper first discusses an architecture of remote distributed DRDCs. The architecture keeps the disaster recovery (DR) system always online and the data always readable, and supports real-time monitoring of data availability, consistency, and other related indicators, making the DRDC ready for immediate use in a disaster. Second, inspired by blockchain, this paper proposes a method to realize real-time monitoring of data consistency and RPO by building hash chains for the PDC and the DRDC. Third, this paper evaluates the hash chain operations in terms of algorithmic time complexity, data consistency, and the validity of the RPO monitoring algorithms. Since a DR system is in fact a kind of distributed system, the proposed approach can also be applied to data consistency detection and data difference monitoring in other distributed systems.
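
A minimal sketch of the hash-chain comparison, with the block granularity and chaining details assumed for illustration: because each link commits to its predecessor, the first mismatching link between the PDC and DRDC chains locates the first inconsistent block, and the records behind it bound the data difference underlying the RPO.

```python
# Sketch of the hash-chain idea for production/DR consistency monitoring:
# each center maintains a chain where link i commits to data block i and to
# link i-1, so comparing chains pinpoints the first divergent block.

import hashlib

def build_chain(blocks):
    chain, prev = [], b""
    for block in blocks:
        prev = hashlib.sha256(prev + block).digest()
        chain.append(prev)
    return chain

def first_divergence(chain_pdc, chain_drdc):
    """Index of the first block where the two centers disagree, or None."""
    for i, (a, b) in enumerate(zip(chain_pdc, chain_drdc)):
        if a != b:
            return i
    if len(chain_pdc) != len(chain_drdc):          # one center lags behind
        return min(len(chain_pdc), len(chain_drdc))
    return None

pdc  = [b"tx1", b"tx2", b"tx3", b"tx4"]
drdc = [b"tx1", b"tx2", b"txX", b"tx4"]            # corrupted replication
i = first_divergence(build_chain(pdc), build_chain(drdc))
print(f"first inconsistent block: {i}")            # -> 2; blocks from i on
                                                   # quantify the PDC/DRDC gap
```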


Author(s):  
Ghalem Belalem ◽  
Naima Belayachi ◽  
Radjaa Behidji ◽  
Belabbes Yagoubi

Data grids are current solutions to the needs of large-scale systems; they provide a set of geographically distributed resources. Their goal is to offer a large capacity for parallel computation, ensure effective and rapid data access, improve availability, and tolerate failures. In such systems, however, these advantages are possible only through replication, and the use of this technique raises the problem of maintaining consistency among replicas of the same data set. Guaranteeing the reliability of a replica set requires strong coherence, which, however, penalizes performance. In this paper, the authors propose studying the influence of load balancing on replica quality. To this end, a hybrid consistency management service is developed that combines the pessimistic and optimistic approaches and is extended with a load balancing service to improve service quality. This service is built on a hierarchical model with two levels.
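
One way to picture the two-level hybrid service is sketched below; the structure is an assumption for illustration, not the authors' exact design: writes inside a cluster are applied pessimistically to all local replicas, propagation between clusters is queued optimistically, and reads are load balanced across replicas.

```python
# Sketch of a two-level hybrid consistency model with load-aware replica
# selection: pessimistic (synchronous) within a cluster, optimistic (lazy)
# between clusters.

from collections import deque

class Replica:
    def __init__(self, name):
        self.name, self.load, self.data = name, 0, {}

class Cluster:
    def __init__(self, replicas):
        self.replicas = replicas
        self.inbox = deque()                       # lazily applied remote writes

    def write_local(self, key, value):             # pessimistic: all replicas now
        for r in self.replicas:
            r.data[key] = value

    def read(self, key):                           # load balancing on reads
        r = min(self.replicas, key=lambda r: r.load)
        r.load += 1
        return r.data.get(key)

def propagate(src: Cluster, others):               # optimistic inter-cluster step
    for c in others:
        c.inbox.extend(src.replicas[0].data.items())

def apply_inbox(c: Cluster):
    while c.inbox:
        k, v = c.inbox.popleft()
        c.write_local(k, v)

site_a = Cluster([Replica("a1"), Replica("a2")])
site_b = Cluster([Replica("b1")])
site_a.write_local("k1", "v1")                     # strong within site_a
propagate(site_a, [site_b])                        # eventual across sites
apply_inbox(site_b)
print(site_b.read("k1"))                           # -> "v1"
```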


2004 ◽  
Vol 05 (03) ◽  
pp. 299-312 ◽  
Author(s):  
M. D. MUSTAFA ◽  
B. NATHRAH ◽  
M. H. SUZURI ◽  
M. T. ABU OSMAN

Replication is an important technique in peer-to-peer environments, where it increases data availability and accessibility for users despite site or communication failures. However, determining the number of replicas and where to place them are the major issues. This paper proposes a hybrid replication model for fixed and mobile networks in order to achieve high data availability. In the fixed network, data are replicated synchronously along a diagonal of a logical grid structure, while in the mobile network, data are replicated asynchronously based on the most commonly visited sites of each user. In comparison to previous techniques, the diagonal replication on grid (DRG) technique on the fixed network requires a lower communication cost per operation while providing higher data availability, which is preferred for large systems.
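
A sketch of the diagonal placement idea on an n × n logical grid, under one common reading of DRG-style schemes (the paper's exact placement and quorum rules may differ): keeping replicas on the diagonal lets an operation contact a majority of n diagonal sites rather than a full row plus column.

```python
# Sketch of diagonal replica placement on a logical n x n grid: replicas
# live on the diagonal, so a quorum needs only a majority of n sites
# instead of the roughly 2n - 1 sites of row-plus-column grid protocols.

def diagonal_sites(n: int):
    """Sites holding replicas in an n x n logical grid."""
    return [(i, i) for i in range(n)]

def quorum_cost(n: int):
    # Diagonal: strict majority of the n diagonal replicas.
    # Row+column: one full row plus one full column of the grid.
    return {"diagonal": n // 2 + 1, "row+column": 2 * n - 1}

print(diagonal_sites(4))   # [(0, 0), (1, 1), (2, 2), (3, 3)]
print(quorum_cost(4))      # diagonal quorums touch far fewer sites
```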


Author(s):  
Jie Song ◽  
Silvia Calatrava Sierra ◽  
Jaime Caffarel Rodriguez ◽  
Jorge Martin Perandones ◽  
Guillermo del Campo Jimenez ◽  
...  

Cloud computing technology has gained substantial research interest due to its remarkable range of services. The major concerns in cloud computing are availability and security. Several security algorithms have been presented in the literature to achieve better security, and data availability is increased by utilizing data replicas. However, creating replicas of all the data is unnecessary and consumes excessive storage space. Considering this issue, this article presents a Secure Data Replication Management Scheme (SDRMS), which creates replicas based on the access frequency of the data and loads them onto cloud servers according to each server's current load, thereby balancing the load across the servers. All replicas are organized in a tree-like structure, and the replicas with the highest hit ratio are placed on the first level of the tree to ensure better data accessibility. The performance of the scheme is satisfactory in terms of data accessibility, storage utilization, replica allocation, and retrieval time.
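
A short sketch of the hit-ratio-ordered replica tree the abstract describes; the fan-out and level layout are assumptions for illustration, not SDRMS's actual structure.

```python
# Sketch of organizing replicas in a tree by hit ratio: the most frequently
# hit replicas sit at the top level so lookups reach hot data fastest.

def build_replica_tree(replicas, fanout=2):
    """replicas: list of (name, hit_ratio). Returns a list of levels,
    level 0 holding the hottest replica, deeper levels the colder ones."""
    ordered = sorted(replicas, key=lambda r: r[1], reverse=True)
    levels, width, i = [ordered[:1]], fanout, 1
    while i < len(ordered):
        levels.append(ordered[i:i + width])
        i += width
        width *= fanout
    return levels

replicas = [("blockA", 0.91), ("blockB", 0.40), ("blockC", 0.75),
            ("blockD", 0.10), ("blockE", 0.62)]
for depth, level in enumerate(build_replica_tree(replicas)):
    print(depth, [name for name, _ in level])
# 0 ['blockA']          <- highest hit ratio, first level
# 1 ['blockC', 'blockE']
# 2 ['blockB', 'blockD']
```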

