Improving Data Availability through Dynamic Model-Driven Replication in Large Peer-to-Peer Communities

Author(s):  
K. Ranganathan ◽  
A. Iamnitchi ◽  
I. Foster


2021 ◽
Author(s):  
Hoda R.K. Nejad

With the emergence of wireless devices, service delivery over ad-hoc networks has recently attracted considerable attention. Ad-hoc networks provide an attractive solution for networking in situations where network infrastructure or service subscription is not available. We believe that overlay networks, particularly peer-to-peer (P2P) systems, are a good abstraction for application design and deployment over ad-hoc networks. The principal benefit of this approach is that application state is maintained only by the nodes involved in the application's execution, while all other nodes perform only networking-related functions. On the other hand, data access applications in ad-hoc networks suffer from restricted resources. In this thesis, we explore how to use cooperative caching to improve data access efficiency in ad-hoc networks. We propose a Resource-Aware Cooperative Caching P2P system (RACC) for data access applications in ad-hoc networks. The objective is to improve data availability by considering the energy of each node and the demand and supply of the network. We evaluated and compared the performance of RACC against the Simple Cache, CachePath and CacheData schemes. Our simulation results show that RACC improves query delay as well as the energy usage of the network compared to Simple Cache, CachePath and CacheData.
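The abstract's caching criterion (node energy plus demand and supply of the network) can be sketched as a simple placement rule. This is a hypothetical illustration, not RACC's actual algorithm; the threshold and the demand/supply counts are assumed for the example.

```python
# Hypothetical sketch of a resource-aware caching decision in the spirit
# of RACC: a node caches an item only if it has spare energy and the item
# is scarce relative to demand among reachable peers. All values are
# illustrative, not taken from the thesis.

def should_cache(energy, energy_threshold, demand, supply):
    """Return True if this node should cache the item."""
    if energy < energy_threshold:
        return False          # preserve battery on low-energy nodes
    return demand > supply    # cache only under-replicated, popular items

# A node at 80% battery caches a popular, scarce item...
print(should_cache(0.8, 0.3, demand=12, supply=4))   # True
# ...but a nearly drained node declines, regardless of demand.
print(should_cache(0.2, 0.3, demand=12, supply=4))   # False
```

The point of the sketch is the ordering of the checks: energy is a hard constraint per node, while the demand/supply comparison is a network-wide signal.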


Author(s):  
Enrico Franchi ◽  
Michele Tomaiuolo

Social networking sites have deeply changed the perception of the web in recent years. Although the current approach to building social networking systems is to create huge centralized systems owned by a single company, this strategy has many drawbacks, e.g., lack of privacy, lack of anonymity, risks of censorship and operating costs. These issues contrast with some of the main requirements of information systems, including: (i) confidentiality, i.e., the interactions between a user and the system must remain private unless explicitly made public; (ii) integrity; (iii) accountability; (iv) availability; (v) identity and anonymity. Moreover, social networking platforms are vulnerable to many kinds of attacks: (i) masquerading, which occurs when a user disguises his identity and pretends to be another user; (ii) unauthorized access; (iii) denial of service; (iv) repudiation, which occurs when a user participates in an activity and later claims he did not; (v) eavesdropping; (vi) alteration of data; (vii) copy and replay attacks; and, in general, (viii) attacks making use of social engineering techniques. In order to overcome both the intrinsic defects of centralized systems and the general vulnerabilities of social networking platforms, many different approaches have been proposed, either as federated systems (i.e., consisting of multiple entities cooperating to provide the service, but usually distinct from users) or as peer-to-peer systems (with users directly cooperating to provide the service); in this work the most interesting ones were reviewed. Eventually, the authors present their own approach to creating a solid distributed social networking platform, consisting of a novel peer-to-peer system that leverages existing, widespread and stable technologies such as distributed hash tables and BitTorrent.
The topics considered in detail are: (i) anonymity and resilience to censorship; (ii) authenticatable contents; (iii) semantic interoperability using activity streams and weak semantic data formats for contacts and profiles; and (iv) data availability.
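The combination of a distributed hash table with authenticatable contents, as described above, can be illustrated with a toy example: a profile is stored under the hash of its owner's identifier together with a signature that peers verify before trusting the data. This is not the authors' implementation; the in-memory dict stands in for a real DHT, and an HMAC stands in for a public-key signature purely to keep the example self-contained.

```python
# Toy sketch of authenticatable content in a DHT-based social platform.
# Assumptions: `dht` replaces a real distributed hash table, and HMAC
# replaces a public-key signature scheme.
import hashlib, hmac, json

dht = {}  # in-memory stand-in for a distributed hash table

def put_profile(user_id, secret_key, profile):
    payload = json.dumps(profile, sort_keys=True).encode()
    sig = hmac.new(secret_key, payload, hashlib.sha256).hexdigest()
    key = hashlib.sha256(user_id.encode()).hexdigest()  # DHT lookup key
    dht[key] = {"payload": payload, "sig": sig}
    return key

def get_profile(user_id, secret_key):
    key = hashlib.sha256(user_id.encode()).hexdigest()
    entry = dht[key]
    expected = hmac.new(secret_key, entry["payload"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, entry["sig"]):
        raise ValueError("tampered or forged profile")
    return json.loads(entry["payload"])

put_profile("alice", b"alice-key", {"name": "Alice", "status": "hi"})
print(get_profile("alice", b"alice-key"))  # verified profile round-trips
```

Verification before use is what defends against the alteration-of-data and masquerading attacks enumerated in the abstract: a peer that cannot produce a valid signature cannot pass off modified content.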


2004 ◽  
Vol 05 (03) ◽  
pp. 299-312 ◽  
Author(s):  
M. D. MUSTAFA ◽  
B. NATHRAH ◽  
M. H. SUZURI ◽  
M. T. ABU OSMAN

Replication is an important technique in peer-to-peer environments, where it increases data availability and accessibility to users despite site or communication failures. However, determining the number of replicas and where to replicate the data are the major issues. This paper proposes a hybrid replication model for fixed and mobile networks in order to achieve high data availability. In the fixed network, data will be replicated synchronously in a diagonal manner on a logical grid structure, while in the mobile network, data will be replicated asynchronously based on the commonly visited sites of each user. In comparison to previous techniques, the diagonal replication on grid (DRG) technique on the fixed network requires a lower communication cost per operation while providing higher data availability, which is preferable for large systems.
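The diagonal placement idea can be sketched concretely: on an n x n logical grid, replicas placed along the diagonal mean only n of the n*n sites hold a copy, yet every row and every column contains exactly one replica. This is an illustrative reading of the scheme, not the paper's exact protocol; the row-local lookup rule is an assumption.

```python
# Illustrative sketch of diagonal replica placement on an n x n logical
# grid, in the spirit of the paper's diagonal replication technique.

def diagonal_replicas(n):
    """Return the (row, col) sites holding a replica on an n x n grid."""
    return [(i, i) for i in range(n)]

def nearest_replica(site, n):
    """Assumed row-local read rule: row i reads from its replica (i, i)."""
    row, _ = site
    return (row, row)

print(diagonal_replicas(4))        # [(0, 0), (1, 1), (2, 2), (3, 3)]
print(nearest_replica((2, 3), 4))  # (2, 2)
```

With n replicas instead of n*n, synchronous writes touch far fewer sites, which is one way to read the abstract's claim of lower communication cost per operation.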


Author(s):  
Babak Behzad ◽  
Surendra Byna ◽  
Stefan M. Wild ◽  
Prabhat ◽  
Marc Snir

2020 ◽  
Vol 14 (3) ◽  
pp. 332-341
Author(s):  
Fariba Khazaei Koohpar ◽  
Afsaneh Fatemi ◽  
Fatemeh Raji

Author(s):  
Franklin F. K. Chen ◽  
B. Ronald Moncrief

Abstract A canyon building houses special nuclear material processing facilities in two canyon-like structures, each with approximately a million cubic feet of air space and a hundred thousand hydraulic equivalent feet of ductwork of various cross sections. The canyon ventilation system is a “once through” design with separate supply and exhaust fans, utilizes two large sand filters to remove radionuclide particulate matter, and exhausts through a tall stack. The ventilation equipment is similar to most industrial ventilation systems. However, in a canyon building, nuclear contamination prohibits access to a large portion of the system and therefore limits the kind of plant data that can be collected. The facility investigated is 40 years old and operates with original equipment or replacement equipment of comparable antiquity. These factors, access and aged equipment, present a challenge in gauging the performance of canyon ventilation, particularly under uncommon operating conditions. The ability to assess canyon ventilation system performance became critical with time, as the system took on additional exhaust loads and aging equipment approached design maximums. Many “What if?” questions, needed to address modernization/safety issues, are difficult to answer without a dynamic model. This paper describes the development, validation and utilization of a dynamic model to analyze the capacity of this ventilation system under many unusual but likely conditions. The development of a ventilation model with volume and hydraulics of this scale is unique. The resultant model resolution of better than 0.05″wg under normal plant conditions, and approximately 0.2″wg under all plant conditions, achieved with a desktop computer, is a benchmark of the power of micro-computers.
The detailed planning and persistent execution of large-scale plant experiments under very restrictive conditions not only produced data to validate the model but also lent credence to subsequent applications of the model to mission-oriented analysis. The modelling methodology adopted a two-parameter-space approach: rational parameters and irrational parameters. Rational parameters, such as fan age factors, idle parameters, infiltration areas and tunnel hydraulic parameters, are deduced from plant data based on certain hydraulic models. Due to limited accessibility, and therefore partial data availability, the identification of irrational model parameters, such as register positions and unidentifiable infiltrations, required unique treatment of the parameter space. These parameters were identified by a numerical search strategy that minimizes a set of performance indices. Given the large number of parameters, this further attests to our strategy of utilizing the computing power of modern micro-computers. Nine irrational parameters at five levels and 12 sets of plant data, amounting to 540 runs, were completely searched over the span of a long weekend. Some key results, in assessing emergency operation and in evaluating modernization options, are presented to illustrate the functions of the dynamic model.
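The run count works out as 9 parameters x 5 levels x 12 data sets = 540 model evaluations, i.e., each irrational parameter is swept over its levels against each data set rather than searched factorially. A minimal sketch of that enumeration follows; the performance index here is a placeholder (the real one would compare modeled against measured pressures), and the level-selection rule is an assumption.

```python
# Hedged sketch of the one-parameter-at-a-time search described in the
# abstract: 9 irrational parameters x 5 levels x 12 plant data sets
# = 540 model runs. `performance_index` is a stand-in objective, not
# the paper's ventilation model.
import itertools

N_PARAMS, N_LEVELS, N_DATASETS = 9, 5, 12

def performance_index(param, level, dataset):
    # placeholder: the real index compares model vs. plant pressure data
    return abs(level - 2) + (param + dataset) % 3

runs = list(itertools.product(range(N_PARAMS), range(N_LEVELS),
                              range(N_DATASETS)))
print(len(runs))  # 540 model runs in total

# for each parameter, keep the level with the lowest total index
best = {p: min(range(N_LEVELS),
               key=lambda lv: sum(performance_index(p, lv, d)
                                  for d in range(N_DATASETS)))
        for p in range(N_PARAMS)}
print(best[0])  # level minimizing the placeholder index for parameter 0
```

Even on modest hardware, 540 evaluations of a fast model are tractable, which is consistent with the abstract's remark that the full search completed over a long weekend on a micro-computer.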


2015 ◽  
Vol 9 (6) ◽  
pp. 2399-2404 ◽  
Author(s):  
B. Marzeion ◽  
P. W. Leclercq ◽  
J. G. Cogley ◽  
A. H. Jarosch

Abstract. Recent estimates of the contribution of glaciers to sea-level rise during the 20th century are strongly divergent. Advances in data availability have allowed revisions of some of these published estimates. Here we show that outside of Antarctica, the global estimates of glacier mass change obtained from glacier-length-based reconstructions and from a glacier model driven by gridded climate observations are now consistent with each other, and also with an estimate for the years 2003–2009 that is mostly based on remotely sensed data. This consistency is found throughout the entire common periods of the respective data sets. Inconsistencies of reconstructions and observations persist in estimates on regional scales.

