Secure Data Compression Scheme for Scalable Data in Dynamic Data Storage Environments

Author(s):  
K. Suvetha Bharathi ◽  
K. Palanivel

With the continuous and exponential growth in the number of users and the size of their data, data deduplication is becoming increasingly necessary for cloud storage providers. By storing a single copy of duplicate data, cloud providers greatly reduce their storage and data transfer costs. These huge volumes of data need practical platforms for storage, processing and availability, and cloud technology offers the potential to fulfill all of these requirements. Data deduplication is a strategy offered to data providers that eliminates duplicate data and keeps only a single unique copy of it in order to save storage space. This paper presents a scheme that permits a more fine-grained trade-off. The intuition is that outsourced data may require different levels of protection, depending on how popular it is, i.e., whether the content is shared by many users. A novel idea is presented that differentiates data according to their popularity. Based on this idea, an encryption scheme is designed that guarantees semantic security for unpopular data while also providing a higher level of security for cloud data. In this way, data deduplication can be effective for popular data, while semantically secure encryption protects unpopular content. In addition, a backup recovery system can be used at the time of blocking, and frequent login access is analyzed.
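The abstract does not describe the construction itself, so the following is a minimal sketch, in Python, of how a popularity threshold could switch between convergent encryption (deduplicable) and randomized encryption (semantically secure). The threshold value, the function names and the use of AES-GCM from the cryptography package are illustrative assumptions, not the authors' scheme.

import hashlib
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Assumed policy: a chunk counts as "popular" once it has been uploaded
# by at least this many distinct users.
POPULARITY_THRESHOLD = 3

def encrypt_chunk(data: bytes, uploader_count: int):
    """Return (key, nonce, ciphertext) for one chunk.

    Popular chunks use a convergent (content-derived) key, so identical
    plaintexts from different users yield identical ciphertexts and can be
    deduplicated.  Unpopular chunks use a fresh random key, which provides
    semantic security but makes duplicates undetectable.
    """
    if uploader_count >= POPULARITY_THRESHOLD:
        key = hashlib.sha256(data).digest()                     # convergent key
        nonce = hashlib.sha256(b"nonce" + data).digest()[:12]   # deterministic nonce
    else:
        key = AESGCM.generate_key(bit_length=256)               # random key
        nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, data, None)
    return key, nonce, ciphertext

In practice the uploader count would come from the provider's index of previously seen fingerprints; here it is simply passed in as a parameter to keep the sketch self-contained.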

Author(s):  
Gokulakrishnan V ◽  
Illakiya B

With the rapidly increasing amounts of data produced worldwide, networked and multi-user storage systems are becoming very popular. However, concerns over data security still prevent many users from migrating data to remote storage. The conventional solution is to encrypt the data before it leaves the owner's premises. While sound from a security perspective, this approach prevents the storage provider from effectively applying storage-efficiency functions, such as compression and deduplication, which would allow optimal usage of the resources and consequently lower service cost. Client-side data deduplication in particular ensures that multiple uploads of the same content only consume the network bandwidth and storage space of a single upload. Deduplication is actively used by a number of backup providers as well as various data services. In this project, we present a scheme that permits storage without duplication for multiple types of files. The intuition is that outsourced data may require different levels of protection. Based on this idea, we design an encryption scheme that guarantees semantic security for unpopular data and provides weaker security but better storage and bandwidth benefits for popular data. In this way, data deduplication can be effective for popular data, while semantically secure encryption protects unpopular content. A backup recovery system can be used at the time of blocking, and frequent login access is analyzed.
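As an illustration of the client-side deduplication described above, the sketch below checks a content fingerprint against the provider before uploading, so a duplicate upload costs neither bandwidth nor storage. The endpoint URL and the HEAD/PUT interface are hypothetical; this is not the API of any particular provider.

import hashlib

import requests  # assumes the provider exposes a plain HTTP interface

STORAGE_URL = "https://storage.example.com"   # hypothetical endpoint

def upload_with_dedup(path: str) -> str:
    """Upload a file only if the server does not already hold its content."""
    with open(path, "rb") as f:
        data = f.read()
    fingerprint = hashlib.sha256(data).hexdigest()

    # Ask whether this content is already stored before sending any of it.
    exists = requests.head(f"{STORAGE_URL}/chunks/{fingerprint}").status_code == 200
    if not exists:
        requests.put(f"{STORAGE_URL}/chunks/{fingerprint}", data=data)
    return fingerprint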


Author(s):  
A. Mohamed Divan Masood ◽  
S. K. Muthusundar

The explosive increase of data brings new challenges to data storage and management in cloud settings. These data typically have to be processed in a timely fashion in the cloud; thus, any added latency may cause an immense loss to the enterprises. Duplicate detection plays a major role in data management. Data deduplication computes a unique fingerprint for each data chunk by using hash algorithms such as MD5 and SHA-1. The computed fingerprint is then compared against other existing chunks in a database dedicated to storing chunks. As a result, a deduplication system improves storage utilization while reducing reliability. Besides, the challenge of privacy for sensitive data also arises when it is outsourced by users to the cloud. Aiming to address the above security challenges, this paper makes the first effort to formalize the notion of a distributed, dependable deduplication system. We offer new distributed deduplication systems with higher reliability in which the data chunks are distributed across a variety of cloud servers. The protection requirements differ from those met by the convergent encryption used in previous deduplication systems.
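The fingerprinting step can be pictured with the short sketch below: each fixed-size chunk is hashed (SHA-1 here; MD5 would be used the same way), known fingerprints are skipped, and new chunks are assigned to one of several servers. The chunk size, the server list and the placement rule are simplifying assumptions and stand in for the distributed placement the paper actually proposes.

import hashlib

CHUNK_SIZE = 4 * 1024          # assumed fixed chunk size
SERVERS = ["s1", "s2", "s3"]   # hypothetical cloud servers

def fingerprint(chunk: bytes) -> str:
    """Unique fingerprint for a chunk, computed here with SHA-1."""
    return hashlib.sha1(chunk).hexdigest()

def place_chunks(data: bytes, index: dict) -> list:
    """Split data into chunks, deduplicate against the index, and assign
    each previously unseen chunk to a server derived from its fingerprint."""
    placements = []
    for off in range(0, len(data), CHUNK_SIZE):
        chunk = data[off:off + CHUNK_SIZE]
        fp = fingerprint(chunk)
        if fp not in index:                          # deduplication check
            index[fp] = SERVERS[int(fp, 16) % len(SERVERS)]
        placements.append((fp, index[fp]))           # where the single copy lives
    return placements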


2016 ◽  
Vol 1 (1) ◽  
pp. 145-158 ◽  
Author(s):  
Hualong Wu ◽  
Bo Zhao

Abstract: The emergence of cloud computing opens an infinite imagination space for both individuals and organizations, due to its unprecedented advantages in the history of IT: on-demand self-service, ubiquitous network access, location-independent resource pooling, rapid resource elasticity, usage-based pricing and transference of risk. Many individuals and organizations ease the pressure on their local data storage, and mitigate its maintenance overhead, by outsourcing data to the cloud. However, data outsourcing is not absolutely safe in the cloud. To enhance users' confidence in the integrity of their outsourced data, to promote the rapid deployment of cloud data storage services, and to regain security assurances about outsourced data dependability, many scholars have designed Remote Data Auditing (RDA) techniques as a new concept to enable public auditability of outsourced data in the cloud. RDA is a useful technique for ensuring the correctness of data outsourced to cloud servers. This paper presents a comprehensive survey of remote data auditing techniques for cloud servers. To present a taxonomy, recent remote auditing approaches are categorized into three classes: replication-based, erasure-coding-based, and network-coding-based. The paper also aims to explore the major open issues.


Author(s):  
M. Chinnadurai ◽  
A. Jayashri

Cloud computing has reached a level of maturity that leads it into a productive phase. This means that most of the main problems with cloud computing have been addressed to a degree that clouds have become interesting for full commercial exploitation. However, concerns over data security still prevent many users from migrating data to remote storage. Client-side data compression in particular ensures that multiple uploads of the same content only consume the network bandwidth and storage space of a single upload. Compression is actively used by a number of cloud backup providers as well as various cloud services. Unfortunately, encrypted data is pseudorandom and thus cannot be deduplicated; as a consequence, current schemes have to entirely sacrifice either security or storage efficiency. This system presents a scheme that permits a more fine-grained trade-off. The intuition is that outsourced data may require different levels of protection, depending on how popular it is, i.e., whether the content is shared by many users. We then present a novel idea that differentiates data according to their popularity. The proposed system implements an encryption scheme that guarantees semantic security for unpopular data and provides weaker security but better storage and bandwidth benefits for popular data. The proposed deduplication can thus be effective for popular data, while semantically secure encryption protects unpopular content. Finally, a backup recovery system can be used at the time of blocking, and frequent login access is analyzed.
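The claim that conventionally encrypted data cannot be deduplicated, while convergent encryption restores deduplicability at the cost of weaker security, can be demonstrated in a few lines. The snippet below is only an illustration using AES-GCM from the cryptography package; the key and nonce derivation shown are assumptions, not the scheme proposed in the paper.

import hashlib
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

chunk = b"the same popular content uploaded by two different users"

# Conventional encryption: each user picks a random key, the ciphertexts
# differ, and the server cannot detect the duplicate.
c1 = AESGCM(AESGCM.generate_key(bit_length=256)).encrypt(os.urandom(12), chunk, None)
c2 = AESGCM(AESGCM.generate_key(bit_length=256)).encrypt(os.urandom(12), chunk, None)
assert c1 != c2

# Convergent encryption: key and nonce are derived from the content, so both
# users produce the same ciphertext and it deduplicates, at the cost of
# losing semantic security.
key = hashlib.sha256(chunk).digest()
nonce = hashlib.sha256(b"nonce" + chunk).digest()[:12]
assert AESGCM(key).encrypt(nonce, chunk, None) == AESGCM(key).encrypt(nonce, chunk, None)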


Author(s):  
R. MYTHILI ◽  
P. PRADHEEBA ◽  
P. RAJESHWARI ◽  
S. PADHMAVATHI

The end of this decade is marked by a paradigm shift of industrial information technology towards a pay-per-use service business model known as cloud computing. Cloud data storage redefines the security issues targeted at customers' outsourced data. To ensure the correctness of users' data in the cloud, we propose an effective and flexible distributed scheme with two salient features, in contrast to its predecessors. By utilizing homomorphic tokens with distributed verification of Raptor-coded data, our scheme achieves the integration of storage-correctness insurance and data-error localization, i.e., the identification of misbehaving server(s). The new scheme further supports secure and dynamic operations on data blocks. Our results show that the proposed model provides secure storage for data in the cloud.
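The combination of storage-correctness checking and error localization can be illustrated with a much simpler stand-in than the homomorphic tokens over Raptor-coded data the scheme actually uses: the owner keeps a keyed hash per block per server and, by challenging each server separately, can point at the misbehaving one. The block layout, key handling and HMAC construction below are assumptions made only for this sketch.

import hashlib
import hmac
import os

SERVERS = 3              # hypothetical number of storage servers
BLOCKS_PER_SERVER = 4    # blocks of the (coded) file held by each server

def precompute_tokens(key: bytes, blocks: list) -> list:
    """Owner computes one token (keyed hash) per block before outsourcing."""
    return [hmac.new(key, b, hashlib.sha256).digest() for b in blocks]

def audit(key: bytes, tokens: list, servers: list) -> list:
    """Challenge every server and return the indices of misbehaving ones.

    Checking each server independently localizes a corruption to a specific
    server instead of only signalling that something, somewhere, is wrong.
    """
    bad = []
    for i, blocks in enumerate(servers):
        responses = [hmac.new(key, b, hashlib.sha256).digest() for b in blocks]
        if responses != tokens[i]:
            bad.append(i)
    return bad

# Usage: distribute blocks, corrupt one server, and locate it.
key = os.urandom(32)
servers = [[os.urandom(64) for _ in range(BLOCKS_PER_SERVER)] for _ in range(SERVERS)]
tokens = [precompute_tokens(key, blocks) for blocks in servers]
servers[1][0] = b"corrupted block"
assert audit(key, tokens, servers) == [1]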


Author(s):  
Ramya. S ◽  
Gokula Krishnan. V

Big data has reached a maturity that leads it into a productive phase. This means that most of the main issues with big data have been addressed to a degree that storage has become interesting for full commercial exploitation. However, concerns over data security still prevent many users from migrating data to remote storage. Client-side data compression in particular ensures that multiple uploads of the same content only consume the network bandwidth and storage space of a single upload. Compression is actively used by a number of backup providers as well as various services. Unfortunately, compressed data is pseudorandom and thus cannot be deduplicated; as a consequence, current schemes have to entirely sacrifice storage efficiency. This system presents a scheme that permits a more fine-grained trade-off, together with a novel idea that differentiates data according to their popularity. Based on this idea, we design a compression scheme that guarantees semantic storage preservation for unpopular data and provides scalable data storage and bandwidth benefits for popular data. We implement a variable data-chunk similarity algorithm to analyze the chunks and store the original data in compressed format, and an encryption algorithm is included to secure the data. Finally, a backup recovery system can be used at the time of blocking, and frequent login access is analyzed.
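The abstract does not specify the variable-chunking algorithm, so the sketch below shows one common shape such a scheme could take: chunk boundaries are chosen from the content itself (so they survive insertions elsewhere in the file), each chunk is fingerprinted, and only previously unseen chunks are stored, in compressed form. The window size, boundary mask and use of zlib are assumptions; a production system would use a rolling hash such as Rabin fingerprints rather than re-hashing every window.

import hashlib
import zlib

WINDOW = 48             # assumed sliding-window size
MASK = (1 << 12) - 1    # assumed boundary pattern (~4 KiB average chunks)

def chunks(data: bytes):
    """Yield content-defined chunks: cut wherever the hash of a sliding
    window matches the boundary pattern."""
    start = 0
    for i in range(WINDOW, len(data)):
        window_hash = hashlib.sha1(data[i - WINDOW:i]).digest()
        if int.from_bytes(window_hash[:4], "big") & MASK == 0:
            yield data[start:i]
            start = i
    if start < len(data):
        yield data[start:]

def store(data: bytes, repository: dict) -> list:
    """Deduplicate identical chunks and keep a single compressed copy of each."""
    recipe = []
    for chunk in chunks(data):
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in repository:
            repository[fp] = zlib.compress(chunk)   # store compressed, once
        recipe.append(fp)
    return recipe                                   # fingerprints to rebuild the file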


Author(s):  
Sunil S ◽  
A Ananda Shankar

A cloud storage system provides convenient file storage and sharing services for distributed clients. The system preserves the privacy of data holders through a scheme that manages encrypted data storage with deduplication. It can flexibly support data sharing with deduplication even when the data holder is offline, and it does not intrude on the privacy of data holders. It is an effective approach to verifying data ownership and checking duplicate storage with secure challenges and big data support. We integrate cloud data deduplication with data access control in a simple way, thus reconciling data deduplication and encryption. We prove the security and assess the performance through analysis and simulation; the results show its efficiency, effectiveness and applicability. In the proposed system, uploaded data are stored in the cloud based on date, meaning the data are available to the data holders who need them, when they need them. The web log record indicates whether a keyword is repeated or not. Records containing only repeated search data are retained in primary storage in the cloud, while all other records are stored on a temporary storage server. This step reduces the size of the web log, thereby avoiding memory burden and speeding up analysis.
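The log-splitting step described at the end of the abstract amounts to partitioning records by whether their search keyword repeats. A minimal sketch is given below; the record layout (a dict with a 'keyword' field) and the one-occurrence cut-off are assumptions.

from collections import Counter

def split_web_log(records: list) -> tuple:
    """Keep records whose keyword occurs more than once in primary cloud
    storage; move one-off records to a temporary storage server, shrinking
    the log that later analysis has to scan."""
    counts = Counter(r["keyword"] for r in records)
    primary = [r for r in records if counts[r["keyword"]] > 1]
    temporary = [r for r in records if counts[r["keyword"]] == 1]
    return primary, temporary

# Usage with a hypothetical log:
log = [{"keyword": "invoice"}, {"keyword": "invoice"}, {"keyword": "payroll"}]
primary, temporary = split_web_log(log)
assert len(primary) == 2 and len(temporary) == 1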

