Privacy and Integrity of Outsourced Data Storage and Processing

Big Data ◽  
2016 ◽  
pp. 341-356
2016 ◽  
Vol 1 (1) ◽  
pp. 145-158 ◽  
Author(s):  
Hualong Wu ◽  
Bo Zhao

Abstract: The emergence of cloud computing opens boundless possibilities for both individuals and organizations, thanks to advantages unprecedented in IT history: on-demand self-service, ubiquitous network access, location-independent resource pooling, rapid resource elasticity, usage-based pricing, and transference of risk. Many individuals and organizations ease the pressure on their local data storage, and mitigate its maintenance overhead, by outsourcing data to the cloud. However, outsourced data is not absolutely safe in the cloud. To strengthen users' confidence in the integrity of their outsourced data, to promote the rapid deployment of cloud data storage services, and to regain security assurances about outsourced data dependability, many scholars have designed Remote Data Auditing (RDA) techniques to enable public auditability of data outsourced to the cloud. RDA is a useful technique for ensuring the correctness of data outsourced to cloud servers. This paper presents a comprehensive survey of remote data auditing techniques for cloud servers. It organizes recent remote auditing approaches into a taxonomy of three classes: replication-based, erasure-coding-based, and network-coding-based. The paper also aims to explore the major open issues.
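The basic RDA interaction the survey covers can be sketched as a hash-based challenge-response "proof of possession". This toy auditor keeps a local copy purely to make the comparison concrete; practical RDA schemes (replication-, erasure-coding-, or network-coding-based) replace the local copy with compact precomputed tags. All names here are illustrative assumptions, not any one surveyed protocol.

```python
# Toy remote data auditing sketch: challenge the server with a fresh
# nonce so that a stale, precomputed answer cannot pass the audit.
import hashlib
import os


def prove(stored_data: bytes, nonce: bytes) -> str:
    """Server side: hash the stored data together with the challenge nonce."""
    return hashlib.sha256(nonce + stored_data).hexdigest()


def audit(owner_copy: bytes, server_data: bytes) -> bool:
    """Auditor side: issue a random nonce and compare the two proofs."""
    nonce = os.urandom(16)
    expected = hashlib.sha256(nonce + owner_copy).hexdigest()
    return prove(server_data, nonce) == expected


original = b"outsourced block"
assert audit(original, original)           # intact copy passes
assert not audit(original, b"tampered!!")  # corrupted copy fails
```

The fresh nonce per audit is what distinguishes this from simply storing a hash: the server must possess the actual data at challenge time.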


10.29007/q3wd ◽  
2019 ◽  
Author(s):  
Kawthar Karkouda ◽  
Ahlem Nabli ◽  
Faiez Gargouri

Nowadays, cloud computing has become the most popular technology in the IT industry. It provides computing power, storage, networking, and software as a service. Building a data warehouse typically requires a significant initial investment; with the cloud's pay-as-you-go model, BI systems can benefit from this new technology. But, like every new technology, cloud computing brings its own risks in terms of security. Because some security issues are inherited from classical architectures, some traditional security solutions are used to protect outsourced data. Unfortunately, those solutions are not enough and cannot guarantee the privacy of sensitive data hosted in the cloud. In the particular case of a data warehouse, traditional encryption solutions are impractical because they induce a heavy overhead in terms of data storage and query performance. A suitable scheme must therefore balance the security and the performance of a data warehouse hosted in the cloud. In this paper, we propose TrustedDW, a homomorphic encryption scheme for securing and querying a data warehouse hosted in the cloud.
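To make the homomorphic-querying idea concrete, here is a minimal Paillier sketch: multiplying ciphertexts adds the underlying plaintexts, so the cloud can compute an encrypted SUM over a warehouse measure without decrypting anything. The tiny fixed primes and the sales scenario are demonstration assumptions, not the TrustedDW construction itself; real deployments use moduli of 2048 bits or more.

```python
# Minimal Paillier cryptosystem (additively homomorphic), demo key size only.
from math import gcd
import secrets

p, q = 1_000_003, 1_000_033                     # small demo primes (insecure)
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)    # lcm(p-1, q-1)
mu = pow(lam, -1, n)                            # valid because g = n + 1


def encrypt(m: int) -> int:
    r = secrets.randbelow(n - 1) + 1
    while gcd(r, n) != 1:                       # r must be a unit mod n
        r = secrets.randbelow(n - 1) + 1
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2


def decrypt(c: int) -> int:
    x = pow(c, lam, n2)
    return (((x - 1) // n) * mu) % n


# Cloud-side aggregation: multiplying ciphertexts adds the plaintexts.
sales = [120, 75, 300]
cipher_sum = 1
for v in sales:
    cipher_sum = (cipher_sum * encrypt(v)) % n2

assert decrypt(cipher_sum) == sum(sales)        # 495, computed blind
```

The warehouse can thus answer aggregation queries over encrypted fact-table measures, returning a single ciphertext for the owner to decrypt.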


Author(s):  
Yong-Feng Ge ◽  
Jinli Cao ◽  
Hua Wang ◽  
Zhenxiang Chen ◽  
Yanchun Zhang

Abstract: By breaking sensitive associations between attributes, database fragmentation can protect the privacy of outsourced data storage. Database fragmentation algorithms need prior knowledge of the sensitive associations in the target database, which they set as the optimization objective; the effectiveness of these algorithms is therefore limited by that prior knowledge. Inspired by the anonymity degree measurement in anonymity techniques such as k-anonymity, an anonymity-driven database fragmentation problem is defined in this paper. For this problem, a set-based adaptive distributed differential evolution (S-ADDE) algorithm is proposed. S-ADDE adopts an island model to maintain population diversity. Two set-based operators, i.e., set-based mutation and set-based crossover, are designed, in which the continuous domain of traditional differential evolution is transferred to the discrete domain of the anonymity-driven database fragmentation problem. Moreover, in the set-based mutation operator, each individual's mutation strategy is adaptively selected according to its performance. The experimental results demonstrate that the proposed S-ADDE is significantly better than the compared approaches, and the effectiveness of the proposed operators is verified.
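In the spirit of the paper's discrete-domain operators, set-based mutation and crossover on a fragmentation (a partition of attribute names into fragments) might be sketched as below. These toy operators are assumptions for illustration, not S-ADDE's exact definitions, and they presume both parents partition the same attributes into the same number of fragments.

```python
# Set-based variation operators on a fragmentation individual.
import random

Fragmentation = list[set[str]]


def set_mutation(ind: Fragmentation, rng: random.Random) -> Fragmentation:
    """Move one randomly chosen attribute into a different fragment."""
    frags = [set(f) for f in ind]
    src = rng.choice([i for i, f in enumerate(frags) if f])
    attr = rng.choice(sorted(frags[src]))
    dst = rng.choice([i for i in range(len(frags)) if i != src])
    frags[src].discard(attr)
    frags[dst].add(attr)
    return frags


def set_crossover(a: Fragmentation, b: Fragmentation,
                  rng: random.Random) -> Fragmentation:
    """Child places each attribute in the fragment index it has in a or b."""
    where_a = {x: i for i, f in enumerate(a) for x in f}
    where_b = {x: i for i, f in enumerate(b) for x in f}
    child: Fragmentation = [set() for _ in range(len(a))]
    for x in where_a:
        child[rng.choice([where_a[x], where_b[x]])].add(x)
    return child


rng = random.Random(7)
parent1 = [{"name", "dob"}, {"zip", "salary"}]
parent2 = [{"name", "zip"}, {"dob", "salary"}]
child = set_crossover(parent1, parent2, rng)
assert set().union(*child) == {"name", "dob", "zip", "salary"}
```

Both operators preserve the partition property: every attribute appears in exactly one fragment of the offspring, which is the discrete-domain invariant the paper's operators must also maintain.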


Author(s):  
K. Suvetha Bharathi ◽  
K. Palanivel

With the continuous and exponential growth in the number of users and the size of their data, data deduplication becomes more and more of a necessity for cloud storage providers. By storing a single copy of duplicate data, cloud providers greatly reduce their storage and data transfer costs. These huge volumes of data need practical platforms for storage, processing, and availability, and cloud technology offers all the potential to fulfill these requirements. Data deduplication is a strategy that lets providers eliminate duplicate data and keep only a single unique copy, saving storage space. This paper presents a scheme that permits a more fine-grained trade-off. The intuition is that outsourced data may require different levels of protection, depending on how widely the content is shared among users. A novel idea is presented that differentiates data according to its popularity. Based on this idea, an encryption scheme is designed that guarantees semantic security for unpopular data while still providing a high level of security for cloud data overall. This way, deduplication can be effective for popular data, whilst semantically secure encryption protects unpopular content. In addition, a backup and recovery mechanism can be used when access is blocked, and frequent login access patterns can be analyzed.
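The popularity-based trade-off can be sketched as follows: popular files get convergent encryption (the key is derived from the content, so identical plaintexts produce identical ciphertexts and deduplicate), while unpopular files get a random key and remain semantically secure but non-dedupable. The XOR keystream cipher and the `popular` flag are stand-in assumptions; a real system would use a cipher such as AES-CTR and a popularity threshold protocol.

```python
# Popularity-differentiated encryption for deduplication (toy sketch).
import hashlib
import os


def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data with a SHA-256-derived keystream."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))


def encrypt_for_upload(data: bytes, popular: bool) -> bytes:
    """Convergent key for popular data, fresh random key for unpopular."""
    key = hashlib.sha256(data).digest() if popular else os.urandom(32)
    return keystream_xor(key, data)


doc = b"widely shared quarterly report"
# Two users uploading the same popular file produce one dedupable ciphertext...
assert encrypt_for_upload(doc, True) == encrypt_for_upload(doc, True)
# ...while unpopular uploads of identical data stay distinct ciphertexts.
assert encrypt_for_upload(doc, False) != encrypt_for_upload(doc, False)
```

The design choice is exactly the paper's intuition: determinism enables deduplication but leaks plaintext equality, which is acceptable only once content is popular enough that equality is no secret.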


Data storage over the cloud is an in-demand service, but it is vulnerable to various attacks, so this data requires strong security. Verifying the authenticity of data on remote storage is necessary, yet data owners currently have to trust the cloud storage provider with their outsourced data. A mechanism is needed to check the correctness of the data without compromising its confidentiality. We have studied homomorphic algorithms that can be applied to ciphertext. The framework designed here aims to enable any authenticated user or auditor to compare the hash value generated by the owner with the hash value of the encrypted data stored at the remote site, thereby assuring the authenticity of the data. This can be achieved with strong mathematical cryptographic algorithms that can be used in a public environment and provide good performance.
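A minimal sketch of that verification flow, assuming the owner has already encrypted the data with a cipher of their choice: because the published digest is computed over the ciphertext, any auditor can check integrity without ever seeing the plaintext. Function names are illustrative.

```python
# Hash-of-ciphertext auditing: integrity without revealing plaintext.
import hashlib
import os


def owner_publish(ciphertext: bytes) -> str:
    """Owner side: publish the digest of the (already encrypted) data."""
    return hashlib.sha256(ciphertext).hexdigest()


def auditor_verify(retrieved: bytes, published: str) -> bool:
    """Auditor side: re-hash whatever the cloud returns; no decryption key needed."""
    return hashlib.sha256(retrieved).hexdigest() == published


ciphertext = os.urandom(1024)               # stand-in for owner-encrypted data
published = owner_publish(ciphertext)
assert auditor_verify(ciphertext, published)

tampered = bytes([ciphertext[0] ^ 1]) + ciphertext[1:]   # flip one bit
assert not auditor_verify(tampered, published)
```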


Author(s):  
R. MYTHILI ◽  
P. PRADHEEBA ◽  
P. RAJESHWARI ◽  
S. PADHMAVATHI

The end of this decade is marked by a paradigm shift of industrial information technology towards a pay-per-use service business model known as cloud computing. Cloud data storage redefines the security issues targeted at customers' outsourced data. To ensure the correctness of users' data in the cloud, we propose an effective and flexible distributed scheme with two salient features, in contrast to its predecessors. By utilizing homomorphic tokens with distributed verification of raptor-coded data, our scheme achieves the integration of storage correctness assurance and data error localization, i.e., the identification of misbehaving server(s). The new scheme further supports secure and dynamic operations on data blocks. Our results show that the proposed model provides secure storage for data in the cloud.
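The error-localization idea can be illustrated with per-server verification tokens: the owner precomputes an HMAC over each server's share before outsourcing, then challenges every server and compares responses, so a mismatch pinpoints exactly which server misbehaved. The raptor-coding layer and homomorphic token structure of the actual scheme are omitted here; server names and data are assumptions.

```python
# Per-server token verification for error localization (toy sketch).
import hashlib
import hmac
import os

key = os.urandom(32)
shares = {"srv0": b"block-a", "srv1": b"block-b", "srv2": b"block-c"}
# Owner precomputes one token per server before outsourcing the shares.
tokens = {s: hmac.new(key, d, hashlib.sha256).digest() for s, d in shares.items()}

# Simulate srv1 silently corrupting its share after outsourcing.
stored = dict(shares)
stored["srv1"] = b"block-X"


def localize_errors() -> list[str]:
    """Challenge every server; return those whose response fails its token."""
    return [s for s, d in stored.items()
            if not hmac.compare_digest(
                hmac.new(key, d, hashlib.sha256).digest(), tokens[s])]


assert localize_errors() == ["srv1"]   # the misbehaving server is identified
```

Unlike a single hash over the whole file, which only says "something is wrong", per-server tokens tell the owner which replica or coded share to repair.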


2015 ◽  
Vol 2015 ◽  
pp. 1-8 ◽  
Author(s):  
Lingwei Song ◽  
Dawei Zhao ◽  
Xuebing Chen ◽  
Chenlei Cao ◽  
Xinxin Niu

How to verify the integrity of outsourced data is an important problem in cloud storage. Most previous work focuses on three aspects: providing data dynamics, public verifiability, and privacy against verifiers, with the help of a third-party auditor. In this paper, we propose an identity-based data storage and integrity verification protocol for untrusted clouds. The proposed protocol can guarantee fair results without any third-party auditor. The theoretical analysis and simulation results show that our protocols are secure and efficient.
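As a minimal illustration of third-party-free verification (an assumption for exposition, not the paper's identity-based construction), the owner can keep per-block digests and spot-check randomly sampled blocks directly, so no external auditor ever handles the data:

```python
# Owner-driven spot-checking of outsourced blocks, no third-party auditor.
import hashlib
import random

blocks = [f"block-{i}".encode() for i in range(100)]
digests = [hashlib.sha256(b).digest() for b in blocks]   # owner-side state

cloud = list(blocks)          # what the cloud actually stores
cloud[42] = b"corrupted"      # simulate a silent corruption


def spot_check(sample: int, rng: random.Random) -> list[int]:
    """Sample block indices and return those that fail verification."""
    idx = rng.sample(range(len(blocks)), sample)
    return sorted(i for i in idx
                  if hashlib.sha256(cloud[i]).digest() != digests[i])


failed = spot_check(100, random.Random(1))   # full scan finds the corruption
assert failed == [42]
```

Sampling fewer blocks per challenge trades detection probability for bandwidth, which is the usual argument for probabilistic auditing in this line of work.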

