A Novel Methodology for Capitalizing on Cloud Storage through a Big Data-as-a-Service Framework

Author(s):  
Georgios Skourletopoulos ◽  
Constandinos X. Mavromoustakis ◽  
George Mastorakis ◽  
Periklis Chatzimisios ◽  
Jordi Mongay Batalla
2018 ◽  
Vol 4 (3) ◽  
pp. 325-340 ◽  
Author(s):  
Xiaokang Wang ◽  
Laurence T. Yang ◽  
Huazhong Liu ◽  
M. Jamal Deen

Author(s):  
Hong-Mei Chen ◽  
Rick Kazman ◽  
Serge Haziyev ◽  
Valentyn Kropov ◽  
Dmitri Chtchourov

2019 ◽  
Vol 8 (2S11) ◽  
pp. 3606-3611

Big data privacy has assumed importance as cloud computing has become a phenomenal success in providing a remote platform for sharing computing resources without geographical or time restrictions. However, privacy concerns over big data outsourced to public cloud storage still exist. Various anonymization and sanitization techniques have emerged to protect big data from privacy attacks. In our prior work, we proposed a misusability-probability-based metric to estimate the probable percentage of misuse, and we additionally designed a framework, driven by this metric, that suggests a level of sanitization before privacy protection is actually applied to big data. In this paper, we focus on further evaluating our misuse-probability-based sanitization approach by defining an algorithm that analyses the trade-offs between misuse probability and level of sanitization. We describe the proposed framework and misusability measure and evaluate the framework with an empirical study conducted in a public cloud environment using Amazon EC2 (compute), S3 (storage), and EMR (MapReduce framework). The experimental results reveal the dynamics of the trade-offs between the two, and the resulting insights support well-informed decisions when sanitizing big data, ensuring it is protected without losing the required utility.
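The trade-off the abstract describes can be sketched as a search for the lowest sanitization level whose misuse probability stays under a tolerance. The functions below are illustrative assumptions only, not the authors' actual misusability metric or sanitization procedure:

```python
# Hypothetical sketch of the misuse-probability vs. sanitization trade-off.
# The linear models for misuse risk and utility are assumptions for
# illustration; the paper's actual metric would replace them.

def misuse_probability(level):
    """Assumed model: misuse risk falls as the sanitization level rises."""
    return max(0.0, 1.0 - 1.2 * level)

def utility(level):
    """Assumed model: data utility falls linearly with sanitization."""
    return 1.0 - level

def choose_sanitization_level(max_misuse=0.2, step=0.05):
    """Return the lowest sanitization level (in [0, 1]) whose misuse
    probability stays under the threshold, preserving maximum utility."""
    level = 0.0
    while level <= 1.0:
        if misuse_probability(level) <= max_misuse:
            return level, misuse_probability(level), utility(level)
        level += step
    return 1.0, misuse_probability(1.0), utility(1.0)

level, risk, util = choose_sanitization_level()
```

Under these assumed models, tightening the misuse threshold forces a higher sanitization level and therefore lower utility, which is exactly the dynamic the empirical study examines.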


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Lin Yang

In recent years, cloud data has received growing attention. However, because users do not have absolute control over data stored on a cloud server, the cloud storage provider must supply evidence that the data is stored intact so that users retain control over it. When users are granted full management rights, they can independently install operating systems and applications and can choose self-service platforms and remote management tools to manage and control the host according to their own preferences. This paper introduces a cloud data integrity verification algorithm for sustainable computing in accounting informatization and studies the strengths and weaknesses of existing data integrity proof mechanisms as well as the new requirements of the cloud storage environment. We propose an LBT-based big data integrity proof mechanism that introduces a multibranch path tree as the underlying data structure, together with a ranked multibranch path structure and a data integrity detection algorithm. We compare the proposed verification algorithm against two existing integrity verification algorithms in simulation experiments. The results show that, for 500 data blocks, the proposed scheme is about 10% faster than scheme 1 and about 5% faster than scheme 2 in computing time; as the number of operated data blocks grows, the execution time of schemes 1 and 2 increases, while that of the proposed scheme remains unchanged, and its computational cost is also lower than that of schemes 1 and 2. The proposed scheme can not only verify the integrity of cloud storage data but also offers clear verification advantages, giving it practical significance for big data integrity verification.
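The idea of a multibranch hash tree for block-level integrity checking can be sketched as follows. The branching factor and the flat rebuild-and-compare check are illustrative assumptions, not the paper's LBT construction with rank bookkeeping:

```python
# Illustrative multibranch hash tree for data-block integrity, in the
# spirit of the abstract's multibranch path tree. BRANCH and the full
# rebuild check are assumptions for illustration; a real scheme would
# verify individual blocks via short sibling-hash proof paths.
import hashlib

BRANCH = 4  # assumed branching factor; wider trees mean shorter proof paths


def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def build_root(blocks):
    """Hash each block, then fold BRANCH children into one parent per level
    until a single root digest remains."""
    level = [h(b) for b in blocks]
    while len(level) > 1:
        level = [h(b"".join(level[i:i + BRANCH]))
                 for i in range(0, len(level), BRANCH)]
    return level[0]


# The verifier keeps only the root; the stored blocks must reproduce it.
blocks = [b"block-%d" % i for i in range(16)]
root = build_root(blocks)
assert build_root(blocks) == root        # unmodified data verifies
tampered = blocks[:3] + [b"tampered"] + blocks[4:]
assert build_root(tampered) != root      # any changed block is detected
```

Because any modified block changes its leaf hash and propagates to the root, comparing the recomputed root against the stored one detects tampering; the tree structure exists so that, in a full scheme, this check needs only a logarithmic number of hashes per challenged block rather than a full rebuild.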


Author(s):  
Mohsen Karimzadeh Kiskani ◽  
Hamid R. Sadjadpour ◽  
Mohammad Reza Rahimi ◽  
Fred Etemadieh
