Storage and Security Preservation Using Cloud Based Intelligent Compression Scheme

Author(s):  
M. Chinnadurai ◽  
A. Jayashri

Cloud computing has reached a level of maturity that leads it into a productive phase. This means that most of the main problems with cloud computing have been addressed to a degree that clouds have become interesting for full commercial exploitation. However, concerns over data security still prevent many users from migrating data to remote storage. Client-side data compression in particular ensures that multiple uploads of the same content consume only the network bandwidth and storage space of a single upload. Compression is actively used by a number of cloud backup providers as well as various cloud services. Unfortunately, encrypted data is pseudorandom and thus cannot be deduplicated: as a consequence, current schemes have to entirely sacrifice either security or storage efficiency. In this system, we present a scheme that permits a more fine-grained trade-off. The intuition is that outsourced data may require different levels of protection depending on how popular it is: content shared by many users arguably needs less protection than unique, personal content. We then present a novel idea that differentiates data according to its popularity. In the proposed system, we implement an encryption scheme that guarantees semantic security for unpopular data and provides weaker security but better storage and bandwidth benefits for popular data. In this way, data deduplication can be effective for popular data, while semantically secure encryption protects unpopular content. Finally, a backup and recovery system can be used when access is blocked, and frequent login access can also be analyzed.
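The popularity-based trade-off lends itself to a short sketch. The toy below is our illustration, not the authors' implementation: the threshold, the XOR keystream standing in for a real cipher such as AES, and the server seeing plaintext fingerprints are all simplifying assumptions. Popular content is encrypted under a convergent key derived from the content itself, so identical uploads collapse into one stored blob, while unpopular content gets a fresh random key and stays semantically secure:

```python
import hashlib
import os

POPULARITY_THRESHOLD = 3  # assumption: this many uploads makes content "popular"

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: XOR with a SHA-256-derived keystream (stand-in for AES).
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

class PopularityStore:
    def __init__(self):
        # Simplification: a real scheme must not expose plaintext hashes to the server.
        self.counts = {}  # content fingerprint -> uploads seen
        self.blobs = {}   # storage label -> ciphertext

    def upload(self, data: bytes) -> str:
        fp = hashlib.sha256(data).hexdigest()
        self.counts[fp] = self.counts.get(fp, 0) + 1
        if self.counts[fp] >= POPULARITY_THRESHOLD:
            # Popular: convergent key H(data), so equal plaintexts share one blob.
            key, label = hashlib.sha256(data).digest(), "pop:" + fp
        else:
            # Unpopular: fresh random key, semantically secure, never deduplicated.
            key, label = os.urandom(32), "unp:" + os.urandom(8).hex()
        self.blobs.setdefault(label, keystream_xor(key, data))
        return label
```

Raising POPULARITY_THRESHOLD enlarges the semantically protected set at the expense of storage savings, which is exactly the fine-grained trade-off described above.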

Author(s):  
Gokulakrishnan V ◽  
Illakiya B

With the rapidly increasing amounts of data produced worldwide, networked and multi-user storage systems are becoming very popular. However, concerns over data security still prevent many users from migrating data to remote storage. The conventional solution is to encrypt the data before it leaves the owner's premises. While sound from a security perspective, this approach prevents the storage provider from effectively applying storage efficiency functions, such as compression and deduplication, which would allow optimal usage of the resources and consequently lower service cost. Client-side data deduplication in particular ensures that multiple uploads of the same content consume only the network bandwidth and storage space of a single upload. Deduplication is actively used by a number of backup providers as well as various data services. In this project, we present a scheme that permits the storage of multiple types of files without duplication. The intuition is that outsourced data may require different levels of protection. Based on this idea, we design an encryption scheme that guarantees semantic security for unpopular data and provides weaker security but better storage and bandwidth benefits for popular data. This way, data deduplication can be effective for popular data, whilst semantically secure encryption protects unpopular content. A backup and recovery system can be used when access is blocked, and frequent login access can also be analyzed.
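As an illustration of the client-side bandwidth saving, here is a minimal sketch (an assumed design, not the project's code) in which the client transmits a fingerprint first and uploads the bytes only if the server has never stored that content:

```python
import hashlib

class DedupServer:
    def __init__(self):
        self.store = {}  # fingerprint -> content

    def has(self, fingerprint: str) -> bool:
        return fingerprint in self.store

    def put(self, fingerprint: str, data: bytes) -> None:
        self.store[fingerprint] = data

def client_upload(server: DedupServer, data: bytes) -> str:
    fp = hashlib.sha256(data).hexdigest()
    if not server.has(fp):           # only this branch costs bandwidth
        server.put(fp, data)
    return fp                        # handle the client keeps for retrieval

server = DedupServer()
a = client_upload(server, b"quarterly-report.pdf bytes")
b = client_upload(server, b"quarterly-report.pdf bytes")  # deduplicated
assert a == b and len(server.store) == 1
```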


Author(s):  
Ramya. S ◽  
Gokula Krishnan. V

Big data has reached a maturity that leads it into a productive phase. This means that most of the main issues with big data have been addressed to a degree that storage has become interesting for full commercial exploitation. However, concerns over storage efficiency still prevent many users from migrating data to remote storage. Client-side data compression in particular ensures that multiple uploads of the same content consume only the network bandwidth and storage space of a single upload. Compression is actively used by a number of backup providers as well as various services. Unfortunately, compressed data is pseudorandom and thus cannot be deduplicated: as a consequence, current schemes have to entirely sacrifice storage efficiency. In this system, we present a scheme that permits a more fine-grained trade-off, together with a novel idea that differentiates data according to its popularity. Based on this idea, we design a compression scheme that guarantees semantic storage preservation for unpopular data and provides scalable storage and bandwidth benefits for popular data. We implement a variable data chunk similarity algorithm to analyze the chunked data and store the original data in compressed form, and we include an encryption algorithm to secure the data. Finally, a backup and recovery system can be used when access is blocked, and frequent login access can also be analyzed.
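The chunk-level analysis can be sketched with content-defined chunking. The sketch below is our reading of the variable-chunk idea: the window size, boundary mask, and minimum chunk size are assumptions, and the per-position SHA-256 stands in for a true rolling hash such as a Rabin fingerprint, which would update incrementally instead of rehashing the window. Boundaries are cut from the data itself, so identical regions deduplicate even after insertions shift byte offsets, and each unique chunk is stored compressed:

```python
import hashlib
import zlib

WINDOW, MASK = 16, 0x3FF  # boundary where the window hash ends in 10 zero bits

def chunk(data: bytes, min_size: int = 256):
    chunks, start = [], 0
    for i in range(min_size, len(data)):
        window = data[i - WINDOW:i]
        h = int.from_bytes(hashlib.sha256(window).digest()[:4], "big")
        if h & MASK == 0 and i - start >= min_size:
            chunks.append(data[start:i])
            start = i
    chunks.append(data[start:])  # final (possibly short) chunk
    return chunks

def store(data: bytes, index: dict) -> list:
    labels = []
    for c in chunk(data):
        fp = hashlib.sha256(c).hexdigest()
        index.setdefault(fp, zlib.compress(c))  # each unique chunk stored once
        labels.append(fp)
    return labels  # recipe of fingerprints to rebuild the file
```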


Author(s):  
Saravanan K ◽  
P. Srinivasan

Cloud IoT has evolved from the convergence of Cloud computing with the Internet of Things (IoT). Networked devices in the IoT world are growing exponentially in the distributed computing paradigm and thus require the power of the Cloud to access and share computing and storage. The Cloud offers scalable on-demand services to IoT devices for effective communication and knowledge sharing, and it alleviates the computational load of the IoT, which makes the devices smarter. This chapter explores the different IoT services offered by the Cloud as well as the application domains that benefit from Cloud IoT. The challenges of offloading IoT computation into the Cloud are also discussed.


Sensors ◽  
2019 ◽  
Vol 19 (12) ◽  
pp. 2783 ◽  
Author(s):  
Kun Ma ◽  
Antoine Bagula ◽  
Clement Nyirenda ◽  
Olasupo Ajayi

The internet of things (IoT) and cloud computing are two technologies which have recently changed both academia and industry and impacted our daily lives in different ways. However, despite their impact, both technologies have their shortcomings. Though cheap and convenient, cloud services consume a huge amount of network bandwidth. Furthermore, the physical distance between data source(s) and the data centre makes delays a frequent problem in cloud computing infrastructures. Fog computing has been proposed as a distributed service computing model that provides a solution to these limitations. It is based on a para-virtualized architecture that fully utilizes the computing functions of terminal devices and the advantages of local proximity processing. This paper proposes a multi-layer IoT-based fog computing model called IoT-FCM, which uses a genetic algorithm for resource allocation between the terminal layer and the fog layer, and a multi-sink version of the least interference beaconing protocol (LIBP), called the least interference multi-sink protocol (LIMP), to enhance fault-tolerance/robustness and reduce the energy consumption of the terminal layer. Simulation results show that, compared to the popular max–min and fog-oriented max–min, IoT-FCM performs better by reducing the distance between terminals and fog nodes by at least 38% and reducing energy consumed by an average of 150 kWh, while remaining at par with the other algorithms in terms of delay for a high number of tasks.
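The genetic-algorithm step can be illustrated with a toy. In the sketch below, IoT-FCM's actual encoding, operators, and fitness function are not reproduced; the grid, fog-node positions, and GA parameters are all invented for illustration. A chromosome assigns each terminal to a fog node, and fitness penalizes total terminal-to-fog distance, which is the quantity the paper reports reducing:

```python
import random

TERMINALS = [(x, y) for x in range(10) for y in range(10)]  # 100 terminals on a grid
FOG_NODES = [(2, 2), (2, 7), (7, 2), (7, 7)]

def fitness(assign):  # lower is better: total Manhattan distance to assigned node
    return sum(abs(tx - FOG_NODES[f][0]) + abs(ty - FOG_NODES[f][1])
               for (tx, ty), f in zip(TERMINALS, assign))

def evolve(pop_size=40, generations=200, mutation=0.02):
    pop = [[random.randrange(len(FOG_NODES)) for _ in TERMINALS]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]              # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TERMINALS))
            child = a[:cut] + b[cut:]              # one-point crossover
            child = [random.randrange(len(FOG_NODES))
                     if random.random() < mutation else g for g in child]
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
print("total terminal-to-fog distance:", fitness(best))
```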


Author(s):  
G. Nivedhitha ◽  
R. Ilakkiya

Cloud computing is a way to increase capacity or add capabilities dynamically without any upfront investment. Despite the growth achieved in cloud computing, security is still questionable, which impacts adoption of the cloud model. Besides the network and application security measures already adopted, there must be a mechanism that authenticates the user when accessing cloud services, bound to the rules agreed between the cloud computing provider and the client side. Existing systems provide authentication based on encryption keys; encryption algorithms are either symmetric-key or asymmetric-key based. Both approaches have a major problem related to encryption key management, i.e., how to securely generate, store, access, and exchange secret keys. In this paper, an optimized infrastructure for secure authentication and authorization in a cloud environment using SSO (Single Sign-On) is proposed. SSO is the process of authenticating once to gain access to multiple resources; it aims at reducing the number of logins and passwords in a heterogeneous environment and at balancing security, efficiency, and usability. An authentication model for cloud computing based on the Kerberos protocol, providing single sign-on and protection against DDoS attacks, is also presented in this paper.
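The single-sign-on flow reduces, in caricature, to issuing one verifiable ticket that many services accept. The sketch below is a deliberate simplification: real Kerberos involves a KDC, ticket-granting tickets, and per-service session keys, none of which are modeled, and the shared verification key is an assumption. It signs an expiring claim once and lets any service holding the key check it:

```python
import hashlib
import hmac
import json
import time

AUTH_KEY = b"shared-secret-between-auth-server-and-services"  # assumed

def issue_ticket(user: str, ttl_seconds: int = 3600) -> str:
    body = json.dumps({"user": user, "exp": time.time() + ttl_seconds})
    sig = hmac.new(AUTH_KEY, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def verify_ticket(ticket: str) -> str | None:
    body, _, sig = ticket.rpartition(".")     # hex signature contains no dots
    expected = hmac.new(AUTH_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None                           # forged or tampered ticket
    claims = json.loads(body)
    if claims["exp"] < time.time():
        return None                           # expired: user must sign in again
    return claims["user"]                     # any service can now trust this

ticket = issue_ticket("alice")                # one login ...
assert verify_ticket(ticket) == "alice"       # ... accepted by many services
```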


2014 ◽  
Vol 513-517 ◽  
pp. 2273-2276
Author(s):  
Shao Min Zhang ◽  
Jun Ran ◽  
Bao Yi Wang

The Ciphertext-Policy Attribute-Based Encryption (CP-ABE) mechanism is an extension of attribute-based encryption that associates the ciphertext and the user's private key with attributes by treating the attribute as a public key. It makes the representation of access control policies more flexible, thus greatly reducing the network bandwidth and sender-side processing overhead incurred by fine-grained access control over shared data. Following the principles of the CP-ABE encryption mechanism, an improved cloud-based encryption algorithm is proposed in this paper to overcome the deficiencies of the permission-change process for massive data. Experimental results show that, compared with traditional methods, the new mechanism significantly reduces the time consumed.
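What a ciphertext policy expresses can be shown structurally. The sketch below covers only the access-control logic; real CP-ABE enforces the policy cryptographically with bilinear pairings, which this toy does not attempt. A key's attribute set grants access exactly when it satisfies the policy tree attached to the ciphertext:

```python
def satisfies(policy, attributes: set) -> bool:
    """policy: an attribute string, or a tuple ("AND"|"OR", left, right)."""
    if isinstance(policy, str):
        return policy in attributes
    op, left, right = policy
    if op == "AND":
        return satisfies(left, attributes) and satisfies(right, attributes)
    return satisfies(left, attributes) or satisfies(right, attributes)

# Ciphertext policy: (doctor AND cardiology) OR auditor
policy = ("OR", ("AND", "doctor", "cardiology"), "auditor")
assert satisfies(policy, {"doctor", "cardiology"})
assert satisfies(policy, {"auditor", "intern"})
assert not satisfies(policy, {"doctor", "oncology"})
```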


Author(s):  
K. Vinod Kumar ◽  
Ranvijay Ranvijay

Recently, the utilization of cloud services such as storage, various software, and networking resources has increased greatly due to widespread worldwide demand for these services. On the other hand, this requires a huge amount of storage and resource management to cope with the ever-increasing demand. The high demand for these cloud services can lead to high energy consumption in cloud centers. Therefore, to eliminate these drawbacks and improve energy consumption and storage in real time for cloud computing devices, we present a Cache Optimization Cloud Scheduling (COCS) algorithm based on last-level caches, which relies upon Dynamic Voltage and Frequency Scaling (DVFS), to ensure high cache memory optimization and to enhance the processing speed of the I/O subsystem in a cloud computing environment. The proposed COCS technique helps reduce last-level cache failures and average memory latencies in cloud computing multi-processor devices, and it provides an efficient mathematical model to minimize energy consumption. We have tested our experiment on the CyberShake scientific dataset, and the experimental results are compared with different conventional techniques in terms of time taken to accomplish tasks, power consumed in the VMs, and average power required to handle tasks.
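The DVFS lever that COCS builds on can be sketched with a back-of-the-envelope model; the paper's actual cost model and parameters are not reproduced here, and the frequency list, the voltage-tracks-frequency assumption, and the constant k are all illustrative. Because dynamic power scales roughly with V²f, the slowest frequency that still meets a task's deadline minimizes energy per task:

```python
FREQS_GHZ = [0.8, 1.2, 1.6, 2.0, 2.4]   # assumed available frequency states

def slowest_feasible(cycles: float, deadline_s: float) -> float:
    for f in FREQS_GHZ:                  # ascending: prefer the lowest frequency
        if cycles / (f * 1e9) <= deadline_s:
            return f
    return FREQS_GHZ[-1]                 # deadline unmeetable: run flat out

def energy_joules(cycles: float, f_ghz: float, k: float = 1.0) -> float:
    v = f_ghz / max(FREQS_GHZ)           # toy model: voltage tracks frequency
    power = k * v * v * (f_ghz * 1e9)    # P ~ C * V^2 * f (k is a toy constant)
    return power * (cycles / (f_ghz * 1e9))

f = slowest_feasible(2e9, deadline_s=2.0)   # 2G cycles, 2 s budget -> 1.2 GHz
print(f, energy_joules(2e9, f), energy_joules(2e9, max(FREQS_GHZ)))
```

With these toy constants the deadline-aware choice uses roughly a quarter of the energy of running at the top frequency, which is the effect DVFS-based schedulers exploit.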


2018 ◽  
Vol 7 (2.20) ◽  
pp. 150
Author(s):  
L Archana ◽  
K P. K. Devan ◽  
P Harikumar

Cloud computing has already taken root in many industries. It has become a fascinating choice for small-budget organizations, as on-demand resources are available on a pay-as-you-use basis. However, the security of data stored at cloud servers is still a big question for organizations in today's digital era, where information is money. Large organizations are reluctant to switch to cloud services because of the threat of their data being manipulated. Cloud service providers claim to provide robust security mechanisms maintained by third parties, yet there have been many reported incidents of security breaches in cloud environments in the past few years. Thus, there is a need for robust security mechanisms to be adopted by cloud service providers in order for cloud computing to excel. Since a vast amount of data resides in the cloud, the storage of that data must be treated with a high rank of significance. In existing systems, no efficient hybrid algorithms are used, so security and storage are compromised to a significant degree. We propose AES together with a fully homomorphic algorithm to encrypt the data; the file size is thereby compressed, increasing data security and storage capacity.
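The hybrid pipeline the abstract implies, compress first and then encrypt, can be sketched as follows. AES-GCM from the third-party `cryptography` package stands in for the AES stage; the fully homomorphic component is omitted, since practical FHE needs a dedicated library and does not fit in a few lines. The ordering matters: ciphertext is pseudorandom and incompressible, so compression must come first:

```python
import os
import zlib

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def protect(data: bytes, key: bytes) -> bytes:
    packed = zlib.compress(data)              # compress first ...
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, packed, None)  # ... then encrypt

def recover(blob: bytes, key: bytes) -> bytes:
    nonce, ct = blob[:12], blob[12:]
    return zlib.decompress(AESGCM(key).decrypt(nonce, ct, None))

key = AESGCM.generate_key(bit_length=256)
blob = protect(b"cloud record " * 1000, key)
assert recover(blob, key) == b"cloud record " * 1000
print(len(blob), "bytes stored for", 13 * 1000, "bytes of plaintext")
```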


2014 ◽  
Vol 2014 ◽  
pp. 1-15 ◽  
Author(s):  
P. Peer ◽  
Ž. Emeršič ◽  
J. Bule ◽  
J. Žganec-Gros ◽  
V. Štruc

Cloud computing represents one of the fastest growing areas of technology and offers a new computing model for various applications and services. This model is particularly interesting for the area of biometric recognition, where scalability, processing power, and storage requirements are becoming a bigger and bigger issue with each new generation of recognition technology. Next to the availability of computing resources, another important aspect of cloud computing with respect to biometrics is accessibility. Since biometric cloud services are easily accessible, it is possible to combine different existing implementations and design new multibiometric services that, next to almost unlimited resources, also offer superior recognition performance and, consequently, ensure improved security for their client applications. Unfortunately, the literature on the best strategies for combining existing implementations of cloud-based biometric experts into a multibiometric service is virtually nonexistent. In this paper, we try to close this gap and evaluate different strategies for combining existing biometric experts into a multibiometric cloud service. We analyze the (fusion) strategies from different perspectives, such as performance gains, training complexity, and resource consumption, and present results and findings important to software developers and other researchers working in the areas of biometrics and cloud computing. The analysis is conducted based on two biometric cloud services, which are also presented in the paper.
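One of the simplest fusion strategies of the kind evaluated in such studies is weighted score-level fusion. In the sketch below, the normalization bounds and weights are assumptions; in practice both would be estimated on a training set. Each expert's match score is min-max normalized to a common scale, and the fused score is a weighted sum compared against a single threshold:

```python
def min_max(score: float, lo: float, hi: float) -> float:
    # Map an expert's raw score onto [0, 1] using its known score range.
    return (score - lo) / (hi - lo)

def fuse(scores: dict, ranges: dict, weights: dict) -> float:
    return sum(weights[e] * min_max(s, *ranges[e]) for e, s in scores.items())

ranges = {"face": (0.0, 1.0), "fingerprint": (0.0, 480.0)}  # per-expert scales
weights = {"face": 0.4, "fingerprint": 0.6}                 # assumed weights
fused = fuse({"face": 0.82, "fingerprint": 310.0}, ranges, weights)
accept = fused >= 0.5          # single decision threshold on the fused score
print(round(fused, 3), accept)
```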


2018 ◽  
Vol 2018 ◽  
pp. 1-13 ◽  
Author(s):  
Zhengyang Song ◽  
Yucong Duan ◽  
Shixiang Wan ◽  
Xiaobing Sun ◽  
Quan Zou ◽  
...  

Wide application of Internet of Things (IoT) systems has been placing increasing demands on hardware facilities for processing various resources, including data, information, and knowledge. With the rapid growth in the quantity of generated resources, it is difficult to adapt to this situation using traditional cloud computing models. Fog computing enables storage and computing services to be performed at the edge of the network to extend cloud computing. However, Fog computing applications face problems such as restricted computation, limited storage, and expensive network bandwidth, and balancing the distribution of network resources is a challenge. We propose a processing optimization mechanism for typed resources with synchronized storage and computation adaptation in Fog computing. In this mechanism, we process typed resources in a wireless-network-based three-tier architecture consisting of a Data Graph, an Information Graph, and a Knowledge Graph. The proposed mechanism aims to minimize the processing cost over network, computation, and storage while maximizing processing performance in a business-value-driven manner. Simulation results show that the proposed approach improves the ratio of performance over user investment, while conversions between resource types support dynamic allocation of network resources.
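The cost trade-off can be illustrated with a miniature placement problem. The paper's cost model over the Data, Information, and Knowledge Graphs is far richer; the tiers, unit costs, and demands below are invented for illustration. The sketch brute-forces the assignment of typed resources to tiers that minimizes combined network, computation, and storage cost:

```python
from itertools import product

TIERS = ["edge", "fog", "cloud"]
# assumed unit costs per tier: (network, computation, storage)
COST = {"edge": (0.0, 5.0, 4.0), "fog": (1.0, 2.0, 2.0), "cloud": (3.0, 1.0, 1.0)}
# typed resources to place: (name, network demand, compute demand, storage demand)
RESOURCES = [("data", 4.0, 1.0, 5.0), ("information", 2.0, 3.0, 2.0),
             ("knowledge", 1.0, 6.0, 1.0)]

def total_cost(assignment):
    cost = 0.0
    for (_, net, cpu, sto), tier in zip(RESOURCES, assignment):
        n, c, s = COST[tier]
        cost += net * n + cpu * c + sto * s
    return cost

best = min(product(TIERS, repeat=len(RESOURCES)), key=total_cost)
for (name, *_), tier in zip(RESOURCES, best):
    print(name, "->", tier)
print("minimum cost:", total_cost(best))
```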

