Order Dataset Release Scheme Based on Safe K-Anonymization for Privacy Protection in Cloud Manufacturing

Author(s):  
Hui Xiu
Xuemei Jiang
Xiaomei Zhang

Cloud manufacturing is a new model that increases manufacturing and business benefits by sharing manufacturing resources. While these shared resources bring users convenience, they may also be maliciously analyzed by an attacker, resulting in the disclosure of personal or corporate privacy. In this paper, we discuss the privacy disclosure problem in cloud manufacturing and propose a method for securely releasing order data that reflects the complex relationships between enterprises and other vendors. To address the risk of privacy leakage during data analysis or data mining, we improve the traditional anonymous-release method for original order data and introduce the idea of safe k-anonymization. To protect sensitive information in the data, we analyze users' differing demands for order data in cloud manufacturing, use a sampling function satisfying (β, ε, δ)-DPS to increase the uncertainty of differential privacy, improve the k-anonymization method, and apply generalization, concealment, and reduced data association to different attributes. The improved method not only preserves the statistical characteristics of the data but also protects the private information in order data in the cloud manufacturing environment.
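The generalization step of a k-anonymity check can be sketched as follows. The attribute names, the age-band width, and the order records are illustrative assumptions, not the paper's actual scheme:

```python
from collections import Counter

def generalize_age(age, width=10):
    """Generalize an exact age into a width-year band, e.g. 37 -> '30-39'."""
    lo = (age // width) * width
    return f"{lo}-{lo + width - 1}"

def is_k_anonymous(records, k):
    """True if every generalized quasi-identifier combination occurs >= k times."""
    counts = Counter((generalize_age(r["age"]), r["city"]) for r in records)
    return all(c >= k for c in counts.values())

orders = [
    {"age": 34, "city": "Hangzhou"},
    {"age": 37, "city": "Hangzhou"},
    {"age": 31, "city": "Hangzhou"},
    {"age": 52, "city": "Suzhou"},
    {"age": 55, "city": "Suzhou"},
]
print(is_k_anonymous(orders, 2))  # True: every generalized group has at least 2 records
print(is_k_anonymous(orders, 3))  # False: the Suzhou '50-59' group has only 2
```

When the check fails, a release scheme would widen the generalization bands or suppress the offending records before publishing.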

Author(s):  
Poushali Sengupta
Sudipta Paul
Subhankar Mishra

The leakage of data can have severe personal consequences if it contains sensitive information. Common prevention methods such as encryption-decryption, endpoint protection, and intrusion detection systems remain prone to leakage. Differential privacy comes to the rescue with a formal promise of protection against leakage: it applies a randomized response technique at the time of data collection, which provides strong privacy with good utility. Differential privacy allows one to describe the forest of data by the patterns of its groups without disclosing any individual tree. The current adoption of differential privacy by leading tech companies and academia encourages the authors to explore the topic in detail. The different aspects of differential privacy, its application to privacy protection and information leakage, a comparative discussion of current research approaches in this field, its utility in the real world, and the associated trade-offs will be discussed.
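The randomized response technique mentioned above can be shown with a minimal sketch; the truth-telling probability and the survey setup are assumptions for illustration:

```python
import random

def randomized_response(truth, p_truth=0.75):
    """Report the true bit with probability p_truth; otherwise report a fair coin."""
    if random.random() < p_truth:
        return truth
    return random.random() < 0.5

def estimate_rate(responses, p_truth=0.75):
    """Unbiased estimate of the true 'yes' rate from the noisy responses."""
    observed = sum(responses) / len(responses)
    # E[observed] = p_truth * true_rate + (1 - p_truth) * 0.5, so invert:
    return (observed - (1 - p_truth) * 0.5) / p_truth

random.seed(0)
true_rate = 0.30
responses = [randomized_response(random.random() < true_rate) for _ in range(100_000)]
print(round(estimate_rate(responses), 3))  # lands near the true rate of 0.30
```

No individual response reveals the respondent's true bit with certainty, yet the aggregate rate is still recoverable; this is the forest-without-the-trees property the survey describes.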


2014
Vol 2014
pp. 1-9
Author(s):
Kok-Seng Wong
Myung Ho Kim

Advances in both sensor technologies and network infrastructures have encouraged the development of smart environments that enhance people's lives and lifestyles. However, collecting and storing users' data in smart environments poses severe privacy concerns, because these data may contain sensitive information about the subject. Hence, privacy protection is an emerging issue that must be considered, especially when data sharing is essential for analysis purposes. In this paper, we consider the case where two agents in a smart environment want to measure the similarity of their collected or stored data. We use the similarity coefficient function FSC as the measurement metric for the comparison, under a differential privacy model. Unlike existing solutions, our protocol can serve more than one request to compute FSC without modification. Our solution ensures privacy protection for both the inputs and the computed FSC results.
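As a rough illustration of releasing a noisy similarity score between two agents' data, the sketch below uses Jaccard similarity with Laplace noise as a stand-in for the paper's FSC protocol; the sensitivity bound, the event encoding, and the epsilon value are all illustrative assumptions:

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise via the inverse-CDF transform."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def private_similarity(a, b, epsilon=1.0):
    """Jaccard similarity of two event sets, perturbed with Laplace noise.
    Sensitivity is taken as 1/|union| here -- an illustrative assumption."""
    union = a | b
    true_sim = len(a & b) / len(union)
    return true_sim + laplace_noise(1.0 / (epsilon * len(union)))

random.seed(1)
readings_a = {"motion:07:00", "door:07:05", "light:07:10", "temp:08:00"}
readings_b = {"door:07:05", "light:07:10", "temp:08:00", "motion:09:00"}
print(private_similarity(readings_a, readings_b, epsilon=2.0))
```

Because only the perturbed score leaves each agent, repeated similarity requests degrade privacy gradually (by budget composition) rather than exposing the raw inputs.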


Sensors
2020
Vol 20 (9)
pp. 2516
Author(s):  
Chunhua Ju
Qiuyang Gu
Gongxing Wu
Shuangzhu Zhang

Although the crowd-sensing perception system brings great data value to people through the release and analysis of high-dimensional perception data, it also poses a serious hidden danger to participants' privacy. Various privacy protection methods based on differential privacy have been proposed, but most cannot simultaneously address the complex attribute associations within high-dimensional perception data and the privacy threats posed by untrustworthy servers. To address this problem, we put forward a local privacy protection mechanism based on a Bayes network for high-dimensional perceptual data. This mechanism protects user data locally from the very beginning, eliminating the possibility of other parties directly accessing the user's original data and thus fundamentally protecting the user's data privacy. After receiving the locally protected user data, the perception server recognizes the dimensional correlations of the high-dimensional data based on the Bayes network, divides the high-dimensional attribute set into multiple relatively independent low-dimensional attribute sets, and then sequentially synthesizes a new dataset. This effectively retains the attribute correlations of the original perception data and ensures that the synthetic dataset has statistical characteristics as similar as possible to the original dataset. To verify its effectiveness, we conduct a multitude of simulation experiments. The results show that the synthetic data produced by this mechanism, under effective local privacy protection, retains relatively high data utility.
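The attribute-splitting idea, grouping correlated dimensions so each low-dimensional set can be handled independently, can be sketched with empirical mutual information as the correlation measure. The MI threshold and the greedy grouping are simplifying assumptions, not the paper's Bayes-network construction:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information (in nats) between two discrete columns."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p = c / n
        mi += p * math.log(p * n * n / (px[x] * py[y]))
    return mi

def split_attributes(columns, threshold=0.1):
    """Greedily group attribute names whose pairwise MI exceeds the threshold."""
    groups = []
    for name in columns:
        placed = False
        for g in groups:
            if any(mutual_information(columns[name], columns[other]) > threshold
                   for other in g):
                g.append(name)
                placed = True
                break
        if not placed:
            groups.append([name])
    return groups

a = [i % 2 for i in range(100)]   # attribute 1
b = list(a)                        # perfectly correlated with attribute 1
c = [0] * 50 + [1] * 50            # independent of a and b
print(split_attributes({"a": a, "b": b, "c": c}))  # [['a', 'b'], ['c']]
```

Each resulting group could then be synthesized separately, which is the dimensionality reduction the abstract relies on.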


2016
Vol 2016
pp. 1-11
Author(s):  
Nan Feng
Zhiqi Hao
Sibo Yang
Harris Wu

With the pervasive use of wireless sensor networks (WSNs) within commercial environments, business privacy leakage due to the exposure of sensitive information transmitted in a WSN has become a major issue for enterprises. We examine business privacy protection in the application of WSNs. We propose a business privacy-protection system (BPS) that is modeled as a hierarchical profile in order to filter sensitive information with respect to enterprise-specified privacy requirements. The BPS aims at solving a tradeoff between metrics that are defined to estimate the utility of information and the business privacy risk. We design profile, risk assessment, and filtration agents to implement the BPS based on multiagent technology. The effectiveness of our proposed BPS is validated by experiments.


2014
Vol 8 (1)
pp. 13-21
Author(s):  
Arkadiusz Liber

Introduction: Medical documentation must be protected against damage or loss, in compliance with its integrity and credibility, with permanent access for authorized staff, and protected against access by unauthorized persons. Anonymization is one of the methods to safeguard the data against disclosure.
Aim of the study: The study aims to analyze methods of anonymization, methods of protecting anonymized data, and a new type of privacy protection enabling the entity the data concerns to control its sensitive data.
Material and methods: Analytical and algebraic methods were used.
Results: The study delivers materials supporting the choice and analysis of ways of anonymizing medical data, and develops a new privacy protection solution enabling the control of sensitive data by the entities the data concerns.
Conclusions: In the paper, an analysis of data anonymization solutions used for medical data privacy protection was conducted. The methods k-Anonymity, (X,y)-Anonymity, (a,k)-Anonymity, (k,e)-Anonymity, (X,y)-Privacy, LKC-Privacy, l-Diversity, (X,y)-Linkability, t-Closeness, Confidence Bounding, and Personalized Privacy were described, explained, and analyzed. An analysis of solutions allowing owners to control their sensitive data was also conducted. Apart from the existing anonymization methods, methods of protecting anonymized data were analyzed, in particular d-Presence, e-Differential Privacy, (d,g)-Privacy, (a,b)-Distributing Privacy, and protections against (c,t)-Isolation. The author introduces a new solution for the controlled protection of privacy, based on marking a protected field and multi-key encryption of the sensitive value. The suggested way of marking fields conforms to the XML standard. For the encryption, an (n,p) different-key cipher was selected: to decipher the content, p of the n keys are used. The proposed solution enables brand new methods for controlling the disclosure of sensitive data.
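The (n,p)-key idea, where any p of the n keys suffice to decipher, can be sketched with Shamir-style secret sharing over a prime field. This is a minimal stand-in for illustration, not the author's cipher:

```python
import random

PRIME = 2_147_483_647  # a Mersenne prime; all share arithmetic is done in this field

def make_shares(secret, n, p):
    """Split `secret` into n shares so that any p of them recover it (Shamir-style)."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(p - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def recover(shares):
    """Recover the secret by Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

random.seed(42)
shares = make_shares(123456, n=5, p=3)
print(recover(shares[:3]))  # any 3 of the 5 shares recover the secret: 123456
```

Fewer than p shares interpolate a polynomial of too low a degree and reveal nothing useful about the secret, which is what lets the data subject distribute control over disclosure.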


2021
Vol 17 (2)
pp. 155014772199340
Author(s):  
Xiaohui Li
Yuliang Bai
Yajun Wang
Bo Li

Suppressing trajectory data before release can effectively reduce the risk of user privacy leakage. However, globally suppressing the data set to satisfy a traditional privacy model reduces the availability of the trajectory data. We therefore propose a trajectory data differential privacy protection algorithm based on local suppression, called trajectory privacy protection based on local suppression (TPLS), which gives the user the ability and flexibility to protect data through local suppression. The main contributions of this article are: (1) introducing a privacy protection method for trajectory data release, (2) performing effective local suppression judgment on the points in the minimum violation sequences of the trajectory data set, and (3) proposing a differential privacy protection algorithm based on local suppression. In the algorithm, we reduce the maximal frequent sequence (MFS) loss rate in the trajectory data set through effective local suppression judgment and by updating the minimum violation sequence set; we then build a classification tree and add noise to its leaf nodes to improve the security of the data to be published. Simulation results show that the proposed algorithm is effective: it reduces the data loss rate and improves data availability while reducing the risk of user privacy leakage.
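The final step of adding noise to the leaf nodes of the classification tree can be sketched with the standard Laplace mechanism; the counts, sensitivity, and clamping below are illustrative assumptions:

```python
import math
import random

def laplace(scale):
    """Sample Laplace(0, scale) noise via the inverse-CDF transform."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def noisy_leaf_counts(leaf_counts, epsilon=1.0, sensitivity=1.0):
    """Perturb each leaf count of the classification tree with Laplace noise,
    clamping at zero so released counts stay non-negative."""
    scale = sensitivity / epsilon
    return [max(0.0, c + laplace(scale)) for c in leaf_counts]

random.seed(7)
print(noisy_leaf_counts([12, 30, 8], epsilon=0.5))
```

Sensitivity 1 corresponds to one trajectory affecting one leaf count by at most one; a tighter analysis of the tree structure could justify a smaller noise scale.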


Author(s):  
Adam Gowri Shankar

Abstract: Body area networks (BANs) collect enormous amounts of data through wearable sensors, including sensitive information such as physical condition and location, which needs protection. Preserving privacy in big data has become an absolute prerequisite for exchanging private data for analysis, validation, and publishing. Traditional methods such as k-anonymity and other anonymization techniques have overlooked privacy protection issues, resulting in privacy infringement. In this work, a differential privacy protection scheme for big data in body area networks is developed. Compared with previous methods, the proposed scheme is superior in terms of availability and reliability. Experimental results demonstrate that, even when the attacker has full background knowledge, the proposed scheme still adds enough interference to big sensitive data to preserve privacy. Keywords: BANs, Privacy, Differential Privacy, Noisy response


2017
Vol 2017
pp. 1-10
Author(s):  
Xiaoye Li
Jing Yang
Zhenlong Sun
Jianpei Zhang

Social networks can be analyzed to discover important social issues; however, the analysis itself can cause privacy disclosure. Edge weights play an important role in social graphs and are often associated with sensitive information (e.g., the price of a commercial trade). In this paper, we propose the MB-CI (Merging Barrels and Consistency Inference) strategy to protect weighted social graphs. By viewing the edge-weight sequence as an unattributed histogram, differential privacy for edge weights can be implemented on the histogram. Considering that some edges in a social network have the same weight, we merge barrels with the same count into one group to reduce the amount of noise required. Moreover, k-indistinguishability between groups is proposed to ensure that differential privacy is not violated, because a simple merging operation may disclose information through the magnitude of the noise itself. To keep most of the shortest paths unchanged, we perform consistency inference according to the original order of the sequence as an important post-processing step. Experimental results show that the proposed approach effectively improves the accuracy and utility of the released data.
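The barrel-merging step can be sketched as follows: histogram bars with the same count share a single Laplace draw, so equal bars remain equal after perturbation. The data and epsilon are illustrative; the paper's k-indistinguishability constraint and consistency inference are not modeled here:

```python
import math
import random
from collections import defaultdict

def laplace(scale):
    """Sample Laplace(0, scale) noise via the inverse-CDF transform."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def merge_and_noise(weight_counts, epsilon=1.0):
    """Merge histogram bars that have identical counts and add one Laplace draw
    per merged group, so bars that were equal stay equal after perturbation."""
    groups = defaultdict(list)
    for weight, count in weight_counts.items():
        groups[count].append(weight)
    noisy = {}
    for count, weights in groups.items():
        perturbed = max(0.0, count + laplace(1.0 / epsilon))
        for w in weights:
            noisy[w] = perturbed
    return noisy

random.seed(3)
hist = {1.0: 5, 2.0: 5, 3.0: 8}       # edge weight -> number of edges with it
released = merge_and_noise(hist, epsilon=0.5)
print(released[1.0] == released[2.0])  # True: merged bars share one noise draw
```

Merging cuts the number of independent noise draws, which is where the accuracy gain over per-bar noising comes from.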


2021
Author(s):  
Panjun Sun

Abstract: Resolving the contradiction between privacy protection and data utility is a research hotspot in the field of privacy protection. Aiming at the tradeoff between privacy and utility in differential privacy offline data release, the optimal differential privacy mechanism is studied using rate-distortion theory. Firstly, based on Shannon communication theory, the noise channel model of differential privacy is abstracted; mutual information and a distortion function are used to measure the privacy and utility of data publishing, and an optimization model based on rate-distortion theory is constructed. Secondly, considering the influence of associated auxiliary background knowledge on mutual information privacy leakage, a mutual information privacy measure based on joint events is proposed, and a minimum privacy leakage model is obtained by modifying the rate-distortion function. Finally, to address the difficulty of solving the model with the Lagrange multiplier method, an approximate algorithm for the mutual-information-optimal channel mechanism is proposed based on alternating iteration. The effectiveness of the proposed iterative approximation method is verified by simulation. The experimental results show that the proposed method reduces mutual information privacy leakage under a distortion constraint and improves data utility under the same privacy tolerance.
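The alternating iteration for a rate-distortion-optimal release channel can be sketched in the style of the classic Blahut-Arimoto algorithm; the source distribution, distortion matrix, and iteration count are illustrative assumptions, not the paper's modified model:

```python
import math

def blahut_arimoto(p_x, dist, beta, iters=200):
    """Alternating minimization for the rate-distortion channel q(y|x).
    p_x: source distribution, dist[x][y]: distortion, beta: Lagrange multiplier."""
    n, m = len(p_x), len(dist[0])
    q_y = [1.0 / m] * m                    # output marginal, initially uniform
    q_y_given_x = []
    for _ in range(iters):
        # channel update: q(y|x) proportional to q(y) * exp(-beta * d(x, y))
        q_y_given_x = []
        for x in range(n):
            row = [q_y[y] * math.exp(-beta * dist[x][y]) for y in range(m)]
            z = sum(row)
            q_y_given_x.append([v / z for v in row])
        # marginal update: q(y) = sum_x p(x) q(y|x)
        q_y = [sum(p_x[x] * q_y_given_x[x][y] for x in range(n)) for y in range(m)]
    return q_y_given_x

# Binary source under Hamming distortion: a large beta (little tolerated
# distortion) drives the release channel toward the identity.
channel = blahut_arimoto([0.5, 0.5], [[0.0, 1.0], [1.0, 0.0]], beta=10.0)
print(channel[0][0] > 0.99, channel[1][1] > 0.99)
```

Sweeping beta traces out the privacy-utility curve: small beta yields a nearly uninformative (high-privacy) channel, large beta a nearly faithful (high-utility) one.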


2021
Vol 38 (5)
pp. 1385-1401
Author(s):  
Chao Liu
Jing Yang
Weinan Zhao
Yining Zhang
Cuiping Shi
...  

Face images, as an information carrier, are rich in sensitive information. Publishing them directly would cause privacy leakage, owing to their naturally weak privacy. Most existing privacy protection methods for face images publish data under a non-interactive framework. However, the ε-effect under this framework covers the entire image, so the noise influence is uniform across the image. To solve this problem, this paper proposes region growing publication (RGP), an algorithm for the interactive publication of face images under differential privacy, which combines the region growing technique with differential privacy. The privacy budget ε is allocated dynamically, and Laplace noise is added, according to the similarity between adjacent sub-images. To measure this similarity more effectively, the fusion similarity measurement mechanism (FSMM) was designed, which better adapts to the intrinsic attributes of images. Different from traditional region growing rules, the FSMM considers various attributes of images, including brightness, contrast, structure, color, texture, and spatial distribution. To further enhance feasibility, RGP was extended to atypical region growing publication (ARGP). While RGP limits the region growing direction to adjacent sub-images, ARGP searches for qualified sub-images across the whole image with the aid of the exponential mechanism, thereby expanding the region merging scope of the seed point. The results show that our algorithm satisfies ε-differential privacy, and the denoised image still has high availability.
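The dynamic allocation of the privacy budget by sub-image similarity can be sketched as follows; the similarity scores, the weighting rule, and the sensitivity are illustrative assumptions, not the FSMM itself:

```python
import math
import random

def laplace(scale):
    """Sample Laplace(0, scale) noise via the inverse-CDF transform."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def allocate_and_noise(region_means, similarities, total_eps=1.0, sensitivity=1.0):
    """Split the total budget across sub-images in proportion to how dissimilar
    each one is from its neighbours, then perturb with the regional noise scale.
    More dissimilar regions receive a larger share of epsilon, hence less noise."""
    weights = [1.0 - s for s in similarities]
    z = sum(weights)
    noisy = []
    for mean, w in zip(region_means, weights):
        eps_i = total_eps * w / z
        noisy.append(mean + laplace(sensitivity / eps_i))
    return noisy

random.seed(5)
means = [104.2, 87.5, 130.1]   # mean pixel value of each sub-image
sims = [0.9, 0.4, 0.6]         # similarity of each sub-image to its neighbours
print(allocate_and_noise(means, sims, total_eps=0.8))
```

The per-region budgets sum to the total, so by sequential composition the whole release stays within the overall ε.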

