Preserving Differential Privacy for Similarity Measurement in Smart Environments

2014 ◽  
Vol 2014 ◽  
pp. 1-9 ◽  
Author(s):  
Kok-Seng Wong ◽  
Myung Ho Kim

Advances in both sensor technologies and network infrastructures have encouraged the development of smart environments to enhance people's lives and lifestyles. However, collecting and storing users' data in smart environments poses severe privacy concerns because these data may contain sensitive information about the subject. Hence, privacy protection is an emerging issue that must be considered, especially when data sharing is essential for analysis purposes. In this paper, we consider the case where two agents in a smart environment want to measure the similarity of their collected or stored data. We use the similarity coefficient function FSC as the measurement metric for the comparison under the differential privacy model. Unlike existing solutions, our protocol can serve more than one request to compute FSC without modifying the protocol. Our solution ensures privacy protection for both the inputs and the computed FSC results.
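
The abstract does not define FSC, so purely as an illustration, the minimal sketch below perturbs a Dice-style similarity coefficient with the Laplace mechanism. The function, the use of Dice similarity, and the sensitivity bound are assumptions, not the paper's protocol; the true sensitivity of FSC would have to be derived from its definition.

```python
import numpy as np

def noisy_similarity(a, b, epsilon, sensitivity=1.0):
    # Dice-style similarity between binary feature vectors, perturbed
    # with Laplace noise; `sensitivity` is an assumed upper bound on
    # one record's influence on the coefficient.
    dice = 2 * np.sum(a & b) / (np.sum(a) + np.sum(b))
    return dice + np.random.laplace(0.0, sensitivity / epsilon)

# Two agents' binary vectors (toy data).
a = np.array([1, 0, 1, 1, 0, 1])
b = np.array([1, 1, 1, 0, 0, 1])
print(noisy_similarity(a, b, epsilon=0.5))
```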

Author(s):  
Poushali Sengupta ◽  
Sudipta Paul ◽  
Subhankar Mishra

The leakage of data can have severe consequences at the personal level if the data contain sensitive information. Common prevention methods such as encryption/decryption, endpoint protection, and intrusion detection systems remain prone to leakage. Differential privacy comes to the rescue with a formal promise of protection against leakage: it applies a randomized response technique at the time of data collection, which yields strong privacy with good utility. Differential privacy allows one to describe the forest of data by the patterns of its groups without disclosing any individual tree. The recent adoption of differential privacy by leading tech companies and academia encouraged the authors to explore the topic in detail. This survey discusses the different aspects of differential privacy, its application to privacy protection and information leakage, a comparative discussion of current research approaches in the field, its utility in the real world, and the associated trade-offs.
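
The randomized response technique mentioned above is easy to make concrete. A minimal sketch of the classic coin-flip variant, which satisfies ε-differential privacy with ε = ln 3:

```python
import random

def randomized_response(truth: bool) -> bool:
    # Flip a coin: heads, answer truthfully; tails, answer with a
    # second fair coin. A true answer is reported with probability 3/4,
    # a false one with probability 1/4, giving epsilon = ln(3).
    if random.random() < 0.5:
        return truth
    return random.random() < 0.5

def estimate_true_rate(responses):
    # Debias the aggregate: P(yes) = 1/4 + true_rate / 2.
    p_yes = sum(responses) / len(responses)
    return 2 * p_yes - 0.5

answers = [randomized_response(True) for _ in range(10000)]
print(estimate_true_rate(answers))  # close to 1.0 in expectation
```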


2019 ◽  
Vol 16 (3) ◽  
pp. 705-731
Author(s):  
Haoze Lv ◽  
Zhaobin Liu ◽  
Zhonglian Hu ◽  
Lihai Nie ◽  
Weijiang Liu ◽  
...  

With the advent of the big data era, data release has become a hot topic in the database community, and data privacy has likewise drawn users' attention. Among the privacy protection models proposed so far, differential privacy is widely used because of its many advantages over other models. However, for the private release of multi-dimensional data sets, existing algorithms usually publish data with low availability, because the noise in the released data grows rapidly with the number of dimensions. In view of this issue, we propose algorithms based on regular and irregular marginal tables of frequent item sets to protect privacy and promote availability. The main idea is to reduce the dimensionality of the data set and achieve differential privacy protection with Laplace noise. First, we propose a marginal table cover algorithm based on frequent items that considers the effectiveness of query cover combinations, obtaining a regular marginal table cover set of smaller size but higher data availability. Then, a differential privacy model with irregular marginal tables is proposed for application scenarios with low data availability and high cover rate. Next, we derive an approximately optimal marginal table cover algorithm to obtain a query cover set that satisfies the multi-level query policy constraint, achieving a balance between privacy protection and data availability. Finally, extensive experiments on synthetic and real databases demonstrate that the proposed method performs better than state-of-the-art methods in most cases.
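
As a hedged illustration of the core building block (not the paper's cover algorithms), the sketch below releases a single marginal table of counts under the Laplace mechanism. The column names are hypothetical, and the sensitivity of 1 assumes add/remove-one-record semantics.

```python
import numpy as np
import pandas as pd

def noisy_marginal(df, attrs, epsilon):
    # One marginal table of counts under the Laplace mechanism.
    # Adding or removing one record changes one cell by 1, so the
    # L1 sensitivity of the count vector is 1.
    counts = df.groupby(attrs).size()
    return (counts + np.random.laplace(0.0, 1.0 / epsilon, len(counts))).clip(lower=0)

# Toy data (column names assumed); covering k marginal tables would
# typically split the budget, using epsilon / k per table.
df = pd.DataFrame({"age": ["<30", "<30", "30+", "30+"],
                   "dx":  ["flu", "cold", "flu", "flu"]})
print(noisy_marginal(df, ["age", "dx"], epsilon=1.0))
```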


2014 ◽  
Vol 8 (1) ◽  
pp. 13-21 ◽  
Author(s):  
ARKADIUSZ LIBER

Introduction: Medical documentation must be protected against damage or loss, in compliance with its integrity and credibility, with permanent access for authorized staff and protection against access by unauthorized persons. Anonymization is one of the methods of safeguarding data against disclosure.
Aim of the study: The study aims to analyze methods of anonymization and methods of protecting anonymized data, and to study a new type of privacy protection enabling the control of sensitive data by the entity the data concern.
Material and methods: Analytical and algebraic methods were used.
Results: The study delivers material supporting the choice and analysis of ways of anonymizing medical data, and develops a new privacy protection solution enabling the control of sensitive data by the entities the data concern.
Conclusions: The paper analyzes the data anonymization solutions used for medical data privacy protection. The methods k-Anonymity, (X,Y)-Anonymity, (α,k)-Anonymity, (k,e)-Anonymity, (X,Y)-Privacy, LKC-Privacy, l-Diversity, (X,Y)-Linkability, t-Closeness, Confidence Bounding, and Personalized Privacy are described, explained, and analyzed, along with solutions for the control of sensitive data by their owners. Beyond the existing anonymization methods, methods of protecting anonymized data are analyzed, in particular δ-Presence, ε-Differential Privacy, (d,γ)-Privacy, (α,β)-Distributional Privacy, and protection against (c,t)-Isolation. The author introduces a new solution for the controlled protection of privacy, based on marking a protected field and multi-key encryption of the sensitive value. The suggested way of marking fields conforms to the XML standard. For encryption, an (n,p) multiple-key cipher was selected: deciphering the content requires p of the n keys. The proposed solution enables brand new methods for controlling the privacy of disclosed sensitive data.
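
The (n,p) cipher itself is not specified in the abstract. One standard way to realize a "p of n keys" decryption rule is Shamir threshold secret sharing, sketched below purely as an illustration of the idea, not as the author's construction.

```python
import random

PRIME = 2**127 - 1  # field modulus for share arithmetic

def split_secret(secret: int, n: int, p: int):
    # Encode the secret as the constant term of a random degree-(p-1)
    # polynomial; any p points reconstruct it, fewer reveal nothing.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(p - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 over the prime field.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = split_secret(123456789, n=5, p=3)
assert reconstruct(shares[:3]) == 123456789  # any 3 of the 5 shares suffice
```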


2019 ◽  
Author(s):  
Iago Chaves ◽  
Javam Machado

Privacy concerns are growing fast because of data protection regulations around the world. Many works have built private algorithms that avoid leaking sensitive information through data publication. Differential privacy, based on formal definitions, is a strong guarantee of individual privacy and the state of the art for designing private algorithms. This work proposes a differentially private group-by algorithm for data publication based on the exponential mechanism. Our method publishes data groups according to a specified attribute while maintaining the desired privacy level and trustworthy utility results.
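
The abstract names the exponential mechanism; a minimal generic sketch follows. The utility function in the usage example (closeness to a hypothetical true group count) is an assumption, not the paper's scoring function.

```python
import math
import random

def exponential_mechanism(candidates, utility, epsilon, sensitivity):
    # Sample a candidate with probability proportional to
    # exp(epsilon * u(c) / (2 * sensitivity)).
    scores = [epsilon * utility(c) / (2 * sensitivity) for c in candidates]
    m = max(scores)  # shift scores for numerical stability
    weights = [math.exp(s - m) for s in scores]
    return random.choices(candidates, weights=weights, k=1)[0]

# Hypothetical use: pick a count to publish for one group, scoring
# candidates by closeness to the true count (utility sensitivity 1).
true_count = 42
print(exponential_mechanism(range(30, 55),
                            lambda c: -abs(c - true_count),
                            epsilon=1.0, sensitivity=1))
```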


2019 ◽  
Vol 19 (5) ◽  
pp. 537-545
Author(s):  
Vicenç Torra

Abstract Social choice provides methods for collective decisions, including methods for voting and for aggregating rankings. These methods are used in multiagent systems for similar purposes when decisions are to be made by agents. Votes and rankings are sensitive information, so privacy mechanisms are needed to avoid their disclosure. Cryptographic techniques can be applied in centralized environments: a trusted third party can then compute the outcome. In distributed environments, we can use a secure multiparty computation approach to implement a collective decision method. Other privacy models exist; differential privacy and k-anonymity are two of them. They provide privacy guarantees that are complementary to multiparty computation approaches, and solutions that can be combined with the cryptographic ones, thus providing additional privacy guarantees, e.g., a differentially private multiparty computation model. In this paper, we propose the use of probabilistic social choice methods to achieve differential privacy. We use the method called random dictatorship, prove that under some circumstances differential privacy is satisfied, and propose a variation that is always compliant with this privacy model. Our approach can be implemented both in a centralized and in a decentralized manner; we briefly discuss both implementations.
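
Random dictatorship itself is simple to sketch. As the abstract notes, differential privacy holds only under some circumstances: changing one ballot among n can move an alternative's probability from 0 to 1/n, a ratio no finite ε tolerates, which is why the authors propose a compliant variation.

```python
import random

def random_dictatorship(ballots):
    # Probabilistic social choice: pick one voter uniformly at random
    # and return their top-ranked alternative.
    return random.choice(ballots)[0]

ballots = [("a", "b", "c"), ("b", "a", "c"), ("a", "c", "b")]
print(random_dictatorship(ballots))  # "a" with prob 2/3, "b" with prob 1/3
```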


Author(s):  
Adam Gowri Shankar

Abstract: Body Area Networks (BANs) collect enormous amounts of data through wearable sensors; these data contain sensitive information such as physical condition and location, and therefore need protection. Preserving privacy in big data has emerged as an absolute prerequisite for exchanging private data for analysis, validation, and publishing. Traditional methods such as k-anonymity and other anonymization techniques have overlooked privacy protection issues, resulting in privacy infringement. In this work, a differential privacy protection scheme for big data in body area networks is developed. Compared with previous methods, the proposed scheme is superior in terms of availability and reliability. Experimental results demonstrate that, even when the attacker has full background knowledge, the proposed scheme still provides enough interference in big sensitive data to preserve privacy. Keywords: BANs, Privacy, Differential Privacy, Noisy response
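
The scheme itself is not detailed in the abstract. As a generic illustration of a noisy response over sensor data (not the paper's mechanism), a minimal sketch of a differentially private mean with assumed clamping bounds:

```python
import numpy as np

def noisy_mean(readings, epsilon, lo, hi):
    # Clamp readings to [lo, hi] so one user changes the mean by at
    # most (hi - lo) / n, then add Laplace noise at that scale.
    x = np.clip(readings, lo, hi)
    sens = (hi - lo) / len(x)
    return float(np.mean(x) + np.random.laplace(0.0, sens / epsilon))

heart_rates = [72, 88, 64, 101, 79]  # toy BAN readings
print(noisy_mean(heart_rates, epsilon=0.5, lo=40, hi=180))
```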


2021 ◽  
Vol 10 (7) ◽  
pp. 454
Author(s):  
Tinghuai Ma ◽  
Fagen Song

With the popularity of location-aware devices (e.g., smartphones), large amounts of trajectory data are collected. Trajectory datasets can be used in many fields, including traffic monitoring, market analysis, and city management. However, the collection and release of trajectory data raise serious privacy concerns for users: if their privacy is not sufficiently protected, users will refuse to share their trajectory data. In this paper, a new trajectory privacy protection method based on random sampling differential privacy (TPRSDP) is proposed, which provides stronger protection and runs faster than comparable methods. Experiments on two real-world datasets validate the proposed scheme, and the results are compared with others in terms of running time and information loss. The performance of the scheme under different parameter values is verified, the setting of the new scheme's parameters is discussed in detail, and some valuable suggestions are given.
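
TPRSDP's exact mechanism is not given in the abstract; the sketch below illustrates the generic random-sampling idea, pairing a per-point Laplace perturbation with subsampling and the standard amplification bound. The per-coordinate sensitivity is an assumption.

```python
import math
import random
import numpy as np

def sample_and_perturb(trajectory, q, epsilon, sensitivity):
    # Keep each point with probability q, then add Laplace noise to
    # each coordinate (sensitivity is an assumed per-coordinate bound).
    out = []
    for x, y in trajectory:
        if random.random() < q:
            out.append((x + np.random.laplace(0, sensitivity / epsilon),
                        y + np.random.laplace(0, sensitivity / epsilon)))
    return out

def amplified_epsilon(epsilon, q):
    # Privacy amplification by subsampling: the effective budget of an
    # epsilon-DP mechanism run on a q-fraction sample.
    return math.log(1 + q * (math.exp(epsilon) - 1))
```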


2019 ◽  
Author(s):  
Italo Santos ◽  
Emanuel Coutinho ◽  
Leonardo Moreira

Privacy is a concept directly related to people's interest in maintaining personal space without the interference of others. In this paper, we focus on studying the k-anonymity technique, since many generalization algorithms are based on this privacy model. We develop a proof of concept that uses the k-anonymity technique to anonymize raw data and generate a new file with the anonymized data. We present the system architecture and detail an experiment using the Adult dataset, which contains sensitive information, where each record corresponds to one person's personal information. Finally, we summarize our work and discuss future work.
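
A minimal sketch of the property the proof of concept targets: a table is k-anonymous when every combination of quasi-identifier values is shared by at least k records. The toy records below are hypothetical.

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    # Every quasi-identifier combination must appear at least k times.
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(c >= k for c in groups.values())

records = [
    {"age": "30-39", "zip": "130**", "dx": "flu"},
    {"age": "30-39", "zip": "130**", "dx": "cold"},
    {"age": "40-49", "zip": "148**", "dx": "flu"},
]
print(is_k_anonymous(records, ["age", "zip"], k=2))  # False: one group of size 1
```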


2020 ◽  
Vol 10 (18) ◽  
pp. 6396
Author(s):  
Jong Wook Kim ◽  
Su-Mee Moon ◽  
Sang-ug Kang ◽  
Beakcheol Jang

The popularity of wearable devices equipped with a variety of sensors that can measure users' health status and monitor their lifestyle has been increasing. In fact, healthcare service providers have been utilizing these devices as a primary means to collect considerable health data from users. Although the health data collected via wearable devices are useful for providing healthcare services, the indiscriminate collection of an individual's health data raises serious privacy concerns, because the health data measured and monitored by wearable devices contain sensitive information related to the wearer's personal health and lifestyle. Therefore, we propose a method to aggregate health data obtained from users' wearable devices in a privacy-preserving manner. The proposed method leverages local differential privacy, which is a de facto standard for privacy-preserving data processing and aggregation, to collect sensitive health data. In particular, to mitigate the error incurred by the perturbation mechanism of local differential privacy, the proposed scheme first samples a small number of salient data points that best represent the original health data, after which it collects the sampled salient data instead of the entire set of health data. Our experimental results show that the proposed sampling-based collection scheme achieves a significant improvement in estimation accuracy compared with straightforward solutions, and verify that an effective tradeoff between the level of privacy protection and the accuracy of aggregate statistics can be achieved with the proposed approach.
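
The paper's salience criterion is not given in the abstract; the sketch below uses deviation from the user's own mean as a stand-in and splits the local privacy budget evenly across the sampled values. All parameter choices are assumptions.

```python
import numpy as np

def salient_sample(readings, m):
    # Stand-in salience criterion: keep the m readings that deviate
    # most from the user's own mean, in their original order.
    r = np.asarray(readings, dtype=float)
    idx = np.argsort(np.abs(r - r.mean()))[-m:]
    return r[np.sort(idx)]

def local_perturb(values, epsilon, lo, hi):
    # Clamp to [lo, hi] so each value's sensitivity is (hi - lo), and
    # split the user's budget evenly across the reported values.
    x = np.clip(values, lo, hi)
    scale = (hi - lo) * len(x) / epsilon
    return x + np.random.laplace(0.0, scale, size=x.shape)

user_readings = [72, 75, 74, 120, 73, 71, 95]
print(local_perturb(salient_sample(user_readings, m=2),
                    epsilon=1.0, lo=40, hi=180))
```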


2021 ◽  
Vol 38 (5) ◽  
pp. 1385-1401
Author(s):  
Chao Liu ◽  
Jing Yang ◽  
Weinan Zhao ◽  
Yining Zhang ◽  
Cuiping Shi ◽  
...  

Face images, as information carriers, are rich in sensitive information. Because of their naturally weak privacy, direct publication of these images would cause privacy leaks. Most existing privacy protection methods for face images adopt data publication under a non-interactive framework. However, the ε-effect under this framework covers the entire image, such that the noise influence is uniform across the image. To solve this problem, this paper proposes region growing publication (RGP), an algorithm for the interactive publication of face images under differential privacy, which combines the region growing technique with the differential privacy technique. The privacy budget ε is dynamically allocated, and Laplace noise is added, according to the similarity between adjacent sub-images. To measure this similarity more effectively, the fusion similarity measurement mechanism (FSMM) was designed, which better adapts to the intrinsic attributes of images: unlike traditional region growing rules, the FSMM fully considers various attributes of images, including brightness, contrast, structure, color, texture, and spatial distribution. To further enhance feasibility, RGP was extended to atypical region growing publication (ARGP): while RGP limits the region growing direction to adjacent sub-images, ARGP searches for qualified sub-images across the whole image with the aid of the exponential mechanism, expanding the region merging scope of the seed point. The results show that our algorithm satisfies ε-differential privacy, and the denoised image still has high availability.
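
As a rough illustration only (the FSMM and the paper's exact budget accounting are not reproduced), the sketch below allocates a total budget across sub-images in proportion to a given similarity score and adds per-pixel Laplace noise, assuming pixel values in [0, 255].

```python
import numpy as np

def publish_regions(regions, similarities, total_epsilon):
    # Split the budget across sub-images in proportion to similarity
    # (a simplification of RGP's dynamic allocation), then add
    # per-pixel Laplace noise with an assumed pixel range of [0, 255].
    sims = np.asarray(similarities, dtype=float)
    budgets = total_epsilon * sims / sims.sum()
    return [np.clip(r + np.random.laplace(0.0, 255.0 / eps, size=r.shape), 0, 255)
            for r, eps in zip(regions, budgets)]

regions = [np.full((4, 4), 120.0), np.full((4, 4), 200.0)]  # toy sub-images
noisy = publish_regions(regions, similarities=[0.9, 0.3], total_epsilon=1.0)
```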

