Data Privacy Protection Based on Micro Aggregation with Dynamic Sensitive Attribute Updating

Sensors ◽  
2018 ◽  
Vol 18 (7) ◽  
pp. 2307 ◽  
Author(s):  
Yancheng Shi ◽  
Zhenjiang Zhang ◽  
Han-Chieh Chao ◽  
Bo Shen

With the rapid development of information technology, large-scale personal data, including data collected by sensors or IoT devices, are stored in the cloud or in data centers. In some cases, the owners of these clouds or data centers need to publish the data. How to make the best use of such data while limiting the risk of personal information leakage has therefore become a popular research topic. The most common approach to data privacy protection is data anonymization, which has two main problems: (1) the availability of information after clustering is reduced and cannot be adjusted flexibly, and (2) most methods are static, so when the data are released multiple times, personal privacy can be leaked. To solve these problems, this article makes two contributions. The first is a new method based on micro-aggregation to complete the clustering process, in which data availability and privacy protection can be adjusted flexibly by considering the concepts of distance and information entropy. The second is a dynamic update mechanism that guarantees that individual privacy is not compromised after multiple releases of the data, while minimizing the loss of information. At the end of the article, the algorithm is simulated with real data sets, and the availability and advantages of the method are demonstrated by measuring the running time, the average information loss, and the number of forged data records.
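The abstract does not spell out the clustering algorithm itself. As a point of reference, a minimal sketch of plain fixed-size microaggregation (an MDAV-style grouping on numeric quasi-identifiers, without the paper's distance-and-entropy weighting or the dynamic update mechanism) might look like:

```python
import numpy as np

def microaggregate(X, k=3):
    """Naive fixed-size microaggregation: partition records into groups
    of at least k and replace each group's quasi-identifiers with the
    group centroid, so every published record is shared by >= k people."""
    X = np.asarray(X, dtype=float)
    out = X.copy()
    remaining = list(range(len(X)))
    while len(remaining) >= 2 * k:
        centroid = X[remaining].mean(axis=0)
        # the record farthest from the current centroid seeds a new group
        far = max(remaining, key=lambda i: np.linalg.norm(X[i] - centroid))
        nearest = sorted(remaining, key=lambda i: np.linalg.norm(X[i] - X[far]))
        group = nearest[:k]
        out[group] = X[group].mean(axis=0)       # publish the centroid
        remaining = [i for i in remaining if i not in group]
    if remaining:                                # final group has k..2k-1 records
        out[remaining] = X[remaining].mean(axis=0)
    return out
```

The paper's contribution replaces the fixed distance criterion above with a tunable trade-off between distance and information entropy; this sketch only shows the baseline aggregation step being tuned.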



2019 ◽  
pp. 470-482
Author(s):  
Xinwei Sun ◽  
Zhang Wei

With the rapid development of cloud storage technology, cloud storage platforms have gradually come to be used to store data. However, the privacy protection strategies provided by public cloud storage platforms are hard for users to trust, and users are unable to customize their own storage strategy according to their demands. This study proposed a data privacy protection strategy based on consistency-availability-partition tolerance (CAP) theory, which first employs CAP theory to protect users' private data and then offers users a choice of privacy strategy for storing their data. A total of three privacy protection strategies were put forward, focusing respectively on the balance between data consistency and response time, between data consistency and data availability, and between response time and availability.


2017 ◽  
Vol 2017 ◽  
pp. 1-9 ◽  
Author(s):  
Mingshan Xie ◽  
Yong Bai ◽  
Mengxing Huang ◽  
Zhuhua Hu

Privacy preservation in wireless sensor networks is one of the key problems to be solved in practical applications, and solving the problem of data privacy protection is of great significance for large-scale deployments of wireless sensor networks. The characteristics of wireless sensor networks make data privacy protection technology face serious challenges. At present, data privacy protection in wireless sensor networks has become a hot research topic, mainly concerning data aggregation, data query, and access control. In this paper, a multiorder fusion data privacy-preserving scheme (MOFDAP) is proposed. Random interference codes, random decomposition of a function library, and cryptographic vectors are introduced in the proposed scheme, increasing the difficulty and cost of cracking at multiple stages and in multiple aspects. The simulation results demonstrate that, compared with the typical Slice-Mix-AggRegaTe (SMART) algorithm, the algorithm proposed in this paper has better data privacy-preserving ability when the traffic load is not very heavy.
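For context on the SMART baseline the paper compares against: SMART protects individual readings by additive slicing, where each node splits its value into random shares, keeps one, and scatters the rest to peers, so no single message reveals a reading yet the aggregate sum is exact. A toy single-process sketch of that idea (not the paper's MOFDAP scheme, and ignoring network and collusion details):

```python
import random

def slice_mix_aggregate(private_values, num_slices=3, seed=0):
    """SMART-style additive slicing: each node splits its reading into
    random shares, keeps one, and sends the others to random peers.
    Each node then reports only its share sum, yet the total is exact."""
    rng = random.Random(seed)
    n = len(private_values)
    received = [0.0] * n
    for i, v in enumerate(private_values):
        # split v into num_slices random additive shares
        cuts = sorted(rng.uniform(0, v) for _ in range(num_slices - 1))
        shares = [b - a for a, b in zip([0.0] + cuts, cuts + [float(v)])]
        received[i] += shares[0]                       # node keeps one slice
        peers = rng.sample([j for j in range(n) if j != i], num_slices - 1)
        for p, s in zip(peers, shares[1:]):            # rest go to peers
            received[p] += s
    return sum(received)  # the aggregator sees only per-node share sums
```

MOFDAP layers random interference codes and cryptographic vectors on top of this kind of aggregation to raise the cost of cracking; that machinery is not shown here.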


Author(s):  
Fanglan Zheng ◽  
Erihe ◽  
Kun Li ◽  
Jiang Tian ◽  
Xiaojia Xiang

In this paper, we propose a vertical federated learning (VFL) structure for logistic regression with a bounded constraint for the traditional scorecard, namely FL-LRBC. Under the premise of data privacy protection, FL-LRBC enables multiple agencies to jointly obtain an optimized scorecard model in a single training session. It leads to a scorecard model with positive coefficients that guarantee its desirable characteristics (e.g., interpretability and robustness), while the time-consuming parameter-tuning process is avoided. Moreover, model performance in terms of both AUC and the Kolmogorov–Smirnov (KS) statistic is significantly improved by FL-LRBC, due to the feature enrichment in our algorithm architecture. Currently, FL-LRBC has already been applied to credit business in a nationwide financial holdings group in China.
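To make the bounded constraint concrete: a single-party (non-federated) analogue of the coefficient bound is logistic regression trained by projected gradient descent, clamping weights to be non-negative after each step. This is only a local sketch of the constraint FL-LRBC enforces; the vertical split of features across agencies and the secure aggregation are omitted:

```python
import numpy as np

def fit_scorecard_lr(X, y, lr=0.1, epochs=2000):
    """Sketch (single party, not federated) of logistic regression with
    the non-negativity bound on coefficients that scorecard models use:
    plain gradient descent on the log-loss, followed by projecting the
    weight vector onto w >= 0 after every update."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        g = p - y                               # log-loss gradient wrt logits
        w -= lr * (X.T @ g) / n
        b -= lr * g.mean()
        w = np.maximum(w, 0.0)                  # project onto the bound
    return w, b
```

Non-negative coefficients keep every feature's contribution to the score monotone, which is the interpretability property the abstract refers to; the intercept is left unconstrained.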


2019 ◽  
Vol 42 (2) ◽  
Author(s):  
Alan Toy ◽  
Gehan Gunasekara

The data transfer model and the accountability model, which are the dominant models for protecting the data privacy rights of citizens, have begun to present significant difficulties in regulating the online and increasingly transnational business environment. Global organisations take advantage of forum selection clauses and choice of law clauses, and attention is diverted toward the data transfer model and the accountability model as means of data privacy protection; yet given well-known revelations regarding surveillance and the rise of technologies such as cloud computing, it is impossible to be confident that the data privacy rights of citizens are adequately protected. Forum selection and choice of law clauses, however, no longer have the force they once seemed to have, which opens the possibility that extraterritorial jurisdiction may provide a supplementary conceptual basis for championing data privacy in the globalised context of the Internet. This article examines the current basis for extraterritorial application of data privacy laws and suggests a test for increasing their relevance.


Author(s):  
Fritz Grupe ◽  
William Kuechler ◽  
Scott Sweeney

Author(s):  
Anastasia Kozyreva ◽  
Philipp Lorenz-Spreen ◽  
Ralph Hertwig ◽  
Stephan Lewandowsky ◽  
Stefan M. Herzog

People rely on data-driven AI technologies nearly every time they go online, whether they are shopping, scrolling through news feeds, or looking for entertainment. Yet despite their ubiquity, personalization algorithms and the associated large-scale collection of personal data have largely escaped public scrutiny. Policy makers who wish to introduce regulations that respect people’s attitudes towards privacy and algorithmic personalization on the Internet would greatly benefit from knowing how people perceive personalization and personal data collection. To contribute to an empirical foundation for this knowledge, we surveyed public attitudes towards key aspects of algorithmic personalization and people’s data privacy concerns and behavior using representative online samples in Germany (N = 1065), Great Britain (N = 1092), and the United States (N = 1059). Our findings show that people object to the collection and use of sensitive personal information and to the personalization of political campaigning and, in Germany and Great Britain, to the personalization of news sources. Encouragingly, attitudes are independent of political preferences: People across the political spectrum share the same concerns about their data privacy and show similar levels of acceptance regarding personalized digital services and the use of private data for personalization. We also found an acceptability gap: People are more accepting of personalized services than of the collection of personal data and information required for these services. A large majority of respondents rated, on average, personalized services as more acceptable than the collection of personal information or data. The acceptability gap can be observed at both the aggregate and the individual level. Across countries, between 64% and 75% of respondents showed an acceptability gap.
Our findings suggest a need for transparent algorithmic personalization that minimizes use of personal data, respects people’s preferences on personalization, is easy to adjust, and does not extend to political advertising.


Author(s):  
Shenglong Liu ◽  
Hongbin Zhu ◽  
Tao Zhao ◽  
Heng Wang ◽  
Xianzhou Gao ◽  
...  
