Manipulation Attacks in Local Differential Privacy

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Albert Cheu ◽  
Adam Smith ◽  
Jonathan Ullman

Local differential privacy is a widely studied restriction on distributed algorithms that collect aggregates about sensitive user data, and is now deployed in several large systems. We initiate a systematic study of a fundamental limitation of locally differentially private protocols: they are highly vulnerable to adversarial manipulation. While any algorithm can be manipulated by adversaries who lie about their inputs, we show that any noninteractive locally differentially private protocol can be manipulated to a much greater extent: when the privacy level is high, or the domain size is large, a small fraction of users in the protocol can completely obscure the distribution of the honest users' input. We also construct protocols that are optimally robust to manipulation for a variety of common tasks in local differential privacy. Finally, we give simple experiments validating our theoretical results, and demonstrating that protocols that are optimal without manipulation can have dramatically different levels of robustness to manipulation. Our results suggest caution when deploying local differential privacy and reinforce the importance of efficient cryptographic techniques for the distributed emulation of centrally differentially private mechanisms.
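
As a concrete illustration of the vulnerability (a minimal simulation, not code from the paper): under binary randomized response, the aggregator must debias the reported mean by dividing by a factor that shrinks with the privacy parameter, so a small coalition that skips the randomizer and always reports 1 shifts the estimate by far more than its share of the population. All parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomized_response(bit, eps):
    """Report the true bit with probability e^eps / (e^eps + 1), else flip it."""
    p = np.exp(eps) / (np.exp(eps) + 1.0)
    return bit if rng.random() < p else 1 - bit

def estimate_frequency(reports, eps):
    """Debias the mean of the noisy reports; the divisor 2p - 1 is small
    when eps is small, which is what amplifies manipulation."""
    p = np.exp(eps) / (np.exp(eps) + 1.0)
    return (np.mean(reports) - (1.0 - p)) / (2.0 * p - 1.0)

eps, n, frac_malicious = 0.5, 100_000, 0.05
honest_bits = (rng.random(n) < 0.3).astype(int)   # true frequency 0.3
reports = [randomized_response(b, eps) for b in honest_bits]
reports += [1] * int(frac_malicious * n)          # coalition always sends 1
print(estimate_frequency(reports, eps))
```

Here the 5% coalition shifts the debiased estimate from about 0.30 to about 0.41, roughly twice its population share, and the amplification grows on the order of 1/eps as the privacy level tightens.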

Author(s):  
Shuo Han ◽  
George J. Pappas

Many modern dynamical systems, such as smart grids and traffic networks, rely on user data for efficient operation. These data often contain sensitive information that the participating users do not wish to reveal to the public. One major challenge is to protect the privacy of participating users when utilizing user data. Over the past decade, differential privacy has emerged as a mathematically rigorous approach that provides strong privacy guarantees. In particular, differential privacy has several useful properties, including resistance to both postprocessing and the use of side information by adversaries. Although differential privacy was first proposed for static-database applications, this review focuses on its use in the context of control systems, in which the data under processing often take the form of data streams. Through two major applications—filtering and optimization algorithms—we illustrate the use of mathematical tools from control and optimization to convert a nonprivate algorithm to its private counterpart. These tools also enable us to quantify the trade-offs between privacy and system performance.
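
As a sketch of the conversion pattern the review describes (illustrative signal, sensitivity, and filter, not taken from the article): perturb each released sample with Laplace noise calibrated to its sensitivity, then rely on the fact that downstream filtering is postprocessing and costs no additional privacy.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical aggregate stream y_t whose per-user sensitivity is
# delta_sens: one user can change each sample by at most this much.
T, eps, delta_sens = 300, 1.0, 1.0
y = 5.0 + np.sin(np.linspace(0, 6 * np.pi, T))

# Input perturbation: each release y_t + Lap(delta_sens / eps) is
# eps-differentially private on its own; guarantees over the whole
# stream follow from composition arguments.
noisy = y + rng.laplace(scale=delta_sens / eps, size=T)

# Postprocessing is free: smooth the noisy stream with a first-order
# low-pass filter x_t = a * x_{t-1} + (1 - a) * noisy_t.
a = 0.9
x = np.zeros(T)
x[0] = noisy[0]
for t in range(1, T):
    x[t] = a * x[t - 1] + (1 - a) * noisy[t]
```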


Author(s):  
Elias Yaacoub

Military communications need to be secure in harsh operational conditions under constant enemy attacks and attempts to eavesdrop, jam, or decrypt the communications. Physical layer security (PLS) can be used in conjunction with traditional cryptographic techniques to ensure an additional layer of security for military communications. In this article, PLS techniques at different levels of military communications, from communications at the military section level to the battalion or command center level, are discussed and analyzed. The presented solutions were tailored to the challenges faced in each scenario, leading to good performance. Additional challenges are also discussed, and suitable solutions are outlined.


Author(s):  
Alan G. Haddow ◽  
Steven W. Shaw

This paper presents results from tests completed on a rotor system fitted with pendulum-type torsional vibration absorbers. A review of the associated theoretical background is also given, and the experimental and theoretical results are compared and contrasted. An overview of the test apparatus is provided and its unique features are discussed. To the best of the authors' knowledge, this is the first time that a systematic study of the dynamic behavior of torsional vibration absorbers has been undertaken in a controlled environment.


2019 ◽  
Vol 2019 ◽  
pp. 1-9 ◽  
Author(s):  
Shiyu Yan ◽  
Xiaohua Yang ◽  
Guodong Cheng ◽  
Hua Liu

In the verification testing of scientific computing programs, various comparison methods are commonly applied to check the correctness of the computations. However, it is often difficult to verify whether a test output is correct, because oracles that provide the expected output are not always available or are too hard to obtain. For this reason, the authors focus on using Richardson extrapolation to estimate the convergence of the numerical solution across different levels of mesh refinement. These numerical convergence properties can be applied to verification testing without the need for oracles. In the present study, the authors take the testing of a program for the multigroup neutron diffusion equations as a case study and propose a Richardson extrapolation-based verification method. Three verification criteria are derived from our approach. In addition, a test experiment is conducted to demonstrate the validity of our theoretical results.
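
For readers unfamiliar with the technique, a minimal sketch (hypothetical numbers, not the paper's criteria): given solutions on three uniformly refined meshes, the observed convergence order can be computed and checked against the scheme's theoretical order, and Richardson extrapolation supplies an estimated exact value that can stand in for an oracle.

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed convergence order from solutions on three mesh levels
    with uniform refinement ratio r (e.g. r = 2)."""
    return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

def richardson_extrapolate(f_medium, f_fine, r, p):
    """Extrapolated estimate of the exact value, usable as a pseudo-oracle."""
    return f_fine + (f_fine - f_medium) / (r**p - 1)

# Hypothetical outputs of a second-order scheme (exact value 1.0):
f1, f2, f3, r = 1.1, 1.025, 1.00625, 2.0
p_hat = observed_order(f1, f2, f3, r)
assert abs(p_hat - 2.0) < 0.1, "convergence-order criterion failed"
print(p_hat, richardson_extrapolate(f2, f3, r, p_hat))
```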


2020 ◽  
Vol 34 (01) ◽  
pp. 784-791 ◽  
Author(s):  
Qinbin Li ◽  
Zhaomin Wu ◽  
Zeyi Wen ◽  
Bingsheng He

The Gradient Boosting Decision Tree (GBDT) has been a popular machine learning model for various tasks in recent years. In this paper, we study how to improve the model accuracy of GBDT while preserving the strong guarantee of differential privacy. Sensitivity and privacy budget are two key design aspects for the effectiveness of differentially private models. Existing solutions for GBDT with differential privacy suffer from significant accuracy loss due to overly loose sensitivity bounds and ineffective privacy budget allocations (especially across different trees in the GBDT model). Loose sensitivity bounds force more noise to be added to obtain a fixed privacy level. Ineffective privacy budget allocations worsen the accuracy loss, especially when the number of trees is large. Therefore, we propose a new GBDT training algorithm that achieves tighter sensitivity bounds and more effective noise allocations. Specifically, by investigating the properties of the gradients and the contribution of each tree in GBDTs, we propose to adaptively control the gradients of the training data in each iteration and to clip leaf node values in order to tighten the sensitivity bounds. Furthermore, we design a novel boosting framework to allocate the privacy budget between trees so that the accuracy loss can be further reduced. Our experiments show that our approach achieves much better model accuracy than other baselines.
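
To make the sensitivity discussion concrete, here is a baseline-style sketch (not the paper's adaptive algorithm, which it improves on): clipping per-example gradients bounds each leaf sum's sensitivity, which sets the Laplace noise scale, and a naive budget split gives each tree eps_total / n_trees under sequential composition.

```python
import numpy as np

rng = np.random.default_rng(2)

def private_leaf_value(gradients, g_max, eps_leaf, lam=1.0):
    """Release a regularized GBDT leaf value via the Laplace mechanism.

    Clipping each per-example gradient to [-g_max, g_max] bounds the
    change in the leaf's gradient sum from adding or removing one
    example by g_max, so Laplace noise of scale g_max / eps_leaf makes
    the released sum eps_leaf-DP. Tighter g_max means less noise.
    """
    g = np.clip(gradients, -g_max, g_max)
    noisy_sum = g.sum() + rng.laplace(scale=g_max / eps_leaf)
    return -noisy_sum / (len(g) + lam)  # standard regularized leaf value

# Naive uniform allocation across the ensemble (the paper's boosting
# framework improves on exactly this kind of split):
eps_total, n_trees = 1.0, 20
eps_per_tree = eps_total / n_trees
print(private_leaf_value(rng.normal(size=100), 1.0, eps_per_tree))
```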


2013 ◽  
Vol 2013 ◽  
pp. 1-11 ◽  
Author(s):  
Janak Raj Sharma ◽  
Puneet Gupta

We present iterative methods of convergence orders three, five, and six for solving systems of nonlinear equations. The third-order method is composed of two steps: a Newton iteration as the first step and a weighted-Newton iteration as the second. The fifth- and sixth-order methods are composed of three steps, of which the first two are the same as those of the third-order method, whereas the third is again a weighted-Newton step. Computational efficiency in its general form is discussed, and the efficiencies of the proposed techniques are compared with those of existing methods. The performance is tested through numerical examples. Moreover, the theoretical results concerning order of convergence and computational efficiency are verified in the examples. It is shown that the present methods have an edge over similar existing methods, particularly when applied to large systems of equations.
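
A minimal sketch of the two-step structure (using a Traub-style corrector that reuses the Jacobian, which is also third-order; the paper's weighted-Newton corrector applies different weight matrices):

```python
import numpy as np

def solve_two_step(F, J, x, iters=50, tol=1e-12):
    """Two-step iteration: a Newton predictor followed by a corrector
    that reuses the frozen Jacobian J(x_k), giving cubic convergence."""
    for _ in range(iters):
        Jx = J(x)
        y = x + np.linalg.solve(Jx, -F(x))      # Newton predictor
        x_new = y + np.linalg.solve(Jx, -F(y))  # frozen-Jacobian corrector
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example system: x0^2 + x1^2 = 1 and x0 = x1, root (1/sqrt(2), 1/sqrt(2)).
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
print(solve_two_step(F, J, np.array([1.0, 0.5])))
```

In practice one factorizes J(x_k) once and reuses it for both solves; gaining a higher order without a second Jacobian evaluation is what drives the computational-efficiency comparisons in methods of this family.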


Author(s):  
Oluwaseyi Feyisetan ◽  
Abhinav Aggarwal ◽  
Zekun Xu ◽  
Nathanael Teissier

Accurately learning from user data while ensuring quantifiable privacy guarantees provides an opportunity to build better ML models while maintaining user trust. Recent literature has demonstrated the applicability of a generalized form of differential privacy to provide guarantees over text queries. Such mechanisms add privacy-preserving noise to high-dimensional vector representations of text and return a text-based projection of the noisy vectors. However, these mechanisms are suboptimal in their trade-off between privacy and utility. In this proposal paper, we describe some challenges in balancing this trade-off. At a high level, we provide two proposals: (1) a framework called LAC which defers some of the noise to a privacy amplification step, and (2) an additional suite of three different techniques for calibrating the noise based on the local region around a word. Our objective in this paper is not to evaluate a single solution but to further the conversation on these challenges and chart pathways for building better mechanisms.
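
A toy version of the baseline mechanism being improved (hypothetical two-dimensional embeddings; real systems use pretrained, high-dimensional ones):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical vocabulary embeddings, illustrative only.
vocab = {"cat": np.array([1.0, 0.0]),  "dog": np.array([0.9, 0.2]),
         "car": np.array([-1.0, 0.1]), "bus": np.array([-0.9, -0.2])}

def metric_dp_noise(eps, dim):
    """Noise with density proportional to exp(-eps * ||z||): a uniform
    random direction with a Gamma(dim, 1/eps)-distributed radius. This
    is the standard choice for metric (generalized) DP in Euclidean space."""
    direction = rng.normal(size=dim)
    direction /= np.linalg.norm(direction)
    return rng.gamma(shape=dim, scale=1.0 / eps) * direction

def privatize(word, eps):
    """Perturb the word's embedding, then project back to the nearest
    vocabulary word: the noisy-projection step the paper analyzes."""
    z = vocab[word] + metric_dp_noise(eps, dim=2)
    return min(vocab, key=lambda w: np.linalg.norm(vocab[w] - z))

print([privatize("cat", eps=5.0) for _ in range(8)])
```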


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4483 ◽  
Author(s):  
Iago Sestrem Ochôa ◽  
Luis Augusto Silva ◽  
Gabriel de Mello ◽  
Bruno Alves da Silva ◽  
Juan Francisco de Paz ◽  
...  

With the popularization of the Internet of Things, various applications have emerged to make life easier. These applications generate a large amount of user data, and by analyzing the data obtained from them, one can infer personal information about each user. Considering this, it is clear that ensuring privacy in this type of application is essential. Various solutions exist to guarantee privacy; one of them is the UbiPri middleware. This paper presents a decentralized implementation of the UbiPri middleware using the Ethereum blockchain. Smart contracts were used in conjunction with a communication gateway and a distributed storage service to ensure users' privacy. The results obtained show that this implementation ensures privacy at different levels, secure data storage, and scalable performance in Internet of Things environments.


Author(s):  
Qiuchen Zhang ◽  
Jing Ma ◽  
Jian Lou ◽  
Li Xiong

We study differentially private (DP) stochastic nonconvex optimization with a focus on its understudied utility measures, namely the expected excess empirical and population risks. While the excess risks are extensively studied for convex optimization, they are rarely studied for nonconvex optimization, especially the expected population risk. For the convex case, recent studies show that it is possible for private optimization to achieve the same order of excess population risk as nonprivate optimization under certain conditions. Whether such an ideal excess population risk is achievable remains an open question for the nonconvex case. In this paper, we progress towards an affirmative answer to this open problem: under certain conditions (namely, well-conditioned nonconvexity), DP nonconvex optimization is indeed capable of achieving the same excess population risk as the nonprivate algorithm in most common parameter regimes. We achieve such improved utility rates compared to existing results by designing and analyzing a stagewise DP-SGD algorithm with early momentum. We obtain bounds on both the excess empirical risk and the excess population risk while guaranteeing differential privacy. Our algorithm also provides the first known excess empirical and population risk bounds for DP-SGD with momentum. Experimental results on both shallow and deep neural networks, applied respectively to simple and complex real datasets, corroborate the theoretical results.
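
The core update the paper builds on can be sketched as follows (a generic DP-SGD-with-momentum loop on a toy objective; the stagewise schedule and the early-momentum switch from the paper are omitted):

```python
import numpy as np

rng = np.random.default_rng(4)

def dp_sgd_momentum(per_example_grads, w, n_steps, lr=0.05, beta=0.9,
                    clip=1.0, sigma=1.0):
    """Clip each per-example gradient to L2 norm `clip`, average, add
    Gaussian noise of std sigma * clip / batch_size, and apply a
    heavy-ball (momentum) update."""
    v = np.zeros_like(w)
    for _ in range(n_steps):
        g_all = per_example_grads(w)                     # shape (B, d)
        norms = np.linalg.norm(g_all, axis=1, keepdims=True)
        g = (g_all / np.maximum(1.0, norms / clip)).mean(axis=0)
        g = g + rng.normal(scale=sigma * clip / len(g_all), size=g.shape)
        v = beta * v + g                                 # momentum buffer
        w = w - lr * v
    return w

# Toy objective 0.5 * mean_i (x_i . w)^2 with per-example gradients
# (x_i . w) x_i; the minimizer is w = 0. Data are illustrative.
X = rng.normal(size=(64, 3))
grads = lambda w: (X @ w)[:, None] * X
print(dp_sgd_momentum(grads, w=np.ones(3), n_steps=300))
```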


2021 ◽  
Vol 2021 (1) ◽  
pp. 64-84
Author(s):  
Ashish Dandekar ◽  
Debabrota Basu ◽  
Stéphane Bressan

The calibration of noise for a privacy-preserving mechanism depends on the sensitivity of the query and the prescribed privacy level. A data steward must make the non-trivial choice of a privacy level that balances the requirements of users and the monetary constraints of the business entity.

Firstly, we analyse the roles of the sources of randomness involved in the design of a privacy-preserving mechanism, namely the explicit randomness induced by the noise distribution and the implicit randomness induced by the data-generation distribution. This finer analysis enables us to provide stronger privacy guarantees with quantifiable risks. Thus, we propose privacy at risk, a probabilistic calibration of privacy-preserving mechanisms. We provide a composition theorem that leverages privacy at risk, and we instantiate the probabilistic calibration for the Laplace mechanism by providing analytical results.

Secondly, we propose a cost model that bridges the gap between the privacy level and the compensation budget estimated by a GDPR-compliant business entity. The convexity of the proposed cost model leads to a unique fine-tuning of the privacy level that minimises the compensation budget. We show its effectiveness with a realistic scenario that avoids overestimation of the compensation budget by using privacy at risk for the Laplace mechanism. We quantitatively show that composition using the cost-optimal privacy at risk provides a stronger privacy guarantee than the classical advanced composition. Although the illustration is specific to the chosen cost model, it naturally extends to any convex cost model. We also provide realistic illustrations of how a data steward uses privacy at risk to balance the trade-off between utility and privacy.
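
A numerical illustration of the idea (a Monte Carlo sketch for the one-dimensional Laplace mechanism; the paper derives analytical forms, and its exact definition may differ from this simplified version):

```python
import numpy as np

rng = np.random.default_rng(5)

# Classical calibration: Laplace scale b = sensitivity / eps gives eps-DP.
sensitivity, eps, eps_prime = 1.0, 1.0, 0.8
b = sensitivity / eps

# The realized privacy loss of an output f(D) + z against a neighbouring
# dataset at distance `sensitivity` is (|z + sensitivity| - |z|) / b.
# It attains the worst case eps only for some noise draws, so the
# probability of staying below a smaller eps_prime yields a privacy
# level that holds with confidence 1 - gamma.
z = rng.laplace(scale=b, size=1_000_000)
loss = (np.abs(z + sensitivity) - np.abs(z)) / b
gamma = np.mean(loss > eps_prime)
print(f"P[privacy loss > {eps_prime}] ~ {gamma:.3f}")
```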

