  1. Private Aggregation from Fewer Anonymous Messages 2020 DifferentialPrivacy Eurocrypt Privacy arxiv.org
    Badih Ghazi, Pasin Manurangsi, Rasmus Pagh, Ameya Velingker

    Consider the setup where $n$ parties are each given a number $x_i \in \mathbb{F}_q$ and the goal is to compute the sum $\sum_i x_i$ in a secure fashion and with as little communication as possible. We study this problem in the anonymized model of Ishai et al. (FOCS 2006) where each party may broadcast anonymous messages on an insecure channel.
    We present a new analysis of the one-round “split and mix” protocol of Ishai et al. In order to achieve the same security parameter, our analysis reduces the required number of messages by a $\Theta(\log n)$ multiplicative factor. We complement our positive result with lower bounds showing that the dependence of the number of messages on the domain size, the number of parties, and the security parameter is essentially tight.
    Using a reduction of Balle et al. (2019), our improved analysis of the protocol of Ishai et al. yields, in the same model, an $\left(\varepsilon, \delta\right)$-differentially private protocol for aggregation that, for any constant $\varepsilon > 0$ and any $\delta = \frac{1}{\mathrm{poly}(n)}$, incurs only a constant error and requires only a constant number of messages per party. Previously, such a protocol was known only for $\Omega(\log n)$ messages per party.
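
    The protocol being analyzed is simple to state. A minimal sketch of the split-and-mix idea, assuming additive sharing modulo an arbitrary modulus $q$ (the modulus, the number of messages $m$, and the function names below are illustrative, not the paper's parameters):

    ```python
    import secrets

    q = 2**61 - 1   # assumed message domain; any modulus works for this sketch
    m = 3           # messages per party; the paper analyzes how small m can be

    def split(x: int, m: int, q: int) -> list[int]:
        """Split x into m additive shares that sum to x mod q."""
        shares = [secrets.randbelow(q) for _ in range(m - 1)]
        shares.append((x - sum(shares)) % q)
        return shares

    def aggregate(messages: list[int], q: int) -> int:
        """The analyzer only sees the shuffled multiset of messages and adds them up."""
        return sum(messages) % q

    inputs = [5, 17, 42]
    messages = [s for x in inputs for s in split(x, m, q)]
    secrets.SystemRandom().shuffle(messages)   # the anonymous channel mixes all messages
    assert aggregate(messages, q) == sum(inputs) % q
    ```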

  2. Secure parallel computation on national scale volumes of data 2020 DifferentialPrivacy MPC Usenix usenix.org
    Sahar Mazloom, Phi Hung Le, Samuel Ranellucci, and Dov Gordon

    We revisit the problem of performing secure computation of graph-parallel algorithms, focusing on the applications of securely outsourcing matrix factorization and histograms. Leveraging recent results in low-communication secure multi-party computation, and a security relaxation that allows the computation servers to learn some differentially private leakage about user inputs, we construct a new protocol that reduces overall runtime by 320X, reduces the number of AES calls by 750X, and reduces the total communication by 200X. Our system can securely compute histograms over 300 million items in about 4 minutes, and it can perform sparse matrix factorization, which is commonly used in recommendation systems, on 20 million records in about 6 minutes. Furthermore, in contrast to prior work, our system is secure against a malicious adversary that corrupts one of the computing servers.

  3. Differentially Private Access Patterns in Secure Computation 2018 CCS DifferentialPrivacy MPC eprint.iacr.org
    Sahar Mazloom and S. Dov Gordon

    We explore a new security model for secure computation on large datasets. We assume that two servers have been employed to compute on private data that was collected from many users, and, in order to improve the efficiency of their computation, we establish a new tradeoff with privacy. Specifically, instead of claiming that the servers learn nothing about the input values, we claim that what they do learn from the computation preserves the differential privacy of the input. Leveraging this relaxation of the security model allows us to build a protocol that leaks some information in the form of access patterns to memory, while also providing a formal bound on what is learned from the leakage.


    We then demonstrate that this leakage is useful in a broad class of computations. We show that computations such as histograms, PageRank and matrix factorization, which can be performed in common graph-parallel frameworks such as MapReduce or Pregel, benefit from our relaxation. We implement a protocol for securely executing graph-parallel computations, and evaluate its performance on the three examples mentioned above. We demonstrate marked improvement over prior implementations for these computations.
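
    One way to picture the relaxation: rather than hiding memory access patterns entirely, each bin is touched a noisy number of extra (dummy) times so that the observed access counts are a differentially private function of the true ones. The sketch below illustrates that general idea under assumed noise parameters; it is not the paper's construction:

    ```python
    import math
    import random

    def geometric(alpha: float) -> int:
        """Number of failures before the first success, success probability 1 - alpha."""
        return int(math.log(1.0 - random.random()) / math.log(alpha))

    def discrete_laplace(epsilon: float) -> int:
        """Two-sided geometric (discrete Laplace) noise with parameter epsilon."""
        alpha = math.exp(-epsilon)
        return geometric(alpha) - geometric(alpha)

    def pad_access_counts(true_counts: dict[str, int], epsilon: float, shift: int = 20) -> dict[str, int]:
        """Observed accesses per bin = real accesses + a noisy number of dummy accesses.
        The shift keeps dummy counts non-negative; truncating at zero is a simplification."""
        return {b: c + max(0, shift + discrete_laplace(epsilon)) for b, c in true_counts.items()}

    print(pad_access_counts({"bin_a": 100, "bin_b": 3}, epsilon=0.5))
    ```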

  4. HisTorε: Differentially Private and Robust Statistics Collection for Tor 2017 DifferentialPrivacy NDSS Tor ndss-symposium.org
    Akshaya Mani and Micah Sherr

    A large volume of existing research attempts to understand who uses Tor and how the network is used (and misused). However, conducting measurements on the live Tor network, if done improperly, can endanger the security and anonymity of the millions of users who depend on the network to enhance their online privacy. Indeed, several existing measurement studies of Tor have been heavily criticized for unsafe research practices.


    Tor needs privacy-preserving methods of gathering statistics. The recently proposed PrivEx system demonstrates how data can be safely collected on Tor using techniques from differential privacy. However, as we demonstrate in this paper, the integrity of the statistics reported by PrivEx is brittle under realistic deployment conditions. An adversary who operates even a single relay in the volunteer-operated anonymity network can arbitrarily influence the result of PrivEx queries. We argue that a safe and useful data collection mechanism must provide both privacy and integrity protections.


    This paper presents HisTorε, a privacy-preserving statistics collection scheme based on (ε, δ)-differential privacy that is robust against adversarial manipulation. We formalize the security guarantees of HisTorε and show using historical data from the Tor Project that HisTorε provides useful data collection and reporting with low bandwidth and processing overheads.

  5. Securely Sampling Biased Coins with Applications to Differential Privacy 2019 CCS DifferentialPrivacy eprint.iacr.org
    Jeffrey Champion, abhi shelat and Jonathan Ullman

    We design an efficient method for sampling a large batch of $d$ independent coins with a given bias $p \in [0,1]$. The folklore secure computation method for doing so requires $O(\lambda + \log d)$ communication and computation per coin to achieve total statistical difference $2^{-\lambda}$. We present an exponential improvement over the folklore method that uses just $O(\log(\lambda + \log d))$ gates per coin when sampling $d$ coins with total statistical difference $2^{-\lambda}$. We present a variant of our work that also concretely beats the folklore method for $\lambda \geq 60$, a parameter range that is often used in practice. Our new technique relies on using specially designed oblivious data structures to achieve biased coin samples that take an expected 2 random bits to sample.
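
    The 'expected 2 random bits' figure comes from the classic trick of comparing a stream of uniform random bits against the binary expansion of $p$ and stopping at the first disagreement. A plaintext sketch of that trick follows; the paper's contribution is doing this obliviously inside MPC with specially designed data structures, which is not captured here:

    ```python
    import random

    def biased_coin(p: float, max_bits: int = 64) -> int:
        """Return 1 with probability p (p in [0, 1)), using ~2 random bits on average."""
        for i in range(1, max_bits + 1):
            p_bit = int(p * 2**i) & 1        # i-th bit of the binary expansion of p
            r_bit = random.getrandbits(1)    # i-th bit of a uniform number in [0, 1)
            if r_bit < p_bit:
                return 1                     # the uniform sample falls below p
            if r_bit > p_bit:
                return 0                     # the uniform sample falls above p
        return 0                             # truncate after max_bits; negligible bias

    samples = [biased_coin(0.25) for _ in range(100_000)]
    print(sum(samples) / len(samples))       # ≈ 0.25
    ```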


    Using our new sampling technique, we present an implementation of the differentially private report-noisy-max mechanism (a more practical implementation of the celebrated exponential mechanism) as a secure multi-party computation. Our benchmarks show that one can run this mechanism on a domain of size $d = 2^{12}$ in 6 seconds and up to $d = 2^{19}$ in 14 minutes. As far as we know, this is the first complete distributed implementation of either of these mechanisms.
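
    Report-noisy-max itself is simple in the clear; the hard part addressed by the paper is evaluating it as a secure multi-party computation. A plaintext sketch, assuming sensitivity-1 scores and the textbook Laplace-noise formulation:

    ```python
    import random

    def report_noisy_max(scores: list[float], epsilon: float) -> int:
        """Add Laplace(1/epsilon) noise to each score and release only the argmax."""
        noisy = [s + random.expovariate(epsilon) - random.expovariate(epsilon)  # Laplace noise
                 for s in scores]
        return max(range(len(noisy)), key=lambda i: noisy[i])

    print(report_noisy_max([120, 95, 130, 40], epsilon=0.5))
    ```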

  6. Utility-Aware Synthesis of Differentially Private and Attack-Resilient Location Traces 2018 CCS DifferentialPrivacy Privacy cc.gatech.edu
    Mehmet Emre Gursoy, Ling Liu, Stacey Truex, Lei Yu, and Wenqi Wei

    As mobile devices and location-based services become increasingly ubiquitous, the privacy of mobile users’ location traces continues to be a major concern. Traditional privacy solutions rely on perturbing each position in a user’s trace and replacing it with a fake location. However, recent studies have shown that such point-based perturbation of locations is susceptible to inference attacks and suffers from serious utility losses, because it disregards the moving trajectory and continuity in full location traces. In this paper, we argue that privacy-preserving synthesis of complete location traces can be an effective solution to this problem. We present AdaTrace, a scalable location trace synthesizer with three novel features: provable statistical privacy, deterministic attack resilience, and strong utility preservation. AdaTrace builds a generative model from a given set of real traces through a four-phase synthesis process consisting of feature extraction, synopsis learning, privacy and utility preserving noise injection, and generation of differentially private synthetic location traces. The output traces crafted by AdaTrace preserve utility-critical information existing in real traces, and are robust against known location trace attacks. We validate the effectiveness of AdaTrace by comparing it with three state-of-the-art approaches (ngram, DPT, and SGLT) using real location trace datasets (Geolife and Taxi) as well as a simulated dataset of 50,000 vehicles in Oldenburg, Germany. AdaTrace offers up to 3-fold improvement in trajectory utility, and is orders of magnitude faster than previous work, while preserving differential privacy and attack resilience.

  7. BLENDER: Enabling Local Search with a Hybrid Differential Privacy Model 2017 DifferentialPrivacy Usenix usenix.org
    Brendan Avent, Aleksandra Korolova, David Zeber, Torgeir Hovden, and Benjamin Livshits

    We propose a hybrid model of differential privacy that considers a combination of regular and opt-in users who desire the differential privacy guarantees of the local privacy model and the trusted curator model, respectively. We demonstrate that within this model, it is possible to design a new type of blended algorithm for the task of privately computing the most popular records of a web search log. This blended approach provides significant improvements in the utility of obtained data compared to related work while providing users with their desired privacy guarantees. Specifically, on two large search click data sets comprising 4.8 million and 13.2 million unique queries respectively, our approach attains NDCG values exceeding 95% across a range of commonly used privacy budget values.
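
    As a rough intuition for why blending helps (a generic illustration, not BLENDER's actual algorithm): a low-noise estimate from the small opt-in group and a high-noise estimate from the local-model group can be combined by inverse-variance weighting:

    ```python
    def blend(opt_in_estimate: float, opt_in_var: float,
              local_estimate: float, local_var: float) -> float:
        """Inverse-variance weighted combination of the two estimates."""
        w = (1 / opt_in_var) / (1 / opt_in_var + 1 / local_var)
        return w * opt_in_estimate + (1 - w) * local_estimate

    # e.g. a query frequency of 0.031 from opt-in users (low noise) blended with
    # 0.050 from local-model users (high noise)
    print(blend(0.031, 1e-6, 0.050, 1e-3))
    ```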

  8. Locally Differentially Private Protocols for Frequency Estimation 2017 DifferentialPrivacy Usenix usenix.org
    Tianhao Wang, Jeremiah Blocki, Ninghui Li, and Somesh Jha

    Protocols satisfying Local Differential Privacy (LDP) enable parties to collect aggregate information about a population while protecting each user’s privacy, without relying on a trusted third party. LDP protocols (such as Google’s RAPPOR) have been deployed in real-world scenarios. In these protocols, a user encodes his private information and perturbs the encoded value locally before sending it to an aggregator, who combines values that users contribute to infer statistics about the population. In this paper, we introduce a framework that generalizes several LDP protocols proposed in the literature. Our framework yields a simple and fast aggregation algorithm, whose accuracy can be precisely analyzed. Our in-depth analysis enables us to choose optimal parameters, resulting in two new protocols (i.e., Optimized Unary Encoding and Optimized Local Hashing) that provide better utility than protocols previously proposed. We present precise conditions for when each proposed protocol should be used, and perform experiments that demonstrate the advantage of our proposed protocols.
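
    A sketch of a unary-encoding protocol in the family the paper analyzes. The constants follow the commonly cited optimized-unary-encoding choice $p = 1/2$, $q = 1/(e^\varepsilon + 1)$; treat the details as an assumption of this sketch rather than a restatement of the paper:

    ```python
    import math
    import random

    def perturb(value: int, domain: int, eps: float) -> list[int]:
        """Client side: one-hot encode the value, then flip every bit independently."""
        p, q = 0.5, 1.0 / (math.exp(eps) + 1.0)
        return [int(random.random() < (p if v == value else q)) for v in range(domain)]

    def estimate(reports: list[list[int]], eps: float) -> list[float]:
        """Aggregator: unbias the per-element 1-counts."""
        p, q = 0.5, 1.0 / (math.exp(eps) + 1.0)
        n = len(reports)
        counts = [sum(r[v] for r in reports) for v in range(len(reports[0]))]
        return [(c - n * q) / (p - q) for c in counts]

    true_values = [random.choice([0, 0, 1, 2]) for _ in range(20_000)]
    reports = [perturb(v, domain=3, eps=1.0) for v in true_values]
    print([round(x) for x in estimate(reports, eps=1.0)])   # roughly [10000, 5000, 5000]
    ```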

  9. Is Interaction Necessary for Distributed Private Learning? 2017 DifferentialPrivacy MachineLearning Oakland ieeexplore.ieee.org
    A. Smith, A. Thakurta, and J. Upadhyay

    Recent large-scale deployments of differentially private algorithms employ the local model for privacy (sometimes called PRAM or randomized response), where data are randomized on each individual’s device before being sent to a server that computes approximate, aggregate statistics. The server need not be trusted for privacy, leaving data control in users’ hands. For an important class of convex optimization problems (including logistic regression, support vector machines, and the Euclidean median), the best known locally differentially-private algorithms are highly interactive, requiring as many rounds of back and forth as there are users in the protocol. We ask: how much interaction is necessary to optimize convex functions in the local DP model? Existing lower bounds either do not apply to convex optimization, or say nothing about interaction. We provide new algorithms which are either noninteractive or use relatively few rounds of interaction. We also show lower bounds on the accuracy of an important class of noninteractive algorithms, suggesting a separation between what is possible with and without interaction.
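
    For readers unfamiliar with the local model, classical randomized response for a single bit is the canonical example. This is background for the paper's setting, not its convex-optimization algorithms:

    ```python
    import math
    import random

    def randomize(bit: int, eps: float) -> int:
        """Keep the true bit with probability e^eps / (e^eps + 1), otherwise flip it."""
        keep = math.exp(eps) / (math.exp(eps) + 1.0)
        return bit if random.random() < keep else 1 - bit

    def estimate_mean(reports: list[int], eps: float) -> float:
        """Debias the mean of the randomized bits."""
        keep = math.exp(eps) / (math.exp(eps) + 1.0)
        return (sum(reports) / len(reports) - (1 - keep)) / (2 * keep - 1)

    bits = [int(random.random() < 0.3) for _ in range(50_000)]
    reports = [randomize(b, eps=1.0) for b in bits]
    print(estimate_mean(reports, eps=1.0))   # ≈ 0.3
    ```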

  10. Locally Differentially Private Frequent Itemset Mining 2018 DifferentialPrivacy Oakland ieeexplore.ieee.org
    Tianhao Wang and Ninghui Li and Somesh Jha

    The notion of Local Differential Privacy (LDP) enables users to respond to sensitive questions while preserving their privacy. The basic LDP frequency oracle (FO) protocol enables an aggregator to estimate the frequency of any value. But when each user has a set of values, one needs an additional padding and sampling step to find the frequent values and estimate their frequencies. In this paper, we formally define such padding-and-sampling-based frequency oracles (PSFO). We further identify the privacy amplification property in PSFO. As a result, we propose SVIM, a protocol for finding frequent items in the set-valued LDP setting. Experiments show that under the same privacy guarantee and computational cost, SVIM significantly improves over existing methods. With SVIM to find frequent items, we propose SVSM to effectively find frequent itemsets, which to our knowledge has not been done before in the LDP setting.
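
    A hedged sketch of the client-side padding-and-sampling step as described in the abstract. The frequency oracle is abstracted behind `report_fn`, and the padding length `L` is a placeholder, not the paper's tuned value:

    ```python
    import random

    L = 4   # assumed padding length; choosing L well is part of the paper

    def psfo_client(item_set: set[str], report_fn) -> str:
        """Pad (or truncate) the user's set to exactly L items, sample one uniformly,
        and report it through an LDP frequency oracle `report_fn`."""
        padded = list(item_set)[:L]
        padded += [f"dummy_{i}" for i in range(L - len(padded))]
        return report_fn(random.choice(padded))

    # The aggregator scales the oracle's frequency estimates by L to undo the 1/L
    # sampling and simply discards the dummy items.
    print(psfo_client({"apple", "banana"}, report_fn=lambda x: x))
    ```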