1. Is Interaction Necessary for Distributed Private Learning? 2017 DifferentialPrivacy MachineLearning Oakland
    A. Smith and A. Thakurta and J. Upadhyay
    [View PDF on ieeexplore.ieee.org]

    @INPROCEEDINGS{smith2017interaction,
      author={A. {Smith} and A. {Thakurta} and J. {Upadhyay}},
      booktitle={2017 IEEE Symposium on Security and Privacy (SP)},
      title={Is Interaction Necessary for Distributed Private Learning?},
      year={2017},
      keywords={convex programming;data privacy;security of data;distributed private learning;differentially-private algorithms;convex optimization;noninteractive algorithms;Protocols;Servers;Convex functions;Privacy;Approximation algorithms;Data privacy;Algorithm design and analysis;Differential privacy;local differential privacy;convex optimization;oracle complexity},
    }

Recent large-scale deployments of differentially private algorithms employ the local model for privacy (sometimes called PRAM or randomized response), where data are randomized on each individual’s device before being sent to a server that computes approximate, aggregate statistics. The server need not be trusted for privacy, leaving data control in users’ hands. For an important class of convex optimization problems (including logistic regression, support vector machines, and the Euclidean median), the best known locally differentially-private algorithms are highly interactive, requiring as many rounds of back and forth as there are users in the protocol. We ask: how much interaction is necessary to optimize convex functions in the local DP model? Existing lower bounds either do not apply to convex optimization, or say nothing about interaction. We provide new algorithms which are either noninteractive or use relatively few rounds of interaction. We also show lower bounds on the accuracy of an important class of noninteractive algorithms, suggesting a separation between what is possible with and without interaction.
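The local model described in the abstract can be illustrated with its simplest instance, randomized response for a single bit: each user perturbs their own bit before sending it, and the untrusted server debiases the aggregate. The sketch below is illustrative only (the function names and the single-bit setting are my own, not from the paper), showing one round of a noninteractive local-DP protocol:

```python
import math
import random

def randomized_response(bit, epsilon):
    """Client side: release one bit under epsilon-local DP.

    Report the true bit with probability e^eps / (e^eps + 1),
    otherwise flip it. No trust in the server is required.
    """
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p else 1 - bit

def estimate_mean(reports, epsilon):
    """Server side: unbiased estimate of the true fraction of 1s.

    Since E[report] = (1 - p) + mu * (2p - 1), invert that affine map.
    """
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)

if __name__ == "__main__":
    random.seed(0)
    epsilon = 1.0
    bits = [1 if i < 6000 else 0 for i in range(20000)]  # true mean 0.3
    reports = [randomized_response(b, epsilon) for b in bits]
    print(estimate_mean(reports, epsilon))  # close to 0.3
```

This protocol is noninteractive: each user sends a single message and the server never replies. The paper's question is whether harder tasks, such as the convex optimization problems above, can retain this one-shot structure without losing accuracy.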