Private Stochastic Convex Optimization: Optimal Rates in ℓ1 Geometry
Authors: Hilal Asi, Vitaly Feldman, Tomer Koren, Kunal Talwar
Stochastic convex optimization over an ℓ1-bounded domain is ubiquitous in machine learning applications such as LASSO, but remains poorly understood when learning with differential privacy. We show that, up to logarithmic factors, the optimal excess population loss of any (ε, δ)-differentially private optimizer is √(log(d)/n) + √d/(εn). The upper bound is based on a new algorithm that combines the iterative localization approach of Feldman et al. (2020) with a new analysis of private regularized mirror descent. It applies to ℓp-bounded domains for p ∈ [1, 2] and queries at most n^{3/2} gradients, improving over the best previously known algorithm for the ℓ2 case, which needs n^2 gradients. Further, we show that when the loss functions satisfy additional smoothness assumptions, the excess loss is upper bounded (up to logarithmic factors) by √(log(d)/n) + (log(d)/(εn))^{2/3}. This bound is achieved by a new variance-reduced version of the Frank-Wolfe algorithm that requires just a single pass over the data. We also show that the lower bound in this case is the minimum of the two rates mentioned above.
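To make the single-pass, noisy Frank-Wolfe idea concrete, here is a minimal illustrative sketch for least squares over an ℓ1 ball: each step selects a vertex of the ball from Laplace-perturbed gradient coordinates and takes a convex combination step. This is an assumption-laden toy, not the paper's calibrated, variance-reduced algorithm; the noise scale, step size, and selection rule are placeholders chosen for readability rather than privacy-accounted values.

```python
import numpy as np

def noisy_frank_wolfe(X, y, radius=1.0, epsilon=1.0, T=None, rng=None):
    """Toy noisy Frank-Wolfe for 0.5*(x·w - y)^2 over the l1 ball of the given radius.

    At each step the linear minimizer over the l1 ball is a signed coordinate
    vertex; it is picked from gradient coordinates perturbed with Laplace noise
    (an exponential-mechanism-style selection). Noise scale and step size here
    are illustrative assumptions, not the published, privacy-calibrated choices.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    T = T or n                              # single pass: one example per step
    w = np.zeros(d)
    for t in range(T):
        xi, yi = X[t % n], y[t % n]
        grad = (xi @ w - yi) * xi           # per-example gradient
        noise = rng.laplace(scale=2.0 / epsilon, size=d)   # heuristic scale
        j = int(np.argmax(np.abs(grad) + noise))           # noisy vertex choice
        vertex = np.zeros(d)
        vertex[j] = -radius * np.sign(grad[j])
        eta = 2.0 / (t + 2)                 # standard Frank-Wolfe step size
        w = (1 - eta) * w + eta * vertex
    return w

# Example usage on synthetic sparse-regression data (purely illustrative):
# rng = np.random.default_rng(0)
# X = rng.normal(size=(500, 100))
# w_true = np.zeros(100); w_true[:3] = [0.5, -0.3, 0.2]
# y = X @ w_true + 0.01 * rng.normal(size=500)
# w_hat = noisy_frank_wolfe(X, y, radius=1.0, epsilon=1.0)
```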
January 10, 2025 · Research areas: Methods and Algorithms, Privacy
Fingerprinting codes are a crucial tool for proving lower bounds in differential privacy. They have been used to prove tight lower bounds for several fundamental questions, especially in the "low accuracy" regime. Unlike reconstruction/discrepancy approaches, however, they are more suited for proving worst-case lower bounds, for query sets that arise naturally from the fingerprinting codes construction. In this work, we propose a general framework...
June 20, 2023 · Research areas: Methods and Algorithms, Privacy · Conference: COLT
* = Equal Contributors
Online prediction from experts is a fundamental problem in machine learning, and several works have studied this problem under privacy constraints. We propose and analyze new algorithms for this problem that improve over the regret bounds of the best existing algorithms for non-adaptive adversaries. For approximate differential privacy, our algorithms achieve regret bounds of for...