Private Stochastic Convex Optimization with Heavy Tails: Near-Optimality from Simple Reductions
Authors: Hilal Asi, Daogao Liu, Kevin Tian
We study the problem of differentially private stochastic convex optimization (DP-SCO) with heavy-tailed gradients, where we assume a $k^{\text{th}}$-moment bound on the Lipschitz constants of sample functions rather than a uniform bound. We propose a new reduction-based approach that enables us to obtain the first optimal rates (up to logarithmic factors) in the heavy-tailed setting, achieving error $G_2 \cdot \frac{1}{\sqrt{n}} + G_k \cdot \left(\frac{\sqrt{d}}{n\epsilon}\right)^{1 - \frac{1}{k}}$ under $(\epsilon, \delta)$-approximate differential privacy, up to a mild $\mathrm{polylog}\left(\frac{1}{\delta}\right)$ factor, where $G_2$ and $G_k$ are the $2^{\text{nd}}$ and $k^{\text{th}}$ moment bounds on sample Lipschitz constants, nearly matching a lower bound of (Lowy et al., 2023).
July 11, 2025 | Research areas: Methods and Algorithms, Privacy | Conference: ICML
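For a concrete sense of the bound, here is a minimal sketch that evaluates the rate stated in the abstract, $G_2/\sqrt{n} + G_k\,(\sqrt{d}/(n\epsilon))^{1-1/k}$, at hypothetical parameter values; the function name and the numbers below are illustrative assumptions rather than anything from the paper, and logarithmic and $\mathrm{polylog}(1/\delta)$ factors are omitted.

```python
import math

def heavy_tailed_dpsco_rate(n, d, eps, k, G2, Gk):
    """Evaluate the excess-risk bound G2/sqrt(n) + Gk*(sqrt(d)/(n*eps))^(1-1/k)
    stated in the abstract above (logarithmic factors omitted; values are
    illustrative assumptions, not results from the paper)."""
    statistical_term = G2 / math.sqrt(n)  # non-private sampling error
    privacy_term = Gk * (math.sqrt(d) / (n * eps)) ** (1.0 - 1.0 / k)
    return statistical_term + privacy_term

# Hypothetical setting: 1e6 samples, 1e3 dimensions, eps = 1, a 4th-moment
# bound on sample Lipschitz constants, and made-up moment bounds G2, Gk.
print(heavy_tailed_dpsco_rate(n=10**6, d=10**3, eps=1.0, k=4, G2=1.0, Gk=2.0))
```

Note that as $k \to \infty$ the exponent $1 - 1/k$ tends to $1$, so the privacy term recovers the familiar $G\sqrt{d}/(n\epsilon)$ shape of DP-SCO with uniformly Lipschitz sample functions.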
We design new differentially private algorithms for the problems of adversarial bandits and bandits with expert advice. For adversarial bandits, we give a simple and efficient conversion of any non-private bandit algorithm to a private bandit algorithm. Instantiating our conversion with existing non-private bandit algorithms gives a regret upper bound of [...], improving upon the existing upper bound...
July 9, 2021 | Research areas: Methods and Algorithms, Privacy | Conference: ICML
Stochastic convex optimization over an $\ell_1$-bounded domain is ubiquitous in machine learning applications such as LASSO but remains poorly understood when learning with differential privacy. We show that, up to logarithmic factors, the optimal excess population loss of any $(\epsilon, \delta)$-differentially private optimizer is $\sqrt{\log(d)/n} + \sqrt{d}/(\epsilon n)$. The upper bound is based on a new algorithm that combines the...
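As a hedged illustration only, the short sketch below plugs hypothetical values into the rate stated above, $\sqrt{\log(d)/n} + \sqrt{d}/(\epsilon n)$; the function name and numbers are assumptions for illustration, and logarithmic factors are dropped.

```python
import math

def l1_dpsco_rate(n, d, eps):
    """Evaluate the excess population loss bound sqrt(log(d)/n) + sqrt(d)/(eps*n)
    stated in the abstract above (logarithmic factors omitted; values are
    illustrative assumptions, not results from the paper)."""
    statistical_term = math.sqrt(math.log(d) / n)  # non-private l1 rate
    privacy_term = math.sqrt(d) / (eps * n)        # additional cost of privacy
    return statistical_term + privacy_term

# Hypothetical setting: 1e5 samples in 1e4 dimensions with eps = 1.
print(l1_dpsco_rate(n=10**5, d=10**4, eps=1.0))
```

The first term matches the familiar non-private statistical rate over $\ell_1$-bounded domains, which grows only logarithmically with the dimension $d$; the second term is the additional cost of privacy.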