
We study the problem of differentially private stochastic convex optimization (DP-SCO) with heavy-tailed gradients, where we assume a $k^{\text{th}}$-moment bound on the Lipschitz constants of sample functions rather than a uniform bound. We propose a new reduction-based approach that enables us to obtain the first optimal rates (up to logarithmic factors) in the heavy-tailed setting, achieving error $G_2 \cdot \frac{1}{\sqrt{n}} + G_k \cdot \left(\frac{\sqrt{d}}{n\varepsilon}\right)^{1 - \frac{1}{k}}$ under $(\varepsilon, \delta)$-approximate differential privacy, up to a mild $\textup{polylog}(\frac{\log n}{\delta})$ factor, where $G_2^2$ and $G_k^k$ are the $2^{\text{nd}}$ and $k^{\text{th}}$ moment bounds on sample Lipschitz constants, nearly matching a lower bound of (Lowy et al. 2023).
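For readability, the claimed excess risk bound can be restated as a display equation; the $\lesssim$ notation is ours and absorbs the logarithmic and $\textup{polylog}(\frac{\log n}{\delta})$ factors mentioned above:

$$
\text{excess risk} \;\lesssim\; G_2 \cdot \frac{1}{\sqrt{n}} \;+\; G_k \cdot \left(\frac{\sqrt{d}}{n\varepsilon}\right)^{1 - \frac{1}{k}},
$$

under $(\varepsilon, \delta)$-approximate differential privacy, where $G_2^2$ and $G_k^k$ are the $2^{\text{nd}}$- and $k^{\text{th}}$-moment bounds on the sample Lipschitz constants.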

Related readings and updates.

We design new differentially private algorithms for the problems of adversarial bandits and bandits with expert advice. For adversarial bandits, we give a simple and efficient conversion of any non-private bandit algorithm into a private bandit algorithm. Instantiating our conversion with existing non-private bandit algorithms gives a regret upper bound of $O\left(\frac{\sqrt{KT}}{\sqrt{\varepsilon}}\right)$, improving upon the existing upper bound...


Stochastic convex optimization over an $\ell_1$-bounded domain is ubiquitous in machine learning applications such as LASSO but remains poorly understood when learning with differential privacy. We show that, up to logarithmic factors, the optimal excess population loss of any $(\varepsilon, \delta)$-differentially private optimizer is $\sqrt{\log(d)/n} + \sqrt{d}/(\varepsilon n)$. The upper bound is based on a new algorithm that combines the...
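In the same display style, the stated optimal excess population loss for the $\ell_1$-bounded setting (up to logarithmic factors) is:

$$
\sqrt{\frac{\log d}{n}} \;+\; \frac{\sqrt{d}}{\varepsilon n}.
$$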
