Trade-offs in Data Memorization via Strong Data Processing Inequalities
Authors: Vitaly Feldman, Guy Kornowski†, Xin Lyu‡
Recent research has demonstrated that training large language models involves memorization of a significant fraction of the training data. Such memorization can lead to privacy violations when training on sensitive user data, and thus motivates the study of data memorization's role in learning. In this work, we develop a general approach for proving lower bounds on excess data memorization that relies on a new connection between strong data processing inequalities and data memorization. We then demonstrate that several simple and natural binary classification problems exhibit a trade-off between the number of samples available to a learning algorithm and the amount of information about the training data that the algorithm needs to memorize to be accurate. In particular, Ω(d) bits of information about the training data need to be memorized when O(1) d-dimensional examples are available, and this amount then decays as the number of examples grows at a problem-specific rate. Further, our lower bounds are generally matched (up to logarithmic factors) by simple learning algorithms. We also extend our lower bounds to more general mixture-of-clusters models. Our definitions and results build on the work of Brown et al. (2021) and address several limitations of the lower bounds in their work.
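The memorization quantity the abstract refers to can be made concrete. In the information-theoretic framing of Brown et al. (2021), which this work builds on, memorization is measured as the mutual information between the training sample and the algorithm's output; the sketch below uses that standard formulation, and the paper's exact definition may differ in details:

```latex
% Memorization of a training set S by a (possibly randomized)
% learning algorithm A, measured in bits, following the
% information-theoretic framing of Brown et al. (2021):
\mathrm{mem}(A) \;=\; I\bigl(A(S)\,;\,S\bigr)
% A lower bound of the form mem(A) = \Omega(d) then states that any
% sufficiently accurate learner must retain \Omega(d) bits of
% information about the d-dimensional training examples.
```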
Cram Less to Fit More: Training Data Pruning Improves Memorization of Facts
April 13, 2026 · research areas: Methods and Algorithms, Speech and Natural Language Processing · Workshop at ICLR
This paper was accepted at the Workshop on Navigating and Addressing Data Problems for Foundation Models at ICLR 2026.
Large language models (LLMs) can struggle to memorize factual knowledge in their parameters, often leading to hallucinations and poor performance on knowledge-intensive tasks. In this paper, we formalize fact memorization from an information-theoretic perspective and study how training data distributions affect fact accuracy. We…
What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation
December 3, 2020 · research area: Methods and Algorithms · conference: NeurIPS
Deep learning algorithms are well known to have a propensity for fitting the training data very well, often fitting even outliers and mislabeled data points. Such fitting requires memorization of training data labels, a phenomenon that has attracted significant research interest but has not yet been given a compelling explanation. A recent work of Feldman (2019) proposes a theoretical explanation for this phenomenon based on a combination of…