Discriminating Form and Meaning in Multilingual Models with Minimal-Pair ABX Tasks
Authors: Maureen de Seyssel*, Jie Chi*, Skyler Seto, Maartje ter Hoeve, Masha Fedzechkina, Natalie Schluter
We introduce a set of training-free, ABX-style discrimination tasks to evaluate how multilingual language models represent language identity (form) and semantic content (meaning). Inspired by speech processing, these zero-shot tasks measure whether minimal differences in representation can be reliably detected, offering a flexible and interpretable alternative to probing. Applied to XLM-R (Conneau et al., 2020) across pretraining checkpoints and layers, we find that language discrimination declines over training and becomes concentrated in lower layers, while meaning discrimination strengthens over time and stabilizes in deeper layers. We then examine probing tasks, finding some alignment between our metrics and linguistic learning performance. Our results position ABX tasks as a lightweight framework for analyzing the structure of multilingual representations.
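As a concrete illustration, below is a minimal sketch of an ABX-style discrimination test over fixed-size sentence representations: given a triple (A, B, X) where A and X share the property of interest (the same language, or the same meaning) and B differs minimally in that property, we check whether X lies closer to A than to B. The cosine distance, the triple format, and the `abx_accuracy` helper are illustrative assumptions, not the paper's exact protocol.

```python
# Minimal sketch of an ABX discrimination test on representation vectors.
# Assumptions (not from the paper): cosine distance, (A, B, X) triples,
# and accuracy as the fraction of triples where X is closer to A than to B.
import numpy as np


def cosine_distance(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine distance between two representation vectors."""
    return 1.0 - float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))


def abx_accuracy(triples) -> float:
    """Fraction of (A, B, X) triples where X is closer to A than to B.

    A and X share the property being tested (e.g. same language, or same
    meaning); B differs minimally in that property.
    """
    correct = sum(
        cosine_distance(a, x) < cosine_distance(b, x) for a, b, x in triples
    )
    return correct / len(triples)


# Toy usage: random vectors stand in for model hidden states; in practice
# A, B, X would be layer representations of minimal-pair sentences.
rng = np.random.default_rng(0)
a, b, x = (rng.normal(size=768) for _ in range(3))
print(abx_accuracy([(a, b, x)]))
```

Because the test only compares distances, it needs no training or classifier head, which is what makes it usable zero-shot across every checkpoint and layer.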
*Equal Contributors
May 16, 2025 · Research area: Speech and Natural Language Processing · Conference: ACL