Evaluating Entity Disambiguation and the Role of Popularity in Retrieval-Based NLP
In collaboration with University of California, Irvine
Authors: Anthony Chen, Pallavi Gudipati, Shayne Longpre, Xiao Ling, Sameer Singh
Retrieval is a core component of open-domain NLP systems. In open-domain tasks, multiple entities can share a name, making disambiguation an inherent yet under-explored problem for retrievers. We propose an evaluation benchmark for assessing the entity disambiguation capabilities of retrievers, which we call Ambiguous Entity Retrieval (AmbER) sets. An AmbER set is a collection of entities that share a name, along with queries about each of those entities. By covering the full set of entities for a polysemous name, AmbER sets act as a challenging test of entity disambiguation. We create AmbER sets for three popular open-domain tasks: fact checking, slot filling, and question answering, and use them to evaluate a diverse set of retrievers. We find that retrievers exhibit popularity bias, significantly under-performing on the rarer entities that share a name; for example, they are twice as likely to retrieve erroneous documents for queries about the less popular entity under a given name. These experiments demonstrate the utility of AmbER sets as an evaluation tool and highlight the weaknesses of widely used retrieval systems.
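To make the structure of an AmbER set and the popularity-bias measurement concrete, here is a minimal Python sketch. The class and function names (`Query`, `AmbERSet`, `retrieval_accuracy_by_popularity`) and the `retrieve` interface are hypothetical illustrations, not the benchmark's actual code; it assumes each query has a single gold document and that popularity is given by a per-entity count such as page views.

```python
from dataclasses import dataclass, field

@dataclass
class Query:
    """A task query (e.g., a question) about one specific entity."""
    text: str
    entity_id: str   # the entity this query is actually about
    gold_doc_id: str  # document a correct retriever should return

@dataclass
class AmbERSet:
    """Entities sharing a surface name, plus queries about each of them."""
    name: str                    # the shared, polysemous name
    popularity: dict[str, int]   # entity_id -> popularity signal (e.g., page views)
    queries: list[Query] = field(default_factory=list)

    def head_entity(self) -> str:
        """Return the most popular entity under this name."""
        return max(self.popularity, key=self.popularity.get)

def retrieval_accuracy_by_popularity(amber_sets, retrieve):
    """Split top-1 retrieval accuracy between head (most popular)
    and tail (less popular) entities.

    `retrieve(query_text) -> doc_id` stands in for any retriever.
    """
    hits = {"head": [0, 0], "tail": [0, 0]}  # bucket -> [correct, total]
    for amber_set in amber_sets:
        head = amber_set.head_entity()
        for q in amber_set.queries:
            bucket = "head" if q.entity_id == head else "tail"
            hits[bucket][1] += 1
            if retrieve(q.text) == q.gold_doc_id:
                hits[bucket][0] += 1
    return {k: correct / total if total else 0.0
            for k, (correct, total) in hits.items()}
```

Under this framing, the popularity bias reported above would surface as a gap between the returned `"head"` and `"tail"` accuracies, since every query in a set uses the same ambiguous name and differs only in which entity it refers to.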