DELPHI: Data for Evaluating LLMs' Performance in Handling Controversial Issues
Authors: David Q. Sun*, Artem Abzaliev*, Hadas Kotek, Zidi Xiu, Christopher Klein, Jason D. Williams
*Equal Contributors
Controversy is a reflection of our zeitgeist and an important aspect of any discourse. The rise of large language models (LLMs) as conversational systems has increased public reliance on these systems for answers to a wide range of questions. Consequently, it is crucial to systematically examine how these models respond to questions that pertain to ongoing debates. However, few datasets provide human-annotated labels that reflect contemporary discussions. To foster research in this area, we propose a novel construction of a controversial questions dataset, expanding upon the publicly released Quora Question Pairs Dataset. This dataset presents challenges concerning knowledge recency, safety, fairness, and bias. We evaluate different LLMs on a subset of this dataset, illuminating how they handle controversial issues and the stances they adopt. This research ultimately contributes to our understanding of how LLMs interact with controversial issues, paving the way for improvements in their comprehension and handling of complex societal debates.