
One-Shot Labeling for Automatic Relevance Estimation


Authors: Sean MacAvaney*, Luca Soldaini*

* equal contribution

Appeared in: Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2023)

DOI: 10.1145/3539618.3592032 · DBLP: conf/sigir/MacAvaneyS23 · arXiv: 2302.11266 · Google Scholar: 7wWfoDgAAAAJ:blknAaTinKkC · Semantic Scholar: 352bcafbcc95a84d96019688955cab5c43eb23f0 · Enlighten: 296335 · smac.pub: sigir2023-autoqrels


Dealing with unjudged documents ("holes") in relevance assessments is a perennial problem when evaluating search systems with offline experiments. Holes can reduce the apparent effectiveness of retrieval systems during evaluation and introduce biases in models trained with incomplete data. In this work, we explore whether large language models can help us fill such holes to improve offline evaluations. We examine an extreme, albeit common, evaluation setting wherein only a single known relevant document per query is available for evaluation. We then explore various approaches for predicting the relevance of unjudged documents with respect to a query and the known relevant document, including nearest neighbor, supervised, and prompting techniques. We find that although the predictions of these One-Shot Labelers (1SLs) frequently disagree with human assessments, the labels they produce yield a far more reliable ranking of systems than the single labels do alone. Specifically, the strongest approaches can consistently reach system ranking correlations of over 0.85 with the full rankings over a variety of measures. Meanwhile, the approach substantially reduces the false positive rate of t-tests due to holes in relevance assessments (from 15-30% down to under 5%), giving researchers more confidence in results they find to be significant.
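The core idea — scoring an unjudged document by its similarity to the query's single known relevant document — can be sketched in a few lines. This is a hedged illustration of the nearest-neighbor flavor of a One-Shot Labeler, not the paper's implementation: the toy bag-of-words vectors, the `one_shot_label` helper, and the 0.4 threshold are all assumptions standing in for a real dense encoder and tuned cutoff.

```python
# Minimal nearest-neighbor One-Shot Labeler (1SL) sketch.
# Assumption: term-frequency vectors stand in for a neural text encoder.
import math
from collections import Counter

def vectorize(text):
    """Toy term-frequency vector; a real 1SL would use a dense encoder."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def one_shot_label(known_relevant, unjudged, threshold=0.4):
    """Infer a relevance label for an unjudged document from the
    query's one known relevant document (hypothetical threshold)."""
    sim = cosine(vectorize(known_relevant), vectorize(unjudged))
    return 1 if sim >= threshold else 0

rel = "penguins are flightless birds found in the southern hemisphere"
print(one_shot_label(rel, "penguins are birds that cannot fly"))  # → 1
print(one_shot_label(rel, "stock markets rallied on tuesday"))    # → 0
```

Filling assessment holes with such inferred labels is what lets the paper recover reliable system rankings from a single judged document per query.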

BibTeX

@inproceedings{macavaney:sigir2023-autoqrels,
  author = {MacAvaney, Sean and Soldaini, Luca},
  title = {One-Shot Labeling for Automatic Relevance Estimation},
  booktitle = {Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval},
  year = {2023},
  url = {https://arxiv.org/abs/2302.11266},
  doi = {10.1145/3539618.3592032}
}