Guiding Retrieval using Large Language Models

Long conference paper (to appear)

Authors: Mandeep Rathee, Sean MacAvaney, Avishek Anand

Appearing in: 47th European Conference on Information Retrieval (ECIR 2025)

Links/IDs:
Enlighten: 343699 · smac.pub: ecir2025-llmgar

Abstract:

Large Language Models (LLMs) have shown strong promise as rerankers, especially in "listwise" settings where an LLM is prompted to rerank several search results at once. However, this "cascading" retrieve-and-rerank approach is limited by the bounded recall problem: relevant documents not retrieved initially are permanently excluded from the final ranking. In this paper, we apply ideas from the area of adaptive retrieval to explore the capacity of these listwise LLM models to help guide the retrieval process itself, thereby overcoming the bounded recall problem for LLM rerankers. Specifically, we propose an adaptive algorithm that merges results both from the initial ranking and from feedback documents provided by the most relevant documents seen up to that point. Through extensive experiments across diverse LLM rerankers, first-stage retrievers, and feedback sources, we demonstrate that our method can improve nDCG@10 by up to 13.23% and recall by 28.02%, all while keeping the total number of LLM inferences constant and the overheads of the adaptive process minimal. The work opens the door to leveraging LLM-based search in settings where the initial pool of results is limited, e.g., by legacy systems or by the cost of deploying a semantic first stage.
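
The sketch below illustrates the general shape of such an adaptive loop, not the paper's exact algorithm: batches alternate between the initial ranking and feedback documents drawn from the neighbours of the best documents seen so far, with one listwise LLM call per batch. The function names `listwise_rerank` and `nearest_neighbours` are hypothetical placeholders (e.g., a prompted LLM reranker and a corpus-graph lookup), and all parameters are illustrative assumptions.

```python
def adaptive_listwise_rerank(initial_ranking, listwise_rerank, nearest_neighbours,
                             batch_size=20, budget=100):
    """Sketch of adaptive retrieval guided by a listwise reranker.

    Alternates between documents from the initial ranking and feedback
    documents suggested by the current top results, so recall can grow
    beyond the first-stage pool while the number of LLM calls stays fixed
    (roughly budget / batch_size inferences).
    """
    frontier = list(initial_ranking)   # documents still pending from the first stage
    feedback = []                      # candidates proposed by top-ranked documents
    ranked, seen = [], set()
    use_feedback = False

    while len(seen) < budget and (frontier or feedback):
        # Pick the next batch, alternating sources when possible.
        source = feedback if (use_feedback and feedback) else frontier
        batch = []
        while source and len(batch) < batch_size:
            doc = source.pop(0)
            if doc not in seen:
                batch.append(doc)
        if not batch:
            use_feedback = not use_feedback
            continue
        seen.update(batch)

        # One LLM inference: jointly order the new batch with the current top results.
        ranked = listwise_rerank(ranked[:batch_size] + batch)

        # Top-ranked documents provide feedback: their corpus neighbours
        # become candidates for later batches.
        for doc in ranked[:3]:
            feedback.extend(d for d in nearest_neighbours(doc) if d not in seen)

        use_feedback = not use_feedback

    return ranked
```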

BibTeX:

@inproceedings{rathee:ecir2025-llmgar,
  author    = {Rathee, Mandeep and MacAvaney, Sean and Anand, Avishek},
  title     = {Guiding Retrieval using Large Language Models},
  booktitle = {47th European Conference on Information Retrieval},
  year      = {2025}
}