Appeared in: 2nd International Workshop on Open Web Search (WOWS@ECIR 2025)
Abstract:
Modern information retrieval systems often rely on multiple components executed in a pipeline. In a research setting, this can lead to substantial redundant computations (e.g., retrieving the same query multiple times for different downstream rerankers). To overcome this, researchers take cached “result” files as inputs, which represent the output of another pipeline. However, these result files can be brittle and can cause a disconnect between the conceptual design of the pipeline and its logical implementation. To overcome both the redundancy problem (when executing complete pipelines) and the disconnect problem (when relying on intermediate result files), we describe our recent efforts to improve the caching capabilities in the open-source PyTerrier IR platform. We focus on two main directions: (1) automatic implicit caching of common pipeline prefixes when comparing systems and (2) explicit caching of operations through a new extension package, pyterrier-caching. These approaches allow for the best of both worlds: pipelines can be fully expressed end-to-end, while also avoiding redundant computations between pipelines.
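The redundancy problem the abstract describes can be illustrated with a minimal sketch: two rerankers share the same first-stage retrieval, and memoizing that shared prefix means it runs only once. This is a generic illustration using Python's standard library, not the actual PyTerrier or pyterrier-caching API; the function and reranker names are hypothetical.

```python
# Illustrative sketch (hypothetical names, not the PyTerrier API):
# caching a shared pipeline prefix so two downstream rerankers
# reuse one retrieval instead of recomputing it.
from functools import lru_cache

def retrieve(query: str) -> tuple:
    """Stand-in first-stage retriever; expensive in practice."""
    retrieve.calls += 1
    return tuple(f"doc{i}:{query}" for i in range(3))
retrieve.calls = 0

@lru_cache(maxsize=None)
def cached_retrieve(query: str) -> tuple:
    # Memoized wrapper: the common prefix is computed at most
    # once per distinct query.
    return retrieve(query)

def rerank_a(results):
    return sorted(results)

def rerank_b(results):
    return sorted(results, reverse=True)

# Two pipelines share the retrieval prefix; it executes only once.
run_a = rerank_a(cached_retrieve("open web search"))
run_b = rerank_b(cached_retrieve("open web search"))
assert retrieve.calls == 1
```

An explicit cache, as provided by the pyterrier-caching extension, plays a similar role but persists results across processes, so the pipeline can still be expressed end-to-end while repeated sub-computations are served from the cache.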
BibTeX:
@inproceedings{macavaney:wows2025-caching,
  author    = {MacAvaney, Sean and Macdonald, Craig},
  title     = {On Precomputation and Caching in Information Retrieval Experiments with Pipeline Architectures},
  booktitle = {2nd International Workshop on Open Web Search},
  year      = {2025},
  url       = {https://arxiv.org/abs/2504.09984}
}