← smac.pub home

Analyzing Adversarial Attacks on Sequence-to-Sequence Relevance Models

pdf bibtex 9 citations long conference paper

Authors: Andrew Parry, Maik Fröbe, Sean MacAvaney, Martin Potthast, Matthias Hagen

Appeared in: Proceedings of the 46th European Conference on Information Retrieval Research (ECIR 2024)

Links/IDs:
DOI 10.1007/978-3-031-56060-6_19 DBLP conf/ecir/ParryFMPH24 arXiv 2403.07654 Google Scholar 7wWfoDgAAAAJ:pqnbT2bcN3wC Enlighten 312854 smac.pub ecir2024-llmadv

Abstract:

Modern sequence-to-sequence relevance models like monoT5 can effectively capture complex interactions between queries and documents through cross-encoding. However, the use of natural language tokens in prompts, such as Query, Document, and Relevant for monoT5, opens an attack vector for malicious documents to manipulate their relevance score through prompt injection, e.g., by adding target words such as true. Since such possibilities have not yet been considered in retrieval evaluation, we analyze the impact of query-independent prompt injection via templates and via LLM-based rewriting of documents. Our experiments on the TREC Deep Learning track show that adversarial documents can easily manipulate different sequence-to-sequence relevance models, while BM25 (as a typical lexical model) is not affected. Remarkably, the attacks also affect encoder-only relevance models (which do not rely on natural language prompt tokens), albeit to a lesser extent.

BibTeX @inproceedings{parry:ecir2024-llmadv, author = {Parry, Andrew and Fröbe, Maik and MacAvaney, Sean and Potthast, Martin and Hagen, Matthias}, title = {Analyzing Adversarial Attacks on Sequence-to-Sequence Relevance Models}, booktitle = {Proceedings of the 46th European Conference on Information Retrieval Research}, year = {2024}, url = {https://arxiv.org/abs/2403.07654}, doi = {10.1007/978-3-031-56060-6_19} }