
Exploiting Positional Bias for Query-Agnostic Generative Content in Search

Long conference paper

Authors: Andrew Parry, Sean MacAvaney, Debasis Ganguly

Appeared in: Findings of the Association for Computational Linguistics: ACL 2024

Links/IDs:
DOI: 10.18653/V1/2024.FINDINGS-ACL.656
DBLP: conf/acl/ParryMG24
arXiv: 2405.00469
Google Scholar: 7wWfoDgAAAAJ:dfsIfKJdRG4C
Enlighten: 326286
smac.pub: acl2024-adv

Abstract:

In recent years, neural ranking models (NRMs) have been shown to substantially outperform their lexical counterparts in text retrieval. In traditional search pipelines, a combination of lexical features leads to well-defined behaviour. However, as neural approaches become increasingly prevalent as the final scoring component of engines or as standalone systems, their robustness to malicious text and, more generally, semantic perturbation must be better understood. We posit that the transformer attention mechanism can induce exploitable defects through positional bias in search models, leading to an attack that could generalise beyond a single query or topic. We demonstrate such defects by showing that non-relevant text, such as promotional content, can be easily injected into a document without adversely affecting its position in search results. Unlike previous gradient-based attacks, we demonstrate these biases in a query-agnostic fashion: without knowledge of the information need, we can still reduce the negative effects of non-relevant content injection by controlling the injection position, revealing a fundamental bias in multiple state-of-the-art NRM architectures. We then use large language models to generate promotional text conditioned on target documents, finding that contextualisation of the non-relevant text further reduces its negative effects whilst likely circumventing existing content filtering mechanisms. In contrast, lexical models correctly penalise such content. We then investigate a simple yet effective compensation for the weaknesses of NRMs in search, validating our hypotheses regarding transformer bias.
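The core manipulation the abstract describes — query-agnostic injection that controls only *where* non-relevant text lands in a document — can be illustrated with a minimal sketch. This is a hypothetical illustration, not the authors' code; the documents, payload, and `inject` helper are all invented for demonstration.

```python
# Hypothetical sketch of position-controlled injection: the attacker knows
# nothing about the query, only the sentence index at which to place the
# promotional payload. The paper's hypothesis is that positional bias in
# transformer-based rankers makes some positions far less harmful.

def inject(sentences: list[str], payload: str, position: int) -> list[str]:
    """Insert `payload` as a standalone sentence at index `position`."""
    position = max(0, min(position, len(sentences)))
    return sentences[:position] + [payload] + sentences[position:]

doc = ["Widgets are tools.", "They have many uses.", "Experts rely on them."]
promo = "Buy SuperWidget today."

early = " ".join(inject(doc, promo, 0))         # payload leads the document
late = " ".join(inject(doc, promo, len(doc)))   # payload trails the document
```

Comparing a ranker's score for `early` versus `late` against the unmodified document is the kind of probe the paper uses to expose positional bias; a lexical model such as BM25 would penalise both variants similarly regardless of position.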

BibTeX:

@inproceedings{parry:acl2024-adv,
  author    = {Parry, Andrew and MacAvaney, Sean and Ganguly, Debasis},
  title     = {Exploiting Positional Bias for Query-Agnostic Generative Content in Search},
  booktitle = {Findings of the Association for Computational Linguistics: ACL},
  year      = {2024},
  url       = {https://arxiv.org/abs/2405.00469},
  doi       = {10.18653/V1/2024.FINDINGS-ACL.656}
}