
ABNIRML: Analyzing the Behavior of Neural IR Models

Journal article · 43 citations

Authors: Sean MacAvaney, Sergey Feldman, Nazli Goharian, Doug Downey, Arman Cohan

Appeared in: TACL

DOI: 10.1162/tacl_a_00457
DBLP: journals/tacl/MacAvaneyFGDC22
Google Scholar: 7wWfoDgAAAAJ:HDshCWvjkbEC
Semantic Scholar: 402954647b4a4855ccbe953c762518b90c7557c0
Enlighten: 259712
smac.pub: taclc2022-abnirml


Pretrained contextualized language models such as BERT and T5 have established a new state of the art for ad-hoc search. However, it is not yet well understood why these methods are so effective, what makes some variants more effective than others, and what pitfalls they may have. We present ABNIRML, a new comprehensive framework for Analyzing the Behavior of Neural IR ModeLs, which includes new types of diagnostic probes that allow us to test several characteristics, such as sensitivity to word order, that are not addressed by previous techniques. To demonstrate the value of the framework, we conduct an extensive empirical study that yields insights into the factors contributing to neural models' gains, and identifies potential unintended biases the models exhibit. We find evidence that recent neural ranking models are fundamentally different from prior ranking models: they rely less on exact term overlap with the query and instead leverage richer linguistic information, as evidenced by their much higher sensitivity to word and sentence order. We also find that the pretrained language model alone does not dictate a system's behavior in ad-hoc retrieval: the same model within different ranking architectures can yield very different behavior.
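The word-order probe mentioned in the abstract can be illustrated with a minimal sketch: perturb each document by shuffling its words, re-score it, and measure how often the ranker's score changes. All names below are hypothetical, and the toy bag-of-words scorer stands in for a real ranking model; it is not the paper's implementation.

```python
import random

def shuffle_words(text, seed=0):
    """Perturb a document by shuffling word order (content is preserved)."""
    words = text.split()
    rng = random.Random(seed)
    rng.shuffle(words)
    return " ".join(words)

def term_overlap_score(query, doc):
    """Toy bag-of-words ranker: counts distinct query terms present in the doc.
    By construction it is insensitive to word order."""
    doc_terms = set(doc.lower().split())
    return sum(1 for t in query.lower().split() if t in doc_terms)

def word_order_sensitivity(score_fn, pairs, eps=1e-6):
    """Fraction of (query, doc) pairs whose score changes after shuffling."""
    changed = 0
    for query, doc in pairs:
        delta = score_fn(query, doc) - score_fn(query, shuffle_words(doc))
        if abs(delta) > eps:
            changed += 1
    return changed / len(pairs)

pairs = [("neural ranking", "neural models rank documents by relevance"),
         ("word order", "order of words matters to contextual models")]
print(word_order_sensitivity(term_overlap_score, pairs))  # 0.0: bag-of-words is order-invariant
```

Swapping in a contextualized ranker for `term_overlap_score` would, per the paper's findings, yield a much higher sensitivity value, since such models exploit word and sentence order rather than only term overlap.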

BibTeX:

@article{macavaney:taclc2022-abnirml,
  author  = {MacAvaney, Sean and Feldman, Sergey and Goharian, Nazli and Downey, Doug and Cohan, Arman},
  title   = {ABNIRML: Analyzing the Behavior of Neural IR Models},
  year    = {2022},
  url     = {https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00457/110013/ABNIRML-Analyzing-the-Behavior-of-Neural-IR-Models},
  doi     = {10.1162/tacl_a_00457},
  journal = {TACL}
}