ABNIRML: Analyzing the Behavior of Neural IR Models

pdf bibtex code non-refereed

See the revised version, published in TACL.

Authors: Sean MacAvaney, Sergey Feldman, Nazli Goharian, Doug Downey, Arman Cohan

Appeared in: arXiv

Links/IDs:
DBLP: journals/corr/abs-2011-00696
arXiv: 2011.00696
smac.pub: arxiv2020-abnirml

Abstract:

Numerous studies have demonstrated the effectiveness of pretrained contextualized language models such as BERT and T5 for ad-hoc search. However, it is not well understood why these methods are so effective, what makes some variants more effective than others, and what pitfalls they may have. We present a new comprehensive framework for Analyzing the Behavior of Neural IR ModeLs (ABNIRML), which includes new types of diagnostic tests that allow us to probe several characteristics, such as sensitivity to word order, that are not addressed by previous techniques. To demonstrate the value of the framework, we conduct an extensive empirical study that yields insights into the factors that contribute to neural models' gains, and identify potential unintended biases the models exhibit. We find evidence that recent neural ranking models have fundamentally different characteristics from prior ranking models. For instance, these models can be highly influenced by altered document word order, sentence order, and inflectional endings. They can also exhibit unexpected behaviors when additional content is added to documents, or when documents are expressed with different levels of fluency or formality. We find that these differences can depend on the architecture and not just the underlying language model.
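The word-order probe mentioned in the abstract can be illustrated with a minimal sketch (hypothetical names and setup, not the paper's actual test protocol): perturb each document by shuffling its words, then check whether the ranker's relevance score changes. A purely bag-of-words scorer is order-invariant by construction, so it serves as a sanity check.

```python
import random

def shuffle_words(doc, seed=0):
    """Perturbation: randomly reorder the document's words (seeded for repeatability)."""
    words = doc.split()
    rng = random.Random(seed)
    rng.shuffle(words)
    return " ".join(words)

def word_order_probe(score, query, docs, delta=1e-6):
    """Fraction of documents whose score moves by more than `delta` after shuffling.

    `score(query, doc)` can be any relevance scorer (e.g., a neural re-ranker);
    here we only assume it returns a number.
    """
    changed = 0
    for doc in docs:
        if abs(score(query, doc) - score(query, shuffle_words(doc))) > delta:
            changed += 1
    return changed / len(docs)

# Baseline scorer: unordered term overlap, so word order cannot affect it.
def bow_score(query, doc):
    return len(set(query.split()) & set(doc.split()))
```

Running the probe with `bow_score` reports 0.0 (no score changes), whereas a position-sensitive model such as BERT would typically show a nonzero fraction — the kind of contrast the ABNIRML diagnostics are designed to surface.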

BibTeX:

@article{macavaney:arxiv2020-abnirml,
  author = {MacAvaney, Sean and Feldman, Sergey and Goharian, Nazli and Downey, Doug and Cohan, Arman},
  title = {ABNIRML: Analyzing the Behavior of Neural IR Models},
  year = {2020},
  url = {https://arxiv.org/abs/2011.00696},
  journal = {arXiv},
  volume = {abs/2011.00696}
}
