Appeared in: arXiv
Abstract:
Numerous studies have demonstrated the effectiveness of pretrained contextualized language models such as BERT and T5 for ad-hoc search. However, it is not well understood why these methods are so effective, what makes some variants more effective than others, and what pitfalls they may have. We present a new comprehensive framework for Analyzing the Behavior of Neural IR ModeLs (ABNIRML), which includes new types of diagnostic tests that allow us to probe several characteristics, such as sensitivity to word order, that are not addressed by previous techniques. To demonstrate the value of the framework, we conduct an extensive empirical study that yields insights into the factors contributing to neural models' gains and identifies potential unintended biases the models exhibit. We find evidence that recent neural ranking models have fundamentally different characteristics from prior ranking models. For instance, they can be highly influenced by altered document word order, sentence order, and inflectional endings. They can also exhibit unexpected behaviors when additional content is added to documents, or when documents are expressed with different levels of fluency or formality. We find that these differences can depend on the architecture and not just the underlying language model.
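To make the idea of a diagnostic test concrete, below is a minimal sketch of one such probe: measuring how often a ranker's score changes when a document's words are shuffled. This is an illustration only, assuming a generic `score(query, doc)` callable; the function names and the change-counting measure here are hypothetical, not the paper's actual implementation.

```python
import random


def shuffle_words(text: str, seed: int = 0) -> str:
    """Return the document text with its words randomly reordered."""
    words = text.split()
    random.Random(seed).shuffle(words)
    return " ".join(words)


def word_order_sensitivity(score, query: str, docs: list[str]) -> float:
    """Fraction of documents whose ranking score changes under word shuffling.

    `score(query, doc) -> float` is any ranking function under test.
    A purely bag-of-words ranker (e.g., unigram term matching) returns 0.0;
    contextualized neural rankers typically do not.
    """
    changed = 0
    for doc in docs:
        if abs(score(query, doc) - score(query, shuffle_words(doc))) > 1e-6:
            changed += 1
    return changed / len(docs)
```

For example, passing a BM25-style scorer over unigrams yields a sensitivity of 0.0, while a BERT-based cross-encoder would generally register score changes on most documents.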
BibTeX:
@article{macavaney:arxiv2020-abnirml,
  author  = {MacAvaney, Sean and Feldman, Sergey and Goharian, Nazli and Downey, Doug and Cohan, Arman},
  title   = {ABNIRML: Analyzing the Behavior of Neural IR Models},
  year    = {2020},
  url     = {https://arxiv.org/abs/2011.00696},
  journal = {arXiv},
  volume  = {abs/2011.00696}
}
Using BERT, T5, or similar as a ranking function? What language characteristics do these models care about? What could go wrong? With @SergeyFeldman, Nazli Goharian, @_DougDowney, and @armancohan, we investigate this in our new pre-print ABNIRML: https://t.co/Glh62Fq1tf
— Sean MacAvaney (@macavaney) November 11, 2020