Appeared in: Asian Conference on Machine Learning 2021 (ACML 2021)
Abstract:
We consider the query recommendation problem in closed-loop interactive learning settings such as online information gathering and exploratory analytics. The problem can be naturally modelled using the Multi-Armed Bandits (MAB) framework with countably many arms. Standard MAB algorithms for countably many arms begin by selecting a random set of candidate arms and then applying a standard MAB algorithm, e.g., UCB, on this candidate set downstream. We show that such a selection strategy often results in higher cumulative regret, and to this end we propose a selection strategy based on the maximum utility of the arms. We show that in tasks like online information gathering, where sequential query recommendations are employed, the sequences of queries are correlated, and the number of potentially optimal queries can be reduced to a manageable size by selecting queries with maximum utility with respect to the currently executing query. Our experimental results, using a log file from a recent real-world online literature discovery service, demonstrate that the proposed arm selection strategy substantially reduces the cumulative regret compared to the commonly used random selection strategy for a variety of contextual multi-armed bandit algorithms. Our data model and source code are available at.
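The core idea can be illustrated with a small simulation. The sketch below is a hypothetical toy example, not the paper's implementation: it assumes a large arm pool in which each arm's expected reward is proportional to an (assumed, exponentially decaying) utility score, then compares running UCB1 on a randomly chosen candidate set versus a candidate set of the highest-utility arms. All names (`ucb1`, `utility`, the decay rate) are illustrative choices, not taken from the paper.

```python
import random
import math

def ucb1(candidates, true_means, horizon, seed=0):
    """Run UCB1 on a fixed candidate set; return cumulative (pseudo-)regret
    measured against the best arm in the FULL arm pool."""
    rng = random.Random(seed)
    counts = {a: 0 for a in candidates}
    sums = {a: 0.0 for a in candidates}
    best = max(true_means.values())
    regret = 0.0
    for t in range(1, horizon + 1):
        # Play each candidate once, then pick the arm with the largest UCB index.
        untried = [a for a in candidates if counts[a] == 0]
        if untried:
            arm = untried[0]
        else:
            arm = max(candidates,
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < true_means[arm] else 0.0  # Bernoulli reward
        counts[arm] += 1
        sums[arm] += reward
        regret += best - true_means[arm]
    return regret

# Hypothetical arm pool: utility decays with "distance" from the currently
# executing query, and expected reward tracks utility (assumed model).
n_arms = 1000
utility = {i: math.exp(-0.01 * i) for i in range(n_arms)}
true_means = {i: 0.9 * utility[i] for i in range(n_arms)}

k = 10  # candidate set size
rng = random.Random(42)
random_set = rng.sample(range(n_arms), k)                          # random selection
max_util_set = sorted(range(n_arms), key=utility.get, reverse=True)[:k]  # max-utility

r_rand = ucb1(random_set, true_means, horizon=2000)
r_util = ucb1(max_util_set, true_means, horizon=2000)
print(f"random candidates:      regret = {r_rand:.1f}")
print(f"max-utility candidates: regret = {r_util:.1f}")
```

Because the max-utility candidate set is likely to contain the globally best arm while a small random sample usually misses it, the downstream bandit's cumulative regret is typically far lower under the utility-based selection, which is the effect the paper quantifies on real query logs.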
BibTeX:

@inproceedings{puthiya:acml2021-bandit,
  author    = {Puthiya, Sham and Anagnostopoulos, Christos and Murray-Smith, Roderick and Zervas, Evangelos and MacAvaney, Sean},
  title     = {Max-Utility Based Arm Selection Strategy For Sequential Query Recommendations},
  booktitle = {Asian Conference on Machine Learning},
  year      = {2021},
  url       = {https://arxiv.org/abs/2108.13810}
}