Session I: Efficiency/Effectiveness Trade-offs (Start time 9:00)
- Introduction to LtR and aims of the tutorial (30 min.)
- Introduction to LtR, its historical evolution, and its main results, together with an illustration of the goals of the tutorial.
- The role of LtR in modern Web search engines. Review of the main LtR approaches, with a focus on tree-based models and artificial neural networks. Discussion of the quality vs. efficiency trade-off in the use of LtR models, and a brief description of multi-stage ranking architectures.
- Efficiency in Learning to Rank (60 min.)
- Detailed analysis of state-of-the-art solutions for improving the efficiency of LtR models along different dimensions.
- Hands-on Session (30 min.)
- We show how to apply state-of-the-art strategies to obtain a more efficient ranking model without losing effectiveness: given a model learnt with an algorithm such as LambdaMART, we show how to reduce its runtime cost by a factor larger than 18x (the scoring cost being optimized is illustrated in the sketch after this session outline).
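The hands-on material itself is distributed separately. As a self-contained illustration of the cost model the session targets, the sketch below scores a document with a gradient-boosted ensemble of regression trees (the shape of a LambdaMART model) and shows the simplest efficiency lever, truncating the ensemble after the first k trees. The tree layout, class names, and numbers are illustrative assumptions, not the tutorial's actual code or the technique behind the 18x speed-up.

```python
# Illustrative sketch: scoring with a gradient-boosted ranking ensemble and
# the simplest efficiency lever (truncating the ensemble). Not the tutorial's
# actual optimization; layout and names are assumptions for illustration.
import numpy as np

class Tree:
    """A tiny regression tree stored as parallel arrays."""
    def __init__(self, feature, threshold, left, right, value):
        self.feature = feature      # feature index tested at each internal node
        self.threshold = threshold  # split threshold at each internal node
        self.left = left            # index of the left child, or -1 for a leaf
        self.right = right          # index of the right child, or -1 for a leaf
        self.value = value          # prediction stored at leaf nodes

    def predict(self, x):
        node = 0
        while self.left[node] != -1:  # walk from the root down to a leaf
            if x[self.feature[node]] <= self.threshold[node]:
                node = self.left[node]
            else:
                node = self.right[node]
        return self.value[node]

def score(ensemble, x, first_k=None):
    """Score one document by summing the outputs of the first `first_k` trees."""
    trees = ensemble if first_k is None else ensemble[:first_k]
    return sum(t.predict(x) for t in trees)

# Example: a stump splitting on feature 0 at 0.5, with leaf values -1 and +1.
stump = Tree(feature=np.array([0, 0, 0]),
             threshold=np.array([0.5, 0.0, 0.0]),
             left=np.array([1, -1, -1]),
             right=np.array([2, -1, -1]),
             value=np.array([0.0, -1.0, 1.0]))
ensemble = [stump] * 1000
x = np.array([0.7, 0.1])
full = score(ensemble, x)        # cost grows linearly with the number of trees
cheap = score(ensemble, x, 100)  # ~10x fewer node visits, possibly lower quality
```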
Session II: Neural Learning to Rank using TensorFlow (Start time 11:30)
- Introduction to Neural Ranking (30 min.)
- Neural learning-to-rank primer (a scoring and listwise-loss sketch follows this session outline)
- TensorFlow and Estimator framework overview
- TensorFlow Ranking: components and APIs
- Hands-on Session (90 min.)
- Introduction to data formats and data sets, and Colaboratory setup (a minimal data-parsing sketch follows this session outline)
- Demo A: TensorFlow Ranking for Search using the MSLR-Web30k data set
- Demo B: TensorFlow Ranking for Passage Retrieval using the MSMARCO data set
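For orientation before the demos: the MSLR-Web30k data set is distributed in a LibSVM-style text format, one document per line (`<label> qid:<id> <feature>:<value> ...`). The minimal parser below is only a sketch of that format; the demos rely on TensorFlow Ranking's own input pipelines, and the function name and feature count are assumptions used for illustration.

```python
# Minimal sketch of parsing one LibSVM-style line as used by LTR data sets
# such as MSLR-Web30k. The demos use TensorFlow Ranking's own input pipelines;
# this function and its arguments are illustrative assumptions.
def parse_libsvm_line(line, num_features):
    parts = line.strip().split()
    label = float(parts[0])                   # graded relevance judgement
    qid = parts[1].split(":")[1]              # query identifier after "qid:"
    features = [0.0] * num_features
    for token in parts[2:]:
        index, value = token.split(":")
        features[int(index) - 1] = float(value)  # feature ids are 1-based
    return qid, label, features

qid, label, feats = parse_libsvm_line("2 qid:10 1:0.5 3:1.25", num_features=136)
```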
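As a primer for the components listed above, the NumPy sketch below applies a small scoring function independently to each candidate document of one query and computes a listwise softmax cross-entropy loss over the resulting scores. TensorFlow Ranking provides scoring functions, ranking losses, and metrics as library components; the network size, weights, and labels below are illustrative assumptions only.

```python
# NumPy sketch of the core of neural LTR: a per-document scoring function and
# a listwise softmax cross-entropy loss over one query's documents. Shapes,
# weights, and labels are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
num_docs, num_features, hidden = 5, 136, 16

X = rng.normal(size=(num_docs, num_features))   # feature vectors of one query's docs
labels = np.array([2.0, 0.0, 1.0, 0.0, 0.0])    # graded relevance labels

# One-hidden-layer scoring function s(x), applied independently to each document.
W1, b1 = rng.normal(size=(num_features, hidden)), np.zeros(hidden)
W2, b2 = rng.normal(size=(hidden, 1)), np.zeros(1)
scores = (np.maximum(X @ W1 + b1, 0.0) @ W2 + b2).ravel()

def softmax(z):
    z = z - z.max()                 # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum()

# Listwise loss: cross-entropy between the label distribution over the list
# and the softmax of the predicted scores.
label_dist = labels / labels.sum()
loss = -np.sum(label_dist * np.log(softmax(scores) + 1e-12))
print(loss)
```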
Session III: Unbiased Learning to Rank (Start time 14:30)
- Introduction to Learning from User Interactions (10 min.)
- Counterfactual Learning to Rank (50 min.)
- Counterfactual evaluation and propensity-weighted Learning to Rank (an IPS estimation sketch follows this session outline)
- Position bias estimation techniques — online and offline estimation and practical considerations
- Online Learning to Rank (45 min.)
- Interleaving and how it deals with position bias (a Team Draft Interleaving sketch follows this session outline)
- Dueling Bandit Gradient Descent (DBGD): the method that defined a decade of Online Learning to Rank (OLTR) algorithms (a minimal DBGD update sketch follows this session outline)
- Extensions of DBGD and their limitations
- Pairwise Differentiable Gradient Descent (PDGD)
- An empirical and theoretical comparison between PDGD and DBGD
- Conclusions & Future Directions (15 min.)
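To make the propensity-weighting idea from the counterfactual part concrete, the sketch below estimates a DCG-style quality of a new ranker from logged clicks: each click is weighted by the inverse of the estimated probability that its logged position was examined, which removes position bias in expectation. The propensity values, discount function, and names are illustrative assumptions.

```python
# Sketch of the inverse-propensity-scoring (IPS) estimator used in
# counterfactual LTR: reweight logged clicks by 1 / examination propensity.
# Values and names are illustrative assumptions.
import numpy as np

def ips_estimate(new_ranks, clicks, propensities):
    """Estimate a DCG-style quality of a new ranking from logged clicks.

    new_ranks:    1-based rank each logged document would get under the new ranker
    clicks:       1 if the document was clicked in the logged ranking, else 0
    propensities: estimated examination probability of the logged position
    """
    discount = 1.0 / np.log2(new_ranks + 1.0)        # DCG-style rank discount
    return np.sum(clicks * discount / propensities)  # 1/p reweighting removes position bias

# Three logged documents: the first was clicked while shown at a position
# examined with probability 0.8, and the new ranker would place it second.
print(ips_estimate(new_ranks=np.array([2, 1, 3]),
                   clicks=np.array([1, 0, 0]),
                   propensities=np.array([0.8, 1.0, 0.5])))
```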
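The interleaving discussion can be grounded with the compact Team Draft sketch below: the two rankings take turns (in a randomized order per round) contributing their highest-ranked unused document, the contributing team of each document is recorded, and clicks on the interleaved list are credited per team, which factors position bias out of the comparison. This is a simplified illustration, not a production implementation.

```python
# Simplified Team Draft Interleaving: build an interleaved list from two
# rankings and remember which ranker ("team") contributed each document.
import random

def team_draft_interleave(ranking_a, ranking_b, rng=random):
    interleaved, teams, used = [], [], set()
    all_docs = set(ranking_a) | set(ranking_b)
    while len(used) < len(all_docs):
        order = [("A", ranking_a), ("B", ranking_b)]
        rng.shuffle(order)                      # randomize who picks first this round
        for team, ranking in order:
            doc = next((d for d in ranking if d not in used), None)
            if doc is not None:                 # take the team's best unused document
                interleaved.append(doc)
                teams.append(team)
                used.add(doc)
    return interleaved, teams

docs, teams = team_draft_interleave(["d1", "d2", "d3"], ["d3", "d1", "d4"])
# A click on docs[i] is credited to teams[i]; the ranker whose team collects
# more credited clicks wins the comparison.
```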
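Finally, a minimal sketch of one DBGD update for a linear ranker: sample a random unit direction, build a perturbed candidate ranker, compare it against the current ranker via an interleaved comparison, and move toward the perturbation if it wins. The interleaved comparison is stubbed out with a simulated preference below, and the step sizes and names are illustrative assumptions.

```python
# Minimal sketch of Dueling Bandit Gradient Descent (DBGD) for a linear ranker.
# The interleaved comparison is replaced by a simulated preference signal.
import numpy as np

rng = np.random.default_rng(0)

def dbgd_step(w, interleaved_comparison, delta=1.0, alpha=0.1):
    """One DBGD update of the weight vector `w`."""
    u = rng.normal(size=w.shape)
    u /= np.linalg.norm(u)                    # random unit exploration direction
    candidate = w + delta * u                 # exploratory ranker
    if interleaved_comparison(candidate, w):  # did the candidate win the duel?
        w = w + alpha * u                     # step toward the winning direction
    return w

# Stubbed comparison: pretend users prefer rankers closer to a hidden target w*.
w_star = np.array([1.0, -2.0, 0.5])
prefers_candidate = lambda cand, cur: (np.linalg.norm(cand - w_star)
                                       < np.linalg.norm(cur - w_star))

w = np.zeros(3)
for _ in range(200):
    w = dbgd_step(w, prefers_candidate)
print(w)  # drifts toward w_star under this simulated preference signal
```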
A detailed abstract for Session III is available here.
See here for more details about the program and the material that will be covered.