Abstract

This tutorial aims to weave together diverse strands of modern learning-to-rank (LtR) research and to present them in a unified full-day format. First, we will introduce the fundamentals of LtR and give an overview of its various subfields. Then, we will discuss recent advances in gradient boosting methods such as LambdaMART, focusing on their efficiency/effectiveness trade-offs and optimizations. We will then present TF-Ranking, a new open-source TensorFlow package for neural LtR models, and show how it can be used to model sparse textual features. We will conclude the tutorial by covering unbiased LtR, a new research field that aims to learn from biased implicit user feedback.
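To give a flavor of the TF-Ranking session, the sketch below shows one way to set up a small listwise ranking model with TF-Ranking's Keras losses and metrics. It is a minimal illustration, not material from the tutorial itself: the list size, feature dimension, and network shape are made-up placeholders.

```python
# Minimal sketch: score a fixed-size list of candidate documents per query
# and train with a listwise loss from TF-Ranking's Keras API.
# list_size and num_features below are hypothetical placeholders.
import tensorflow as tf
import tensorflow_ranking as tfr

list_size, num_features = 10, 136

# Each training example is one query's candidate list: [list_size, num_features].
inputs = tf.keras.Input(shape=(list_size, num_features))
hidden = tf.keras.layers.Dense(64, activation="relu")(inputs)
scores = tf.keras.layers.Dense(1)(hidden)            # one score per document
scores = tf.keras.layers.Reshape((list_size,))(scores)  # shape: [batch, list_size]

model = tf.keras.Model(inputs=inputs, outputs=scores)
model.compile(
    optimizer="adam",
    loss=tfr.keras.losses.SoftmaxLoss(),             # listwise softmax cross-entropy
    metrics=[tfr.keras.metrics.NDCGMetric(topn=5)],  # ranking-quality metric
)
```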

The tutorial will consist of three two-hour sessions, each focusing on one of the topics described above. It will provide a mix of theoretical and hands-on material, and should benefit both academics interested in the current state of the art in LtR and practitioners who want to apply LtR techniques in their own applications.

Sessions

  • Session I: Efficiency/Effectiveness Trade-offs (2 hours)
  • Session II: Neural Learning to Rank using TensorFlow (2 hours)
  • Session III: Unbiased Learning to Rank (2 hours)

For more details, see the program overview.