
Interpreting Neural Rankers

This project aims to explain the output of the ranking models used in all commercial search engines today. Despite their astonishing performance, neural networks are mostly used as black boxes. The project makes it possible to explain to end users why and how their preferences influence a neural ranker's behavior. Such explanations enhance the transparency, interpretability, and trustworthiness of neural rankers.


Today, algorithmic decision making is prevalent in fields including medicine, automobiles, and retail. On the one hand, this is a testament to the ever-improving performance and capabilities of complex machine learning models. On the other hand, their increased complexity has resulted in a lack of transparency and interpretability, which has led to critical decision-making models being deployed as functional black boxes. Being able to explain the actions of such systems helps attribute liability, build trust, and expose biases, and in turn leads to improved models. This has recently spurred research on extracting post-hoc explanations from black-box classifiers and sequence generators in tasks such as image captioning, text classification, and machine translation. However, little work has been done on explaining the output of the ranking models used in all commercial search engines today.

With this grant, we plan to develop algorithms for post-hoc explanations of black-box rankers. In particular, we focus on text-based neural network rankers, which learn feature representations that are hard to understand for developers and end users alike.
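To illustrate the general idea of a post-hoc explanation for a black-box ranker, the following is a minimal sketch of term occlusion: each document term is removed in turn and the drop in the ranker's score is attributed to that term. The `score` function here is a hypothetical stand-in (simple query-term overlap) for an arbitrary neural ranker; it is not the project's method, only an illustration of the explanation setting.

```python
def score(query, doc_terms):
    """Toy black-box relevance score: counts query-term overlap.
    A stand-in for an arbitrary neural ranker's scoring function."""
    q = set(query.split())
    return sum(1.0 for t in doc_terms if t in q)

def occlusion_attributions(query, doc_terms, score_fn=score):
    """Attribute the score to each term by leave-one-out occlusion:
    the attribution of a term is the score drop when it is removed."""
    base = score_fn(query, doc_terms)
    attributions = []
    for i, term in enumerate(doc_terms):
        occluded = doc_terms[:i] + doc_terms[i + 1:]
        attributions.append((term, base - score_fn(query, occluded)))
    return attributions

doc = "neural rankers learn opaque feature representations".split()
attrs = occlusion_attributions("explaining neural rankers", doc)
# Terms whose removal lowers the score get positive attribution.
```

Because the procedure only queries the scoring function, it treats the ranker strictly as a black box; more refined post-hoc methods follow the same pattern with smarter perturbations and surrogate models.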

Research area
Intelligent Access to Information