Interpreting Search Engine Rankings

Algorithmic decision making is now prevalent in many fields, including medicine, automobiles and retail. On one hand, this is a testament to the ever-improving performance and capabilities of complex machine learning models. Most modern search engines also rely heavily on machine learning to help users find relevant information, ranking documents according to the user's intent by taking into account many signals, such as the content of the documents, their importance and user behaviour. On the other hand, this increased complexity has come at the cost of transparency and interpretability: critical decision-making models are deployed as functional black boxes that output a prediction, score or ranking without revealing, even partially, how different features influence the result. When an algorithm prioritizes information to predict, classify or rank, algorithmic transparency becomes an important property for restricting discrimination and building explanation-based trust in the system. In the context of machine learning and information retrieval systems, interpretability can be defined as "the ability to explain or to present in understandable terms to a human".
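To make the ranking setting concrete, here is a minimal sketch of how a search engine might combine per-document signals into a relevance score. The signal names and weights are purely illustrative, not taken from any real system:

```python
# Minimal sketch of signal-based ranking: a learned model combines
# per-document signals into a single relevance score. Signal names
# and weights are hypothetical.

def score(doc_signals, weights):
    """Linear combination of ranking signals for one document."""
    return sum(weights[name] * value for name, value in doc_signals.items())

weights = {"text_match": 2.0, "doc_importance": 1.0, "click_rate": 0.5}

docs = {
    "d1": {"text_match": 0.9, "doc_importance": 0.2, "click_rate": 0.1},
    "d2": {"text_match": 0.4, "doc_importance": 0.8, "click_rate": 0.6},
}

# Rank documents by descending score.
ranking = sorted(docs, key=lambda d: score(docs[d], weights), reverse=True)
print(ranking)  # d1 scores 2.05, d2 scores 1.90 -> ['d1', 'd2']
```

Even in this toy form, the interpretability question is visible: once the scoring function is a deep model rather than a transparent weighted sum, it is no longer obvious which signals drove a document to the top.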

This has recently led to research on generating explanations from black-box classifiers and sequence generators in tasks such as image classification and captioning (Computer Vision), text classification and machine translation (Natural Language Processing), and explaining recommendations (Recommender Systems). However, there has been little substantial work in information retrieval, where ranking models are both learned from large amounts of training data and highly complex. The objective of this proposal is to understand and lay the foundations for interpretability in information retrieval.
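One simple family of post-hoc explanation methods works by perturbing the inputs of a black-box model and observing how its output changes. The sketch below illustrates this idea with feature ablation: each signal is zeroed out in turn and the drop in the score is taken as its contribution. The scorer is a stand-in for an opaque model, and all names and values are hypothetical:

```python
# Post-hoc explanation sketch: attribute a black-box ranking score to
# input features by zeroing each one out (feature ablation).

def blackbox_score(features):
    # Stand-in for an opaque learned ranker; internals assumed unknown
    # to the explainer, which only calls this function.
    return 3.0 * features["text_match"] + 1.5 * features["freshness"] ** 2

def ablation_explanation(score_fn, features):
    """Attribute the score to features via leave-one-out ablation."""
    base = score_fn(features)
    contributions = {}
    for name in features:
        perturbed = dict(features, **{name: 0.0})
        contributions[name] = base - score_fn(perturbed)
    return contributions

feats = {"text_match": 0.8, "freshness": 0.5}
print(ablation_explanation(blackbox_score, feats))
# text_match contributes 2.4, freshness 0.375
```

Such perturbation-based explanations treat the model purely as a function to query, which is what makes them attractive for the black-box rankers discussed above; the open research question is how well these local attributions reflect what a complex ranking model actually does.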



Prof. Dr. Avishek Anand

Avishek Anand is an assistant professor at the Leibniz Universität Hannover and a member of the L3S Research Center in Hannover, Germany. He did his PhD at the Department of Databases and Information Systems, Max Planck Institute for Informatics, Saarbrücken, Germany, where he worked on indexing and query-processing approaches for supporting temporal text workloads. His current research focuses on various questions in information retrieval, such as: How can we exploit temporal information to improve search results? How do we explain decisions taken by search engines?