Avishek Anand

Juniorprofessor (Assistant Professor) @ Leibniz University, Hannover.

Address: L3S Research Center, Appelstraße 4, Hannover.
Phone: +49 0511 762 17795
Fax: +49 0511 762 17779
Email: anand (at) l3s . de

[LinkedIn] [L3S Home] [Google Scholar]

Research Interests



I am interested in developing effective and interpretable machine learning models for problems in Web and information retrieval. On the one hand, I focus on developing representation learning techniques for information retrieval tasks such as document ranking, question answering, and document recommendation, and for Web tasks such as node classification and link prediction. On the other hand, I try to tackle the problem of interpretability of machine learning models in general and retrieval models in particular. That is, how can we better understand the rationale behind the predictions of a black-box retrieval model? In the past, I also worked on temporal information retrieval and text mining. My research is generously supported by Amazon Research Awards and Schufa Faculty Grants.

My research interests include:

  • Interpretability in Search Systems.

  • Representation learning for text and graphs.

  • Temporal Information Retrieval.

  • Scalable Machine Learning.




Projects



    Interpreting Search

    Retrieval models rank documents for typically under-specified queries. As these models become increasingly powerful and sophisticated, they also become harder to understand. Consequently, it is hard to identify training artifacts, data-specific biases, and learned intents in a complex trained model such as a neural ranker, even when it is trained purely on text features. EXS is a search system designed specifically to provide its users with insight into the following questions: “What is the intent of the query according to the ranker?”, “Why is this document ranked higher than another?” and “Why is this document relevant to the query?”.
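    The following is a minimal, purely illustrative sketch (not the EXS implementation) of the general idea behind such post-hoc explanations: perturb a document, re-score it with the black-box ranker, and attribute the score changes to the dropped terms. The score function below is a hypothetical placeholder standing in for an opaque retrieval model.

        import random
        from collections import defaultdict

        def score(query, doc):
            """Placeholder black-box ranker (an assumption for this sketch):
            fraction of document terms that also appear in the query."""
            q_terms = set(query.lower().split())
            d_terms = doc.lower().split()
            return sum(t in q_terms for t in d_terms) / (len(d_terms) or 1)

        def explain(query, doc, n_samples=500, drop_prob=0.3, seed=42):
            """Estimate each document term's contribution to the relevance
            score by randomly dropping terms and observing the score change."""
            rng = random.Random(seed)
            terms = doc.split()
            base = score(query, doc)
            contrib, counts = defaultdict(float), defaultdict(int)
            for _ in range(n_samples):
                kept = [t for t in terms if rng.random() > drop_prob]
                delta = base - score(query, " ".join(kept))
                for t in set(terms) - set(kept):
                    contrib[t] += delta
                    counts[t] += 1
            return sorted(((t, contrib[t] / counts[t]) for t in counts),
                          key=lambda item: -item[1])

        if __name__ == "__main__":
            query = "neural ranking models"
            doc = "interpreting neural ranking models for web search"
            for term, weight in explain(query, doc):
                print(f"{term:15s} {weight:+.3f}")

    Running the sketch prints each document term with a rough estimate of how much it pushes the relevance score up or down, which is the kind of term-level evidence EXS surfaces for real rankers.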


    Representation Learning for Text and Graphs

    There has been significant progress in unsupervised representation learning for text and graphs. Word and node embeddings are low-dimensional, continuous, and dense representations of words and nodes in a semantic vector space. Such embeddings are routinely used as input representations in neural network architectures for IR, NLP, and Web tasks. We focus on learning effective representations of text and graphs by answering two important research questions: How do we scale representation learning to very large datasets? And can we characterize the success of a representation learning method based on the learning task and the input data? Go to Nerd
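    As a concrete, purely illustrative example of the kind of unsupervised representation learning this project builds on, the sketch below trains skip-gram word embeddings with gensim. It assumes gensim >= 4.0; the toy corpus and hyperparameters are made up for illustration and are not from any project dataset.

        from gensim.models import Word2Vec

        # Toy tokenized corpus (illustrative only).
        corpus = [
            ["interpretable", "ranking", "models", "for", "web", "search"],
            ["node", "embeddings", "for", "link", "prediction"],
            ["word", "embeddings", "as", "input", "to", "neural", "rankers"],
        ]

        model = Word2Vec(
            sentences=corpus,   # tokenized sentences
            vector_size=50,     # dimensionality of the dense representation
            window=3,           # context window size
            min_count=1,        # keep every token in this tiny corpus
            sg=1,               # use the skip-gram objective
            epochs=50,
        )

        # A low-dimensional, dense vector for a word, and its nearest
        # neighbours in the learned semantic space.
        print(model.wv["embeddings"][:5])
        print(model.wv.most_similar("embeddings", topn=3))

    Scaling this kind of training to very large text and graph corpora, and characterizing when it succeeds, are the two research questions stated above.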


    Temporal Search

    Longitudinal corpora like newspaper archives are of immense value to historical research, and time, as an important factor for historians, strongly influences their search behaviour in these archives. When searching for articles published over time, a key preference is to retrieve documents that cover the important aspects of important points in time, which differs from standard search behaviour. We have developed HistDiv and Tempas, temporal search and exploration systems for searching historical news collections.

    Go to HistDiv, Go to Tempas



Brief CV

2017-now     Assistant Professor, Leibniz University, Hannover.
2014-2017    Post-doctoral Researcher, L3S Research Center, Hannover.
2009-2013    PhD Student, Department of Databases and Information Systems, Max Planck Institute for Informatics, Saarbruecken.
2007-2009    Masters Student, Saarland University and Max Planck Institute for Informatics, Saarbruecken.
2005-2007    Software Engineer, Microsoft, India Development Center.
2001-2005    Bachelor Student, Indian Institute of Information Technology, Allahabad.