L3S Best Publication of the Quarter (Q3+Q4/2025)
Category: Explainable AI and AutoML
DeepCAVE: A Visualization and Analysis Tool for Automated Machine Learning
Authors: Sarah Segel, Helena Graf, Edward Bergman, Kristina Thieme, Marcel Wever, Frank Hutter, Marius Lindauer
Published in the Journal of Machine Learning Research (JMLR), Machine Learning Open Source Software (MLOSS) track
Which problem do you solve with your research?
Modern AI systems depend heavily on hyperparameters, settings that strongly influence how well a model performs. Finding good settings automatically is a central task in Automated Machine Learning (AutoML). However, these optimisation processes are often complex and difficult to follow, making them feel like a “black box” even to experts.
What is new about your research?
With DeepCAVE, we introduce an interactive visualisation and analysis tool that makes AutoML more transparent. DeepCAVE allows users to explore how different configurations were tested, how performance evolved over time, which hyperparameters have the greatest impact, and where further improvements may be possible. It supports modern optimisation settings such as multi-objective and multi-fidelity optimisation as well as multiple popular AutoML frameworks, all within a browser-based interface.
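For readers who want to try the tool, the sketch below shows one way an optimisation run might be recorded so that DeepCAVE can analyse it. It is adapted from the Recorder interface described in DeepCAVE's documentation and is only a hedged illustration: the hyperparameter "alpha", the budgets, and the placeholder evaluation are invented for this example, and exact argument names should be checked against the installed DeepCAVE and ConfigSpace versions.

```python
"""Minimal sketch: recording an optimisation run that DeepCAVE can analyse.

Adapted from the Recorder example in DeepCAVE's documentation; argument
names and the ConfigSpace API may differ slightly between versions, and
the "evaluation" below is a placeholder rather than real model training.
"""

import time

from ConfigSpace import ConfigurationSpace
from ConfigSpace.hyperparameters import UniformFloatHyperparameter
from deepcave import Objective, Recorder

# Illustrative one-dimensional search space (hypothetical hyperparameter).
configspace = ConfigurationSpace(seed=0)
configspace.add_hyperparameter(UniformFloatHyperparameter("alpha", 0.0, 1.0))

# Two objectives, illustrating multi-objective logging.
accuracy = Objective("accuracy", lower=0, upper=1, optimize="upper")
runtime = Objective("time", lower=0, optimize="lower")

with Recorder(configspace, objectives=[accuracy, runtime]) as recorder:
    for config in configspace.sample_configuration(20):
        for budget in (10, 50, 100):  # multi-fidelity budgets
            recorder.start(config, budget)
            start = time.time()
            # Placeholder for training/evaluating a model with this config.
            accuracy_value = 1.0 - abs(config["alpha"] - 0.42)
            recorder.end(costs=[accuracy_value, time.time() - start])
```

The resulting run can then be inspected in the browser-based dashboard (started, per the documentation, with the deepcave command-line tool); runs produced by supported AutoML frameworks can alternatively be loaded through DeepCAVE's converters instead of being recorded manually.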
What is the potential impact?
By transforming complex optimisation data into intuitive visual insights, DeepCAVE helps researchers and practitioners better understand, debug, and improve their AI systems. In the long term, this contributes to more trustworthy, efficient, and human-centred AI.

Link to the full paper: https://arxiv.org/abs/2512.01810
