L3S Best Publication of the Quarter (Q3+Q4/2025)
Category: Explainable AI and AutoML
HyperSHAP: Shapley Values and Interactions for Explaining Hyperparameter Optimization
Authors: Marcel Wever, Maximilian Muschalik, Fabian Fumagalli, Marius Lindauer
Presented at AAAI 2026
Which problem do you solve with your research?
Modern AI systems depend heavily on hyperparameters: settings that strongly influence how well a model performs. Finding the best combination of these settings (a process known as hyperparameter optimisation) is essential for building accurate and reliable AI systems. However, this process is often a black box: we know the final result works well, but we do not understand why.
What is new about your research?
With HyperSHAP, we introduce a method that makes this process more transparent and explainable. Our approach shows which hyperparameters truly matter, how they interact with each other, and how much improvement they actually bring. In other words, we do not just optimise AI systems: we explain the optimisation itself. HyperSHAP is based on Shapley values and allows the optimisation process to be explained from different angles. Unlike earlier approaches, HyperSHAP shows how much real performance gain is achievable through tuning, rather than only measuring how much performance varies across settings.
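To make the core idea concrete, the following is a minimal, illustrative sketch rather than the authors' implementation: treat "tuning a subset of hyperparameters" as a cooperative game whose value is the validation performance reachable when only that subset is tuned (all others stay at their defaults), and attribute the overall tuning gain to individual hyperparameters via exact Shapley values. The random-forest search space, grids, and helper functions below are assumptions chosen for illustration; the paper additionally covers interaction values and further explanation games.

```python
# Illustrative sketch only (assumed toy setup, not the HyperSHAP library).
# Game: v(S) = best cross-validated accuracy when only hyperparameters in S
# are tuned and the rest keep their defaults. Shapley values then split the
# total tuning gain v(all) - v(empty) among the individual hyperparameters.

from itertools import combinations, product
from math import factorial

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Hypothetical search space: hyperparameter name -> candidate values (default first).
SPACE = {
    "n_estimators": [25, 10, 50],
    "max_depth": [None, 3, 10],
    "min_samples_leaf": [1, 2, 4],
}
DEFAULTS = {name: values[0] for name, values in SPACE.items()}


def tuned_performance(subset):
    """Best CV accuracy when only the hyperparameters in `subset` are tuned."""
    grids = [SPACE[h] if h in subset else [DEFAULTS[h]] for h in SPACE]
    best = -float("inf")
    for combo in product(*grids):
        params = dict(zip(SPACE, combo))
        score = cross_val_score(
            RandomForestClassifier(random_state=0, **params), X, y, cv=3
        ).mean()
        best = max(best, score)
    return best


def shapley_values(players, value):
    """Exact Shapley values of the set function `value` over `players`."""
    n = len(players)
    # Evaluate the game once per subset and cache the results.
    cache = {
        frozenset(s): value(frozenset(s))
        for k in range(n + 1)
        for s in combinations(players, k)
    }
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            for s in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[p] += weight * (cache[frozenset(s) | {p}] - cache[frozenset(s)])
    return phi


if __name__ == "__main__":
    players = list(SPACE)
    phi = shapley_values(players, tuned_performance)
    baseline = tuned_performance(frozenset())
    print(f"performance with defaults: {baseline:.4f}")
    # By the efficiency property, these contributions sum to the total tuning gain.
    for name, contribution in sorted(phi.items(), key=lambda kv: -kv[1]):
        print(f"{name:>18}: {contribution:+.4f} of the tuning gain")
```

In this toy game, a hyperparameter whose tuning never changes the achievable accuracy receives a contribution of zero, which is exactly the kind of distinction between "tunable gain" and mere performance variation described above.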
What is the potential impact?
This increased transparency has important implications. It helps researchers and companies build AI systems more efficiently, reduces costly trial-and-error experimentation, and strengthens trust in automated decision-making. In high-stakes areas such as healthcare, finance, or public administration, understanding why an AI system performs well is just as important as achieving good performance.

Link to the full paper: https://arxiv.org/pdf/2502.01276
