EmoNet-Face: A New Emotion Vocabulary for More Empathic and Fairer AI

L3S Best Publication Award (Q3+Q4/2025)
Category: Emotion Recognition 

EmoNet-Face: An Expert-Annotated Benchmark for Synthetic Emotion Recognition

Authors: Christoph Schuhmann · Robert Kaczmarczyk · Gollam Rabby · Maurice Kraus · Felix Friedrich · Huu Nguyen · Kalyan Sai Krishna · Kourosh Nadi · Kristian Kersting · Sören Auer 

Presented at NeurIPS 2025

The paper in a nutshell:

For AI to be truly useful in the real world, it needs to understand the subtle nuances of human feelings, yet most current systems recognise only a few basic emotions and often harbour hidden biases. Our research, EmoNet-Face, bridges this gap by introducing a sophisticated “emotion vocabulary” of 40 distinct states, moving beyond simple happiness or sadness to capture complex feelings such as bitterness and shame. To ensure the technology works fairly for everyone, we built a large, demographically balanced dataset of high-quality, clear imagery representing diverse ages, genders, and ethnicities. The result is a system that recognises human expressions as accurately as a human expert, providing a reliable and inclusive foundation for the next generation of empathetic AI.

Which problem do you solve with your research? 

The fundamental problem we address is that today’s AI is often “emotionally blind” and unintentionally biased, which makes it unreliable for sensitive, real-world applications. Most AI systems recognise only a handful of basic emotions and frequently struggle to distinguish between subtle but important feelings, such as the difference between simple sadness and complex shame. Furthermore, because many AI models were trained on narrow, haphazardly assembled datasets, they often fail to work accurately across different ethnicities, ages, and genders. Our research replaces these “blunt instruments” with EmoNet-Face, a precision toolkit that trains AI to understand 40 distinct emotional states while ensuring the technology performs fairly and accurately for everyone, regardless of their background.

Link to the full paper: https://arxiv.org/abs/2505.20033