Figure: AI for Science: How data, reasoning, and knowledge converge. Knowledge Graphs and Neuro-symbolic AI connect human reasoning with machine learning to explain why scientific phenomena occur. CAIMed, TrustKG, and ORKG demonstrate how interpretable and trustworthy AI can help scientists understand not only what happens, but why.

AI for Science

When Machines Start Thinking

There’s no shortage of data: satellites record climate change, microscopes produce cellular images, and hospitals generate countless patient records. Yet data alone are not knowledge. Insight arises only when we understand what the information means. AI for Science aims to turn this flood of data into new discoveries – not just to detect what is happening, but to explain why.

“At L3S we see AI as a partner, not a competitor,” says Prof. Dr Maria-Esther Vidal. “Our goal is to build systems that help researchers think more deeply and act more precisely.”

Connecting Knowledge

Vidal and her team are working with neuro-symbolic AI – an approach that combines machine learning with symbolic reasoning. “So that machines don’t just compute, but actually understand,” Vidal explains.

The foundation is formed by knowledge graphs, which link facts – for example about genes, drugs or materials – to provide context and make reasoning transparent.
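
What that looks like in practice can be sketched in a few lines of Python with the open-source rdflib library – the gene and drug facts below are invented for illustration, not drawn from the L3S systems:

    # A minimal knowledge graph with rdflib (pip install rdflib).
    # All facts are invented for illustration.
    from rdflib import Graph, Namespace

    EX = Namespace("http://example.org/")
    g = Graph()

    # Facts as subject-predicate-object triples.
    g.add((EX.Gefitinib, EX.targets, EX.EGFR))
    g.add((EX.EGFR, EX.mutatedIn, EX.LungCancer))
    g.add((EX.Tamoxifen, EX.targets, EX.ESR1))
    g.add((EX.ESR1, EX.mutatedIn, EX.BreastCancer))

    # A SPARQL query chains the facts: which drug matters for which disease?
    q = """
    PREFIX ex: <http://example.org/>
    SELECT ?drug ?disease WHERE {
      ?drug ex:targets ?gene .
      ?gene ex:mutatedIn ?disease .
    }
    """
    for row in g.query(q):
        print(f"{row.drug} may be relevant for {row.disease}")

Because every answer is derived from explicit triples, the system can always show the path it followed – the transparency Purohit describes below.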

“When an AI system makes a medical decision, it must be able to explain how it arrived there,” says Dr Disha Purohit of L3S. “Only then can we build trust – in science as well as in medicine.”

Making Research Visible

Three L3S projects show how this works in practice:

  • CAIMed analyses medical data, images and genetic information to understand why some therapies work – and others don’t.
  • TrustKG provides the foundation by transforming research data into structured knowledge graphs that document every step and make outcomes verifiable.
  • The Open Research Knowledge Graph (ORKG) represents scientific publications as machine-readable knowledge, enabling studies to be compared, evidence to be traced and connections between disciplines to emerge.
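
The principle behind ORKG can be illustrated with a small Python sketch. The property names and values below are invented; in ORKG, real papers are described by such shared properties, which is what makes studies directly comparable:

    # ORKG-style structured descriptions of two papers.
    # Property names and values are invented for illustration.
    studies = {
        "Study A": {"method": "CNN", "dataset": "chest X-rays", "accuracy": 0.91},
        "Study B": {"method": "transformer", "dataset": "chest X-rays", "accuracy": 0.94},
    }

    # Shared properties make a comparison table fall out of the data
    # instead of requiring a manual literature review.
    properties = sorted({p for s in studies.values() for p in s})
    print("property".ljust(10), *studies)
    for p in properties:
        print(p.ljust(10), *(str(studies[s].get(p, "-")) for s in studies))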

Tangible Impact in Medicine

CAIMed and TrustKG are already delivering concrete results. “For lung and breast cancer, our systems not only predict which patients will respond to a therapy – but also explain why,” Vidal says. The models reveal drug interactions and familial risk patterns that had previously remained hidden.
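
The general pattern can be sketched as follows: a learned score is paired with symbolic rules that turn it into a checkable explanation. This is an illustrative toy, not the CAIMed system; every fact, weight and interaction below is invented:

    # Illustrative neuro-symbolic pattern: a learned score plus symbolic
    # rules that produce a checkable explanation.
    INTERACTS = {("gefitinib", "warfarin")}  # stand-in for knowledge-graph facts

    def predict_response(patient: dict) -> float:
        # Stand-in for a trained model; the weights are made up.
        return 0.8 * patient.get("egfr_mutation", 0) + 0.1

    def explain(patient: dict, therapy: str) -> list[str]:
        reasons = []
        if patient.get("egfr_mutation"):
            reasons.append(f"EGFR mutation present; {therapy} targets EGFR")
        for drug in patient.get("medication", []):
            if (therapy, drug) in INTERACTS or (drug, therapy) in INTERACTS:
                reasons.append(f"warning: {therapy} interacts with {drug}")
        return reasons

    patient = {"egfr_mutation": 1, "medication": ["warfarin"]}
    print("response score:", predict_response(patient))
    print("explanation:", explain(patient, "gefitinib"))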

The potential reaches far beyond oncology: from early detection of dementia to transplant medicine, where AI helps distinguish between donors with resolved and unresolved hepatitis-B infections.

Another module, Medi-AgenAI, aims to make medical language more accessible. It combines large language models, knowledge graphs and symbolic reasoning to translate complex terminology into clear, comprehensible text. “Patients should be able to understand their treatment options – not simply place blind trust in them,” says Vidal.
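
A toy version of the underlying idea: vetted plain-language definitions from a curated source ground the rewriting, so the language model does not have to guess. The glossary entries below are invented, and the model call itself is omitted:

    # Toy grounding step: definitions come from a curated source (in the
    # real system, a medical knowledge graph), not from the model.
    # Glossary entries are invented; the LLM rewriting step is omitted.
    GLOSSARY = {
        "adjuvant therapy": "additional treatment given after the main one",
        "metastasis": "spread of cancer cells to other parts of the body",
    }

    def ground_terms(text: str) -> str:
        """Attach vetted plain-language definitions to known jargon."""
        for term, plain in GLOSSARY.items():
            text = text.replace(term, f"{term} ({plain})")
        return text

    print(ground_terms("We recommend adjuvant therapy to lower the risk of metastasis."))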

NeSyEx: AI for the Entire Research Process

With NeSyEx (Neuro-Symbolic Experimental AI), the team extends this approach to the entire research process – from study design to interpretation. AI thus becomes a kind of scientific assistant. These intelligent agents merge symbolic reasoning with generative AI and operate on structured knowledge graphs and workflow models, promoting transparency, reproducibility and personalisation in research.
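
One ingredient of such an assistant can be sketched in Python: an experiment workflow as explicit, machine-readable steps whose execution leaves a provenance trail. The steps and values are invented for illustration, not taken from NeSyEx:

    # Invented sketch of a machine-readable workflow: every step is an
    # explicit object, and executing it leaves a provenance trail that
    # can be audited or replayed.
    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class Step:
        name: str
        run: Callable[[dict], dict]

    @dataclass
    class Workflow:
        steps: list[Step]
        log: list[str] = field(default_factory=list)

        def execute(self, state: dict) -> dict:
            for step in self.steps:
                state = step.run(state)
                self.log.append(f"{step.name}: {state}")  # provenance record
            return state

    wf = Workflow([
        Step("design", lambda s: {**s, "hypothesis": "drug X improves response"}),
        Step("analyse", lambda s: {**s, "p_value": 0.03}),
        Step("interpret", lambda s: {**s, "significant": s["p_value"] < 0.05}),
    ])
    print(wf.execute({}))
    print(*wf.log, sep="\n")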

From Prediction to Understanding

These developments mark a turning point: AI in science is moving from mere prediction to genuine understanding. Yet challenges remain. Research data are fragmented and inconsistent, and many models still function as black boxes. For AI to be trustworthy, it must follow the FUTURE-AI principles – fairness, universality, traceability, usability, robustness and explainability.

“Future systems must recognise relationships that enhance human judgement – and handle their sources of knowledge transparently,” Vidal emphasises.

A New Era of Science

Today, humans and machines work side by side. AI helps researchers ask better questions and uncover new relationships. By turning data into knowledge – and knowledge into understanding – it strengthens what lies at the heart of science itself: curiosity, the pursuit of insight, and the desire to make the world a better place.

Contact

Prof. Dr. Maria-Esther Vidal

Maria-Esther Vidal is a Professor at Leibniz University Hannover and Head of the Scientific Data Management Group at L3S and TIB – Leibniz Information Centre for Science and Technology. Her research focuses on trustworthy and neuro-symbolic AI for scientific data and knowledge graphs. She is a mentor in CAIMed and leads TrustKG.

Dr. Disha Purohit

Disha Purohit is a Postdoctoral Researcher in CAIMed and TrustKG at L3S and the TIB–L3S Joint Lab. She develops hybrid AI systems that combine learning and reasoning to enhance transparency and explainability in science.