
Issue: 01/2023

Research for Trustworthy Precision Medicine

Patient data and artificial intelligence

Every person is unique. But in medicine, treatments are often the same for everyone. What works well in most cases, however, is not the best solution for everyone: patients with the same disease may react very differently to the same treatment. While some tolerate a therapy well, others suffer severe side effects, or the treatment does not work at all. Personalised medicine promises more precise diagnoses, more individualised therapies and medication tailored to specific patient groups. To achieve this, as much data on as many factors and patients as possible must be taken into account. Artificial intelligence makes it possible to process and analyse these huge amounts of data.

Since the summer of 2020, the Federal Ministry of Education and Research has been funding an international future lab for artificial intelligence at the L3S that focuses on personalised medicine: the Leibniz AI Lab. Top international researchers from Australia, New Zealand, India, Greece, Great Britain and the USA visit the AI Lab in Hannover and conduct research together with colleagues from Leibniz Universität Hannover, Hannover Medical School (MHH) and European partner institutes, working across the disciplines of computer science, bioinformatics, medicine, human genetics and data science.

“The research focuses on new approaches and algorithms for intelligent, reliable and responsible systems,” says Prof. Dr. Wolfgang Nejdl, head of the Leibniz AI Lab. The interdisciplinary research team integrates a variety of approaches relevant to AI: intelligence is enabled by knowledge graphs, deep learning, sensor fusion and scene interpretation, probabilistic methods and information extraction from the web. “Reproducibility and robustness of the methods are just as important as privacy by design. After all, the results of intelligent systems should be explainable, fair and attributable,” says Nejdl.

Trustworthy AI

Artificial intelligence methods have proven extremely successful in many areas, for example machine vision, natural language processing and signal processing. For personalised medicine, however, these methods lack a fundamental ingredient: the link between cause and effect. “The relationships between cause and effect are central to how we humans perceive the world around us, how we act and how we react to changes in our environment,” says Dr. Sandipan Sikdar, junior professor at the Leibniz AI Lab. “Because current AI methods have little understanding of cause-and-effect relationships, they are brittle, cannot be transferred to new domains, merely interpolate between data points and cannot explain their actions to users. At the Leibniz AI Lab, we focus on developing causal AI methods that not only lead to more precise diagnoses, more individualised therapies and medicines, but are also more robust, generalisable and explainable.”
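To make the contrast concrete, the following is a minimal, purely illustrative sketch of one classic causal technique, backdoor adjustment, on invented data. A confounder (here, disease severity) influences both who receives a treatment and the outcome, so a naive comparison of treated and untreated patients is misleading, while stratifying by the confounder recovers the true effect. All variables and numbers are hypothetical and are not taken from the lab's actual methods.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical synthetic cohort: a confounder (disease severity)
# influences both who receives the treatment and the outcome.
severity = rng.binomial(1, 0.5, n)                              # 0 = mild, 1 = severe
treated = rng.binomial(1, np.where(severity == 1, 0.8, 0.2), n)  # severe cases treated more often
recovery = rng.binomial(1, 0.3 + 0.2 * treated - 0.2 * severity, n)  # true treatment effect: +0.2

# Naive comparison: biased, because severe patients are treated more often.
naive = recovery[treated == 1].mean() - recovery[treated == 0].mean()

# Backdoor adjustment: compare within severity strata, then average
# over the severity distribution of the whole cohort.
ate = 0.0
for s in (0, 1):
    stratum = severity == s
    effect = (recovery[stratum & (treated == 1)].mean()
              - recovery[stratum & (treated == 0)].mean())
    ate += effect * stratum.mean()

print(f"naive estimate:    {naive:+.3f}")   # distorted by confounding (about +0.08)
print(f"adjusted estimate: {ate:+.3f}")     # close to the true effect (+0.2)
```

Real causal AI methods go far beyond this toy adjustment, but the example shows why correlation alone can mislead a purely statistical model.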

Especially in the medical field, AI must be trustworthy and explainable. The development of intelligent systems should therefore be built on ethical foundations from the outset. In the Leibniz AI Lab, scientists also address questions such as: What exactly does trustworthy mean, considering that trustworthiness is a concept we confer on people, not on systems? Who is responsible if harm occurs when an AI product is used: the manufacturer, the developers, the clinicians or an individual actor? Whose ethical judgements should be relied upon, and why? “Our work aims to explore these questions in partnership with developers and clinicians with a patient-centred focus,” says Dr. Cameron Pierson, co-coordinator of the Leibniz AI Lab.

The draft EU regulation on AI, for example, draws on the Ethics Guidelines for Trustworthy AI established by the independent High-Level Expert Group (HLEG) on artificial intelligence set up by the European Commission. According to these guidelines, trustworthy AI systems must also be compatible with ethical standards, which requires robust and scalable processes and ethical reviews. Within the framework of the Leibniz AI Lab, a Trustworthy AI Lab was therefore launched, which bundles various L3S projects to research trustworthy AI jointly. One central element is the Z-Inspection® process, which has established itself as a reliable and robust way to evaluate the ethical questions and tensions raised by AI-based systems in the medical field.

Questions on topics such as ethics and causality are also discussed at events: summer schools, workshops and symposia create a space for cross-disciplinary exchange. Further events are planned for 2023, among them a symposium in May on law and ethics in AI and biomedicine and another in September on intersections between AI and medicine.

Use cases for AI

At the Leibniz AI Lab, computer scientists and physicians are jointly focusing on five application areas for personalised medicine, using AI to improve the diagnosis and treatment of diseases:

  • Breast cancer 
  • Acute lymphoblastic leukaemia in children
  • Neurodegenerative diseases such as Parkinson’s
  • Covid-19 
  • Life-threatening situations in paediatric intensive care medicine 

Prof. Dr. Thomas Illig, head of the Hannover Unified Biobank, describes the task of the MHH: “We provide precise and high-quality data sets for the AI applications. On the one hand, this concerns the merging of clinical and personal patient data, some of which come from different institutions and from different points in time. On the other hand, we support data pre-processing to make the multi-layered data sets usable for machine learning.” The biobank also plays an important role in data protection, as it acts as a gatekeeper in the compilation and release of anonymised data sets. In addition, the biobank provides support in the design of the cohorts, the evaluation strategy and the bioinformatic interpretation of the data.
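As an illustration of what such pre-processing can look like, the following sketch merges invented records from two sources on a shared patient identifier and replaces that identifier with a salted hash before release. All table names, columns and values are hypothetical; salted hashing is pseudonymisation rather than full anonymisation, and a real release pipeline would involve considerably more, such as removing or coarsening quasi-identifiers.

```python
import hashlib
import pandas as pd

# Hypothetical records from two institutions, keyed by a shared patient ID.
clinic = pd.DataFrame({
    "patient_id": ["p01", "p02", "p03"],
    "visit_date": pd.to_datetime(["2021-03-01", "2021-04-12", "2021-05-30"]),
    "diagnosis":  ["ALL", "breast cancer", "Parkinson's"],
})
biobank = pd.DataFrame({
    "patient_id":  ["p01", "p02", "p03"],
    "sample_date": pd.to_datetime(["2021-03-02", "2021-04-15", "2021-06-01"]),
    "marker_a":    [1.2, 0.7, 2.4],
})

# Merge clinical and biobank data on the patient identifier.
merged = clinic.merge(biobank, on="patient_id")

# Pseudonymise before release: replace the identifier with a salted hash
# so records stay linkable without exposing the original ID.
SALT = "replace-with-a-secret-salt"  # placeholder, not a real secret
merged["patient_id"] = merged["patient_id"].map(
    lambda pid: hashlib.sha256((SALT + pid).encode()).hexdigest()[:12]
)

print(merged)
```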

In other projects, some of which are associated with the Leibniz AI Lab, L3S scientists are also working together with colleagues from the MHH and other medical research institutions to develop reliable, intelligent and responsible systems for personalised medicine. They are using machine learning methods, for example, to predict the success of cochlear implant surgery, to combat infections on dental implants and to better understand the norovirus.
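As a rough, hypothetical sketch of what such an outcome-prediction model might involve, the following trains a standard classifier on invented tabular patient features. The feature names, labels and data are made up for illustration and do not reflect the actual models used in these projects.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 500

# Invented features: age, duration of deafness (years), residual hearing score.
X = np.column_stack([
    rng.integers(1, 80, n),
    rng.integers(0, 40, n),
    rng.uniform(0, 1, n),
])
# Toy outcome label, loosely tied to the residual hearing score.
y = (X[:, 2] + rng.normal(0, 0.2, n) > 0.5).astype(int)

# Standard pipeline: a random forest evaluated with cross-validated AUC.
model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {scores.mean():.2f} ± {scores.std():.2f}")
```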


Contact

Wolfgang Nejdl

Wolfgang Nejdl is executive director of the L3S and head of the Leibniz AI Lab.

Thomas Illig

L3S member Thomas Illig is head of the Hannover Unified Biobank and a principal investigator of the Leibniz AI Lab.