Image generated with DALL·E
AI in Digital School Education
Feedback from the Language Model
ChatGPT is currently an impressive demonstration of what artificial intelligence can do. But it also shows that AI confronts humanity with new challenges, for example when it generates erroneous or discriminatory texts.
ChatGPT is based on a so-called language model that, from billions of texts, has learned which words are most plausible to say or write in a given context. Such language models are the focus of current research in Natural Language Processing, the machine analysis and generation of natural language.
At the Institute for Artificial Intelligence at Leibniz Universität Hannover, L3S member Prof. Dr. Henning Wachsmuth and his research group are investigating how language models can be designed to play to their strengths in free-form text generation without producing factually or ethically questionable information. “Language models can be trained so that the texts they produce fulfil predefined conditions,” says Wachsmuth. This makes it possible, for example, to ensure that information is conveyed correctly in most cases and, at the same time, in a way that is appropriate for the target group. “In this way, we can support people in their everyday lives, but retain control over what the AI is actually doing in the background,” says Wachsmuth.
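The idea of generation under predefined conditions can be illustrated with a deliberately tiny sketch. The toy "model", the candidate lists, and the length-based condition below are all illustrative assumptions, not the group's actual method; real systems enforce such constraints during neural decoding.

```python
# Minimal sketch (illustrative assumptions, not the group's actual method):
# constrained decoding, where candidate continuations from a toy "language
# model" are filtered so the output satisfies a predefined condition --
# here, a maximum word length as a crude stand-in for target-group-
# appropriate vocabulary.

from typing import Callable, List

# Toy next-word model: for each context word, a ranked list of candidates.
TOY_MODEL = {
    "<start>": ["arguments", "ratiocination", "reasons"],
    "arguments": ["need", "necessitate"],
    "need": ["evidence", "substantiation"],
}

def satisfies(word: str, max_len: int = 9) -> bool:
    """Condition: keep the vocabulary simple (word length as a proxy)."""
    return len(word) <= max_len

def generate(model: dict, condition: Callable[[str], bool], steps: int = 3) -> List[str]:
    context, output = "<start>", []
    for _ in range(steps):
        candidates = model.get(context, [])
        # Keep only candidates that fulfil the predefined condition,
        # then take the highest-ranked one.
        allowed = [w for w in candidates if condition(w)]
        if not allowed:
            break
        output.append(allowed[0])
        context = allowed[0]
    return output

print(" ".join(generate(TOY_MODEL, satisfies)))  # prints "arguments need evidence"
```

The point of the sketch is the separation of concerns: the model proposes, the condition disposes, so the system retains control over what the generator actually emits.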
One example is the interdisciplinary ArgSchool project, which the German Research Foundation has been funding since the end of 2021. At L3S, Maja Stahl, supervised by Wachsmuth and by Sara Rezat, professor of language didactics at the University of Paderborn, is investigating how artificial intelligence can help students learn argumentative writing. “Argumentative writing is a central component of school education and requires contemporary and thoughtful forms of learning that are adapted to the challenges of this task,” Wachsmuth explains.
At the University of Paderborn, Rezat’s colleagues first manually annotated around 1,200 argumentative texts written by fifth- and ninth-grade students; on this basis, specialised language models are now being trained at L3S. “The goal of our research is to develop AI procedures that can automatically analyse argumentative texts and give students feedback on successful aspects and those that need improvement,” says Stahl. The procedures assess each student’s developmental level and take it into account in order to individualise the feedback and avoid overtaxing anyone.
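What "analysing an argumentative text and giving feedback" means can be made concrete with a rule-based sketch. The cue words and feedback messages below are illustrative assumptions; the actual ArgSchool procedures are trained language models, not keyword rules.

```python
# Hypothetical sketch of feedback on argumentative structure. The labels,
# cue words, and messages are illustrative assumptions -- the project's
# real analysers are specialised trained language models.

import re

# Assumed surface cues for two argumentative components.
CUES = {
    "claim":   ("i think", "i believe", "in my opinion", "should"),
    "premise": ("because", "since", "for example", "therefore"),
}

def label_sentence(sentence: str) -> str:
    """Label a sentence by the first matching cue, else 'other'."""
    s = sentence.lower()
    for label, cues in CUES.items():
        if any(cue in s for cue in cues):
            return label
    return "other"

def feedback(text: str) -> list:
    """Return simple feedback notes on a student's argumentative text."""
    sentences = [s.strip() for s in re.split(r"[.!?]", text) if s.strip()]
    labels = [label_sentence(s) for s in sentences]
    notes = []
    if "claim" not in labels:
        notes.append("State your own position clearly.")
    if "premise" not in labels:
        notes.append("Support your position with reasons or examples.")
    return notes or ["Claim and support found - well done."]

essay = "I think school should start later. Because teenagers need more sleep."
print(feedback(essay))  # prints ['Claim and support found - well done.']
```

A trained model replaces the cue lists with learned patterns, but the feedback loop, analysing components and reporting what is present and what is missing, has the same shape.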
The project also focuses on one quality feature of argumentative texts in particular: the critical examination of counter-positions, with which many students struggle. Automatically generated feedback can point out missing counter-positions and suggest text passages where they could be added. In this way, students are encouraged to consider other points of view and to question their own opinions critically.
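A minimal sketch of this kind of feedback might look as follows. The marker list and the placement heuristic are illustrative assumptions; the project detects counter-positions with trained models rather than fixed markers.

```python
# Illustrative sketch (assumed marker list and placement heuristic, not
# the project's trained model): flag the absence of counter-position
# markers in an essay and suggest where one could be inserted.

import re

COUNTER_MARKERS = ("however", "on the other hand", "admittedly",
                   "critics argue", "one could object")

def check_counter_position(text: str) -> dict:
    """Report whether a counter-position is present; if not, suggest
    inserting one before the final (concluding) sentence."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    for i, s in enumerate(sentences):
        if any(m in s.lower() for m in COUNTER_MARKERS):
            return {"has_counter": True, "sentence_index": i}
    return {"has_counter": False,
            "suggested_position": max(len(sentences) - 1, 0)}

essay = ("School should start later. Teenagers need sleep. "
         "That is why I support it.")
print(check_counter_position(essay))
# prints {'has_counter': False, 'suggested_position': 2}
```

Feedback of this form does not correct the student; it points to a gap and a place to fill it, leaving the actual counter-argument to the writer.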
Despite all the possibilities offered by individualised feedback, the scientists are also aware of the limitations of AI, especially when it is used in school education. “We can measure and optimise the quality of the underlying language models,” says Stahl. “But since language models transfer statistical patterns from known contexts to new ones, they will never work completely error-free. It is therefore also important that students learn to question the feedback.” Wachsmuth sees this as a general challenge of our time: “AI holds great potential, but also great risks. The deeper AI penetrates our society, the more important it is to train everyone in how to deal with it. Not least, ChatGPT demonstrates this.”
L3S member Henning Wachsmuth heads the Natural Language Processing department at the Institute for Artificial Intelligence at Leibniz Universität Hannover.
Maja Stahl is a PhD student and research assistant at L3S and in the Natural Language Processing department at LUH.