Agentic and Embodied AI
AI with a Body
Artificial intelligence writes texts, recognises images, and navigates streets. But many everyday tasks require more than that – machines that can orient themselves in their environment, grasp objects, carry loads, or perform repairs. The L3S research focus “Agentic and Embodied AI” addresses precisely these systems: AI that acts autonomously (Agentic AI) and physically interacts with the world (Embodied AI).
Agentic AI – Acting Intelligence
Agentic AI refers to learning systems that solve complex tasks step by step while adapting flexibly to current conditions. They learn from feedback on their actions – a reward signal, such as winning a game of chess – and in some domains, such as board games, they have achieved superhuman results.
The most common method is Reinforcement Learning (RL): AI is trained through simulated interactions before tackling real-world tasks, such as controlling a robot. However, one challenge remains: many models only work in the environment where they were trained. With CARL, a test environment developed at L3S, researchers can now systematically assess how well such systems respond to new situations – and how to improve their transferability. “We were able not only to identify when performance drops, for instance with reduced ground friction, but also to show that structured integration of environmental variables can counteract this,” says Dr. Theresa Eimer, research group leader at L3S.
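To illustrate the kind of question CARL addresses, here is a minimal Python sketch using the generic Gymnasium API rather than CARL itself: the same policy is evaluated in copies of an environment whose physical context – here gravity instead of ground friction – is varied, making visible how far performance carries over beyond the training setting. All names and values are illustrative.

```python
# Minimal sketch (not CARL's API): evaluate one policy across environments
# whose physical context (here: gravity) is varied, to see how well
# performance transfers beyond the training setting.
import gymnasium as gym
import numpy as np

def evaluate(policy, g, episodes=5):
    """Average return of `policy` in Pendulum-v1 with gravity g."""
    env = gym.make("Pendulum-v1", g=g)
    returns = []
    for _ in range(episodes):
        obs, _ = env.reset(seed=0)
        total, done = 0.0, False
        while not done:
            action = policy(obs)
            obs, reward, terminated, truncated, _ = env.step(action)
            total += reward
            done = terminated or truncated
        returns.append(total)
    env.close()
    return np.mean(returns)

# Placeholder policy; in practice this would be a trained RL agent.
def zero_torque_policy(obs):
    return np.array([0.0], dtype=np.float32)

for gravity in (5.0, 10.0, 15.0):   # shifted physical contexts
    print(f"g={gravity:4.1f}  mean return={evaluate(zero_torque_policy, gravity):8.1f}")
```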
Eimer and her team also aim to simplify the application of RL algorithms: Automated Reinforcement Learning (AutoRL) is designed to determine the best training settings for novel tasks on its own – a crucial step towards making RL practical for complex applications such as robotics. Adaptive systems can also continuously optimise their models and parameters during training. One example is GrowNN (Growing Neural Networks in Deep Reinforcement Learning): it starts with a small model and automatically expands its capacity as tasks become more complex.
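The core idea behind such growing networks can be sketched as follows – an illustrative example, not the actual GrowNN code: a hidden layer is widened during training in a way that preserves the function the network already computes, so learning can continue from the same behaviour with more capacity.

```python
# Sketch of the growing-network idea (not the GrowNN implementation):
# widen a hidden layer mid-training while keeping the function unchanged.
import torch
import torch.nn as nn

def widen_hidden(old_net: nn.Sequential, extra_units: int) -> nn.Sequential:
    """Expects Sequential(Linear, ReLU, Linear); returns a wider copy."""
    fc1, act, fc2 = old_net[0], old_net[1], old_net[2]
    new_fc1 = nn.Linear(fc1.in_features, fc1.out_features + extra_units)
    new_fc2 = nn.Linear(fc1.out_features + extra_units, fc2.out_features)
    with torch.no_grad():
        # Copy old weights; new hidden units get random input weights but
        # zero output weights, so the network's output stays the same.
        new_fc1.weight[: fc1.out_features] = fc1.weight
        new_fc1.bias[: fc1.out_features] = fc1.bias
        new_fc2.weight.zero_()
        new_fc2.weight[:, : fc1.out_features] = fc2.weight
        new_fc2.bias.copy_(fc2.bias)
    return nn.Sequential(new_fc1, act, new_fc2)

net = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
x = torch.randn(3, 4)
before = net(x)
net = widen_hidden(net, extra_units=16)     # grow when the task gets harder
assert torch.allclose(before, net(x), atol=1e-6)
```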
Embodied AI – Giving AI a Body
Embodied AI describes AI systems that interact with the real world via sensors and actuators. Often these involve digital twins, virtual sensors, or other estimation systems running on physical platforms. “Modern methods such as FranSys enable this quickly and accurately based on system measurement data – whether for a robotic arm or a drone,” explains Dr.-Ing. Daniel Weber, co-developer of FranSys (Fast recurrent neural network-based method for multi-step-ahead nonautoregressive System Identification).
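The general principle of such multi-step-ahead prediction can be sketched in a few lines – this is not the FranSys implementation itself: a recurrent network encodes past input/output measurements of a system and then predicts an entire horizon of future outputs in a single pass from the planned future inputs. Network sizes and tensor shapes are illustrative.

```python
# Sketch of non-autoregressive multi-step-ahead prediction with a recurrent
# network (not FranSys itself): encode past measurements with a GRU, then
# predict H future outputs in one pass from the planned future inputs.
import torch
import torch.nn as nn

class MultiStepRNN(nn.Module):
    def __init__(self, n_inputs, n_outputs, hidden=64):
        super().__init__()
        self.encoder = nn.GRU(n_inputs + n_outputs, hidden, batch_first=True)
        self.decoder = nn.GRU(n_inputs, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_outputs)

    def forward(self, past_u, past_y, future_u):
        # past_u/past_y: measured inputs/outputs, shape (B, T, ·)
        # future_u:      planned inputs over the horizon, shape (B, H, n_inputs)
        _, state = self.encoder(torch.cat([past_u, past_y], dim=-1))
        dec_out, _ = self.decoder(future_u, state)   # all H steps at once
        return self.head(dec_out)                    # (B, H, n_outputs)

model = MultiStepRNN(n_inputs=1, n_outputs=1)
past_u, past_y = torch.randn(8, 50, 1), torch.randn(8, 50, 1)
future_u, future_y = torch.randn(8, 10, 1), torch.randn(8, 10, 1)
loss = nn.functional.mse_loss(model(past_u, past_y, future_u), future_y)
loss.backward()   # trained end-to-end on recorded system measurements
```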
A well-known weakness of AI models is their limited transferability. Typically, a separate model must be trained for each new system. With RIANN (Robust IMU-based Attitude Neural Network), Weber and his co-authors have shown for the first time that a generalised model can reliably estimate spatial orientation across different systems – outperforming specialised approaches.
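The task RIANN solves can be sketched like this – an illustrative model, not RIANN’s actual architecture: a recurrent network maps raw gyroscope and accelerometer streams to a unit quaternion describing the sensor’s orientation at every time step.

```python
# Sketch of IMU-based attitude estimation (not the RIANN model): a recurrent
# network maps gyroscope + accelerometer streams to a unit quaternion per step.
import torch
import torch.nn as nn

class AttitudeEstimator(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(input_size=6, hidden_size=hidden, batch_first=True)
        self.to_quat = nn.Linear(hidden, 4)

    def forward(self, imu):               # imu: (B, T, 6) = gyro (3) + accel (3)
        features, _ = self.rnn(imu)
        quat = self.to_quat(features)
        # Normalise so each output is a valid unit quaternion
        return quat / quat.norm(dim=-1, keepdim=True).clamp_min(1e-8)

estimator = AttitudeEstimator()
imu_batch = torch.randn(4, 200, 6)        # e.g. 2 s of data at 100 Hz
orientation = estimator(imu_batch)        # (4, 200, 4)
```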
Going even further, RING (Recurrent Inertial Graph-based Estimator) is trained entirely in simulation and then works as a simple plug-and-play solution. Another way to make models more robust is to embed physical knowledge directly, for example by incorporating differential equations into PINNs (Physics-Informed Neural Networks). This significantly reduces the need for training data when precise model knowledge is available.
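A minimal PINN sketch, assuming a damped oscillator as the known physics: the residual of the differential equation is added to the training loss, so the network is pulled towards physically consistent solutions and needs far less measured data. The equation, coefficients, and network size here are purely illustrative.

```python
# Minimal PINN sketch: embed a known differential equation into the loss.
# Illustrative physics: damped oscillator x'' + 2*zeta*omega*x' + omega^2*x = 0
# with x(0) = 1 and x'(0) = 0.
import torch
import torch.nn as nn

omega, zeta = 2.0, 0.1                      # illustrative oscillator parameters
net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    # Collocation points where the differential equation must hold
    t = (torch.rand(256, 1) * 10.0).requires_grad_(True)
    x = net(t)
    dx = torch.autograd.grad(x, t, torch.ones_like(x), create_graph=True)[0]
    ddx = torch.autograd.grad(dx, t, torch.ones_like(dx), create_graph=True)[0]
    physics = ((ddx + 2 * zeta * omega * dx + omega**2 * x) ** 2).mean()

    # Initial conditions are enforced through the loss as well
    t0 = torch.zeros(1, 1, requires_grad=True)
    x0 = net(t0)
    dx0 = torch.autograd.grad(x0, t0, torch.ones_like(x0), create_graph=True)[0]
    loss = physics + ((x0 - 1.0) ** 2).mean() + (dx0 ** 2).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```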
Agentic Embodied AI – When AI Truly Acts
Combining Agentic and Embodied AI opens up entirely new possibilities: systems that interact intelligently with the physical world. A promising approach is Model Predictive Control (MPC), in which a robot plans its actions several steps ahead. This method, however, requires extremely fast simulations. An elegant solution: PINNs for MPC. These AI models have already internalised the system’s physics – enabling lightning-fast predictive control.
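The combination can be sketched as a simple sampling-based MPC loop in which a learned model – for example a PINN – serves as the fast surrogate simulator: the controller rolls out candidate action sequences, scores them, and applies only the first action of the best one before replanning. The toy dynamics and cost below are placeholders, not a real system or the methods described above.

```python
# Sketch of sampling-based MPC with a learned dynamics model (e.g. a PINN)
# as a fast surrogate simulator: roll out candidate action sequences, score
# them, and apply only the first action of the best one.
import torch

def mpc_action(model, cost_fn, state, horizon=15, candidates=256, action_dim=1):
    """state: (state_dim,) tensor; model(states, actions) -> next states (batched)."""
    states = state.expand(candidates, -1)
    actions = torch.rand(candidates, horizon, action_dim) * 2 - 1   # in [-1, 1]
    total_cost = torch.zeros(candidates)
    with torch.no_grad():
        for t in range(horizon):
            states = model(states, actions[:, t])    # fast surrogate step
            total_cost += cost_fn(states, actions[:, t])
    best = total_cost.argmin()
    return actions[best, 0]    # apply first action, then replan (receding horizon)

# Hypothetical stand-ins for a learned model and a task cost:
def toy_model(states, actions):
    return states + 0.1 * actions                     # placeholder dynamics

def toy_cost(states, actions):
    return (states**2).sum(-1) + 0.01 * (actions**2).sum(-1)

action = mpc_action(toy_model, toy_cost, state=torch.tensor([1.0]))
```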
Contact
Dr. Theresa Eimer
Postdoctoral researcher at L3S and the Institute for Artificial Intelligence at Leibniz University Hannover, focusing on automated reinforcement learning (AutoRL).
Dr.-Ing. Daniel Oliver Martin Weber
Postdoctoral researcher at L3S and the Institute for Mechatronic Systems at Leibniz University Hannover, specialising in modelling dynamic systems with neural networks.