Learning in Control

Algorithms from artificial intelligence and machine learning are of increasing importance for control applications. Our research and expertise in this domain ranges from the modeling of unknown or uncertain dynamics, through iterative and reinforcement learning, to Bayesian optimization.


Contact

Prof. Dr.-Ing. Knut Graichen
Tel.: +49 9131 85-27127



One research focus at the Chair is on learning in model design and identification. Hybrid and data-driven models are attractive when physical modeling is inaccurate or requires high effort. Practical applications often call for an online adaptation of these models in order to capture aging and wear effects or to increase the model accuracy in different operating regimes. Information about the reliability and trustworthiness of a learned model can be used directly within the control design. For instance, the predicted uncertainty allows constraints to be satisfied with a given probability; a minimal sketch of this idea is given below. A challenge with learning-based methods is to ensure real-time feasibility on potentially limited hardware in order to bring these advanced learning-based control methods into practice.

Embedded learning of combustion models
Stochastic MPC considers obstacle uncertainty for collision avoidance
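
The following sketch illustrates, under simplified and purely hypothetical assumptions, how the predictive uncertainty of a learned model can be used to satisfy a constraint with a prescribed probability: a Gaussian process is fitted to data from an unknown static map, and an output constraint is tightened by an uncertainty-dependent back-off. All data, parameters, and names are illustrative and do not stem from the projects described on this page.

# Illustrative sketch only: Gaussian process surrogate whose predictive
# uncertainty is used to check an output constraint with probability 1 - eps.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Training data from an unknown process y = g(u) + noise (hypothetical).
rng = np.random.default_rng(0)
u_train = rng.uniform(0.0, 1.0, size=(30, 1))
y_train = np.sin(3.0 * u_train[:, 0]) + 0.05 * rng.standard_normal(30)

# Learned surrogate model with an explicit noise term in the kernel.
gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=0.2) + WhiteKernel(noise_level=1e-3),
    normalize_y=True,
)
gp.fit(u_train, y_train)

# Chance constraint: Pr(y(u) <= y_max) >= 1 - eps.
y_max, eps = 0.8, 0.05
beta = norm.ppf(1.0 - eps)  # quantile back-off factor

u_cand = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
mean, std = gp.predict(u_cand, return_std=True)

# An input is admissible if even the pessimistic prediction mean + beta*std
# stays below the constraint bound.
admissible = u_cand[mean + beta * std <= y_max, 0]
print(f"{admissible.size} of {u_cand.size} candidate inputs are admissible")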

Another field of research and expertise is learning in optimization and control, for instance reinforcement learning and Bayesian optimization. Reinforcement learning aims at obtaining an optimal control strategy from repeated interactions with the system. Formulating this task as an optimization problem reveals the conceptual similarity to model predictive control, with the difference that reinforcement learning does not require model knowledge of the system. In a similar spirit, Bayesian optimization allows complex optimization problems to be solved, in particular if the cost function or constraints are not known analytically or can only be evaluated by costly numerical simulations. Many technical tasks, such as the optimization of production processes, optimal product design, or the search for optimal controller setpoints, can be formulated as (partially) unknown optimization problems, illustrating the generality and importance of Bayesian optimization.
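
As a purely illustrative sketch of this principle (not code from the projects on this page; system, rewards, and parameters are hypothetical), the following tabular Q-learning example improves a control strategy solely from observed transitions and rewards, without using a model of the dynamics.

# Illustrative sketch only: tabular Q-learning, where the controller improves
# purely from interaction data, without a model of the system dynamics.
import numpy as np

n_states, n_actions = 10, 2          # toy discretized state and input spaces
goal = n_states - 1
alpha, gamma, epsilon = 0.1, 0.95, 0.1

def step(s, a):
    """Unknown system, only accessible through interaction (hypothetical)."""
    s_next = min(max(s + (1 if a == 1 else -1), 0), n_states - 1)
    reward = 1.0 if s_next == goal else -0.01
    return s_next, reward

rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))

for episode in range(500):
    s = 0
    for _ in range(50):
        # epsilon-greedy exploration
        a = int(rng.integers(n_actions)) if rng.random() < epsilon else int(np.argmax(Q[s]))
        s_next, r = step(s, a)
        # temporal-difference update towards the optimal value function
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next
        if s == goal:
            break

policy = np.argmax(Q, axis=1)        # learned control strategy
print("learned policy:", policy)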

Reinforcement learning for a hydraulic clutch
Bayesian optimization subject to an unknown equality constraint
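
The following sketch illustrates the basic Bayesian optimization loop under simplified assumptions: a Gaussian process surrogate is fitted to a few evaluations of a cost function that can only be queried point-wise (standing in for a costly simulation or experiment), and the next candidate is selected by the expected-improvement criterion. The objective and all parameters are hypothetical and only serve to illustrate the concept.

# Illustrative sketch only: Bayesian optimization of a point-wise evaluable
# cost function using a Gaussian process surrogate and expected improvement.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_cost(theta):
    """Placeholder for an unknown or expensive objective (hypothetical)."""
    return (theta - 0.3) ** 2 + 0.05 * np.sin(15.0 * theta)

rng = np.random.default_rng(1)
theta_grid = np.linspace(0.0, 1.0, 500).reshape(-1, 1)

# Initial design: a few evaluated controller parameters.
thetas = rng.uniform(0.0, 1.0, size=(3, 1))
costs = np.array([expensive_cost(t[0]) for t in thetas])

for it in range(15):
    # Fit the surrogate model to all evaluations collected so far.
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6, normalize_y=True)
    gp.fit(thetas, costs)

    mean, std = gp.predict(theta_grid, return_std=True)
    best = costs.min()

    # Expected improvement (minimization form) as acquisition function.
    std = np.maximum(std, 1e-9)
    z = (best - mean) / std
    ei = (best - mean) * norm.cdf(z) + std * norm.pdf(z)

    # Evaluate the most promising candidate and augment the data set.
    theta_next = theta_grid[np.argmax(ei)]
    thetas = np.vstack([thetas, theta_next])
    costs = np.append(costs, expensive_cost(theta_next[0]))

print("best parameter found:", thetas[np.argmin(costs), 0])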

Related projects

AGENT-2: Predictive and learning control methods

To achieve climate targets, CO2 emissions in the building sector have to be reduced significantly. However, the integration of renewable energy sources increases the complexity of building energy systems and thus the requirements for the operation strategy. Model-based and predictive controllers are necessary for efficient operation, but due to the high complexity of the energy systems, their development, implementation, and commissioning are very involved and lead to high costs, which is why…


AUTOtech.agil: Robust Planning and Control using Probabilistic Methods

Anomaly detection and intelligent recalibration of sensorized systems

AI-supported modeling for increasing control performance

Kinesthetic teaching and predictive control of interaction tasks in robotics

Precise interactions as part of industrial manufacturing tasks are typically very complex to characterize and implement. One reason for this is the heterogeneity of the task-specific requirements for the motion and control behavior. A direct implementation of the task in a robot program therefore requires highly qualified specialists and is only profitable for large lot sizes. For flexible applicability and easy (re-)configuration of the robot system, an approach to programming by kinesthetic…


Thermal modeling of power converters for electric drive systems

Robust Reinforcement Learning for Thermal Management Control

KARMA: Development of an innovative camera-based framework for collision-free human-machine movement

