Computational Intelligence in Uncertainty Quantification for Learning Control and Differential Games
Jan 1, 2021
Mushuang Liu
Yan Wan
Zongli Lin
Frank L. Lewis
Junfei Xie
Brian A. Jalaian
Abstract
Multi-dimensional uncertainties often modulate modern system dynamics in complicated ways, posing challenges for real-time control because of the significant computation load required to evaluate them within real-time decision processes. This chapter describes the use of computationally effective uncertainty evaluation methods for adaptive optimal control, including learning control and differential games. Two uncertainty evaluation methods are described: the multivariate probabilistic collocation method (MPCM) and its extension, the MPCM-OFFD, which integrates the MPCM with the orthogonal fractional factorial design (OFFD) to break the curse of dimensionality. These scalable uncertainty evaluation methods are then developed for reinforcement learning (RL)-based adaptive optimal control. Stochastic differential games, including two-player zero-sum and multi-player nonzero-sum games, are formulated and investigated. Nash equilibrium solutions for these games are found in real time using MPCM-based on-policy/off-policy RL methods. Real-world applications to broadband long-distance aerial networking and strategic air traffic management demonstrate the practical use of MPCM- and MPCM-OFFD-based learning control for uncertain systems.
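To illustrate the general probabilistic-collocation idea the abstract refers to, below is a minimal sketch that estimates the expectation of a system output over Gaussian uncertain parameters from a small set of weighted collocation points (Gauss-Hermite nodes) rather than Monte Carlo sampling. The function `system_output`, the parameter dimensions, and the tensor-grid construction are illustrative assumptions, not the chapter's actual MPCM or MPCM-OFFD design, which selects points more economically to avoid the curse of dimensionality.

```python
# Sketch of collocation-based uncertainty evaluation (illustrative only).
import itertools
import numpy as np


def system_output(theta):
    # Hypothetical nonlinear mapping from uncertain parameters to a scalar cost.
    return np.sin(theta[0]) + 0.5 * theta[0] * theta[1] ** 2


def collocation_expectation(func, means, stds, points_per_dim=3):
    """Estimate E[func(theta)] for independent Gaussian parameters
    using a tensor grid of Gauss-Hermite collocation points."""
    nodes, weights = np.polynomial.hermite.hermgauss(points_per_dim)
    # Rescale nodes/weights from the exp(-x^2) weight to the standard normal.
    nodes = np.sqrt(2.0) * nodes
    weights = weights / np.sqrt(np.pi)

    estimate = 0.0
    dims = range(len(means))
    for idx in itertools.product(range(points_per_dim), repeat=len(means)):
        theta = np.array([means[d] + stds[d] * nodes[idx[d]] for d in dims])
        w = np.prod([weights[idx[d]] for d in dims])
        estimate += w * func(theta)
    return estimate


if __name__ == "__main__":
    means, stds = [0.2, 1.0], [0.3, 0.5]
    print("Collocation estimate:", collocation_expectation(system_output, means, stds))
    # Monte Carlo check, needing far more evaluations for comparable accuracy.
    rng = np.random.default_rng(0)
    samples = rng.normal(means, stds, size=(200_000, 2))
    print("Monte Carlo estimate:", np.mean([system_output(s) for s in samples]))
```

In a learning-control loop, such an evaluator would replace the expensive sampling step when estimating expected costs or value-function targets over the uncertain parameters; the MPCM-OFFD further reduces the number of required points when the uncertainty is high-dimensional.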
Type
Publication
Handbook of Reinforcement Learning and Control