* Zoom-ID: 781 783 9572
In this talk, we introduce an entropy-regularized optimal control problem for deterministic control systems. We derive the dynamic programming principle and the corresponding Hamilton-Jacobi-Bellman (HJB) equation, which is a regularized version of the HJB equation of the classical optimal control problem. After deriving the HJB equation, we establish several of its mathematical properties, including asymptotic convergence. We also provide an explicit example of a control-affine problem, in which the optimal control is given by a normal distribution. Finally, we apply the maximum entropy optimal control framework to several numerical examples, illustrating its benefits.
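As a rough sketch of the setup described above (the notation is assumed, not taken from the talk): in the standard maximum entropy formulation, the control is randomized, and the running cost is offset by the differential entropy of the policy, weighted by a regularization parameter \(\lambda > 0\).

```latex
% Deterministic dynamics with a randomized control policy:
%   \dot{x}_t = f(x_t, u_t),  u_t \sim \pi(\cdot \mid x_t).
% Entropy-regularized cost (notation assumed for illustration):
J_\lambda(\pi) = \mathbb{E}\!\left[ \int_0^T \Big( \ell(x_t, u_t)
    - \lambda \, \mathcal{H}\big(\pi(\cdot \mid x_t)\big) \Big)\, dt
    + g(x_T) \right],
\qquad
\mathcal{H}(\pi) = -\int \pi(u) \log \pi(u) \, du .
```

In formulations of this type, letting \(\lambda \to 0\) recovers the classical optimal control problem, which is the sense in which an asymptotic convergence result can be expected; for control-affine dynamics with a cost quadratic in \(u\), the entropy term typically makes the pointwise minimizer a Gaussian density, consistent with the normal-distribution example mentioned above.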