optimal control


Quick Reference

A method of solving a dynamic optimization problem formulated in continuous time. Mathematically, it is the solution of a set of differential equations describing the paths of the control variables that minimize the cost functional associated with the control policy. The solution is derived using Pontryagin's maximum principle (which provides necessary conditions) or by solving the Hamilton–Jacobi–Bellman equation (which provides a sufficient condition).
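As a minimal sketch of the formulation behind this definition, assuming a finite horizon with a running cost and a terminal cost (the symbols x for the state, u for the control, L, Φ, f, and T are generic textbook notation, not taken from this entry), the problem reads:

\[
\min_{u(\cdot)} \; J[u] = \int_0^T L\bigl(x(t), u(t), t\bigr)\, dt + \Phi\bigl(x(T)\bigr)
\quad \text{subject to} \quad \dot{x}(t) = f\bigl(x(t), u(t), t\bigr), \quad x(0) = x_0 .
\]

Defining the Hamiltonian \(H(x, u, \lambda, t) = L(x, u, t) + \lambda^{\top} f(x, u, t)\) with costate \(\lambda\), Pontryagin's principle (stated here in minimum form, since the cost is minimized) requires along an optimal path

\[
\dot{x} = \frac{\partial H}{\partial \lambda}, \qquad
\dot{\lambda} = -\frac{\partial H}{\partial x}, \qquad
u^{*}(t) = \arg\min_{u} H\bigl(x(t), u, \lambda(t), t\bigr), \qquad
\lambda(T) = \frac{\partial \Phi}{\partial x}\bigl(x(T)\bigr),
\]

while the Hamilton–Jacobi–Bellman equation characterizes the value function \(V(x, t)\), the minimal cost-to-go from state x at time t:

\[
-\frac{\partial V}{\partial t} = \min_{u} \Bigl[ L(x, u, t) + \frac{\partial V}{\partial x}\, f(x, u, t) \Bigr],
\qquad V(x, T) = \Phi(x).
\]

A control attaining the minimum in the HJB equation at every state and time is optimal, which is why the HJB characterization is sufficient, whereas Pontryagin's conditions are necessary and may admit non-optimal candidate paths.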

Subjects: Economics.

