Optimal Control
Article REF: AF1374 V1


Author : J. Frédéric BONNANS

Publication date: April 10, 2015

Overview

ABSTRACT

Optimal control theory analyzes how to optimize dynamical systems under various criteria: reaching a target in minimal time or with minimal energy, maximizing the efficiency of an industrial process, etc. This involves the optimization both of time-independent parameters and of control variables that are functions of time. This article analyzes the first- and second-order optimality conditions, and how to solve them by time discretization, the shooting algorithm, or dynamic programming.


AUTHOR

  • J. Frédéric BONNANS: INRIA Research Director - INRIA and Center for Applied Mathematics, Ecole Polytechnique, Palaiseau

 INTRODUCTION

A dynamical system is said to be controlled if it can be acted upon through time-dependent variables, known as controls. Let us illustrate this concept with a spacecraft, described by position and velocity variables h and V (in ℝ³) and a mass m > 0, i.e. 7 state variables. Omitting the time argument, the dynamics are ḣ = V, m V̇ = F(h, V) + u and ṁ = −c |u|. Here c is a positive constant and F(h, V) gathers the forces of gravity and, where applicable, aerodynamics. The control is the applied force u, whose Euclidean norm |u| is subject to a constraint of the type |u| ≤ U. Given a fixed initial point, we seek to reach a target (a subset of the state space) while minimizing a compromise between travel time and energy expended.
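The spacecraft model above can be simulated directly once F and u are specified. The sketch below integrates the dynamics with forward Euler; taking F to be uniform gravity only (no aerodynamics) and u a constant maximal vertical thrust are illustrative assumptions, not choices made in the article.

```python
import numpy as np

# Illustrative constants (assumptions, not values from the article).
g = 9.81      # gravity (m/s^2)
c = 1e-4      # fuel consumption rate (kg per N·s)
U = 2e4       # bound on the thrust norm: |u| <= U  (N)

def F(h, V, m):
    """External force F(h, V): uniform gravity only (assumption)."""
    return np.array([0.0, 0.0, -g * m])

def step(h, V, m, u, dt):
    """One forward-Euler step of h' = V, m V' = F + u, m' = -c|u|."""
    u = np.asarray(u, dtype=float)
    assert np.linalg.norm(u) <= U + 1e-9, "control violates |u| <= U"
    h_new = h + dt * V
    V_new = V + dt * (F(h, V, m) + u) / m
    m_new = m - dt * c * np.linalg.norm(u)   # mass decreases with thrust
    return h_new, V_new, m_new

# Example: one second of vertical ascent under constant maximal thrust.
h, V, m = np.zeros(3), np.zeros(3), 1000.0
for _ in range(100):
    h, V, m = step(h, V, m, [0.0, 0.0, U], dt=0.01)
```

With these numbers the thrust (2·10⁴ N on a 10³ kg craft) dominates gravity, so the altitude h₃ increases while the mass decreases by c·U per second of full thrust.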

For a real-time implementation of a control system, one must take into account the means of observation and the reconstruction of the state, as well as signal-processing aspects and the choice of control electronics. In this article, by contrast, we consider only the upstream study: a deterministic framework in which an optimal control is computed off-line. The shape of this control can then guide the design of the real-time controller.

The presentation first follows the approach of Lagrange and Pontryagin, which studies the variations of an optimal trajectory in order to determine its properties. First- and second-order optimality conditions are then analyzed, in connection with the shooting algorithm, with particular attention...
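The shooting algorithm mentioned above reduces the optimality conditions to a finite-dimensional root-finding problem in the unknown initial costate. The sketch below applies it to a textbook problem (an assumption, not the article's example): steering the double integrator x' = v, v' = u from (0, 0) to (1, 0) on [0, 1] while minimizing (1/2) ∫ u² dt. Pontryagin's principle gives costate equations p₁' = 0, p₂' = −p₁ and the optimal control u = −p₂, so the unknown is z = (p₁(0), p₂(0)).

```python
import numpy as np

def integrate(z0, n=200):
    """RK4 integration of the coupled state-costate system y = (x, v, p1, p2)."""
    def f(y):
        x, v, p1, p2 = y
        return np.array([v, -p2, 0.0, -p1])   # optimal control u = -p2
    y = np.array([0.0, 0.0, z0[0], z0[1]])
    dt = 1.0 / n
    for _ in range(n):
        k1 = f(y); k2 = f(y + dt / 2 * k1)
        k3 = f(y + dt / 2 * k2); k4 = f(y + dt * k3)
        y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

def residual(z):
    """Shooting function: mismatch with the target (x(1), v(1)) = (1, 0)."""
    y = integrate(z)
    return np.array([y[0] - 1.0, y[1]])

# Newton's method on the shooting function, with a finite-difference
# Jacobian. The problem is linear-quadratic, so the shooting function is
# affine in z and Newton essentially converges in one step.
z = np.zeros(2)
for _ in range(5):
    r = residual(z)
    J = np.column_stack([(residual(z + 1e-6 * e) - r) / 1e-6
                         for e in np.eye(2)])
    z = z - np.linalg.solve(J, r)
```

For this problem the analytic optimal control is u(t) = 6 − 12t, which corresponds to the initial costate (p₁(0), p₂(0)) = (−12, −6); the Newton iteration recovers it to high accuracy.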

KEYWORDS

dynamical systems   |   path following   |   minimal time   |   shooting algorithm   |   dynamic programming
