Overview
ABSTRACT
Differentiable optimization problems arise when one seeks the optimal values of a finite number of parameters, optimality meaning the minimality of a given criterion. This article describes the principal algorithms for solving these problems and explains their motivation. Such problems arise in many sectors of engineering, as well as in science and economics. They are sometimes posed in infinite dimension; in that case, one seeks an optimal function. Current numerical methods of optimization stem from successive waves of advances, each enriching the previous ones.
AUTHOR
- Jean Charles GILBERT: Director of Research at INRIA (French National Institute for Research in Computer Science and Control)
INTRODUCTION
This well-reasoned synthesis describes the main algorithms for solving differentiable optimization problems and explains their motivation. These problems arise when the optimal values of a finite number of parameters are to be determined. Optimality here means the minimality of a given criterion. The assumed differentiability of the functions defining the problem immediately rules out combinatorial optimization, in which the parameters to be optimized take only integer or discrete values (see the "Optimization in integers" dossier).
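To make this finite-dimensional setting concrete, the following sketch minimizes a smooth criterion of two parameters by fixed-step gradient descent. The quadratic criterion, starting point, and step size are hypothetical choices for illustration, not taken from the article.

```python
# Minimal sketch: minimize a differentiable criterion f of a finite
# number of parameters by fixed-step gradient descent.
# The quadratic criterion below is a hypothetical example.

def f(x):
    # Smooth criterion with a unique minimizer at (3, -1)
    return (x[0] - 3) ** 2 + 2 * (x[1] + 1) ** 2

def grad_f(x):
    # Hand-computed gradient of f
    return [2 * (x[0] - 3), 4 * (x[1] + 1)]

def gradient_descent(x, step=0.1, iters=200):
    # Repeatedly move against the gradient; the step must be small
    # enough relative to the curvature for the iteration to contract
    for _ in range(iters):
        g = grad_f(x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

x_star = gradient_descent([0.0, 0.0])
```

On this strongly convex criterion the iterates contract geometrically toward the minimizer (3, -1); more elaborate methods discussed in the article improve on this basic scheme.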
Optimization problems arise in many fields of engineering, as well as in science and economics, often downstream of simulation steps. These problems are often infinite-dimensional, i.e. one seeks an optimal function rather than a finite number of optimal parameters. A discretization phase (in space, in time) is then needed to return to the finite-dimensional framework considered here, and thus to a problem that can be solved on a computer. The direct transcription of optimal control problems follows such a discretization procedure. Other examples are described in the "Continuous optimization" section.
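As a toy illustration of this discretization step, one can transcribe the infinite-dimensional problem of minimizing ∫₀¹ u'(t)² dt subject to u(0)=0 and u(1)=1 into a finite-dimensional one by finite differences on a uniform grid, whose minimizer approximates the straight line u(t)=t. The particular functional, grid size, and step length below are assumptions made for illustration, not taken from the article.

```python
# Toy direct transcription: discretize min ∫ u'(t)^2 dt, u(0)=0, u(1)=1,
# on a uniform grid, then minimize over the interior nodal values.
# The functional, grid size and step length are illustrative assumptions.

N = 10          # number of grid intervals
h = 1.0 / N     # grid spacing

def grad_J(u):
    # Gradient of J(u) = sum_i (u[i+1]-u[i])^2 / h with respect to the
    # interior values u[1..N-1]; the boundary values stay fixed.
    g = [0.0] * (N + 1)
    for i in range(1, N):
        g[i] = 2.0 * (2.0 * u[i] - u[i - 1] - u[i + 1]) / h
    return g

# Fixed-step gradient descent on the discretized, finite-dimensional problem
u = [0.0] * N + [1.0]          # initial guess satisfying the boundary data
for _ in range(5000):
    g = grad_J(u)
    for i in range(1, N):
        u[i] -= 0.01 * g[i]

# u now approximates the continuous minimizer u(t) = t at the grid points
```

The discretized problem has only N-1 unknowns, so any finite-dimensional method described in this article applies to it.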
Numerical optimization methods were mainly developed after the Second World War, in parallel with the improvement of computers, and have been constantly enriched ever since. In nonlinear optimization, several waves can be distinguished: penalization methods, the augmented Lagrangian method (1958), quasi-Newton methods (1959), Newtonian or SQP methods (1976), interior point algorithms (1984). One wave does not erase the previous one, but provides better answers to certain classes of problems, as was the case with interior point methods in semidefinite programming (SDP). Particular attention is paid to algorithms that can handle large-scale...
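Among the waves mentioned above, the quasi-Newton idea, of which BFGS is the best-known instance, maintains an approximation of the inverse Hessian from gradient differences alone. The following sketch applies it to a hypothetical quadratic criterion; the test function, Armijo backtracking parameters, and tolerances are assumptions made for illustration.

```python
# Minimal quasi-Newton (BFGS) sketch in two variables, without external
# libraries. The quadratic criterion below is a hypothetical example.

def f(x):
    # f(x) = (x0 - 1)^2 + 3 (x1 + 2)^2, minimum at (1, -2)
    return (x[0] - 1) ** 2 + 3 * (x[1] + 2) ** 2

def grad(x):
    return [2 * (x[0] - 1), 6 * (x[1] + 2)]

def bfgs(x, iters=100, tol=1e-9):
    # Inverse-Hessian approximation H, initialized to the identity
    H = [[1.0, 0.0], [0.0, 1.0]]
    g = grad(x)
    for _ in range(iters):
        if max(abs(gi) for gi in g) < tol:
            break
        # Search direction d = -H g
        d = [-(H[i][0] * g[0] + H[i][1] * g[1]) for i in range(2)]
        # Backtracking line search (Armijo condition)
        t, fx = 1.0, f(x)
        slope = g[0] * d[0] + g[1] * d[1]
        while f([x[0] + t * d[0], x[1] + t * d[1]]) > fx + 1e-4 * t * slope:
            t *= 0.5
        x_new = [x[0] + t * d[0], x[1] + t * d[1]]
        g_new = grad(x_new)
        s = [x_new[i] - x[i] for i in range(2)]
        y = [g_new[i] - g[i] for i in range(2)]
        sy = s[0] * y[0] + s[1] * y[1]
        if sy > 1e-12:   # curvature condition keeps H positive definite
            rho = 1.0 / sy
            # BFGS update: H <- (I - rho s y^T) H (I - rho y s^T) + rho s s^T
            M = [[(1.0 if i == j else 0.0) - rho * s[i] * y[j]
                  for j in range(2)] for i in range(2)]
            MH = [[sum(M[i][k] * H[k][j] for k in range(2))
                   for j in range(2)] for i in range(2)]
            H = [[sum(MH[i][k] * M[j][k] for k in range(2))
                  + rho * s[i] * s[j] for j in range(2)] for i in range(2)]
        x, g = x_new, g_new
    return x

x_star = bfgs([0.0, 0.0])
```

Unlike Newtonian methods, this scheme never evaluates second derivatives: the update built from s and y progressively captures the curvature of the criterion.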
Differentiable optimization