Differentiable optimization
Article REF: AF1252 V1

Author: Jean Charles GILBERT

Publication date: April 10, 2008

ABSTRACT

Problems of differentiable optimization arise when one seeks the optimal value of a finite number of parameters, optimality referring to the minimality of a given criterion. This article describes the principal algorithms for solving these problems and explains their motivation. Such problems arise in numerous sectors of engineering, as well as in science and economics. They are sometimes posed in infinite dimension; in that case, one seeks an optimal function. Current numerical optimization methods stem from successive advances, each enriching the previous ones.

AUTHOR

  • Jean Charles GILBERT: Director of Research at INRIA (French National Institute for Research in Computer Science and Control)

INTRODUCTION

This well-reasoned synthesis describes the main algorithms for solving differentiable optimization problems, and gives their motivation. These problems arise when the optimal value of a finite number of parameters is to be determined. Optimality here means the minimality of a given criterion. The assumed differentiability of the functions defining the problem immediately rules out combinatorial optimization (the parameters to be optimized take only integer or discrete values; see the "Optimization in integers" dossier [AF 1 251]) and non-smooth optimization (the functions have irregularities; see the "Optimization and convexity" dossier [AF 1 253]).
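As a concrete (and hypothetical) illustration of the kind of problem the article treats — not an example taken from the article itself — here is Newton's method, the prototype of the "Newtonian" algorithms mentioned further on, minimizing a smooth criterion of two parameters (the Rosenbrock test function). The method exploits the assumed differentiability: at each iterate it solves the linear system H d = -g built from the gradient g and Hessian H.

```python
# Hypothetical sketch, not the article's own code: Newton's method on the
# smooth two-parameter Rosenbrock function f(x, y) = (1-x)^2 + 100(y-x^2)^2.

def grad(x, y):
    """Gradient of f."""
    return (-2.0 * (1.0 - x) - 400.0 * x * (y - x * x),
            200.0 * (y - x * x))

def hess(x, y):
    """Hessian of f, as a 2x2 nested list."""
    return [[2.0 - 400.0 * y + 1200.0 * x * x, -400.0 * x],
            [-400.0 * x, 200.0]]

def newton(x, y, tol=1e-10, max_iter=50):
    """Minimize f by solving H d = -g at each iterate (2x2 solve by Cramer)."""
    for it in range(max_iter):
        gx, gy = grad(x, y)
        if max(abs(gx), abs(gy)) < tol:
            return x, y, it
        (a, b), (c, d) = hess(x, y)
        det = a * d - b * c             # assume H is invertible along the way
        dx = (-gx * d + b * gy) / det   # Cramer's rule for H d = -g
        dy = (-a * gy + c * gx) / det
        x, y = x + dx, y + dy
    return x, y, max_iter

x, y, iters = newton(0.0, 0.0)
print(x, y, iters)   # → 1.0 1.0 2: the minimizer (1, 1), reached in 2 steps
```

This pure Newton iteration converges very fast near a minimizer with positive definite Hessian; practical codes add safeguards (line search, trust regions) that this sketch omits.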

Optimization problems arise in many fields of engineering, as well as in science and economics, often after simulation steps have been completed. These problems are often infinite-dimensional, i.e. an optimal function is sought rather than a finite number of optimal parameters. A discretization phase (in space, in time) is then required to return to the finite-dimensional framework considered here, and thus to a problem that can be solved on a computer. The direct transcription of optimal control problems follows such a discretization procedure. Other examples are described in the "Continuous optimization" dossier [S 7 210].
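To make the discretization step concrete, here is a hypothetical toy example of direct transcription (the problem, scheme, and solver below are my own choices, not taken from the article): minimize the integral of u(t)^2 over [0, 1] subject to x'(t) = u(t), x(0) = 0, x(1) = 1, whose optimal control is the constant u(t) = 1. An Euler discretization replaces the unknown *function* u by N parameters, yielding a finite-dimensional problem solved here by projected gradient descent.

```python
# Hypothetical sketch of direct transcription (not the article's example).
# Euler discretization with N steps turns the control u(t) into parameters
# u_0, ..., u_{N-1}:
#   minimize  h * sum(u_k^2)   subject to   h * sum(u_k) = 1,   h = 1/N.

N = 20
h = 1.0 / N

def project(u):
    """Project u onto the affine feasible set {u : h * sum(u) = 1}."""
    shift = (N - sum(u)) / N          # h*sum(u) = 1  <=>  sum(u) = N
    return [uk + shift for uk in u]

def solve(iters=200, alpha=1.0):
    u = project([float(k) for k in range(N)])   # nonconstant feasible start
    for _ in range(iters):
        g = [2.0 * h * uk for uk in u]               # gradient of h*sum(u_k^2)
        u = project([uk - alpha * gk for uk, gk in zip(u, g)])
    return u

u = solve()
cost = h * sum(uk * uk for uk in u)
print(cost)   # close to 1, the continuous optimum (u identically 1)
```

The discretized controls converge to the constant 1, matching the continuous solution; finer grids (larger N) approximate more general optimal controls.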

Numerical optimization methods were mainly developed after the Second World War, in parallel with the improvement of computers, and have been constantly enriched ever since. In nonlinear optimization, several waves can be distinguished: penalization methods, the augmented Lagrangian method (1958), quasi-Newton methods (1959), Newtonian or SQP methods (1976), and interior-point algorithms (1984). One wave does not erase the previous one, but provides better answers to certain classes of problems, as was the case with interior-point methods in semidefinite optimization. Particular attention is paid to algorithms that can handle large-scale...
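As a minimal sketch of the penalization idea named in the first of these waves (the toy problem, penalty schedule, and step sizes below are my own assumptions, not the article's): the equality constraint is replaced by a quadratic penalty term whose weight mu is driven to larger and larger values, each penalized problem being solved by gradient descent warm-started from the previous one.

```python
# Hypothetical sketch of a quadratic penalty method (not the article's code):
#   minimize  x1 + x2   subject to   x1^2 + x2^2 = 1,
# whose solution is (-1/sqrt(2), -1/sqrt(2)). The constraint is replaced by
# the penalty term mu * (x1^2 + x2^2 - 1)^2.

def penalized_grad(x1, x2, mu):
    """Gradient of x1 + x2 + mu * (x1^2 + x2^2 - 1)^2."""
    c = x1 * x1 + x2 * x2 - 1.0
    return (1.0 + 4.0 * mu * c * x1, 1.0 + 4.0 * mu * c * x2)

def penalty_method(mus=(1.0, 10.0, 100.0, 1000.0), inner_iters=5000):
    x1, x2 = 0.0, 0.0                  # warm-start each penalized problem
    for mu in mus:
        step = 0.5 / (16.0 * mu)       # crude fixed step tied to the curvature
        for _ in range(inner_iters):
            g1, g2 = penalized_grad(x1, x2, mu)
            x1, x2 = x1 - step * g1, x2 - step * g2
    return x1, x2

x1, x2 = penalty_method()
print(x1, x2)   # both close to -1/sqrt(2) ≈ -0.7071
```

The ill-conditioning visible here (the step must shrink like 1/mu as mu grows) is precisely what motivated the later waves, such as the augmented Lagrangian method, which reaches the solution without driving the penalty weight to infinity.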
