Dakota Reference Manual, Version 6.4
Large-Scale Engineering Optimization and Uncertainty Analysis
numerical_hessians


Hessians are needed and will be approximated by finite differences

Specification

Alias: none

Argument(s): none

Required/Optional        Description of Group             Dakota Keyword   Dakota Keyword Description
Optional                                                  fd_step_size     Step size used when computing gradients and Hessians
Optional (Choose One)    step scaling (Group 1)           relative         (Default) Scale step size by the parameter value
                                                          absolute         Do not scale step size
                                                          bounds           Scale step size by the domain of the parameter
Optional (Choose One)    difference interval (Group 2)    forward          (Default) Use forward differences
                                                          central          Use central differences
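
For example, a responses block requesting numerically differenced Hessians might look like the following. This is a minimal sketch: the function count and step size value are illustrative choices, not defaults, and the surrounding method, variables, and interface blocks are omitted.

    responses
      objective_functions = 1        # one objective; counts are problem-specific
      numerical_gradients            # gradients also finite-differenced here
      numerical_hessians
        fd_step_size = 1.e-3         # illustrative value, not the default
        relative                     # Group 1: scale the step by the parameter value
        central                      # Group 2: second-order differences (default is forward)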

Description

The numerical_hessians specification means that Hessian information is needed and will be computed with finite differences. Dakota uses first-order gradient differencing when analytic gradients are available (the analytic_gradients case, or the functions identified by id_analytic_gradients in the mixed_gradients case), and first- or second-order function value differencing for all other gradient specifications. In the former case, the following expression

\[ \nabla^2 f ({\bf x})_i \cong \frac{\nabla f ({\bf x} + h {\bf e}_i) - \nabla f ({\bf x})}{h} \]

estimates the $i^{th}$ Hessian column, and in the latter case, the following expressions

\[ \nabla^2 f ({\bf x})_{i,j} \cong \frac{f({\bf x} + h_i {\bf e}_i + h_j {\bf e}_j) - f({\bf x} + h_i {\bf e}_i) - f({\bf x} + h_j {\bf e}_j) + f({\bf x})}{h_i h_j} \]

and

\[ \nabla^2 f ({\bf x})_{i,j} \cong \frac{f({\bf x} + h {\bf e}_i + h {\bf e}_j) - f({\bf x} + h {\bf e}_i - h {\bf e}_j) - f({\bf x} - h {\bf e}_i + h {\bf e}_j) + f({\bf x} - h {\bf e}_i - h {\bf e}_j)}{4h^2} \]

provide first- and second-order estimates of the $ij^{th}$ Hessian term. Prior to Dakota 5.0, Dakota always used second-order estimates. In Dakota 5.0 and newer, the default is the first-order estimate, which honors bounds on the variables and requires only about a quarter as many function evaluations as the second-order estimate; specifying central after numerical_hessians selects the older second-order estimate, which does not honor bounds. In optimization algorithms that use Hessians, there is little reason to use second-order differences when computing Hessian approximations.
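
As a concrete illustration of the formulas above, the following NumPy sketch implements the three estimates. This is not Dakota's implementation: the function names, the test quadratic, and the step sizes are invented for demonstration, and a single scalar step h stands in for the per-variable steps $h_i$.

    import numpy as np

    def hessian_from_gradients(grad, x, h=1e-5):
        # First-order gradient differencing: column i is (grad(x + h e_i) - grad(x)) / h.
        n = x.size
        g0 = grad(x)
        H = np.empty((n, n))
        for i in range(n):
            step = np.zeros(n)
            step[i] = h
            H[:, i] = (grad(x + step) - g0) / h
        return 0.5 * (H + H.T)  # symmetrize, since each column is estimated independently

    def hessian_first_order(f, x, h=1e-4):
        # First-order function value differencing (forward steps only, so bounds can be honored).
        n = x.size
        I = h * np.eye(n)                              # row i is h * e_i
        f0 = f(x)
        fi = np.array([f(x + I[i]) for i in range(n)])
        H = np.empty((n, n))
        for i in range(n):
            for j in range(n):
                H[i, j] = (f(x + I[i] + I[j]) - fi[i] - fi[j] + f0) / (h * h)
        return H

    def hessian_second_order(f, x, h=1e-4):
        # Second-order central differencing (steps in both directions; does not honor bounds).
        n = x.size
        I = h * np.eye(n)
        H = np.empty((n, n))
        for i in range(n):
            for j in range(n):
                H[i, j] = (f(x + I[i] + I[j]) - f(x + I[i] - I[j])
                           - f(x - I[i] + I[j]) + f(x - I[i] - I[j])) / (4.0 * h * h)
        return H

    # On a quadratic, all three estimates recover the exact Hessian [[2, 3], [3, 4]].
    f = lambda x: x[0]**2 + 3.0*x[0]*x[1] + 2.0*x[1]**2
    g = lambda x: np.array([2.0*x[0] + 3.0*x[1], 3.0*x[0] + 4.0*x[1]])
    x0 = np.array([1.0, -2.0])
    print(hessian_from_gradients(g, x0))
    print(hessian_first_order(f, x0))
    print(hessian_second_order(f, x0))

Note the evaluation counts implied by the loops: the first-order estimate needs function values only at forward-perturbed points, while the second-order estimate needs four evaluations per Hessian entry in both step directions, which is the roughly fourfold cost (and the bounds violation) mentioned above.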
