Dakota Reference Manual, Version 6.16
optpp_q_newton


Quasi-Newton optimization method

Topics

This keyword is related to the topics:

Specification

Alias: none

Argument(s): none

Child Keywords:

Required/Optional   Dakota Keyword              Dakota Keyword Description
Optional            search_method               Select a search method for Newton-based optimizers
Optional            merit_function              Balance goals of reducing the objective function and satisfying constraints
Optional            steplength_to_boundary      Controls how close to the boundary of the feasible region the algorithm is allowed to move
Optional            centering_parameter         Controls how closely the algorithm should follow the "central path"
Optional            max_step                    Maximum change in design point
Optional            gradient_tolerance          Stopping criterion based on the L2 norm of the gradient
Optional            max_iterations              Number of iterations allowed for optimizers and adaptive UQ methods
Optional            convergence_tolerance       Stopping criterion based on objective function or statistics convergence
Optional            speculative                 Compute speculative gradients
Optional            max_function_evaluations    Number of function evaluations allowed for optimizers
Optional            scaling                     Turn on scaling for variables, responses, and constraints
Optional            model_pointer               Identifier for model block to be used by a method

Description

This is a Newton method that expects a gradient and computes a low-rank approximation to the Hessian. Each of the Newton-based methods is automatically bound to the appropriate OPT++ algorithm based on the user's constraint specification (unconstrained, bound-constrained, or generally-constrained). In the generally-constrained case, the Newton methods use a nonlinear interior-point approach to manage the constraints.

See package_optpp for information that applies to all OPT++ methods.
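
As a usage illustration, the following is a minimal sketch of a Dakota method block that selects this optimizer. The child-keyword values and the model identifier 'OPT_MODEL' are illustrative assumptions, not recommended settings; a complete input file would also define the corresponding model, variables, interface, and responses blocks, with gradients enabled since the method expects them.

  # Hypothetical method block using optpp_q_newton; values are placeholders.
  method
    optpp_q_newton
      search_method gradient_based_line_search   # line search along the quasi-Newton direction
      max_iterations = 100
      convergence_tolerance = 1.0e-4
      gradient_tolerance = 1.0e-6
      max_step = 1000.0
      model_pointer = 'OPT_MODEL'                # model block assumed to be defined elsewhere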

Expected HDF5 Output

If Dakota was built with HDF5 support and run with the hdf5 keyword, this method writes the following results to HDF5:
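
A minimal sketch of the environment block that requests this output, assuming the hdf5 keyword is specified under results_output as in current Dakota releases:

  # Hypothetical environment block enabling HDF5 results output
  # (requires a Dakota build with HDF5 support).
  environment
    results_output
      hdf5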