Dakota Reference Manual
Version 6.2
Large-Scale Engineering Optimization and Uncertainty Analysis
Sequential Quadratic Program
This keyword is related to the topics:
Alias: none
Argument(s): none
| Required/Optional | Dakota Keyword | Dakota Keyword Description |
|---|---|---|
| Optional | verify_level | Verify the quality of analytic gradients |
| Optional | function_precision | Specify the maximum precision of the analysis code responses |
| Optional | linesearch_tolerance | Choose how accurately the algorithm will compute the minimum in a line search |
| Optional | linear_inequality_constraint_matrix | Define coefficients of the linear inequality constraints |
| Optional | linear_inequality_lower_bounds | Define lower bounds for the linear inequality constraints |
| Optional | linear_inequality_upper_bounds | Define upper bounds for the linear inequality constraints |
| Optional | linear_inequality_scale_types | Specify how each linear inequality constraint is scaled |
| Optional | linear_inequality_scales | Define the characteristic values to scale linear inequalities |
| Optional | linear_equality_constraint_matrix | Define coefficients of the linear equalities |
| Optional | linear_equality_targets | Define target values for the linear equality constraints |
| Optional | linear_equality_scale_types | Specify how each linear equality constraint is scaled |
| Optional | linear_equality_scales | Define the characteristic values to scale linear equalities |
| Optional | model_pointer | Identifier for model block to be used by a method |
NPSOL provides an implementation of sequential quadratic programming that can be accessed with npsol_sqp.
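For orientation, a minimal Dakota method specification selecting this optimizer might look like the following sketch (the id_method label is an illustrative assumption, not a requirement):

```
method
  id_method = 'OPT'     # illustrative method label
  npsol_sqp
```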
Stopping Criteria
The method-independent controls max_iterations and max_function_evaluations limit the number of major SQP iterations and the number of function evaluations that can be performed during an NPSOL optimization. The convergence_tolerance control defines NPSOL's internal optimality tolerance, which is used to evaluate whether an iterate satisfies the first-order Kuhn-Tucker conditions for a minimum. The magnitude of convergence_tolerance approximately specifies the number of significant digits of accuracy desired in the final objective function (e.g., convergence_tolerance = 1.e-6 results in approximately six digits of accuracy in the final objective function). The constraint_tolerance control defines how tightly the constraint functions are satisfied at convergence. Its default value depends on the machine precision of the platform in use, but is typically on the order of 1.e-8 for double-precision computations. Extremely small values for constraint_tolerance may not be attainable. The output verbosity setting controls the amount of information generated at each major SQP iteration: the silent and quiet settings result in only one line of diagnostic output per major iteration and print the final optimization solution, whereas the verbose and debug settings add information on the objective function, constraints, and variables at each major iteration.
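A sketch of a method block exercising these controls follows; the specific values are illustrative assumptions, not recommended settings.

```
method
  npsol_sqp
    max_iterations = 100               # cap on major SQP iterations
    max_function_evaluations = 1000    # cap on total function evaluations
    convergence_tolerance = 1.e-6      # ~6 significant digits in the final objective
    constraint_tolerance = 1.e-8       # typical double-precision default magnitude
  output quiet                         # one diagnostic line per major iteration
```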
Concurrency
NPSOL is not a parallel algorithm and cannot directly take advantage of concurrent evaluations. However, if numerical_gradients with method_source dakota is specified, then the finite difference function evaluations can be performed concurrently (using any of the parallel modes described in the Users Manual [5]).
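For example, a responses specification along the following lines (a sketch with assumed settings) lets Dakota own the finite differencing so that the gradient evaluations can be dispatched concurrently:

```
responses
  objective_functions = 1
  numerical_gradients
    method_source dakota      # Dakota computes the finite differences
    interval_type forward     # one extra evaluation per variable, all independent
  no_hessians
```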
An important related observation is that NPSOL uses two different line searches depending on how gradients are computed. For either analytic_gradients or numerical_gradients with method_source dakota, NPSOL is placed in user-supplied gradient mode (NPSOL's "Derivative Level" is set to 3) and uses a gradient-based line search (the assumption is that user-supplied gradients are inexpensive). On the other hand, if numerical_gradients are selected with method_source vendor, then NPSOL computes the finite differences internally and uses a value-based line search (the assumption is that finite differencing on each line search evaluation is too expensive). The ramifications of this are: (1) performance will vary between method_source dakota and method_source vendor for numerical_gradients, and (2) gradient speculation is unnecessary when performing optimization in parallel, since the gradient-based line search in user-supplied gradient mode is already load balanced for parallel execution. Therefore, a speculative specification will be ignored by NPSOL, and optimization with numerical gradients should select method_source dakota for load-balanced parallel operation and method_source vendor for efficient serial operation.
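Conversely, for serial runs the vendor-differenced alternative sketched below (again with assumed settings) lets NPSOL difference internally and use its value-based line search:

```
responses
  objective_functions = 1
  numerical_gradients
    method_source vendor      # NPSOL performs the finite differencing internally
  no_hessians
```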
Linear constraints
Lastly, NPSOL supports specialized handling of linear inequality and equality constraints. The user specifies the coefficients and bounds of the linear inequality constraints and the coefficients and targets of the linear equality constraints; this information is passed to NPSOL at initialization and tracked internally, removing the need for the user to provide the values of the linear constraints on every function evaluation.
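As an illustration only, the sketch below assumes a two-variable problem with two linear inequality constraints and one linear equality constraint; the coefficients, bounds, and target are invented for the example, and the default lower bounds (negative infinity) are left in place for the inequalities.

```
method
  npsol_sqp
    linear_inequality_constraint_matrix = 1.0  1.0    # row 1:  x1 + x2 <= 4
                                          2.0 -1.0    # row 2: 2*x1 - x2 <= 3
    linear_inequality_upper_bounds = 4.0  3.0
    linear_equality_constraint_matrix = 1.0  3.0      # x1 + 3*x2 = 2
    linear_equality_targets = 2.0
```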