Dakota
Version 6.4

Wrapper class for the NLPQLP optimization library, Version 2.0. More...
Public Member Functions  
NLPQLPOptimizer (ProblemDescDB &problem_db, Model &model)  
standard constructor  
NLPQLPOptimizer (Model &model)  
alternate constructor  
~NLPQLPOptimizer ()  
destructor  
void  core_run () 
core portion of run; implemented by all derived classes and may include pre/post steps in lieu of separate pre/post More...  
Protected Member Functions  
void  initialize_run () 
performs runtime set up  
Protected Member Functions inherited from Optimizer  
Optimizer ()  
default constructor  
Optimizer (ProblemDescDB &problem_db, Model &model)  
alternate constructor; accepts a model  
Optimizer (unsigned short method_name, Model &model)  
alternate constructor for "on the fly" instantiations  
Optimizer (unsigned short method_name, size_t num_cv, size_t num_div, size_t num_dsv, size_t num_drv, size_t num_lin_ineq, size_t num_lin_eq, size_t num_nln_ineq, size_t num_nln_eq)  
alternate constructor for "on the fly" instantiations  
~Optimizer ()  
destructor  
void  post_run (std::ostream &s) 
void  finalize_run () 
utility function to perform common operations following post_run(); deallocation and resetting of instance pointers More...  
void  print_results (std::ostream &s) 
Protected Member Functions inherited from Minimizer  
Minimizer ()  
default constructor  
Minimizer (ProblemDescDB &problem_db, Model &model)  
standard constructor More...  
Minimizer (unsigned short method_name, Model &model)  
alternate constructor for "on the fly" instantiations  
Minimizer (unsigned short method_name, size_t num_lin_ineq, size_t num_lin_eq, size_t num_nln_ineq, size_t num_nln_eq)  
alternate constructor for "on the fly" instantiations  
~Minimizer ()  
destructor  
void  update_from_model (const Model &model) 
set inherited data attributes based on extractions from incoming model  
const Model &  algorithm_space_model () const 
Model  original_model (unsigned short recasts_left=0) 
Return a shallow copy of the model this Iterator was originally passed, optionally leaving recasts_left recastings on top of it.  
void  data_transform_model () 
Wrap iteratedModel in a RecastModel that subtracts provided observed data from the primary response functions (variables and secondary responses are unchanged) More...  
void  scale_model () 
Wrap iteratedModel in a RecastModel that performs variable and/or response scaling. More...  
Real  objective (const RealVector &fn_vals, const BoolDeque &max_sense, const RealVector &primary_wts) const 
compute a composite objective value from one or more primary functions More...  
Real  objective (const RealVector &fn_vals, size_t num_fns, const BoolDeque &max_sense, const RealVector &primary_wts) const 
compute a composite objective with specified number of source primary functions, instead of userPrimaryFns More...  
void  objective_gradient (const RealVector &fn_vals, const RealMatrix &fn_grads, const BoolDeque &max_sense, const RealVector &primary_wts, RealVector &obj_grad) const 
compute the gradient of the composite objective function  
void  objective_gradient (const RealVector &fn_vals, size_t num_fns, const RealMatrix &fn_grads, const BoolDeque &max_sense, const RealVector &primary_wts, RealVector &obj_grad) const 
compute the gradient of the composite objective function More...  
void  objective_hessian (const RealVector &fn_vals, const RealMatrix &fn_grads, const RealSymMatrixArray &fn_hessians, const BoolDeque &max_sense, const RealVector &primary_wts, RealSymMatrix &obj_hess) const 
compute the Hessian of the composite objective function  
void  objective_hessian (const RealVector &fn_vals, size_t num_fns, const RealMatrix &fn_grads, const RealSymMatrixArray &fn_hessians, const BoolDeque &max_sense, const RealVector &primary_wts, RealSymMatrix &obj_hess) const 
compute the Hessian of the composite objective function More...  
void  archive_allocate_best (size_t num_points) 
allocate results arrays and labels for multipoint storage  
void  archive_best (size_t index, const Variables &best_vars, const Response &best_resp) 
archive the best point into the results array  
void  resize_best_vars_array (size_t newsize) 
Safely resize the best variables array to newsize, taking into account the envelope-letter design pattern and any recasting. More...  
void  resize_best_resp_array (size_t newsize) 
Safely resize the best response array to newsize, taking into account the envelope-letter design pattern and any recasting. More...  
Real  sum_squared_residuals (size_t num_pri_fns, const RealVector &residuals, const RealVector &weights) 
return weighted sum of squared residuals  
void  print_residuals (size_t num_terms, const RealVector &best_terms, const RealVector &weights, size_t num_best, size_t best_index, std::ostream &s) 
print num_terms residuals and misfit for final results  
void  print_model_resp (size_t num_pri_fns, const RealVector &best_fns, size_t num_best, size_t best_index, std::ostream &s) 
print the original user model resp in the case of data transformations  
void  local_recast_retrieve (const Variables &vars, Response &response) const 
infers MOO/NLS solution from the solution of a single-objective optimizer More...  
Protected Member Functions inherited from Iterator  
Iterator (BaseConstructor, ProblemDescDB &problem_db)  
constructor initializes the base class part of letter classes (BaseConstructor overloading avoids infinite recursion in the derived class constructors; see Coplien, p. 139) More...  
Iterator (NoDBBaseConstructor, unsigned short method_name, Model &model)  
alternate constructor for base iterator classes constructed on the fly More...  
Iterator (NoDBBaseConstructor, unsigned short method_name)  
alternate constructor for base iterator classes constructed on the fly More...  
virtual void  derived_init_communicators (ParLevLIter pl_iter) 
derived class contributions to initializing the communicators associated with this Iterator instance  
virtual const VariablesArray &  initial_points () const 
gets the multiple initial points for this iterator. This will only be meaningful after a call to the initial_points mutator.  
StrStrSizet  run_identifier () const 
get the unique run identifier based on method name, id, and number of executions  
Private Member Functions  
void  initialize () 
Shared constructor code.  
void  allocate_workspace () 
Allocates workspace for the optimizer.  
void  deallocate_workspace () 
Releases workspace memory.  
void  allocate_constraints () 
Allocates constraint mappings.  
Private Attributes  
int  L 
L : Number of parallel systems, i.e., function calls during line search at predetermined iterates. HINT: if fewer than 10 parallel function evaluations are possible, it is recommended to apply the serial version by setting L=1.  
int  numEqConstraints 
numEqConstraints : Number of equality constraints.  
int  MMAX 
MMAX : Row dimension of array DG containing the Jacobian of constraints. MMAX must be at least one and greater than or equal to M.  
int  N 
N : Number of optimization variables.  
int  NMAX 
NMAX : Row dimension of C. NMAX must be at least two and greater than N.  
int  MNN2 
MNN2 : Must be equal to M+N+N+2.  
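The dimension rules above (MMAX at least one and >= M, NMAX at least two and > N, MNN2 = M+N+N+2) can be collected into a small helper. This is an illustrative sketch, not part of Dakota's API; M here denotes the total number of constraints.

```cpp
#include <algorithm>

// Illustrative only: derives the NLPQLP array dimensions from the
// number of constraints M and optimization variables N, following the
// documented rules for MMAX, NMAX, and MNN2.
struct NlpqlpDims {
  int MMAX;  // row dimension of G and DG: at least 1 and >= M
  int NMAX;  // row dimension of X and C: at least 2 and strictly > N
  int MNN2;  // length of the multiplier array U: M + N + N + 2
};

inline NlpqlpDims nlpqlp_dims(int M, int N) {
  NlpqlpDims d;
  d.MMAX = std::max(1, M);
  d.NMAX = std::max(2, N + 1);  // smallest value strictly greater than N
  d.MNN2 = M + N + N + 2;
  return d;
}
```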
double *  X 
X(NMAX,L) : Initially, the first column of X has to contain starting values for the optimal solution. On return, X is replaced by the current iterate. In the driving program the row dimension of X has to be equal to NMAX. X is used internally to store L different arguments for which function values should be computed simultaneously.  
double *  F 
F(L) : On return, F(1) contains the final objective function value. F is used also to store L different objective function values to be computed from L iterates stored in X.  
double *  G 
G(MMAX,L) : On return, the first column of G contains the constraint function values at the final iterate X. In the driving program the row dimension of G has to be equal to MMAX. G is used internally to store L different sets of constraint function values to be computed from the L iterates stored in X.  
double *  DF 
DF(NMAX) : DF contains the current gradient of the objective function. In case of numerical differentiation and a distributed system (L>1), it is recommended to apply parallel evaluations of F to compute DF.  
double *  DG 
DG(MMAX,NMAX) : DG contains the gradients of the active constraints (ACTIVE(J)=.true.) at a current iterate X. The remaining rows are filled with previously computed gradients. In the driving program the row dimension of DG has to be equal to MMAX.  
double *  U 
U(MNN2) : U contains the multipliers with respect to the current iterate stored in the first column of X. The first M locations contain the multipliers of the M nonlinear constraints, the subsequent N locations the multipliers of the lower bounds, and the final N locations the multipliers of the upper bounds. At an optimal solution, all multipliers with respect to inequality constraints should be nonnegative.  
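Given that layout, each multiplier sits at a fixed offset in U: constraint J at position J, lower bound I at M+I, upper bound I at M+N+I. These tiny index helpers (0-based C indexing over the Fortran array) are illustrative only:

```cpp
// Sketch: 0-based offsets into the multiplier array U(MNN2), following
// the documented layout: M constraint multipliers, then N lower-bound
// multipliers, then N upper-bound multipliers.
inline int u_constraint_index(int j)               { return j; }
inline int u_lower_bound_index(int i, int M)       { return M + i; }
inline int u_upper_bound_index(int i, int M, int N){ return M + N + i; }
```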
double *  C 
C(NMAX,NMAX) : On return, C contains the last computed approximation of the Hessian matrix of the Lagrangian function stored in the form of an LDL decomposition. C contains the lower triangular factor of an LDL factorization of the final quasi-Newton matrix (without diagonal elements, which are always one). In the driving program, the row dimension of C has to be equal to NMAX.  
double *  D 
D(NMAX) : The elements of the diagonal matrix of the LDL decomposition of the quasi-Newton matrix are stored in the one-dimensional array D.  
double  ACC 
ACC : The user has to specify the desired final accuracy (e.g. 1.0D-7). The termination accuracy should not be smaller than the accuracy by which gradients are computed.  
double  ACCQP 
ACCQP : The tolerance is needed for the QP solver to perform several tests, for example whether optimality conditions are satisfied or whether a number is considered as zero or not. If ACCQP is less than or equal to zero, then the machine precision is computed by NLPQLP and subsequently multiplied by 1.0D+4.  
double  STPMIN 
STPMIN : Minimum step length in case of L>1. Any value on the order of the accuracy by which functions are computed is recommended. The value is needed to compute a step length reduction factor by STPMIN**(1/(L-1)). If STPMIN<=0, then STPMIN=ACC is used.  
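The step length reduction factor described above, STPMIN**(1/(L-1)) with ACC substituted when STPMIN is non-positive, can be sketched as follows; the function name is illustrative and not part of Dakota or NLPQLP:

```cpp
#include <cmath>

// Sketch: step length reduction factor for the parallel line search
// (L > 1), computed as STPMIN**(1/(L-1)) per the NLPQLP documentation.
// If STPMIN is non-positive, ACC is used in its place.
inline double step_reduction_factor(double stpmin, double acc, int L) {
  if (stpmin <= 0.0) stpmin = acc;
  return std::pow(stpmin, 1.0 / (L - 1));  // valid only for L > 1
}
```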
int  MAXFUN 
MAXFUN : The integer variable defines an upper bound for the number of function calls during the line search (e.g. 20). MAXFUN is only needed in case of L=1, and must not be greater than 50.  
int  MAXIT 
MAXIT : Maximum number of outer iterations, where one iteration corresponds to one formulation and solution of the quadratic programming subproblem, or, alternatively, one evaluation of gradients (e.g. 100).  
int  MAX_NM 
MAX_NM : Stack size for storing merit function values at previous iterations for nonmonotone line search (e.g. 10). In case of MAX_NM=0, monotone line search is performed.  
double  TOL_NM 
TOL_NM : Relative bound for increase of merit function value, if line search is not successful during the very first step. Must be nonnegative (e.g. 0.1).  
int  IPRINT 
IPRINT : Specification of the desired output level.
IPRINT = 0 : No output of the program.
IPRINT = 1 : Only a final convergence analysis is given.
IPRINT = 2 : One line of intermediate results is printed in each iteration.
IPRINT = 3 : More detailed information is printed in each iteration step, e.g. variable, constraint and multiplier values.
IPRINT = 4 : In addition to IPRINT=3, merit function and step length values are displayed during the line search.  
int  MODE 
MODE : The parameter specifies the desired version of NLPQLP.
MODE = 0 : Normal execution (reverse communication!).
MODE = 1 : The user wants to provide an initial guess for the multipliers in U and for the Hessian of the Lagrangian function in C and D in the form of an LDL decomposition.  
int  IOUT 
IOUT : Integer indicating the desired output unit number, i.e., all WRITE statements start with 'WRITE(IOUT,...'.  
int  IFAIL 
IFAIL : The parameter shows the reason for terminating a solution process. Initially IFAIL must be set to zero. On return IFAIL could contain the following values:
IFAIL = -2 : Compute gradient values with respect to the variables stored in the first column of X, and store them in DF and DG. Only derivatives for active constraints (ACTIVE(J)=.TRUE.) need to be computed. Then call NLPQLP again.
IFAIL = -1 : Compute objective function and all constraint values subject to the variables found in the first L columns of X, and store them in F and G. Then call NLPQLP again.
IFAIL = 0 : The optimality conditions are satisfied.
IFAIL = 1 : The algorithm has been stopped after MAXIT iterations.
IFAIL = 2 : The algorithm computed an uphill search direction.
IFAIL = 3 : Underflow occurred when determining a new approximation matrix for the Hessian of the Lagrangian.
IFAIL = 4 : The line search could not be terminated successfully.
IFAIL = 5 : Length of a working array is too short. More detailed error information is obtained with IPRINT>0.
IFAIL = 6 : There are false dimensions, for example M>MMAX, N>=NMAX, or MNN2<>M+N+N+2.
IFAIL = 7 : The search direction is close to zero, but the current iterate is still infeasible.
IFAIL = 8 : The starting point violates a lower or upper bound.
IFAIL = 9 : Wrong input parameter, i.e., MODE, LDL decomposition in D and C (in case of MODE=1), IPRINT, IOUT.
IFAIL = 10 : Internal inconsistency of the quadratic subproblem, division by zero.
IFAIL > 100 : The solution of the quadratic programming subproblem has been terminated with an error message, and IFAIL is set to IFQL+100, where IFQL denotes the index of an inconsistent constraint.  
double *  WA 
WA(LWA) : WA is a real working array of length LWA.  
int  LWA 
LWA : LWA value extracted from NLPQLP20.f.  
int *  KWA 
KWA(LKWA) : The user has to provide working space for an integer array.  
int  LKWA 
LKWA : LKWA should be at least N+10.  
int *  ACTIVE 
ACTIVE(LACTIV) : The logical array shows the user which constraints NLPQLP considers to be active at the last computed iterate, i.e., G(J,X) is active if and only if ACTIVE(J)=.TRUE., J=1,...,M.  
int  LACTIVE 
LACTIV : The length LACTIV of the logical array should be at least 2*M+10.  
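The minimum workspace lengths quoted above (LKWA at least N+10, LACTIV at least 2*M+10) can likewise be expressed as helpers; these are illustrative, not Dakota API:

```cpp
// Sketch: minimum lengths of the integer workspace KWA and the logical
// workspace ACTIVE, per the documented requirements.
inline int min_lkwa(int N)   { return N + 10; }
inline int min_lactiv(int M) { return 2 * M + 10; }
```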
int  LQL 
LQL : If LQL = .TRUE., the quadratic programming subproblem is to be solved with a full positive definite quasi-Newton matrix. Otherwise, a Cholesky decomposition is performed and updated, so that the subproblem matrix contains only an upper triangular factor.  
int  numNlpqlConstr 
total number of constraints seen by NLPQL  
SizetList  nonlinIneqConMappingIndices 
a list of indices for referencing the DAKOTA nonlinear inequality constraints used in computing the corresponding NLPQL constraints.  
RealList  nonlinIneqConMappingMultipliers 
a list of multipliers for mapping the DAKOTA nonlinear inequality constraints to the corresponding NLPQL constraints.  
RealList  nonlinIneqConMappingOffsets 
a list of offsets for mapping the DAKOTA nonlinear inequality constraints to the corresponding NLPQL constraints.  
SizetList  linIneqConMappingIndices 
a list of indices for referencing the DAKOTA linear inequality constraints used in computing the corresponding NLPQL constraints.  
RealList  linIneqConMappingMultipliers 
a list of multipliers for mapping the DAKOTA linear inequality constraints to the corresponding NLPQL constraints.  
RealList  linIneqConMappingOffsets 
a list of offsets for mapping the DAKOTA linear inequality constraints to the corresponding NLPQL constraints.  
Additional Inherited Members  
Static Public Member Functions inherited from Optimizer  
static void  not_available (const std::string &package_name) 
Static helper function: third-party opt packages which are not available.  
Static Protected Member Functions inherited from Iterator  
static void  gnewton_set_recast (const Variables &recast_vars, const ActiveSet &recast_set, ActiveSet &sub_model_set) 
conversion of request vector values for the Gauss-Newton Hessian approximation More...  
Protected Attributes inherited from Optimizer  
size_t  numObjectiveFns 
number of objective functions (iterator view)  
bool  localObjectiveRecast 
flag indicating whether local recasting to a single objective is used  
Optimizer *  prevOptInstance 
pointer containing previous value of optimizerInstance  
Static Protected Attributes inherited from Optimizer  
static Optimizer *  optimizerInstance 
pointer to Optimizer instance used in static member functions  
Wrapper class for the NLPQLP optimization library, Version 2.0.
AN IMPLEMENTATION OF A SEQUENTIAL QUADRATIC PROGRAMMING METHOD FOR SOLVING NONLINEAR OPTIMIZATION PROBLEMS BY DISTRIBUTED COMPUTING AND NONMONOTONE LINE SEARCH
This subroutine solves the general nonlinear programming problem
minimize    F(X)
subject to  G(J,X)  = 0,  J = 1,...,ME
            G(J,X) >= 0,  J = ME+1,...,M
            XL <= X <= XU
and is an extension of the code NLPQL. NLPQLP is specifically tuned to run under distributed systems. A new input parameter L is introduced for the number of parallel computers, that is, the number of function calls to be executed simultaneously. In case of L=1, NLPQLP is identical to NLPQL. Otherwise, the line search is modified to allow L parallel function calls in advance. Moreover, the user has the opportunity to use distributed function calls for evaluating gradients.
The algorithm is a modification of the method of Wilson, Han, and Powell. In each iteration step, a linearly constrained quadratic programming problem is formulated by approximating the Lagrangian function quadratically and by linearizing the constraints. Subsequently, a one-dimensional line search is performed with respect to an augmented Lagrangian merit function to obtain a new iterate. The modified line search algorithm also guarantees convergence under the same assumptions as before.
For the new version, a nonmonotone line search is implemented which allows the merit function to increase in the presence of instabilities, for example those caused by round-off errors or errors in gradient approximations.
The subroutine contains the option to predetermine initial guesses for the multipliers or the Hessian of the Lagrangian function and is called by reverse communication.
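The reverse-communication protocol amounts to a driver loop that re-calls the solver after supplying whatever it requested: a negative IFAIL asks the caller to evaluate functions (-1) or gradients (-2), and a nonnegative IFAIL terminates the loop. The sketch below mocks the solver step with a callback to show only the dispatch logic; the real NLPQLP subroutine, its full Fortran argument list, and the actual evaluation code are not shown:

```cpp
#include <functional>

// Illustrative reverse-communication driver. `step` stands in for one
// call to the NLPQLP subroutine and returns the next IFAIL code:
//   IFAIL = -1 : caller must evaluate objective and constraints (F, G)
//   IFAIL = -2 : caller must evaluate gradients (DF, DG)
//   IFAIL >= 0 : terminate (0 = optimality satisfied, > 0 = error)
int drive_reverse_communication(const std::function<int()>& step,
                                const std::function<void()>& eval_functions,
                                const std::function<void()>& eval_gradients) {
  // IFAIL must be initialized to zero before the first solver call.
  for (;;) {
    int ifail = step();
    if (ifail == -1)      eval_functions();  // fill F and G for the L iterates in X
    else if (ifail == -2) eval_gradients();  // fill DF and DG at the current iterate
    else                  return ifail;      // converged (0) or error (> 0)
  }
}
```

In Dakota, core_run() plays the role of this driver, populating F, G, DF, and DG from iteratedModel evaluations between solver calls.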

virtual 
core portion of run; implemented by all derived classes and may include pre/post steps in lieu of separate pre/post
Virtual run function for the iterator class hierarchy. All derived classes need to redefine it.
Reimplemented from Iterator.
References NLPQLPOptimizer::ACC, NLPQLPOptimizer::ACCQP, NLPQLPOptimizer::ACTIVE, Iterator::activeSet, Iterator::bestResponseArray, Iterator::bestVariablesArray, NLPQLPOptimizer::C, Model::continuous_lower_bounds(), Model::continuous_upper_bounds(), Model::continuous_variables(), Dakota::copy_data(), Model::current_response(), NLPQLPOptimizer::D, NLPQLPOptimizer::deallocate_workspace(), NLPQLPOptimizer::DF, NLPQLPOptimizer::DG, Model::evaluate(), NLPQLPOptimizer::F, Response::function_gradients(), Response::function_values(), NLPQLPOptimizer::G, NLPQLPOptimizer::IFAIL, NLPQLPOptimizer::IOUT, NLPQLPOptimizer::IPRINT, Iterator::iteratedModel, NLPQLPOptimizer::KWA, NLPQLPOptimizer::L, NLPQLPOptimizer::LACTIVE, Model::linear_eq_constraint_coeffs(), Model::linear_eq_constraint_targets(), Model::linear_ineq_constraint_coeffs(), NLPQLPOptimizer::linIneqConMappingIndices, NLPQLPOptimizer::linIneqConMappingMultipliers, NLPQLPOptimizer::linIneqConMappingOffsets, NLPQLPOptimizer::LKWA, Optimizer::localObjectiveRecast, NLPQLPOptimizer::LQL, NLPQLPOptimizer::LWA, NLPQLPOptimizer::MAX_NM, NLPQLPOptimizer::MAXFUN, Iterator::maxFunctionEvals, NLPQLPOptimizer::MAXIT, NLPQLPOptimizer::MMAX, NLPQLPOptimizer::MNN2, NLPQLPOptimizer::MODE, NLPQLPOptimizer::N, NLPQLPOptimizer::NMAX, Model::nonlinear_eq_constraint_targets(), NLPQLPOptimizer::nonlinIneqConMappingIndices, NLPQLPOptimizer::nonlinIneqConMappingMultipliers, NLPQLPOptimizer::nonlinIneqConMappingOffsets, Model::num_nonlinear_eq_constraints(), Model::num_nonlinear_ineq_constraints(), Minimizer::numContinuousVars, NLPQLPOptimizer::numEqConstraints, Minimizer::numFunctions, NLPQLPOptimizer::numNlpqlConstr, Optimizer::numObjectiveFns, Model::primary_response_fn_sense(), ActiveSet::request_value(), ActiveSet::request_values(), NLPQLPOptimizer::STPMIN, NLPQLPOptimizer::TOL_NM, NLPQLPOptimizer::U, NLPQLPOptimizer::WA, and NLPQLPOptimizer::X.