Dakota Reference Manual
Version 6.2
Large-Scale Engineering Optimization and Uncertainty Analysis
Artificial neural network model
Alias: none
Argument(s): none
Required/Optional | Description of Group | Dakota Keyword | Dakota Keyword Description
---|---|---|---
Optional | | max_nodes | Maximum number of hidden layer nodes
Optional | | range | Range for neural network random weights
Optional | | random_weight | (Inactive) Random weight control
Optional | | export_model_file | Export surrogate to Surfpack model file
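To make the role of these optional keywords concrete, the sketch below shows how they might appear inside a surrogate model block of a Dakota input file. This is an illustrative fragment rather than an example from this manual: the model id, the keyword values, the exported file name, and the dace_method_pointer reference to a separately defined sampling method are all placeholders.

```
model
  id_model = 'ANN_SURROGATE'              # hypothetical model id
  surrogate global
    neural_network
      max_nodes = 10                      # cap on the number of hidden layer nodes
      range = 2.0                         # range for the random input layer weights
      export_model_file = 'ann_model.sps' # write the trained Surfpack model to a file
    dace_method_pointer = 'TRAINING_DATA' # sampling method that generates the training points
```

In a complete study this model block would be referenced from a method block and accompanied by variables, interface, and responses blocks, which are omitted here.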
Dakota's artificial neural network surrogate is a stochastic layered perceptron network with a single hidden layer. Weights for the input layer are chosen randomly, while those in the hidden layer are estimated from data using a variant of the Zimmerman direct training approach [92].
This typically yields a lower training cost than traditional neural networks while retaining good out-of-sample performance. This is helpful in surrogate-based optimization and optimization under uncertainty, where multiple surrogates may be constructed repeatedly during the optimization process, e.g., a surrogate for each response function and a new surrogate for each optimization iteration.
The neural network is a nonparametric surface fitting method. Thus, along with Kriging (Gaussian process) and MARS, it can be used to model data trends that have slope discontinuities as well as multiple maxima and minima. However, unlike Kriging, the neural network surrogate is not guaranteed to interpolate the data from which it was constructed.
This surrogate can be constructed from fewer than $n_{c_{quad}}$ data points (where $n_{c_{quad}}$ is the number of coefficients of a quadratic polynomial in $n$ dimensions); however, it is a good rule of thumb to use at least $n_{c_{quad}}$ data points when possible.
The form of the neural network model is

$$ \hat{f}(\mathbf{x}) \approx \tanh\left( \tanh\left( \mathbf{x} A_0 + \theta_0 \right) A_1 + \theta_1 \right), $$

where $\mathbf{x}$ is the evaluation point in $n$-dimensional parameter space; the terms $A_0$ and $\theta_0$ are the random input layer weight matrix and bias vector, respectively; and $A_1$ and $\theta_1$ are a weight vector and bias scalar, respectively, estimated from training data. These coefficients are analogous to the polynomial coefficients obtained from regression to training data. The neural network uses a cross-validation-based orthogonal matching pursuit solver to determine the optimal number of nodes and to solve for the weights and offsets.
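As an illustrative reading of the model form above, assume the evaluation point $\mathbf{x}$ is treated as a row vector with $n$ entries and write $m$ for the number of hidden layer nodes (which max_nodes bounds from above); the quantities then have the shapes

$$ \mathbf{x} \in \mathbb{R}^{1 \times n}, \qquad A_0 \in \mathbb{R}^{n \times m}, \qquad \theta_0 \in \mathbb{R}^{1 \times m}, \qquad A_1 \in \mathbb{R}^{m \times 1}, \qquad \theta_1 \in \mathbb{R}. $$

Under this reading, $A_0$ and $\theta_0$ are the randomly chosen input layer terms (with range controlling their spread), while the orthogonal matching pursuit solver selects $m$ and the entries of $A_1$ and $\theta_1$.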