Dakota Reference Manual Version 6.2
Large-Scale Engineering Optimization and Uncertainty Analysis
neural_network


Artificial neural network model

Specification

Alias: none

Argument(s): none

Required/Optional   Dakota Keyword      Dakota Keyword Description
Optional            max_nodes           Maximum number of hidden layer nodes
Optional            range               Range for neural network random weights
Optional            random_weight       (Inactive) Random weight control
Optional            export_model_file   Export surrogate to Surfpack model file
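As a sketch of how these keywords fit into an input file, the fragment below shows a surrogate model block using neural_network; the method id 'DACE', the node count, and the file name are illustrative assumptions, not documented defaults:

    model
      surrogate global
        dace_method_pointer = 'DACE'      # assumed id of a sampling method providing training data
        neural_network
          max_nodes = 10                  # illustrative cap on hidden-layer nodes
          export_model_file = 'ann.sps'   # illustrative Surfpack model file name

The method, variables, interface, and responses blocks of a complete study are omitted here.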

Description

Dakota's artificial neural network surrogate is a stochastic layered perceptron network with a single hidden layer. Weights for the input layer are chosen randomly, while those in the hidden layer are estimated from data using a variant of the Zimmerman direct training approach [92].

This typically yields a lower training cost than traditional neural networks while retaining good out-of-sample performance. The low cost is helpful in surrogate-based optimization and optimization under uncertainty, where multiple surrogates may be constructed repeatedly during the optimization process, e.g., one surrogate per response function and a new surrogate for each optimization iteration.

The neural network is a nonparametric surface-fitting method. Thus, along with Kriging (Gaussian process) and MARS, it can be used to model data trends that have slope discontinuities as well as multiple maxima and minima. However, unlike Kriging, the neural network surrogate is not guaranteed to interpolate the data from which it was constructed.

This surrogate can be constructed from fewer than $n_{c_{quad}}$ data points; however, it is a good rule of thumb to use at least $n_{c_{quad}}$ data points when possible.
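Here $n_{c_{quad}}$ denotes the number of coefficients in a full quadratic polynomial in the $n$ input variables; assuming that standard definition,

\[ n_{c_{quad}} = \frac{(n+1)(n+2)}{2}, \]

so, for example, a problem with $n = 4$ variables would call for at least $15$ training points under this rule of thumb.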

Theory

The form of the neural network model is

\[ \hat{f}(\mathbf{x}) \approx \tanh\left( \mathbf{A}_{1} \tanh\left( \mathbf{A}_{0}^{T} \mathbf{x} + \theta_{0} \right) + \theta_{1} \right) \]

where $\mathbf{x}$ is the evaluation point in $n$-dimensional parameter space; $\mathbf{A}_{0}$ and $\theta_{0}$ are the random input-layer weight matrix and bias vector, respectively; and $\mathbf{A}_{1}$ and $\theta_{1}$ are a weight vector and bias scalar, respectively, estimated from the training data. These coefficients are analogous to the polynomial coefficients obtained by regression on training data. The neural network uses a cross-validation-based orthogonal matching pursuit solver to determine the optimal number of hidden nodes and to solve for the weights and offsets.
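To make the dimensions explicit (with $m$ denoting the number of hidden nodes, an illustrative symbol not used in the specification above), one consistent reading of the model form is

\[ \mathbf{A}_{0} \in \mathbb{R}^{n \times m}, \quad \theta_{0} \in \mathbb{R}^{m}, \quad \mathbf{A}_{1} \in \mathbb{R}^{1 \times m}, \quad \theta_{1} \in \mathbb{R}, \]

so the inner $\tanh$ acts componentwise on the $m$-vector $\mathbf{A}_{0}^{T} \mathbf{x} + \theta_{0}$ and the outer $\tanh$ acts on the resulting scalar.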