Dakota Reference Manual
Version 6.2
Large-Scale Engineering Optimization and Uncertainty Analysis

Uncertainty quantification with stochastic collocation
Alias: nond_stoch_collocation
Argument(s): none
Required/Optional -- Description of Group -- Dakota Keyword -- Dakota Keyword Description

Optional (Choose One) -- Automated refinement type (Group 1)
    p_refinement        Automatic polynomial order refinement
    h_refinement        Employ h-refinement to refine the grid

Optional (Choose One) -- Basis polynomial family (Group 2)
    piecewise           Use piecewise local basis functions
    askey               Select the standardized random variables (and associated basis polynomials) from the Askey family that best match the user-specified random variables
    wiener              Use standard normal random variables (along with Hermite orthogonal basis polynomials) when transforming to a standardized probability space

Required (Choose One) -- Interpolation grid type (Group 3)
    quadrature_order    Cubature using tensor products of Gaussian quadrature rules
    sparse_grid_level   Set the sparse grid level to be used when performing sparse grid integration or sparse grid interpolation

Optional
    dimension_preference    A set of weights specifying the relative importance of each uncertain variable (dimension)
    use_derivatives         Use derivative data to construct surrogate models

Optional (Choose One) -- Nesting of quadrature rules (Group 4)
    nested              Enforce use of nested quadrature rules if available
    non_nested          Enforce use of non-nested quadrature rules

Optional
    variance_based_decomp   Activates global sensitivity analysis based on decomposition of response variance into main, interaction, and total effects

Optional (Choose One) -- Covariance type (Group 5)
    diagonal_covariance     Display only the diagonal terms of the covariance matrix
    full_covariance         Display the full covariance matrix

Optional
    sample_type             Selection of sampling strategy
    probability_refinement  Allow refinement of probability and generalized reliability results using importance sampling
    export_points_file      Output file for evaluations of a surrogate model
    fixed_seed              Reuses the same seed value for multiple random sampling sets
    reliability_levels      Specify reliability levels at which the response values will be estimated
    response_levels         Values at which to estimate desired statistics for each response
    distribution            Selection of cumulative or complementary cumulative functions
    probability_levels      Specify probability levels at which to estimate the corresponding response value
    gen_reliability_levels  Specify generalized reliability levels at which to estimate the corresponding response value
    rng                     Selection of a random number generator
    samples                 Number of samples for sampling-based methods
    seed                    Seed of the random number generator
    model_pointer           Identifier for model block to be used by a method
Stochastic collocation is a general framework for approximate representation of random response functions in terms of finite-dimensional interpolation bases.
The stochastic collocation (SC) method is very similar to polynomial_chaos, with the key difference that the orthogonal polynomial basis functions are replaced with interpolation polynomial bases. The interpolation polynomials may be either local or global and either value-based or gradient-enhanced. In the local case, value-based interpolants are piecewise linear splines and gradient-enhanced interpolants are piecewise cubic splines; in the global case, value-based interpolants are Lagrange interpolants and gradient-enhanced interpolants are Hermite interpolants. A value-based expansion takes the form

R \approx \sum_{j=1}^{N_p} r_j L_j(\xi)

where N_p is the total number of collocation points, r_j is the response value at the j-th collocation point, L_j(\xi) is the j-th multidimensional interpolation polynomial, and \xi is a vector of standardized random variables.
Thus, in PCE, one forms coefficients for known orthogonal polynomial basis functions, whereas SC forms multidimensional interpolation functions for known coefficients.
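This value-based construction can be sketched with a minimal 1D example (illustrative Python, not Dakota code; the helper names are hypothetical): a Lagrange interpolant built on Gauss-Legendre collocation points reproduces the response values exactly at those points and interpolates elsewhere.

```python
# Sketch (assumed, not Dakota source): 1D value-based SC expansion
# R(xi) ~= sum_j r_j * L_j(xi) using Gauss-Legendre collocation points.
import numpy as np

def lagrange_basis(nodes, j, xi):
    """L_j(xi): equals 1 at nodes[j] and 0 at every other node."""
    L = 1.0
    for k, xk in enumerate(nodes):
        if k != j:
            L *= (xi - xk) / (nodes[j] - xk)
    return L

def sc_expansion(f, order):
    """Build collocation points and the value-based interpolant of f."""
    nodes, _ = np.polynomial.legendre.leggauss(order)  # collocation points
    r = f(nodes)                                       # response values r_j
    approx = lambda xi: sum(r[j] * lagrange_basis(nodes, j, xi)
                            for j in range(len(nodes)))
    return nodes, approx

nodes, approx = sc_expansion(np.exp, 5)
# the interpolant reproduces the response at each collocation point
assert all(abs(approx(x) - np.exp(x)) < 1e-10 for x in nodes)
```

Between collocation points the interpolant is only approximate, but for this smooth response the 5-point global basis is already accurate to a few parts per thousand on [-1, 1].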
Basis polynomial family (Group 2)
In addition to the askey and wiener basis types also supported by polynomial_chaos, SC supports the option of piecewise local basis functions. These are piecewise linear splines, or in the case of gradient-enhanced interpolation via the use_derivatives specification, piecewise cubic Hermite splines. Both of these basis options provide local support only over the range from the interpolated point to its nearest 1D neighbors (within a tensor grid or within each of the tensor grids underlying a sparse grid), which exchanges the fast convergence of global bases for smooth functions for robustness in the representation of nonsmooth response functions (which can induce Gibbs oscillations when using high-order global basis functions). When local basis functions are used, the use of nonequidistant collocation points (e.g., the Gauss point selections described above) is not well motivated, so equidistant Newton-Cotes points are employed in this case, and all random variable types are transformed to standard uniform probability space. The global gradient-enhanced interpolants (Hermite interpolation polynomials) are also restricted to uniform or transformed uniform random variables (due to the need to compute collocation weights by integration of the basis polynomials) and share the variable support shown in variable_support for Piecewise SE. Due to numerical instability in these high-order basis polynomials, they are deactivated by default but can be activated by developers using a compile-time switch.
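The local-support behavior can be illustrated with a small sketch (assumed Python, not part of Dakota): piecewise linear "hat" interpolation on equidistant Newton-Cotes points reproduces a non-smooth response at the nodes and stays bounded between them, with no Gibbs-type overshoot.

```python
# Sketch (assumed): piecewise-linear interpolation on equidistant
# Newton-Cotes points for the non-smooth response |xi|.
import numpy as np

f = lambda x: np.abs(x)
nodes = np.linspace(-1.0, 1.0, 9)    # equidistant Newton-Cotes points
vals = f(nodes)

xi = np.linspace(-1.0, 1.0, 1001)
local = np.interp(xi, nodes, vals)   # piecewise linear spline interpolant

# local basis: data reproduced at the nodes, no overshoot anywhere
assert np.allclose(np.interp(nodes, nodes, vals), vals)
assert local.max() <= vals.max() + 1e-12
```

A high-order global polynomial through the same equidistant data would oscillate near the kink; the hat-function basis trades fast smooth-case convergence for exactly this robustness.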
Interpolation grid type (Group 3)
To form the multidimensional interpolants of the expansion, two options are provided:

1. quadrature_order (and, optionally, dimension_preference for anisotropic tensor grids). As for PCE, non-nested Gauss rules are employed by default, although the presence of p_refinement or h_refinement will result in default usage of nested rules for normal or uniform variables after any variable transformations have been applied (both defaults can be overridden using explicit nested or non_nested specifications).

2. sparse_grid_level (and, optionally, dimension_preference for anisotropic sparse grids) defined from Gaussian rules. As for sparse PCE, nested rules are employed unless overridden with the non_nested option, and the growth rules are restricted unless overridden by the unrestricted keyword.

Another distinguishing characteristic of stochastic collocation relative to polynomial_chaos is the ability to reformulate the interpolation problem from a nodal interpolation approach into a hierarchical formulation in which each new level of interpolation defines a set of incremental refinements (known as hierarchical surpluses) layered on top of the interpolants from previous levels. This formulation lends itself naturally to uniform or adaptive refinement strategies, since the hierarchical surpluses can be interpreted as error estimates for the interpolant. Either global or local/piecewise interpolants in either value-based or gradient-enhanced approaches can be formulated using hierarchical interpolation. The primary restriction for the hierarchical case is that it currently requires a sparse grid approach using nested quadrature rules (Genz-Keister, Gauss-Patterson, or Newton-Cotes for standard normals and standard uniforms in a transformed space: Askey, Wiener, or Piecewise settings may be required), although this restriction may be relaxed in the future. A selection of hierarchical interpolation will provide greater precision in the increments to mean, standard deviation, covariance, and reliability-based level mappings induced by a grid change within uniform or goal-oriented adaptive refinement approaches (see the following section).
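The hierarchical-surplus idea can be sketched in 1D (illustrative Python; the function names are hypothetical, not Dakota internals): each level adds nested points, and the surplus at a new point is the gap between the true response and the previous level's interpolant. Because the surpluses decay as the interpolant converges, they double as error indicators for adaptive refinement.

```python
# Sketch (assumed): 1D hierarchical piecewise-linear interpolation.
# The "hierarchical surplus" at each newly added nested point is the
# difference between the response and the previous level's interpolant.
import numpy as np

f = lambda x: np.sin(np.pi * x)

def level_points(l):
    # nested equidistant grids: level l+1 contains all points of level l
    return np.linspace(0.0, 1.0, 2**l + 1)

max_surplus = []
for l in range(1, 6):
    old = level_points(l - 1)
    new = np.setdiff1d(level_points(l), old)   # points added at this level
    surplus = f(new) - np.interp(new, old, f(old))
    max_surplus.append(np.abs(surplus).max())

# surpluses decay with level, so they serve as interpolation error estimates
assert all(a > b for a, b in zip(max_surplus, max_surplus[1:]))
```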
It is important to note that, while quadrature_order and sparse_grid_level are array inputs, only one scalar from these arrays is active at a time for a particular expansion estimation. These scalars can be augmented with a dimension_preference to support anisotropy across the random dimension set. The array inputs are present to support advanced use cases such as multifidelity UQ, where multiple grid resolutions can be employed.
Automated refinement type (Group 1)
Automated expansion refinement can be selected as either p_refinement or h_refinement, and either refinement specification can be either uniform or dimension_adaptive. The dimension_adaptive case can be further specified as either sobol or generalized (decay is not supported). Each of these automated refinement approaches makes use of the max_iterations and convergence_tolerance iteration controls. The h_refinement specification involves use of the same piecewise interpolants (linear or cubic Hermite splines) described above for the piecewise specification option (it is not necessary to redundantly specify piecewise in the case of h_refinement). In future releases, the hierarchical interpolation approach will enable local refinement in addition to the current uniform and dimension_adaptive options.
Covariance type (Group 5)
These two keywords are used to specify how this method computes, stores, and outputs the covariance of the responses. In particular, the diagonal covariance option is provided for reducing post-processing overhead and output volume in high-dimensional applications.
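The storage trade-off is simple arithmetic (an illustrative Python sketch, not a Dakota formula): a symmetric covariance matrix for n response functions holds n*(n+1)/2 unique entries, while the diagonal option keeps only the n variances.

```python
# Sketch: unique entries stored for full vs. diagonal covariance of
# n response functions (symmetric matrix => n*(n+1)/2 unique terms).
for n in (10, 100, 1000):
    full = n * (n + 1) // 2   # unique entries of the full covariance matrix
    diag = n                  # variances only
    print(n, full, diag)
```

At n = 1000 responses, the full option carries 500500 unique terms versus 1000 for the diagonal option, which is why the diagonal form is preferred for high-dimensional output.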
Active Variables
The default behavior is to form expansions over aleatory uncertain continuous variables. To form expansions over a broader set of variables, one needs to specify active followed by state, epistemic, design, or all in the variables specification block.

For continuous design, continuous state, and continuous epistemic uncertain variables included in the expansion, interpolation points for these dimensions are based on Gauss-Legendre rules if non-nested, Gauss-Patterson rules if nested, and Newton-Cotes points in the case of piecewise bases. Again, when probability integrals are evaluated, only the aleatory random variable domain is integrated, leaving behind a polynomial relationship between the statistics and the remaining design/state/epistemic variables.
Optional Keywords regarding method outputs
Each of these sampling specifications refers to sampling on the SC approximation for the purposes of generating approximate statistics:
sample_type
samples
seed
fixed_seed
rng
probability_refinement
distribution
reliability_levels
response_levels
probability_levels
gen_reliability_levels
Since SC approximations are formed on structured grids, there should be no ambiguity with simulation sampling for generating the SC expansion.
When using the probability_refinement control, the number of refinement samples is not under the user's control (these evaluations are approximation-based, so management of this expense is less critical). This option allows for refinement of probability and generalized reliability results using importance sampling.
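The importance-sampling idea behind this refinement can be sketched on a trivial stand-in for the approximation (illustrative Python; the identity "surrogate" and the proposal density here are assumptions for the demo, not Dakota's actual sampler): drawing from a proposal centered near the response level of interest and reweighting by the density ratio sharpens a small tail probability estimate relative to plain Monte Carlo.

```python
# Sketch (assumed): importance-sampling estimate of a tail probability
# P(g(xi) > z) on a cheap approximation, here the trivial g(xi) = xi
# with xi ~ N(0, 1).
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(12347)
z = 3.0                                  # response level of interest

# proposal density: normal centered at the level z (the "failure" region)
xi = rng.normal(loc=z, scale=1.0, size=200_000)
# weight = target density N(0,1) / proposal density N(z,1)
weights = np.exp(-0.5 * xi**2 + 0.5 * (xi - z)**2)
p_is = float(np.mean((xi > z) * weights))

# exact standard-normal tail probability at z for comparison
p_exact = 0.5 * (1.0 - erf(z / sqrt(2.0)))
assert abs(p_is - p_exact) / p_exact < 0.05
```

Because the samples land where the rare event happens, the estimator's relative error is far smaller than plain Monte Carlo would achieve for the same count; this is why refinement samples on the inexpensive approximation need not be user-managed.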
Multifidelity UQ
When using multifidelity UQ, the high-fidelity expansion generated from combining the low-fidelity and discrepancy expansions retains the polynomial form of the low-fidelity expansion (only the coefficients are updated). Refer to polynomial_chaos for information on the multifidelity interpretation of array inputs for quadrature_order and sparse_grid_level.
Usage Tips
If n is small, then tensor-product Gaussian quadrature is again the preferred choice. For larger n, tensor-product quadrature quickly becomes too expensive and the sparse grid approach is preferred. For self-consistency in growth rates, nested rules employ restricted exponential growth (with the exception of the dimension_adaptive p_refinement generalized case) for consistency with the linear growth used for non-nested Gauss rules (integrand precision 2l+1 for sparse grid level l and 2m-1 for tensor grid order m).
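The cost argument can be made concrete with a quick sketch (illustrative Python; the sparse grid estimate in the comment is the standard Smolyak asymptotic, not a Dakota-specific formula): a tensor grid with m points per dimension requires m**n evaluations, which explodes as the number of random dimensions n grows.

```python
# Sketch: tensor-product grid cost m**n versus dimension count n.
# A level-l Smolyak sparse grid grows far more slowly, roughly
# O(2**l * n**l / l!) points (standard asymptotic estimate).
m = 5                       # 1D quadrature order
for n in (2, 5, 10, 20):    # number of random dimensions
    print(n, m ** n)        # tensor-grid evaluation count
```

At m = 5 the tensor grid already needs nearly ten million model evaluations for n = 10, which is why sparse grids are preferred beyond a handful of dimensions.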
Additional Resources
Dakota provides access to SC methods through the NonDStochCollocation class. Refer to the Uncertainty Quantification Capabilities chapter of the Users Manual[5] and the Stochastic Expansion Methods chapter of the Theory Manual[4] for additional information on the SC algorithm.
Examples

method,
  stoch_collocation
    sparse_grid_level = 2
    samples = 10000 seed = 12347 rng rnum2
    response_levels = .1 1. 50. 100. 500. 1000.
    variance_based_decomp
Theory

As mentioned above, a value-based expansion takes the form

R \approx \sum_{j=1}^{N_p} r_j L_j(\xi)

The interpolation polynomial L_j assumes the value of 1 at the j-th collocation point and 0 at all other collocation points, involving either a global Lagrange polynomial basis or local piecewise splines. It is easy to see that the approximation reproduces the response values at the collocation points and interpolates between these values at other points. A gradient-enhanced expansion (selected via the use_derivatives keyword) involves both type 1 and type 2 basis functions as follows:

R \approx \sum_{j=1}^{N_p} \left[ r_j H_j^{(1)}(\xi) + \sum_{k=1}^{n} \frac{dr_j}{d\xi_k} H_{jk}^{(2)}(\xi) \right]

where the type 1 interpolant H_j^{(1)} produces 1 for the value at the j-th collocation point, 0 for values at all other collocation points, and 0 for derivatives (when differentiated) at all collocation points, and the type 2 interpolant H_{jk}^{(2)} produces 0 for values at all collocation points, 1 for the k-th derivative component at the j-th collocation point, and 0 for that derivative component at all other collocation points. Again, this expansion reproduces the response values at each of the collocation points, and when differentiated, also reproduces each component of the gradient at each of the collocation points. Since this technique includes the derivative interpolation explicitly, it eliminates issues with matrix ill-conditioning that can occur in the gradient-enhanced PCE approach based on regression. However, the calculation of high-order global polynomials with the desired interpolation properties can be similarly numerically challenging, such that the use of local cubic splines is recommended due to numerical stability.
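The type 1 / type 2 interpolation properties can be verified in 1D with the standard cubic Hermite basis on a single interval (illustrative Python, not Dakota code; the function names are hypothetical): the type 1 bases carry the endpoint values and the type 2 bases carry the endpoint derivatives, so the interpolant reproduces both at the collocation points.

```python
# Sketch (assumed): gradient-enhanced (cubic Hermite) interpolation on [0, 1]
# from endpoint values and derivatives, using the standard Hermite basis.
import numpy as np

def hermite_interp(t, y0, y1, d0, d1):
    """Cubic Hermite interpolant on [0, 1]."""
    h00 = 2*t**3 - 3*t**2 + 1   # type 1: carries the value at t = 0
    h10 = t**3 - 2*t**2 + t     # type 2: carries the derivative at t = 0
    h01 = -2*t**3 + 3*t**2      # type 1: carries the value at t = 1
    h11 = t**3 - t**2           # type 2: carries the derivative at t = 1
    return y0*h00 + d0*h10 + y1*h01 + d1*h11

f, df = np.exp, np.exp          # response and its derivative
p = lambda t: hermite_interp(t, f(0.0), f(1.0), df(0.0), df(1.0))

# values reproduced at both collocation points ...
assert abs(p(0.0) - f(0.0)) < 1e-12 and abs(p(1.0) - f(1.0)) < 1e-12
# ... and the derivative reproduced too (finite-difference check at t = 0)
eps = 1e-6
assert abs((p(eps) - p(0.0)) / eps - df(0.0)) < 1e-4
```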
These keywords may also be of interest: