Dakota Reference Manual  Version 6.12

Multilevel uncertainty quantification using function train expansions


Alias: none

Argument(s): none

Child Keywords:

Required/Optional      Dakota Keyword               Dakota Keyword Description

Optional               max_iterations               Number of iterations allowed for optimizers and adaptive UQ methods
Optional               allocation_control           Sample allocation approach for multilevel expansions
Optional               discrepancy_emulation        Formulation for emulation of model discrepancies
Optional               rounding_tolerance           An accuracy tolerance used to guide rounding during rank adaptation
Optional               arithmetic_tolerance         A secondary rounding tolerance used for post-processing
Optional               regression_type              Type of solver for forming function train approximations by regression
Optional               max_solver_iterations        Maximum iterations in determining polynomial coefficients
Optional               max_cross_iterations         Maximum number of iterations for cross-approximation during rank adaptation
Optional               solver_tolerance             Convergence tolerance for the optimizer used during the regression solve
Optional               tensor_grid                  Use sub-sampled tensor-product quadrature points to build a polynomial chaos expansion
Optional               collocation_points_sequence  Sequence of collocation point counts used in a multi-stage expansion
Optional               collocation_ratio            Set the number of points used to build a PCE via regression to be proportional to the number of terms in the expansion
Optional               start_order_sequence         Sequence of start orders used in a multi-stage expansion
Optional               max_order                    Maximum polynomial order of each univariate function within the functional tensor train
Optional               start_rank_sequence          Sequence of start ranks used in a multi-stage expansion
Optional               kick_rank                    The rank increment employed during each iteration of the rank adaptation
Optional               max_rank                     Limits the maximum rank explored during rank adaptation
Optional               adapt_rank                   Activate adaptive procedure for determining the best rank representation
Optional               samples_on_emulator          Number of samples at which to evaluate an emulator (surrogate)
Optional               sample_type                  Selection of sampling strategy
Optional               rng                          Selection of a random number generator
Optional               probability_refinement       Allow refinement of probability and generalized reliability results using importance sampling
Optional               final_moments                Output moments of the specified type and include them within the set of final statistics
Optional               response_levels              Values at which to estimate desired statistics for each response
Optional               probability_levels           Specify probability levels at which to estimate the corresponding response value
Optional               reliability_levels           Specify reliability levels at which the response values will be estimated
Optional               gen_reliability_levels       Specify generalized reliability levels at which to estimate the corresponding response value
Optional               distribution                 Selection of cumulative or complementary cumulative functions
Optional               variance_based_decomp        Activates global sensitivity analysis based on decomposition of response variance into main, interaction, and total effects
Optional (Choose One)  Covariance Type (Group 1):
                       diagonal_covariance          Display only the diagonal terms of the covariance matrix
                       full_covariance              Display the full covariance matrix
Optional               convergence_tolerance        Stopping criterion based on objective function or statistics convergence
Optional               import_approx_points_file    Filename for points at which to evaluate the PCE/SC surrogate
Optional               export_approx_points_file    Output file for evaluations of a surrogate model
Optional               seed_sequence                Sequence of seed values for multi-stage random sampling
Optional               fixed_seed                   Reuses the same seed value for multiple random sampling sets
Optional               model_pointer                Identifier for the model block to be used by a method


As described in the function_train method and the function_train model, the function train (FT) approximation is a polynomial expansion that exploits low-rank structure within the mapping from input random variables to output quantities of interest (QoI). For multilevel and multifidelity function train approximations, this expansion is decomposed into several constituent expansions, one per model form or solution control level: independent function train approximations are constructed for the low-fidelity/coarse-resolution model and for one or more levels of model discrepancy.

In a three-model case with low-fidelity (L), medium-fidelity (M), and high-fidelity (H) models and an additive discrepancy approach, we can denote this as:

\[ Q^H \approx \hat{Q}_{r_L}^L + \hat{\Delta}_{r_{ML}}^{ML} + \hat{\Delta}_{r_{HM}}^{HM} \]

where $\hat{\Delta}^{ij}$ denotes a discrepancy expansion computed from $Q^i - Q^j$. Reduced-rank representations of these discrepancies may be targeted ($r_{HM} < r_{ML} < r_L$).
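The telescoping structure of this sum can be illustrated with a small numerical sketch. The example below is not Dakota code: it uses hypothetical analytic stand-ins for the L/M/H models and ordinary polynomial least-squares fits in place of function train expansions, but it shows the same idea of fitting one surrogate to the base model and one to each discrepancy, with more samples allocated to the cheaper levels.

```python
import numpy as np
from numpy.polynomial import Polynomial

# Hypothetical model hierarchy (stand-ins, not Dakota models): each
# fidelity adds a small correction to the one below it.
def q_low(x):  return np.sin(x)
def q_med(x):  return np.sin(x) + 0.10 * x**2
def q_high(x): return np.sin(x) + 0.10 * x**2 + 0.01 * x**3

# Training samples per level: many cheap low-fidelity runs, few
# expensive high-fidelity runs, mimicking a multilevel allocation.
x_L = np.linspace(-1.0, 1.0, 50)
x_M = np.linspace(-1.0, 1.0, 20)
x_H = np.linspace(-1.0, 1.0, 8)

# One surrogate per term of the telescoping sum
#   Q^H ~ Qhat^L + Dhat^{ML} + Dhat^{HM}
# (polynomials stand in for the FT expansions; the discrepancy
# surrogates can use lower fidelity, i.e. lower degree/rank).
s_L  = Polynomial.fit(x_L, q_low(x_L), deg=7)
s_ML = Polynomial.fit(x_M, q_med(x_M) - q_low(x_M), deg=2)
s_HM = Polynomial.fit(x_H, q_high(x_H) - q_med(x_H), deg=3)

def q_high_hat(x):
    """Multilevel surrogate: base expansion plus both discrepancies."""
    return s_L(x) + s_ML(x) + s_HM(x)

x_test = np.linspace(-1.0, 1.0, 101)
err = np.max(np.abs(q_high_hat(x_test) - q_high(x_test)))
print(f"max abs error of telescoped surrogate: {err:.2e}")
```

Because the discrepancies here are exactly low-order polynomials, nearly all of the residual error comes from the degree-7 fit of the smooth base model, which is the intended situation: most of the approximation effort is spent on the cheap level.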

In multilevel approaches, sample allocation for the constituent expansions is performed as described in allocation_control.
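A minimal input fragment using a subset of the child keywords above might look as follows. This is an illustrative sketch: the method keyword name multilevel_function_train is assumed from this page's title, the numeric values are placeholders, and 'HIERARCH' is a hypothetical model identifier.

```
method
  multilevel_function_train          # assumed keyword name for this page
    model_pointer = 'HIERARCH'       # hypothetical hierarchical model block
    collocation_points_sequence = 200 100 50   # per-level build points
    start_rank_sequence = 2 2 2
    adapt_rank
      kick_rank = 1
      max_rank = 10
    rounding_tolerance = 1.e-8
    seed_sequence = 1234
```

The sequence-valued keywords (collocation_points_sequence, start_rank_sequence, seed_sequence) supply one entry per model form or solution level, ordered from the coarsest level to the finest.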

Expected HDF5 Output

If Dakota was built with HDF5 support and run with the hdf5 keyword, this method writes the following results to HDF5:

In addition, the execution group has the attribute equiv_hf_evals, which records the equivalent number of high-fidelity evaluations.

See Also

These keywords may also be of interest: