Dakota Reference Manual Version 6.12
multifidelity_polynomial_chaos


Multifidelity uncertainty quantification using polynomial chaos expansions

Specification

Alias: none

Argument(s): none

Child Keywords:

Optional:
    p_refinement                    Automatic polynomial order refinement
    max_refinement_iterations       Maximum number of expansion refinement iterations
    allocation_control              Sample allocation approach for multifidelity expansions
    discrepancy_emulation           Formulation for emulation of model discrepancies

Required (Choose One): Chaos Coefficient Estimation Approach (Group 1)
    quadrature_order_sequence       Sequence of quadrature orders used in a multi-stage expansion
    sparse_grid_level_sequence      Sequence of sparse grid levels used in a multi-stage expansion
    expansion_order_sequence        Sequence of expansion orders used in a multi-stage expansion
    orthogonal_least_interpolation  Build a polynomial chaos expansion from simulation samples using orthogonal least interpolation

Optional (Choose One): Basis Polynomial Family (Group 2)
    askey                           Select the standardized random variables (and associated basis polynomials) from the Askey family that best match the user-specified random variables
    wiener                          Use standard normal random variables (along with Hermite orthogonal basis polynomials) when transforming to a standardized probability space

Optional:
    normalized                      Request output of PCE coefficients that correspond to normalized orthogonal basis polynomials
    export_expansion_file           Export the coefficients and multi-index of a Polynomial Chaos Expansion (PCE) to a file
    samples_on_emulator             Number of samples at which to evaluate an emulator (surrogate)
    sample_type                     Selection of sampling strategy
    rng                             Selection of a random number generator
    probability_refinement          Allow refinement of probability and generalized reliability results using importance sampling
    final_moments                   Output moments of the specified type and include them within the set of final statistics
    response_levels                 Values at which to estimate desired statistics for each response
    probability_levels              Specify probability levels at which to estimate the corresponding response value
    reliability_levels              Specify reliability levels at which the response values will be estimated
    gen_reliability_levels          Specify generalized reliability levels at which to estimate the corresponding response value
    distribution                    Selection of cumulative or complementary cumulative functions
    variance_based_decomp           Activates global sensitivity analysis based on decomposition of response variance into main, interaction, and total effects

Optional (Choose One): Covariance Type (Group 3)
    diagonal_covariance             Display only the diagonal terms of the covariance matrix
    full_covariance                 Display the full covariance matrix

Optional:
    convergence_tolerance           Stopping criterion based on objective function or statistics convergence
    import_approx_points_file       Filename for points at which to evaluate the PCE/SC surrogate
    export_approx_points_file       Output file for evaluations of a surrogate model
    seed_sequence                   Sequence of seed values for multi-stage random sampling
    fixed_seed                      Reuses the same seed value for multiple random sampling sets
    model_pointer                   Identifier for model block to be used by a method

Description

As described in polynomial_chaos, the polynomial chaos expansion (PCE) is a general framework for the approximate representation of random response functions in terms of series expansions in standardized random variables:

\[R = \sum_{i=0}^P \alpha_i \Psi_i(\xi) \]

where $\alpha_i$ is a deterministic coefficient, $\Psi_i$ is a multidimensional orthogonal polynomial and $\xi$ is a vector of standardized random variables.

In the multilevel and multifidelity cases, we decompose this expansion into several constituent expansions, one per model form or solution control level. In a bi-fidelity case with low-fidelity (LF) and high-fidelity (HF) models and an additive discrepancy approach, we have:

\[R = \sum_{i=0}^{P^{LF}} \alpha^{LF}_i \Psi_i(\xi) + \sum_{i=0}^{P^{HF}} \delta_i \Psi_i(\xi) \]

where $\delta_i$ is a coefficient for the discrepancy expansion.

The same specification options are available as described in polynomial_chaos with one key difference: many of the coefficient estimation inputs change from a scalar input for a single expansion to a sequence specification for a low-fidelity expansion followed by multiple discrepancy expansions.

To obtain the coefficients $\alpha_i$ and $\delta_i$ for each of the expansions, the following options are provided:

  1. multidimensional integration by a tensor product of Gaussian quadrature rules (specified with quadrature_order_sequence and, optionally, dimension_preference)
  2. multidimensional integration by the Smolyak sparse grid method (specified with sparse_grid_level_sequence and, optionally, dimension_preference)
  3. multidimensional integration by Latin hypercube sampling (specified with expansion_order_sequence and expansion_samples_sequence)
  4. linear regression (specified with expansion_order_sequence and either collocation_points_sequence or collocation_ratio), using either over-determined (least squares) or under-determined (compressed sensing) approaches
  5. orthogonal least interpolation (specified with orthogonal_least_interpolation and collocation_points_sequence)

It is important to note that, while quadrature_order_sequence, sparse_grid_level_sequence, expansion_order_sequence, expansion_samples_sequence, and collocation_points_sequence are array inputs, only one scalar from these arrays is active at a time for a particular expansion estimation. In order to specify anisotropy in resolution across the random variable set, a dimension_preference specification can be used to augment scalar specifications for quadrature order, sparse grid level, and expansion order.
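For instance, a hedged sketch of a sequence specification augmented with an anisotropic dimension preference might look like the following (the order values and the two-variable preference are purely illustrative, not taken from this page):

method,
    multifidelity_polynomial_chaos
      quadrature_order_sequence = 5 3 2    # first value resolves the low-fidelity expansion;
                                           # subsequent values resolve the discrepancy expansions
      dimension_preference = 2 1           # favor resolution in the first of two random variables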

Multifidelity UQ using PCE requires that the model selected for iteration by the method specification is a multifidelity surrogate model (see hierarchical), which defines an ordered_model_sequence. Two types of hierarchies are supported: (i) a hierarchy of model forms composed of more than one model within the ordered_model_sequence, or (ii) a hierarchy of discretization levels defined by a single model within the ordered_model_sequence that in turn specifies a solution_level_control (see solution_level_control).
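For the second type of hierarchy, a minimal sketch (the identifiers, the solution-control variable name, and the costs below are illustrative, and the associated variables and interface blocks are omitted) might pair the hierarchical surrogate with a single simulation model that exposes its discretization levels:

model,
    id_model = 'HIERARCH'
    surrogate hierarchical
      ordered_model_fidelities = 'SIM'
      correction additive zeroth_order

model,
    id_model = 'SIM'
    single
      solution_level_control = 'mesh_resolution'   # discrete variable selecting the discretization level
      solution_level_cost = 1. 8. 64.              # relative cost of each level, coarsest to finest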

In both cases, an expansion is first formed for the low-fidelity model or coarse discretization, using the first value within the coefficient estimation sequence along with any specified refinement strategy. Expansions are then formed for one or more model discrepancies (the difference between response results for an additive correction, or the ratio of results for a multiplicative correction), using the subsequent values in the coefficient estimation sequence along with any specified refinement strategy; if the sequence does not provide a new value, the previous value is reused. The number of discrepancy expansions is determined by the number of model forms or discretization levels in the hierarchy.
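As a purely illustrative fragment, for a three-level hierarchy the specification

      sparse_grid_level_sequence = 3 2

would resolve the low-fidelity expansion at level 3 and the first discrepancy expansion at level 2, and the second discrepancy expansion would reuse level 2, since the sequence supplies no further value.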

After formation and refinement of the constituent expansions, the expansions are combined (added or multiplied) into a single expansion that approximates the high-fidelity model, from which the final set of statistics is generated. For polynomial chaos expansions, this combined high-fidelity expansion can differ significantly in form from the low-fidelity and discrepancy expansions, particularly in the multiplicative case, where it is expanded to include all of the basis products.
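Schematically, for the bi-fidelity case above, the additive and multiplicative combinations take the forms (the multiplicative form is shown prior to re-expansion in the orthogonal basis):

\[ R \approx \sum_{i=0}^{P^{LF}} \alpha^{LF}_i \Psi_i(\xi) + \sum_{i=0}^{P^{HF}} \delta_i \Psi_i(\xi) \]

\[ R \approx \left( \sum_{i=0}^{P^{LF}} \alpha^{LF}_i \Psi_i(\xi) \right) \left( \sum_{j=0}^{P^{HF}} \delta_j \Psi_j(\xi) \right) \]

Re-expanding the multiplicative product in the orthogonal basis introduces coefficients for all products $\Psi_i \Psi_j$, which is why the combined expansion can be of substantially higher order than its constituents.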

Additional Resources

Dakota provides access to multifidelity PCE methods through the NonDMultilevelPolynomialChaos class. Refer to the Stochastic Expansion Methods chapter of the Theory Manual [4] for additional information on the multifidelity PCE algorithm.

Expected HDF5 Output

If Dakota was built with HDF5 support and run with the hdf5 keyword, this method writes its results to the HDF5 output file. In addition, the execution group has the attribute equiv_hf_evals, which records the equivalent number of high-fidelity evaluations.

Examples

method,
    multifidelity_polynomial_chaos
      model_pointer = 'HIERARCH'
      sparse_grid_level_sequence = 4 3 2

model,
    id_model = 'HIERARCH'
    surrogate hierarchical
      ordered_model_fidelities = 'LF' 'MF' 'HF'
      correction additive zeroth_order
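In this example, the first sequence value (sparse grid level 4) resolves the expansion for the 'LF' model, level 3 resolves the 'LF'/'MF' discrepancy expansion, and level 2 resolves the 'MF'/'HF' discrepancy expansion; the three constituent expansions are then combined additively, per the correction specification, into an expansion that approximates the 'HF' model.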

See Also

These keywords may also be of interest: