Dakota Reference Manual  Version 6.12
bayes_calibration


Bayesian calibration

Topics

This keyword is related to the topics:

Specification

Alias: nond_bayes_calibration

Argument(s): none

Child Keywords:

Required (Choose One): Bayesian Calibration Method (Group 1)

  queso     Markov Chain Monte Carlo algorithms from the QUESO package
  gpmsa     (Experimental) Gaussian Process Models for Simulation Analysis (GPMSA) Bayesian calibration
  wasabi    (Experimental) Non-MCMC Bayesian inference using interval analysis
  dream     DREAM (DiffeRential Evolution Adaptive Metropolis)
  muq       Markov Chain Monte Carlo algorithms from the MUQ package

Optional:

  experimental_design          (Experimental) Adaptively select experimental designs for iterative Bayesian updating
  calibrate_error_multipliers  Calibrate hyper-parameter multipliers on the observation error covariance
  burn_in_samples              Manually specify the burn-in period for the MCMC chain
  posterior_stats              Compute information-theoretic metrics on the posterior parameter distribution
  chain_diagnostics            Compute diagnostic metrics for the Markov chain
  model_evidence               Calculate model evidence (marginal likelihood of the model) when using Bayesian methods
  model_discrepancy            (Experimental) Post-calibration calculation of a model discrepancy correction
  sub_sampling_period          Specify a sub-sampling of the MCMC chain
  probability_levels           Specify probability levels at which to compute credible and prediction intervals
  convergence_tolerance        Stopping criterion based on objective function or statistics convergence
  max_iterations               Number of iterations allowed for optimizers and adaptive UQ methods
  model_pointer                Identifier for the model block to be used by a method
  scaling                      Turn on scaling for variables, responses, and constraints

Description

Bayesian calibration methods take prior information on parameter values (in the form of prior distributions) and observational data (e.g., from experiments) and infer posterior distributions on the parameter values. When the computational simulation is then executed with samples from the posterior parameter distributions, the results it produces are consistent with ("agree with") the experimental data. Calibrating the parameters of a computational simulation model requires a likelihood function that specifies the likelihood of observing a particular data value given the model and its associated parameterization; Dakota assumes a Gaussian likelihood function. The algorithms that produce the posterior distributions on model parameters are most commonly Markov Chain Monte Carlo (MCMC) sampling algorithms. MCMC methods require many samples, often tens of thousands, so for model calibration an emulator of the computational simulation is often used in place of the simulation itself. For more details on the algorithms underlying these methods, see the Dakota User's Manual.
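As a sketch of what the Gaussian likelihood assumption means (generic notation, not Dakota-specific symbols): for observed data values d_i, model responses q_i(theta) at parameter values theta, and observation error standard deviations sigma_i, the likelihood has the form

    L(\theta; d) \;\propto\; \exp\!\left( -\frac{1}{2} \sum_{i=1}^{n} \frac{\left( d_i - q_i(\theta) \right)^2}{\sigma_i^{2}} \right)

This sketch assumes independent scalar observations with known variances; more generally, the sum becomes a quadratic form involving the full observation error covariance (cf. the calibrate_error_multipliers keyword above).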

Dakota has four main classes of Bayesian calibration methods, described below: QUESO/DRAM, GPMSA, DREAM, and WASABI. (The muq keyword additionally provides MCMC algorithms from the MUQ package; see the child keyword table above.)

  1. The QUESO methods use components from the QUESO library (Quantification of Uncertainty for Estimation, Simulation, and Optimization) developed at The University of Texas at Austin. Dakota uses its DRAM (Delayed Rejection Adaptive Metropolis) algorithm, and variants of it, for the MCMC sampling; a minimal input sketch is given after this list.

  2. GPMSA (Gaussian Process Models for Simulation Analysis) is an approach developed at Los Alamos National Laboratory; Dakota currently uses the QUESO implementation of it. GPMSA constructs Gaussian process models to emulate both the expensive computational simulation and the model discrepancy, and it has extensive calibration features, such as the capability to include a model discrepancy term and to handle functional data such as time series. This is an experimental capability, and not all features are available in Dakota yet.

  3. DREAM (DiffeRential Evolution Adaptive Metropolis) runs multiple chains simultaneously for global exploration and automatically tunes the proposal covariance during the process via self-adaptive randomized subspace sampling [88].

  4. WASABI: Non-MCMC Bayesian inference via interval analysis
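For illustration only, a minimal method block selecting QUESO with its DRAM sampler might look like the sketch below. The numeric values are arbitrary placeholders; chain_samples and seed are child keywords of queso rather than of bayes_calibration itself, and burn_in_samples and probability_levels are optional keywords from the table above. Consult the queso keyword page for the authoritative specification.

    method
      bayes_calibration queso
        dram                             # delayed rejection adaptive Metropolis MCMC variant
        chain_samples = 10000            # length of the MCMC chain (placeholder value)
        seed = 348                       # fix the random seed for reproducibility
        burn_in_samples = 1000           # optional: discard the early portion of the chain
        probability_levels = 0.05 0.95   # optional: levels for credible and prediction intervals

A complete study would pair this with a variables block defining the prior distributions (e.g., uniform_uncertain variables) and a responses block supplying calibration_terms and the experimental data.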

Usage Tips

The Bayesian capabilities are under active development. At this stage, the QUESO methods in Dakota are the most advanced and robust, followed by DREAM, and then by GPMSA and WASABI, which are not yet ready for production use.

Note that as of Dakota 6.2, field responses and associated field data may be used with QUESO and DREAM. That is, the user can specify field simulation data and field experiment data, and Dakota will interpolate them and provide the proper residuals to the Bayesian calibration.
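As a rough sketch only (the relevant keywords live under the responses specification, not this method page, so check the calibration_terms, field_calibration_terms, and calibration_data keyword pages before use; the field length, descriptor, and experiment count below are hypothetical):

    responses
      calibration_terms = 1
        field_calibration_terms = 1
          lengths = 100                # assumed number of entries in the field response
        calibration_data
          num_experiments = 1
          interpolate                  # request interpolation of simulation field data onto experiment coordinates
        descriptors = 'displacement'   # hypothetical response descriptor
      no_gradients
      no_hessians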