Evidence theory with evidence measures computed with global optimization methods
This keyword is related to the topics: evidence_theory
Alias: nond_global_evidence
Argument(s): none
| Required/Optional | Description of Group | Dakota Keyword | Dakota Keyword Description |
|---|---|---|---|
| Optional (Choose One) | Group 1 | sbo | Use the surrogate-based optimization method |
| | | ego | Use the Efficient Global Optimization method |
| | | ea | Use an evolutionary algorithm |
| | | lhs | Use Latin Hypercube Sampling (LHS) to sample variables |
| Optional | | response_levels | Values at which to estimate desired statistics for each response |
| Optional | | distribution | Selection of cumulative or complementary cumulative functions |
| Optional | | probability_levels | Specify probability levels at which to estimate the corresponding response value |
| Optional | | gen_reliability_levels | Specify generalized reliability levels at which to estimate the corresponding response value |
| Optional | | rng | Selection of a random number generator |
| Optional | | samples | Number of samples for sampling-based methods |
| Optional | | seed | Seed of the random number generator |
| Optional | | model_pointer | Identifier for the model block to be used by a method |
global_evidence allows the user to specify several global approaches for calculating the belief and plausibility functions:

- lhs - note: this takes the minimum and maximum of the samples as the bounds per "interval cell combination."
- ego - uses Efficient Global Optimization, which is based on an adaptive Gaussian process surrogate.
- sbo - uses a Gaussian process surrogate (non-adaptive) within an optimization process.
- ea - uses an evolutionary algorithm. This can be expensive, as the EA is run for each interval cell combination.

Note that calculating the plausibility and belief cumulative distribution functions requires examining all combinations of intervals for the uncertain variables. In terms of implementation, when LHS sampling is used as outlined above, this method creates a large sample over the response surface and then examines each cell to determine the minimum and maximum sample values within it. For this to work well, the number of samples must be set relatively high: the default is 10,000, and we recommend at least that many. If the model you are running is a computationally expensive simulation, we recommend setting up a surrogate model within the Dakota input file so that global_evidence performs its sampling and calculations on the surrogate rather than on the original model. Using the optimization-based approaches instead to find the minimum and maximum values within each cell can also be computationally expensive. A minimal input sketch using the lhs option follows.
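This sketch assumes the lhs option and uses illustrative values only (the seed and response levels are placeholders, and it is not a complete input file, which would also need variables, interface, and responses blocks):

```
method
  global_evidence lhs
    samples = 10000                  # at least the 10,000-sample default is recommended
    seed = 59334                     # fix the seed for repeatable samples
    response_levels = 0.2 0.6 0.8    # placeholder levels at which to report statistics
    distribution cumulative          # report belief and plausibility as CDFs
```

For an expensive simulation, model_pointer can instead direct the method at a model block that wraps the simulation in a surrogate, per the recommendation above.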
Additional Resources
See the topic page evidence_theory for important background information and usage notes.
Refer to variable_support for information on supported variable types.
The basic idea is that one specifies an "evidence structure" on the uncertain inputs and propagates it to obtain belief and plausibility functions on the response functions. The inputs are defined by sets of intervals with associated Basic Probability Assignments (BPAs). Evidence propagation is computationally expensive, since the minimum and maximum function values must be calculated for each "interval cell combination"; these bounds are then aggregated into belief and plausibility.
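As a concrete illustration (the notation here is ours, not Dakota's): if two epistemic variables carry 3 and 2 intervals respectively, there are 3 × 2 = 6 interval cell combinations. Writing $m_c$ for the BPA of cell $c$ and $[\min_c, \max_c]$ for the computed response bounds over that cell, the belief and plausibility of the event $y \le z$ aggregate over cells as

$$
\mathrm{Bel}(y \le z) = \sum_{c \,:\, \max_c \le z} m_c,
\qquad
\mathrm{Pl}(y \le z) = \sum_{c \,:\, \min_c \le z} m_c .
$$

Belief counts only the cells whose entire response range is guaranteed to lie below $z$, while plausibility counts every cell whose range could reach below $z$; hence $\mathrm{Bel} \le \mathrm{Pl}$ pointwise, and the two curves bracket the response CDF.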