# global_evidence

Evidence theory with evidence measures computed with global optimization methods

## Topics

This keyword is related to the topics:

## Specification

Alias: nond_global_evidence

Argument(s): none

| Required/Optional | Description of Group | Dakota Keyword | Dakota Keyword Description |
|---|---|---|---|
| Optional (Choose One) | Group 1 | sbo | Use the surrogate-based optimization method |
| | | ego | Use the Efficient Global Optimization method |
| | | ea | Use an evolutionary algorithm |
| | | lhs | Uses Latin Hypercube Sampling (LHS) to sample variables |
| Optional | | response_levels | Values at which to estimate desired statistics for each response |
| Optional | | distribution | Selection of cumulative or complementary cumulative functions |
| Optional | | probability_levels | Specify probability levels at which to estimate the corresponding response value |
| Optional | | gen_reliability_levels | Specify generalized reliability levels at which to estimate the corresponding response value |
| Optional | | rng | Selection of a random number generator |
| Optional | | samples | Number of samples for sampling-based methods |
| Optional | | seed | Seed of the random number generator |
| Optional | | model_pointer | Identifier for model block to be used by a method |

## Description

`global_evidence` allows the user to specify several global approaches for calculating the belief and plausibility functions:

• `lhs` - uses Latin Hypercube Sampling; the minimum and maximum of the samples within each "interval cell combination" are taken as that cell's response bounds.
• `ego` - uses Efficient Global Optimization, which is based on an adaptive Gaussian process surrogate.
• `sbo` - uses a (non-adaptive) Gaussian process surrogate within a surrogate-based optimization process.
• `ea` - uses an evolutionary algorithm. This can be expensive, as the EA is run for each interval cell combination. A sketch of a method block selecting one of these approaches appears after this list.
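As a point of reference, a method specification selecting the `lhs` approach might look like the following minimal sketch. The seed and response levels are illustrative values rather than defaults, and the companion variables, interface, and responses blocks are omitted:

```
# Minimal sketch of a global_evidence method block using the LHS
# approach; seed and response_levels values are illustrative only.
method
  global_evidence lhs
    samples = 10000            # recommended minimum (also the default)
    seed = 52983               # fix the seed for repeatability
    response_levels = 0.2 0.6 0.8
    distribution cumulative    # belief/plausibility CDFs
```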

Note that to calculate the plausibility and belief cumulative distribution functions, one has to look at all combinations of intervals for the uncertain variables. In terms of implementation, if one is using LHS sampling as outlined above, this method creates a large sample over the response surface and then examines each cell to determine the minimum and maximum sample values within it. To do this, one needs to set the number of samples relatively high: the default is 10,000, and we recommend at least that number.

If the model you are running is a computationally expensive simulation, we recommend that you set up a surrogate model within the Dakota input file so that `global_evidence` performs its sampling and calculations on the surrogate rather than on the original model, as sketched below. Using optimization methods instead to find the minimum and maximum sample values within each cell can also be computationally expensive.
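One common pattern for pairing `global_evidence` with a surrogate is sketched below: the UQ method points (via `model_pointer`) at a global Gaussian process model that is built once from a modest design-of-experiments sample of the true model. The id strings (`'UQ'`, `'SURR'`, `'DACE'`, `'SIM'`) are illustrative; consult the `model` and `surrogate` keyword pages for the full specification:

```
# Sketch: global_evidence evaluated on a surrogate rather than the
# expensive simulation. All id strings below are illustrative.
method
  id_method = 'UQ'
  global_evidence lhs
    samples = 10000
  model_pointer = 'SURR'           # UQ sampling hits the surrogate

model
  id_model = 'SURR'
  surrogate global
    gaussian_process surfpack
    dace_method_pointer = 'DACE'   # build points come from this method

method
  id_method = 'DACE'
  sampling
    samples = 100                  # modest number of true-model runs
    seed = 5034
  model_pointer = 'SIM'

model
  id_model = 'SIM'
  single                           # the original (expensive) model
```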