Dakota Reference Manual
Version 6.16
Explore and Predict with Confidence

Randomly samples variables according to their distributions
This keyword is related to the topics:
Alias: nond_sampling
Argument(s): none
Child Keywords:
Required/Optional  Dakota Keyword          Dakota Keyword Description
Optional           samples                 Number of samples for sampling-based methods
Optional           seed                    Seed of the random number generator
Optional           fixed_seed              Reuses the same seed value for multiple random sampling sets
Optional           sample_type             Selection of sampling strategy
Optional           refinement_samples      Performs an incremental Latin Hypercube Sampling (LHS) study
Optional           d_optimal               Generate a D-optimal sampling design
Optional           variance_based_decomp   Activates global sensitivity analysis based on decomposition of response variance into contributions from variables
Optional           backfill                Ensures that the samples of discrete variables with finite support are unique
Optional           principal_components    Activates principal components analysis of the response matrix of N samples x L responses
Optional           wilks                   Number of samples for random sampling using Wilks statistics
Optional           final_moments           Output moments of the specified type and include them within the set of final statistics
Optional           response_levels         Values at which to estimate desired statistics for each response
Optional           probability_levels      Specify probability levels at which to estimate the corresponding response value
Optional           reliability_levels      Specify reliability levels at which the response values will be estimated
Optional           gen_reliability_levels  Specify generalized reliability levels at which to estimate the corresponding response value
Optional           distribution            Selection of cumulative or complementary cumulative functions
Optional           rng                     Selection of a random number generator
Optional           model_pointer           Identifier for model block to be used by a method
This method generates parameter values by drawing samples from the specified uncertain variable probability distributions. The computational model is executed over all generated parameter values to compute the responses for which statistics are computed. The statistics support sensitivity analysis and uncertainty quantification.
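The sample-evaluate-summarize loop this paragraph describes can be sketched in Python (a minimal illustration only; the response function and the uniform bounds here are assumptions for the sketch, not Dakota internals):

```python
import random

def model(x1, x2):
    # Hypothetical response function standing in for the simulation;
    # Dakota would instead invoke the user's analysis driver.
    return (x1 - 1.0) ** 4 + (x2 - 1.0) ** 4

rng = random.Random(42)

# Draw parameter values from the assumed uniform distributions on [-2, 2].
samples = [(rng.uniform(-2.0, 2.0), rng.uniform(-2.0, 2.0))
           for _ in range(1000)]

# Execute the "simulation" at every sample point.
responses = [model(x1, x2) for x1, x2 in samples]

# Sample moments of the response (the kind of summary final_moments reports).
n = len(responses)
mean = sum(responses) / n
var = sum((r - mean) ** 2 for r in responses) / (n - 1)
print(mean, var ** 0.5)
```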
Default Behavior
By default, sampling methods operate on aleatory and epistemic uncertain variables. The set of sampled variable types can be restricted or expanded (to include design or state variables) through use of the active keyword in the variables block of the Dakota input file. If continuous design and/or state variables are designated as active, the sampling algorithm treats them as parameters with uniform probability distributions between their lower and upper bounds. Refer to variable_support for additional information on supported variable types, with and without correlation.
Keywords such as sample_type, fixed_seed, refinement_samples, d_optimal, and backfill change how the samples are selected.
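The stratified selection behind the sample_type lhs option can be sketched as follows (a minimal, self-contained illustration on the unit hypercube, not Dakota's LHS implementation):

```python
import random

def latin_hypercube(n_samples, n_dims, seed=1234):
    """Latin Hypercube sample of n_samples points on [0, 1)^n_dims.

    Each dimension is split into n_samples equal strata, and exactly one
    point falls in each stratum per dimension, which covers every 1-D
    margin more evenly than pure random sampling.
    """
    rng = random.Random(seed)
    # Independent random permutation of the strata for each dimension.
    perms = [rng.sample(range(n_samples), n_samples) for _ in range(n_dims)]
    points = []
    for i in range(n_samples):
        # One uniform draw inside the assigned stratum of each dimension.
        points.append([(perms[d][i] + rng.random()) / n_samples
                       for d in range(n_dims)])
    return points

pts = latin_hypercube(10, 2)
```

A quick check of the defining property: projecting the 10 points onto either coordinate hits each of the 10 strata exactly once.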
Expected Outputs
By default, Dakota provides correlation analyses when running LHS. Correlation tables are printed with the simple, partial, and rank correlations between inputs and outputs. These give a quick sense of how correlated the inputs are to each other, and how correlated the various outputs are to the inputs. The variance_based_decomp keyword requests more detailed sensitivity information, at additional computational cost.
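The rank correlation reported in those tables can be sketched with a small Spearman computation (a generic illustration, not Dakota's implementation; it assumes no tied values):

```python
def _ranks(values):
    # Rank position of each value (0-based); assumes no ties.
    order = sorted(range(len(values)), key=values.__getitem__)
    ranks = [0] * len(values)
    for r, i in enumerate(order):
        ranks[i] = r
    return ranks

def pearson(x, y):
    # Simple (Pearson) correlation between two equal-length samples.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def spearman(x, y):
    # Rank correlation: Pearson correlation of the ranks. It detects
    # monotone (not just linear) input/output relationships.
    return pearson(_ranks(x), _ranks(y))

inputs = [0.1, 0.4, 0.2, 0.9, 0.6]
outputs = [v ** 3 for v in inputs]   # nonlinear but monotone
print(spearman(inputs, outputs))     # close to 1.0
```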
Additional statistics can be computed from the samples using the following keywords:
response_levels
reliability_levels
probability_levels
gen_reliability_levels
response_levels computes statistics at the specified response values. The other three accept specified statistic values and estimate the corresponding response values.
distribution specifies whether the statistics are computed from cumulative or complementary cumulative distribution functions.
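The forward and inverse mappings these keywords request can be sketched with an empirical CDF (a generic illustration with hypothetical helper names, not Dakota's estimators):

```python
def cdf_at_response_level(responses, level):
    # Forward mapping (response_levels): fraction of samples at or below
    # the level, i.e. the empirical CDF. The CCDF, selected by
    # `distribution complementary`, is 1 minus this value.
    return sum(1 for r in responses if r <= level) / len(responses)

def response_at_probability_level(responses, p):
    # Inverse mapping (probability_levels): the response value whose
    # empirical CDF is approximately p.
    ordered = sorted(responses)
    idx = max(0, min(len(ordered) - 1, int(p * len(ordered)) - 1))
    return ordered[idx]

responses = [3.0, 1.0, 4.0, 1.5, 5.0, 9.0, 2.0, 6.0, 5.5, 3.5]
print(cdf_at_response_level(responses, 4.0))          # 0.6
print(response_at_probability_level(responses, 0.5))  # 3.5
```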
Expected HDF5 Output
If Dakota was built with HDF5 support and run with the hdf5 keyword, this method writes the following results to HDF5:
Usage Tips
sampling is a robust approach to sensitivity analysis and uncertainty quantification that can be applied to any problem. However, it requires more simulations than newer, advanced methods, so an alternative may be preferable when each simulation is computationally expensive.
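For the wilks option listed above, the standard first-order one-sided Wilks formula gives the smallest number of random samples N such that, with the requested confidence, the largest observed response bounds at least the requested fraction (coverage) of the population. This is the textbook computation, sketched here, not necessarily Dakota's exact implementation:

```python
import math

def wilks_sample_size(coverage, confidence):
    # Smallest N with 1 - coverage**N >= confidence
    # (first-order, one-sided Wilks bound).
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

print(wilks_sample_size(0.95, 0.95))  # 59, the classic 95/95 result
```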
# tested on Dakota 6.0 on 140501
environment
  tabular_data
    tabular_data_file = 'Sampling_basic.dat'

method
  sampling
    sample_type lhs
    samples = 20

model
  single

variables
  active uncertain
  uniform_uncertain = 2
    descriptors = 'input1' 'input2'
    lower_bounds = -2.0 -2.0
    upper_bounds =  2.0  2.0
  continuous_state = 1
    descriptors = 'constant1'
    initial_state = 100

interface
  analysis_drivers 'text_book'
    fork

responses
  response_functions = 1
  no_gradients
  no_hessians
This example illustrates a basic sampling Dakota input file. Because seed is not specified, the results will not be reproducible. In the variables block, two types of variables are used. Because the active keyword is given with the uncertain option, only the uncertain variables are sampled; the continuous state variable is held at its initial_state value.
These keywords may also be of interest:
Q: Do I need to keep the LHS* and S4 files? A: No