Dakota Reference Manual, Version 6.4
Large-Scale Engineering Optimization and Uncertainty Analysis
incremental_lhs


(Deprecated keyword) Augments an existing Latin Hypercube Sampling (LHS) study

Specification

Alias: none

Argument(s): none

Default: no sample reuse in coefficient estimation

Description

This keyword is deprecated. Instead specify sample_type lhs with refinement_samples.

An incremental Latin Hypercube Sampling (LHS) approach augments an existing LHS study with refinement_samples to obtain better estimates of mean, variance, and percentiles. The number of refinement_samples at each refinement level must equal the total number of samples accumulated so far, so that each level doubles the total sample size.

Typically, this approach is used when an initial study has been run with sample size N1 and an additional N1 samples are desired. Ideally, a Dakota restart file containing the initial N1 samples is available, so that only N1 new (instead of 2 x N1) potentially expensive function evaluations will be performed.

This process can be extended by repeatedly doubling the refinement_samples:

method
  sampling
    samples = 50
    refinement_samples = 50 100 200 400 800
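
For reference, the cumulative sample counts implied by this specification are worked out below; the tabulation is illustrative only and is not part of the Dakota input.

  refinement level    refinement_samples    cumulative total
  initial                     --                   50
  1                           50                  100
  2                          100                  200
  3                          200                  400
  4                          400                  800
  5                          800                 1600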

Usage Tips

The incremental approach is useful if it is uncertain how many simulations can be completed within available time.

See the examples below and the Usage and Restarting Dakota Studies pages.

Examples

Suppose an initial study is conducted using sample_type lhs with samples = 50. A follow-on study uses a new input file with samples = 50 and refinement_samples = 50, resulting in 50 reused samples (from the restart file) and 50 new LHS samples. The 50 new samples are combined with the 50 previous samples to produce a combined sample of size 100 for the analysis.

One way to ensure the restart file is saved is to specify a non-default name via a command-line option (by default, Dakota writes to dakota.rst):

dakota -input LHS_50.in -write_restart LHS_50.rst

which uses the input file:

# LHS_50.in

environment
  tabular_data
    tabular_data_file = 'LHS_50.dat'

method
  sampling
    sample_type lhs
    samples = 50

model
  single

variables
  uniform_uncertain = 2
    descriptors  =   'input1'     'input2'
    lower_bounds =  -2.0     -2.0
    upper_bounds =   2.0      2.0

interface
  analysis_drivers 'text_book'
    fork

responses
  response_functions = 1
  no_gradients
  no_hessians

and the restart file is written to LHS_50.rst.
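
If desired, the stored evaluations can be inspected before the follow-on run using the dakota_restart_util utility (an illustrative use; see the Restarting Dakota Studies page for the full set of options):

dakota_restart_util print LHS_50.rst

which prints the variables and responses of the 50 evaluations recorded in LHS_50.rst.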

Then an incremental LHS study can be run with:

dakota -input LHS_100.in -read_restart LHS_50.rst -write_restart LHS_100.rst

where LHS_100.in is shown below, and LHS_50.rst is the restart file containing the results of the previous LHS study.

# LHS_100.in

environment
  tabular_data
    tabular_data_file = 'LHS_100.dat'

method
  sampling
    sample_type lhs
    samples = 50
    refinement_samples = 50

model
  single

variables
  uniform_uncertain = 2
    descriptors  =   'input1'     'input2'
    lower_bounds =  -2.0     -2.0
    upper_bounds =   2.0      2.0

interface
  analysis_drivers 'text_book'
    fork

responses
  response_functions = 1
  no_gradients
  no_hessians

The second run generates 50 new LHS samples that maintain both the correlation and stratification of the original LHS sample. The new samples are combined with the original 50 samples to produce a combined sample of size 100 for the analysis.
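
As a check on the combined sample, the 100 evaluations stored in LHS_100.rst can be exported to a tabular file with dakota_restart_util (an illustrative command; the output file name here is arbitrary):

dakota_restart_util to_tabular LHS_100.rst LHS_100_combined.dat

which writes one row of variable and response values per stored evaluation.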