Dakota Reference Manual  Version 6.2
Large-Scale Engineering Optimization and Uncertainty Analysis

Morris One-at-a-Time


Alias: none

Argument(s): none

Required/Optional   Dakota Keyword   Dakota Keyword Description
Optional            partitions       Number of partitions of each variable
Optional            samples          Number of samples for sampling-based methods
Optional            seed             Seed of the random number generator
Optional            model_pointer    Identifier for model block to be used by a method


The Morris One-At-A-Time (MOAT) method, originally proposed by Morris [62], is a screening method, designed to explore a computational model to distinguish between input variables that have negligible, linear and additive, or nonlinear or interaction effects on the output. The computer experiments performed consist of individually randomized designs which vary one input factor at a time to create a sample of its elementary effects.

The number of samples (samples) must be a positive integer multiple of (number of continuous design variables + 1); if misspecified, it will be automatically adjusted.

The number of partitions (partitions) applies to each variable being studied and must be odd (the number of MOAT levels per variable is partitions + 1). This will also be adjusted at runtime as necessary.
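The adjustment rules above can be sketched as follows. This is an illustrative Python sketch of one plausible adjustment policy (rounding samples up to the nearest valid multiple and bumping an even partition count by one), not Dakota's internal code; the function name `adjust_moat_controls` is hypothetical.

```python
def adjust_moat_controls(samples, partitions, num_vars):
    """Illustrative sketch (not Dakota's internal logic) of the MOAT
    control constraints: samples must be a positive multiple of
    (num_vars + 1), and partitions must be odd."""
    block = num_vars + 1
    # Round samples up to the nearest positive multiple of (num_vars + 1).
    if samples <= 0:
        samples = block
    elif samples % block != 0:
        samples = ((samples // block) + 1) * block
    # Force an odd partition count (the p levels = partitions + 1
    # must be an even number of grid levels per variable).
    if partitions % 2 == 0:
        partitions += 1
    return samples, partitions
```

For example, with 6 continuous design variables, a request of 80 samples and 4 partitions would be adjusted to 84 samples (the next multiple of 7) and 5 partitions.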

For information on practical use of this method, see [73].


With MOAT, each dimension of a $k$-dimensional input space is uniformly partitioned into $p$ levels, creating a grid of $p^k$ points ${\bf x} \in {\bf R}^k$ at which evaluations of the model $y({\bf x})$ might take place. An elementary effect corresponding to input $i$ is computed by a forward difference

\[ d_i({\bf x}) = \frac{y({\bf x} + \Delta {\bf e}_i) - y({\bf x})}{\Delta}, \]

where ${\bf e}_i$ is the $i^{\mbox{\scriptsize th}}$ coordinate vector, and the step $\Delta$ is typically taken to be large (this is not intended to be a local derivative approximation). In the present implementation of MOAT, for an input variable scaled to $[0,1]$, $\Delta = \frac{p}{2(p-1)}$, so the step used to find elementary effects is slightly larger than half the input range.
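The forward difference above can be written as a small sketch. The helper `elementary_effect` below is hypothetical (not part of Dakota), and assumes inputs already scaled to $[0,1]$ and a base point chosen so the step stays in range:

```python
def elementary_effect(y, x, i, p):
    """Forward-difference elementary effect d_i(x) for input i of a
    model y, with inputs scaled to [0, 1]; p is the number of MOAT
    levels per variable. Hypothetical helper for illustration."""
    delta = p / (2.0 * (p - 1))  # slightly larger than half the range
    x_step = list(x)
    x_step[i] += delta           # perturb only coordinate i
    return (y(x_step) - y(x)) / delta
```

For a purely linear model such as $y({\bf x}) = 3x_0 + 0.5x_1$, every elementary effect for input 0 equals the coefficient 3, regardless of the base point, which is why linear-and-additive inputs show a tight distribution of $d_i$ values.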

The distribution of elementary effects $d_i$ over the input space characterizes the effect of input $i$ on the output of interest. After generating $r$ samples from this distribution, their mean,

\[ \mu_i = \frac{1}{r}\sum_{j=1}^{r}{d_i^{(j)}}, \]

modified mean

\[ \mu_i^* = \frac{1}{r}\sum_{j=1}^{r}{|d_i^{(j)}|}, \]

(using absolute value) and standard deviation

\[ \sigma_i = \sqrt{ \frac{1}{r}\sum_{j=1}^{r}{ \left(d_i^{(j)} - \mu_i \right)^2} } \]

are computed for each input $i$. The mean and modified mean give an indication of the overall effect of an input on the output. Standard deviation indicates nonlinear effects or interactions, since it is an indicator of elementary effects varying throughout the input space.
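The three statistics can be computed directly from a sample of elementary effects. This is a minimal sketch of the formulas above (the function name `moat_statistics` is hypothetical; Dakota computes these internally):

```python
import math

def moat_statistics(effects):
    """Mean, modified mean, and standard deviation of a sample of
    elementary effects d_i^(1)..d_i^(r) for a single input i."""
    r = len(effects)
    mu = sum(effects) / r                               # mean
    mu_star = sum(abs(d) for d in effects) / r          # modified mean
    sigma = math.sqrt(sum((d - mu) ** 2 for d in effects) / r)
    return mu, mu_star, sigma
```

Note how the modified mean guards against cancellation: for effects [2, -2] the mean is 0, which would suggest no influence, while the modified mean is 2 and the standard deviation is 2, flagging a nonlinear or interaction effect.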