Method Results Output
Final results from most Dakota methods, such as statistics from UQ studies and best parameter values from optimizations, can now be written to an HDF5 file. Dakota's HDF5 output can be conveniently read and queried from many popular languages, including Python, Perl, and Matlab, obviating the need to scrape or copy and paste from the console output. Key features:
- Most results that are currently written to the console for sampling, optimization/calibration, parameter studies, and stochastic expansions are also written to HDF5.
- Dakota makes use of dimension scales and attributes to "document" results.
- Metadata for the Dakota study, including the full input file, run duration, Dakota version, and more, are stored as attributes at the root level.
Enabling: The HDF5-based method results output is enabled in internally supported builds, but not in the downloads on the website. It must be explicitly enabled when compiling from source code (see the HDF5 Results Output section of the Options for Dakota Features page). HDF5 1.10.2 or newer is required.
Documentation: The Dakota HDF5 Output section of the Reference Manual provides a brief introduction to HDF5 and describes the layout of Dakota's output in detail. Example Python scripts and Jupyter notebooks are available at share/dakota/examples/hdf5.
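As a quick illustration, the HDF5 output can be inspected from Python with h5py. This is a minimal sketch, not one of the shipped example scripts; the file name is a placeholder, and no specific group paths are assumed (see the Reference Manual for the actual layout):

```python
import h5py

def summarize_dakota_results(path):
    """Print root-level study metadata and list every object in a
    Dakota HDF5 results file. The internal group layout is documented
    in the Reference Manual; nothing here assumes specific paths."""
    names = []
    with h5py.File(path, "r") as f:
        # Root-level attributes hold study metadata (Dakota version,
        # full input file, run duration, ...)
        for key, value in f.attrs.items():
            print(key, "=", value)
        # Collect the path of every group and dataset in the file
        f.visit(names.append)
    return names

# Usage (file name is a placeholder):
# summarize_dakota_results("dakota_results.h5")
```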
Dakota Graphical User Interface
Highlights:
- All-new Dakota Study Wizard which guides you through a series of questions to configure a Dakota input file, including choosing an appropriate method.
- Improved simulation interfacing - stronger interoperation between wizard-generated Python driver scripts, Next-Gen Workflow drivers, and Dakota studies generated using the New Dakota Study Wizard.
- A variety of Dakota input file text editor enhancements, including completion proposals, alias recognition, more reliable error markers, and formatting options.
- Next-Gen Workflow engine supports graphical plotting nodes.
Details:
- New Dakota Study Wizard
- The wizard supports a host of different data sources for generating Dakota studies.
- The wizard asks you a series of questions to help you choose a Dakota method.
- Some chosen methods are pre-populated with heuristic "getting started" settings for your study problem.
- The wizard has seamless support for both GUI-generated Python scripts and nested workflows as your study's target analysis driver.
- Script-Based Dakota Driver Wizard (formerly Dakota-to-Python Interface Wizard)
- In addition to a Python driver script, this wizard now generates an "interface manifest" file (see below).
- The generated Python script can be configured to echo the stdout of the underlying simulation model, and to prepend Dakota run number information to its stdout stream.
- Nested Workflow for Dakota Driver Wizard
- A new wizard that auto-populates a Next-Gen Workflow file with parameters and responses based on your Dakota study.
- Interface Manifest Support
- Dakota GUI now accepts arbitrary drivers that define "interface manifest" files - that is, files that formally state what input and output the driver expects to receive and send. This feature allows files to present themselves as Dakota drivers in multiple contexts, making for easier connection to multiple Dakota studies.
- Recognized driver formats now include: Python scripts, SH and CSH scripts, Windows BAT scripts, Next-Gen Workflow files, Perl scripts, VBS scripts.
- "Recognize as Analysis Driver" context menu option for creating interface manifests for arbitrary files that should be recognized as Dakota drivers.
- Dakota Text Editor
- Support for Eclipse completion proposals
- The Dakota text editor now reports errors for duplicate IDs
- The Dakota text editor now reports that multiple methods without IDs are ambiguous
- Better error markup for unrecognized keywords and unrecognized parameter list keywords.
- The text editor now recognizes both keyword aliases (i.e., alternate keywords allowed by the Dakota grammar) and partial, non-ambiguous keywords (within reason)
- Defaults for Dakota text file indentation & quote type are configurable in the Preferences dialog
- Project Navigator View
- "Add Keyword" context menu option for Dakota input files
- Next-Gen Workflow
- "Expert mode" flag added to DakotaNode. Leaving it unchecked allows nested workflows to run with fewer nodes.
- Most Chartreuse plot types supported as Next-Gen Workflow nodes.
- Changed DakotaNode’s "search order" for the location of the runtime workflow install directory.
- Chartreuse (aka Integrated plotly.js Plotting)
- Support for contour plots
- Plot data providers inform the plotting dialogs about completeness of data
- The Dakota plot data provider now sets reasonable defaults for simple cases.
- Dakota Console
- Errors related to launching Dakota from the GUI are now explicitly shown via pop-up dialogs.
- Misc
- More robust behavior around resolving the path to Dakota at launch time.
Bugfixes:
- Script-Based Dakota Driver Wizard: Fixed the feature that auto-substitutes a new interface block into a Dakota input file.
- Dakota Text Editor: The Dakota text editor now scopes variable duplicate checking to be per-block.
- Dakota Console: Output and error streams no longer interleave characters together.
- Project Navigator View: Fixed bug where navigator view of Dakota input file wouldn't always update on a change to the text.
Documentation: See the Dakota GUI User's Manual for more information.
Bayesian Calibration
New Features and Usability Improvements
- Model evidence calculation with Monte Carlo sampling and second-order local Laplace approximation (see the Reference Manual)
- Model discrepancy updated to treat field-valued responses in the surrogate-based discrepancy model
- Some output options were made more explicit, and verbosity-controlled output was improved.
- A more helpful warning is now issued when QUESO is not enabled in the Dakota build.
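For reference, the Laplace approach approximates the model evidence via a second-order expansion of the log-posterior about the MAP point. This is the standard form of the Laplace approximation, not necessarily Dakota's exact formulation (see the Theory Manual for that):

```latex
Z \;=\; \int p(\mathcal{D}\mid\theta)\,p(\theta)\,d\theta
  \;\approx\; p(\mathcal{D}\mid\hat{\theta})\,p(\hat{\theta})\,
  (2\pi)^{d/2}\,\lvert\mathbf{H}\rvert^{-1/2},
```

where \(\hat{\theta}\) is the MAP point, \(d\) the parameter dimension, and \(\mathbf{H}\) the Hessian of the negative log-posterior evaluated at \(\hat{\theta}\).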
Documentation: See Chapter 5.8 for Bayesian calibration and Chapter 5.8.9 for model discrepancy in the User's Manual. See Chapter 5.5 of the Theory Manual for an example of field discrepancy construction.
Examples Library (SNL only)
Dakota has established an online examples library that is accessible through the graphical user interface and a Gitlab repository browser. The activity-based examples help users with common tasks and will be core to future publicly available tutorials and documentation. The library supports both Dakota-maintained and user-contributed examples, and allows access controls to be applied as needed.
Accessing: See Dakota@SNL after logging in to the Dakota website, or browse through the GUI (SNL only).
Miscellaneous Enhancements and Bugfixes
- Python simulation interfacing module dakota.interfacing features more Pythonic iteration of Dakota objects and several bugfixes.
- Simplified the identity response map for the most common mixed epistemic/aleatory UQ studies.
- ROL optimizer improvements.
- Resolved build failures caused by COBYLA appearing in two third-party libraries (Acro, NOWPAC). The libraries can now be enabled together without issue.
- The adaptive sampling method now properly respects the batch selection options for distance penalty and constant liar (these were previously ignored).
- Refactored Pecos math and linear algebra utilities in support of increasing Dakota modularity.
- Prototype modular surrogate library, with Python interface and Jupyter notebook-based examples.
- Bug fix: Dakota + JEGA (SOGA/MOGA) optimization algorithms will now terminate when signals, e.g., CTRL-C, are raised by the user or underlying application.
- Bug fix: Initializing JEGA (SOGA/MOGA) optimization algorithms with the flat_file option will now work with discrete variables.
- Bug fix: Acro Coliny/SCOLib optimization methods now work with calibration data that includes configuration variables (previously these caused a segfault).
- Bug fix: For UQ methods and variable views, include uncertain discrete set of string variables in the epistemic variable view.
- Bug fix: Post-run input again works with freeform-formatted input data.
Deprecated and Changed
- Unique identifiers in input files: Labels, e.g., those in id_method or id_variables, must be unique among blocks of each top-level type. While a method and a variables block may (perhaps unhelpfully) use the same string identifier, two methods may not. Unlabeled method blocks will now be labeled by Dakota with NO_ID, while Dakota-internally constructed ("helper") methods will be labeled NOSPEC_ID_<num>.
- X Windows-based plotting is disabled by default.
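The identifier rules above can be sketched with a minimal input fragment (block contents abbreviated and illustrative only; keyword details are in the Reference Manual):

```
environment
  top_method_pointer = 'opt'

method
  id_method = 'opt'        # must be unique among method blocks
  ...

method
  id_method = 'samp'       # OK: distinct from 'opt'
  ...

variables
  id_variables = 'opt'     # OK: uniqueness is per top-level block type
  ...
```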
Compatibility
- HDF5 v1.10.2 or higher is required to compile Dakota with the HDF5-based method results output enabled. HDFView may be helpful in viewing the output database, and Python h5py is necessary to run tests and examples.
- Enabling the optional prototype surrogate model library requires Python NumPy and Swig.
- No mandatory changes to required compilers or third-party libraries.