

## Sensitivity analysis

Sensitivity analysis corresponds to a set of methods capturing how a model reacts to a change in its inputs. The goal of these statistical methods is to measure how variation propagates from the inputs to the outputs. More specifically, sensitivity analysis is defined by (Saltelli et al., 2008) as describing the «relative importance of each input in determining [output] variability». As a consequence, the typical result of such methods is an ordering of the inputs according to their sensitivity. Sensitivity analysis generally involves an a priori sampling of the input space and a statistical method to analyse the co-variance of the inputs and outputs of the model.

Sensitivity analysis can be done at a global or local level. Global methods provide summary statistics of the effects of input variations over the complete input space, whereas local methods focus on the effect of input variations around a given point of the input space (think, for example, of a Jacobian matrix). The one-factor-at-a-time method can be viewed as a local sensitivity method, as only one factor varies while the others remain fixed at their nominal values.

OpenMOLE implements two classical methods for global sensitivity analysis: Morris and Saltelli.

## Morris method

### Principle

The Morris method is a statistical method for global sensitivity analysis. It is of the "one-factor-at-a-time" type, and was conceived as a preliminary computational experiment to grasp the relative influence of each factor. In comparison to LHS screening, it has the advantage of providing information for each factor. The input space is considered as a grid, and trajectories are sampled among these points. The method captures the output variation when one of the trajectory points is moved to one of its closest neighbors. This variation is called an elementary effect. A certain number of trajectories **R** are generated, in order to observe the consequence of elementary effects anywhere in the input space (trajectories are generated such that, given a starting point, any point at a fixed distance is equiprobable; note that the method is still subject to the curse of dimensionality for trajectories to fill the input space). Finally, the method summarizes these elementary effects to estimate global sensitivity in the output space. This method is computationally cheap, as each trajectory has **k+1** parameter points if **k** is the number of factors. The total number of model runs will thus be **R*(k+1)**, so the number of trajectories can be adjusted to the computational budget.
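The trajectory construction can be sketched in a few lines of Python. This is a simplified illustration, not OpenMOLE's actual implementation: the grid step `delta = 1/(p-1)` and the in-bounds direction choice are simplifying assumptions.

```python
import random

def morris_trajectory(k, p, seed=None):
    """One Morris trajectory: k+1 points on a k-dimensional grid with p levels.

    Simplified sketch: each step moves exactly one factor by one grid step
    delta = 1/(p - 1), choosing the direction so the point stays in [0, 1].
    """
    rng = random.Random(seed)
    delta = 1.0 / (p - 1)
    point = [rng.randrange(p) * delta for _ in range(k)]
    order = rng.sample(range(k), k)  # each factor is moved exactly once
    points = [list(point)]
    for i in order:
        step = delta if point[i] + delta <= 1.0 else -delta
        point[i] += step
        points.append(list(point))
    return points

traj = morris_trajectory(k=3, p=4, seed=42)
# A trajectory has k+1 = 4 points, so R trajectories cost R*(k+1) model runs.
```

Running the model on every consecutive pair of points along such a trajectory yields one elementary effect per factor.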

### Results and Interpretation

Morris' method computes three sensitivity indicators for each model input and each model output. An elementary effect for input **i** and output **j** is obtained when the factor **x**_i is changed during one trajectory by a step **delta**_i from a point **x**_0, and is computed as **epsilon**_ij = (y_j(x_0 + delta_i) - y_j(x_0)) / delta_i. The sensitivity indicators are computed as summary statistics on the simulated elementary effects, and are:

- The overall sensitivity measure, **mu**_ij, is the average of the elementary effects **epsilon**_ij, computed on effects for which factor **i** was changed (by construction there are exactly **R** such effects, one for each trajectory, and they are independent). It is interpreted as the average influence of the input **i** on the variability of model output **j**. Note that it can aggregate very different strengths of effects and, more dramatically, will cancel opposite effects: an indicator whose profile along a dimension is a squared function, for example, will be considered as insensitive to the input regarding this sensitivity index.
- A more robust version of this sensitivity measure, **mu***_ij, is computed as the average of the absolute values of the elementary effects, ensuring robustness against non-monotonic models. It is still an average and will miss non-linear effects.
- To account for non-linearities or interactions between factors, the measure **sigma**_ij is computed as the standard deviation of the elementary effects. A low standard deviation means that effects are constant, i.e. the output is linear in this factor. A high value means either that the indicator is non-linear in this factor (low variations at some places and high variations at others), or that variations change a lot when changing other factor values, i.e. that this factor has interactions with others. Both are equivalent regarding a projection on the dimension of the factor considered.
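As a toy illustration of these three statistics, the sketch below computes one-at-a-time elementary effects for a made-up two-factor model (a hedged example, unrelated to OpenMOLE's internals). The model is nonlinear in its first factor and linear in its second, so **sigma** should be positive for the first and near zero for the second.

```python
import random
import statistics

def model(x):
    # Hypothetical toy model: nonlinear in x[0], linear in x[1].
    return x[0] ** 2 + 2.0 * x[1]

def morris_indices(f, k, R=200, delta=0.25, seed=1):
    """mu, mu*, sigma per factor, from one-at-a-time elementary effects."""
    rng = random.Random(seed)
    effects = [[] for _ in range(k)]
    for _ in range(R):
        base = [rng.uniform(0.0, 1.0 - delta) for _ in range(k)]
        y0 = f(base)
        for i in range(k):
            x = list(base)
            x[i] += delta
            effects[i].append((f(x) - y0) / delta)
    mu = [statistics.mean(e) for e in effects]
    mu_star = [statistics.mean(abs(v) for v in e) for e in effects]
    sigma = [statistics.stdev(e) for e in effects]
    return mu, mu_star, sigma

mu, mu_star, sigma = morris_indices(model, k=2)
# Factor 2 is linear: every elementary effect equals 2, so sigma[1] ~ 0,
# while the squared term gives factor 1 a clearly positive sigma[0].
```

The cancellation caveat of **mu** can be checked the same way: for a model symmetric around the sampled region, positive and negative effects average out while **mu*** stays large.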

## Morris' method within OpenMOLE

### Specific constructor

The `SensitivityMorris` constructor is defined in OpenMOLE and takes the following parameters:

- `evaluation` is the task (or a composition of tasks) that uses your inputs, typically your model task.
- `inputs` is the list of your model's inputs.
- `outputs` is the list of your model's outputs, whose behavior is evaluated by the method.
- `sample` is the number of trajectories sampled, which in practice determines both the accuracy of the estimation of the sensitivity indices and the total number of runs.
- `level` is the resolution of relative variations, i.e. the number of steps **p** for each dimension of the grid in which trajectories are sampled. In other words, any variation of a factor **delta**_i will be a multiple of **1/p**. It should be adapted to the number of trajectories: a higher number of levels is more suited to a high number of trajectories, so as not to miss parts of the space with a small number of local trajectories.

### Use example

Here is how you can make use of this constructor in OpenMOLE:

```
val i1 = Val[Double]
val i2 = Val[Double]
val i3 = Val[Double]
val o1 = Val[Double]
val o2 = Val[Double]
SensitivityMorris(
  evaluation = model,
  inputs = Seq(
    i1 in (0.0, 1.0),
    i2 in (0.0, 1.0),
    i3 in (0.0, 1.0)),
  outputs = Seq(o1, o2),
  sample = 10,
  level = 10
) hook display
```

### Additional material

A paper describing the method and its evaluation: Campolongo F., Saltelli A., Cariboni J., 2011, From screening to quantitative sensitivity analysis. A unified approach, Computer Physics Communications, 182(4), pp. 978-988.

The book on sensitivity analysis is also a good reference for the description of sensitivity analysis methods and case studies of their applications: Saltelli, A., Tarantola, S., Campolongo, F., & Ratto, M. (2004). Sensitivity analysis in practice: a guide to assessing scientific models (Vol. 1). New York: Wiley.

OpenMOLE Market example

## Saltelli's method

Saltelli's method is a statistical method for global sensitivity analysis. It estimates sensitivity indices based on relative variances. More precisely, the first order sensitivity coefficient for factor **x**_i and output indicator **y**_j is computed by first estimating, conditionally on each value of **x**_i, the expectancy of **y**_j with all other factors varying, and then considering the variance of these conditional expectancies. In simpler words, it is the variance after projecting along the dimension of the factor. It is written as **Var**_xi[**E**_X~i(y_j | x_i)] / **Var**[y_j], where **X**_~i denotes all factors but **x**_i. Another global sensitivity index does not consider a projection, but the full behavior along the factor for all other possible parameter values. This corresponds to the total effect, i.e. the first order effect but also the interactions with other factors. This index is written as **E**_X~i[**Var**_xi(y_j | X_~i)] / **Var**[y_j]. In practice, Sobol quasi-random sequences are used to estimate the indices. The computational budget for this method is fixed by the number of Sobol points drawn, so in practice the user directly controls the number of model runs.
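The nested expectation defining the first order index can be made concrete with a brute-force Monte Carlo sketch. This is a didactic illustration with a made-up linear model, not Saltelli's estimator (which is far more efficient and is what OpenMOLE relies on, together with Sobol sequences): the outer loop fixes **x**_i, the inner loop averages the output over the other factor.

```python
import random
import statistics

def model(x1, x2):
    # Hypothetical toy model: y = x1 + 2*x2 with x1, x2 ~ U(0, 1).
    return x1 + 2.0 * x2

def first_order_index(i, n_outer=300, n_inner=300, seed=7):
    """Brute-force Var_{x_i}[E(y | x_i)] / Var[y], the first order index."""
    rng = random.Random(seed)
    cond_means, all_y = [], []
    for _ in range(n_outer):
        xi = rng.random()                  # fix factor i
        ys = []
        for _ in range(n_inner):           # vary the other factor
            other = rng.random()
            args = (xi, other) if i == 0 else (other, xi)
            ys.append(model(*args))
        cond_means.append(statistics.mean(ys))
        all_y.extend(ys)
    return statistics.pvariance(cond_means) / statistics.pvariance(all_y)

s1 = first_order_index(0)
s2 = first_order_index(1)
# Analytically, Var[y] = 1/12 + 4/12, so S1 = 0.2 and S2 = 0.8;
# the Monte Carlo estimates land near these values.
```

For this additive model the first order indices sum to 1; interactions between factors would show up as a gap between the first order and total effect indices.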

## Saltelli's method within OpenMOLE

### Specific constructor

The `SensitivitySaltelli` constructor is defined in OpenMOLE and can take the following parameters:

- `evaluation` is the task (or a composition of tasks) that uses your inputs, typically your model task.
- `inputs` is the list of your model's inputs.
- `outputs` is the list of your model's outputs for which the sensitivity indices will be computed.
- `sample` is the number of samples to draw for the estimation of the relative variances, which corresponds exactly to the number of model runs. The higher the dimension, the poorer the estimation of the indices will be for a low number of samples.

### Hook

The `hook` keyword is used to save or display results generated during the execution of a workflow. The generic way to use it is to write either `hook(workDirectory / "path/of/a/file")` to save the results to a file, or `hook display` to display the results in the standard output.

The output file contains two parts for each index: the `firstOrderIndices` and the `totalOrderIndices`, each containing the matrices of indices for each factor and each indicator.
### Use example

Here is how you can make use of this constructor in OpenMOLE:

```
val i1 = Val[Double]
val i2 = Val[Double]
val i3 = Val[Double]
val o1 = Val[Double]
val o2 = Val[Double]
SensitivitySaltelli(
  evaluation = model,
  inputs = Seq(
    i1 in (0.0, 1.0),
    i2 in (0.0, 1.0),
    i3 in (0.0, 1.0)),
  outputs = Seq(o1, o2),
  sample = 100
) hook display
```