Sensitivity analysis

Sensitivity analysis is the study of how the uncertainty in the output of a mathematical model or system (numerical or otherwise) can be divided and allocated to different sources of uncertainty in its inputs.[1][2] A related practice is uncertainty analysis, which has a greater focus on uncertainty quantification and propagation of uncertainty; ideally, uncertainty and sensitivity analysis should be run in tandem.

The process of recalculating outcomes under alternative assumptions to determine the impact of a variable in a sensitivity analysis can be useful for a range of purposes,[3] including:

  • Testing the robustness of the results of a model or system in the presence of uncertainty.
  • Increased understanding of the relationships between input and output variables in a system or model.
  • Uncertainty reduction, through the identification of model inputs that cause significant uncertainty in the output and should therefore be the focus of attention in order to increase robustness (perhaps by further research).
  • Searching for errors in the model (by encountering unexpected relationships between inputs and outputs).
  • Model simplification – fixing model inputs that have no effect on the output, or identifying and removing redundant parts of the model structure.
  • Enhancing communication from modelers to decision makers (e.g. by making recommendations more credible, understandable, compelling or persuasive).
  • Finding regions in the space of input factors for which the model output is either maximum or minimum or meets some optimum criterion (see optimization and Monte Carlo filtering; a filtering sketch follows this list).
  • When calibrating models with a large number of parameters, a preliminary sensitivity test can ease the calibration stage by focusing on the sensitive parameters. Not knowing the sensitivity of parameters can result in time being wasted on non-sensitive ones.[4]
  • To seek to identify important connections between observations, model inputs, and predictions or forecasts, leading to the development of better models.[5][6]
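
The Monte Carlo filtering mentioned above can be sketched in a few lines. The following is a minimal illustration in Python/NumPy; the toy model and the "behavioural" criterion (top decile of outputs) are assumptions of this sketch, not taken from any particular study. Runs are split by whether the output meets the criterion, and a two-sample Kolmogorov–Smirnov test checks, input by input, whether the behavioural runs occupy a distinct region of the input space.

```python
import numpy as np
from scipy.stats import ks_2samp

# Hypothetical toy model: the output is driven mainly by x1.
def model(X):
    return X[:, 0] ** 2 + 0.1 * X[:, 1]

rng = np.random.default_rng(42)
X = rng.uniform(-1.0, 1.0, (10_000, 2))
y = model(X)

# Arbitrary criterion for this sketch: the top decile of outputs
# counts as "behavioural".
behavioural = y > np.quantile(y, 0.9)

# For each input, compare its distribution in the behavioural and
# non-behavioural groups; a large KS statistic flags an input that
# matters for producing high outputs.
for i in range(X.shape[1]):
    res = ks_2samp(X[behavioural, i], X[~behavioural, i])
    print(f"x{i+1}: KS statistic = {res.statistic:.3f}, p = {res.pvalue:.2e}")
```

Here x1 should show a much larger KS statistic than x2, correctly identifying the input responsible for high output values.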

Overview

A mathematical model (for example in biology, climate change, economics or engineering) can be highly complex, and as a result the relationships between its inputs and outputs may be poorly understood. In such cases, the model can be viewed as a black box, i.e. the output is an “opaque” function of its inputs.

Quite often, some or all of the model inputs are subject to sources of uncertainty, including errors of measurement, absence of information and poor or partial understanding of the driving forces and mechanisms. This uncertainty imposes a limit on our confidence in the response or output of the model. Further, models may have to cope with the natural intrinsic (aleatory) variability of the system, such as the occurrence of stochastic events.[7]

Good modeling practice requires that the modeler provide an evaluation of the confidence in the model. This requires, first, a quantification of the uncertainty in any model results (uncertainty analysis); and second, an evaluation of how much each input is contributing to the output uncertainty. Sensitivity analysis addresses the second of these issues (although uncertainty analysis is usually a necessary precursor), performing the role of ordering by importance the strength and relevance of the inputs in determining the variation in the output.[2]
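
As a minimal sketch of this two-step logic, consider the Python/NumPy fragment below. The model and the input distributions are hypothetical stand-ins. The first block propagates the assumed input uncertainty through the model (uncertainty analysis); the second orders the inputs by a crude squared-correlation measure, which is adequate only for near-linear responses (variance-based alternatives are discussed later in the article).

```python
import numpy as np

# Hypothetical near-linear model with three uncertain inputs.
def model(x1, x2, x3):
    return x1 + 2.0 * x2 + 0.1 * x3

rng = np.random.default_rng(0)
n = 100_000

# Uncertainty analysis: propagate assumed input distributions through
# the model and quantify the spread of the output.
x1 = rng.normal(1.0, 0.5, n)
x2 = rng.normal(0.0, 1.0, n)
x3 = rng.uniform(-1.0, 1.0, n)
y = model(x1, x2, x3)
print(f"output mean = {y.mean():.3f}, output std = {y.std():.3f}")

# Sensitivity analysis: order the inputs by the share of output
# variation each one explains (squared correlation; linear case only).
for name, x in (("x1", x1), ("x2", x2), ("x3", x3)):
    r2 = np.corrcoef(x, y)[0, 1] ** 2
    print(f"{name}: squared correlation with output = {r2:.3f}")
```

With these assumed distributions, x2 dominates the output variance, x1 contributes modestly, and x3 is negligible, which is the kind of importance ordering sensitivity analysis aims to produce.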

In models involving many input variables, sensitivity analysis is an essential ingredient of model building and quality assurance. National and international agencies involved in impact assessment studies have included sections devoted to sensitivity analysis in their guidelines. Examples are the European Commission (see e.g. the guidelines for impact assessment),[8] the White House Office of Management and Budget, the Intergovernmental Panel on Climate Change and the US Environmental Protection Agency’s modeling guidelines.[9] In a comment published in 2020 in the journal Nature, 22 scholars take COVID-19 as the occasion for suggesting five ways to make models serve society better. One of the five recommendations, under the heading of ‘Mind the assumptions’, is to ‘perform global uncertainty and sensitivity analyses […] allowing all that is uncertain — variables, mathematical relationships and boundary conditions — to vary simultaneously as runs of the model produce its range of predictions.’[10]

Settings, constraints, and related issues

Settings and constraints

The choice of method of sensitivity analysis is typically dictated by a number of problem constraints or settings. Some of the most common are:

  • Computational expense: Sensitivity analysis is almost always performed by running the model a (possibly large) number of times, i.e. a sampling-based approach.[11] This can be a significant problem when:
    • A single run of the model takes a significant amount of time (minutes, hours or longer). This is not unusual with very complex models.
    • The model has a large number of uncertain inputs. Sensitivity analysis is essentially the exploration of the multidimensional input space, which grows exponentially in size with the number of inputs. See the curse of dimensionality.

Computational expense is a problem in many practical sensitivity analyses. Some methods of reducing computational expense include the use of emulators (for large models) and screening methods (for reducing the dimensionality of the problem). Another method is to use an event-based sensitivity analysis method for variable selection in time-constrained applications.[12] This input variable selection (IVS) method assembles information about the trace of changes in system inputs and outputs, using sensitivity analysis to produce an input/output trigger/event matrix that maps the relationships between input data, as causes that trigger events, and the output data that describes the actual events. The cause-effect relationship between the causes of state change, i.e. the input variables, and the system output parameters determines which set of inputs has a genuine impact on a given output. The method has a clear advantage over analytical and computational IVS methods, since it tries to understand and interpret system state change in the shortest possible time with minimum computational overhead.[12][13]
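
As an illustration of the emulator idea, the sketch below fits a cheap quadratic response surface to a modest number of runs of an assumed "expensive" model (a stand-in function; a real application would call a simulator, and often use a Gaussian-process emulator rather than a polynomial). The surrogate then absorbs the large number of evaluations a sampling-based analysis requires.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an expensive simulator (hypothetical; in practice each
# call might take minutes or hours).
def expensive_model(X):
    return np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

def quad_basis(X):
    # Quadratic response-surface basis: [1, x1, x2, x1^2, x2^2, x1*x2]
    return np.column_stack([
        np.ones(len(X)), X[:, 0], X[:, 1],
        X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1],
    ])

# Fit the emulator to a small design of expensive runs...
X_train = rng.uniform(-1.0, 1.0, (50, 2))
coef, *_ = np.linalg.lstsq(quad_basis(X_train), expensive_model(X_train),
                           rcond=None)
emulator = lambda X: quad_basis(X) @ coef

# ...then let the cheap emulator absorb the many evaluations that a
# sampling-based sensitivity analysis requires.
X_big = rng.uniform(-1.0, 1.0, (100_000, 2))
print(f"emulated output variance = {emulator(X_big).var():.4f}")
```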

  • Correlated inputs: Most common sensitivity analysis methods assume independence between model inputs, but sometimes inputs can be strongly correlated. This is still an immature field of research, and definitive methods have yet to be established.
  • Nonlinearity: Some sensitivity analysis approaches, such as those based on linear regression, can measure sensitivity inaccurately when the model response is nonlinear with respect to its inputs. In such cases, variance-based measures are more appropriate (a worked sketch follows this list).
  • Model interactions: Interactions occur when the simultaneous perturbation of two or more inputs causes greater variation in the output than varying each of the inputs alone. Such interactions are present in any model that is non-additive, but they are neglected by methods such as scatterplots and one-at-a-time (OAT) perturbations.[14] The effect of interactions can be measured by the total-order sensitivity index, which the sketch after this list also estimates.
  • Multiple outputs: Virtually all sensitivity analysis methods consider a single univariate model output, yet many models produce a large number of possibly spatially or time-dependent outputs. Note that this does not preclude performing a separate sensitivity analysis for each output of interest. However, for models in which the outputs are correlated, the sensitivity measures can be hard to interpret.
  • Given data: While in many cases the practitioner has access to the model, in some instances a sensitivity analysis must be performed with “given data”, i.e. where the sample points (the values of the model inputs for each run) cannot be chosen by the analyst. This may occur when a sensitivity analysis has to be performed retrospectively, perhaps using data from an optimisation or uncertainty analysis, or when data comes from a discrete source.[15]
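
The variance-based first-order and total-order measures referred to above can be estimated with a "pick-freeze" sampling scheme. The sketch below uses plain NumPy and the Ishigami function, a standard nonlinear, non-additive test case; the function and the estimator choices (Saltelli's for first-order, Jansen's for total-order) are common in the literature but are assumptions of this sketch, not prescribed by the text above.

```python
import numpy as np

# Ishigami function: a standard nonlinear, non-additive test model.
def ishigami(X, a=7.0, b=0.1):
    return (np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2
            + b * X[:, 2] ** 4 * np.sin(X[:, 0]))

rng = np.random.default_rng(1)
n, d = 100_000, 3

# Pick-freeze scheme: two independent input matrices over [-pi, pi]^3.
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = ishigami(A), ishigami(B)
var_y = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]  # A with column i taken from B
    fABi = ishigami(ABi)
    S1 = np.mean(fB * (fABi - fA)) / var_y        # first-order (Saltelli)
    ST = 0.5 * np.mean((fA - fABi) ** 2) / var_y  # total-order (Jansen)
    print(f"x{i+1}: S1 = {S1:.3f}, ST = {ST:.3f}")

# Analytical reference for a=7, b=0.1: S1 ~ (0.314, 0.442, 0.000) and
# ST ~ (0.558, 0.442, 0.244); ST3 > S3 = 0 exposes the x1-x3 interaction.
```

The gap between a total-order index and the corresponding first-order index measures how much of an input's influence is exerted through interactions; here x3 has no first-order effect at all, yet a substantial total-order index.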

Assumptions vs. inferences

In uncertainty and sensitivity analysis there is a crucial trade-off between how scrupulous an analyst is in exploring the input assumptions and how wide the resulting inference may be. The point is well illustrated by the econometrician Edward E. Leamer:[16][17]

I have proposed a form of organized sensitivity analysis that I call ‘global sensitivity analysis’ in which a neighborhood of alternative assumptions is selected and the corresponding interval of inferences is identified. Conclusions are judged to be sturdy only if the neighborhood of assumptions is wide enough to be credible and the corresponding interval of inferences is narrow enough to be useful.

Note that Leamer’s emphasis is on the need for ‘credibility’ in the selection of assumptions. The easiest way to invalidate a model is to demonstrate that it is fragile with respect to the uncertainty in the assumptions, or to show that its assumptions have not been taken ‘wide enough’. The same concept is expressed by Jerome R. Ravetz, for whom bad modeling is when uncertainties in inputs must be suppressed lest outputs become indeterminate.[18]

Pitfalls and difficulties

Some common difficulties in sensitivity analysis include

  • Too many model inputs to analyse. Screening can be used to reduce dimensionality. Another way to tackle the curse of dimensionality is to use sampling based on low-discrepancy sequences.[19]
  • The model takes too long to run. Emulators (including HDMR) can reduce the number of model runs needed.
  • There is not enough information to build probability distributions for the inputs. Probability distributions can be constructed from expert elicitation, although even then it may be hard to build distributions with great confidence. The subjectivity of the probability distributions or ranges will strongly affect the sensitivity analysis.
  • Unclear purpose of the analysis. Different statistical tests and measures are applied to the problem, and different factor rankings are obtained. The test should instead be tailored to the purpose of the analysis; e.g., one uses Monte Carlo filtering if one is interested in which factors are most responsible for generating high/low values of the output.
  • Too many model outputs are considered. This may be acceptable for the quality assurance of sub-models but should be avoided when presenting the results of the overall analysis.
  • Piecewise sensitivity. This is when one performs sensitivity analysis on one sub-model at a time. This approach is non-conservative, as it might overlook interactions among factors in different sub-models (Type II error).
  • The commonly used OAT approach is not valid for nonlinear models. Global methods should be used instead.[20] The sketch below contrasts the two.
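
To see why OAT fails for nonlinear, non-additive models, consider the toy sketch below. The model is hypothetical and chosen so that all of its output variance comes from an interaction: OAT perturbations around a central baseline report zero effect for both inputs, while a global exploration of the input space reveals the variability immediately.

```python
import numpy as np

# Hypothetical non-additive toy model: all of its output variance
# comes from the interaction between the two inputs.
def model(X):
    return X[:, 0] * X[:, 1]

# OAT: perturb one input at a time around a central baseline.
baseline = np.zeros((1, 2))
for i in range(2):
    lo, hi = baseline.copy(), baseline.copy()
    lo[0, i], hi[0, i] = -1.0, 1.0
    print(f"OAT effect of x{i+1}: {(model(hi) - model(lo))[0]:.3f}")  # 0.000

# Global: vary both inputs simultaneously over [-1, 1]^2.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (100_000, 2))
print(f"global output variance: {model(X).var():.3f}")  # ~1/9, not zero
```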

Applications

Examples of sensitivity analyses can be found in various areas of application, such as:

  • Environmental sciences
  • Business
  • Social sciences
  • Chemistry
  • Engineering
  • Epidemiology
  • Meta-analysis
  • Multi-criteria decision making
  • Time-critical decision making
  • Model calibration
  • Uncertainty quantification

Sensitivity auditing

It may happen that a sensitivity analysis of a model-based study is meant to underpin an inference, and to certify its robustness, in a context where the inference feeds into a policy or decision-making process. In these cases the framing of the analysis itself, its institutional context, and the motivations of its author may become a matter of great importance, and a pure sensitivity analysis – with its emphasis on parametric uncertainty – may be seen as insufficient. The emphasis on the framing may derive, inter alia, from the relevance of the policy study to different constituencies that are characterized by different norms and values, and hence by a different story about ‘what the problem is’ and foremost about ‘who is telling the story’. Most often the framing includes more or less implicit assumptions, ranging from the political (e.g. which group needs to be protected) to the technical (e.g. which variable can be treated as a constant).

In order to take these concerns into due consideration, the instruments of SA have been extended to provide an assessment of the entire knowledge and model generating process. This approach has been called ‘sensitivity auditing’. It takes inspiration from NUSAP,[53] a method used to qualify the worth of quantitative information with the generation of ‘pedigrees’ of numbers. Likewise, sensitivity auditing has been developed to provide pedigrees of models and model-based inferences.[54] Sensitivity auditing has been especially designed for an adversarial context, where not only the nature of the evidence, but also the degree of certainty and uncertainty associated with the evidence, will be the subject of partisan interests.[55] Sensitivity auditing is recommended in the European Commission guidelines for impact assessment,[8] as well as in the report Science Advice for Policy by European Academies.[56]
