
Managing Uncertainty via Deterministic Sensitivity Analysis

Added on 29/03/2025


Simulating the financial impact of introducing a new technology inherently involves uncertainty. As a result, the central outcome should be seen as an indicative reference for decision-makers who must assess whether the implementation of the new technology aligns with its clinical added value. In one of our previous blogs (“How to Deal with Uncertainties”), we briefly touched on deterministic sensitivity analysis (DSA).

In this blog, we take a deeper dive into this type of sensitivity analysis.

Imagine a medical technology is introduced to the market, and the developer needs to convince payers that the budget needed will be used efficiently. In this case, the developer can conduct an uncertainty analysis. The first step is to make a solid estimate of the number of interventions with the new technology. This estimate is based on current clinical practice (how many patients are currently managed with existing technology) and a simulation of the market share that the innovation is expected to capture.

Several unknown factors play a role in this estimation, including but not limited to:

  • The number of patients in both scenarios (before and after the introduction of the innovative technology)
  • The acquired market share (per year)
  • Shifts within existing technologies
  • Additional effects, such as reduced hospital stays leading to a quicker return to economic activity

Secondary effects may also vary, such as a reduction in clinical events or follow-up visits. In short, as an analyst, you need to map out all potential effects. The data typically come from clinical studies (provided they follow a solid study protocol), from the literature (indirect information), or from expert surveys. Regardless of the source, all data are based on samples, as it is impossible to include all patients over an indefinite time horizon.

Since sampling is used, there is inherent uncertainty in the core data. To mitigate this, clinical studies must ensure sufficiently large sample sizes to limit confidence intervals and achieve statistically significant results, demonstrating that the new technology is superior (or at least not inferior) to current technologies.

As you can see, almost all economic analyses involve some degree of uncertainty regarding the central outcome measure. But how can this uncertainty be incorporated into a budget impact analysis? There are at least two options:

Option 1: Confidence Interval-Based Sensitivity Analysis

In this approach, you determine the minimum and maximum values of the parameters based on study data, using, for example, the 95% confidence interval as a boundary. The advantage is that the impact on the central outcome is assessed based on the variation reported in the study. If the reported variation is 2%, it makes little sense to conduct a sensitivity analysis with a 10% deviation, as this would likely be unrealistic in a real-world setting.

Additionally, incorporating every uncertain parameter in the sensitivity analysis helps decision-makers identify which parameter has the most significant real-world impact on the available budget.
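To make Option 1 concrete, here is a minimal sketch in Python. The budget model, parameter names, and confidence interval bounds are all illustrative assumptions, not figures from any actual study. Each parameter is moved to its lower and upper CI bound in turn while the others stay at their base-case values, which is the one-at-a-time logic described above.

```python
# Minimal sketch of a confidence interval-based one-way sensitivity analysis.
# The budget model and all numbers below are illustrative assumptions.

def budget_impact(patients, market_share, cost_new, cost_old):
    """Net budget impact: extra spend on the new technology vs. the old one."""
    treated_new = patients * market_share
    return treated_new * (cost_new - cost_old)

# Base-case values with lower/upper bounds taken from (assumed) 95% CIs.
parameters = {
    "patients":     (1000, 950, 1050),
    "market_share": (0.30, 0.25, 0.35),
    "cost_new":     (5000, 4800, 5200),
    "cost_old":     (3500, 3300, 3700),
}

base = {name: values[0] for name, values in parameters.items()}
base_result = budget_impact(**base)
print(f"Base case: {base_result:,.0f}")

# Vary one parameter at a time to its CI bounds, holding the rest at base case.
for name, (_, low, high) in parameters.items():
    for bound in (low, high):
        scenario = dict(base, **{name: bound})
        print(f"{name} = {bound}: {budget_impact(**scenario):,.0f}")
```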

Option 2: Fixed Percentage Variation Sensitivity Analysis

In this approach, the minimum and maximum values of the parameters are determined based on a predefined deviation (e.g., 15%). Each parameter is tested uniformly, allowing the parameter with the greatest effect on the central outcome to be identified.

For example, if 10 parameters in an analysis are subject to uncertainty, it is best to include all 10 in the sensitivity analysis. After each run (20 in total: 10 with the parameter at its lowest value and 10 at its highest), the impact on the central outcome is assessed. The sum of the deviations (expressed in absolute budget terms) of the minimum and maximum values is then ranked, allowing the identification of the parameter with the most significant potential impact on the projected budget.
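A minimal sketch of this ranking procedure in Python, reusing the same illustrative model and base-case values as above, with an assumed fixed deviation of 15%:

```python
# Fixed-percentage variant: vary every uncertain parameter by +/-15% and rank
# parameters by total absolute deviation from the base case (tornado ranking).
# Model and numbers are the same illustrative assumptions as in the sketch above.

def budget_impact(patients, market_share, cost_new, cost_old):
    return patients * market_share * (cost_new - cost_old)

base = {"patients": 1000, "market_share": 0.30, "cost_new": 5000, "cost_old": 3500}
base_result = budget_impact(**base)
DEVIATION = 0.15  # the predefined fixed deviation

ranking = []
for name, value in base.items():
    low_run  = budget_impact(**dict(base, **{name: value * (1 - DEVIATION)}))
    high_run = budget_impact(**dict(base, **{name: value * (1 + DEVIATION)}))
    total = abs(low_run - base_result) + abs(high_run - base_result)
    ranking.append((name, total))

# Rank from largest to smallest impact, as in a tornado diagram.
for name, deviation in sorted(ranking, key=lambda item: item[1], reverse=True):
    print(f"{name}: total absolute deviation {deviation:,.0f}")
```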

Which Option Is Preferable?

Should you choose Option 1 (confidence intervals) or Option 2 (fixed proportional deviation)? Both approaches have their advantages and disadvantages.

As consultants, we recommend using both methods. For payers, it is crucial to assess financial risks as they would occur in real-life situations (Option 1). At the same time, understanding which determinant has the most significant impact on the final budget (Option 2) is equally valuable.

A confidence interval is derived from a sample, but is this sample relevant to the clinical setting where the new technology will be implemented? Are the inclusion and exclusion criteria of the study comparable to real-world clinical practice? For example, if the new technology is an Advanced Therapy Medicinal Product (ATMP) applicable to a very small population, is the confidence interval the best measure to use? These are reasons to consider using a fixed variation approach.

On the other hand, a fixed variation has its limits too: it makes little sense to vary the number of patients beyond the maximum prevalence.

Which Parameters Should Be Included?

Another key question is which parameters to include in the analysis. In theory, all parameters leading to the final result should be included, except for fixed values. For instance, drug prices are usually known and not subject to variability, making them irrelevant for a sensitivity analysis. However, if the dosage is weight-dependent, it may be necessary to allow for variations in the average patient weight.
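As a small illustration of the weight-dependent case (the price, dosing rule, and weight range below are assumed values for the sketch only):

```python
# Illustrative sketch: when the dose depends on body weight, the drug cost per
# patient varies with the average weight even though the price per mg is fixed.
PRICE_PER_MG = 2.0    # fixed, known price -> not varied in the DSA
DOSE_MG_PER_KG = 5.0  # fixed dosing rule

def drug_cost_per_patient(mean_weight_kg):
    return mean_weight_kg * DOSE_MG_PER_KG * PRICE_PER_MG

for weight in (70, 78, 86):  # assumed lower bound, base case, upper bound
    print(f"mean weight {weight} kg: cost {drug_cost_per_patient(weight):,.0f}")
```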

Additionally, micro-effects should generally be excluded. The cost of a single doctor’s visit, for example, is negligible compared with that of a hospital admission. However, if the analysis shows that a follow-up visit can be eliminated for every patient, the accumulated saving may still be a determining factor.

It is up to the analyst to determine which parameters are relevant and which are less significant.

Grouping Costs Where Possible

We recommend clustering costs where feasible. For example, the cost of an adverse event may be the sum of multiple sub-costs. Rather than varying each sub-cost, it is more meaningful to vary the total cost.
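A minimal sketch of this clustering idea, with assumed sub-costs:

```python
# Cluster the sub-costs of an adverse event into one total, then vary the
# total in the sensitivity analysis instead of each sub-cost separately.
adverse_event_subcosts = {          # illustrative amounts
    "extra_bed_days": 1200,
    "re_intervention": 800,
    "extra_imaging": 150,
}
adverse_event_cost = sum(adverse_event_subcosts.values())

for factor in (0.85, 1.0, 1.15):    # one +/-15% variation of the clustered total
    print(f"adverse event cost x {factor:.2f}: {adverse_event_cost * factor:,.0f}")
```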

This discussion has primarily focused on deterministic sensitivity analysis. The same principles apply to probabilistic sensitivity analysis (PSA), with the key difference being that values are randomly selected between the predefined extremes. PSA is a distinct technique that we will explore in more detail in a future blog.
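As a brief preview of that difference, the sketch below draws every parameter at random between the same illustrative extremes used earlier. Note that full PSAs typically draw from parameter-specific distributions rather than flat ranges; this is only a minimal illustration of random selection between predefined extremes.

```python
import random

# Minimal PSA-style sketch: draw every parameter at random between its
# predefined extremes and summarise the spread of outcomes. Illustrative only.
random.seed(42)

def budget_impact(patients, market_share, cost_new, cost_old):
    return patients * market_share * (cost_new - cost_old)

bounds = {                       # the same assumed extremes as above
    "patients": (950, 1050), "market_share": (0.25, 0.35),
    "cost_new": (4800, 5200), "cost_old": (3300, 3700),
}

results = []
for _ in range(1000):
    draw = {name: random.uniform(lo, hi) for name, (lo, hi) in bounds.items()}
    results.append(budget_impact(**draw))

results.sort()
print(f"mean {sum(results) / len(results):,.0f}, "
      f"~2.5th pct {results[24]:,.0f}, ~97.5th pct {results[974]:,.0f}")
```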

Do you have any questions or suggestions regarding sensitivity analyses? Feel free to reach out—we’re happy to help!

The formula below helps determine the required sample size to achieve a specified level of confidence:

n = ((Z_α/2 + Z_β) × σ / δ)^2

In this context, Z_α/2 and Z_β are values from the standard normal table (for the chosen confidence level and power), σ is the standard deviation, and δ denotes the specified difference to detect.

Example:

A manufacturer claims that a device has an average lifespan of 850 hours with a standard deviation of 50 hours. However, a payer suspects that this estimate is overstated by 40 hours. To verify this claim with reasonable confidence, how many devices should be tested?

Using the formula:

n = ((1.96 + 0.84) × 50 / 40)^2 = 3.5^2 = 12.25

Rounding up, 13 devices should be tested to determine whether the payer’s assumption is valid with 95% confidence (Z_α/2 = 1.96) and 80% power (Z_β = 0.84).
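For completeness, the same calculation can be checked with a short script, using scipy.stats.norm to recover the Z-values:

```python
import math
from scipy.stats import norm

# Reproduce the sample-size example: detect a 40-hour overstatement of an
# 850-hour mean lifespan (sd = 50) with 95% confidence and 80% power.
sigma, delta = 50, 40
z_alpha = norm.ppf(0.975)   # ~1.96 for a two-sided 95% confidence level
z_beta = norm.ppf(0.80)     # ~0.84 for 80% power

n = ((z_alpha + z_beta) * sigma / delta) ** 2
print(f"raw n = {n:.2f}, rounded up to {math.ceil(n)} devices")  # ~12.26 -> 13
```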