
How to deal with uncertainties?

Added on 13/02/2023

A model contains a large number of inputs, every single input is surrounded by uncertainty, and the end result is a translation of the composite effect of all inputs and their potential variance.


As we all know, simulation models should predict the targeted outcome(s). The outcome can be the estimated budget, a health gain (e.g. an overall survival prediction) or any other quantification related to the subject of the analysis. As with almost all models, the designer should build a model that quantifies the potential effect without losing the user in too much detail and complexity. The balance between complexity and user-friendliness should be such that the effort needed to simulate a given hypothetical situation produces a result that can help decision makers in their analysis.

In general, healthcare-related simulation models rely on epidemiologic estimations (e.g. the proportion of the population suffering from a predefined disease or event) as well as clinical efficacy data from clinical trials. The smaller the sample, the higher the uncertainty. Even if you were able to capture all patients in the given model, uncertainty would remain a point of discussion, as patient characteristics (age, body weight, heredity, …) can influence the way patients respond to a given treatment. Moreover, every single input is surrounded by uncertainty, and the end result is a translation of the composite effect of all inputs and their potential variance.

How to deal with uncertainties?

Sample size: A basic idea is to rely on data from a sample that is large enough to reduce uncertainty as much as possible. The smaller the sample, the more uncertainty will be part of the analysis. This is a well-known problem for orphan drugs, as it is almost impossible to collect sufficient data within an acceptable time frame (in general, the predefined time horizon of the clinical study). Published data from other studies can be used for validation and provide additional insights.
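As a rough illustration of how sample size drives uncertainty, the half-width of an approximate 95% confidence interval for an observed proportion shrinks with the square root of the sample size. This is a minimal sketch; the 30% response rate and the sample sizes are hypothetical:

```python
import math

def ci_halfwidth(p, n, z=1.96):
    """Approximate 95% CI half-width for a proportion p observed in a sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical 30% response rate at increasing sample sizes
for n in (30, 120, 1000):
    print(f"n={n:5d}: 30% \u00b1 {ci_halfwidth(0.30, n):.1%}")
```

Quadrupling the sample size only halves the interval, which is why small orphan-drug trials leave so much residual uncertainty.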

Direct comparison: Single-arm studies do not have a comparator. The only way to compare such data is to use data from other studies (or the literature) and 'match' both data sets (a 'matching-adjusted indirect comparison', MAIC). However, when matching data sets, non-comparable data are removed, which results in fewer data points and thus increases the uncertainty.
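To make the matching step concrete, here is a minimal one-covariate sketch of the method-of-moments weighting behind a matching-adjusted indirect comparison: each patient in the single-arm study receives a weight so that the weighted mean of a baseline characteristic matches the mean reported by the comparator study. The ages and target mean are hypothetical, and real MAICs match several covariates at once:

```python
import math

def maic_weights(values, target_mean, iters=50):
    """One-covariate MAIC sketch: weights w_i = exp(b * (x_i - target)),
    with b chosen so the weighted mean of `values` equals `target_mean`."""
    x = [v - target_mean for v in values]
    b = 0.0
    for _ in range(iters):
        w = [math.exp(b * xi) for xi in x]
        f = sum(wi * xi for wi, xi in zip(w, x))        # first-moment gap
        fp = sum(wi * xi * xi for wi, xi in zip(w, x))  # derivative (always > 0)
        b -= f / fp                                     # Newton step
    return [math.exp(b * xi) for xi in x]

ages = [45, 50, 55, 60, 70]                 # hypothetical single-arm patients
w = maic_weights(ages, target_mean=52.0)    # hypothetical comparator mean age
wmean = sum(wi * a for wi, a in zip(w, ages)) / sum(w)
ess = sum(w) ** 2 / sum(wi ** 2 for wi in w)  # effective sample size
print(f"weighted mean age: {wmean:.1f}, effective sample size: {ess:.2f}")
```

The effective sample size printed at the end shows the price of matching: the further the weights drift from one, the fewer 'effective' patients remain, which is exactly the extra uncertainty described above.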

Local data: There is no need to point out that source data from patients or studies on the other side of the globe might not be representative of the data needed to simulate the impact in the country of interest. This holds for cost data as well as for clinical data. Treatment guidelines can differ between countries, and patient characteristics can influence the response rate. Of course, sometimes no local data are available and the use of foreign data is the only option you have, an additional cause of uncertainty.

Expert opinion: Even when robust, country-specific data are available, input from key opinion leaders will decrease the uncertainty of the assumptions and estimations made. Annual market shares can be validated by the clinical experts who will use your innovation in their practice. Moreover, extrapolated survival curves can be discussed with them, and decision-related data (e.g. patient flow) can help to reduce uncertainty.

Even when using the most optimal, validated dataset, simulation models will not be able to predict the outcomes with 100% certainty. The way to deal with this is to perform an uncertainty analysis, which can be done with a deterministic or a probabilistic approach.

A deterministic sensitivity analysis (DSA) analyses the impact of every single parameter used in your analysis. To do so, every single value (the input value used in the base case) is replaced by a lower and an upper value to trace the (potential) impact versus the base-case outcome. By varying each parameter in turn, a deterministic sensitivity analysis allows you to trace the parameter with the biggest impact on the end results. All outcomes are then ranked top-down. At least two options are possible to define the upper and lower values. Option 1: all parameters are subjected to the same variance (for example, 15% in both directions). Option 2: each parameter is varied over its confidence interval (e.g. a 95% CI). There is no outspoken preference for option 1 or option 2. Option 1 provides insight into which parameters influence the end results the most in either direction, but it will not have much power to generate real-life insights. The Belgian KCE guidelines for economic analysis do not tell the user which option to use (KCE 2012, Belgian Guidelines for Economic Evaluation and Budget Impact Analysis, Second Edition, 183C). Our recommendation is to select the option that best helps decision makers analyse the cost/benefit of the innovation concerned.
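The one-at-a-time variation and top-down ranking can be sketched in a few lines. The toy budget-impact model and its inputs are hypothetical; the point is the mechanics of varying each parameter in isolation (option 1, ±15%) and ordering the results as in a tornado diagram:

```python
def model(p):
    """Toy budget-impact model (hypothetical): net cost per patient x patients x uptake."""
    return (p["drug_cost"] - p["offset_savings"]) * p["patients"] * p["uptake"]

base = {"drug_cost": 20_000.0, "offset_savings": 5_000.0,
        "patients": 500.0, "uptake": 0.40}
base_out = model(base)

# One-way DSA: replace each base-case value by a lower and an upper value (+/-15%)
impacts = []
for name in base:
    low = model({**base, name: base[name] * 0.85}) - base_out
    high = model({**base, name: base[name] * 1.15}) - base_out
    impacts.append((name, low, high))

# Rank top-down by total swing, as in a tornado diagram
impacts.sort(key=lambda r: abs(r[2] - r[1]), reverse=True)
for name, low, high in impacts:
    print(f"{name:>15}: {low:+,.0f} to {high:+,.0f}")
```

Because the offset savings enter the model as a subtraction, their swing is much smaller than that of the drug cost, so they land at the bottom of the ranking: exactly the insight a tornado chart is meant to surface.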

In health economic analyses, a probabilistic sensitivity analysis (PSA) is the preferred method to measure uncertainty. All variables are varied at random at the same time, and the outcome of each run is recorded. By repeating this many times, a distribution of potential outcomes is generated, from which one can calculate the aggregated average of all results. Needless to say, the more runs you perform (e.g. 1,000), the more representative the average will be. In practice, the minimum and maximum values should be based on the confidence interval reported for each parameter in the studies. When no CI is traceable, one can continue with a (predefined) logical variance (e.g. 15%). All PSA results can be mapped in a scatterplot, and one can expect the results to be distributed around the central deterministic value of the base-case analysis.
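A minimal Monte Carlo sketch of that procedure, reusing a hypothetical toy model: every input is drawn at random on each run, and the standard deviations below stand in for the confidence intervals a real PSA would take from the studies:

```python
import random
import statistics

random.seed(7)  # reproducible draws for this illustration

def psa(n_runs=1000):
    """PSA sketch: draw every input at random on each run and
    collect the distribution of outcomes (toy model, hypothetical inputs)."""
    outcomes = []
    for _ in range(n_runs):
        cost = random.gauss(20_000, 1_500)   # net cost per patient
        patients = random.gauss(500, 40)     # treated population
        uptake = random.gauss(0.40, 0.03)    # market share
        outcomes.append(cost * patients * uptake)
    return outcomes

runs = sorted(psa())
print(f"aggregated average: {statistics.mean(runs):,.0f}")
print(f"central 95% of runs: {runs[25]:,.0f} to {runs[974]:,.0f}")
```

Plotting `runs` as a scatterplot (or histogram) would show the cloud of outcomes spread around the deterministic base-case value, as described above.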

Scenarios can also be valuable. Different sources (literature) can provide other insights, but which source is the most correct? It can be worthwhile to run scenarios that rely on different data sets, making additional information (the impact on the end result) available to facilitate the decision process.

In conclusion, if you are involved in the interpretation of simulation models, always bear in mind that the sample size will predict a potential outcome which can differ from 'real life'. As most predictions are projected into the future (sometimes over a lifetime horizon), one needs to be aware that uncertainty will be part of your analysis. Dare to challenge your predictions and play devil's advocate to understand the impact of single input values. Sensitivity analysis can help to build a smart strategy and provides an opportunity to interact with key opinion leaders. This way, you will be ready to provide insights to the payer in full transparency and pave the way to market access.

Do you have questions, or would you like to know more? Please feel free to contact us.