CONSIDERING AND COMMUNICATING UNCERTAINTY IN HTA - HTAi Global Policy Forum 2021 Background Paper - Health ...

CONSIDERING AND COMMUNICATING UNCERTAINTY IN HTA
HTAi Global Policy Forum 2021 Background Paper

Contents
Introduction .................................................................................................................................................... 4
   Prior Policy Fora Topics Relevant to Uncertainty ..................................................................................... 4
Background ..................................................................................................................................................... 6
   Conceptualizing Uncertainty ..................................................................................................................... 6
Input Uncertainty ........................................................................................................................................... 9
   Clinical Uncertainty .................................................................................................................................... 10
   Characterizing Clinical Uncertainty ........................................................................................................... 11
       Quality Measures ................................................................................................................................... 11
       Surrogate Endpoints ..............................................................................................................................12
   Economic Model Uncertainty .................................................................................................................... 12
       Parameter Uncertainty ......................................................................................................................... 13
       Deterministic Sensitivity Analyses ......................................................................................................... 13
       Probabilistic Sensitivity Analyses ........................................................................................................... 14
       Calibration of Extrapolation ................................................................................................................... 14
       Structural (Model) Uncertainty ............................................................................................................. 14
   Affordability Uncertainty ........................................................................................................................... 15
   Summary Approaches to Managing Economic Model Uncertainty ......................................................... 16
   Value of Information Analyses .................................................................................................................. 16
   Tools for Cataloguing Model Uncertainty .................................................................................................16
   Who is Responsible for Uncertainty? ........................................................................................................ 17
   The Future of Input Uncertainty ................................................................................................................ 18
Throughput Uncertainty ................................................................................................................................ 19
   Understanding Uncertainty in the Throughput Stage .............................................................................. 20
       (Non-manufacturer) Stakeholder Input ................................................................................................ 20
   Impact of Uncertainty on HTA Deliberations ............................................................................................ 21
   Managing Throughput Uncertainty ........................................................................................................... 21
   Context of Throughput Uncertainty .......................................................................................................... 22
   Consistency and Predictability ................................................................................................................... 23
   Stakeholder Input ....................................................................................................................................... 24
   Case Study – CAR T Therapy ...................................................................................................................... 24
Output Uncertainty ........................................................................................................................................ 26

   A Framework for Communicating Epistemic Uncertainty ........................................................................ 26
   Communicating to Stakeholders ............................................................................................................... 27
       Patients ................................................................................................................................................... 27
       Technology Manufacturers .................................................................................................................... 28
       Health System Stakeholders .................................................................................................................. 29
   Impact of the COVID-19 Pandemic ............................................................................................................ 29
Acknowledgements ........................................................................................................................................ 31
References ...................................................................................................................................................... 31

Introduction
The purpose of this background paper is to inform the discussion at the HTAi Global Policy Forum (GPF) meeting which, for the first time, will be held virtually across the world in February 2021. The topic chosen for the meeting is “Considering and Communicating Uncertainty in Health Technology Assessment (HTA)”, a theme felt to be especially timely during the COVID-19 pandemic. The topic, and overall outline, was selected by HTAi GPF member representatives in early 2020 and further refined through virtual topic scoping and breakout meetings held during June and July of 2020. The meeting’s main aim is to discuss, at a strategic and policy level, the impact of uncertainty on deliberations and outputs in HTA and to examine how multiple types of uncertainty are best handled and then communicated to multiple stakeholders. It will consider whether global approaches for the treatment of uncertainty can be developed, acknowledging the contextual importance of attitudes to uncertainty and risk and their possible variation across settings. The intention is that the focus of the GPF discussions remains policy-orientated, rather than at a detailed operational or methodological level.

To support the aims of the GPF meeting, this paper presents an overview of key methods and considerations related to uncertainty available in the published literature. This is supplemented by the concerns identified by HTA users, producers, and other stakeholders, as well as those identified by GPF members. These concerns were elicited during 20 expert informant interviews conducted by the GPF Scientific Secretary and Chair, where experts were selected to represent a variety of stakeholder perspectives and “insider” knowledge (see the Acknowledgements for further details). In addition to this, a survey of the current not-for-profit members of the GPF was conducted to determine what explicit methods are in place for considering and communicating uncertainty in their respective organizations. A total of 16 responses from 14 not-for-profit member organizations were received. Review and further input from the HTAi GPF Organizing Committee, the wider HTAi GPF membership, and members of the HTAi Board were also received during the development of this background paper.

It is important to highlight that management and communication of uncertainty as described in this paper are viewed primarily through the lens of high-income countries. This is reflective of the GPF membership as well as the published literature. However, the same conditions and concerns are also present for low- and middle-income countries. Indeed, the HTAi Asia and Latin America Policy Fora have had wide-ranging topics that have implicitly and explicitly included conversations around uncertainty in those regions (https://htai.org/policy-forum/). In addition, the focus of this paper and the GPF discussions will be primarily from the perspective of the HTA community (i.e., those concerned with using or producing HTAs). We acknowledge that other perspectives on uncertainty (such as that of regulators) will differ from those of the HTA community due to different remits, functions, and scopes, and while this is an important consideration and one that has consequences, particularly for technology manufacturers who deal with both perspectives in parallel, it is beyond the scope of this HTAi GPF.

Prior Policy Fora Topics Relevant to Uncertainty
The last GPF in January 2020 discussed deliberative processes in HTA (1), and at this meeting GPF members agreed on three core principles of deliberative processes: transparency, inclusivity, and impartiality. These are closely linked with uncertainty, as it is during the deliberative process that the uncertainty in the available data (hereafter referred to as “input uncertainty”) is debated and decisions on how to manage the uncertainty (and what level of uncertainty can be tolerated) are made. Being transparent about what uncertainty exists and how this impacts the decision, communicating the impact of uncertainty to all relevant stakeholders, and taking an impartial approach to managing uncertainty (i.e., consistency and predictability in how the uncertainty is handled) are also relevant principles for the current topic. There have been other GPF topics that are closely linked to the current area of focus, further demonstrating how embedded the concept of uncertainty is in all facets of HTA. For example, in 2019, the GPF considered “Real World Evidence (RWE) in the Context of HTA” (2), noting that a key potential use of RWE is to generate evidence that can reduce uncertainty after initial technology adoption, but that RWE itself can be considered unreliable if parameters for its use are not appropriately outlined. In 2016, the GPF considered the changing paradigm for HTA (3) and touched on many aspects related to addressing uncertainty, such as early scientific dialogue. Prior to this, the GPF discussed managed entry agreements (MEAs) in 2010 (4) and coverage with evidence development (CED) in 2007 (5), two vehicles for handling uncertainty through linking price to value and collection of additional evidence.

The intention is not to repeat or return to any of these topics in detail during the 2021 GPF discussions but for GPF members to draw on these resources as required. Table 1 lists some questions and topics that will be most relevant for the 2021 GPF.

Table 1. Selected key questions relevant to the 2021 HTAi GPF

Future of Input Uncertainty
   - How can the HTA community better prepare for increasing input uncertainty from accelerated regulatory approvals or other abbreviated regulatory processes?
   - How can the HTA community better prepare for increasing input uncertainty from technologies such as highly specialized treatments, gene therapies, and other factors?
   - What is the opportunity cost of resolving increasing uncertainty?
   - Should alternative processes be considered for situations where uncertainty is unlikely to be resolved?
   - How can HTA and regulators be better aligned in considering uncertainty? Who should influence whom (or what can we learn from each other)?
   - Are there particular types/areas of uncertainty that external stakeholders (e.g., patients and clinicians) can help to resolve?

Managing Uncertainty
   - What are the potential supporting actions to improve the consistency and predictability of management of uncertainty for stakeholders?
   - How is uncertainty conveyed to and considered by deliberative committees or other bodies?
   - Are there conditions where additional uncertainty is universally acceptable (either by disease or by type of technology), or does this encourage the concept of cut-off “boundaries” (whereby technologies fall on either side of a pre-determined value)?
   - What is the role of stakeholders in facilitating the understanding and management of uncertainty during committee deliberation?

Communicating Uncertainty
   - What are the principles for communicating uncertainty to different stakeholder groups?
   - What are the current innovations in this space?
   - What are the resource implications for communicating uncertainty? What are the trade-offs for doing this?
   - What can be learned from key case studies, and particularly from the COVID-19 pandemic, on communicating key findings from ongoing research?

Background
In the Cambridge English Dictionary, uncertainty is defined as: “not knowing what to do or believe, or not able to decide about something; not known or fixed, not completely certain” (6). This broad definition demonstrates the variety of uses and meanings of the word (7). While there is no single, standalone definition of “uncertainty” in the HTA Glossary (8), the concept is clearly articulated and inherent in many definitions and terms related to HTA methods and processes. By the very nature of what HTA is and what it seeks to do, uncertainty will always exist at some level (9). Therefore, considering uncertainty is a fundamental and inherent component of HTA. However, the process for how these concepts are dealt with is contextual and will vary across jurisdictions and by stakeholder perspective. The types and level of uncertainty, how uncertainty is considered and managed when arriving at recommendations or decisions, and how the various uncertainties are conveyed to multiple stakeholders are all critically important; if any one of these elements is not considered carefully, then trust in HTA findings will surely be reduced.

Conceptualizing Uncertainty
The complexities of considering and communicating uncertainty become more manageable by conceptualizing them using an “Input-Throughput-Output” (ITO) model (as utilized to great effect in the 2020 Global Policy Forum on Deliberative Processes (1)). The ITO model is often used to illustrate information processes and complex pathways of care, and it has similarities to other general descriptions of HTA systems and processes for well-informed policy making in health care (10). As with the previous HTAi GPF on deliberative processes, the ITO model provides a useful framework for considering different types of uncertainty and the roles that each plays in HTA activities. This background paper will provide an overview of each of the domains; importantly, however, the discussions at the virtual meeting will focus primarily on the throughput and output domains, as input during the scoping process indicated that these domains would potentially benefit from the development of a set of key considerations and/or recommendations.

Firstly, “input” can generally be considered the collection of material (evidence, information, and perspectives) that informs HTA activities. This sets the stage for consideration of “input uncertainties” by deliberative bodies in HTA, which come primarily in the form of:

   • clinical uncertainty, from concerns regarding trial design, population, and generalisability/heterogeneity
   • economic model “structural” uncertainty, from model design and operation
   • economic model “parameter” uncertainty, from the data used in the economic model (including temporal uncertainty where the parameters are extrapolated in the economic model)
   • affordability uncertainty, from the estimates used to calculate the budget impact of a technology in a healthcare system, often driven by uncertainty in clinical evidence and economic extrapolations

The “throughput” stage describes how the various input uncertainties are handled; in other words, the weighting of the facts, data, values, and reasons that will lead to a collective judgment for a key HTA process (e.g., topic selection, scoping consultation, adoption decision or recommendation). This is the stage where the interplay between the clinical, economic, and affordability uncertainties is considered, and consensus is sought or votes are taken. Handling uncertainty during deliberations can vary according to perspective and societal values. Of course, the risk tolerances of everyone involved in the decision-making process will also vary based on their own interpretation of the key inputs, and this can result in further deliberative uncertainty. The effects of uncertainty considered at the throughput stage can have varying levels of influence on the resulting recommendations.

Finally, “output” refers to how the level of uncertainty and its impact on the recommendations are communicated and any learning is consolidated. The importance of clearly and understandably conveying the types of uncertainty described above to decision-makers, patients, the media, and the general public should not be underestimated. Stakeholder-friendly methods of conveying the input and throughput uncertainties are important to ensure that the resulting HTA recommendations and subsequent decisions are understood and fundamentally trusted. This is not to say that all stakeholders will agree with a decision, but they should be able to agree that any uncertainties were handled in a fair, impartial, and trustworthy manner.

Input Uncertainty
Input uncertainty can be expressed in terms of variation in the data collected for an HTA, elements of the evidence base for which data are not available, limitations on whom the findings apply to and over what timeframe, and other examples. As such, it falls into the broad categories of clinical, economic, and affordability uncertainty. However, there can be further uncertainties around other aspects of HTA (as HTA is more than just a consideration of the clinical and economic evidence (11)). Examples include uncertainty in how and when a technology should be implemented in practice (implementation uncertainty); whether the best price is being offered for the technology (i.e., pricing/relational uncertainty, often related to uncertainty around development and production costs, competitive market development, turnover, and other factors such as patent duration); and uncertainty around societal values regarding when circumstances exist and conditions are met for greater uncertainty to be acceptable (value uncertainty). These are all additional areas of uncertainty that appraisal committees often must consider. The section below focuses more on traditional types of input uncertainty, but these additional concepts and “meta” uncertainties, as well as the relative weight given to each during deliberations, are often at play.

First described by psychologists Luft and Ingham in 1955 using the Johari window analysis technique (12), and in accordance with terminology popularized by Donald Rumsfeld (former US Secretary of Defense), there are “known knowns, known unknowns, and unknown unknowns”. Of these areas of uncertainty, it is the known unknowns that are most relevant at the point at which an HTA is conducted, given the expectation that resolving uncertainty that is known to exist would make the evidence considered by HTA bodies more complete; this is otherwise known as epistemic uncertainty (7). This form of uncertainty is particularly challenging for devices, diagnostics, technologies for ultra-rare conditions, highly specialized/precision technologies, and other potentially innovative technologies that have come to market on an accelerated regulatory pathway.

The subsections below describe each major type of input uncertainty (clinical, economic model, and affordability), followed by descriptions of some of the most common methods used to characterize the input uncertainty that is present.

Clinical Uncertainty
Uncertainties regarding the effects of treatments are inevitable (13); when a technology is first tried in humans, the effects can be anticipated but cannot be known. The evidence generated may have wide confidence intervals (i.e., data variation) or an apparent benefit related to some outcomes or subgroups of patients but not others. Variability from person to person may be challenging to understand. Added to this, clinical trials cannot provide a complete picture of how a technology will work in practice. In further framing clinical uncertainty, the Population, Intervention, Comparator, Outcomes, Timing, Setting (PICOTS) framework is a useful way to consider how uncertainties may be introduced into the clinical evidence base (14). Those familiar with the use of the PICOTS framework in HTA will recognize some of the more common uncertainties that HTA bodies must grapple with. For example, the trial population may not be reflective of the target population in practice (a technology trialled in relatively healthy volunteers will likely have different effects in a frail older population with comorbidities). The intervention itself will be delivered consistently within a clinical trial setting but may have lower adherence (and therefore reduced effectiveness) in practice. The comparator in the study may not reflect the standard of care in the country in which the HTA is taking place, and therefore the relative effectiveness may be uncertain. The outcomes in the study setting may not reflect what is important to clinicians and patients, or may be surrogates for the actual outcome of interest in the HTA, and therefore the actual benefit of a technology may be uncertain. The timing or duration of the study may be too short to be certain about the long-term benefits (and adverse effects) associated with the technology. The setting of the study may not reflect the setting in which the technology is delivered in the real world (e.g., academic or highly specialized centers vs. community practice), and therefore the effects in a real-world setting may be uncertain.
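To make the idea of wide confidence intervals concrete, the sketch below (with purely hypothetical response rates and trial sizes, and a simple Wald interval) shows how the same observed treatment effect can be compatible with no benefit in a small trial yet clearly favourable in a large one.

```python
import math

def risk_difference_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Wald 95% confidence interval for a risk difference between two arms."""
    p_a, p_b = events_a / n_a, events_b / n_b
    diff = p_a - p_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return diff, (diff - z * se, diff + z * se)

# Same observed effect (30% vs 20% response), different trial sizes:
small = risk_difference_ci(15, 50, 10, 50)        # 50 patients per arm
large = risk_difference_ci(300, 1000, 200, 1000)  # 1000 patients per arm
```

In the small trial the interval spans zero, so even the direction of the effect remains uncertain; in the larger trial the identical point estimate comes with a much narrower interval that excludes no effect.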

In a recent study by Vreman et al. (15) that looked at the greatest areas of concern for HTA agencies and regulatory bodies, the greatest concern was uncertainty around the long-term effectiveness of a technology, a finding that did not differ by type of agency or geography. This was echoed by the responses to the Not for Profit GPF member survey, although respondents also noted that uncertainty in the structure and parameters used in the economic model was of great concern.

The study design can also create an extra level of uncertainty, with small and/or single-arm studies creating a divide in terms of how acceptable/comfortable people are with them, particularly for estimating clinical efficacy. It should be noted that there is typically greater comfort with other estimates coming from non-clinical-trial data (such as quality of life or utility data). There is also an increasing number of studies where participants in the placebo arm of RCTs are permitted to switch to the intervention arm, thereby essentially reducing the RCT to a single-arm study (16). Clinicians (and therefore many technology appraisal committee members) have been trained to consider statistical significance as a critical demonstration of benefit or harm, and it is challenging to go against an ingrained belief system (Expert informant); study design issues such as these may limit or preclude the use of significance testing. On the other hand, statistically significant results may be observed for an interim or surrogate outcome measure for which the clinical significance of the findings is unknown.

Finally, in the era of increasing accelerated regulatory pathways, it is becoming increasingly common for much of the input data from early clinical trials to be provided in confidence to regulators and HTA agencies. This brings additional challenges, as the data are not published and peer-reviewed and are not identifiable through literature searching. This leads to an increased reliance on trusting the technology manufacturer to provide nearly all the data on which the HTA is based. In the responses to the Not for Profit GPF member survey, there was an even split between agencies that will and agencies that will not accept data in confidence from technology manufacturers.

Characterizing Clinical Uncertainty
There are several methods and approaches available to attempt to characterize uncertainty in the clinical parameters. The simplest way to do this is with a qualitative (text) summary, in which the uncertainties can be listed and synthesized narratively for the decision maker to read and understand; however, this does not provide an intuitive and easily digested summary of the uncertainty, nor does it attempt to score or quantify its impact. A variety of technical approaches have also been used, the most common of which are described below.

Quality Measures
Assigning a quantitative measure to the uncertainty present in the clinical inputs is a technique commonly used by HTA agencies. Popular examples of such measures include GRADE, the Cochrane Risk of Bias tool, and the Effective Health Care Methods Guide (used by AHRQ in the US), among others. These tools do not reduce any uncertainty but rather provide a means to evaluate the likely impact of clinical uncertainty. GRADE (Grading of Recommendations, Assessment, Development and Evaluation) is a transparent framework for developing and presenting summaries of evidence and provides a systematic approach for making clinical practice recommendations (17). It is the most widely adopted tool for grading the quality of evidence for making recommendations, with over 100 organizations worldwide having officially endorsed its use (18). Reviewers assign one of four levels to categorize the strength of evidence (also known as certainty or quality of evidence): very low, low, moderate, and high. The Cochrane Risk of Bias tool assigns ratings of low, high, or unclear risk of bias in six specific domains for a given study, the results of which can be compared across all studies in a sample (19). The Effective Health Care approach is conceptually very similar to GRADE: the strength of evidence across studies (for each outcome) receives a high, moderate, low, or insufficient rating (20) based on domains such as consistency across studies, the directness of the outcome of interest, the precision of the findings, and others. Typically, randomized controlled trials are graded higher as sources of evidence than observational data (given the greater number of selection and other attendant biases possible in the latter), but the certainty for a given study may be affected by a host of factors.
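As an illustration only (a toy sketch, not an official implementation of any agency's process), the basic GRADE mechanics described above can be expressed as follows: randomized evidence starts at "high" certainty and moves down one level per serious concern (risk of bias, inconsistency, indirectness, imprecision, publication bias), while observational evidence starts at "low" and can be rated up in limited circumstances.

```python
LEVELS = ["very low", "low", "moderate", "high"]

def grade_certainty(randomized, downgrades=0, upgrades=0):
    """Toy sketch of GRADE rating logic (illustrative, not normative).

    randomized: True for RCT evidence (starts 'high'), False for
    observational evidence (starts 'low').
    downgrades: number of serious concerns (risk of bias, inconsistency,
    indirectness, imprecision, publication bias).
    upgrades: reasons to rate up (e.g., a large effect), applied mainly
    to observational evidence.
    """
    start = 3 if randomized else 1
    return LEVELS[max(0, min(3, start - downgrades + upgrades))]

# An RCT with serious imprecision and indirectness drops two levels:
grade_certainty(True, downgrades=2)  # "low"
```

In practice each downgrade or upgrade is itself a judgment call, which is precisely the subjectivity that the criticism below points to.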

A criticism of quality measures such as GRADE is that they are essentially subjective and cannot be implemented consistently; this criticism is noted particularly for devices, diagnostics, and innovative technologies. Such measures do, however, provide a reproducible and transparent framework for grading the level of uncertainty in clinical inputs and a summary that is generally easily understood by committee members.

In the responses to the survey, most respondents stated that they used a specified checklist or adapted versions of GRADE. The Institute for Clinical and Economic Review (ICER) was the only organization to note in the survey that it has created its own Evidence Rating Matrix, in which uncertainty is represented on a distinct axis in addition to the magnitude of potential health benefit.

220    Surrogate Endpoints
221    As mentioned, accelerated regulatory pathways and a desire for more rapid HTAs (to facilitate faster
222    treatment access for patients) are leading to an increased reliance on evidence from surrogate
endpoints. Reviews of the effects of using surrogate endpoints have found that they, on average, overestimate treatment effects(21). The strength of the evidence for a surrogate can be evaluated systematically, and three levels are commonly ascribed: Level 1 is clinical trial evidence of treatment effects on the surrogate corresponding to effects on the patient-related outcome; Level 2 is evidence from epidemiological or observational studies that demonstrates a consistent relationship between the surrogate and the patient-related outcome; and Level 3 is biological plausibility from pathophysiological studies or from the understanding of the disease process(22). Validation methods
230    (for example correlation of the effects on the surrogate and clinical endpoint) and validation values (for
231    example cut-off values) can also be prescribed. Challenges arise particularly where a technology is
232    developed with a novel mechanism of action and evidence around surrogacy of outcomes and effect
does not yet exist at the time of the assessment.

234    In a recent study by Grigore et al.(23), a review of HTA agencies found that 40% of agencies had
235    methodological guidelines that made specific reference to consideration of surrogate outcomes.

236   Economic Model Uncertainty
237    Uncertainty around economic evaluation in HTA can be broadly split into structural and parameter
238    uncertainty. These types of input uncertainty are described in more detail within the sub-sections
below. The following section describes the methods for characterizing economic model uncertainty most commonly seen by HTA agencies.

241    There can also be additional uncertainty introduced around the heterogeneity and stochastic variance of
242    the economic model (24). Stochastic (first-order) uncertainty relates to the fact that individuals facing
243    the same probabilities will experience the effects of a disease or an intervention differently due to
random variation. This type of uncertainty is informed by confidence intervals and ranges of treatment effects and is primarily of concern for rare diseases and small patient populations.
246    Heterogeneity describes the variability between the responses to an intervention that can be explained
247    by the differences in the demographic and/or clinical characteristics of patients (for example age-
248    specific results for the impact of an intervention on mortality). This type of uncertainty is informed by
249    subgroups and stratification of a patient population.
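The distinction can be made concrete with a small simulation: first-order (stochastic) uncertainty is the chance variation among patients facing identical probabilities, while heterogeneity is explained variation across subgroups. All probabilities and cohort sizes below are hypothetical.

```python
# Illustrative simulation of first-order (stochastic) uncertainty: patients
# facing the same response probability experience different outcomes purely by
# chance. Probabilities and cohort sizes are hypothetical.
import random

random.seed(1)

def responders(n: int, p: float) -> int:
    """Count responders in a cohort of n patients, each responding with probability p."""
    return sum(random.random() < p for _ in range(n))

# Same probability, repeated runs: chance alone produces different counts, and
# the relative spread is larger in small (e.g., rare-disease) cohorts.
print(responders(20, 0.6), responders(20, 0.6))      # small cohort
print(responders(2000, 0.6), responders(2000, 0.6))  # large cohort

# Heterogeneity, by contrast, is explained variation: subgroup-specific probabilities.
subgroup_p = {"under_65": 0.7, "over_65": 0.5}
for group, p in subgroup_p.items():
    print(group, responders(1000, p))
```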

As indicated, a wide range of input uncertainties can be present within an HTA and must be considered. Most HTA agencies that consider cost-effectiveness as a factor are seeking a primary point
252    estimate of the incremental cost-effectiveness ratio to compare to a threshold (or threshold range) and
253    inform a recommendation. This poses challenges given that clinical, parameter and structural
254    uncertainty may all affect the derivation of that single point estimate. Sometimes the uncertainty in the
255    clinical evidence base may be too great to fully consider cost-effectiveness, for example in the
256    assessment of treatment for a very rare condition with poorly understood outcomes. Typically,
however, a point estimate for cost-effectiveness is sought, and the major uncertainties are summarized and presented. As with clinical input uncertainty, this can be done qualitatively or using technical
259   approaches. Some of the most common approaches presented to HTA agencies are detailed below.

260   Parameter Uncertainty
261   All economic models have parameters that must be estimated, and economic models can only be as
reliable as the parameters (inputs) that they utilize (25). A key area of parameter uncertainty (also known as second-order uncertainty) relates to the fact that the probabilities and other
264   estimates assigned to an economic model are uncertain because they are observed (for example within
265   a clinical trial) and then estimated. The sample size of the observed dataset is therefore a key
266   consideration in determining parameter uncertainty (smaller trials typically result in wider confidence
intervals around a point estimate). Parameter uncertainty can therefore link directly to the areas of clinical uncertainty described above, and any problems with a study's internal or external validity or generalizability to a real-world setting feed through into model parameter uncertainty.

270   Other parameter estimates may be subject to significant uncertainty, albeit for different reasons. Cost
271   and utility estimates may be uncertain because they are derived from external sources and may not
272   align with the target population for modeling, for example. Parameter uncertainty also arises when
273   there are conflicting estimates from multiple studies, or when there are no available data for a required
274   value and expert opinion must be utilized. In these instances, additional parameter uncertainty can arise
275   if parameters are not chosen in an evidence-based way (for example if estimates are “cherry-picked”
276   from the clinical evidence base or from asking a few key opinion leaders) or unrealistic assumptions are
277   used(26). As per the Not for Profit GPF member survey response, the most common methods for
278   characterizing parameter uncertainty are deterministic and probabilistic sensitivity analyses.

Deterministic Sensitivity Analyses
280   Deterministic sensitivity analysis considers the impact of individual economic model parameters on the
281   cost-effectiveness ratio. One or more parameter inputs can be changed manually to evaluate what
282   effect the change in the parameter has on the result. The range that the parameters are varied across is
283   usually pre-specified (often representing the upper and lower limits of the 95% confidence interval or
284   some other measure of variance) (27). Univariate sensitivity analysis refers to the modification of a
285   single parameter at a time, and two-way sensitivity analysis involves modification of two parameters
simultaneously; less commonly, multivariate sensitivity analysis involves the modification of several parameters at the same time (though usually no more than five). The results of these sensitivity analyses can be presented as stacked bar charts or as a "tornado diagram", in which those parameters appearing
289   with the greatest impact on model results appear at the top, with subsequent parameters with lower
290   sensitivity presented below; the resulting figure resembles the funnel cloud of a tornado (28).
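The one-way variation and tornado ordering described above can be sketched as follows; the toy ICER model, base-case values, and parameter ranges are all hypothetical.

```python
# Minimal sketch of a one-way deterministic sensitivity analysis on a toy
# cost-effectiveness model. The model structure, base-case values, and ranges
# are hypothetical, chosen only to illustrate the mechanics.

def icer(params: dict) -> float:
    """Incremental cost-effectiveness ratio = incremental cost / incremental QALYs."""
    inc_cost = params["drug_cost"] - params["offset"]
    inc_qaly = params["effect"] * params["utility_gain"]
    return inc_cost / inc_qaly

base = {"drug_cost": 30000, "offset": 5000, "effect": 0.5, "utility_gain": 1.2}
ranges = {  # typically the 95% CI limits for each parameter
    "drug_cost": (25000, 35000),
    "offset": (2000, 8000),
    "effect": (0.3, 0.7),
    "utility_gain": (1.0, 1.4),
}

results = []
for name, (lo, hi) in ranges.items():
    icers = [icer({**base, name: v}) for v in (lo, hi)]
    results.append((name, min(icers), max(icers)))

# Tornado ordering: the parameter with the widest ICER swing appears first (top).
results.sort(key=lambda r: r[2] - r[1], reverse=True)
for name, low, high in results:
    print(f"{name:<12} {low:>10.0f} to {high:>10.0f}")
```

In this toy example the treatment effect drives the widest swing, so it would sit at the top of the tornado diagram.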

291   The main difficulties with conducting deterministic sensitivity analyses arise when the ranges of the
292   parameters are highly uncertain (for example a study may be small and have very wide confidence
293   intervals); this may result in estimates that are not clinically plausible or even relevant, and including
294   these values in a DSA can lead to a skewed perception of the impact of the uncertainty.

A variant of deterministic sensitivity analysis known as "threshold analysis" is also increasingly common (Expert informant), used to assess the 'tipping point' of an input parameter: at what value of parameter X does the output change to the point that the recommendation based on the result would be altered? Commonly, the price at which a common cost-effectiveness threshold is reached is used for price and/or discount negotiations.
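A threshold analysis of this kind amounts to solving for the parameter value at which the decision flips; for a monotone relationship, simple bisection suffices. The toy model and willingness-to-pay threshold below are hypothetical.

```python
# Hedged sketch of a threshold ("tipping-point") analysis: find the price at
# which a toy ICER crosses a willingness-to-pay threshold, using bisection.
# The model and all values are hypothetical.

def icer_at_price(price: float) -> float:
    inc_qaly = 0.6        # incremental QALYs (assumed fixed here)
    cost_offset = 5000.0  # assumed downstream savings
    return (price - cost_offset) / inc_qaly

def tipping_price(threshold: float, lo: float = 0.0, hi: float = 100000.0) -> float:
    """Bisect for the price at which the ICER equals the threshold
    (the ICER is monotone increasing in price here)."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if icer_at_price(mid) < threshold:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

price = tipping_price(threshold=30000.0)
print(f"price at WTP threshold: {price:,.0f}")
```

Here the analytic answer is 0.6 * 30,000 + 5,000 = 23,000, which the bisection recovers; in a full model with no closed form, the same search is run on the model itself.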

300   Probabilistic Sensitivity Analyses
301   Probabilistic sensitivity analyses are a form of sensitivity analysis in which all parameter inputs are
302   varied at once. In probabilistic sensitivity analyses, rather than individual parameter estimates or points
303   on a range, the parameters are sampled from a representative distribution, either an observed one from
304   patient-level data or a distributional form that fits the data well (29). The model is then run over
305   multiple iterations (typically 1,000 or more), each producing a unique cost-effectiveness estimate that
306   can be compared to existing decision-making thresholds.

307   Cost-effectiveness acceptability curves have become a common way to present the results of a PSA(30),
308   but other approaches such as incremental benefit curves, rankograms and scatter plots also exist(31).
The main concern when conducting PSA is whether the number of simulations performed is sufficient (with little explicit guidance on this provided) (32). In addition, even the most robust PSA
311   cannot adequately address structural uncertainty; a greater number of iterations will not mitigate
312   uncertainty introduced if the model does not realistically portray disease trajectory or typical clinical
313   practice, for example. While PSA is often described as an acceptable form of sensitivity analysis by HTA
314   agencies (33), our expert informant interviews suggested that HTA agencies are only typically using
315   deterministic sensitivity analyses routinely and that PSA are relatively under-utilized. The recent NICE
316   methods guide review consultation has recognized this under-utilization and has identified PSA as an
317   area requiring a “major change” in its methods update (34).
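A minimal PSA sketch, assuming illustrative normal distributions for incremental cost and QALYs (real analyses typically use gamma, beta, or log-normal forms fitted to the data), with the results summarized as points on a cost-effectiveness acceptability curve:

```python
# Illustrative probabilistic sensitivity analysis: sample all parameters at
# once from assumed distributions, run many iterations, and summarize as a
# cost-effectiveness acceptability curve. All distributions are hypothetical.
import random

random.seed(42)
N_ITER = 5000

def one_iteration() -> tuple:
    inc_cost = random.gauss(25000, 4000)           # assumed normal incremental cost
    inc_qaly = max(random.gauss(0.6, 0.15), 1e-6)  # truncated to stay positive
    return inc_cost, inc_qaly

samples = [one_iteration() for _ in range(N_ITER)]

# CEAC points: probability the technology is cost-effective at each threshold,
# using the net monetary benefit criterion (NMB = threshold * QALYs - cost > 0).
for threshold in (20000, 30000, 50000):
    p_ce = sum(threshold * q - c > 0 for c, q in samples) / N_ITER
    print(f"WTP {threshold:>6}: P(cost-effective) = {p_ce:.2f}")
```

Plotting these probabilities against a fine grid of thresholds yields the familiar acceptability curve.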

318   Calibration of Extrapolation
319   In the clinical trial setting, the true long-term effects of a technology are rarely observed, as clinical trials
320   tend to be no more than two years in duration. Many technologies assessed by HTA agencies are chronic
321   therapies intended for lifetime use. To account for this, the observed effects are typically extrapolated
322   to provide a best-case estimate of what is likely to be the longer-term outcome of the use of a
323   technology (35). The accuracy of any extrapolation depends on the reliability of modelling, with various
324   options available to provide and calculate a “best fit” for the data(36), often using available
325   epidemiologic or other long-term observational studies for comparison purposes. Where extrapolation
326   becomes particularly challenging is when there are relatively few observed events (for example, few
327   deaths in a short trial when modeling survival), or where participants have switched from a placebo to
328   intervention arm (as is increasingly common in oncology studies) (16). In addition, there is an increased
329   extrapolation of the effects of disease modifying therapies, where the expected treatment effect from
330   what may be a single intervention timepoint is extrapolated far beyond what is observed. When model
331   calibration has been used to derive parameters, the “uncertainty around the calibrated values should be
332   reported and reflected in deterministic or probabilistic sensitivity analyses or both”(25).

333   Structural (Model) Uncertainty
334   Structural uncertainty is uncertainty about the functional form of an economic model(37). If the
335   structure of the model does not reflect what is happening in real life (for example if all relevant health
336   states are not included) then the results of the economic model may not be reliable (even if all of the
inputs are correct). Examples may include too few health states (which would result in a lack of accuracy in estimates) or too many health states (which may not reflect reality and could result in many assumptions being required to inform each health state rather than evidence). The time horizon of the economic model is another aspect of model structure that can commonly generate additional uncertainty, with longer-term time horizons also necessitating longer-term extrapolation of treatment effects and costs, thus also increasing extrapolation uncertainty as described above.

Structural uncertainty is often not explored in depth, although it may have as much impact as, if not more than, parameter uncertainty(25). Recent approaches for characterizing structural uncertainty have sought to parameterize the structural uncertainties into the model. Parameters can be added or elements of the structure varied, and these are commonly presented as scenario analyses, whereby different parameters or model assumptions are varied to represent different scenarios that may be possible within a particular healthcare system or setting.
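Scenario analysis of this kind can be sketched as running one model under alternative structural assumptions; the scenarios, model, and values below are hypothetical.

```python
# Minimal sketch of scenario analysis for structural uncertainty: run the same
# toy model under alternative structural assumptions and compare the results.
# Scenario definitions and all values are hypothetical.

def run_model(time_horizon: int, include_relapse_state: bool) -> float:
    """Toy ICER that shifts with structural choices (illustrative only)."""
    inc_cost = 25000.0 + (3000.0 if include_relapse_state else 0.0)
    inc_qaly = 0.08 * time_horizon - (0.05 if include_relapse_state else 0.0)
    return inc_cost / inc_qaly

scenarios = {
    "base case (10y, no relapse state)": dict(time_horizon=10, include_relapse_state=False),
    "lifetime horizon (30y)": dict(time_horizon=30, include_relapse_state=False),
    "added relapse health state": dict(time_horizon=10, include_relapse_state=True),
}

for name, kwargs in scenarios.items():
    print(f"{name:<36} ICER = {run_model(**kwargs):>9,.0f}")
```

Presenting the spread of ICERs across such scenarios makes the impact of structural choices explicit without claiming to quantify their probability.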

In a recent paper by Afzali et al. (38), five approaches to characterizing structural uncertainty were identified: scenario analysis, model selection, model averaging, parameterization, and discrepancy. Where complete rebuilds of the model are considered necessary, this is of course challenging. In these cases, the most appropriate course of action may simply be a qualitative summary, making the presence of uncertainty and its possible impact on the findings as explicit as possible(25). According to the Not for Profit GPF member survey, guidelines on addressing structural uncertainty are not typically provided by HTA agencies, though scenario analysis is the most common approach to characterizing uncertainty of this type. The Patient-Centered Outcomes Research Institute (PCORI) has stated that its principles for assessing structural uncertainty are currently under review.

358   Affordability Uncertainty
359   Affordability is an increasingly important component for consideration by HTA processes, particularly in
360   low- and middle-income countries (39), but increasingly also in high-income countries. Core aspects for
361   evaluating the potential budget impact of a new therapy are the size of the target population and costs
362   of the technology in practice (40). Uncertainty can arise in estimating the true population size (are all
363   patients with the condition known, are there factors that would prevent or encourage patients to seek
364   access to the new technology, will greater disease awareness increase the population size?). The costs
365   of a new technology in practice can be challenging to estimate if the delivery is not clear (for example
366   not knowing what healthcare setting and support will be needed), or if the place in the treatment
367   pathway is unclear. Costs of the treatment can also include downstream cost offsets which can be
368   difficult to quantify at the point that an HTA is being conducted and can also lead to increased
uncertainty. Typically, utilization uncertainty is greater than unit cost uncertainty. Finally, as the inputs to the budget impact analysis are often generated by the simulation model used to inform the cost-effectiveness analysis, all of the possible clinical and economic uncertainties already described can be present in these estimates.
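The core budget impact arithmetic (treated population multiplied by net cost) can be sketched with low/central/high scenarios for the uncertain population and uptake inputs; all figures below are hypothetical.

```python
# Hedged sketch of a budget impact calculation with uncertain population size
# and uptake. All figures are hypothetical.

def budget_impact(prevalent_pop: int, diagnosis_rate: float, uptake: float,
                  annual_cost: float, cost_offset: float) -> float:
    """Net annual budget impact = treated patients * (cost - downstream offsets)."""
    treated = prevalent_pop * diagnosis_rate * uptake
    return treated * (annual_cost - cost_offset)

# Low / central / high scenarios for the uncertain inputs
scenarios = {
    "low": dict(prevalent_pop=8000, diagnosis_rate=0.5, uptake=0.2),
    "central": dict(prevalent_pop=10000, diagnosis_rate=0.6, uptake=0.4),
    "high": dict(prevalent_pop=12000, diagnosis_rate=0.7, uptake=0.6),
}

for name, kwargs in scenarios.items():
    bi = budget_impact(annual_cost=20000.0, cost_offset=4000.0, **kwargs)
    print(f"{name:<8} net budget impact = {bi:>13,.0f}")
```

Even in this toy example the high scenario is several times the low one, illustrating why utilization uncertainty typically dominates unit cost uncertainty.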

Typically, guidance on calculating costs and population sizes is provided by HTA agencies, although some agencies have no specified methods. For example, in its survey response the Institute for Clinical and Economic Review (ICER) stated that it lets the end user decide what the level of uptake and price will be. Specific examples that were provided in the survey response included submissions to the Pharmaceutical Benefits Advisory Committee (PBAC) in Australia, which are expected to include a
378   refined spreadsheet template with details of cost and uptake. The Center for Healthcare Quality
379   Assessment and Control in Russia stated that they provide guidance on what kind of costs should be
380   considered and recommend sensitivity analysis for the size of the population.

381   Summary Approaches to Managing Economic Model Uncertainty
Value of Information Analyses
383   Value of Information (VoI) analysis is a technical approach that provides a methodological framework
384   which explicitly considers the level of input uncertainty, parameter uncertainty, and structural
uncertainty in an HTA (41). VoI focuses on the likelihood of making a "wrong" decision if the technology is adopted; as such, it can be used to understand the cost of resolving residual uncertainty. The
387   Expected Value of Information (EVI) is the value of additional research and determines the extent to
388   which further information will reduce the uncertainty(42). The intent is to allow a comparison of the
389   potential benefits of further research with the costs of further investigation which provides an
390   assessment of the value of investing limited healthcare resources in research or provision of the health
391   technology (43).

392   Further steps and analysis methods can be used to determine the expected value of sample information
393   (EVSI) and the expected value of (partial) perfect information (EV[P]PI). These pre-posterior forms of
394   analysis aim to estimate the increased utility that a decision maker would have with access to an
395   additional sample of information or the price that one would be willing to pay to gain access to perfect
396   information. Essentially, these approaches are attempts to quantify the trade-off between making a
397   potentially incorrect decision and generating more evidence (44). While there is much still to be done in
398   terms of education, a recent ISPOR taskforce on VoI provides recommendations for good practice when
planning, undertaking or reviewing the results of various VoI analyses(45). However, these methods are widely seen as academic exercises and are rarely encountered in practice by HTA agencies.
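As an illustration of the underlying calculation, per-patient EVPI can be estimated directly from PSA output as the mean of the per-iteration best net benefit minus the net benefit of the strategy that is best on average. The distributions below are simulated for illustration only.

```python
# Illustrative EVPI calculation from PSA output: EVPI = E[max over strategies
# of net benefit] - max over strategies of E[net benefit]. The PSA samples
# here are simulated from hypothetical distributions, not real data.
import random

random.seed(7)
WTP = 30000.0
N = 10000

# Net monetary benefit samples for two strategies (hypothetical distributions)
nmb = []
for _ in range(N):
    new_tech = WTP * random.gauss(0.60, 0.20) - random.gauss(25000, 5000)
    standard = WTP * random.gauss(0.45, 0.10) - random.gauss(12000, 2000)
    nmb.append((new_tech, standard))

# Best strategy on average (current information) vs best strategy per
# iteration (perfect information)
best_on_average = max(sum(x[i] for x in nmb) / N for i in range(2))
mean_of_best = sum(max(pair) for pair in nmb) / N
evpi_per_patient = mean_of_best - best_on_average
print(f"EVPI per patient at WTP {WTP:,.0f}: {evpi_per_patient:,.0f}")
```

Multiplying the per-patient figure by the affected population and decision horizon gives a population EVPI, which can then be compared with the cost of further research.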

401   Tools for Cataloguing Model Uncertainty
402   There are a range of other summary approaches to characterizing uncertainty that have been reported
403   in the literature; however, when comparing these with the approaches favored by HTA agencies, there is
404   little overlap. Two more recent developments of note are the TRansparent Uncertainty ASsessmenT
405   (TRUST) tool (46) and the TRUST4RD approach(47).

The TRUST tool is a general tool that was developed to systematically identify, assess, and report uncertainties in decision (economic) models, with the aim of making these uncertainties and their impact on cost-effectiveness more explicit and transparent. In the validation of the TRUST tool (via HTA stakeholder
409   interviews and application to six case studies) the authors state that stakeholders found it to be feasible
and of value for transparent uncertainty assessment, though the main barrier to use was a lack of time to
411   complete the necessary fields. Table 2 is a reproduction of the summarized TRUST approach, with
412   identification of the sources of uncertainty conducted first followed by an assessment of the likely
413   impact of the uncertainties on the cost effectiveness analysis.

414   TRUST4RD has been developed with Orphan Medicinal Products (OMP) in mind, as these technologies
415   are associated with higher levels of input uncertainty (due to evidence from small or non-controlled
416   trials, surrogate or immature outcome measures, and abbreviated follow-up, among other concerns)
417   (48). As depicted in Figure 1, the TRUST4RD approach, developed through multi-stakeholder dialogue,
418   aims to identify uncertainties of most concern for decision-makers by developing an iterative and
419   informed dialogue so that potential approaches to uncertainty resolution can be discussed. The
420   intended result is that future evidence generation can be more directed and will be more likely to
421   demonstrate the value of a technology with less uncertainty than would have otherwise been
422   presented.

423    Figure 1-TRUST4RD components, from Annemans L, Makady A. TRUST4RD: tool for reducing uncertainties in the evidence
424    generation for specialised treatments for rare diseases. Orphanet journal of rare diseases. 2020;15(1):127.

427    No reports of HTA agencies utilizing the TRUST tool or TRUST4RD approach have been received to date.
428    However, more HTA agencies are engaging in early dialogue with stakeholders to better understand the
429    probable levels of input uncertainty and to guide evidence generation plans accordingly. As highlighted
430    by the expert testimony, stakeholder input can be particularly useful in identifying where uncertainty
will likely be a factor in advance of the HTA process itself. Where there are early scoping discussions, stakeholder input can be sought at the stage that is often the most useful for shaping evidence generation plans and technology submissions.

434   Who is Responsible for Uncertainty?
435    In considering the many ways in which uncertainty may present in the evidence base for HTA, and the
436    many ways in which it can be summarized and then evaluated, the notion of who is responsible for
437    uncertainty is raised. Clearly, the technology manufacturer must shoulder much of the burden of proof,
438    as they are the owners of the technology and therefore drivers of the evidence base. Indeed, in the
439    circumstances where traditional trial evidence is challenging to generate, there may still be alternative
avenues of evidence generation (for example, greater detail on burden-of-disease data or comparator usage) that can increase the committee's comfort with a decision problem.

442    There are, however, considerations of responsibility for other stakeholders. For example, HTA agencies
443    have a responsibility to ensure efficient use of taxpayer money in conducting efficient and effective
444    HTAs to appropriately inform resource allocation. This means that they are responsible for helping to
445    understand and mitigate the effects of uncertainty, as well as ensuring that the appraisal committees
446    are sufficiently equipped to make the best decisions. As summarized above, there has been a range of
447    technical and methodological advances in characterizing uncertainty for decision-making. One could
448    argue in fact that the methods that are available to attempt to mitigate uncertainty are far in advance of
what is typically undertaken and presented to appraisal committees. The advances have not come without complaints, however: the methods may not be perceived as intuitive, and their complexities may be challenging for all members of an appraisal committee to understand. The notion that the technical
452    adaptations for addressing uncertainty are a “black box” remains a continued criticism. The implication
453    is that each committee member must develop technical abilities advanced enough to fully understand
454    the methods and the subsequent results to integrate them into their decision making. This can become
455    onerous and demanding for deliberators. Similarly, academic groups who often provide independent
456    input or review of manufacturer submissions also have a responsibility to aid the HTA agencies in the
457    understanding of the key drivers of uncertainty and their impact. Finally, other stakeholders such as
458    patients and clinicians often benefit from adoption of technology by a health system. As such they are
459    also responsible for helping to provide additional context and clarification of any input uncertainties as
460    much as possible.

461   The Future of Input Uncertainty
462    Finally, as noted several times during the expert interviews, input uncertainty is “nothing new” for the
463    field of medical devices and diagnostics. In this research field, evidence generation for these
464    technologies often relies on single-arm trials (where placebo controls are not possible), small sample
465    sizes, rapid technology evolution, and “learning curve” uncertainty (where the clinician becomes more
466    proficient with experience) (49). Many feel that the uncertainties that device manufacturers and
467    appraisers have been grappling with for years are now just becoming reality for those producing and
468    appraising pharmaceutical technologies. However, as noted, with the advent of precision medicines
469    (often with companion diagnostics), gene and stem cell therapies, and regenerative medicines, the line
470    between drug and device is blurring and input uncertainty is increasing. Artificial intelligence and digital
471    technologies are also bringing additional new complexities and uncertainties that must be considered.
472    All of this is potentially compounded by new accelerated licensing pathways that are bringing
technologies to licensure more quickly and with arguably less evidence (which, even when available, is often presented as data in confidence) (21). Nearly 75% of respondents to the Not for Profit GPF
475    member survey felt that input uncertainty is increasing in line with the issues outlined above.

476    Such a high degree of input uncertainty is leading to a need for more adaptable and nuanced tools or
477    pathways for managing the uncertainty. Increasingly, there are examples where the uncertainty is
genuinely unresolvable in a meaningful timeframe, and HTA agencies are starting to consider where it might be appropriate to manage such technologies with abbreviated or alternative HTA processes (Expert informant). Half of the responses to the Not for Profit GPF member survey suggested that HTA
481    agencies are putting approaches in place to prepare for increasing uncertainty by moving to lifecycle
482    approaches, with iterative appraisal processes, rapid reviews, and greater acceptance of real world and
483    qualitative evidence. A specific example is that of multigene and multi-purpose investigative
484    technologies as considered by the Medical Services Advisory Committee in Australia, where a concept is
485    being introduced whereby “exemplars” can form the basis of facilitating reviews of additional genes,
purposes and/or medical conditions with less evidentiary burden. However, there is a risk that the additional evidence generated may itself increase uncertainty; if the results are not as expected (for example, if the technology is poorly implemented, or used in a cohort of patients with challenging comorbidities), then the conclusions may not be straightforward.

490   Throughput Uncertainty
491    The “throughput” stage describes how the various input uncertainties are handled; in other words, the
492    weighting of the facts, values and reasons that will lead to a collective judgement. It is possible that new
information can be presented by stakeholders (such as manufacturers, patients and clinicians); however, it is at this stage that the interplay between the clinical, economic, and affordability uncertainties is considered, and consensus is sought or votes are taken. Here the presentation of the uncertainty to an
496    appraisal committee is critical, with a view toward facilitating consistent and transparent management
497    of the key unknowns. As highlighted above, there can be multiple and varied levels and types of
498    uncertainty present in any one HTA and a deliberative committee must quickly understand this and
499    decide how the uncertainty impacts their deliberation. For example, NICE explicitly note in the current
500    Methods Guide that they “will be more cautious about recommending a technology when they are less
501    certain about the ICERs presented in the cost-effectiveness analysis” (50).

502    While uncertainty is inevitable in HTA, the notion that some uncertainties are unresolvable is key to
503    consider. At what point is the level of uncertainty great enough that a decision must be deferred, or the
504    technology cannot be recommended at all? The opportunity cost of resolving uncertainty (e.g., with a
505    resource intensive evidence generation request) must be carefully considered, including the situations
506    where the uncertainty is highly unlikely to be resolved in a meaningful timeframe. The question of
507    whether uncertainty is unresolvable is becoming more common, and HTA agencies are starting to
508    consider where it might be appropriate to manage such technologies with abbreviated or alternative
509    HTA processes (Expert informant). Such approaches should not, however, come at the price of reducing
510    quality evidence generation whenever this is truly possible.

In contemplating uncertainty in HTA, it is also important to consider the notions of risk (and appetite for risk) as well as confidence. The level of uncertainty that is acceptable to an individual decision maker (including patients) is largely dependent on their appetite for risk. If a decision maker is more risk averse, they will require more certainty to make a positive recommendation, particularly where the consequences of a "wrong" decision are far-reaching (for example, if an expensive technology does not realize the value it is expected to, and healthcare resources are diverted from a more cost-effective existing standard of care). For a risk-taking decision maker (for example, in populations with terminal prognoses), a higher level of uncertainty can be tolerated; greater risk tolerance is commonly
519    observed in situations in which a technology is considered to be innovative, or has “plausible
520    promise”(51) in an area with significant unmet need. Confidence is a separate but critically important
521    concept, as this is arguably the opposite of uncertainty in the context of decision making. Reducing
522    uncertainty increases the level of confidence any decision maker will have in the evidence and their
523    subsequent recommendations, irrespective of their appetite for risk.

It is essentially the risk associated with uncertainty (i.e., of making the wrong decision) that matters most to patients, clinicians, payers, and healthcare systems, although whether a decision is wrong will certainly vary by stakeholder perspective. It is of particular concern if there is a perception that decisions are always wrong in one direction (i.e., always reimbursing technologies that are not cost-effective).
528    This however must be weighed with the risk of making no decision, whereby patients have no access to
529    potentially beneficial technologies, or by the risk of generating more evidence to reduce the uncertainty
530    (and using scarce resources in doing so). Considering uncertainty in a realistic and pragmatic manner -
