A scoring framework for tiered warnings and multicategorical forecasts based on fixed risk measures

Robert Taggart, Nicholas Loveday, Deryn Griffiths
Bureau of Meteorology
robert.taggart@bom.gov.au

August 31, 2021

arXiv:2108.12814v1 [stat.AP] 29 Aug 2021

Abstract

The use of tiered warnings and multicategorical forecasts is ubiquitous in meteorological operations. Here, a flexible family of scoring functions is presented for evaluating the performance of ordered multicategorical forecasts. Each score has a risk parameter α, selected for the specific use case, so that it is consistent with a forecast directive based on the fixed threshold probability 1 − α (equivalently, a fixed α-quantile mapping). Each score also has use-case specific weights so that forecasters who accurately discriminate between categorical thresholds are rewarded in proportion to the weight for that threshold. A variation is presented where the penalty assigned to near misses or close false alarms is discounted, which again is consistent with directives based on fixed risk measures. The scores presented provide an alternative to many performance measures currently in use, whose optimal threshold probabilities for forecasting an event typically vary with each forecast case, and in the case of equitable scores are based around sample base rates rather than risk measures suitable for users.

Keywords: Categorical forecasts; Consistent scoring function; Decision theory; Forecast ranking; Forecast verification; Risk; Warnings.

1 Introduction

A broad transdisciplinary consensus has emerged over the last two decades that forecasts ought to
be probabilistic in nature, taking the form of a predictive distribution over possible future outcomes
(Ehm et al., 2016; Gneiting and Katzfuss, 2014). However, in certain settings categorical forecasts are
still very useful, particularly when there is a need for simplicity of communication or to trigger clear
actions. Examples include public weather warnings, alerting services designed to prompt specific
protective action for a commercial venture, and forecasts displayed as graphical icons.
    Ideally, any categorical forecast service will be designed so that following the forecast directive,
optimising the performance score and maximising benefits for the user are consistent (Murphy and
Daan, 1985; Murphy, 1993; Gneiting, 2011). In this context, a forecast directive is a rule to convert a
probabilistic forecast into a categorical forecast. The consistent scoring function can be used to track
trends in forecast performance, guide forecast system improvement and rank competing forecast
systems, so that decisions made on such evaluations are expected to benefit the user. When the
target user group is heterogeneous, maximising benefit for every user simultaneously will not be
possible. Nonetheless, in the context of public warning services for natural hazards, issues such as
warning fatigue, false alarm intolerance and the cost of missed events have received considerable
attention (e.g. Gutter et al. (2018); Potter et al. (2018); Mackie (2014); Hoekstra et al. (2011)). This
body of literature, along with stakeholder engagement, should provide guidance for creating a suitable
forecast directive for the particular hazard.

Performance measures for categorical forecasts in the meteorological literature typically have
properties that are undesirable for many applications. For example, equitable scores for multicate-
gorical forecasts, such as those of Gandin and Murphy (1992), of which the Gerrity (1992) score is
perhaps most popular, are optimised by routinely adjusting the threshold probability used to convert
a probabilistic forecast into a categorical one. Moreover, these variable threshold probabilities are
usually near climatological or sample base rates, which could lead to a proliferation of false alarms
when warning for rare events. The Murphy and Gandin scores also penalise false alarms and misses
equally, even though the costs of such errors to the user are rarely equal. In Section 3 we show that,
depending on the use case, these and other performance measures could potentially lead to undesir-
able or even perverse service outcomes if forecasters took score optimisation seriously. Nevertheless,
we found many examples in the literature where the suitability of the measure for the problem at
hand was not adequately discussed, possibly because the implications of score optimisation were not
well understood.
    To provide alternatives, we present a family of scoring functions for ordered categorical forecasts
that have flexibility for a broad range of applications, particularly public forecast services, and are
consistent with directives based on fixed risk thresholds. A scoring function is a rule that assigns a
penalty for each particular forecast case when it is compared to the corresponding observation. In
discussing ordered categorical forecast services, we are assuming that forecasts are for some unknown
real-valued quantity, such as accumulated precipitation, and that these must be issued as a categorical
forecast rather than a real-valued forecast or a predictive distribution.
    Within this framework, those designing a multicategorical forecast service specify

  (i) category thresholds that delineate the categories,

 (ii) weights that specify the relative importance of forecasting on the correct side of each category
      threshold, and

(iii) the relative cost α/(1 − α) of a miss to a false alarm, where 0 < α < 1.

In this setup, the scoring function can be expressed as a scoring matrix, which specifies the penalty
for each entry in the contingency table as described in Section 4.1. The scoring function is consistent
with the directive ‘Forecast a category which contains an α-quantile of the predictive distribution.’ In
the dichotomous case this directive is equivalent to ‘Forecast the event if and only if P(event) > 1−α.’
The weights given by (ii) are used for scoring, so that forecasters who can accurately discriminate
between events and nonevents at thresholds with higher weights are appropriately rewarded. As
discussed in Section 4.1, the scoring matrix has a natural interpretation in terms of the simple,
classical cost–loss decision model (Thompson, 1952; Murphy, 1977; Richardson, 2000).
    Section 5.3 presents a natural extension of this framework, by additionally specifying

(iv) a discounting distance parameter a, where 0 ≤ a ≤ ∞, such that the cost of misses and false
     alarms are discounted whenever the observation is within distance a of the forecast category.

When a = 0 no discounting occurs and the setup in the previous paragraph is obtained. When
a > 0, the framework gives a scoring function that discounts the cost of near misses and close false
alarms (see Section 5.3 for details). Barnes et al. (2007) argued that this was a desirable property
in certain contexts, without providing such a scoring function. In this case, the consistent directive
is expressed in terms of a Huber quantile (Taggart, 2020) rather than a quantile of the predictive
distribution. In the limiting case a = ∞, where forecast errors are penalised in proportion to the
distance from the forecast category to the real-valued observation, the directive is expressed in terms
of an expectile of a predictive distribution. Expectiles, of which the expected value is most widely
used, have recently attracted interest in finance as measures of risk (Bellini and Di Bernardino, 2017).
The relationship between quantiles, Huber quantiles and expectiles is summarised in Section 5.3 and
illustrated in Figure 4. This paper demonstrates that these statistical functionals have applications
in meteorology as well as finance.
    A special case of this framework (α = 1/2 and a = ∞) covers the situation where each forecast
category indicates the likelihood of an event, such as there being a ‘low’, ‘medium’ or ‘high’ chance
of lightning. The corresponding consistent scoring matrix is presented in Section 5.5.
    Since score optimisation is consistent with directives based on fixed risk measures, we refer to
this framework as the FIxed Risk Multicategory (FIRM) Forecast Framework and the corresponding
scores as the FIRM Scores.
    For public warning services, we discuss issues that may influence the choice of parameters, with
particular focus on whether the threshold probability 1 − α above which one issues a warning should
vary with warning lead time, and how one can estimate the implicit value of α in an existing service
where it hasn’t been specified.
    The mathematics underpinning our results rests on the insights of Ehm et al. (2016), who showed
that the consistent scoring functions of quantiles and expectiles are weighted averages of correspond-
ing elementary scoring functions, and a result of Taggart (2020) who showed the same for Huber
quantiles. The new scoring functions presented in this paper are linear combinations of the elemen-
tary scoring functions, adapted to the categorical context.

2 Notation and conventions
Suppose that Y is some as yet unknown quantity for which one can issue a forecast, taking possible
values in some interval I ⊆ R. For example Y might be the accumulated rainfall at a particular
location for a specified period of time and I = [0, ∞). The prediction space I is partitioned into
N + 1 mutually exclusive ordered categories (Ci), i = 0, …, N, via category thresholds {θi}, i = 1, …, N,
contained in I and satisfying θ1 < θ2 < … < θN. That is, Y lies in category C0 if Y ≤ θ1, in the
category Ci for 1 ≤ i < N if θi < Y ≤ θi+1, and in the category CN if Y > θN. Thus we assume that
each category includes the right endpoint of its defining interval, noting that the theory is easily
adapted for those who prefer the opposite convention.
    A predictive distribution issued by some forecaster for the quantity Y will be denoted by F and
identified with its cumulative distribution function (CDF). Hence P(Y ≤ y) = F(y) for each possible
outcome y in I. From F one obtains the forecast probability pi that Y lies in the category Ci, namely
\[
p_i = \begin{cases}
F(\theta_1), & i = 0,\\
F(\theta_{i+1}) - F(\theta_i), & 1 \le i < N,\\
1 - F(\theta_N), & i = N.
\end{cases}
\]

If 0 < α < 1 then the set of α-quantiles of the predictive distribution of F will be denoted by Qα (F ),
noting that in meteorological applications Qα (F ) is typically a singleton. The CDF of the standard
normal distribution N (0, 1) will be denoted by Φ, and its probability density function (PDF) by φ.
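As a small illustration of the category probabilities pi defined above, the following sketch (not part of the paper; the normal predictive distribution and the thresholds are invented purely for illustration) computes them from a CDF in Python.

```python
# Minimal sketch: category probabilities p_i from a predictive CDF and
# category thresholds theta_1 < ... < theta_N (illustrative values only).
import numpy as np
from scipy.stats import norm

def category_probs(cdf, thresholds):
    """Return [p_0, ..., p_N] for the right-closed categories defined by the thresholds."""
    F = np.array([cdf(t) for t in thresholds])
    return np.concatenate(([F[0]], np.diff(F), [1.0 - F[-1]]))

# Example: predictive distribution N(30, 15^2), thresholds at 25 and 50.
p = category_probs(norm(loc=30, scale=15).cdf, [25.0, 50.0])
print(p, p.sum())  # three probabilities summing to 1
```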
    In the context of tiered warnings, we adopt the convention that higher values of Y represent more
hazardous conditions. Thus C0 can be interpreted as the category of having no warning, while Ci
represents more hazardous warning categories as nonzero i increases. In some practical cases, such
as warning for extremely cold conditions, the reverse convention is more applicable and the theory
can be adapted appropriately.
    For a fixed set of forecast cases, the contingency table (cij), 0 ≤ i, j ≤ N, is a complete summary of the
joint distribution of categorical forecasts and corresponding categorical observations. Here cij is the
number of cases for which category Ci was forecast and Cj observed. See the two 3 by 3 arrays
embedded in Table 1 for an example. In the binary case, where category C0 is interpreted as a
nonevent and C1 as an event, we denote c00 by c (the number of correct negatives), c01 by m (the
number of misses), c10 by f (the number of false alarms) and c11 by h (the number of hits). Note
that, to maintain consistency with higher category cases, the contingency table for dichotomous
forecasts is in reverse order to the usual convention. Three performance measures commonly used
for dichotomous forecasts are the probability of detection (POD), false alarm ratio (FAR) and the
probability of false detection (POFD), which are defined by
\[
\mathrm{POD} = \frac{h}{h+m}, \qquad \mathrm{FAR} = \frac{f}{h+f} \qquad \text{and} \qquad \mathrm{POFD} = \frac{f}{f+c}.
\]
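These measures are straightforward to compute from a dichotomous contingency table; the snippet below is a minimal sketch using the paper's notation, with made-up counts.

```python
# Sketch: POD, FAR and POFD from a dichotomous contingency table,
# using the notation h (hits), m (misses), f (false alarms), c (correct negatives).
h, m, f, c = 82, 38, 23, 857   # hypothetical counts

pod = h / (h + m)
far = f / (h + f)
pofd = f / (f + c)
print(pod, far, pofd)
```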

2.1 Synthetic data
Several examples in this paper use synthetic data in a dichotomous setting as follows. Suppose that
a random variable Y has a normal distribution given by Y ∼ N(µ, σ²). Let the category C1 have a
base rate (i.e. climatological relative frequency) of r1. This implies that the single threshold θ1 at
the boundary of the two categories satisfies r1 = P(Y > θ1) and so θ1 = µ + σΦ⁻¹(1 − r1).
     We construct different forecast systems that issue perfectly calibrated predictive distributions F
for Y, but with varying degrees of sharpness (predictive precision) specified by a positive variable
σ₂, where a smaller value of σ₂ indicates a more accurate system. For given σ₂, the forecast system
is constructed as follows. The variable Y is written as a sum Y = Y₁ + Y₂ of two independent
random variables satisfying Y₁ ∼ N(µ, σ₁²), Y₂ ∼ N(0, σ₂²) and σ² = σ₁² + σ₂². The forecast system has
knowledge of Y₁ and issues the perfectly calibrated predictive distribution F, where F ∼ Y₁ + N(0, σ₂²).
We call σ₂/σ the relative predictive uncertainty of the system, since a value of 0 indicates perfect
knowledge of Y while a value of 1 indicates predictive skill identical to that of a climatological
forecast.
     In each example where this set-up is used, the category threshold θ1 is uniquely determined by
the base rate r1 and each forecast system is identified with its relative predictive uncertainty. In this
way, one can obtain results applicable to a wide range of idealised observational distributions and
corresponding forecast systems, independent of the specific choice of µ and σ. For ease of reference,
each example uses four different base rates (0.01, 0.05, 0.1 and 0.25) and four systems with different
relative predictive uncertainties (0.01, 0.1, 0.25, 0.5).
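A minimal simulation sketch of this set-up is given below. It is not the authors' code; the values of µ, σ, the base rate and the relative predictive uncertainty are illustrative choices from the ranges listed above, and the final two lines simply check that the construction is calibrated.

```python
# Sketch of the synthetic set-up of Section 2.1 (illustrative parameter values).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0              # arbitrary; results do not depend on this choice
r1 = 0.05                         # base rate of category C1
rel_uncertainty = 0.25            # sigma_2 / sigma

theta1 = mu + sigma * norm.ppf(1 - r1)
sigma2 = rel_uncertainty * sigma
sigma1 = np.sqrt(sigma**2 - sigma2**2)

n = 200_000
y1 = rng.normal(mu, sigma1, size=n)        # known to the forecast system
y = y1 + rng.normal(0.0, sigma2, size=n)   # the quantity being forecast

# Perfectly calibrated forecast probability of the event {Y > theta1}.
p1 = 1 - norm.cdf(theta1, loc=y1, scale=sigma2)

print("observed base rate:", (y > theta1).mean())   # close to r1
print("mean forecast probability:", p1.mean())      # also close to r1
```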

3 Service implications of optimising performance measures
Many commonly used performance measures for categorical forecasts could lead to undesirable or
possibly perverse service outcomes if forecasters were to take score optimisation seriously. Some
common issues that arise will be illustrated by considering POD and FAR, the Critical Success Index
(CSI), the Gerrity score and Extremal Dependence Score (EDS). These, and other measures, have
their uses, but their alignment with service outcomes should be carefully assessed prior to their
employment (Mason, 2003).
    For dichotomous forecasts, POD and FAR are often used in tandem to report on the accuracy of
a warning system (Karstens et al., 2015; Brooks and Correia Jr, 2018; Stumpf et al., 2015). POD is
optimised by always warning and FAR by never warning, so together they don’t constitute a clear
forecast directive, and in general POD and FAR cannot be used to rank competing warning systems.
    There have been various attempts by those using POD and FAR to provide greater service clarity.
For example, in evaluating its fire weather warnings, the Australian Bureau of Meteorology (BoM)
specifies an annual target that POD ≥ 0.7 and FAR ≤ 0.4 in each of its geographic reporting regions.
To investigate a suitable warning strategy, we use the synthetic set-up of Section 2.1 and suppose
that each year there are 2000 forecast cases to evaluate. A strategy is to warn if and only if the
forecast probability p1 of an event exceeds 1 − α. A range of values for α was tested by synthetically

[Figure 1 here: four panels (base rate = 0.01, 0.05, 0.1, 0.25) showing the proportion meeting the target against α, with one curve per relative predictive uncertainty (0.01, 0.1, 0.25, 0.5).]

Figure 1: The proportion of independently generated forecasts that meet the POD and FAR target
set by the BoM. A system warns if and only if the forecast probability of an event exceeds 1 − α.

generating 2000 samples of 2000 forecast cases for each α and calculating the proportion of samples
that met the BoM target. The results are plotted in Figure 1.
    Based on this analysis, a perfectly calibrated forecast system designed to meet BoM targets
should warn if p1 ≳ 0.3, noting that the optimal risk threshold for warning depends on the accuracy
of the system and observed base rate. However, if towards the end of the annual reporting period
the current POD lies comfortably above 0.7 while FAR > 0.4, then the best strategy for a forecaster
seeking to meet targets would be to warn only if the event were a near certainty in an attempt to
reduce FAR. Thus meeting performance targets, if taken seriously, could result in perverse outcomes.
Note also that for accurate forecast systems, there is little incentive to improve categorical predictive
performance since the target will be met with very high probability for a wide range of warning
decision strategies. Finally, stronger predictive performance is required in geographic regions with
lower base rates to meet targets.
    The CSI for a set of dichotomous forecasts is defined by CSI = h/(h + m + f ) and dates back
to Gilbert (1884). It is widely used for dichotomous forecasts of rare events (Karstens et al., 2015;
Skinner et al., 2018; Cintineo et al., 2020; Stumpf et al., 2015) since it isn’t dominated by the large
number of correct negatives relative to other outcomes. A forecaster’s expected CSI can be optimised
by forecasting C1 if and only if p1 > h/(2h + m + f ) (Mason, 1989), where h, m and f are entries of
the contingency table for cases thus far. Hence the optimal threshold probability adjusts according
to forecast performance, with less skilful forecast systems warning at lower risk than more skilful
forecasters, and all forecasters warning when p1 ≥ 0.5. As discussed in Section 4.3, there are good
reasons why a public warning service might be designed to warn at a higher level of confidence if
issuing a warning earlier than the standard lead time. Optimising CSI works against this, since
longer lead time forecasts typically have less skill.
    The meteorological literature on multicategorical forecasts has often proffered equitability as a
desirable property for a scoring rule (e.g. Livezey (2003)). A score is equitable if all constant forecasts
and random forecasts receive the same expected score. The family of Gandin and Murphy (1992)
scores for multicategorical forecasts are constructed so they are equitable, penalise under- and over-
prediction equally (symmetry), reward correct categorical forecasts more than incorrect forecasts,
and penalise larger categorical errors more heavily. The Gerrity (1992) score and LEPSCAT (Potts
et al., 1996) are members of this family. The Gerrity score is Livezey’s leading recommendation and
has been used, for example, by Bannister et al. (2021) and Kubo et al. (2017) for tiered warning
verification. In the 2-category case, the Gerrity score is identical to Peirce’s skill score (Peirce, 1884).
    We give four reasons why the Gandin and Murphy scores are unsuitable for a wide variety of
applications, including many warning services. This is primarily due to the properties of equitability
and symmetry.

First, the cost of false alarms and misses to users of a warning service are rarely equal.
    Second, equitability ensures that the rewards for forecasting rare events are sufficiently high
that forecasting the event will be worthwhile even if the likelihood of it occurring is small. These
scores ‘do not reward conservatism’ (Livezey, 2003, p. 84), primarily because incorrect forecasts of
less likely categories are penalised relatively lightly. For example, the strategy that optimises the
expected Gerrity score in the dichotomous case is to warn if and only if p1 > r1 , where r1 is the
sample base rate of the event. If the forecaster estimates that r1 < 0.01, then warning when the
probability of occurrence exceeds 1% is a worthwhile strategy for score optimisation, even if it leads
to a proliferation of false alarms that erodes public trust in the service. A related issue is that entries
of the scoring matrices include reciprocals of sample base rates, so that sampling variability results
in score instability if one category is rarely observed.
    Third, in higher category cases, the rule for converting a predictive distribution into a categorical
forecast is not transparent. For example, a tedious calculation shows that forecasting the highest
category C2 for the 3-category Gerrity score is optimal if and only if
\[
p_2 > \max\left\{ \frac{(v_0^{-1} + v_2 + 2)\,p_0 + (v_2 - v_0)\,p_1}{v_0 + v_2^{-1} + 2},\; v_2(p_0 + p_1) \right\},
\]

where ri is the sample base rate for category Ci and vi = ri /(1−ri ) is a sample odds ratio. Since each
ri also needs to be forecast, there is no clear mapping from the forecaster’s predictive distribution
for a particular forecast case to the optimal categorical forecast. Nor, in the case of public warnings,
would it be easy to communicate service implications to key stakeholders.
    Fourth, optimal rules for converting predictive distributions into categorical forecasts require on-
going re-estimation of final sample base rates, using (say) a mixture of climatology and observed
occurrences, which results in shifting optimal threshold probabilities. For example, in the 2-category
case Mason (2003) states that the optimal strategy is to warn if and only if p1 > (h + m + 1)/(n + 2),
where n is the number of forecast cases and where it is assumed that the forecaster has no climatological
knowledge. A modification to this strategy that makes use of prior climatological knowledge is
achievable using Bayesian techniques (c.f. Garthwaite et al. (1995), Example 6.1). In either case,
the optimal warning threshold probability changes with every forecast case. A forecaster may find
themselves initially warning when the risk of an event exceeds 5%, and later warning when the risk
exceeds 2%. For a department that must spend its asset protection budget each financial year or risk
a funding cut, regularly adjusting threshold probabilities is warranted. But such properties should
be a matter of choice rather than an unintended consequence of selecting an ‘off the shelf’ performance
measure.
    This fourth property is a direct consequence of equitability. It can be addressed by using a scoring
matrix constructed from base rates of a fixed past climatological reference period, though the score
will no longer be truly equitable. However, this adjustment will not address the first three problems
listed.
    We briefly discuss the EDS, which has recently been used to measure the performance of the
German Weather Service’s nowcast warning system (James et al., 2018). The inventors of this score
write that the “optimal threshold [probability] for the EDS is zero, and so the EDS is consistent with
the rule ‘always forecast the event.’ This rule is unlikely ever to be issued as a directive and therefore
the EDS will be hedgable whenever directives are employed” (Ferro and Stephenson, 2011, p. 705).
The same paper urges that “the EDS should be calculated only after recalibrating the forecasts so
that the number of forecast events equals the number of observed events.” Given these properties,
we find it difficult to see the value of the EDS for public warning performance assessment although
it may be valuable in other contexts.
    The performance measures discussed in this section do not have ex-ante penalties for individual
forecast cases. Of these, only the Gandin and Murphy scores assign a penalty to each individual
forecast case, though without modifying these scores the penalties that will be applied to various
errors are not known to the forecaster when each forecast is issued. These measures appear to be
designed for extracting a signal of skill from an existing contingency table when there is no information
on the decision process for issuing categorical forecasts. In the next section, we introduce a family
of scoring functions that are fundamentally different in nature.

4 A new framework for ordered categorical forecasts
4.1 Scoring matrix, optimal forecast strategy and economic interpretation
Here we describe a score to assess ordered categorical forecasts that is consistent with directives
based on fixed threshold probabilities. Those designing the categorical forecast service provide the
following specifications:
   • an increasing sequence (θi), i = 1, …, N, of category thresholds that defines the ordered sequence
     (Ci), i = 0, …, N, of N + 1 categories;

   • a corresponding sequence (wi), i = 1, …, N, of positive weights that specifies the relative importance
     of a forecast and corresponding observation falling on the same side of each category threshold;
     and
   • a single parameter α from the interval (0, 1) such that, for every category threshold, the cost
     of a miss relative to the cost of a false alarm is α/(1 − α).
    For example, a marine wind warning service might be based on three category thresholds θ1 =
25 kt, θ2 = 34 kt and θ3 = 48 kt to demark four categories (no warning, strong wind warning, gale
warning and storm warning). If the importance of forecasting on the correct side of the highest
category threshold (48 kt) is twice that of the other thresholds, then set (w1 , w2 , w3 ) = (1, 1, 2).
Selecting α = 0.7 implies that a miss (relative cost 0.7) is more costly than a false alarm (relative
cost 0.3).
    In this framework, a miss relative to the category threshold θi occurs when the forecast is for
some category below θi whereas the observation y satisfies y > θi . The penalty for such a miss is
αwi . A false alarm relative to the category threshold θi occurs when the forecast is for some category
above θi whereas the observation y satisfies y ≤ θi . The penalty for such a false alarm is (1 − α)wi .
Hits and correct negatives relative to θi , where the forecast category and observed category lie on
the same side of θi , incur zero penalty. When summed across all category thresholds, this scoring
system gives rise to the scoring matrix (sij), 0 ≤ i, j ≤ N, whose entries give the penalty when Ci is
forecast and Cj observed, namely
\[
s_{ij} =
\begin{cases}
0, & i = j,\\
\alpha \sum_{k=i+1}^{j} w_k, & i < j,\\
(1 - \alpha) \sum_{k=j+1}^{i} w_k, & i > j.
\end{cases}
\tag{1}
\]

For the dichotomous and 3-category cases, the scoring matrices are
\[
\begin{pmatrix} 0 & \alpha w_1 \\ (1-\alpha)w_1 & 0 \end{pmatrix}
\quad\text{and}\quad
\begin{pmatrix} 0 & \alpha w_1 & \alpha(w_1 + w_2) \\ (1-\alpha)w_1 & 0 & \alpha w_2 \\ (1-\alpha)(w_1 + w_2) & (1-\alpha)w_2 & 0 \end{pmatrix}.
\tag{2}
\]
The entries above the zero diagonal represent penalties for misses and those below for false alarms,
while correct forecasts are not penalised. In the multicategory case, larger over-prediction errors
receive higher penalties than smaller over-prediction errors, since a larger error is a false alarm
relative to more category thresholds. A similar statement holds for under-prediction penalties.
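As an illustration of Equation (1), the sketch below (ours, not code from the paper) assembles the scoring matrix from the weights and risk parameter, and reproduces the matrices of Equations (2) and (3).

```python
# Sketch: build the scoring matrix of Equation (1) from weights w_1, ..., w_N
# and risk parameter alpha.
import numpy as np

def firm_scoring_matrix(weights, alpha):
    w = np.asarray(weights, dtype=float)
    n = len(w) + 1                          # N + 1 categories
    s = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i < j:                       # miss relative to thresholds i+1, ..., j
                s[i, j] = alpha * w[i:j].sum()
            elif i > j:                     # false alarm relative to thresholds j+1, ..., i
                s[i, j] = (1 - alpha) * w[j:i].sum()
    return s

print(firm_scoring_matrix([1, 4], alpha=0.75))  # matches the matrix of Equation (3)
```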
    The scoring matrix presented is consistent with the directive ‘Forecast any category Ci which
contains an α-quantile of the predictive distribution F .’ In meteorological applications, α-quantiles
and hence the choice of forecast category will typically be unique. The proof of consistency, namely
that a forecaster following the directive will optimise their expected score, will be given in Section 5.1.
An equivalent directive reformulated in categorical terms is ‘Forecast the highest category for which
the probability of observing that category or higher exceeds 1 − α.’ In the dichotomous case, the
directive reduces to ‘Warn if and only if the forecast probability of an event exceeds 1 − α.’ Because
of its connection with measures of risk, we refer to α as the risk parameter of the scoring framework.
    This scoring matrix rewards forecasters who can correctly discriminate between each threshold
θi at the α-quantile level. The weights wi indicate the thresholds at which discrimination is most
valuable, and provide a clear signal for where to target predictive improvement. This scoring matrix
has a degree of transparency that is absent for the Gerrity score, particularly as the number of
categories increases.
    We refer to this new framework as the FIxed Risk Multicategory (FIRM) framework, because
optimal forecasting strategies are consistent with forecast directives based on the fixed threshold
probability 1 − α, or equivalently on the α-quantile as a measure of risk for fixed α. The framework
presented in this subsection is denoted by

FIRM((θ1, …, θN), (w1, …, wN), α, 0),

where the first three parameters specify the category thresholds θi , corresponding weights wi and
risk parameter α. The raison d’être of the final parameter, called the discounting distance parameter
and here taking the value 0, will become apparent in Section 5 where an extension of the framework
is presented.
    The FIRM framework just presented, where the discounting distance parameter is 0, can be inter-
preted as a generalisation of the simple classical cost–loss decision model for dichotomous forecasts
(e.g. Richardson (2003)). In this model, a user takes preventative action at cost C if and only if the
event is forecast. On the other hand, if the event is not forecast but occurs then the user incurs a
loss L. It is assumed that 0 < C < L, otherwise the user would not take preventative action. This
model can be encoded in an expense matrix Mexpense , whose (i, j)th entry is the expense incurred if
Ci is forecast and Cj observed, namely
                                                           
\[
M_{\mathrm{expense}} = \begin{pmatrix} 0 & L \\ C & C \end{pmatrix}.
\]

The expense matrix can be converted into a relative economic regret matrix Mregret , the latter
encoding the economic loss incurred relative to actions taken based on a perfect forecast. Explicitly,
                                                           
\[
M_{\mathrm{regret}} = \begin{pmatrix} 0 & L - C \\ C & 0 \end{pmatrix},
\]

where the (i, j)th entry gives the relative regret acting on the basis of forecast Ci when Cj was
observed. For example, a miss (forecast C0 , observe C1 ) incurs loss L, but even a perfect forecast
(forecast and observe C1 ) would incur cost C, so the relative economic regret is L − C. As noted by
Ehm et al. (2016), from a decision theoretic perspective the distinction between expense and economic
regret is inessential because the difference depends on the observations only. The matrix Mregret is
precisely the dichotomous FIRM scoring matrix in Equation (2) for the choice α = 1 − C/L and
w1 = L. Thus, over many forecast cases, the mean score is the average relative economic regret for
a user whose decisions to take protective action were based on the forecast. The consistent forecast
directive aligns with the well-known result that the user minimises their expected expense by taking
protective action if and only if the probability of an event exceeds their cost–loss ratio C/L.
    To interpret the multicategorical case, the user takes a specific form of protective action at each
threshold θi below the forecast category. The cost–loss ratio C/L for each threshold is identical but
the relative costs and losses differ by threshold θi according to the weights wi . As before, the FIRM
score is the economic regret relative to basing decisions on a perfect forecast.
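A quick numeric check of this correspondence is given below (the values of C and L are made up; any 0 < C < L works).

```python
# Check that the regret matrix equals the dichotomous FIRM scoring matrix of
# Equation (2) with alpha = 1 - C/L and w1 = L (illustrative C and L).
import numpy as np

C, L = 2.0, 10.0                    # hypothetical cost and loss, 0 < C < L
alpha, w1 = 1 - C / L, L

M_regret = np.array([[0.0, L - C],
                     [C,   0.0]])
firm = np.array([[0.0,              alpha * w1],
                 [(1 - alpha) * w1, 0.0]])

print(np.allclose(M_regret, firm))  # True
```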

4.2 Example using NSW rainfall data
To illustrate the FIRM framework, we use real forecasts of daily precipitation for 110 locations across
New South Wales (NSW), Australia, for the two year period starting 1 April 2019. Two forecast
systems of the BoM are compared: the Operational Consensus Forecast (OCF) and Official. OCF
is an automated statistically post-processed poor man’s ensemble (Bureau of Meteorology, 2018).
Official is the official forecast published by the BoM and is manually curated by meteorologists. Both
systems issue forecasts for the probability of precipitation exceeding various thresholds, from which
we have reconstructed full predictive distributions using a hybrid generalised gamma distribution
with very close fits to known points from the original distribution. The forecast data used here is
from the reconstructed distributions.
    Suppose that a tiered warning service for heavy rainfall has two category thresholds, θ1 = 50mm
and θ2 = 100mm, to demark three categories: ‘no warning conditions’ (C0 ), ‘heavy rainfall’ (C1 ) and
‘very heavy rainfall’ (C2 ). With specified weights w1 = 1 and w2 = 4 and risk parameter α = 0.75,
the FIRM scoring matrix is
\[
\begin{pmatrix} 0 & 0.75 & 3.75 \\ 0.25 & 0 & 3 \\ 1.25 & 1 & 0 \end{pmatrix},
\tag{3}
\]
so that misses attract greater penalties than false alarms. This application of the framework is
denoted by FIRM((50, 100), (1, 4), 0.75, 0).
     A directive that is consistent with the optimal forecast strategy is to ‘Forecast the category that
contains the 0.75-quantile of the predictive precipitation distribution.’ Consequently, the optimal
forecast strategy is to warn for very heavy rainfall at a location if the 0.75-quantile of the predictive
distribution exceeds 100mm, or equivalently if the forecast probability of exceeding 100mm is greater
than 25%.
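A sketch of this conversion step is shown below (not the authors' code; the lognormal predictive distribution is invented purely to illustrate the two equivalent forms of the directive).

```python
# Sketch: convert a predictive distribution into a warning category, either via
# the alpha-quantile or via exceedance probabilities; the two rules agree.
import numpy as np
from scipy.stats import lognorm

thresholds = np.array([50.0, 100.0])     # mm, as in the heavy rainfall example
alpha = 0.75
predictive = lognorm(s=0.8, scale=60.0)  # hypothetical predictive distribution

q = predictive.ppf(alpha)                                  # the 0.75-quantile
cat_from_quantile = int(np.searchsorted(thresholds, q))    # side='left' matches right-closed categories

exceedance = 1 - predictive.cdf(thresholds)                # P(Y > theta_i) for each threshold
cat_from_probs = int((exceedance > 1 - alpha).sum())       # highest category with exceedance > 1 - alpha

print(q, cat_from_quantile, cat_from_probs)
```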
     Using this directive for converting a probabilistic forecast into a categorical forecast, contingency
tables for lead day 1 Official and OCF forecasts are shown in Table 1. Note that the observed base
rate in this sample for warning conditions (C1 ∪ C2) is about 4.5 times the observed base rate for
very heavy rainfall events (C2), the latter being about 0.15% (though base rates vary by location).
The mean score for OCF was 7.2 × 10⁻³ compared with 7.4 × 10⁻³ for Official, indicating that OCF
performed better overall. A 95% confidence interval for the difference is (−1.2 × 10⁻³, 9.4 × 10⁻⁴)
and includes 0, which indicates that the difference in performance is not statistically significant.
Here the confidence interval is generated by first calculating the difference in daily mean scores,
and then using a normal approximation to the distribution of those differences, a procedure closely
related to conducting one-sample t-tests. The confidence interval is relatively wide partly because the
difference in daily mean scores is nonzero in only 32 out of 731 days, and partly because of noise in
those differences. By writing the scoring matrix as a sum of its upper and lower triangular matrices,
the mean score can also be expressed as a sum of the penalty from misses and from false alarms.
This reveals stark differences between the two systems. The mean penalty for misses was 6.1 × 10⁻³
(i.e., 87% of the total mean score) for OCF in contrast to 3.6 × 10⁻³ (49% of the total) for Official.
Conversely, Official was penalised heavily for false alarms relative to OCF.
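The sketch below (not the authors' code) computes these quantities directly from the OCF contingency table in Table 1 and the scoring matrix of Equation (3); the values it returns are close to the rounded figures quoted above.

```python
# Sketch: mean FIRM score from a contingency table (forecast rows, observed
# columns) and the scoring matrix of Equation (3), split into miss and
# false-alarm components via the upper and lower triangles.
import numpy as np

S = np.array([[0.0, 0.75, 3.75],
              [0.25, 0.0, 3.0],
              [1.25, 1.0, 0.0]])

ocf = np.array([[77984, 259, 37],
                [199,   136, 50],
                [6,      15, 27]])   # OCF counts from Table 1

n = ocf.sum()
mean_score = (S * ocf).sum() / n
miss_penalty = (np.triu(S) * ocf).sum() / n          # entries above the diagonal
false_alarm_penalty = (np.tril(S) * ocf).sum() / n   # entries below the diagonal

print(mean_score, miss_penalty, false_alarm_penalty)
```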
     Neither OCF nor Official are perfectly calibrated. To understand the potential of both warning
systems once recalibrated, we examine their performance using the FIRM scoring matrix of Equa-

Table 1: Contingency table for NSW rainfall data, lead day 1.

   OCF (rows: forecast, columns: observed):
             C0     C1    C2    total
   C0      77984   259    37   78280
   C1        199   136    50     385
   C2          6    15    27      48
   total   78189   410   114   78713

   Official (rows: forecast, columns: observed):
             C0     C1    C2    total
   C0      77658   165    13   77836
   C1        451   171    36     658
   C2         80    74    65     219
   total   78189   410   114   78713


Figure 2: Scores using NSW rainfall data. Left: Mean score using the FIRM scoring matrix with
α = 0.75, but where the categorical forecast issued is determined by the β-quantile of the predictive
distribution, as discussed in Section 4.2. Centre: Penalty when issuing warnings at lead day 2 based
on β-quantile forecasts, given that lead day 1 warnings will be issued using 0.75-quantile forecasts,
as discussed in Section 4.3. The vertical scale is logarithmic. Right: Mean FIRM scores for different
risk parameters α, and where categorical forecasts are determined by the α-quantile of the predictive
distribution, as discussed in Section 4.3.

tion (3), where α = 0.75, but convert their predictive distributions into categorical forecasts using
the β-quantile. The results are shown in the left panel of Figure 2 for a range of β. Official scored
best when using its 0.66-quantile forecast, with a 7% improvement in score over the uncalibrated
system. OCF scored best using its 0.84-quantile forecast with a 3% improvement. Hence Official has
an over-prediction bias and OCF an under-prediction bias at the 50mm and 100mm category thresh-
olds. For this sample, recalibrated Official would have performed marginally better than recalibrated
OCF.

4.3 Practical considerations when using the FIRM framework
We have introduced the FIRM framework for multicategorical forecasts. But those who use FIRM
need to make appropriate choices for the parameters α, θi and wi . If the forecast service is intended
for decision making in a specific commercial venture then the costs and losses in that operating
environment should determine the parameters (c.f. Ambühl (2010)). Consideration should be given
to varying the parameters with forecast lead time, since forecast accuracy and the cost of taking
protective action also vary with lead time (Jewson et al., 2021).
    For public weather warnings, the considerations are different (c.f. Rothfusz et al. (2018)). Me-
teorology agencies have long histories of selecting (and sometimes revising) category thresholds θi
for their warning services based on previous events, the impact of severe weather on communities,
urban design regulations, consultation with emergency services and community engagement. Less
often are risk parameters, such as α, explicitly selected. Over-warning and false alarm intolerance
can lead to warning fatigue, weaken trust in forecasts and willingness to respond appropriately to
warnings (Gutter et al., 2018; Potter et al., 2018; Mackie, 2014; Hoekstra et al., 2011), and must
be weighed against the cost of misses. Appropriate engagement with the community around risk
tolerance, warning service design and communication is essential. Some recent studies attempt to
quantify appropriate risk thresholds for public forecasts (Rodwell et al., 2020; Roulston and Smith,
2004), while insights from prospect theory are also informative (Kahneman and Tversky, 1979). A
study by LeClerc and Joslyn (2015) suggests that while false alarms can undermine trust in forecasts,
this effect is only moderate compared with the stronger positive effect of including well-communicated
probabilistic information with the forecast (Joslyn et al., 2009). Thus a range of possible α values
might be suitable for a public warning service, provided that the service is informed by best-practice
warning communication and community engagement. Section 4.4 shows a method for estimating
the implicit risk parameter α for an existing warning service. Design of public warning systems also
includes the selection of an appropriate standard lead time (Hoekstra et al., 2011) and assessment
of whether there is any benefit to warning early or late, noting that the urgency for people to pay
attention may wane with increasing lead time (Turner et al., 1986; Mackie, 2014).
    To see why the risk parameter might vary with lead time, consider a dichotomous warning service
with two lead times (‘early’ and ‘standard’), and suppose that warnings are issued at standard lead
time if and only if the probability of an event exceeds 1 − α. Those designing the service specify
that (i) it is slightly undesirable to not warn early then warn at standard lead time, and (ii) highly
undesirable to warn early then retract the warning at the standard lead time. That is, warning
early has some benefit but needs to be weighed against the heavy reputational cost of retracting
warnings. This could be quantified by a penalty matrix T , whose (i, j)th entry tij specifies the cost
of forecasting category Ci early and then forecasting Cj at the standard lead time. To calculate the
suitable threshold probability 1 − β for issuing an early warning, an historical forecast data set can
be used to find a β that minimises the score
\[
S(\beta, \alpha) = \sum_{i=0}^{1} \sum_{j=0}^{1} t_{ij}\, n_{ij}^{(\beta,\alpha)},
\]
where n_{ij}^{(β,α)} is the number of times that Ci was forecast early based on the available β-quantile
forecast, and that Cj was forecast at standard time based on the available α-quantile forecast.
    To illustrate this for the NSW rainfall data, we take the categorical warning threshold θ1 = 50mm,
and suppose that the risk parameter α = 0.75 at the standard issue time (lead day 1) has been set.
Suppose that retracting an early warning should be penalised 15 times more heavily than issuing a
warning at standard lead time only. Then the penalty matrix T is given by
                                                         
\[
T = \begin{pmatrix} 0 & 1 \\ 15 & 0 \end{pmatrix}.
\tag{4}
\]

The score S(β, α) is then calculated for a range of β values associated with early warning decisions
(lead day 2). The centre panel of Figure 2 shows the results. On this dataset, β = 0.65 was best for
OCF and β = 0.6 for Official. Unsurprisingly, warning early requires higher confidence than warning
at the standard time. OCF scores better because it exhibits more stability across lead time.
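In code, evaluating S(β, α) for one candidate β is a single weighted sum, as the sketch below shows; the counts are hypothetical, since in practice they are tallied from the historical forecast data set for each β.

```python
# Sketch: the early-warning penalty S(beta, alpha) of Section 4.3 for one
# candidate beta, using the penalty matrix T of Equation (4) and hypothetical
# counts n[i, j] of (early forecast C_i, standard-time forecast C_j) pairs.
import numpy as np

T = np.array([[0, 1],
              [15, 0]])

n_counts = np.array([[78000, 40],      # hypothetical counts for some beta
                     [25,   160]])

S_beta_alpha = (T * n_counts).sum()
print(S_beta_alpha)   # repeat over a grid of beta values and take the minimiser
```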
    As with all forecast verification, care must be taken when aggregating performance scores from
multiple locations. Aggregated results are easiest to interpret when the risk parameter α is constant
across the domain, and when the category thresholds θi vary across the domain so that the clima-
tological base rates of each warning category are spatially invariant. The NSW rainfall example in
Section 4.2 used fixed thresholds 50mm and 100mm. Consequently, mean scores are higher in the
wetter northeastern part of the domain than in the drier northwestern quarter, and hence the forecast
system that performs best in the northeast is more likely to obtain a better mean score overall.

To illustrate why α should be constant for meaningful aggregation, for a given α we calculate the
mean score S̄α for the lead day 1 NSW precipitation forecasts, using the FIRM scoring matrix of
Equation (1) for fixed θ1 = 50mm, θ2 = 100mm, w1 = 1 and w2 = 4. The category that is forecast
in each case is the one which contains the α-quantile of the predictive distribution. A graph of S̄α
against α is shown in the right panel of Figure 2. For small α, S̄α is low because the forecast problem
is easy. Here, the α-quantile is usually well below the 50mm threshold, so false alarms are rare while
the misses are penalised very lightly. For mid to high values of α, S̄α is higher because the forecast
problem is harder. There are more cases where the α-quantile is near category thresholds, and so
more chances of a false alarm or otherwise a more heavily penalised miss. This reemphasises that the
FIRM scoring matrix of Equation (1) is not a normalised skill score, though a FIRM skill score can
be constructed from it in the standard way (Potts, 2003, p. 27). Instead, it is designed as a consistent
scoring function to monitor performance trends or to rank competing forecast systems that could be
used for a multicategorical forecast service with specified threshold probability.
    Finally, consideration should be given to the choice of weights wi . If the frequency with which
categories are observed is roughly equal and the consequences of a forecast error relative to one
category threshold is no different from any other threshold, then a natural choice is wi = 1 for every
i. However, for most tiered warning services, observations fall in higher categories less frequently yet
the impact of forecast errors at these higher categories tends to be greater. In this context, applying
equal weights will not appropriately reflect the costs of forecast errors relative to different category
thresholds. Moreover, a forecast system that has good discrimination between events and nonevents
relative to lower category thresholds but performs poorly for higher thresholds is unlikely to suffer a
bad mean FIRM score over many events when equal weights are applied, because forecast cases that
expose its weakness will be relatively rare. Instead, the weights should reflect the higher cost of poor
discrimination at higher thresholds. If quantifying these costs is difficult, one simple approach is to
calculate the base rate ri of observations exceeding θi over a fixed climatological reference period,
and then set wi = 1/ri . If normalisation is desired, set wi = r1 /ri . The FIRM weights selected in
the NSW rainfall example of Section 4.2 loosely followed this principle.
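A tiny sketch of this weighting choice, with hypothetical exceedance base rates, is:

```python
# Sketch: weights from exceedance base rates, w_i = 1/r_i, or the normalised
# variant w_i = r_1 / r_i (base rates below are hypothetical).
import numpy as np

r = np.array([0.02, 0.004])   # base rates of exceeding theta_1 and theta_2
print(1 / r)                  # [ 50. 250.]
print(r[0] / r)               # [1. 5.]
```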

4.4 Estimating an unspecified risk parameter α
Suppose that an existing warning service does not explicitly specify a confidence threshold 1 − α
for issuing a warning. It is possible to estimate an implicit confidence threshold from historical
contingency tables. A naïve approach assumes that the historical ratio of false alarms to misses
indicates the implicit ratio of the cost of a miss to that of a false alarm. That is, an estimate α̂ of α
is based on the equation α̂m = (1 − α̂)f , which rearranged gives α̂ = f /(f + m). However, α̂ is quite
a biased estimator of α. Using the synthetic data set-up of Section 2.1, we calculate α̂ as a function
of α from 2 × 10⁷ forecast cases. The results are plotted in the top panel of Figure 3. The estimate
is particularly bad when α, forecast accuracy and base rates are low.
    Precise analytical statements can also be made. For example, a perfectly calibrated forecast
system that warns if and only if the forecast probability of an event exceeds 0.5 is not expected to
produce misses and false alarms in equal measure. To see why, assume again the synthetic data set-
up, with random variable Y satisfying Y = Y₁ + Y₂ and perfectly calibrated predictive distributions
of the form F = Y₁ + N(0, σ₂²). If the warning threshold is θ then the system warns if and only if
Y₁ > θ, since Q0.5(F) = Y₁. Now
\[
P(\text{event observed}) = P(Y > \theta) = 1 - \Phi\!\left(\frac{\theta - \mu}{\sqrt{\sigma_1^2 + \sigma_2^2}}\right),
\]
while
\[
P(\text{warning issued}) = P(Y_1 > \theta) = 1 - \Phi\!\left(\frac{\theta - \mu}{\sigma_1}\right).
\]

[Figure 3 here: panels by base rate (0.01, 0.05, 0.1, 0.25), with the estimators α̂ (top row) and α̃ (bottom row) plotted against α, one curve per relative predictive uncertainty (0.01, 0.1, 0.25, 0.5).]

Figure 3: The empirical relationship between α and estimators α̂ (top panel) and α̃ (bottom panel)
for perfectly calibrated forecast systems, as described in Section 2.1, based on 2 × 10^7 forecast cases.

So if θ > µ then the probability of issuing a warning is less than the probability of observing the
event, and one can expect that there will be more misses than false alarms notwithstanding perfect
calibration.
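This asymmetry is easy to reproduce by simulation. The following sketch uses hypothetical parameter values µ = 0, σ₁ = σ₂ = 1 and θ = 2 (not the configuration used for Figure 3) and counts misses and false alarms for a perfectly calibrated system that warns when the median of F exceeds θ:

    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma1, sigma2, theta = 0.0, 1.0, 1.0, 2.0   # hypothetical values
    n = 10**6

    y1 = rng.normal(mu, sigma1, n)          # signal component known to the forecaster
    y = y1 + rng.normal(0.0, sigma2, n)     # observation Y = Y1 + Y2

    event = y > theta
    warn = y1 > theta                       # warn iff Q_0.5(F) = Y1 exceeds theta

    misses = np.mean(event & ~warn)
    false_alarms = np.mean(warn & ~event)
    print(misses, false_alarms)             # misses exceed false alarms since theta > mu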
    A corollary is that, for well-calibrated forecast systems, the ratio of misses to false alarms is
not necessarily a good indicator of performance. Conversely, calibrating a system so that
αm ≈ (1 − α)f may result in a poorer warning service as assessed by the consistent FIRM scoring
matrix. When communicating with users, one should therefore be cautious about statements that relate α
to the ratio of false alarms to misses.
    A different approach to estimating α from historical contingency tables comes from signal de-
tection theory (e.g. Mason (2003)). If one assumes that the theory’s ‘noise’ and ‘signal plus noise’
distributions are Gaussian with equal variance, then one obtains an estimate α̃ of α given by
    α̃ = 1/(τ + 1),        τ = [φ(Φ^{−1}(1 − POD)) / φ(Φ^{−1}(1 − POFD))] × (h + m)/(f + c).                (5)
(Mason, 2003, Section 3.4.4c). Figure 3 shows that for perfectly calibrated normal predictive distri-
butions, α̃ is generally a more reliable estimate of α than α̂, particularly for low base rates and less
accurate forecast systems.
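Both estimators are straightforward to compute from a dichotomous contingency table. A sketch follows (the counts are hypothetical; scipy supplies the standard normal density φ and quantile function Φ^{−1}):

    from scipy.stats import norm

    def estimate_alpha(h, m, f, c):
        """Naive estimate alpha_hat = f/(f + m) and the signal detection theory
        estimate alpha_tilde of Equation (5), from hits h, misses m, false alarms f
        and correct negatives c."""
        alpha_hat = f / (f + m)
        pod = h / (h + m)      # probability of detection
        pofd = f / (f + c)     # probability of false detection
        tau = (norm.pdf(norm.ppf(1 - pod)) / norm.pdf(norm.ppf(1 - pofd))) * (h + m) / (f + c)
        alpha_tilde = 1 / (tau + 1)
        return alpha_hat, alpha_tilde

    # Hypothetical counts (not those of Table 1).
    print(estimate_alpha(h=30, m=10, f=25, c=935))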
    To illustrate, we convert the 3-category contingency tables of Table 1 into two dichotomous
contingency tables by merging C1 and C2 into a single category C1 ∪ C2 . Then α̃ = 0.75 for OCF and
α̃ = 0.89 for Official. However, neither forecast system is perfectly calibrated, nor is α̃ an unbiased
estimator of α. With a base rate of 0.007, the bottom left panel of Figure 3 suggests that these values
of α̃ may over-estimate α by around 0.05 or 0.1. Applying this correction gives estimated α values
that are compatible with the earlier observation that the 0.75-quantile of OCF had an under-forecast
bias while that for Official had an over-forecast bias.

5 Extension of the FIRM framework
5.1 Reframing and proof of consistency
In Section 4.1 the basic FIRM framework for assessing ordered categorical forecasts was expressed
in terms of a scoring matrix, so that forecasts could be scored using knowledge only of the forecast

and observed categories. We now reframe the scoring method so that it is expressed in terms of the
underlying continuous variables, namely some single-valued (or point) forecast x and a corresponding
observation y in I. The two scoring formulations are equivalent, but this reframing allows an efficient
proof that the optimal forecast strategy is consistent with the directive 'Forecast a category that contains
an α-quantile of the predictive distribution.’ It also facilitates notation for an extension of the
framework presented thus far.
    For θ in I and α in (0, 1), let S^Q_{θ,α} : I × I → [0, ∞) denote the scoring function

    S^Q_{θ,α}(x, y) = 1 − α if y ≤ θ < x;   α if x ≤ θ < y;   0 otherwise.                (6)

In the convention we have adopted, where higher values of x and y indicate more hazardous forecast
or observed conditions, the scoring function S^Q_{θ_i,α} applies a penalty of 1 − α for false alarms and a
penalty of α for misses relative to the threshold θ_i. So for a sequence of category thresholds (θ_i)_{i=1}^N
and corresponding weights, the scoring function S^Q : I × I → [0, ∞), given by

    S^Q(x, y) = Σ_{i=1}^N w_i S^Q_{θ_i,α}(x, y),                (7)

is equivalent to the FIRM scoring matrix defined by Equation (1) via mapping the point forecast
x ∈ I and the observation y ∈ I to the unique category in which each belongs.
    The scoring function S^Q_{θ,α} is an elementary scoring function for the α-quantile. Since S^Q is a linear
combination of the elementary scoring functions, it is consistent for the α-quantile (Ehm et al., 2016,
Theorem 1). This means that any α-quantile of the predictive distribution F is a minimiser of the
mapping x ↦ E[S^Q(x, Y)], given that Y has distribution F (Gneiting, 2011, Definition 2.1). Hence
if Ci contains an α-quantile of F then Ci is a minimiser of the forecaster's expected FIRM score.
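For a single forecast case, Equation (7) amounts to a short loop over the categorical thresholds. A minimal sketch (illustrative values only):

    def firm_score(x, y, thresholds, weights, alpha):
        """FIRM score S^Q(x, y) of Equation (7) for point forecast x and observation y."""
        score = 0.0
        for theta, w in zip(thresholds, weights):
            if y <= theta < x:        # false alarm with respect to theta
                score += w * (1 - alpha)
            elif x <= theta < y:      # miss with respect to theta
                score += w * alpha
        return score

    # Hypothetical example: two thresholds, alpha = 0.7; the observation exceeds the
    # second threshold but the forecast does not, so the score is 5 * 0.7 = 3.5.
    print(firm_score(x=15.0, y=55.0, thresholds=[10.0, 50.0], weights=[1.0, 5.0], alpha=0.7))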

5.2 Varying α with threshold
One may vary the FIRM framework by choosing a different risk parameter α_i for each categorical
threshold θ_i, so that the scoring function S is given by

    S(x, y) = Σ_{i=1}^N w_i S^Q_{θ_i,α_i}(x, y).

However, we argue that this should only be done if it actually reflects the costs of forecast errors
for the specific user. For public warning services, there are at least two reasons to avoid varying α
with threshold. First, the forecast directive is considerably more complex than ‘Forecast a category
that contains an α-quantile of the predictive distribution’. Second, normally when there is a knife-
edge decision as to which category to forecast, we would expect it to be a choice between adjacent
categories. This is true when α is fixed but can be violated when α varies.
    To illustrate this second point, consider a three-tiered warning service with θ1 = 0, θ2 = 2,
α1 = 0.1, α2 = 0.9 and w1 = w2 = 1. Suppose that the predictive distribution F is N (1, 1), so that
p0 = 0.16, p1 = 0.68 and p2 = 0.16. Which warning category minimises the expected score? A quick
calculation shows that, in general,
    E[S^Q_{θ,α}(x, Y)] = (1 − α)F(θ) if x > θ;   α(1 − F(θ)) if x ≤ θ,

whenever Y has predictive distribution F . Consequently, when Y ∼ N (1, 1) it can be shown that
the forecaster’s expected score E[S(x, Y )] is minimised whenever x lies in C0 or C2 but not in C1 .
Hence forecasting either C0 or C2 is optimal whilst forecasting C1 is suboptimal.
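This can be verified directly from the expression above. The following sketch (using scipy and representative points x = −1, 1 and 3 for C0, C1 and C2) confirms that C0 and C2 minimise the expected score while C1 does not:

    from scipy.stats import norm

    F = norm(loc=1.0, scale=1.0)      # predictive distribution N(1, 1)
    thresholds = [0.0, 2.0]
    alphas = [0.1, 0.9]
    weights = [1.0, 1.0]

    def expected_score(x):
        """Expected score E[S(x, Y)] when Y ~ F and alpha varies with threshold."""
        total = 0.0
        for theta, alpha, w in zip(thresholds, alphas, weights):
            if x > theta:
                total += w * (1 - alpha) * F.cdf(theta)       # expected false-alarm penalty
            else:
                total += w * alpha * (1 - F.cdf(theta))       # expected miss penalty
        return total

    for cat, x in [("C0", -1.0), ("C1", 1.0), ("C2", 3.0)]:
        print(cat, round(expected_score(x), 3))
    # C0 0.227, C1 0.286, C2 0.227: forecasting C1 is suboptimal.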
    If there is a strong need in a public warning service to have a different risk parameter α for each
categorical threshold θi , then one could consider implementing a separate warning product for each
distinct categorical threshold rather than considering it as a single tiered warning service.

5.3 A scoring function that discounts the penalty for marginal events
In many contexts it may not be desirable to penalise forecast errors strictly categorically, where
near and gross misses attract the same penalty (Barnes et al., 2007; Sharpe, 2016). The following
variation on our categorical framework provides a more nuanced scoring system so that near misses
are penalised less than gross misses, whilst retaining the categorical nature of the forecast. The
scoring method requires knowledge of the forecast category and of the real-valued observation.
    Whenever 0 < a ≤ ∞, θ ∈ I and 0 < α < 1, let S^H_{θ,α,a} : I × I → [0, ∞) denote the scoring function

    S^H_{θ,α,a}(x, y) = (1 − α) min(θ − y, a) if y ≤ θ < x;   α min(y − θ, a) if x ≤ θ < y;   0 otherwise,                (8)

whenever x, y ∈ I. The parameter a is called the discounting distance parameter. When a is finite,
false alarms with respect to the threshold θ are typically penalised by (1 − α)a, but if the observation
y is within distance a of the threshold θ then a discounted penalty is applied, being proportional to
the distance of y from θ. Similar discounting occurs for misses that are within a of the threshold
θ. When a = ∞, the cost of a miss is always proportional to the distance of the observation from
the threshold, and similarly for false alarms. Note that the only information used about the point
forecast x is whether it lies above or below the categorical threshold θ. Hence this scoring function
can be written so that it is categorical with respect to the forecast, but real-valued with respect to
the observation.
    To generalise this for multicategorical forecasts, we sum across all categorical thresholds θi to
obtain the scoring function S^H, where

    S^H(x, y) = Σ_{i=1}^N w_i S^H_{θ_i,α,a}(x, y)                (9)

and, like Equation (7), each positive weight wi specifies the relative importance of forecasting on the
correct side of the categorical threshold θi .
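A minimal sketch of Equations (8) and (9) (illustrative arguments only, not the authors' operational implementation) makes the discounting explicit:

    def firm_h_score(x, y, thresholds, weights, alpha, a):
        """Discounted FIRM score S^H(x, y) of Equation (9), where x is the point
        (or category-representative) forecast, y the real-valued observation and
        a the discounting distance parameter."""
        score = 0.0
        for theta, w in zip(thresholds, weights):
            if y <= theta < x:                      # false alarm: penalty capped at a
                score += w * (1 - alpha) * min(theta - y, a)
            elif x <= theta < y:                    # miss: penalty capped at a
                score += w * alpha * min(y - theta, a)
        return score

    # Near miss versus gross miss relative to a 50.0 threshold (hypothetical values).
    print(firm_h_score(x=45.0, y=52.0, thresholds=[50.0], weights=[1.0], alpha=0.7, a=5.0))  # 0.7 * 2 = 1.4
    print(firm_h_score(x=45.0, y=80.0, thresholds=[50.0], weights=[1.0], alpha=0.7, a=5.0))  # 0.7 * 5 = 3.5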
    Given a predictive distribution F , one single-valued forecast x in I that optimises the expected
score S H is a so-called Huber quantile Hαa (F ) of the predictive distribution F (Taggart, 2020, Theo-
rem 5.2). Hence, when S H is interpreted as a scoring function for categorical forecasts, it is consistent
with the directive ‘Forecast any category Ci that contains a Huber quantile Hαa (F ).’
    Huber quantiles are a type of generalised quantile (Bellini et al., 2014) that can be traced back
to the pioneering work of Huber (1964). Like quantiles, a Huber quantile Hαa (F ) for any given
predictive distribution F is not necessarily unique when a is finite. However, for meteorological
predictive distributions it is usually unique, and will be whenever the α-quantile Qα (F ) is unique
(Taggart, 2020, Proposition 3.4). In the case when a = ∞, Hαa (F ) is only defined if F has finite first
moment, in which case it will always be unique and is typically called the α-expectile of F (Newey
and Powell, 1987). The special case H^∞_{1/2}(F) is the well-known mean (or expected) value of F. An
important property is that Hαa (F ) → Qα (F ) as a ↓ 0, so that Huber quantiles are intermediaries

[Figure 4 panels: a = 0, a = 2 and a = ∞ (left to right); each plots F(t) against t, with H^a_α(F) marked on the horizontal axis and shaded regions of width a labelled α and 1 − α.]

Figure 4: Huber quantiles H^a_α(F) for the distribution F(t) = 1 − 0.3 exp(−t/20), t ≥ 0, where
α = 0.75. The Huber quantile H^0_α(F) is identical with the α-quantile while H^∞_α(F) is identical with
the α-expectile. In each case, the shaded regions below or above the graph of F are of width a and
the ratio of the areas of the lower to upper regions is α : (1 − α).

between quantiles and expectiles. In particular, the Huber quantile H^a_{1/2}(F) is an intermediary
between the median and mean values of F .
    There are a number of ways of calculating the Huber functional or expectile of a predictive
distribution F . One approach is to calculate the α-quantile of a specific transformation of F (Jones,
1994). A different approach uses the fact that x is a Huber quantile Hαa (F ) if and only if it is a
solution x to the integral equation
    α ∫_{[x, x+a]} (1 − F(t)) dt = (1 − α) ∫_{[x−a, x]} F(t) dt                (10)

(Taggart, 2020), and can thus be computed using numerical methods. Equation (10) has a nice
geometric interpretation in terms of the area above and below the graph of the predictive distribution
F . The Huber quantile Hαa (F ) is a point x at which the ratio of the area below F on the interval
[x − a, x] to the area above F on the interval [x, x + a] is α : (1 − α).
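For example, Equation (10) can be solved with standard quadrature and root finding. The sketch below is only an illustration: the root bracket is an assumption, and the distribution is the one used in Figure 4 (a nonnegative predictand, so F vanishes below 0). Small and large values of a approximate the quantile and expectile limits.

    import math
    from scipy.integrate import quad
    from scipy.optimize import brentq

    def F(t):
        """CDF of the Figure 4 distribution: F(t) = 1 - 0.3*exp(-t/20) for t >= 0."""
        return 0.0 if t < 0 else 1.0 - 0.3 * math.exp(-t / 20.0)

    def huber_quantile(cdf, alpha, a, bracket=(0.0, 100.0)):
        """Solve Equation (10): alpha times the area above F on [x, x+a] equals
        (1 - alpha) times the area below F on [x-a, x]."""
        def g(x):
            upper = quad(lambda t: 1.0 - cdf(t), x, x + a)[0]      # area above F on [x, x+a]
            lower = quad(cdf, max(x - a, 0.0), x)[0]               # area below F; F vanishes below 0
            return alpha * upper - (1.0 - alpha) * lower
        return brentq(g, *bracket)

    # alpha = 0.75 as in Figure 4; a = 2 corresponds to the central panel.
    for a in (0.001, 2.0, 1000.0):
        print(a, round(huber_quantile(F, alpha=0.75, a=a), 2))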
     This interpretation is illustrated in Figure 4 for the distribution F(t) = 1 − 0.3 exp(−t/20),
t ≥ 0, which corresponds to a convective situation where the chance of precipitation is only 30%, but
if it does rain substantial falls are possible. The Huber quantile Hα2 (F ) when α = 0.75 is shown in the
central panel, flanked by the limiting cases a ↓ 0 and a → ∞ which correspond to the α-quantile and
α-expectile respectively. The quantile Hα0 (F ), Huber quantile Hα2 (F ) and expectile Hα∞ (F ) are all risk
measures which could be used to prompt a warning if they exceed some specified categorical warning
threshold θ. The figure shows that the quantile ignores information in the tail of the predictive
distribution, while the expectile uses that information and hence for this particular distribution is
greater than the quantile. It is in the tail where the extremes typically lie. For this reason, along
with a number of other properties, expectile forecasts have recently attracted interest in financial
risk (Ehm et al., 2016; Bellini and Di Bernardino, 2017). As a risk measure, the Huber quantile is a
compromise between the expectile and quantile.
     While applying a discounted penalty for near misses and close false alarms is an attractive option,
there are downsides. First, the scoring method cannot be written as a scoring matrix: instead of
maintaining a contingency table, one must keep track of forecast categories and corresponding real-valued
observations. Second, Huber quantiles and expectiles will be unfamiliar to most people, whereas
quantiles (or percentiles) are more widely known. Nonetheless, a compelling reason to use a scoring
function like S H instead of S Q is that in many situations it provides a better model of the economic
costs of forecast errors than the classical cost–loss model (Ehm et al., 2016; Taggart, 2020).
