Publication details
- Journal: SIAM/ASA Journal on Uncertainty Quantification (JUQ), vol. 1, pp. 522–534, 2013
- Publisher: Society for Industrial and Applied Mathematics
International Standard Serial Number:
- Electronic: 2166-2525
It has been argued persuasively that, in order to evaluate climate models, the probability distributions of model output need to be compared to the corresponding empirical distributions of observed data. Distance measures between probability distributions, also called divergence functions, can be used for this purpose. We contend that divergence functions ought to be proper, in the sense that acting on modelers' true beliefs is an optimal strategy. The score divergences introduced in this paper derive from proper scoring rules and thus are proper, with the integrated quadratic distance and the Kullback--Leibler divergence being particularly attractive choices. Other commonly used divergences fail to be proper. In an illustration, we evaluate and rank simulations from 15 climate models for temperature extremes in comparison to reanalysis data.
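As a minimal illustration of one of the divergences mentioned above, the sketch below approximates the integrated quadratic distance between two samples, d(F, G) = ∫ (F(x) − G(x))² dx, by evaluating the empirical CDFs on a common grid. The function name, grid size, and Riemann-sum quadrature are illustrative choices, not taken from the paper:

```python
import numpy as np

def integrated_quadratic_distance(x, y, grid_size=1000):
    """Approximate the integrated quadratic distance between the
    empirical CDFs of two samples x and y on a uniform grid."""
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    lo = min(x[0], y[0])
    hi = max(x[-1], y[-1])
    grid = np.linspace(lo, hi, grid_size)
    # Empirical CDF values: fraction of sample points <= each grid point
    fx = np.searchsorted(x, grid, side="right") / x.size
    fy = np.searchsorted(y, grid, side="right") / y.size
    # Riemann-sum approximation of the integral of (F - G)^2
    dx = grid[1] - grid[0]
    return float(np.sum((fx - fy) ** 2) * dx)
```

For identical samples the distance is zero, and shifting one sample further away from the other increases the distance, consistent with its use for ranking model output against observations.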