When different forecasters are compared, it is important to obey decision-theoretic principles and to use validation tools that prevent hedging and motivate the forecasters to report their true beliefs.
This is often achieved in practice by using proper scoring rules, such as the mean square error, the negative log-likelihood, or the continuous ranked probability score.
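As an illustrative sketch (not part of the paper itself), the following assumes a Gaussian predictive distribution and evaluates it with two of the scoring rules mentioned above: the negative log-likelihood and the closed-form continuous ranked probability score. Both are proper, so a sharper forecast is rewarded only when it is also well calibrated.

```python
import math

def crps_normal(mu, sigma, y):
    """Closed-form CRPS of the Gaussian forecast N(mu, sigma^2) at observation y.

    Lower is better; because the CRPS is proper, a forecaster minimises
    its expected value by reporting their true predictive distribution.
    """
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sigma * (z * (2.0 * cdf - 1.0) + 2.0 * pdf - 1.0 / math.sqrt(math.pi))

def log_score_normal(mu, sigma, y):
    """Negative log-likelihood of y under the Gaussian forecast N(mu, sigma^2)."""
    z = (y - mu) / sigma
    return 0.5 * z * z + math.log(sigma) + 0.5 * math.log(2.0 * math.pi)

# For an observation near zero, a sharp forecast centred at the truth
# receives a lower (better) CRPS than an overly dispersed one.
sharp = crps_normal(0.0, 1.0, 0.2)
vague = crps_normal(0.0, 5.0, 0.2)
```

The Gaussian example is only a stand-in for the real-valued case; the point of the paper is that constructing analogous proper scores for point-process-valued forecasts is considerably harder.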
We discuss this setting in the context of earthquake rate predictions, which are one of many examples of point-process-valued forecasts.
Developing proper scoring rules for such forecasts is challenging, and it turns out that some commonly used scores are in fact improper.
We introduce a class of proper scoring rules for point process evaluation that target different properties of the process.