Explaining individual predictions when features are dependent: More accurate approximations to Shapley values

Publication details

  • Journal: Artificial Intelligence, vol. 298, p. 24, 2021
  • Publisher: Elsevier
  • International standard numbers:
    • Print: 0004-3702
    • Electronic: 1872-7921
  • Link:

Explaining complex or seemingly simple machine learning models is an important practical problem. We want to explain individual predictions from such models by learning simple, interpretable explanations. The Shapley value is a game-theoretic concept that can be used for this purpose. The Shapley value framework has a series of desirable theoretical properties and can, in principle, handle any predictive model. Kernel SHAP is a computationally efficient approximation to Shapley values in higher dimensions. Like several other existing methods, this approach assumes that the features are independent. Since current approaches to estimating Shapley values include unrealistic data instances when the features are correlated, the resulting explanations may be very misleading. This is the case even when a simple linear model is used for the predictions. In this paper, we extend the Kernel SHAP method to handle dependent features. We provide several examples of linear and non-linear models with various degrees of feature dependence, where our method gives more accurate approximations to the true Shapley values.
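
To make the quantities in the abstract concrete, the following is an illustrative sketch rather than the paper's own notation. It uses the standard Shapley value formulation from the SHAP literature, with feature set M, an instance x* to be explained, and a contribution function v:

    % Shapley value of feature i for the prediction f(x*), where the
    % contribution function v(S) is the conditional expectation of the model
    % output given the observed values of the features in the coalition S:
    \phi_i = \sum_{S \subseteq \mathcal{M} \setminus \{i\}}
             \frac{|S|!\,(|\mathcal{M}| - |S| - 1)!}{|\mathcal{M}|!}
             \bigl( v(S \cup \{i\}) - v(S) \bigr),
    \qquad
    v(S) = \mathrm{E}\bigl[ f(\mathbf{x}) \mid \mathbf{x}_S = \mathbf{x}^*_S \bigr].

Under the independence assumption used by Kernel SHAP, v(S) is in practice estimated by averaging the model output with the features outside S drawn from their marginal distribution; a dependence-aware estimator instead draws them from their conditional distribution given x_S = x*_S. The Python sketch below illustrates that difference under stated assumptions: the fitted model f, the training matrix X_train, and the Gaussian parameters mu and Sigma are hypothetical placeholders, and the Gaussian conditional is used only as one example of how such a conditional expectation can be approximated.

    import numpy as np

    def v_independent(f, x, S, X_train, n_samples=1000, seed=None):
        """Monte Carlo estimate of v(S) under feature independence:
        features outside S are drawn from their marginal distribution
        (training rows), features in S are fixed at the explained instance."""
        rng = np.random.default_rng(seed)
        rows = rng.integers(0, X_train.shape[0], size=n_samples)
        X_sim = X_train[rows].copy()    # out-of-coalition features from the marginal
        X_sim[:, list(S)] = x[list(S)]  # in-coalition features fixed at x*
        return f.predict(X_sim).mean()

    def v_gaussian(f, x, S, mu, Sigma, n_samples=1000, seed=None):
        """Monte Carlo estimate of v(S) with dependence taken into account,
        assuming the features are multivariate Gaussian with mean mu and
        covariance Sigma: features outside S are drawn from their conditional
        distribution given x_S = x*_S."""
        rng = np.random.default_rng(seed)
        S = list(S)
        Sbar = [j for j in range(len(mu)) if j not in S]
        # Gaussian conditioning: distribution of x_Sbar given x_S = x*_S
        Sigma_SS = Sigma[np.ix_(S, S)]
        Sigma_bS = Sigma[np.ix_(Sbar, S)]
        Sigma_bb = Sigma[np.ix_(Sbar, Sbar)]
        K = Sigma_bS @ np.linalg.inv(Sigma_SS)
        mu_cond = mu[Sbar] + K @ (x[S] - mu[S])
        Sigma_cond = Sigma_bb - K @ Sigma_bS.T
        draws = rng.multivariate_normal(mu_cond, Sigma_cond, size=n_samples)
        X_sim = np.tile(x, (n_samples, 1))
        X_sim[:, Sbar] = draws
        return f.predict(X_sim).mean()

When the features are strongly correlated, the two estimators can differ substantially, which is exactly the situation in which independence-based approximations evaluate the model on unrealistic feature combinations.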