It can be difficult to ascertain which variables machine learning and artificial intelligence (AI) models emphasise and how automated decisions are made. We are therefore developing methods for Explainable Artificial Intelligence (XAI) that give insight into the black box, contributing to quality-assured calculations and accurate explanations.
Selecting appropriate methods for XAI
One of the central issues is the ability to provide understandable explanations of how systems rooted in machine learning and AI calculate predictions or make decisions. A myriad of explanation methods have been developed in recent years, but not all of them are useful or correct.
NR has developed eXplego, a decision tree toolkit, to help navigate this landscape. eXplego offers developers interactive guidance in selecting an appropriate XAI method.
Counterfactual explanations and Shapley values
We have worked specifically with two classes of explanation methods:
- Counterfactual explanations
- Shapley values
Counterfactual explanations identify what changes to the input are required to achieve an alternative outcome, for instance how much your income would need to be adjusted up or down for a decision to change.
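As a minimal sketch of the idea, assuming a toy, invented approval model (not one of NR's actual models), a simple counterfactual can be found by searching for the smallest income increase that flips the model's decision:

```python
def model(income, age):
    """Toy approval model (hypothetical): approve when a weighted
    score of income and age exceeds a fixed threshold."""
    return 0.004 * income + 0.02 * age > 3.0

def counterfactual_income(income, age, step=100):
    """Search for the smallest income increase (in units of `step`)
    that flips the model's decision: a one-feature counterfactual."""
    cf = income
    while not model(cf, age):
        cf += step
    return cf

# An applicant who is currently rejected:
income, age = 500, 40
decision = model(income, age)                      # False: rejected
new_income = counterfactual_income(income, age)    # income needed for approval
```

Real counterfactual methods search over several features at once and penalise large or unrealistic changes; the brute-force loop here only illustrates the underlying question "what would have to change?".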
Shapley values originate from game theory and aim to distribute a model's prediction fairly among its input features, so that each feature receives an importance score for that prediction.
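The idea can be illustrated with a brute-force computation of exact Shapley values for a tiny model. The weighting formula and the trick of replacing absent features with baseline values are standard; the model and numbers below are invented for illustration, and the baseline replacement embodies the independence assumption discussed further down:

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for a model with few features.
    Features outside a coalition S are replaced by baseline values."""
    n = len(x)

    def value(S):
        z = [x[j] if j in S else baseline[j] for j in range(n)]
        return predict(z)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight for a coalition of size k
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

# For an additive model, each feature's Shapley value equals its own term.
predict = lambda z: 2.0 * z[0] + 1.0 * z[1]
phi = shapley_values(predict, x=[3.0, 5.0], baseline=[0.0, 0.0])
```

The values always sum to the difference between the prediction at `x` and at the baseline; that efficiency property is what makes the attribution "fair". The exact sum over all coalitions grows exponentially in the number of features, which is why practical methods rely on sampling approximations.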
Regardless of the method, our main concern is that the explanations are accurate.
Conditional variables provide more accurate explanations
A significant obstacle is that the variables in a machine learning model are usually not independent; the size of your income, for example, often correlates with your age. Yet many well-known explanation methods assume independence for convenience. By modelling this dependence realistically, we can provide more accurate explanations of how machine learning models behave. In this context, our statistical competence is a significant advantage.
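A small simulation, with invented numbers, shows why the independence assumption matters. When a model's features are correlated, replacing a feature by draws from its marginal distribution (as independence-based methods do) gives a different answer than drawing from its distribution conditional on the other features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two strongly correlated standardised features, say age and income,
# with correlation 0.9 (numbers invented for illustration).
n = 100_000
rho = 0.9
age = rng.normal(size=n)
income = rho * age + np.sqrt(1 - rho**2) * rng.normal(size=n)

f = lambda a, inc: a * inc  # toy model with an age-income interaction

# Estimate E[f(a0, income)] for a fixed age a0 = 2:
a0 = 2.0

# Marginal (independence assumption): income sampled ignoring age.
marginal = f(a0, income).mean()

# Conditional: income | age = a0 is Gaussian with mean rho * a0.
cond_income = rho * a0 + np.sqrt(1 - rho**2) * rng.normal(size=n)
conditional = f(a0, cond_income).mean()

# marginal is near 0, conditional is near 3.6: ignoring the
# dependence between the features changes the explanation.
```

The marginal estimate evaluates the model on unrealistic combinations (high age paired with typical incomes of all ages), which is precisely the error that conditional approaches to Shapley values and counterfactuals avoid.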
Partners
- The Norwegian Labour and Welfare Administration (NAV)
- The University of Oslo

Software
- Software for Shapley values (R + Python)
- Software for counterfactual explanations (R + Python)
- eXplego: A toolkit for selecting an appropriate explanation method

Publications
- Using Shapley values and variational autoencoders to explain predictive models with dependent mixed features
- Explaining individual predictions when features are dependent: More accurate approximations to Shapley values