Explainable Artificial Intelligence
- Fields involved: Machine learning
- Industries involved: Technology and industry
Artificial intelligence and machine learning are increasingly shaping decisions that affect our lives, from healthcare to public services. Yet it is often unclear which variables these systems prioritise or how their automated decisions are made. We are therefore developing methods for Explainable Artificial Intelligence (XAI) that give insight into the ‘black box’, contributing to quality-assured calculations and accurate explanations.
Selecting appropriate methods for XAI
One of the central challenges is providing understandable explanations of how systems based on machine learning and AI calculate predictions or make decisions. Numerous methods for this have been developed in recent years, but not all of them are useful or correct.
To help navigate this landscape, NR has developed eXplego, a decision-tree toolkit that gives developers interactive guidance in selecting an appropriate XAI method.

Counterfactual explanations and Shapley values
We have specifically worked with two classes of explanation:
- Counterfactual explanations
- Shapley values
Counterfactual explanations describe what changes to the input would be needed to reach a different outcome, for instance whether the decision would change if your income were higher or lower.
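The sketch below illustrates the idea with a hypothetical logistic-regression credit model; the model, features and numbers are invented for illustration. It searches for the smallest income increase that flips a rejected application to an approval.

```python
# A minimal sketch of a counterfactual search, assuming a hypothetical
# logistic-regression credit model with two features: income (kNOK) and age.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training data: approval is driven mainly by income.
X = np.column_stack([rng.normal(400, 150, 1000),   # income in kNOK
                     rng.normal(40, 10, 1000)])    # age in years
y = (X[:, 0] + 20 * rng.normal(size=1000) > 450).astype(int)
model = LogisticRegression(max_iter=1000).fit(X, y)

# An applicant who is currently rejected.
applicant = np.array([[380.0, 30.0]])
print("original decision:", model.predict(applicant)[0])

# Search for the smallest income increase that flips the decision, keeping
# age fixed: the counterfactual "what if your income were higher?"
for extra in range(0, 201, 5):
    candidate = applicant + np.array([[extra, 0.0]])
    if model.predict(candidate)[0] == 1:
        print(f"counterfactual: an income roughly {extra} kNOK higher flips the decision")
        break
```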
Shapley values derive from game theory and aim to distribute the importance of each feature used by the model in a fair way.
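As a rough illustration of the game-theoretic definition, the following sketch computes exact Shapley values for a tiny hypothetical model by averaging each feature's marginal contribution over all coalitions, with missing features replaced by a fixed baseline. Practical software instead estimates expectations over data, since the exact sum over coalitions grows exponentially with the number of features.

```python
# A minimal sketch of exact Shapley values for a small, hypothetical model,
# assuming missing features are replaced by a fixed baseline value.
from itertools import combinations
from math import factorial

def model(x):
    # Hypothetical model with three features.
    return 2.0 * x[0] + 1.0 * x[1] * x[2]

def value(coalition, x, baseline):
    # Model output when only features in the coalition take their true values.
    z = [x[i] if i in coalition else baseline[i] for i in range(len(x))]
    return model(z)

def shapley_values(x, baseline):
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += w * (value(set(S) | {i}, x, baseline)
                               - value(set(S), x, baseline))
    return phi

x = [3.0, 2.0, 1.0]
baseline = [0.0, 0.0, 0.0]
phi = shapley_values(x, baseline)
print("Shapley values:", phi)
# Efficiency property: the contributions sum to f(x) - f(baseline).
print(sum(phi), model(x) - model(baseline))
```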
Regardless of the method, our main concern is that the explanations are accurate.
Modelling dependent variables provides more accurate explanations
A significant obstacle is that the variables in a machine learning model are usually not independent, yet many widely used explanation methods assume that they are. For example, your income will often correlate with your age. By modelling this dependence realistically, we can provide more accurate explanations of the behaviour of machine learning models. Here, our statistical expertise is a significant advantage.
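The toy example below (our own illustration, not a description of any particular software) shows why this matters: for correlated features such as income and age, the expected model output computed under an independence assumption differs from the one obtained by conditioning on the observed income, so explanations built on the former can be misleading.

```python
# A minimal sketch of why dependence between features matters. All data and
# the model are hypothetical and chosen only to make the effect visible.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical, strongly correlated features: income (kNOK) and age (years).
n = 100_000
age = rng.normal(45, 12, n)
income = 10 * age + rng.normal(0, 50, n)

def model(inc, a):
    # Hypothetical model that uses both features.
    return 0.002 * inc + 0.01 * a

# Explain a prediction for a person with income = 700 kNOK; age is "held out".
x_income = 700.0

# Independence assumption: draw age from its marginal distribution.
marginal = model(x_income, age).mean()

# Dependence-aware: draw age from its distribution *given* the high income,
# approximated here by the ages of observations with a similar income.
mask = np.abs(income - x_income) < 25
conditional = model(x_income, age[mask]).mean()

print(f"assuming independence: {marginal:.3f}")
print(f"conditioning on income: {conditional:.3f}")
```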
Current projects
To learn more about our work in explainable artificial intelligence, please contact:
Partners
- The Norwegian Labour and Welfare Administration (NAV)
- Gjensidige
- FundingPartner
- The University of Oslo
Digital resources
- BigInsight
- Software for Shapley values (R + Python)
- Software for counterfactual explanations (R + Python)
- eXplego: A toolkit for selecting an appropriate explanation method