{"id":21716,"date":"2023-07-12T11:09:50","date_gmt":"2023-07-12T09:09:50","guid":{"rendered":"https:\/\/nr.no\/en\/?post_type=bc_area&#038;p=21716"},"modified":"2025-09-24T08:39:27","modified_gmt":"2025-09-24T06:39:27","slug":"explainable-artificial-intelligence-xai","status":"publish","type":"bc_area","link":"https:\/\/nr.no\/en\/areas\/statistical-modeling-machine-learning-and-artificial-intelligence-ai\/explainable-artificial-intelligence-xai\/","title":{"rendered":"Explainable Artificial Intelligence"},"content":{"rendered":"\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-28f84493 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:66.66%\">\n<p class=\"has-sizing-large\"><strong>Artificial intelligence and machine learning are increasingly shaping decisions that impact our lives, from healthcare to public services. Yet, it can often be unclear which variables these systems prioritise or how their automated decisions are made. Because of this, we are currently developing methods for Explainable Artificial Intelligence (XAI) which will allow us to gain insight into the &#8216;black box.&#8217; This will contribute to quality-assured calculations and accurate explanations. <\/strong><\/p>\n\n\n\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Selecting appropriate methods for XAI<\/h2>\n\n\n\n<p>One of the central issues is the ability to provide understandable explanations regarding how systems rooted in machine learning and AI calculate predictions or make decisions. A myriad of methods have been developed in recent years in order to determine these processes, but not all are useful or correct. <\/p>\n\n\n\n<p>NR has developed eXplego, a decision tree toolkit, to ease navigation in this environment. eXplego provides interactive guidance to developers in the process of selecting an XAI-method.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"602\" height=\"231\" src=\"https:\/\/nr.no\/en\/content\/uploads\/sites\/2\/2023\/07\/Shapleyverdier-2.png\" alt=\"The figure shows Shapley values for three different predictions. The graph grey and values are highlighted in red, blue and green columns.\" class=\"wp-image-21770\" srcset=\"https:\/\/nr.no\/content\/uploads\/sites\/2\/2023\/07\/Shapleyverdier-2.png 602w, https:\/\/nr.no\/content\/uploads\/sites\/2\/2023\/07\/Shapleyverdier-2-300x115.png 300w\" sizes=\"auto, (max-width: 602px) 100vw, 602px\" \/><figcaption class=\"wp-element-caption\">Figure caption: Shapley values for three different predictions, calculated with three separate methods. The numbers shown on the vertical axis denote how the variables contribute, positively or negatively, to predictions. The presumption of independence (the red columns) often result in different and often misleading explanations. The two methods that consider dependence (Ctree and VAEAC) display more consensus and are presumably more accurate. Figure: Olsen, Lars Henry Berge, Ingrid Kristine Glad, Martin Jullum, and Kjersti Aas. &#8220;Using Shapley Values and Variational Autoencoders to Explain Predictive Models with Dependent Mixed Features.&#8221; JLMR 23, no. 213 (2022): 1-51.<\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Counterfactual explanations and Shapley values <\/h2>\n\n\n\n<p>We have specifically worked with two classes of explanation: <\/p>\n\n\n\n<ol type=\"1\" class=\"wp-block-list\">\n<li>Counterfactual explanations<\/li>\n\n\n\n<li>Shapley values <\/li>\n<\/ol>\n\n\n\n<p>Counterfactual explanations assess what is required to achieve an alternative outcome with various input. For instance, if your income is adjusted higher or lower.<\/p>\n\n\n\n<p>Shapley values derive from game theory and strive to distribute the importance of each component implemented into the model in an equitable way. <\/p>\n\n\n\n<p>Regardless of methodology, our biggest concern lies in the accuracy of explanations. <\/p>\n\n\n\n\n\n<h2 class=\"wp-block-heading\">Conditional variables provide more accurate explanations<\/h2>\n\n\n\n<p>A significant obstacle is that variables in a machine learning model are usually not independent, yet a lot of reputable explanation methods conveniently presume independence. However, the size of your income will, for example, often correlate with your age. By considering this conditionality in a realistic way, we can provide accurate explanations regarding the behaviour of machine learning models. In this context, our statistical competence is of significant advantage. <\/p>\n\n\n\n\n\n<h3 class=\"wp-block-heading has-text-align-center has-sizing-large\">Current projects <\/h3>\n\n\n\t\t<div id=\"post-type-multi-block_bc4f6bc19c95455ebe1af34b8f533ccf\" class=\"wp-block-post-type-multi aligncenter type-manual style-card-bc_project-sm t2-grid\">\n\t\t\t\t\t\t\t<div class=\"t2-grid-item-col-12\">\n\t\t\t\t\t\t<a href=\"https:\/\/nr.no\/en\/projects\/a-credit-model-for-small-and-medium-sized-enterprises\/\" class=\"card-post card-project\">\n\t\t\t\t\t<figure>\n\t\t\t\t<img decoding=\"async\" src=\"https:\/\/nr.no\/content\/uploads\/sites\/2\/2023\/06\/austin-distel-wD1LRb9OeEo-unsplash-scaled.jpg\" alt=\"A group of young professionals sit around a coffee table while a man in a white shirt and black jeans holds a presentation, pointing to a whiteboard. It is a modern corporate space and one person has a laptop open on his lap.\">\n\t\t\t<\/figure>\n\t\t\t\t<div class=\"card-post__content\">\n\t\t\t\t\t\t\t<ul class=\"card-post__categories\">\n\t\t\t\t\t\t\t\t\t\t\t<li>Machine learning<\/li>\n\t\t\t\t\t\t\t\t\t<\/ul>\n\t\t\t\t\t\t<h3 class=\"card-post__title\">A credit model for small and medium-sized businesses<\/h3>\n\t\t<\/div>\n\t<\/a>\n\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:33.33%\">\n<h3 class=\"wp-block-heading\">To learn more about our work in explainable artificial intelligence, please contact:<\/h3>\n\n\n\t\t<div id=\"post-type-multi-block_1b7d26d07d92099a05aed203b28fb556\" class=\"wp-block-post-type-multi type-manual style-card-bc_employee t2-grid\">\n\t\t\t\t\t\t\t<div class=\"t2-grid-item-col-12\">\n\t\t\t\t\t\t<a href=\"https:\/\/nr.no\/en\/employees\/anders-loland\/\" class='card-employee'>\n\t\t\t\t\t<figure>\n\t\t\t\t<img decoding=\"async\" src=\"https:\/\/nr.no\/content\/uploads\/sites\/2\/2024\/05\/anders-loland-12.jpg\" alt=\"\">\n\t\t\t<\/figure>\n\t\t\t\t<div class=\"card-employee__content\">\n\t\t\t<p class=\"card-employee__name\">Anders L\u00f8land<\/p>\n\t\t\t\t\t\t\t<p class=\"card-employee__position\">Research Director<\/p>\n\t\t\t\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" viewBox=\"0 0 24 24\" height=\"24\" width=\"24\" class=\"t2-icon t2-icon-arrowforward\" aria-hidden=\"true\" focusable=\"false\"><path d=\"M15.9 4.259a1.438 1.438 0 0 1-.147.037c-.139.031-.339.201-.421.359-.084.161-.084.529-.001.685.035.066 1.361 1.416 2.947 3l2.882 2.88-10.19.02c-8.543.017-10.206.029-10.29.075-.282.155-.413.372-.413.685 0 .313.131.53.413.685.084.046 1.747.058 10.29.075l10.19.02-2.882 2.88c-1.586 1.584-2.912 2.934-2.947 3-.077.145-.085.521-.013.66a.849.849 0 0 0 .342.35c.156.082.526.081.68-.001.066-.035 1.735-1.681 3.709-3.656 2.526-2.53 3.606-3.637 3.65-3.742A.892.892 0 0 0 23.76 12a.892.892 0 0 0-.061-.271c-.044-.105-1.124-1.212-3.65-3.742-1.974-1.975-3.634-3.616-3.689-3.645-.105-.055-.392-.107-.46-.083\"\/><\/svg>\n\t\t<\/div>\n\t<\/a>\n\t\t\t\t<\/div>\n\t\t\t\t\t\t\t<div class=\"t2-grid-item-col-12\">\n\t\t\t\t\t\t<a href=\"https:\/\/nr.no\/en\/employees\/kjersti-aas\/\" class='card-employee'>\n\t\t\t\t\t<figure>\n\t\t\t\t<img decoding=\"async\" src=\"https:\/\/nr.no\/content\/uploads\/sites\/2\/2026\/01\/kjersti-aas-20.jpg\" alt=\"\">\n\t\t\t<\/figure>\n\t\t\t\t<div class=\"card-employee__content\">\n\t\t\t<p class=\"card-employee__name\">Kjersti Aas<\/p>\n\t\t\t\t\t\t\t<p class=\"card-employee__position\">Research Director SAMBA<\/p>\n\t\t\t\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" viewBox=\"0 0 24 24\" height=\"24\" width=\"24\" class=\"t2-icon t2-icon-arrowforward\" aria-hidden=\"true\" focusable=\"false\"><path d=\"M15.9 4.259a1.438 1.438 0 0 1-.147.037c-.139.031-.339.201-.421.359-.084.161-.084.529-.001.685.035.066 1.361 1.416 2.947 3l2.882 2.88-10.19.02c-8.543.017-10.206.029-10.29.075-.282.155-.413.372-.413.685 0 .313.131.53.413.685.084.046 1.747.058 10.29.075l10.19.02-2.882 2.88c-1.586 1.584-2.912 2.934-2.947 3-.077.145-.085.521-.013.66a.849.849 0 0 0 .342.35c.156.082.526.081.68-.001.066-.035 1.735-1.681 3.709-3.656 2.526-2.53 3.606-3.637 3.65-3.742A.892.892 0 0 0 23.76 12a.892.892 0 0 0-.061-.271c-.044-.105-1.124-1.212-3.65-3.742-1.974-1.975-3.634-3.616-3.689-3.645-.105-.055-.392-.107-.46-.083\"\/><\/svg>\n\t\t<\/div>\n\t<\/a>\n\t\t\t\t<\/div>\n\t\t\t\t\t\t\t<div class=\"t2-grid-item-col-12\">\n\t\t\t\t\t\t<a href=\"https:\/\/nr.no\/en\/employees\/martin-jullum\/\" class='card-employee'>\n\t\t\t\t\t<figure>\n\t\t\t\t<img decoding=\"async\" src=\"https:\/\/nr.no\/content\/uploads\/sites\/2\/2024\/05\/martin-jullum-17.jpg\" alt=\"\">\n\t\t\t<\/figure>\n\t\t\t\t<div class=\"card-employee__content\">\n\t\t\t<p class=\"card-employee__name\">Martin Jullum<\/p>\n\t\t\t\t\t\t\t<p class=\"card-employee__position\">Senior Research Scientist<\/p>\n\t\t\t\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" viewBox=\"0 0 24 24\" height=\"24\" width=\"24\" class=\"t2-icon t2-icon-arrowforward\" aria-hidden=\"true\" focusable=\"false\"><path d=\"M15.9 4.259a1.438 1.438 0 0 1-.147.037c-.139.031-.339.201-.421.359-.084.161-.084.529-.001.685.035.066 1.361 1.416 2.947 3l2.882 2.88-10.19.02c-8.543.017-10.206.029-10.29.075-.282.155-.413.372-.413.685 0 .313.131.53.413.685.084.046 1.747.058 10.29.075l10.19.02-2.882 2.88c-1.586 1.584-2.912 2.934-2.947 3-.077.145-.085.521-.013.66a.849.849 0 0 0 .342.35c.156.082.526.081.68-.001.066-.035 1.735-1.681 3.709-3.656 2.526-2.53 3.606-3.637 3.65-3.742A.892.892 0 0 0 23.76 12a.892.892 0 0 0-.061-.271c-.044-.105-1.124-1.212-3.65-3.742-1.974-1.975-3.634-3.616-3.689-3.645-.105-.055-.392-.107-.46-.083\"\/><\/svg>\n\t\t<\/div>\n\t<\/a>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\n\n\n<div class=\"wp-block-group has-nr-dark-yellow-background-color has-background\">\n<p><strong>Partners<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>The Norwegian Labour and Welfare Administration (NAV)<\/li>\n\n\n\n<li>Gjensidige<\/li>\n\n\n\n<li>FundingPartner<\/li>\n\n\n\n<li>The University of Oslo<\/li>\n<\/ul>\n<\/div>\n\n\n\n\n\n<div class=\"wp-block-group has-nr-dark-grey-background-color has-background\">\n<p><strong>Digital resources<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><a rel=\"noreferrer noopener\" href=\"https:\/\/nr.no\/fagfelt\/statistisk-modellering-maskinlaering-og-kunstig-intelligens-ai\/big-insight\/\" data-type=\"URL\" data-id=\"https:\/\/nr.no\/fagfelt\/statistisk-modellering-maskinlaering-og-kunstig-intelligens-ai\/big-insight\/\" target=\"_blank\">BigInsight <\/a><\/li>\n\n\n\n<li><a rel=\"noreferrer noopener\" href=\"https:\/\/github.com\/NorskRegnesentral\/shapr\" data-type=\"URL\" data-id=\"https:\/\/github.com\/NorskRegnesentral\/shapr\" target=\"_blank\">Software for Shapley values <\/a> (R + Python)<\/li>\n\n\n\n<li><a rel=\"noreferrer noopener\" href=\"https:\/\/github.com\/NorskRegnesentral\/mcceR\" data-type=\"URL\" data-id=\"https:\/\/github.com\/NorskRegnesentral\/mcceR\" target=\"_blank\">Software for counterfactual explanations<\/a> (R + Python)<\/li>\n\n\n\n<li><a rel=\"noreferrer noopener\" href=\"https:\/\/explego.nr.no\/\" data-type=\"URL\" data-id=\"https:\/\/explego.nr.no\/\" target=\"_blank\">eXplego: A toolkit for selecting an appropriate explanation method<\/a><\/li>\n<\/ul>\n\n\n\n\n\n<p>Research articles <\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><a href=\"https:\/\/doi.org\/10.48550\/arXiv.2305.09536\" target=\"_blank\" rel=\"noreferrer noopener\">A comparative study of methods for estimating conditional Shapley values and when to use them.<\/a><\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li><a href=\"https:\/\/www.jmlr.org\/papers\/v23\/21-1413.html\" target=\"_blank\" rel=\"noreferrer noopener\">Using Shapley values and variational autoencoders to explain predictive models with dependent mixed features.<\/a><\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li><a href=\"https:\/\/doi.org\/10.1016\/j.artint.2021.103502\" target=\"_blank\" rel=\"noreferrer noopener\">Explaining individual predictions when features are dependent: More accurate approximations to Shapley values.<\/a><\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li><a href=\"https:\/\/doi.org\/10.1515\/demo-2021-0103\" target=\"_blank\" rel=\"noreferrer noopener\">Explaining predictive models using Shapley values and non-parametric vine copulas.<\/a><\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li><a rel=\"noreferrer noopener\" href=\"https:\/\/hdl.handle.net\/11250\/2985131\" target=\"_blank\">Comparison of contextual importance and utility with LIME and Shapley values.<\/a><\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li><a href=\"https:\/\/doi.org\/10.48550\/arXiv.2106.12228\" target=\"_blank\" rel=\"noreferrer noopener\">groupShapley: Efficient prediction explanation with Shapley values for feature groups.<\/a><\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li><a href=\"https:\/\/hdl.handle.net\/11250\/2985827\" target=\"_blank\" rel=\"noreferrer noopener\">Efficient and simple prediction explanations with groupShapley: A practical perspective.<\/a><\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li><a href=\"https:\/\/doi.org\/10.48550\/arXiv.2111.09790\" target=\"_blank\" rel=\"noreferrer noopener\">MCCE: Monte Carlo sampling of realistic counterfactual explanations.<\/a><\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li><a href=\"https:\/\/doi.org\/10.21105\/joss.02027\" target=\"_blank\" rel=\"noreferrer noopener\">shapr: An R-package for explaining machine learning models with dependence-aware Shapley values.<\/a><\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li><a href=\"https:\/\/hdl.handle.net\/11250\/2731037\">Explaining predictive models with mixed features using Shapley values and conditional inference trees.<\/a><\/li>\n<\/ul>\n<\/div>\n<\/div>\n<\/div>\n\n\n\n<p><\/p>\n","protected":false},"featured_media":21717,"parent":6893,"menu_order":16,"template":"","meta":{"_acf_changed":false,"_trash_the_other_posts":false,"editor_notices":[],"footnotes":""},"class_list":["post-21716","bc_area","type-bc_area","status-publish","has-post-thumbnail"],"acf":[],"_links":{"self":[{"href":"https:\/\/nr.no\/en\/wp-json\/wp\/v2\/bc_area\/21716","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/nr.no\/en\/wp-json\/wp\/v2\/bc_area"}],"about":[{"href":"https:\/\/nr.no\/en\/wp-json\/wp\/v2\/types\/bc_area"}],"version-history":[{"count":5,"href":"https:\/\/nr.no\/en\/wp-json\/wp\/v2\/bc_area\/21716\/revisions"}],"predecessor-version":[{"id":39119,"href":"https:\/\/nr.no\/en\/wp-json\/wp\/v2\/bc_area\/21716\/revisions\/39119"}],"up":[{"embeddable":true,"href":"https:\/\/nr.no\/en\/wp-json\/wp\/v2\/bc_area\/6893"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/nr.no\/en\/wp-json\/wp\/v2\/media\/21717"}],"wp:attachment":[{"href":"https:\/\/nr.no\/en\/wp-json\/wp\/v2\/media?parent=21716"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}