Publication details
- Publisher: Norsk Regnesentral
- Series: NR-notat (BAMJO/09/24)
- Year: 2024
- Issue: BAMJO/09/24
- Number of pages: 24
Abstract

In this study, we apply the recent explainability methods Concept Relevance Propagation (CRP) and Relevance Maximization (RelMax) to understand the decision-making process of a deep learning classifier trained for mammogram-based breast cancer detection. The primary goal was to leverage CRP and RelMax to identify learned concepts that significantly influence the model's classifications, and to present them in a form intelligible to humans, with the intention of using this insight for quality assurance of the model. The study found that, although most high-relevance concepts were shared across cancer subclasses, the model had formed a small number of concepts exclusive to our selected subclasses. These patterns captured our intuitive understanding of what the given cancer subclass should look like, and they aligned well with the metadata provided by radiologists and with the visual cues radiologists are typically familiar with. We believe this could further enhance radiologists' trust in the cancer classifier, since what the model has learned becomes more comprehensible and visually accessible. Our results refine a previously used explainability method, which was limited to localizing where the model directed its attention but could not convey what the model had understood or offer additional clarification on the identified cancer subclass.
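To make the workflow concrete, the sketch below illustrates how CRP and RelMax are typically applied with the open-source zennit-crp package. This is a minimal illustration under assumed placeholders, not the study's actual pipeline: the VGG16 backbone, the layer name `features.28`, the channel index 42, and the class index 1 are all hypothetical stand-ins for the mammography classifier's real architecture and concepts.

```python
# Minimal sketch of a CRP + RelMax workflow with the zennit-crp package
# (pip install zennit-crp). Model, layer name, channel and class indices
# are hypothetical placeholders, not the study's actual setup.
import torch
from torchvision.models import vgg16
from zennit.composites import EpsilonPlusFlat
from crp.attribution import CondAttribution
from crp.concepts import ChannelConcept
from crp.helper import get_layer_names

model = vgg16(weights=None).eval()   # stand-in for the mammography classifier
composite = EpsilonPlusFlat()        # LRP rule composite from zennit
concept = ChannelConcept()           # one concept per convolutional channel
layer_names = get_layer_names(model, [torch.nn.Conv2d])

attribution = CondAttribution(model)

# Conditional relevance: attribute the prediction for class 1 ("cancer" in
# this sketch) while restricting layer features.28 to concept (channel) 42.
sample = torch.randn(1, 3, 224, 224, requires_grad=True)  # placeholder input
conditions = [{"y": [1], "features.28": [42]}]
attr = attribution(sample, conditions, composite, record_layer=layer_names)
heatmap = attr.heatmap               # pixel-level relevance for that concept

# Rank concepts by per-channel relevance to find the most influential ones.
channel_rel = concept.attribute(attr.relevances["features.28"], abs_norm=True)
top_concepts = torch.topk(channel_rel[0], 5).indices

# RelMax (sketch): FeatureVisualization scans a dataset and, for each concept,
# collects the reference samples on which that concept is most *relevant*,
# giving human-interpretable examples of what the concept encodes.
# from crp.visualization import FeatureVisualization
# fv = FeatureVisualization(attribution, dataset, {n: concept for n in layer_names})
# fv.run(composite, 0, len(dataset))
# refs = fv.get_max_reference(top_concepts.tolist(), "features.28",
#                             mode="relevance", r_range=(0, 8))
```

The distinguishing design choice of RelMax over classical activation maximization is the `mode="relevance"` selection: reference samples are chosen by how much a concept actually contributes to the prediction rather than by how strongly it activates, which is what makes the retrieved examples useful for auditing the classifier's reasoning.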