
Sparseness-Optimized Feature Importance

Grau, Isel
Nápoles, Gonzalo
Abstract
In this paper, we propose a model-agnostic post-hoc explanation procedure for computing feature attribution. The proposed method, termed Sparseness-Optimized Feature Importance (SOFI), entails solving an optimization problem related to the sparseness of feature importance explanations. The intuition behind this property is that the model’s performance is severely affected after marginalizing the most important features while remaining largely unaffected after marginalizing the least important ones. Existing post-hoc feature attribution methods do not optimize this property directly but rather implement proxies to obtain this behavior. Numerical simulations using both structured (tabular) and unstructured (image) classification datasets show the superiority of our proposal compared with state-of-the-art feature attribution explanation methods. The implementation of the method is available at https://github.com/igraugar/sofi.
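The marginalization intuition described in the abstract can be illustrated with a minimal sketch: replace one feature at a time with its dataset mean (a simple form of marginalization) and measure how much the model's accuracy drops. Note this is only an illustrative proxy using a hypothetical toy model and synthetic data, not the SOFI optimization procedure itself.

```python
import random

random.seed(0)

# Hypothetical toy classifier: the label depends only on feature 0;
# feature 1 is pure noise, so marginalizing it should cost nothing.
def model(x):
    return 1 if x[0] > 0.5 else 0

# Synthetic dataset with two features in [0, 1).
X = [[random.random(), random.random()] for _ in range(1000)]
y = [model(x) for x in X]

def accuracy(X, y):
    return sum(model(x) == yi for x, yi in zip(X, y)) / len(y)

def marginalize(X, j):
    # Replace feature j with its mean, destroying its information content.
    mean_j = sum(x[j] for x in X) / len(X)
    return [x[:j] + [mean_j] + x[j + 1:] for x in X]

base = accuracy(X, y)
# Importance proxy: accuracy drop after marginalizing each feature.
drops = [base - accuracy(marginalize(X, j), y) for j in range(2)]
```

Marginalizing the informative feature 0 causes a large accuracy drop, while marginalizing the noise feature 1 leaves accuracy unchanged, which is the behavior a sparse feature-importance vector should reflect.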
Description
Publisher Copyright: © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.
Date
2024
Publisher
Springer Cham
Keywords
feature importance, model-agnostic explainability, sparse explanations
Citation
Grau, I & Nápoles, G 2024, Sparseness-Optimized Feature Importance. in L Longo, S Lapuschkin & C Seifert (eds), Explainable Artificial Intelligence. xAI 2024. Communications in Computer and Information Science, vol. 2154 CCIS, Springer Cham, pp. 393-415. https://doi.org/10.1007/978-3-031-63797-1_20
License
info:eu-repo/semantics/openAccess