


xAI 2024: Valletta, Malta
- Luca Longo, Sebastian Lapuschkin, Christin Seifert: Explainable Artificial Intelligence - Second World Conference, xAI 2024, Valletta, Malta, July 17-19, 2024, Proceedings, Part III. Communications in Computer and Information Science 2155, Springer 2024, ISBN 978-3-031-63799-5
Counterfactual Explanations and Causality for eXplainable AI
- Mario Refoyo, David Luengo: Sub-SpaCE: Subsequence-Based Sparse Counterfactual Explanations for Time Series Classification Problems. 3-17
- Carlo Abrate, Federico Siciliano, Francesco Bonchi, Fabrizio Silvestri: Human-in-the-Loop Personalized Counterfactual Recourse. 18-38
- Dmytro Shvetsov, Joonas Ariva, Marharyta Domnich, Raul Vicente, Dmytro Fishman: COIN: Counterfactual Inpainting for Weakly Supervised Semantic Segmentation for Medical Images. 39-59
- Marharyta Domnich, Raul Vicente: Enhancing Counterfactual Explanation Search with Diffusion Distance and Directional Coherence. 60-84
- Susanne Dandl, Kristin Blesch, Timo Freiesleben, Gunnar König, Jan Kapar, Bernd Bischl, Marvin N. Wright: CountARFactuals - Generating Plausible Model-Agnostic Counterfactual Explanations with Adversarial Random Forests. 85-107
- Martina Cinquini, Riccardo Guidotti: Causality-Aware Local Interpretable Model-Agnostic Explanations. 108-124
- Matteo Rizzo, Cristina Conati, Daesik Jang, Hui Hu: Evaluating the Faithfulness of Causality in Saliency-Based Explanations of Deep Learning Models for Temporal Colour Constancy. 125-142
- Nils Ole Breuer, Andreas Sauter, Majid Mohammadi, Erman Acar: CAGE: Causality-Aware Shapley Value for Global Explanations. 143-162
Fairness, Trust, Privacy, Security, Accountability and Actionability in eXplainable AI
- Raphael C. Engelhardt, Moritz Lange, Laurenz Wiskott, Wolfgang Konen: Exploring the Reliability of SHAP Values in Reinforcement Learning. 165-184
- Francesco Giannini, Stefano Fioravanti, Pietro Barbiero, Alberto Tonda, Pietro Liò, Elena Di Lavore: Categorical Foundation of Explainable AI: A Unifying Theory. 185-206
- Alireza Torabian, Ruth Urner: Investigating Calibrated Classification Scores Through the Lens of Interpretability. 207-231
- Sarah Seifi, Tobias Sukianto, Maximilian Strobel, Cecilia Carbonelli, Lorenzo Servadei, Robert Wille: XentricAI: A Gesture Sensing Calibration Approach Through Explainable and User-Centric AI. 232-246
- Niklas Koenen, Marvin N. Wright: Toward Understanding the Disagreement Problem in Neural Network Feature Attribution. 247-269
- Fatima Rabia Yapicioglu, Alessandra Stramiglio, Fabio Vitali: ConformaSight: Conformal Prediction-Based Global and Model-Agnostic Explainability Framework. 270-293
- Fatima Ezzeddine, Mirna Saad, Omran Ayoub, Davide Andreoletti, Martin Gjoreski, Ihab Sbeity, Marc Langheinrich, Silvia Giordano: Differential Privacy for Anomaly Detection: Analyzing the Trade-Off Between Privacy and Explainability. 294-318
- Swati Sachan, Vinicius Dezem, Dale S. Fickett: Blockchain for Ethical and Transparent Generative AI Utilization by Banking and Finance Lawyers. 319-333
- Fahmida Tasnim Lisa, Sheikh Rabiul Islam, Neha Mohan Kumar: Multi-modal Machine Learning Model for Interpretable Malware Classification. 334-349
- Samantha Visbeek, Erman Acar, Floris den Hengst: Explainable Fraud Detection with Deep Symbolic Classification. 350-373
- Meirav Segal, Anne-Marie George, Ingrid Chieh Yu, Christos Dimitrakakis: Better Luck Next Time: About Robust Recourse in Binary Allocation Problems. 374-394
- Tobias Leemann, Martin Pawelczyk, Bardh Prenkaj, Gjergji Kasneci: Towards Non-adversarial Algorithmic Recourse. 395-419
- Nijat Mehdiyev, Maxim Majlatow, Peter Fettke: Communicating Uncertainty in Machine Learning Explanations: A Visualization Analytics Approach for Predictive Process Monitoring. 420-438
- Brigt Arve Toppe Håvardstun, Cèsar Ferri, Kristian Flikka, Jan Arne Telle: XAI for Time Series Classification: Evaluating the Benefits of Model Inspection for End-Users. 439-453
