Methods and Issues in Explainable AI— Free Online Course

Sep 26, 2025 · Faried Abu Zaid, AppliedAI Institute for Europe · 1 min read
Image credit: ChatGPT

This workshop surveys contemporary methods in explainable AI (XAI) with a focus on practical use: motivation and regulatory context, trade-offs between interpretability and performance, and integration of explainability into standard ML workflows. We highlight deep-learning-specific challenges and interpretable architectures for vision and forecasting. Course materials and recordings are available online.

Learning outcomes

  • Recognize when and why explainability is required (technical and regulatory drivers).
  • Use a concise taxonomy of XAI methods and select appropriate approaches for a task.
  • Integrate explainability into the ML workflow (EDA, feature engineering, model selection, evaluation).
  • Differentiate post-hoc explanations from intrinsically interpretable models and assess explanation quality.

Syllabus (brief)

  1. Introduction: motivation, taxonomy, and interpretability vs. black-box trade-offs.
  2. Post-hoc methods: feature attributions, partial dependence, LIME, SHAP.
  3. Deep-learning techniques: saliency maps, integrated gradients, data-valuation methods.
  4. Interpretable vision models: prototype-based and prototype-tree approaches (e.g., ProtoPNet, ProtoTree).
  5. Interpretable forecasting: probabilistic and attention-based models; analyst-in-the-loop case studies.
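
To give a flavor of the post-hoc feature attributions in item 2, here is a minimal sketch of permutation feature importance in pure Python: shuffle one feature column, measure how much accuracy drops, and attribute that drop to the feature. The model, data, and function names below are invented for illustration and are not taken from the course materials.

```python
import random

def permutation_importance(predict, X, y, feature, seed=0):
    """Accuracy drop after shuffling one feature column (post-hoc, model-agnostic)."""
    def accuracy(rows):
        return sum(predict(r) == t for r, t in zip(rows, y)) / len(y)

    baseline = accuracy(X)
    # Shuffle only the chosen feature's values across rows.
    shuffled_vals = [row[feature] for row in X]
    random.Random(seed).shuffle(shuffled_vals)
    X_perm = [row[:feature] + [v] + row[feature + 1:]
              for row, v in zip(X, shuffled_vals)]
    return baseline - accuracy(X_perm)

# Toy black-box model: predicts the label from feature 0 only.
model = lambda row: int(row[0] > 0.5)
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]

print(permutation_importance(model, X, y, feature=0))  # nonzero for a feature the model uses (typically)
print(permutation_importance(model, X, y, feature=1))  # → 0.0, the model ignores feature 1
```

Methods like LIME and SHAP refine this idea with local surrogate models and game-theoretic weighting, respectively, but the underlying question is the same: how much does the prediction depend on each input feature?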

Materials

Slides, notebooks, and recordings are linked from the course page; see the provided LMS link for full resources. Note that free registration is necessary to access the content.