Methods and Issues in Explainable AI — Free Online Course
Image credit: ChatGPT

This workshop surveys contemporary methods in explainable AI (XAI) with a focus on practical use: motivation and regulatory context, trade-offs between interpretability and performance, and integration of explainability into standard ML workflows. We highlight deep-learning-specific challenges and interpretable architectures for vision and forecasting. Course materials and recordings are available online.
Learning outcomes
- Recognize when and why explainability is required (technical and regulatory drivers).
- Use a concise taxonomy of XAI methods and select appropriate approaches for a task.
- Integrate explainability into the ML workflow (EDA, feature engineering, model selection, evaluation).
- Differentiate post-hoc explanations from intrinsically interpretable models and assess explanation quality.
Syllabus (brief)
- Introduction: motivation, taxonomy, and interpretability vs. black-box trade-offs.
- Post-hoc methods: feature attributions, partial dependence, LIME, SHAP (see the SHAP sketch after this list).
- Deep-learning techniques: saliency maps, integrated gradients, data-valuation methods (an integrated-gradients sketch follows below).
- Interpretable vision models: prototype and prototype-tree approaches (e.g., ProtoPNet, ProtoTree).
- Interpretable forecasting: probabilistic and attention-based models; analyst-in-the-loop case studies.
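To make the post-hoc unit concrete, here is a minimal sketch of a SHAP attribution workflow on a tree ensemble, using scikit-learn and the shap package. The dataset, model, and sample size are illustrative choices, not the course's actual notebooks.

```python
# Sketch: post-hoc feature attribution with SHAP on a tree ensemble.
import shap
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor

# Illustrative dataset and model; any tabular regressor would do.
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])

# Global view: which features drive predictions, and in which direction.
shap.summary_plot(shap_values, X.iloc[:100])
```

For non-tree models, the generic shap.Explainer entry point selects a suitable algorithm (e.g., a kernel or permutation explainer) automatically.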
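Likewise, the integrated-gradients method from the deep-learning unit fits in a few lines. The sketch below assumes a PyTorch image classifier mapping a batch of images to class logits; the function name, zero baseline, and step count are illustrative defaults.

```python
# Sketch: integrated gradients for a PyTorch image classifier (assumed interface).
import torch

def integrated_gradients(model, x, target, baseline=None, steps=50):
    """Approximate integrated gradients for one input tensor x of shape (C, H, W)."""
    if baseline is None:
        baseline = torch.zeros_like(x)  # common choice: all-zero (black) baseline
    # Interpolation coefficients along the straight-line path from baseline to x.
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1, 1, 1)
    path = baseline.unsqueeze(0) + alphas * (x - baseline).unsqueeze(0)
    path.requires_grad_(True)
    # Gradient of the target-class score w.r.t. each interpolated point.
    score = model(path)[:, target].sum()
    grads = torch.autograd.grad(score, path)[0]
    # Riemann approximation of the path integral of gradients.
    return (x - baseline) * grads.mean(dim=0)
```

Libraries such as Captum ship production-grade implementations of this and related gradient-based attribution methods.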
Materials
Slides, notebooks, and recordings are linked from the course page. See the provided LMS link for the full resources. Note that free registration is required to access the content.