XAI & MLI - or the Art of Making Machine Learning Intelligible and Fully Operational
What is XAI, and why should companies care about it?
Watch the webinar
About this webinar
#1
📜 In April this year, the European Commission presented the first version of the EU AI Act, a Europe-wide framework aimed at making AI human-centric, trustworthy, and explainable.
#2
Machine Learning (ML) model predictions are useful and increasingly accurate. However, it often remains opaque why these models produce the numbers they do, which directly affects the validity of the subsequent decision-making process. What led these models and algorithms to output these numbers?
Can they really be trusted?
#3
As ML becomes ubiquitous, it is now urgent to ensure:
- the adoption of these technologies, by providing clear, understandable, and relevant explanations for their results and predictions;
- the compliance of the models with existing and future regulations (see the GDPR and the EU AI Act);
- the sustainability and relevance of the predictions achieved.
Watch this webinar
👉 Access this on-demand webinar to learn the importance and relevance of XAI (eXplainable Artificial Intelligence) & MLI (Machine Learning Intelligibility) approaches, as well as the keys to implementing them successfully within existing Data Science pipelines and workflows.
Key Takeaways
🤔 What XAI & MLI are and why organizations should care about them
🧐 How to approach XAI & MLI
💡 Real-world use cases: Churn Prediction in Media & ML Model Understanding in the Automotive Industry