XAI Basics
AI as a Black Box
• Unable to answer crucial questions about the operations of the
“machine”:
• Is it making reliable predictions?
• Is it making those predictions for the right reasons?
• But a big question remains: how do we make sense of AI results in the real
world? This is where Explainable AI comes in.
• It’s all about making AI’s decisions easy to understand and apply in real-life
situations.
• Explainable AI helps bridge the gap between AI models and practical use.
What is Explainable AI or XAI?
• Explainable AI (XAI) refers to methods and techniques in the
application of artificial intelligence technology such that the results of
the solution can be understood by humans.
• In the early phases of AI adoption, it was acceptable not to understand
why the model predicts in a certain way, as long as it gave the correct
outputs.
• Explaining how they work was not the first priority.
• Now, the focus is shifting toward building human-interpretable models.
What it is
• Explainable AI is used to describe an AI model, its expected impact
and potential biases.
• It helps characterize model accuracy, fairness, transparency and
outcomes in AI-powered decision making.
• Explainable AI is crucial for an organization in building trust and
confidence when putting AI models into production.
Example
• For example, hospitals can use explainable AI for cancer detection
and treatment, where algorithms show the reasoning behind a given
model's decision-making.
• This makes it easier not only for doctors to make treatment decisions,
but also to provide data-backed explanations to their patients.
What Makes an Explanation Good?
• Different stakeholders: a good explanation depends on which stakeholders
it is aimed at.
• Different audiences often require different explanations.
• Example: tailoring an explanation to the audience involves tradeoffs
between accuracy and explainability.
When We Need Explainability
• Explainability is not free: it takes time and resources.
• Examples:
• Image recognition AI may be used to help clients tag photos of their dogs
when they upload their photos to the cloud. Accuracy may matter a great deal,
but exactly how the model does it may not matter so much.
• AI that predicts when the shipment of screws will arrive at the toy factory;
there may be no great need for explainability there.
• In industry, you will often hear that business stakeholders tend to prefer
models that are more interpretable, like linear models (linear/logistic
regression) and trees, which are intuitive, easy to validate, and easy to
explain to a non-expert in data science (see the sketch after this list).
• In contrast, given the complex structure of real-life data, interest in the
model building and selection phase mostly shifts toward more advanced
models, since they are more likely to yield improved predictions.
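To make the contrast concrete, here is a minimal sketch of why a linear model is easy to explain: each feature gets a single coefficient whose sign and magnitude can be read directly. The scikit-learn pipeline and the bundled breast-cancer dataset are illustrative assumptions, not something fixed by these notes.

```python
# A minimal sketch: inspect the coefficients of a logistic regression.
# Dataset and pipeline choices are assumptions for illustration only.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# Standardize features so coefficient magnitudes are comparable.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)

# Each feature has exactly one coefficient: the model's full "reasoning".
coefs = model[-1].coef_[0]
for name, coef in sorted(zip(X.columns, coefs), key=lambda t: -abs(t[1]))[:5]:
    print(f"{name:25s} {coef:+.2f}")
```

A positive coefficient pushes the prediction toward the positive class and a negative one away from it; this kind of direct readout is exactly what complex models do not offer.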
Challenge of Complex Models
• But how do we explain what these complex models are doing?
Suggesting a Model
• When we suggest this model to stakeholders, will they completely
trust it and immediately start using it? NO.
• They will ask questions and we should be ready to answer them.
• Why should I trust your model?
• So, models take inputs and process them to get outputs. What if our
data is biased?
• It will also make our model biased and therefore untrustworthy.
• It is important to understand and be able to explain our models, so that
we can trust their predictions and perhaps even detect issues and fix
them before presenting them to others.
Techniques for Model Interpretability
• Libraries such as SKATER rely on techniques like:
• feature importance,
• partial dependence plots,
• individual conditional expectation (ICE) plots (see the sketch below).
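As a hedged illustration of those three techniques, the sketch below uses scikit-learn's built-in inspection tools rather than SKATER's own API; the random-forest model and the diabetes dataset are assumptions chosen only to make the example self-contained.

```python
# Sketch of feature importance, partial dependence, and ICE plots
# using scikit-learn's inspection module (not SKATER's API).
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay, permutation_importance

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

# Permutation feature importance: how much shuffling a feature hurts the score.
imp = permutation_importance(model, X, y, n_repeats=5, random_state=0)
for name, score in sorted(zip(X.columns, imp.importances_mean),
                          key=lambda t: -t[1])[:3]:
    print(f"{name:10s} {score:.3f}")

# Partial dependence (average effect of "bmi") plus per-sample ICE curves.
PartialDependenceDisplay.from_estimator(model, X, ["bmi"], kind="both")
plt.show()
```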
• Global surrogate models take the original inputs and your black-box machine
learning model’s predictions. When this new dataset is used to train and test an
appropriate global surrogate model (a more interpretable model such as a linear
model or a decision tree), the surrogate essentially tries to mimic your
black-box model’s predictions. By interpreting and visualizing this “easier”
model, we get a better understanding of how our actual model arrives at its
predictions.
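A minimal sketch of that idea, assuming scikit-learn, a gradient-boosting classifier as the black box, and a shallow decision tree as the surrogate (all illustrative choices, not the only valid ones):

```python
# Global surrogate sketch: train a shallow tree to mimic a black box.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# The "black box" whose behavior we want to approximate.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)
bb_preds = black_box.predict(X)

# The surrogate is trained on the black box's predictions, not on y.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, bb_preds)

# Fidelity: how often the surrogate agrees with the black box.
print("fidelity:", accuracy_score(bb_preds, surrogate.predict(X)))
print(export_text(surrogate, feature_names=list(X.columns)))
```

The printed tree is readable on its own, and the fidelity score tells us how far we can trust it as a stand-in for the black box.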
• Other interpretability tools are the LIME, SHAP, ELI5, and SKATER libraries.
We will talk about them in the next post, with a guided implementation.
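As a small teaser ahead of that guided implementation, here is a hedged SHAP sketch (it assumes `pip install shap`; the model and dataset are again illustrative):

```python
# Teaser: SHAP summary of a tree-based model's feature contributions.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer assigns each feature an additive contribution per prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# The summary plot ranks features by their overall impact on the output.
shap.summary_plot(shap_values, X)
```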
Building an Explainability Framework
• An explainability framework empowers us to harness the full potential of AI,
making its inner workings accessible to all.