
The Case for Human-Centered XAI

Published in Intel Tech · 3 min read · Dec 12, 2023

New approaches focus on helping end users to understand AI.

Photo by Sid Balachandran on Unsplash

Presented by Ezequiel Lanza — AI Open Source Evangelist (Intel)

As AI continues to revolutionize industries, convincing users that AI models can be trusted will play an important role in driving widespread adoption. Explainable AI (XAI) provides frameworks and tools to help users understand why models make certain decisions, enabling users to embrace AI with confidence.

But while XAI approaches are proving to be useful for developers, how do we know we’re providing the explanations end users need? After all, AI models are meant to serve end users; end users should be able to understand and interpret model predictions. One way to determine if end users are getting helpful explanations is by simply asking them. Recent human-centered approaches have leveraged real user studies to expose the potential pitfalls in existing XAI techniques.

In a 2022 study (“Help Me Help the AI”: Understanding How Explainability Can Support Human-AI Interaction, by Sunnie S. Y. Kim, Elizabeth Anne Watkins, Olga Russakovsky, Ruth Fong, and Andrés Monroy-Hernández, https://arxiv.org/abs/2210.03735), Intel Labs interviewed end users of the Merlin* bird identification application to help evaluate the usefulness of the application’s explanations — what the end users’ needs are, how end users intend to use the explanations, and how they perceive existing XAI approaches. As part of the study, the application served participants with four kinds of explanations:

  • A heatmap-based explanation depicting the weight of each bird feature used to identify a species (a brief code sketch of this style follows the list below).
  • An example-based explanation that provides pictures of birds that are similar to the bird in the picture.
  • A concept-based explanation that reveals how much weight each bird feature received in the model’s prediction, similar to a Shapley additive explanations (SHAP)* approach.
  • A prototype-based explanation highlighting which features of a bird were most useful to help the model identify it.
End users saw four kinds of explanations of how the model identified a bird.
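To make the heatmap idea concrete, here is a minimal sketch of a vanilla gradient saliency map in PyTorch: the gradient of the top class score with respect to the input pixels indicates which regions of the photo most influenced the prediction. This is an illustrative example, not the Merlin app’s actual pipeline; the model choice (ResNet-18) and the input file name are assumptions.

import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Any pretrained image classifier works here; ResNet-18 is just an illustrative stand-in.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# "bird.jpg" is a hypothetical input photo.
img = preprocess(Image.open("bird.jpg").convert("RGB")).unsqueeze(0)
img.requires_grad_(True)

logits = model(img)
top_class = logits.argmax(dim=1).item()
logits[0, top_class].backward()  # gradient of the top score w.r.t. the input pixels

# One heat value per pixel: absolute gradient, maxed over the color channels.
saliency = img.grad.abs().max(dim=1).values.squeeze(0)  # shape (224, 224)

Overlaying the saliency values on the original photo produces the kind of heatmap participants saw; richer attribution methods such as SHAP or Grad-CAM follow the same basic pattern of tracing a prediction back to input features.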

The results were different for participants with high AI knowledge and those with low AI knowledge. Users with high AI knowledge found the heatmap and concept-based explanations to be intuitive, while those with less AI experience said these approaches were too technical.

Only participants with high AI knowledge could make sense of the explanations in the study.

What we end up with is a gap. Current XAI approaches work well for AI creators and experienced end users, but users with low AI knowledge get little out of existing explanations, and they likely won’t until they receive more practically useful insights they can understand.

The study suggests that creators are building XAI models that serve themselves, not the end user.

New human-centered approaches emphasize the need for XAI techniques that prioritize the end user. While there are gaps in user comprehension, listening to real users can help us improve XAI models and ensure that explanations address why models make certain decisions, not just what they decide.

We encourage the community to help close the knowledge gap by focusing on building models that can scale across industries — what worked in healthcare may also work in finance. There’s no centralized system for ideas and solutions, but we can overcome this challenge by collaborating and sharing insights across the community.

  • Learn more about existing XAI techniques

  • Watch the full talk

Acknowledgments
The author thanks Elizabeth Watkins and Dawn Nafus for their incredible insights.

About the author

Ezequiel Lanza, AI Open Source Evangelist, Intel

Ezequiel Lanza is an open source evangelist on Intel’s Open Ecosystem team, passionate about helping people discover the exciting world of AI. He’s also a frequent AI conference presenter and creator of use cases, tutorials, and guides to help developers adopt open source AI tools like TensorFlow and Hugging Face*. Find him on X at @eze_lanza.
