What Is XAI? What Makes an Explanation Good?

Silvie Spreeuwenberg, Founder / Director, LibRT

If you start thinking about the people you trust in your environment, you will likely end up more confused than enlightened. That is what happened to me! There are people I have trusted right from the beginning, people who have gained my trust over time, and people whom I trust but who have habits that are a bit annoying, like always being late. Although I know perfectly well when I trust someone and when I don't, trust is a difficult concept to define.

What is trust?

At a recent training conducted by Hayat Chedid at ScaleUp Nation, I learned that 'trust' comprises several concepts that interact with each other. One way to describe this interaction is the following trust equation, as defined by Charles H. Green:

trust = (credibility + reliability + intimacy) / self-orientation

Being an engineer, I understand this equation, and it also helps me understand the differences between the people I trust. One of them may not be very reliable but has a lot of credibility. So, this trust equation is a good explanation for me. Is it for you too?

Do you trust the explanation?

Likewise, a good explanation is built up of several interrelated components and may be perceived differently depending on the task, the domain, or even the domain expert's personal preference. Research confirms this hypothesis and has unraveled some of these components.

Explanations have been a focus of philosophy for millennia as part of understanding the nature of knowledge, causality, beliefs, and justification.

This is the sixth article in a series on Explainable AI (XAI), and I would like to share what makes an explanation a good explanation.

Findings from social sciences on explanations

There are many valuable bodies of research in philosophy, psychology, and cognitive science on how people define, generate, select, evaluate, and present explanations. The research to date suggests that people have certain biases and social expectations towards the explanation process itself.[1]

This sounds like a counterintuitive conclusion: to prevent decision biases, we use explanations that take into account the biases of people when evaluating explanations. Yes, I admit, it sounds like a catch-22.

Other more practical findings relevant to generating explanations of AI models are:

  • Explanations must show the difference between outcomes (the technical term is that they must be contrastive).

  • Explanations may be incomplete; they can show one or two specific, selected causes, and not necessarily be the complete cause of an event.

  • Explanations must be relevant; the most likely explanation is not always the best explanation for a person, and statistical generalizations based on probabilities are perceived as unsatisfying.

  • Explanations should be believable; they are part of a social process where knowledge is transferred in a conversation or interaction and presented in relation to the explainer's beliefs about the explainee's beliefs.

These four points converge on a single insight: explanations are contextual.

AI's first steps towards better explanations

Since explanations are contextual, XAI will need a separate layer of models to generate explanations for a specific context.

This is exactly the approach I took in the research for my graduate thesis. I worked on a model that helped a breeder select crops in the fifth year of a breeding process. The model used data from the previous 10 years and was generated by a genetic algorithm.[2] The breeder did not trust the model and sometimes had additional knowledge; for example, some parts of the field have more shade or contain mouse holes, and that information was not captured in the data set. By generating a decision tree from the model using an algorithm (C4.5), we could explain to the breeder which data elements were most relevant for a specific outcome and how these data elements had been combined. This way, the breeder could understand the reasoning of the model and make his own decision to follow it or deviate from it.
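
To make the surrogate-tree idea concrete, here is a minimal sketch in Python. The black-box model (a stand-in function named crop_model), the feature names, and the data are all illustrative assumptions, and scikit-learn's DecisionTreeClassifier (which implements CART rather than C4.5) stands in for the tree learner.

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))        # illustrative stand-in for ten years of breeding data
feature_names = ["yield", "height", "disease_score", "moisture"]

def crop_model(X):
    # Stand-in for the genetic-algorithm model: labels each crop keep (1) or discard (0).
    return (0.8 * X[:, 0] - 0.3 * X[:, 2] > 0).astype(int)

# Train a shallow, readable tree to mimic the black-box model's decisions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, crop_model(X))

# The printed rules show which data elements matter for a given outcome and how they combine.
print(export_text(surrogate, feature_names=feature_names))
print("fidelity to the black-box model:", surrogate.score(X, crop_model(X)))

The fidelity score indicates how faithfully the readable tree reproduces the black-box model's decisions; a low score means the printed rules should not be trusted as an explanation of that model.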

Nowadays, the practically oriented research of DARPA (the Defense Advanced Research Projects Agency in the United States) follows the same approach by creating an 'explanation interface' that should help a user:

  • understand why the model produced the result;
  • understand why the model did not produce a (different) result;
  • know when the model is successful;
  • know when the model failed;
  • trust the system.

Other experiments

The research on explainable learning algorithms can be divided into three research areas:

  • Change deep learning methods to ensure that only 'features' that are easy to explain are learned. Explainability, next to accuracy and precision, becomes one of the optimization criteria for the model.

  • Improve techniques that teach explainable models such as Bayesian networks. An example of a Bayesian network is a well-structured model that shows symptoms and causes, with their dependencies and the likelihood of each dependency, in a directed graph. These causal models are easier to understand and may therefore be used directly as an explanation (see the sketch after this list).

  • Understand how to use techniques to generate explanations, such as decision tree generation, from black box models. The generated model is used to explain the result of the black box model.
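
To illustrate the causal-model point in the second bullet, here is a minimal, dependency-free sketch of a two-node network (Disease -> Fever). The probabilities are invented for illustration; the point is that the chain of conditional probabilities can be read off directly, which is why such a model can serve as its own explanation.

# Directed graph: Disease -> Fever, with invented (illustrative) probabilities.
p_disease = 0.01                  # prior P(disease)
p_fever_given_disease = 0.90      # likelihood of the dependency
p_fever_given_healthy = 0.05

# Bayes' rule gives the diagnostic direction: P(disease | fever).
p_fever = (p_fever_given_disease * p_disease
           + p_fever_given_healthy * (1 - p_disease))
p_disease_given_fever = p_fever_given_disease * p_disease / p_fever

print(f"P(disease | fever) = {p_disease_given_fever:.2f}")   # about 0.15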

The idea that explainability comes at the expense of accuracy is deeply rooted in the AI community. Unfortunately, this has hampered research into good explainable models and indicates that the human factor in AI is underestimated.

What do users say?

For clinical users, machine learning and AI look promising. There is a lot of data, and also much uncertainty, so every improvement counts. However, an accurate AI system is not necessarily sufficient for clinical staff to use it. Adoption has been more challenging than anticipated because even reliable machine learning systems cannot earn a clinician's trust on accuracy alone. Research about the reasons for this lack of trust indicates that accuracy is not enough. Clinicians know that the complexity of clinical medicine is such that no model is likely to achieve perfect predictions. Clinicians therefore expect an imperfect model, and a system that acknowledges this limitation promotes trustworthiness.

Thus, clinicians find it more important that the system help them to justify their clinical decision making by:

  • aligning the relevant model features with evidence-based medical practice. This can be achieved by showing only the relevant subset of the features that drive the model outcome.

  • providing the context in which the model operates. This can be achieved by showing what data is used to train the model and how certain one is that the data is relevant, complete, and correct.

  • promoting awareness of situations where the model may fall short. This can be achieved by showing certainty scores for individual outputs and indicating which input combinations were not present in the training set (see the sketch after this list).

  • showing a transparent design that resembles the analytical process a clinician would follow himself. This can be achieved by showing the relevant path in a decision tree.
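
Here is a hedged sketch of the 'certainty score plus coverage check' idea from the third bullet above. The classifier, the features, and the simple min/max coverage envelope are illustrative assumptions, not something prescribed by the research discussed here.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X_train = rng.normal(size=(300, 3))                      # illustrative training data
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
lo, hi = X_train.min(axis=0), X_train.max(axis=0)        # crude coverage envelope of the training data

def explain_prediction(x):
    """Return the prediction plus two trust-related signals for the user."""
    proba = clf.predict_proba(x.reshape(1, -1))[0]
    return {
        "prediction": int(proba.argmax()),
        "certainty": round(float(proba.max()), 2),                    # certainty score for this output
        "outside_training_data": bool(np.any((x < lo) | (x > hi))),   # combination not seen in training
    }

print(explain_prediction(np.array([0.2, -0.1, 5.0])))    # 5.0 lies far outside the training range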

This research on clinicians can be generalized to define what makes up a good explanation for AI models.

What makes up a good explanation in XAI?

A good explanation uses believable model features, shows that the training data is relevant for the given situation, and distinguishes the result from other outcomes. The explanation should be simple and transparent. By analogy with the trust equation above, we find the following equation for a 'good explanation':

Good explanation = (Believable + Relevant + Distinguishable) / Complexity

The AI and machine learning community has resorted to developing novel techniques to measure model performance as a way to explain the model, but accuracy, precision, and robustness are not the measures that will convince a domain expert. The model doesn't have to be more accurate than an expert's opinion, and the explanation doesn't have to be complete.

Instead, we need to develop separate interfaces that support the communication of an explanation and the interaction with the user, and that help justify the decision by taking into account the four elements that make up a good explanation:

  • The explanation should be aligned with the user's beliefs.

  • The explanation should indicate that relevant data is used to train the model.

  • The explanation should indicate how to distinguish situations where the model is not applicable.

  • The explanation should be transparent by not being complex and by following common reasoning patterns.

The community also needs to be aware that false positives (for example, when a model incorrectly classifies an applicant as eligible) result in high costs and cause domain experts to be inherently skeptical about AI models. In such domains, explanations must help the domain expert find the false positives of the AI system before these costly errors are made. This is what our explanation interface should provide to the expert: it should act as a mirror for the expert, revealing both the model's and the expert's decision biases.
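
One way an explanation interface could support this is sketched below, under an assumed classifier and a toy eligibility scenario (none of the names or numbers come from the article itself): rank the model's positive classifications by uncertainty so the expert reviews the calls most likely to be false positives first.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X_train = rng.normal(size=(200, 2))
y_train = (X_train[:, 0] > 0).astype(int)            # 1 = "eligible" in this toy setup
clf = LogisticRegression().fit(X_train, y_train)

X_new = rng.normal(size=(10, 2))                     # new applicants to classify
p_eligible = clf.predict_proba(X_new)[:, 1]
positives = np.flatnonzero(p_eligible >= 0.5)        # cases the model classifies as eligible

# Queue the positive calls for expert review, least certain first, so attention
# goes where a costly false positive is most likely.
for i in positives[np.argsort(p_eligible[positives])]:
    print(f"applicant {i}: P(eligible) = {p_eligible[i]:.2f} -> expert review")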

References

[1] Tim Miller, "Explanation in Artificial Intelligence: Insights from the Social Sciences," August 2018, https://people.eng.unimelb.edu.au/tmiller/pubs/explanation-review.pdf

[2] "What Is the Genetic Algorithm?" https://www.mathworks.com/help/gads/what-is-the-genetic-algorithm.html

# # #

Standard citation for this article:


Silvie Spreeuwenberg, "What Is XAI? What Makes an Explanation Good?" Business Rules Journal, Vol. 20, No. 9 (Sep. 2019)
URL: http://www.brcommunity.com/a2019/c006.html

About our Contributor:


Silvie Spreeuwenberg, Founder / Director, LibRT

Silvie Spreeuwenberg has a background in artificial intelligence and is the co-founder and director of LibRT. With LibRT, she helps clients draft business rules in the most efficient and effective way possible. Her clients are characterized by a need for agility and excellence in executing their unique business strategy or policy. Silvie's experience has resulted in the development of tools and techniques to increase the quality of business rules. She writes, "We believe that one should focus on quality management of business rules to make full profit of the business rules approach." LibRT is located in the Netherlands; for more information visit www.silviespreeuwenberg.com & www.librt.com

