Explainable AI (XAI): The Future of Trust in AI

Ilyas Ahmed
7 min read · Jun 6, 2023

Introduction

Artificial intelligence (AI) is rapidly becoming a part of our everyday lives. From self-driving cars to medical diagnosis, AI is being used to make decisions that have a significant impact on our lives. However, as AI systems become more complex, it is becoming increasingly difficult for humans to understand how they work and why they make the decisions they do. This lack of transparency can lead to a number of problems, including:

  • Lack of trust: If people don’t understand how an AI system works, they are less likely to trust it. This can make it difficult for AI systems to be adopted in a wide range of applications.
  • Bias: AI systems can be biased, which can lead to unfair or inaccurate decisions. For example, an AI system that is used to make loan decisions may be biased against people of color, which could lead to people of color being less likely to get loans, even if they are qualified.
  • Misuse: If people don’t understand how an AI system works, they may misuse it. For example, a text-generation system could be repurposed to produce convincing fake news and spread misinformation.

Explainable AI (XAI)

Explainable AI (XAI) is a set of techniques that allow humans to understand, interpret, and explain the decisions made by AI systems. XAI is important for a number of reasons, including:

  • Building trust: XAI can help users trust AI systems by providing them with insights into how the systems work and why they make the decisions they do. This can be especially important in situations where the decisions made by AI systems have a significant impact on people’s lives, such as in healthcare or finance.
  • Identifying bias: XAI can help identify bias in AI systems before it causes harm. In the loan-decision example above, an explanation of how the model weighs its inputs is often the first place such bias becomes visible.
  • Improving performance: XAI can be used to improve the performance of AI systems. By understanding how the systems work, developers can make changes that improve the accuracy and fairness of the systems.

Types of XAI

There are a number of different XAI techniques, each with its own strengths and weaknesses. Some of the most common XAI techniques include:

  • Feature importance: This technique identifies the features that contribute most to a prediction, which is helpful for understanding why the system made a particular decision (a short code sketch follows this list).
  • Local explanations: This technique provides explanations for individual predictions. This can be helpful for understanding how the system made a particular decision for a specific data point.
  • Counterfactual explanations: This technique shows how a prediction would change if one or more features were changed. This can be helpful for understanding how the system is sensitive to different features.
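
As a concrete illustration of feature importance, here is a minimal sketch using scikit-learn's permutation importance on a toy dataset. The dataset, model, and variable names below (X_train, X_val, model) are illustrative assumptions, not part of the original example.

from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Toy regression dataset and model, used only to illustrate the idea.
X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle each feature in turn and measure how much the
# validation score drops. Bigger drops mean more important features.
result = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.4f}")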

The Top XAI Libraries in Python

There are a number of different XAI libraries in Python, each with its own strengths and weaknesses. Here are a few of the most popular XAI libraries:

  • SHAP: SHAP (SHapley Additive exPlanations) is a model-agnostic approach to explainability that uses Shapley values from cooperative game theory to explain the predictions of machine learning models. A SHAP value measures how much each feature contributes to a particular prediction (a short usage sketch follows this list).
  • LIME: LIME (Local Interpretable Model-agnostic Explanations) is another model-agnostic approach to explainability that uses local surrogate models to explain the predictions of machine learning models. LIME explanations are local, which means that they only explain a single prediction.
  • ALE: ALE (Accumulated Local Effects) is a global explanation technique: it shows how each feature influences a model's predictions on average across the dataset, and it handles correlated features more gracefully than partial dependence plots.
  • InterpretML: InterpretML is Microsoft's explainability library. It combines inherently interpretable "glass-box" models, such as the Explainable Boosting Machine, with wrappers around black-box explainers like SHAP, LIME, and partial dependence.
  • Alibi: Alibi is an open-source library focused on inspecting black-box models. It implements ALE plots, anchor explanations, and counterfactual explanations, among other methods.
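
To give a flavour of how SHAP is used in practice, here is a minimal sketch. It assumes the tree-based model and the X_val DataFrame from the feature-importance example above; both are illustrative, and shap.Explainer(model, X_val) could be used instead to let SHAP pick an explainer automatically.

import shap

# `model` is the tree-based regressor and X_val the validation DataFrame from
# the permutation-importance sketch above (illustrative assumptions).
explainer = shap.TreeExplainer(model)       # fast, exact explainer for tree ensembles
shap_values = explainer.shap_values(X_val)  # one SHAP value per feature per row

# Global summary: which features push predictions up or down, and by how much.
shap.summary_plot(shap_values, X_val)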

When choosing an XAI library, it is important to consider the following factors:

  • The type of model you are using: Some explainers are tied to specific model families. For example, SHAP's TreeExplainer targets tree ensembles and gradient-based methods target neural networks, while LIME and SHAP's kernel explainer are model-agnostic.
  • The scope and detail you need: Local methods such as LIME explain one prediction at a time, while global methods such as ALE summarise how features affect the model across the whole dataset; many projects need both views.
  • The ease of use and cost: Some libraries are easier to integrate and faster to run than others; model-agnostic explainers such as KernelSHAP can be slow on large datasets, so compute budget is worth weighing alongside how well the library fits your stack.

Challenges of XAI

Despite the benefits of XAI, there are a number of challenges that need to be addressed before XAI can be widely adopted. Some of the challenges of XAI include:

  • Complexity: XAI techniques can be complex and difficult to understand. This can make it difficult for developers to implement XAI techniques in their AI systems.
  • Cost: XAI techniques can be expensive to implement. This can make it difficult for small businesses and startups to adopt XAI.
  • Acceptance: Some people may not be comfortable with the idea of AI systems that are able to explain their decisions. This could make it difficult for AI systems to be adopted in some applications.

Python Code Using XAI

Here we use a sample house-price dataset from Kaggle, where the goal is to predict the price of a house from a number of features. We will use the LIME library to explain how the model arrives at a particular prediction. The dataset snapshot below gives a clear picture.

House Price Prediction Dataset from Kaggle

We can see from the dataset that there are a lot of features that define the price of the house. In this example, we will use a few features to show how the LIME explainer depicts the way a prediction is reached.

import lime
import lime.lime_tabular

# X_train and X_test are pandas DataFrames of house features, and `model` is a
# regressor that has already been trained on X_train and the corresponding prices.
explainer = lime.lime_tabular.LimeTabularExplainer(
    X_train.values,
    feature_names=X_train.columns.values.tolist(),
    class_names=['price'],
    verbose=True,
    mode='regression',
)

# Explain a single test record (here, row 7 of X_test) using its six most
# influential features.
record_num = 7
exp = explainer.explain_instance(X_test.values[record_num], model.predict, num_features=6)

exp.show_in_notebook(show_table=True)

The code above imports the lime library and its lime_tabular module, creates a LimeTabularExplainer object, explains a single prediction, and shows the explanation in a notebook. The output is a table showing how much each feature contributed to the prediction.

Here is a more detailed explanation of the code:

  • The import lime and import lime.lime_tabular lines import the lime library and its tabular explainer module, which are used to create the LIME explainer.
  • The explainer = lime.lime_tabular.LimeTabularExplainer(...) call creates a LimeTabularExplainer object. The X_train.values parameter supplies the training data, which LIME uses to learn the distribution it perturbs around. The feature_names parameter gives the names of the features, and class_names gives the label used for the target in the output (here, 'price'). The verbose parameter controls whether the explainer prints details of the local fit, and mode='regression' tells the explainer it is explaining a regression prediction rather than a classification.
  • The record_num = 7 line specifies the record number that will be explained.
  • The exp = explainer.explain_instance(X_test.values[record_num], model.predict, num_features=6) line explains a single prediction. The X_test.values[record_num] parameter is the data point being explained, model.predict is the function LIME calls on perturbed copies of that point, and num_features caps how many features appear in the explanation.
  • The exp.show_in_notebook(show_table=True) line shows the explanation in a notebook. The show_table parameter specifies whether or not the explanation should be shown as a table.

LIME Explainer output in Jupyter Notebook

The above image depicts the following:

  • The house price was predicted as 6466502.44 USD
  • The variables bathrooms, basement and stories have a positive influence on the prediction
  • The variables airconditioning, hotwaterheating and parking have a negative impact on the prediction

A lot more can be done with these powerful XAI libraries; a small example is depicted here, and the snippet below shows one more way to consume the same explanation.
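
For scripts that run outside a Jupyter notebook, the same explanation object can be consumed programmatically. as_list() and save_to_file() are standard methods on LIME's Explanation object; the file name below is just an example.

# Each entry is a (feature condition, weight) pair: positive weights push the
# predicted price up, negative weights pull it down.
for feature, weight in exp.as_list():
    print(f"{feature}: {weight:+.2f}")

# Save a standalone HTML version of the explanation for sharing.
exp.save_to_file('lime_explanation.html')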

The Future of XAI

Despite the challenges, XAI is a rapidly growing field, and there are many new techniques being developed all the time. As XAI techniques improve, they will become increasingly important for building trust in AI systems, identifying bias, and improving performance.

Conclusion

Explainable AI is a critical technology for ensuring that AI systems are trustworthy, fair, and accurate. As XAI techniques continue to improve, we can expect to see them applied across a wide range of industries. The field is growing rapidly, and its techniques allow humans to understand, interpret, and explain the decisions made by AI systems, which helps build trust with users, identify bias, and improve model performance. Python is a popular language for building explainable systems, with libraries such as SHAP, LIME, InterpretML, and Alibi that can explain the predictions of a wide variety of models.

For more exciting ML articles, please support me by following Ilyas Ahmed

Until next time, keep building AI models with XAI!

Written by Ilyas Ahmed

Data Scientist at Wipro Arabia Ltd. Experienced in ML, NLP and Comp. Vision. Sharing what I know :)
