
Unravelling the Black Box: Understanding Model Predictions with SHAP

Last Updated: 9th June, 2023

Mahima Phalkey

Data Science Consultant at almaBetter

Use SHAP (SHapley Additive exPlanations) to gain model explainability and interpretability for black box models.

Have you ever wondered how data scientists choose which feature-importance algorithm to use for a particular black box model? Can feature-importance algorithms be used to identify which features are causing bias in a black box model? What if a client asks why your black box model predicted a particular output? The answer to all these questions lies in model interpretability and explainability.


Let's dive deep into how Data Scientists leverage SHAP to explain black box models and understand which features contribute positively or negatively to a given prediction.

Black Box Models vs White Box Models

[Image: difference between black box and white box models]

Black-box and white-box models are two different types of Machine Learning models that differ in interpretability and transparency.

A black box model is a Machine Learning model that is very complex and difficult to interpret. The model's inner workings are invisible to the user, and it is unclear how the model arrived at its output. This brings us back to the question we started with: what do you say when a client asks how your model came to its conclusions?

Examples of black box models include deep neural networks and some types of ensemble models. While black box models can be very accurate and robust, using them can be challenging in applications where interpretability and transparency are essential.

In contrast, white-box models are highly interpretable and transparent Machine Learning models. The model's inner workings are visible to the user, making it clear how the model reached its output. Examples of white-box models are linear regression, decision trees, and some types of rule-based models.

Because black box models are complicated, we need model interpretability and explainability.

What is Model Interpretability and Explainability?

Interpretability refers to understanding how the model works and what factors it uses to make its predictions. This is important for gaining insights into the underlying relationships in the data and identifying areas for improvement. Interpretability also helps identify potential sources of model error and bias.


Explainability refers to the ability to describe, in human-understandable terms, why a model produced a particular prediction. It also helps identify potential sources of model bias and discrimination. This is important for building confidence in the model, basing decisions on relevant factors, and identifying potential sources of bias and error.

What is SHAP?

SHAP (SHapley Additive exPlanations) is a method for interpreting the output of complex Machine Learning models. It provides a way to explain a model's prediction by assigning an importance score to each feature in the input data. SHAP values are based on Shapley values, a concept from cooperative game theory.

[Image: how SHAP is used to explain a model's predictions]

The basic idea behind SHAP is to determine how much each feature in the input data contributes to the model's predictions. This is done by comparing the actual output of the model with a reference output produced by simulating the model's prediction in the absence of that feature. By comparing these two outputs, SHAP can determine the contribution of each feature to the overall prediction.

Limitations of Traditional Model Interpretation Methods

Traditional model interpretation methods, such as feature importance scores and partial dependence plots, have several limitations. Here are some of the common ones:

  1. Lack of individual-level interpretability: Traditional methods often provide global, model-level insights but may not explain how individual predictions are made. This is a problem when we need to explain why a particular prediction was made.
  2. Limited ability to capture complex interactions: Many real-world problems involve complex interactions between features. Traditional methods, such as feature importance scores, may fail to capture these interactions accurately.
  3. Inability to account for non-linear relationships: Traditional methods are often based on linear assumptions and may not accurately capture non-linear relationships between features and the target variable.
  4. Dependence on model assumptions: Traditional methods may make assumptions about the model that do not hold in practice. For example, partial dependence plots assume that features are independent of one another, which may not be true for your real problem.
  5. Limited ability to handle high-dimensional data: Traditional methods may not scale well to high-dimensional data, as they can require a large number of charts and visualizations to provide meaningful insight.

Understanding the Basics of SHAP (SHapley Additive exPlanations): How Does it Work?


  • SHAP is a model-agnostic method: it can be used with any Machine Learning model, regardless of the underlying algorithm or architecture. It explains the model's output for each individual prediction by assigning a contribution score to each feature.
  • The contribution score for each feature represents the impact of that feature on the model output for a given prediction. The contribution score is calculated by estimating the marginal contribution of that feature, which is the difference between the prediction with that feature included and the prediction without it.
  • To compute the contribution scores, SHAP uses a technique called the Shapley value, which is a concept from cooperative game theory. The Shapley value provides a way to fairly distribute the payoff from a cooperative game among the players. In the context of SHAP, the players are the input features, and the payoff is the difference in the model output.
  • To calculate the Shapley value, SHAP considers all possible coalitions of features and computes each feature's marginal contribution within every coalition. The contribution of a feature to a particular prediction is then the weighted average of these marginal contributions, where each coalition's weight reflects how likely it is to occur when features are added in random order.
  • Finally, the contribution scores for all features are combined using the Shapley value to arrive at an overall explanation for the prediction. This explanation shows how each feature contributes to the prediction, positively and negatively, and how they interact.
  • The resulting contribution scores can also be visualized using various charts, such as a bar chart or a beeswarm plot. These charts provide a clear and intuitive way to understand the impact of each feature on the model output for a given prediction. (A small sketch after this list walks through an exact Shapley computation on a toy model.)
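To make the coalition idea above concrete, here is a minimal, self-contained sketch (not the shap library's own implementation) that computes exact Shapley values for a made-up three-feature additive model by enumerating every coalition. All names and numbers are purely illustrative.

from itertools import combinations
from math import factorial

features = ["f1", "f2", "f3"]

# Hypothetical toy "model": the prediction is a weighted sum of whichever
# features are present; absent features fall back to a baseline of 0.
weights = {"f1": 2.0, "f2": -1.0, "f3": 0.5}
x = {"f1": 3.0, "f2": 4.0, "f3": 10.0}

def predict(coalition):
    # Model output when only the features in `coalition` are present.
    return sum(weights[f] * x[f] for f in coalition)

def shapley_value(feature):
    n = len(features)
    others = [f for f in features if f != feature]
    total = 0.0
    for k in range(n):
        for coalition in combinations(others, k):
            # Shapley weight for a coalition S: |S|! * (n - |S| - 1)! / n!
            w = factorial(k) * factorial(n - k - 1) / factorial(n)
            marginal = predict(set(coalition) | {feature}) - predict(set(coalition))
            total += w * marginal
    return total

for f in features:
    print(f, shapley_value(f))
# Because this toy model is additive, each Shapley value equals the feature's
# own term: f1 -> 6.0, f2 -> -4.0, f3 -> 5.0

For real models with feature interactions, enumerating every coalition grows exponentially with the number of features, which is why the shap library relies on model-specific shortcuts (for example, for tree ensembles) and sampling-based approximations.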

SHAP for Feature Importance

SHAP feature importance scores can be used to identify essential features in a data set. These scores are calculated per feature and can be used to understand the contribution of each feature to the model's output. Additionally, these values can be used to compare different models and explain how they differ in terms of performance.
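As a rough sketch of how such importance scores are typically derived, the per-prediction SHAP values can be aggregated by averaging their magnitudes per feature. The helper below is hypothetical; it assumes shap_values is a shap.Explanation object (such as the one produced in the implementation section that follows) and feature_names holds the matching column names.

import numpy as np

def global_importance(shap_values, feature_names):
    # Mean absolute SHAP value per feature across all explained rows.
    mean_abs = np.abs(shap_values.values).mean(axis=0)
    # Rank features from most to least important.
    return sorted(zip(feature_names, mean_abs), key=lambda pair: pair[1], reverse=True)

# shap.plots.bar(shap_values) draws the same ranking as a bar chart.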

Real-world example: SHAP Feature Importance can be used to analyze customer churn. Customer churn is when customers decide to end their relationship with a company. Understanding why customers are leaving and which factors affect their decision can help companies improve customer retention and reduce churn. Using SHAP Feature Importance, a Data Scientist can analyze customer data to identify the top features that are most influential in predicting customer churn. With this information, a company can focus on providing better services and offers to customers at risk of leaving.

SHAP Implementation

pip install shap

import shap
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor

# Load the California Housing dataset
data = fetch_california_housing()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, test_size=0.2, random_state=42)

# Train a Random Forest regressor
model = RandomForestRegressor(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Create an explainer object
explainer = shap.Explainer(model, X_train)

# Generate SHAP values for the test set
shap_values = explainer(X_test, check_additivity=False)

# Plot the SHAP values for a single instance
shap.plots.waterfall(shap_values[0])

# Plot the SHAP summary plot
shap.plots.beeswarm(shap_values)

We have created an explainer object using the shap.Explainer function from the shap library, passing in the trained Random Forest regressor and the training data. This creates an object that can be used to calculate SHAP values for any input data.

We then use the explainer object to calculate SHAP values for the test set by calling it with X_test as its argument. The resulting shap_values object contains the SHAP values for each feature in the test set.

Finally, we plot the results: shap.plots.waterfall shows how each feature pushes a single prediction above or below the model's expected output, while shap.plots.beeswarm summarizes the SHAP values across the whole test set. We can also plot the SHAP values for a specific feature (MedInc, the median income, in this example) using shap.plots.scatter; passing a color argument shades the dots by the values of another, interacting feature.
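A minimal sketch of that scatter call, assuming the shap_values object from the code above; because the model was trained on a raw NumPy array rather than a DataFrame, MedInc is referenced by its column index (0 in the California Housing dataset):

# Dependence-style scatter plot for MedInc (column 0). Passing the full
# Explanation object as `color` lets shap colour the dots by an interacting feature.
shap.plots.scatter(shap_values[:, 0], color=shap_values)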

[Images: waterfall chart and beeswarm chart of SHAP values]

The beeswarm plot contains one dot per observation per feature in the explained data (here, the test set). It conveys the following information:

  1. Feature importance: Variables are ranked in descending order.
  2. Impact: The horizontal location shows whether the effect of that value is associated with a higher or lower prediction.
  3. Original value: Color shows whether that variable is high (in red) or low (in blue) for that observation.
  4. Correlation: if the high (red) values of a feature sit mostly on the positive side of the X-axis, that feature is positively related to the target; if they sit on the negative side, the relationship is negative. In this dataset, for example, high values of MedInc tend to push the predicted house value up.

Red indicates a higher feature value and blue a lower one; the X-axis shows whether the impact on the prediction is positive or negative for that specific data point.

Examples of Using SHAP in Various Industries

  • Healthcare: SHAP can identify the most important features in a patient's medical records and explain how they contribute to a diagnosis or risk prediction.
  • Retail: SHAP can identify the customer attributes and behaviors that drive predictions such as churn or purchase likelihood.
  • Finance: SHAP can explain which financial indicators drive a model's output, for example in credit-scoring or fraud-detection models.
  • Insurance: SHAP can explain which policyholder and claim features contribute most to pricing or claims predictions.

SHAP vs Other Explainability Techniques

SHAP (SHapley Additive exPlanations) is a popular explainability technique that helps explain machine learning models' output. It is one of the many explainability techniques available in the field of machine learning. Here are some of the key differences between SHAP and other explainability techniques:

  1. Local vs Global Explanations: SHAP provides local explanations, which describe how the model arrived at a particular prediction for a given input. Global explanations, on the other hand, aim to describe the model's behavior as a whole and can be used to identify the most important features of the model.
  2. Model-specific vs Model-agnostic: SHAP is a model-agnostic technique; it can be used with any Machine Learning model. Other techniques, such as Local Interpretable Model-Agnostic Explanations (LIME), are also model-agnostic, while others are specific to certain types of models, such as decision tree-based techniques (a short LIME sketch follows this list).
  3. Model Inversion: Some techniques, such as model inversion, attempt to reverse-engineer the model by generating inputs that would lead to a certain output. In contrast, SHAP does not try to reverse-engineer the model but instead provides explanations of how the model is behaving.
  4. Complexity: Some techniques, such as methods based on decision trees, are relatively simple and easy to understand. Others, such as SHAP, can be more complex and require a deeper understanding of the underlying mathematics and algorithms.
  5. Visualization: Some techniques, such as LIME and SHAP, provide visualizations to help users understand the explanations. Other techniques, such as decision tree-based methods, may not provide visualizations or may provide visualizations that are difficult to interpret.
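To make the LIME comparison in point 2 concrete, here is a minimal sketch that explains a single prediction of the Random Forest model from the implementation section with LIME instead of SHAP. It assumes the lime package is installed and reuses model, X_train, X_test, and data from the earlier code.

pip install lime

from lime.lime_tabular import LimeTabularExplainer

# Build a LIME explainer over the same training data used for the SHAP example.
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=data.feature_names,
    mode="regression",
)

# Explain one test instance by fitting a local surrogate model around it.
explanation = lime_explainer.explain_instance(X_test[0], model.predict, num_features=5)
print(explanation.as_list())  # (feature condition, weight) pairs for this single prediction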

Advantages and Disadvantages of SHAP

Advantages

  • Ability to calculate feature importance scores
  • Ability to interpret the results of the model
  • Ability to visualize the results
  • Ability to compare different models
  • Ability to explain how models differ in terms of their performance

Disadvantages

  • Computationally intensive
  • Calculating SHAP values for large datasets can be difficult
  • Interpreting the results of the model can be challenging

Conclusion

Harness the Power of SHAP for Better Model Interpretation & Explainability

  1. SHAP (SHapley Additive exPlanations) is a powerful tool for understanding the behavior of machine learning models. It can be used to explain the model's predictions and uncover patterns in the data that traditional methods could not discover. By harnessing the power of SHAP, businesses can gain insights into their models and make better decisions about using them.
  2. SHAP also helps improve model interpretability and explainability, allowing users to understand why certain predictions were made and identify areas where improvements could be made. With its unique combination of features, SHAP is an invaluable tool for any business looking to maximize its machine-learning efforts.
  3. With SHAP, we can understand our models better, make better decisions, and explain the results to others. With practice, interpreting SHAP outputs becomes second nature, making it easier to justify predictions to stakeholders and act on the data with confidence.

Interview Questions:

  1. What experience do you have in model interpretability and explainability using SHAP?

Answer:
I have experience in model interpretability and explainability using SHAP (SHapley Additive exPlanations) at a basic level. I have used SHAP to explain the output of classification models by plotting the feature importance of individual features and calculating the SHAP values for each feature, which quantify the contribution of each feature to the prediction. I have also used SHAP to investigate how different feature values influence the model's decisions by plotting the SHAP values for different inputs.

  2. How would you define SHAP and explain the methodology behind it?

Answer:
SHAP (SHapley Additive exPlanations) is a model interpretability and explainability method that uses game theory to explain the contribution of each feature to the model's prediction. SHAP assigns each feature an importance score that quantifies its contribution to the final prediction. This score is based on the Shapley value from game theory, which assigns each player (in this case, each feature) a fair share of the total payoff (in this case, the prediction) based on their contribution. SHAP also explains how the model behaves when given different inputs by plotting the SHAP values across different feature values.

  3. Can you provide examples of successful model interpretability and explainability projects you have worked on using SHAP?

Answer:
Yes, I have worked on a number of successful projects involving model interpretability and explainability using SHAP. For example, I worked on a project to improve customer churn prediction. Using SHAP feature importance, I identified the top features that were most influential in predicting customer churn and used this information to focus on providing better services and offers to customers at risk of leaving.

  4. How do you ensure that your models are making accurate and fair predictions?

Answer:
I use SHAP to explain why the model made a particular prediction for an individual instance. This helps me uncover potential biases in the data that can lead to inaccurate results and identify any correlations between features that may be causing unfair results. I also use other techniques, such as cross-validation and data visualization, to identify potential issues and ensure that the model makes accurate and fair predictions.

  5. What challenges have you faced in using SHAP for model interpretability and explainability?

Answer:
One challenge I faced was understanding the SHAP values. They can be difficult to interpret, so I had to spend some time learning how to read them properly. Another challenge was ensuring that the right features were selected for the SHAP computation. The wrong features can lead to inaccurate results, so I had to select them carefully.

Did you know that the average salary of a Data Scientist is Rs.12 Lakhs per annum? So it's never too late to explore new things in life, especially if you're interested in pursuing a career as one of the hottest jobs of the 21st century: Data Scientist. Click here to kickstart your career as a Data Scientist.

Frequently Asked Questions

Q1: What is the main goal of model interpretability and explainability?

Ans: To understand the behavior of a model.

Q2: Can SHAP be used with any type of Machine Learning model?

Ans: Yes. SHAP is model-agnostic, so it can explain the predictions of any model, from linear models to deep neural networks.

Q3: What is the name of the feature importance measure used by SHAP?

Ans: SHAP values, which are based on Shapley values from cooperative game theory.
