Can you explain your experience with using SHAP and LIME for model explainability?

How To Approach: Associate

  1. Detail your working knowledge of the SHAP and LIME tools.
  2. Give a practical professional example of using these tools.
  3. Discuss your specific contribution to the interpretability project.
  4. Share the outcome and business impact of the project.

Sample Response: Associate

As a data scientist at Insuro-Logic, I have regularly used SHAP and LIME to build explainability into our machine learning models. A recent project involved developing a model that predicts the likelihood of a customer filing an insurance claim. Because both false positives and false negatives carry serious consequences in this setting, model interpretability was particularly critical.

Using LIME, I created local surrogate models to break down predictions for individual policyholders. The simplicity and granularity of the LIME explanations made it easier to communicate the decision drivers to non-technical stakeholders. I then used SHAP to produce global interpretability reports: the SHAP values showed which features were the key drivers of the model's predictions across the dataset, strengthening our global understanding of its behavior.
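
The sketch below is a minimal, self-contained illustration of this kind of workflow, not the actual Insuro-Logic code: the random forest, the synthetic policyholder data, and the feature names are illustrative stand-ins. It shows the two steps described above with the lime and shap libraries, using `LimeTabularExplainer.explain_instance` for a single local explanation and `TreeExplainer.shap_values` for a global feature-importance summary.

```python
# Minimal sketch: local LIME explanation and global SHAP summary for a
# hypothetical claims-likelihood classifier. Model, data, and feature names
# are toy stand-ins for the real pipeline.
import numpy as np
import pandas as pd
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy stand-in data: four policyholder features and a claim/no-claim label.
rng = np.random.default_rng(0)
feature_names = ["policy_age", "prior_claims", "vehicle_value", "driver_age"]
X = pd.DataFrame(rng.normal(size=(1000, 4)), columns=feature_names)
y = ((X["prior_claims"] + 0.5 * X["vehicle_value"]) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# --- LIME: local surrogate explanation for one policyholder ---
lime_explainer = LimeTabularExplainer(
    training_data=X_train.values,
    feature_names=feature_names,
    class_names=["no_claim", "claim"],
    mode="classification",
)
local_exp = lime_explainer.explain_instance(
    data_row=X_test.iloc[0].values,
    predict_fn=model.predict_proba,
    num_features=4,
)
print(local_exp.as_list())  # (feature condition, weight) pairs for this one prediction

# --- SHAP: global view of feature contributions across the test set ---
sv = shap.TreeExplainer(model).shap_values(X_test)
if isinstance(sv, list):   # older shap versions return one array per class
    sv = sv[1]
elif sv.ndim == 3:         # newer shap versions return (samples, features, classes)
    sv = sv[:, :, 1]
global_importance = pd.Series(np.abs(sv).mean(axis=0), index=feature_names)
print(global_importance.sort_values(ascending=False))
# shap.summary_plot(sv, X_test) would render the beeswarm chart used in the global reports.
```

In a real project the two views complement each other: the LIME output supports case-by-case conversations about individual policyholders, while the mean absolute SHAP values (or a summary plot) back the global reports on which features drive predictions overall.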

Beyond strengthening stakeholders' trust in the model, this interpretability work also contributed materially to refining our underwriting process and improving the accuracy of premium calculations.