xailib.explainers.lime_explainer

LIME (Local Interpretable Model-agnostic Explanations) implementation for XAI-Lib.

This module provides LIME explainers for tabular, image, and text data. LIME is a popular explanation method that approximates the behavior of a black-box model locally using an interpretable surrogate model.

LIME works by:
  1. Generating a neighborhood of perturbed samples around the instance to explain.
  2. Querying the black-box model for predictions on these samples.
  3. Training an interpretable surrogate model (e.g., a linear model) on the neighborhood, weighting samples by their proximity to the instance.
  4. Using the surrogate model to explain the prediction.

A minimal sketch of this loop follows.
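The simplified Python sketch below illustrates the four steps for a model with a single numeric output. It is illustrative only, not xailib's or lime's actual implementation; the names lime_sketch and predict_fn, the Gaussian perturbation scale, and the kernel width are all assumptions made for the example.

import numpy as np
from sklearn.linear_model import Ridge

def lime_sketch(predict_fn, x, n_samples=1000, kernel_width=0.75):
    """Illustrative local surrogate; not xailib's implementation."""
    # 1. Perturb: sample a Gaussian neighborhood around the instance x.
    X_pert = x + np.random.normal(scale=0.1, size=(n_samples, x.shape[0]))
    # 2. Query the black-box model on the perturbed samples.
    y_pert = predict_fn(X_pert)
    # 3. Weight samples by proximity to x and fit a linear surrogate.
    dists = np.linalg.norm(X_pert - x, axis=1)
    weights = np.exp(-(dists ** 2) / kernel_width ** 2)
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(X_pert, y_pert, sample_weight=weights)
    # 4. The surrogate's coefficients act as local feature importances.
    return surrogate.coef_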

Classes:

  LimeXAITabularExplanation: Explanation class for LIME tabular explanations.
  LimeXAITabularExplainer: LIME explainer for tabular data.
  LimeXAIImageExplainer: LIME explainer for image data.
  LimeXAITextExplainer: LIME explainer for text data.

References

Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. KDD 2016.

Example

Using LIME for tabular data:

from xailib.explainers.lime_explainer import LimeXAITabularExplainer
from xailib.models.sklearn_classifier_wrapper import sklearn_classifier_wrapper

bb = sklearn_classifier_wrapper(trained_model)
explainer = LimeXAITabularExplainer(bb)
explainer.fit(df, 'target', config={'discretize_continuous': True})
explanation = explainer.explain(instance)
explanation.plot_features_importance()
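The config dictionary passed to fit() appears to carry options for the underlying lime library: in the example above, discretize_continuous is a lime.lime_tabular.LimeTabularExplainer parameter. Assuming the keys are forwarded unchanged (an assumption; check xailib's source for the supported set), other common lime options could be passed the same way:

config = {
    'discretize_continuous': True,  # bin continuous features (lime option)
    'kernel_width': None,           # lime default: sqrt(n_features) * 0.75
    'feature_selection': 'auto',    # how lime picks features for the surrogate
}
explainer.fit(df, 'target', config=config)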

See also

  lime: The underlying LIME library.
  xailib.explainers.shap_explainer_tab.ShapXAITabularExplainer: Alternative explanation method.

Classes

LimeXAIImageExplainer(bb)

LIME explainer for image data.

LimeXAITabularExplainer(bb)

LIME explainer for tabular data.

LimeXAITabularExplanation(lime_exp)

Explanation class for LIME tabular explanations.

LimeXAITextExplainer(bb)

LIME explainer for text data.
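The image and text explainers listed above presumably wrap the corresponding explainers of the underlying lime library; their xailib-level signatures are not documented here. For orientation, lime's own text API can be used directly as follows. Note that this is lime's API, not xailib's, and pipeline stands for a hypothetical fitted scikit-learn text-classification pipeline.

from lime.lime_text import LimeTextExplainer

# classifier_fn must map a list of raw strings to an (n_samples, n_classes)
# array of class probabilities, e.g. an sklearn Pipeline's predict_proba.
text_explainer = LimeTextExplainer(class_names=['neg', 'pos'])
exp = text_explainer.explain_instance(
    'a gripping, well acted film',
    classifier_fn=pipeline.predict_proba,  # hypothetical fitted pipeline
    num_features=6,
)
print(exp.as_list())  # [(token, weight), ...]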