xailib.explainers.lime_explainer
LIME (Local Interpretable Model-agnostic Explanations) implementation for XAI-Lib.
This module provides LIME explainers for tabular, image, and text data. LIME is a popular explanation method that approximates the behavior of a black-box model locally using an interpretable surrogate model.
LIME works by (see the sketch after this list):
1. Generating a neighborhood of perturbed samples around the instance to explain.
2. Getting predictions for these samples from the black-box model.
3. Training an interpretable model (e.g., a weighted linear model) on the neighborhood.
4. Using the interpretable model to explain the prediction.
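The steps above can be sketched in a few lines of Python. This is an illustrative toy, not the xailib implementation: predict_fn stands in for any black-box probability function, scikit-learn's Ridge plays the interpretable surrogate, and the function name and the scale parameter are made up for the example.

import numpy as np
from sklearn.linear_model import Ridge

def lime_local_sketch(predict_fn, instance, num_samples=1000, scale=0.5):
    # Illustrative sketch only; not the xailib implementation.
    rng = np.random.default_rng(42)
    # 1. Perturb the instance to build a local neighborhood
    neighborhood = instance + rng.normal(0.0, scale, size=(num_samples, instance.size))
    # 2. Query the black box for the class probability of each sample
    preds = predict_fn(neighborhood)
    # 3. Weight samples by proximity to the original instance (RBF kernel)
    dists = np.linalg.norm(neighborhood - instance, axis=1)
    weights = np.exp(-dists ** 2 / (2 * scale ** 2))
    # 4. Fit a weighted linear surrogate; its coefficients act as
    #    local feature importances
    surrogate = Ridge(alpha=1.0).fit(neighborhood, preds, sample_weight=weights)
    return surrogate.coef_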
Classes:
LimeXAITabularExplanation: Explanation class for LIME tabular explanations.
LimeXAITabularExplainer: LIME explainer for tabular data.
LimeXAIImageExplainer: LIME explainer for image data.
LimeXAITextExplainer: LIME explainer for text data.
References
Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. KDD 2016.
Example
Using LIME for tabular data:
from xailib.explainers.lime_explainer import LimeXAITabularExplainer
from xailib.models.sklearn_classifier_wrapper import sklearn_classifier_wrapper

# trained_model: a fitted scikit-learn classifier; df: a pandas DataFrame
# containing the 'target' column; instance: a single record to explain.
bb = sklearn_classifier_wrapper(trained_model)
explainer = LimeXAITabularExplainer(bb)
explainer.fit(df, 'target', config={'discretize_continuous': True})
explanation = explainer.explain(instance)
explanation.plot_features_importance()
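LIME itself exposes further neighborhood and surrogate options. Assuming the config dictionary is forwarded to the underlying lime.lime_tabular.LimeTabularExplainer (an assumption, not confirmed by this page), a richer configuration might look like this; the keys shown are standard lime arguments.

# Keys mirror lime.lime_tabular.LimeTabularExplainer arguments;
# whether xailib forwards every entry is an assumption.
config = {
    'discretize_continuous': True,      # bin numeric features (quartiles by default)
    'kernel_width': 3.0,                # width of the exponential proximity kernel
    'feature_selection': 'lasso_path',  # how lime selects the reported features
}
explainer.fit(df, 'target', config=config)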
See also
lime: The underlying LIME library.
xailib.explainers.shap_explainer_tab.ShapXAITabularExplainer: Alternative explanation method.
Classes

LimeXAIImageExplainer        LIME explainer for image data.
LimeXAITabularExplainer      LIME explainer for tabular data.
LimeXAITabularExplanation    Explanation class for LIME tabular explanations.
LimeXAITextExplainer         LIME explainer for text data.