xailib package
Subpackages
- xailib.data_loaders package
- xailib.explainers package
- Submodules
- xailib.explainers.abele_explainer module
- xailib.explainers.gradcam_explainer module
- xailib.explainers.intgrad_explainer module
- xailib.explainers.lasts_explainer module
- xailib.explainers.lime_explainer module
  - LimeXAIImageExplainer
  - LimeXAITabularExplainer
  - LimeXAITabularExplanation
    - LimeXAITabularExplanation.exp
    - LimeXAITabularExplanation.getCounterExemplars()
    - LimeXAITabularExplanation.getCounterfactualRules()
    - LimeXAITabularExplanation.getExemplars()
    - LimeXAITabularExplanation.getFeaturesImportance()
    - LimeXAITabularExplanation.getRules()
    - LimeXAITabularExplanation.plot_features_importance()
  - LimeXAITextExplainer
- xailib.explainers.lore_explainer module
- xailib.explainers.nam_explainer_tab module
- xailib.explainers.rise_explainer module
- xailib.explainers.shap_explainer_tab module
- Module contents
- xailib.metrics package
- xailib.models package
Submodules
xailib.xailib_base module
Base classes for XAI-Lib explainability framework.
This module defines the abstract base classes that serve as the foundation for all explainers and explanations in the XAI-Lib library. These classes provide a unified interface for implementing various explanation methods across different data types (tabular, image, text, time series).
- Classes:
Explainer: Abstract base class for all explainer implementations.
Explanation: Abstract base class for all explanation representations.
Example
Creating a custom explainer:
from xailib.xailib_base import Explainer, Explanation
class MyExplanation(Explanation):
    def getFeaturesImportance(self):
        return self.feature_weights

class MyExplainer(Explainer):
    def fit(self, X, y):
        # Train the explainer
        pass

    def explain(self, x):
        # Generate explanation for instance x
        return MyExplanation()
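A minimal usage sketch of the custom classes defined above (X, y, and x stand for data you supply; it assumes the subclass sets feature_weights on the explanation before returning it):
explainer = MyExplainer()
explainer.fit(X, y)                              # prepare the explainer on training data
explanation = explainer.explain(x)               # explain a single instance
weights = explanation.getFeaturesImportance()    # assumes feature_weights was populated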
- class xailib.xailib_base.Explainer[source]
Bases: ABC
Abstract base class for all explainer implementations.
This class defines the interface that all explainers must implement. An explainer is responsible for generating explanations for predictions made by black-box machine learning models.
- The typical workflow involves calling fit() on the training data and then explain() on individual instances.
- Attributes: None defined at the base level. Subclasses should define their own.
See also
Explanation: The corresponding base class for explanations.
xailib.xailib_tabular.TabularExplainer: Explainer for tabular data.
xailib.xailib_image.ImageExplainer: Explainer for image data.
- abstract explain(x)[source]
Generate an explanation for a single instance.
This method creates an explanation for why the black-box model made a specific prediction for the given instance.
- Parameters:
x – The instance to explain. The format depends on the data type:
  - For tabular data: 1D numpy array or pandas Series
  - For image data: numpy array representing an image
  - For text data: string or text document
- Returns:
An Explanation object containing the explanation details.
- Return type:
Explanation
- Raises:
NotImplementedError – If the subclass does not implement this method.
- abstract fit(X, y)[source]
Fit the explainer to the training data.
This method prepares the explainer by learning from the training data. The specific behavior depends on the explanation method being used.
- Parameters:
X – Training features. The format depends on the data type:
  - For tabular data: pandas DataFrame or numpy array of shape (n_samples, n_features)
  - For image data: numpy array of images
  - For text data: list of strings or text documents
y – Training labels or target values, as a numpy array of shape (n_samples,).
- Returns:
None. The explainer is fitted in-place.
- Raises:
NotImplementedError – If the subclass does not implement this method.
- class xailib.xailib_base.Explanation[source]
Bases: ABC
Abstract base class for all explanation representations.
This class defines the interface for accessing different aspects of an explanation. Explanations can provide various types of information including feature importance, exemplars, rules, and counterfactuals.
Different explanation methods may only support a subset of these information types. Methods that are not supported by a particular explanation type should return None.
- Attributes: None defined at the base level. Subclasses should define their own.
See also
Explainer: The corresponding base class for explainers.
xailib.xailib_tabular.TabularExplanation: Explanation for tabular data.
xailib.xailib_image.ImageExplanation: Explanation for image data.
- abstract getCounterExemplars()[source]
Get counter-exemplar instances from the explanation.
Counter-exemplars are instances that are similar to the explained instance but received a different prediction.
- Returns:
Counter-exemplar instances, or None if not available for this explanation type. The format depends on the specific explanation method.
- abstract getCounterfactualRules()[source]
Get counterfactual rules from the explanation.
Counterfactual rules describe what minimal changes to the input would result in a different prediction.
- Returns:
Counterfactual rules as a list or dictionary, or None if not available. Each counterfactual rule describes conditions that would lead to a different outcome.
Example
>>> cf_rules = explanation.getCounterfactualRules()
>>> for rule in cf_rules:
...     print(f"To get {rule['cons']}: {rule['premise']}")
- abstract getExemplars()[source]
Get exemplar instances from the explanation.
Exemplars are instances from the training data that are similar to the explained instance and received the same prediction.
- Returns:
Exemplar instances, or None if not available for this explanation type. The format depends on the specific explanation method.
- abstract getFeaturesImportance()[source]
Get the feature importance values from the explanation.
Feature importance indicates how much each feature contributed to the model's prediction for the explained instance.
- Returns:
List of tuples: [(feature_name, importance_value), …]
numpy array: Array of importance values
None: If feature importance is not available for this explanation type
- Return type:
Feature importance values. The format depends on the explanation method
Example
>>> explanation = explainer.explain(instance)
>>> importance = explanation.getFeaturesImportance()
>>> for feature, value in importance:
...     print(f"{feature}: {value:.4f}")
- abstract getRules()[source]
Get the decision rules from the explanation.
Rules are logical conditions that describe why the model made its prediction for the explained instance.
- Returns:
Decision rules as a list or dictionary, or None if not available. For rule-based explanations like LORE, this returns the rule that led to the prediction.
Example
>>> rules = explanation.getRules()
>>> print(rules)
{'premise': [{'att': 'age', 'op': '>', 'thr': 30}], 'cons': 'approved'}
xailib.xailib_image module
Image data explainability classes for XAI-Lib.
This module provides base classes for explaining predictions on image data.
It extends the base Explainer and
Explanation classes with image-specific
functionality.
Image explanations typically highlight which regions of an image contributed most to a model's prediction, using techniques such as:
Saliency maps and heatmaps
Superpixel importance
Activation visualizations
- Classes:
ImageExplainer: Base class for image data explainers.
ImageExplanation: Base class for image data explanations.
Example
Using GradCAM for image explanation:
from xailib.explainers.gradcam_explainer import GradCAMImageExplainer
from xailib.models.pytorch_classifier_wrapper import pytorch_classifier_wrapper
# Wrap your model
bb = pytorch_classifier_wrapper(your_pytorch_model)
# Create and fit explainer
explainer = GradCAMImageExplainer(bb)
explainer.fit(target_layers=[model.layer4])
# Generate explanation
heatmap = explainer.explain(image, class_index)
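As a follow-up sketch (not part of the documented API), the returned heatmap can be inspected by overlaying it on the input, assuming the heatmap is a 2D numpy array with the same spatial dimensions as the image and that image is in a matplotlib-displayable H x W x C layout:
import matplotlib.pyplot as plt
# Overlay the saliency heatmap on the original image
plt.imshow(image)
plt.imshow(heatmap, cmap='jet', alpha=0.5)
plt.axis('off')
plt.show()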
See also
xailib.explainers.gradcam_explainer: GradCAM implementation for image data.
xailib.explainers.lime_explainer: LIME implementation for image data.
xailib.explainers.rise_explainer: RISE implementation for image data.
xailib.explainers.intgrad_explainer: Integrated Gradients for image data.
- class xailib.xailib_image.ImageExplainer[source]
Bases: Explainer
Abstract base class for image data explainers.
This class extends the base Explainer class with functionality specific to image data. Image explainers work with numpy arrays representing images and provide visual explanations (typically heatmaps or saliency maps) for model predictions.
Subclasses implement specific explanation methods such as GradCAM, LIME, RISE, or Integrated Gradients for image data.
- Attributes: Defined by subclasses. Common attributes include the black-box model wrapper and target layers for gradient-based methods.
See also
xailib.explainers.gradcam_explainer.GradCAMImageExplainer: GradCAM implementation.
xailib.explainers.lime_explainer.LimeXAIImageExplainer: LIME implementation.
xailib.explainers.rise_explainer.RiseXAIImageExplainer: RISE implementation.
- abstract explain(b, x)[source]
Generate an explanation for an image instance.
- Parameters:
b – Black-box model or prediction function.
x – Image to explain as a numpy array.
- Returns:
Explanation output, typically a heatmap or saliency map as a numpy array with the same spatial dimensions as the input.
- abstract fit(X, y)[source]
Fit the explainer to the image training data.
For most image explainers, this method sets up the necessary components for generating explanations (e.g., target layers for GradCAM, mask generation for RISE).
- Parameters:
X – Training images or configuration parameters. The exact format depends on the specific method.
y – Training labels or additional configuration.
- Returns:
None. The explainer is fitted in-place.
- class xailib.xailib_image.ImageExplanation[source]
Bases: Explanation
Abstract base class for image data explanations.
This class extends the base Explanation class with functionality specific to image data. Image explanations typically contain saliency maps, heatmaps, or segmentation-based importance values.
Note
Most image explainers return the explanation directly (as a numpy array) rather than wrapping it in an ImageExplanation object. This class is provided for consistency and future extensions.
- Attributes: Defined by subclasses. Common attributes include the saliency map and segment importance values.
- abstract getCounterExemplars()[source]
Get counter-exemplar images with different predictions.
- Returns:
Counter-exemplar images, or None if not supported.
- abstract getCounterfactualRules()[source]
Get counterfactual rules for the image prediction.
- Returns:
Counterfactual rules, or None if not supported.
- abstract getExemplars()[source]
Get exemplar images similar to the explained image.
- Returns:
Exemplar images, or None if not supported.
xailib.xailib_tabular module
Tabular data explainability classes for XAI-Lib.
This module provides base classes for explaining predictions on tabular
(structured) data. It extends the base Explainer
and Explanation classes with tabular-specific
functionality, including interactive feature importance visualization.
- Tabular data explanations are commonly used for:
Understanding feature contributions to predictions
Generating human-readable decision rules
Identifying similar and contrasting examples
Creating counterfactual explanations
- Classes:
TabularExplanation: Base class for tabular data explanations.
TabularExplainer: Base class for tabular data explainers.
Example
Using LIME for tabular explanation:
from xailib.explainers.lime_explainer import LimeXAITabularExplainer
from xailib.models.sklearn_classifier_wrapper import sklearn_classifier_wrapper
# Wrap your model
bb = sklearn_classifier_wrapper(your_sklearn_model)
# Create and fit explainer
explainer = LimeXAITabularExplainer(bb)
explainer.fit(df, 'target_column', config={})
# Generate explanation
explanation = explainer.explain(instance)
explanation.plot_features_importance()
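A hedged follow-up: the explanation object also exposes programmatic accessors (see LimeXAITabularExplanation above); the list-of-tuples format shown here is an assumption that depends on the fitted explainer:
# Inspect the explanation programmatically
importance = explanation.getFeaturesImportance()
for feature, weight in importance:
    print(f"{feature}: {weight:+.4f}")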
See also
xailib.explainers.lime_explainer: LIME implementation for tabular data.
xailib.explainers.shap_explainer_tab: SHAP implementation for tabular data.
xailib.explainers.lore_explainer: LORE implementation for tabular data.
- class xailib.xailib_tabular.TabularExplainer[source]
Bases: Explainer
Abstract base class for tabular data explainers.
This class extends the base Explainer class with functionality specific to tabular (structured) data. Tabular explainers work with pandas DataFrames and provide explanations for predictions on structured data.
Subclasses implement specific explanation methods such as LIME, SHAP, or LORE for tabular data.
- Attributes: Defined by subclasses. Common attributes include the black-box model wrapper and configuration parameters.
See also
xailib.explainers.lime_explainer.LimeXAITabularExplainer: LIME implementation.
xailib.explainers.shap_explainer_tab.ShapXAITabularExplainer: SHAP implementation.
xailib.explainers.lore_explainer.LoreTabularExplainer: LORE implementation.
- abstract explain(b, x) → TabularExplanation[source]
Generate an explanation for a tabular data instance.
- Parameters:
b – Black-box model or prediction function (depends on implementation).
x – Instance to explain as a numpy array or pandas Series.
- Returns:
An explanation object containing feature importance, rules, or other explanation information.
- Return type:
TabularExplanation
- abstract fit(X, y, config)[source]
Fit the explainer to the tabular training data.
- Parameters:
X (pd.DataFrame) – Training data as a pandas DataFrame.
y – Target column name (str) or target values.
config (dict) –
Configuration dictionary with method-specific parameters. Common keys include:
  - 'feature_selection': Feature selection method
  - 'discretize_continuous': Whether to discretize continuous features
  - 'sample_around_instance': Sampling strategy
  - Additional method-specific parameters
- Returns:
None. The explainer is fitted in-place.
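A hedged sketch of a config dictionary using the keys listed above; the values are illustrative assumptions, not documented defaults:
# Illustrative configuration for a tabular explainer (values are assumptions)
config = {
    'feature_selection': 'auto',       # how features are selected for the surrogate
    'discretize_continuous': True,     # bin continuous features before explaining
    'sample_around_instance': True,    # sample the neighborhood around the instance
}
explainer.fit(df, 'target_column', config=config)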
- class xailib.xailib_tabular.TabularExplanation[source]
Bases: Explanation
Abstract base class for tabular data explanations.
This class extends the base Explanation class with functionality specific to tabular (structured) data, including interactive visualization of feature importance using Altair charts.
Subclasses should implement the abstract methods to provide access to different types of explanation information (feature importance, rules, exemplars, etc.).
- Attributes: Defined by subclasses. Common attributes include the raw explanation object from the underlying library.
See also
xailib.explainers.lime_explainer.LimeXAITabularExplanation: LIME explanation.
xailib.explainers.shap_explainer_tab.ShapXAITabularExplanation: SHAP explanation.
xailib.explainers.lore_explainer.LoreTabularExplanation: LORE explanation.
- abstract getCounterExemplars()[source]
Get counter-exemplar instances with different predictions.
- Returns:
Counter-exemplar instances, or None if not supported by this explanation method.
- abstract getCounterfactualRules()[source]
Get counterfactual rules for alternative outcomes.
- Returns:
Counterfactual rules describing how to change the prediction, or None if not supported by this explanation method.
- abstract getExemplars()[source]
Get exemplar instances similar to the explained instance.
- Returns:
Exemplar instances with the same prediction, or None if not supported by this explanation method.
- abstract getFeaturesImportance()[source]
Get feature importance values for the explained instance.
- Returns:
Feature importance as a list of tuples, numpy array, or pandas DataFrame. The exact format depends on the explanation method. Returns None if feature importance is not available.
- abstract getRules()[source]
Get decision rules explaining the prediction.
- Returns:
Decision rules as a dictionary or list, or None if not supported by this explanation method.
- plot_features_importance_from(dataToPlot: DataFrame, fontDimension=10)[source]
Create an interactive feature importance visualization using Altair.
This method generates an interactive bar chart showing feature importance values with a slider to filter features by importance threshold. Features are color-coded by their importance value (positive vs negative).
- Parameters:
dataToPlot (pd.DataFrame) –
DataFrame containing feature importance data with columns:
  - 'name': Feature names (string)
  - 'value': Importance values (float)
fontDimension (int, optional) – Base font size for the chart. Defaults to 10.
- Returns:
None. Displays the interactive chart using IPython display.
Note
This method is intended to be called within a Jupyter notebook environment for proper rendering of the interactive chart.
Example
>>> import pandas as pd
>>> data = pd.DataFrame({
...     'name': ['feature1', 'feature2', 'feature3'],
...     'value': [0.5, -0.3, 0.1]
... })
>>> explanation.plot_features_importance_from(data, fontDimension=12)
xailib.xailib_text module
Text data explainability classes for XAI-Lib.
This module provides base classes for explaining predictions on text data.
It extends the base Explainer and
Explanation classes with text-specific
functionality.
Text explanations typically highlight which words or phrases contributed most to a model's prediction. Common use cases include:
Sentiment analysis explanation
Text classification explanation
Named entity recognition explanation
- Classes:
TextExplainer: Base class for text data explainers.
TextExplanation: Base class for text data explanations.
Example
Using LIME for text explanation:
from xailib.explainers.lime_explainer import LimeXAITextExplainer
from xailib.models.sklearn_classifier_wrapper import sklearn_classifier_wrapper
# Wrap your model
bb = sklearn_classifier_wrapper(your_text_classifier)
# Create and fit explainer
explainer = LimeXAITextExplainer(bb)
explainer.fit(class_names=['negative', 'positive'])
# Generate explanation
explanation = explainer.explain("This movie was great!")
See also
xailib.explainers.lime_explainer: LIME implementation for text data.
Note
Text explanation support is currently being expanded. Additional methods will be added in future releases.
- class xailib.xailib_text.TextExplainer[source]
Bases: Explainer
Abstract base class for text data explainers.
This class extends the base Explainer class with functionality specific to text data. Text explainers work with strings or text documents and provide word/phrase-level importance scores for model predictions.
Subclasses implement specific explanation methods such as LIME for text classification models.
- Attributes: Defined by subclasses. Common attributes include the black-box model wrapper and text preprocessing parameters.
See also
xailib.explainers.lime_explainer.LimeXAITextExplainer: LIME implementation.
- abstract explain(b, x)[source]
Generate an explanation for a text instance.
- Parameters:
b – Black-box model or prediction function.
x – Text to explain as a string.
- Returns:
Explanation with word/phrase importance scores.
- abstract fit(X, y)[source]
Fit the explainer for text data.
For most text explainers, this method sets up the necessary components for generating explanations (e.g., tokenizer, class names).
- Parameters:
X – Training texts or configuration parameters.
y – Training labels or class names.
- Returns:
None. The explainer is fitted in-place.
- class xailib.xailib_text.TextExplanation[source]
Bases: Explanation
Abstract base class for text data explanations.
This class extends the base Explanation class with functionality specific to text data. Text explanations typically contain word or phrase importance scores.
- Attributes: Defined by subclasses. Common attributes include word importance scores and highlighted text segments.
- abstract getCounterExemplars()[source]
Get counter-exemplar texts with different predictions.
- Returns:
Counter-exemplar texts, or None if not supported.
- abstract getCounterfactualRules()[source]
Get counterfactual rules for the text prediction.
- Returns:
Counterfactual rules, or None if not supported.
- abstract getExemplars()[source]
Get exemplar texts similar to the explained text.
- Returns:
Exemplar texts, or None if not supported.
xailib.xailib_transparent_by_design module
Transparent-by-design model classes for XAI-Lib.
This module provides base classes for inherently interpretable models that are transparent by design. Unlike post-hoc explanation methods, these models provide built-in interpretability without requiring additional explanation techniques.
- Examples of transparent-by-design models include:
Neural Additive Models (NAM)
Generalized Additive Models (GAM)
Decision trees and rule-based models
Linear models with interpretable features
- Classes:
Explainer: Base class for transparent models (extended with predict methods).
Explanation: Base class for transparent model explanations.
Example
Using a transparent-by-design model:
from xailib.explainers.nam_explainer_tab import NAMExplainer
# Create and fit the transparent model
explainer = NAMExplainer()
explainer.fit(X_train, y_train)
# Get predictions with built-in explanations
prediction = explainer.predict(X_test)
explanation = explainer.explain(X_test[0])
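As an illustrative sketch only (not part of the library), a minimal transparent-by-design model following the fit/predict/explain workflow described here: a plain logistic regression whose coefficients double as per-feature contributions. The class name and attributes are assumptions:
import numpy as np
from sklearn.linear_model import LogisticRegression

class LinearTransparentModel:
    # Hypothetical transparent model: a linear classifier whose
    # coefficients serve directly as feature contributions.
    def fit(self, X, y):
        self.model = LogisticRegression(max_iter=1000).fit(X, y)
        return self

    def predict(self, X):
        return self.model.predict(X)

    def predict_proba(self, X):
        return self.model.predict_proba(X)

    def explain(self, x):
        # Per-feature contribution for one instance: coefficient * feature value
        # (assumes binary classification, so coef_ has a single row)
        contributions = self.model.coef_[0] * np.asarray(x, dtype=float)
        return list(enumerate(contributions))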
Note
Models in this module can both make predictions AND provide explanations, unlike post-hoc explainers that only explain existing black-box models.
- class xailib.xailib_transparent_by_design.Explainer[source]
Bases: ABC
Abstract base class for transparent-by-design models.
This class extends the standard explainer interface with prediction methods, allowing transparent models to serve as both predictors and explainers. Unlike post-hoc explanation methods, transparent models provide inherent interpretability.
- The workflow for transparent models:
Initialize the model
Call fit() to train the model
Call predict() or predict_proba() for predictions
Call explain() for interpretable explanations
- Attributes: Defined by subclasses. Common attributes include model parameters and learned feature contributions.
See also
xailib.xailib_base.Explainer: Base explainer for post-hoc methods.
xailib.explainers.nam_explainer_tab: NAM implementation.
- abstract explain(x)[source]
Generate an explanation for an instance.
For transparent models, explanations are derived directly from the model's internal structure (e.g., feature contributions).
- Parameters:
x – Instance to explain.
- Returns:
An explanation object with interpretable information.
- Return type:
Explanation
- abstract fit(X, y)[source]
Fit the transparent model to training data.
- Parameters:
X – Training features as a numpy array or pandas DataFrame.
y – Training labels or target values.
- Returns:
None. The model is fitted in-place.
- class xailib.xailib_transparent_by_design.Explanation[source]
Bases: ABC
Abstract base class for transparent model explanations.
This class provides the standard interface for accessing explanation information from transparent-by-design models. Explanations from transparent models are typically more detailed and accurate than post-hoc explanations.
- Attributes: Defined by subclasses. Common attributes include feature contributions and model-specific interpretable components.
- abstract getCounterExemplars()[source]
Get counter-exemplar instances.
- Returns:
Counter-exemplar instances, or None if not supported.
- abstract getCounterfactualRules()[source]
Get counterfactual rules.
- Returns:
Counterfactual rules, or None if not supported.
- abstract getExemplars()[source]
Get exemplar instances.
- Returns:
Exemplar instances, or None if not supported.
xailib.xailib_ts module
Time series data explainability classes for XAI-Lib.
This module provides base classes for explaining predictions on time series data.
It extends the base Explainer and
Explanation classes with time series-specific
functionality.
Time series explanations typically highlight which time steps or temporal patterns contributed most to a model's prediction. Common use cases include:
Anomaly detection explanation
Time series classification explanation
Forecasting explanation
- Classes:
TSExplainer: Base class for time series data explainers.
TSExplanation: Base class for time series data explanations.
Example
Using LASTS for time series explanation:
from xailib.explainers.lasts_explainer import LastsExplainer
from xailib.models.keras_ts_classifier_wrapper import KerasTSClassifierWrapper
# Wrap your model
bb = KerasTSClassifierWrapper(your_ts_model)
# Create and fit explainer
explainer = LastsExplainer(bb)
explainer.fit(X_train, y_train, config)
# Generate explanation
explanation = explainer.explain(time_series)
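A hedged follow-up sketch using the accessors documented below; whether a given LASTS explanation actually populates exemplars is an assumption:
# Retrieve exemplar and counter-exemplar series, if the method provides them
exemplars = explanation.getExemplars()
counter_exemplars = explanation.getCounterExemplars()
if exemplars is not None:
    print(f"Found {len(exemplars)} exemplar series")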
See also
xailib.explainers.lasts_explainer: LASTS implementation for time series.
Note
Time series explanation support is currently being expanded. Additional methods will be added in future releases.
- class xailib.xailib_ts.TSExplainer[source]
Bases: Explainer
Abstract base class for time series data explainers.
This class extends the base Explainer class with functionality specific to time series data. Time series explainers work with sequential data and provide temporal importance explanations for model predictions.
Subclasses implement specific explanation methods such as LASTS for time series classification models.
- Attributes: Defined by subclasses. Common attributes include the black-box model wrapper and temporal configuration parameters.
See also
xailib.explainers.lasts_explainer.LastsExplainer: LASTS implementation.
- abstract explain(b, x) → TSExplanation[source]
Generate an explanation for a time series instance.
- Parameters:
b – Black-box model or prediction function.
x – Time series to explain as a numpy array.
- Returns:
An explanation object containing temporal importance scores and pattern information.
- Return type:
TSExplanation
- abstract fit(X, y, config)[source]
Fit the explainer to the time series training data.
- Parameters:
X – Training time series data, typically as a numpy array of shape (n_samples, n_timesteps) or (n_samples, n_timesteps, n_features).
y – Training labels or target values.
config (dict) – Configuration dictionary with method-specific parameters for the time series explainer.
- Returns:
None. The explainer is fitted in-place.
- class xailib.xailib_ts.TSExplanation[source]
Bases: Explanation
Abstract base class for time series data explanations.
This class extends the base Explanation class with functionality specific to time series data. Time series explanations typically contain temporal importance scores and pattern-based explanations.
- Attributes: Defined by subclasses. Common attributes include temporal importance scores and identified patterns.
- abstract getCounterExemplars()[source]
Get counter-exemplar time series with different predictions.
- Returns:
Counter-exemplar time series, or None if not supported.
- abstract getCounterfactualRules()[source]
Get counterfactual rules for the time series prediction.
- Returns:
Counterfactual rules describing temporal changes that would alter the prediction, or None if not supported.
- abstract getExemplars()[source]
Get exemplar time series similar to the explained series.
- Returns:
Exemplar time series, or None if not supported.
Module contents
XAI-Lib: An Integrated Python Library for Explainable AI.
XAI-Lib provides a unified interface for various explanation methods, making machine learning models more interpretable and transparent. The library simplifies the process of explaining black-box models across different data types.
This project is part of the XAI Project - a European initiative focused on advancing explainable artificial intelligence research and applications.
Main Modules
Core Classes
- xailib.xailib_base – Base classes for XAI-Lib explainability framework.
- xailib.xailib_tabular – Tabular data explainability classes for XAI-Lib.
- xailib.xailib_image – Image data explainability classes for XAI-Lib.
- xailib.xailib_text – Text data explainability classes for XAI-Lib.
- xailib.xailib_ts – Time series data explainability classes for XAI-Lib.
- xailib.xailib_transparent_by_design – Transparent-by-design model classes for XAI-Lib.
Explainers
- xailib.explainers.lime_explainer – LIME (Local Interpretable Model-agnostic Explanations) implementation for XAI-Lib.
Model Wrappers
- Abstract base class for black-box model wrappers.
- Scikit-learn classifier wrapper for XAI-Lib.
- Keras classifier wrapper for XAI-Lib.
Data Loaders
- DataFrame loading and preparation utilities for XAI-Lib.
Metrics
Quick Start
Here's a simple example using LIME for tabular data explanation:
from xailib.explainers.lime_explainer import LimeXAITabularExplainer
from xailib.models.sklearn_classifier_wrapper import sklearn_classifier_wrapper
# Wrap your scikit-learn model
bb = sklearn_classifier_wrapper(your_sklearn_model)
# Create and fit the explainer
explainer = LimeXAITabularExplainer(bb)
explainer.fit(df, 'target_column', config={})
# Generate explanation for an instance
explanation = explainer.explain(instance)
# Visualize feature importance
explanation.plot_features_importance()
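A hedged follow-up: rule-style views can be requested through the base accessors, and methods that do not support them return None (whether LIME provides rules is an assumption to check against the explainer you use):
# Rule-based views of the explanation; unsupported accessors return None
rules = explanation.getRules()
cf_rules = explanation.getCounterfactualRules()
print(rules if rules is not None else "No rules available for this method")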
For more examples, see the examples/ directory in the repository.
License
This project is licensed under the MIT License.
Acknowledgments
This library is developed as part of the XAI Project (https://xai-project.eu/), a European initiative dedicated to advancing explainable artificial intelligence.
- class xailib.Explainer[source]
Bases: ABC
Abstract base class for all explainer implementations.
This class defines the interface that all explainers must implement. An explainer is responsible for generating explanations for predictions made by black-box machine learning models.
- The typical workflow involves calling fit() on the training data and then explain() on individual instances.
- Attributes: None defined at the base level. Subclasses should define their own.
See also
Explanation: The corresponding base class for explanations.
xailib.xailib_tabular.TabularExplainer: Explainer for tabular data.
xailib.xailib_image.ImageExplainer: Explainer for image data.
- abstract explain(x)[source]
Generate an explanation for a single instance.
This method creates an explanation for why the black-box model made a specific prediction for the given instance.
- Parameters:
x – The instance to explain. The format depends on the data type:
  - For tabular data: 1D numpy array or pandas Series
  - For image data: numpy array representing an image
  - For text data: string or text document
- Returns:
An Explanation object containing the explanation details.
- Return type:
Explanation
- Raises:
NotImplementedError – If the subclass does not implement this method.
- abstract fit(X, y)[source]
Fit the explainer to the training data.
This method prepares the explainer by learning from the training data. The specific behavior depends on the explanation method being used.
- Parameters:
X – Training features. The format depends on the data type:
  - For tabular data: pandas DataFrame or numpy array of shape (n_samples, n_features)
  - For image data: numpy array of images
  - For text data: list of strings or text documents
y – Training labels or target values, as a numpy array of shape (n_samples,).
- Returns:
None. The explainer is fitted in-place.
- Raises:
NotImplementedError – If the subclass does not implement this method.
- class xailib.Explanation[source]
Bases: ABC
Abstract base class for all explanation representations.
This class defines the interface for accessing different aspects of an explanation. Explanations can provide various types of information including feature importance, exemplars, rules, and counterfactuals.
Different explanation methods may only support a subset of these information types. Methods that are not supported by a particular explanation type should return None.
- Attributes: None defined at the base level. Subclasses should define their own.
See also
Explainer: The corresponding base class for explainers.
xailib.xailib_tabular.TabularExplanation: Explanation for tabular data.
xailib.xailib_image.ImageExplanation: Explanation for image data.
- abstract getCounterExemplars()[source]
Get counter-exemplar instances from the explanation.
Counter-exemplars are instances that are similar to the explained instance but received a different prediction.
- Returns:
Counter-exemplar instances, or None if not available for this explanation type. The format depends on the specific explanation method.
- abstract getCounterfactualRules()[source]
Get counterfactual rules from the explanation.
Counterfactual rules describe what minimal changes to the input would result in a different prediction.
- Returns:
Counterfactual rules as a list or dictionary, or None if not available. Each counterfactual rule describes conditions that would lead to a different outcome.
Example
>>> cf_rules = explanation.getCounterfactualRules()
>>> for rule in cf_rules:
...     print(f"To get {rule['cons']}: {rule['premise']}")
- abstract getExemplars()[source]
Get exemplar instances from the explanation.
Exemplars are instances from the training data that are similar to the explained instance and received the same prediction.
- Returns:
Exemplar instances, or None if not available for this explanation type. The format depends on the specific explanation method.
- abstract getFeaturesImportance()[source]
Get the feature importance values from the explanation.
Feature importance indicates how much each feature contributed to the model's prediction for the explained instance.
- Returns:
List of tuples: [(feature_name, importance_value), …]
numpy array: Array of importance values
None: If feature importance is not available for this explanation type
- Return type:
Feature importance values. The format depends on the explanation method
Example
>>> explanation = explainer.explain(instance)
>>> importance = explanation.getFeaturesImportance()
>>> for feature, value in importance:
...     print(f"{feature}: {value:.4f}")
- abstract getRules()[source]
Get the decision rules from the explanation.
Rules are logical conditions that describe why the model made its prediction for the explained instance.
- Returns:
Decision rules as a list or dictionary, or None if not available. For rule-based explanations like LORE, this returns the rule that led to the prediction.
Example
>>> rules = explanation.getRules()
>>> print(rules)
{'premise': [{'att': 'age', 'op': '>', 'thr': 30}], 'cons': 'approved'}