xailib.xailib_transparent_by_design
Transparent-by-design model classes for XAI-Lib.
This module provides base classes for inherently interpretable models that are transparent by design. Unlike post-hoc explanation methods, these models provide built-in interpretability without requiring additional explanation techniques.
Examples of transparent-by-design models include:
- Neural Additive Models (NAM)
- Generalized Additive Models (GAM)
- Decision trees and rule-based models
- Linear models with interpretable features
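To make the additive families above concrete, here is a minimal sketch (not the XAI-Lib API; all names are illustrative) of a GAM-style model whose prediction is a sum of independent per-feature shape functions. Because each feature's contribution can be read off directly, the model is interpretable without any post-hoc explainer:

```python
import numpy as np


class TinyAdditiveModel:
    """Illustrative GAM-style model: one quadratic shape function per
    feature; the prediction is the intercept plus the sum of the
    per-feature contributions."""

    def fit(self, X, y):
        X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=float)
        self.intercept_ = y.mean()
        residual = y - self.intercept_
        # Least-squares fit of a degree-2 polynomial to each feature.
        self.coefs_ = [
            np.polyfit(X[:, j], residual, deg=2) for j in range(X.shape[1])
        ]
        return self

    def contributions(self, x):
        # One interpretable number per feature: f_j(x_j).
        return np.array([np.polyval(c, xj) for c, xj in zip(self.coefs_, x)])

    def predict(self, X):
        return self.intercept_ + np.array(
            [self.contributions(x).sum() for x in np.asarray(X, dtype=float)]
        )
```

The explanation for a single prediction is simply the vector returned by `contributions`, which is the additive structure NAMs generalize by replacing each polynomial with a small neural network.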
Classes:
- Explainer: Base class for transparent models, extended with predict methods.
- Explanation: Base class for transparent model explanations.
Example
Using a transparent-by-design model:
from xailib.explainers.nam_explainer_tab import NAMExplainer
# Create and fit the transparent model
explainer = NAMExplainer()
explainer.fit(X_train, y_train)
# Get predictions with built-in explanations
prediction = explainer.predict(X_test)
explanation = explainer.explain(X_test[0])
Note
Models in this module can both make predictions AND provide explanations, unlike post-hoc explainers that only explain existing black-box models.
Classes
Explainer: Abstract base class for transparent-by-design models.
Explanation: Abstract base class for transparent model explanations.
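As a rough sketch of how these two abstract base classes might be shaped (the actual XAI-Lib signatures may differ; this is an assumption, not the library's code), note that `Explainer` must expose both prediction and explanation methods, which is exactly what distinguishes it from a post-hoc explainer:

```python
from abc import ABC, abstractmethod


class Explanation(ABC):
    """Hypothetical base class for transparent model explanations."""

    @abstractmethod
    def as_dict(self):
        """Return the explanation as a plain mapping."""


class Explainer(ABC):
    """Hypothetical base class for transparent-by-design models.

    Unlike post-hoc explainers, subclasses must implement fit and
    predict as well as explain."""

    @abstractmethod
    def fit(self, X, y):
        """Train the transparent model."""

    @abstractmethod
    def predict(self, X):
        """Return predictions for a batch of instances."""

    @abstractmethod
    def explain(self, x):
        """Return an Explanation for a single instance."""
```

A concrete subclass such as a NAM or decision-tree explainer would then fill in all three methods, so prediction and explanation always come from the same fitted model.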