fairxai.explain.adapter package
Submodules
fairxai.explain.adapter.generic_explainer_adapter module
- class fairxai.explain.adapter.generic_explainer_adapter.GenericExplainerAdapter(model, dataset)[source]
Bases: ABC
Abstract base class for implementing generic explainer adapters.
This class provides a standardized structure for implementing explanation strategies for machine learning models. It defines abstract methods for generating instance-specific (local) explanations and global explanations of a model, along with methods for determining compatibility with specific dataset and model types and for building a generic explanation structure. Subclasses implement their own explanation logic on top of this framework.
Initializes the instance with the given machine learning model and dataset.
- Attributes:
model (Any) – The machine learning model used for explanation.
dataset (Any) – The dataset associated with the explanations.
- Parameters:
model – The machine learning model to be explained.
dataset – The dataset on which the model operates.
- GLOBAL_EXPLANATION = 'global'
- LOCAL_EXPLANATION = 'local'
- WILDCARD = '*'
- abstract explain_global() → List[GenericExplanation][source]
An abstract method to provide global interpretation or explanation for a model’s predictions. This method is part of an interpretability framework, ensuring that all implementing classes define their own logic for global explanation.
- Raises:
NotImplementedError – If the subclass does not implement this method.
- abstract explain_instance(instance, params: dict | None = None) → List[GenericExplanation][source]
Abstract method to explain a specific instance of data. Subclasses must override this method to explain or provide details about a specific instance.
- Parameters:
instance – The instance of data to be explained. Its specifics depend on the implementing subclass.
params – Optional dictionary of explainer-specific parameters.
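Since both abstract methods must be overridden, a minimal subclass can be sketched as below. `GenericExplanation` is defined elsewhere in fairxai, so a stand-in dataclass is used here; the `MeanDiffExplainer` subclass and its mean-deviation attribution scheme are purely illustrative assumptions, not part of the library.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Any, List, Optional


# Stand-in for fairxai's GenericExplanation type (assumed structure).
@dataclass
class GenericExplanation:
    scope: str            # 'local' or 'global'
    feature_weights: dict  # feature name -> attribution weight


class GenericExplainerAdapter(ABC):
    """Minimal mirror of the documented base class, for illustration."""
    GLOBAL_EXPLANATION = 'global'
    LOCAL_EXPLANATION = 'local'
    WILDCARD = '*'
    explainer_name = 'generic'
    supported_datasets: List[str] = []
    supported_models: List[str] = []

    def __init__(self, model: Any, dataset: Any):
        self.model = model
        self.dataset = dataset

    @abstractmethod
    def explain_instance(self, instance,
                         params: Optional[dict] = None) -> List[GenericExplanation]:
        ...

    @abstractmethod
    def explain_global(self) -> List[GenericExplanation]:
        ...


class MeanDiffExplainer(GenericExplainerAdapter):
    """Hypothetical adapter: attributes to each feature its deviation
    from the dataset mean. The dataset is a list of feature dicts."""
    explainer_name = 'mean_diff'
    supported_datasets = ['tabular']
    supported_models = ['*']

    def explain_instance(self, instance, params=None):
        means = {k: sum(row[k] for row in self.dataset) / len(self.dataset)
                 for k in instance}
        weights = {k: instance[k] - means[k] for k in instance}
        return [GenericExplanation(self.LOCAL_EXPLANATION, weights)]

    def explain_global(self):
        keys = self.dataset[0].keys()
        means = {k: sum(row[k] for row in self.dataset) / len(self.dataset)
                 for k in keys}
        return [GenericExplanation(self.GLOBAL_EXPLANATION, means)]
```

Instantiating `GenericExplainerAdapter` directly raises `TypeError`; only subclasses that implement both abstract methods can be constructed.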
- explainer_name = 'generic'
- classmethod is_compatible(dataset_type: str, model_type: str) → bool[source]
Checks compatibility between the dataset type and model type.
Determines whether the provided dataset type and model type are supported by the current implementation of the class, based on internal compatibility checks for both the dataset and model types.
- Parameters:
dataset_type – The type of the dataset to be analyzed for compatibility.
model_type – The type of the model to be analyzed for compatibility.
- Returns:
A boolean value indicating whether the provided dataset type and model type are compatible.
- supported_datasets = []
- supported_models = []
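The compatibility check evaluates the requested dataset and model types against the class's supported_datasets and supported_models lists, with WILDCARD ('*') matching any type. One plausible implementation of that rule, written as a standalone helper (the actual fairxai logic may differ), is:

```python
WILDCARD = '*'


def is_compatible(supported_datasets, supported_models,
                  dataset_type: str, model_type: str) -> bool:
    """Return True when both the dataset type and the model type are
    either listed explicitly or covered by the wildcard entry."""
    def matches(supported, candidate):
        return WILDCARD in supported or candidate in supported

    return (matches(supported_datasets, dataset_type)
            and matches(supported_models, model_type))
```

With this rule, a subclass declaring `supported_models = ['*']` accepts any model type while still restricting the dataset types it handles.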