fairxai.explain.explaination package

Submodules

fairxai.explain.explaination.counter_example_explanation module

class fairxai.explain.explaination.counter_example_explanation.CounterExampleExplanation(explainer_name: str, counter_examples: List[Dict[str, Any]])[source]

Bases: GenericExplanation

Counter-Example Explanation that supports tabular fields and images.

Each example is a dict. If a value is:
  • PIL.Image.Image -> encoded as base64 PNG

  • numpy.ndarray -> converted to PIL.Image then encoded (if numpy available)

  • bytes -> assumed image bytes and base64-encoded

  • str path -> file read and base64-encoded (if file exists)

  • other primitive -> left as-is

to_dict() returns a JSON-ready wrapper with the serialized examples. visualize() returns the same structure (no printing).
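The value-conversion rules above can be sketched as a minimal standalone helper. This is a hypothetical illustration, not the library's actual implementation; the PIL and numpy branches are omitted to keep it dependency-free.

```python
import base64
import os
from typing import Any


def serialize_value(value: Any) -> Any:
    """Hypothetical sketch of the per-value serialization rules."""
    if isinstance(value, bytes):
        # Assumed to be raw image bytes: base64-encode them.
        return base64.b64encode(value).decode("ascii")
    if isinstance(value, str) and os.path.isfile(value):
        # A string pointing at an existing file: read it and encode it.
        with open(value, "rb") as fh:
            return base64.b64encode(fh.read()).decode("ascii")
    # Other primitives (str, int, float, bool, None) are left as-is.
    return value


example = {"age": 42, "thumbnail": b"abc"}
serialized = {k: serialize_value(v) for k, v in example.items()}
```

Non-image fields pass through unchanged, so the resulting dict stays JSON-ready.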

Initialize a counter-example explanation.

Parameters:
  • explainer_name (str) – Name of the explainer that produced the counter-examples.

  • counter_examples (List[Dict[str, Any]]) – Counter-example dicts whose values are serialized according to the rules above.

GLOBAL_EXPLANATION = 'global'
LOCAL_EXPLANATION = 'local'
to_dict() → Dict[str, Any][source]

Build the JSON-ready representation.

visualize() → Dict[str, Any][source]

Return the same dict that to_dict() produces (no printing).

fairxai.explain.explaination.counterfactual_rule_explanation module

class fairxai.explain.explaination.counterfactual_rule_explanation.CounterfactualRuleExplanation(explainer_name: str, counterfactual_rules: list[dict], explanation_type=None)[source]

Bases: GenericExplanation

Counterfactual Rule Explanation. Shows minimal feature changes required to flip the prediction.
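The documentation does not spell out the schema of each rule dict, so the shape below is purely illustrative: field names such as `changes`, `from`, and `to` are assumptions, not part of the library's API.

```python
# Hypothetical shape for one counterfactual rule entry; the actual schema
# is defined by whichever explainer produces the rules.
counterfactual_rules = [
    {
        "original_prediction": "rejected",
        "counterfactual_prediction": "approved",
        # Minimal feature changes needed to flip the prediction:
        "changes": {"income": {"from": 32000, "to": 51000}},
    }
]

# A CounterfactualRuleExplanation would then be built from such a list:
# explanation = CounterfactualRuleExplanation("LORE", counterfactual_rules)
```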

Initialize a counterfactual rule explanation.

Parameters:
  • explainer_name (str) – Name of the explainer that produced the rules.

  • counterfactual_rules (list[dict]) – Structured rules describing the minimal feature changes that flip the prediction.

  • explanation_type (str, optional) – One of local/global.

GLOBAL_EXPLANATION = 'global'
LOCAL_EXPLANATION = 'local'
to_dict() → dict[source]

Serializable representation.

visualize()[source]

Return a structure ready for Streamlit.

fairxai.explain.explaination.example_explanation module

class fairxai.explain.explaination.example_explanation.ExampleExplanation(explainer_name: str, examples: list[dict])[source]

Bases: GenericExplanation

Handles example-based explanations for both tabular and image inputs.
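A sketch of what an examples list might look like, mixing tabular fields and an image path. The field names and values are illustrative; the class accepts any list of dicts.

```python
# Hypothetical examples list: one tabular record and one image-backed record.
examples = [
    {"age": 35, "income": 48000, "label": "high spender"},
    {"photo": "/tmp/customer.png", "label": "premium customer"},
]

# An ExampleExplanation would be built from such a list:
# explanation = ExampleExplanation("prototype-selector", examples)
```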

Initialize an example-based explanation.

Parameters:
  • explainer_name (str) – Name of the explainer that produced the examples.

  • examples (list[dict]) – Example dicts containing tabular fields and/or images.

GLOBAL_EXPLANATION = 'global'
LOCAL_EXPLANATION = 'local'
to_dict() → Dict[str, Any][source]

Return a fully serializable representation of the explanation.

Every explanation (rule-based, counterfactual, feature-importance, and so on) produces a JSON-ready dictionary with a consistent schema.

visualize() → Dict[str, Any][source]

Subclasses must override this and return a Streamlit-friendly structure.

visualize() MUST:
  • NOT print anything

  • NOT generate plots directly

  • return a dict/list/string that Streamlit can render

This ensures the visualization responsibility stays with the frontend, not the explanation object.

fairxai.explain.explaination.feature_importance_explanation module

class fairxai.explain.explaination.feature_importance_explanation.FeatureImportanceExplanation(explainer_name: str, data: Dict[str, float], visualization: Dict[str, Any] | None = None, global_scope: bool = False)[source]

Bases: GenericExplanation

Serializable feature importance explanation with optional visualization.

Parameters:
  • explainer_name – name of the explainer that produced this explanation

  • data – mapping from feature identifier to numeric importance (e.g. "i,j" -> float)

  • visualization – optional dict containing visual assets (base64 PNGs, shape, metadata)

  • global_scope – whether the explanation is global
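As a sketch of how the data mapping can be consumed, the snippet below ranks features by absolute importance. The "i,j"-style keys and the scores are illustrative values, not output from the library.

```python
# Hypothetical importance map using "i,j" feature identifiers, as described
# above; values are signed contribution scores.
data = {"0,0": 0.12, "0,1": -0.30, "1,0": 0.05}

# Rank features by absolute importance, a common way to consume this map.
ranked = sorted(data.items(), key=lambda kv: abs(kv[1]), reverse=True)
top_feature = ranked[0][0]
```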

Initialize a feature importance explanation with the parameters listed above.

GLOBAL_EXPLANATION = 'global'
LOCAL_EXPLANATION = 'local'
to_dict() → Dict[str, Any][source]

Return a JSON-serializable structure including optional visualization.

visualize() → Dict[str, Any][source]

Return data for UI libraries (e.g. Streamlit), including the visualization assets if present.

fairxai.explain.explaination.generic_explanation module

class fairxai.explain.explaination.generic_explanation.GenericExplanation(explainer_name: str, explanation_type: str, data: dict)[source]

Bases: object

Base class for all explanations in the framework.

It provides a consistent structure for:
  • storing explanation metadata,

  • storing explanation payloads,

  • returning serializable dictionaries,

  • providing Streamlit-friendly visualization output.

Subclasses must override visualize() to return a structure that can be rendered by Streamlit (NOT printed) and may optionally extend to_dict().

Initialize a generic explanation object.

Parameters:
  • explainer_name (str) – Name of the explainer (e.g., SHAP, LIME).

  • explanation_type (str) – One of local/global.

  • data (dict) – Explanation payload (already structured and serializable).

GLOBAL_EXPLANATION = 'global'
LOCAL_EXPLANATION = 'local'
to_dict() → dict[source]

Return a fully serializable representation of the explanation.

Every explanation (rule-based, counterfactual, feature-importance, and so on) produces a JSON-ready dictionary with a consistent schema.

visualize()[source]

Subclasses must override this and return a Streamlit-friendly structure.

visualize() MUST:
  • NOT print anything

  • NOT generate plots directly

  • return a dict/list/string that Streamlit can render

This ensures the visualization responsibility stays with the frontend, not the explanation object.
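The subclassing contract above can be illustrated with a self-contained stand-in class. This is a sketch, not the library's code, and the schema field names (explainer, type, data) are assumptions about what to_dict() might return.

```python
from typing import Any, Dict


class GenericExplanationSketch:
    """Stand-in mirroring the GenericExplanation contract described above."""

    GLOBAL_EXPLANATION = "global"
    LOCAL_EXPLANATION = "local"

    def __init__(self, explainer_name: str, explanation_type: str, data: dict):
        self.explainer_name = explainer_name
        self.explanation_type = explanation_type
        self.data = data

    def to_dict(self) -> Dict[str, Any]:
        # JSON-ready wrapper with a consistent schema (field names assumed).
        return {
            "explainer": self.explainer_name,
            "type": self.explanation_type,
            "data": self.data,
        }

    def visualize(self):
        raise NotImplementedError("Subclasses must override visualize().")


class RuleSketch(GenericExplanationSketch):
    def visualize(self) -> Dict[str, Any]:
        # Returns a Streamlit-renderable structure; no printing, no plotting.
        return self.to_dict()


exp = RuleSketch("LORE", GenericExplanationSketch.LOCAL_EXPLANATION, {"rules": []})
```

Keeping visualize() side-effect free is what lets the frontend decide how to render the structure.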

fairxai.explain.explaination.rule_based_explanation module

class fairxai.explain.explaination.rule_based_explanation.RuleBasedExplanation(explainer_name: str, rules: list[dict], explanation_type=None)[source]

Bases: GenericExplanation

Rule-Based Explanation.

Handles symbolic "if-then" rules. Rules should be passed in structured form, and the explanation can be serialized and consumed by Streamlit.

Example data:
  • "IF income > 50000 AND age < 40 THEN class = 'high spender'"

  • "IF education = 'PhD' THEN class = 'premium customer'"
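Since the constructor expects rules as list[dict], string rules like those above need to be converted to a structured form first. The helper and target schema below (conditions/conclusion keys) are hypothetical, not the library's own format.

```python
from typing import Dict, List


def parse_rule(rule: str) -> Dict[str, object]:
    """Split an "IF ... THEN ..." string into a hypothetical structured form."""
    antecedent, consequent = rule[len("IF "):].split(" THEN ", 1)
    return {
        "conditions": [c.strip() for c in antecedent.split(" AND ")],
        "conclusion": consequent.strip(),
    }


rules: List[dict] = [
    parse_rule("IF income > 50000 AND age < 40 THEN class = 'high spender'"),
    parse_rule("IF education = 'PhD' THEN class = 'premium customer'"),
]
# explanation = RuleBasedExplanation("rule-extractor", rules)
```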

Initialize a rule-based explanation.

Parameters:
  • explainer_name (str) – Name of the explainer that produced the rules.

  • rules (list[dict]) – Rules in structured form.

  • explanation_type (str, optional) – One of local/global.

GLOBAL_EXPLANATION = 'global'
LOCAL_EXPLANATION = 'local'
to_dict() → dict[source]

Return fully serializable representation (for JSON or Streamlit).

visualize()[source]

Return the formatted rules rather than printing them. Suitable for Streamlit.

Module contents