lore_sa.surrogate.DecisionTreeSurrogate

class lore_sa.surrogate.DecisionTreeSurrogate(kind=None, preprocessing=None, class_values=None, multi_label: bool = False, one_vs_rest: bool = False, cv=5, prune_tree: bool = False)[source]
__init__(kind=None, preprocessing=None, class_values=None, multi_label: bool = False, one_vs_rest: bool = False, cv=5, prune_tree: bool = False)[source]

Initialize the surrogate model.

Parameters:
  • kind (str, optional) – Type of surrogate model (e.g., ‘decision_tree’, ‘supertree’)

  • preprocessing (optional) – Preprocessing method to apply to the data before training

Methods

__init__([kind, preprocessing, ...])

Initialize the surrogate model.

apply_counterfactual(x, delta, feature_names)

check_feasibility_of_falsified_conditions(...)

Check whether a falsified condition involves a feature from the inadmissible (`unadmittible_features`) list.

compact_premises(premises_list)

Merge duplicate premises that differ only in their threshold values.

get_counterfactual_rules(z, ...[, encoder, ...])

Generate counterfactual rules showing alternative scenarios.

get_falsified_conditions(x_dict, crule)

Return the premises of a counterfactual rule that the given instance does not satisfy.

get_rule(z[, encoder])

Extract a rule as premises and consequence {p -> y} from the decision tree.

is_leaf(inner_tree, index)

Check whether a node is a leaf node.

prune_duplicate_leaves(dt)

Remove duplicate leaves, i.e. sibling leaves that lead to the same decision.

prune_index(inner_tree, decisions[, index])

Start pruning from the bottom - if we start from the top, we might miss nodes that become leaves during pruning.

train(Z, Yb[, weights])

Train the surrogate decision tree on the input samples Z with the black-box labels Yb.

check_feasibility_of_falsified_conditions(delta, unadmittible_features: list)[source]

Check whether any falsified condition involves a feature from the inadmissible feature list.

Parameters:
  • delta – List of falsified premises (Expression objects)

  • unadmittible_features (list) – Features that must not be changed

Returns:

True if the changes are feasible, False if any condition involves an inadmissible feature
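The check can be sketched in plain Python; the `Expression` class and its attribute names below are illustrative stand-ins, not lore_sa's actual implementation:

```python
# Hypothetical sketch of the feasibility check: a counterfactual is
# infeasible if any of its falsified conditions touches a feature the
# user declared immutable. Class and attribute names are assumptions.

class Expression:
    """Minimal stand-in for lore_sa's premise object."""
    def __init__(self, variable, operator, value):
        self.variable = variable    # feature name, e.g. "age"
        self.operator = operator    # comparison, e.g. ">="
        self.value = value          # threshold

def check_feasibility(delta, unadmittible_features):
    # Reject the counterfactual as soon as one falsified condition
    # involves a feature that must not be changed.
    for condition in delta:
        if condition.variable in unadmittible_features:
            return False
    return True

delta = [Expression("age", ">=", 40), Expression("income", ">", 60000)]
print(check_feasibility(delta, ["gender"]))  # True: no immutable feature touched
print(check_feasibility(delta, ["age"]))     # False: "age" cannot change
```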

compact_premises(premises_list)[source]

Merge duplicate premises that refer to the same feature but carry different threshold values, keeping a single compact premise per feature.

Parameters:

premises_list – List of Expression objects defining the premises

Returns:

The compacted list of premises
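A minimal sketch of the compaction idea, using plain tuples instead of lore_sa's Expression objects (the tuple format and the tightest-bound policy are assumptions):

```python
# Hypothetical sketch: a decision-tree path can test the same feature
# several times (e.g. income > 50, income > 90). Compaction keeps only
# the tightest bound per (feature, operator) pair.

def compact_premises(premises):
    # premises: list of (feature, operator, threshold) tuples
    tightest = {}
    for feature, op, threshold in premises:
        key = (feature, op)
        if key not in tightest:
            tightest[key] = threshold
        elif op in (">", ">="):
            tightest[key] = max(tightest[key], threshold)  # larger lower bound is tighter
        else:  # "<" or "<="
            tightest[key] = min(tightest[key], threshold)  # smaller upper bound is tighter
    return [(f, op, t) for (f, op), t in tightest.items()]

path = [("income", ">", 50), ("income", ">", 90), ("age", "<=", 30)]
print(compact_premises(path))  # [('income', '>', 90), ('age', '<=', 30)]
```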

get_counterfactual_rules(z: numpy.array, neighborhood_train_X: numpy.array, neighborhood_train_Y: numpy.array, encoder: Optional[EncDec] = None, filter_crules=None, constraints: Optional[dict] = None, unadmittible_features: Optional[list] = None)[source]

Generate counterfactual rules showing alternative scenarios.

Counterfactual rules describe what changes to the instance would result in a different prediction. They answer “what if” questions like: “What if the age was lower? Would the prediction change?”

This method finds paths in the surrogate model that lead to different classes and extracts the minimal changes (deltas) needed to reach those predictions.

Parameters:
  • z (np.array) – Instance to explain, in encoded space, shape (n_encoded_features,)

  • neighborhood_train_X (np.array) – Neighborhood instances in encoded space, shape (n_samples, n_encoded_features)

  • neighborhood_train_Y (np.array) – Labels for neighborhood instances from the black box, shape (n_samples,)

  • encoder (EncDec, optional) – Encoder/decoder for converting rules to original space

  • filter_crules (optional) – Function to filter counterfactual rules

  • constraints (dict, optional) – Constraints on which features can be changed

  • unadmittible_features (list, optional) – List of features that cannot be changed (e.g., immutable features like age, gender)

Returns:

(counterfactual_rules, deltas) where:
  • counterfactual_rules (list): List of Rule objects for different classes

  • deltas (list): List of lists of Expression objects showing minimal changes needed for each counterfactual

Return type:

tuple

Example

>>> crules, deltas = surrogate.get_counterfactual_rules(
...     encoded_instance, neighborhood_X, neighborhood_y, encoder
... )
>>> print(f"Counterfactual: {crules[0]}")
>>> print(f"Changes needed: {deltas[0]}")
# Changes needed: [age >= 40, income > 60000]
get_falsified_conditions(x_dict: dict, crule: Rule)[source]

Identify which premises of a counterfactual rule are not satisfied by the instance.

Parameters:
  • x_dict (dict) – Instance to check, as a feature-to-value dictionary

  • crule (Rule) – Counterfactual rule whose premises are tested

Returns:

List of falsified premises
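The falsification test can be sketched with plain tuples standing in for lore_sa's premise objects (the tuple format is an assumption):

```python
# Hypothetical sketch: collect the premises of a counterfactual rule
# that the instance does not satisfy.
import operator

OPS = {">": operator.gt, ">=": operator.ge, "<": operator.lt, "<=": operator.le}

def get_falsified_conditions(x_dict, premises):
    # A premise is falsified when the instance's value violates it.
    return [
        (feature, op, threshold)
        for feature, op, threshold in premises
        if not OPS[op](x_dict[feature], threshold)
    ]

x = {"age": 35, "income": 45000}
crule_premises = [("age", ">=", 40), ("income", ">", 60000), ("age", "<", 70)]
print(get_falsified_conditions(x, crule_premises))
# [('age', '>=', 40), ('income', '>', 60000)]
```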

get_rule(z: numpy.array, encoder: Optional[EncDec] = None)[source]

Extract the rule as premises and consequence {p -> y}, following the decision-tree path of the instance, e.g.:

{(income > 90) -> grant,

(job = employer) -> grant}

Parameters:
  • z (np.array) – Encoded instance of the dataset for which to extract the rule

  • encoder (EncDec, optional) – Encoder/decoder for converting the rule to the original feature space

Returns:

A Rule object {p -> y}
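As a sketch of how such a rule can be read off a fitted tree, the following follows the decision path of a scikit-learn classifier and records each split as a premise (the toy data and string formatting are illustrative, not lore_sa's implementation):

```python
# Hypothetical sketch of rule extraction: walk the decision path of a
# fitted scikit-learn tree for one instance and record each split as a
# premise; the leaf's prediction is the consequence.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

X = np.array([[95, 0], [40, 1], [85, 1], [30, 0]], dtype=float)
y = np.array([1, 0, 1, 0])               # 1 = grant, 0 = deny
feature_names = ["income", "job"]

dt = DecisionTreeClassifier(random_state=0).fit(X, y)

def get_rule(dt, z, feature_names):
    tree = dt.tree_
    node_path = dt.decision_path(z.reshape(1, -1)).indices
    premises = []
    for node in node_path:
        if tree.children_left[node] == -1:   # leaf reached
            break
        feat = feature_names[tree.feature[node]]
        thr = tree.threshold[node]
        if z[tree.feature[node]] <= thr:
            premises.append(f"{feat} <= {thr:.2f}")
        else:
            premises.append(f"{feat} > {thr:.2f}")
    consequence = dt.predict(z.reshape(1, -1))[0]
    return premises, consequence

premises, outcome = get_rule(dt, np.array([95.0, 0.0]), feature_names)
print(premises, "->", outcome)
```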

is_leaf(inner_tree, index)[source]

Check whether the node at position index in inner_tree is a leaf node.

prune_duplicate_leaves(dt)[source]

Remove duplicate leaves, i.e. sibling leaves that lead to the same decision.

prune_index(inner_tree, decisions, index=0)[source]

Start pruning from the bottom - if we start from the top, we might miss nodes that become leaves during pruning. Do not use this directly - use prune_duplicate_leaves instead.
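The leaf test and bottom-up pruning can be sketched over sklearn-style parallel children arrays, where -1 marks a missing child; the SimpleTree container below is an illustrative stand-in for the real inner_tree:

```python
# Hypothetical sketch of leaf detection and bottom-up pruning over a
# tree stored sklearn-style as parallel children arrays, where -1
# marks "no child".

TREE_LEAF = -1

class SimpleTree:
    """Illustrative stand-in for sklearn's inner tree structure."""
    def __init__(self, children_left, children_right):
        self.children_left = list(children_left)
        self.children_right = list(children_right)

def is_leaf(tree, index):
    # A node is a leaf when it has neither child.
    return (tree.children_left[index] == TREE_LEAF
            and tree.children_right[index] == TREE_LEAF)

def prune_index(tree, decisions, index=0):
    # Recurse first (bottom-up): children pruned here may turn this
    # node into a parent of two leaves with identical decisions.
    if not is_leaf(tree, tree.children_left[index]):
        prune_index(tree, decisions, tree.children_left[index])
    if not is_leaf(tree, tree.children_right[index]):
        prune_index(tree, decisions, tree.children_right[index])
    if (is_leaf(tree, tree.children_left[index])
            and is_leaf(tree, tree.children_right[index])
            and decisions[tree.children_left[index]] == decisions[tree.children_right[index]]):
        # Both children are leaves with the same decision: collapse them.
        tree.children_left[index] = TREE_LEAF
        tree.children_right[index] = TREE_LEAF

# Node 0 splits into leaves 1 and 2, both predicting class 0.
tree = SimpleTree([1, -1, -1], [2, -1, -1])
prune_index(tree, decisions=[0, 0, 0])
print(is_leaf(tree, 0))  # True: the redundant split was pruned
```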

train(Z, Yb, weights=None)[source]
Parameters:
  • Z – The training input samples

  • Yb – The target values (class labels) as integers or strings.

  • weights – Sample weights.

  • class_values – Possible class values of the target

  • multi_label ([bool]) – Whether to train a multi-label classifier

  • one_vs_rest ([bool]) – Whether to use a one-vs-rest strategy

  • cv ([int]) – Number of cross-validation folds

  • prune_tree ([bool]) – Whether to prune duplicate leaves after training

Returns:
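A plausible sketch of surrogate training, assuming the common recipe of fitting a cross-validated scikit-learn decision tree to the neighborhood; the hyperparameter grid and the helper name are assumptions, not lore_sa's actual defaults:

```python
# Plausible sketch of surrogate training: fit a decision tree to the
# neighborhood samples Z with the black box's labels Yb, tuning the
# tree via cross-validation. Grid values are assumptions.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

def train_surrogate(Z, Yb, weights=None, cv=5):
    grid = {"max_depth": [None, 2, 4, 8], "min_samples_leaf": [1, 5, 10]}
    search = GridSearchCV(DecisionTreeClassifier(), grid, cv=cv)
    search.fit(Z, Yb, sample_weight=weights)
    return search.best_estimator_

rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 3))
Yb = (Z[:, 0] > 0).astype(int)   # pretend these are black-box labels
dt = train_surrogate(Z, Yb)
print(dt.score(Z, Yb))           # surrogate fidelity on the neighborhood
```

A high score here means the surrogate faithfully mimics the black box on the neighborhood, which is the property the extracted rules rely on.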