lore_sa.surrogate.DecisionTreeSurrogate

class lore_sa.surrogate.DecisionTreeSurrogate(kind=None, preprocessing=None)[source]
__init__(kind=None, preprocessing=None)[source]

Methods

__init__([kind, preprocessing])

apply_counterfactual(x, delta, dataset[, ...])

check_feasibility_of_falsified_conditions(...)

Check whether a falsified condition involves a feature in the unadmittible feature list

compact_premises(premises_list)

Merge duplicate premises that differ only in their threshold values

get_counterfactual_rules(x, class_name, ...)

param [Numpy Array] x

instance encoded of the dataset

get_falsified_conditions(x_dict, crule)

Find the conditions of crule that are falsified by the instance

get_rule(x, dataset[, encoder])

Extract the rule as premises and consequence {p -> y}, starting from a Decision Tree

is_leaf(inner_tree, index)

Check whether a node is a leaf node

prune_duplicate_leaves(dt)

Remove pairs of sibling leaves that make the same prediction

prune_index(inner_tree, decisions[, index])

Start pruning from the bottom - if we start from the top, we might miss nodes that become leaves during pruning.

train(Z, Yb[, weights, class_values, ...])

param Z

The training input samples

check_feasibility_of_falsified_conditions(delta, unadmittible_features: list)[source]

Check whether a falsified condition involves a feature in the unadmittible feature list.

Parameters
  • delta – the falsified conditions to check

  • unadmittible_features ([list]) – List of unadmittible features

Returns

True if the falsified conditions are feasible, False otherwise
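The feasibility check can be sketched in plain Python. Here a falsified premise is modelled as an (attribute, operator, threshold) tuple and unadmittible_features maps a feature name to a forbidden operator (or None to forbid any change); this representation is an assumption for illustration, not the library's actual Expression class.

```python
def check_feasibility_of_falsified_conditions(delta, unadmittible_features):
    """Return False as soon as a falsified premise touches a feature that is
    not admissible for counterfactual changes, True otherwise."""
    for attribute, operator, _threshold in delta:
        if attribute in unadmittible_features:
            forbidden = unadmittible_features[attribute]
            # None forbids any change; otherwise only the given operator is forbidden.
            if forbidden is None or forbidden == operator:
                return False
    return True

delta = [("age", ">", 30.0), ("income", ">", 90.0)]
# Any change to 'age' is forbidden, so this delta is not feasible.
print(check_feasibility_of_falsified_conditions(delta, {"age": None}))
```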

compact_premises(premises_list)[source]

Merge duplicate premises that differ only in their threshold values

Parameters

premises_list – List of Expressions that define the premises

Returns

The compacted list of premises
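The compaction can be sketched with plain (attribute, operator, threshold) tuples standing in for the library's Expression objects (an assumption about their shape): among repeated premises on the same attribute and operator, only the tightest threshold survives.

```python
def compact_premises(premises_list):
    """Merge premises on the same attribute and operator, keeping only the
    tightest threshold ('<=' keeps the minimum, '>' keeps the maximum)."""
    compacted = {}
    for attribute, operator, threshold in premises_list:
        key = (attribute, operator)
        if key not in compacted:
            compacted[key] = threshold
        elif operator == "<=":
            compacted[key] = min(compacted[key], threshold)
        else:  # '>'
            compacted[key] = max(compacted[key], threshold)
    return [(a, op, thr) for (a, op), thr in compacted.items()]

premises = [("income", ">", 50.0), ("income", ">", 90.0),
            ("age", "<=", 40.0), ("age", "<=", 30.0)]
print(compact_premises(premises))
```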

get_counterfactual_rules(x: array, class_name, feature_names, neighborhood_dataset: TabularDataset, features_map_inv=None, multi_label: bool = False, encoder: EncDec = None, filter_crules=None, constraints: dict = None, unadmittible_features: list = None)[source]
Parameters
  • x ([Numpy Array]) – the encoded instance of the dataset

  • neighborhood_dataset ([TabularDataset]) – Neighborhood instances

  • features_map_inv

  • multi_label ([bool]) –

  • encoder ([EncDec]) –

  • filter_crules

  • constraints ([dict]) –

  • unadmittible_features ([list]) – List of unadmittible features

Returns
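A simplified sketch of the counterfactual search, assuming premises are (attribute, operator, threshold) tuples and rules are (premises, consequence) pairs; this flat representation is illustrative, not the library's Rule/Expression classes. Candidate counterfactual rules are those whose consequence differs from the predicted class, ranked by how few of their premises the instance falsifies (the fewer, the smaller the change needed).

```python
def get_counterfactual_rules(x_dict, predicted_class, rules):
    """Rank rules whose consequence differs from the prediction by how few
    of their premises the instance x_dict falsifies (fewest changes first)."""
    def satisfied(attribute, operator, threshold):
        value = x_dict[attribute]
        return value <= threshold if operator == "<=" else value > threshold

    candidates = []
    for premises, consequence in rules:
        if consequence == predicted_class:
            continue  # not a counterfactual: same outcome as the prediction
        delta = [p for p in premises if not satisfied(*p)]
        candidates.append((premises, consequence, delta))
    candidates.sort(key=lambda c: len(c[2]))
    return candidates

rules = [([("income", ">", 90.0)], "grant"),
         ([("income", "<=", 90.0)], "deny")]
crules = get_counterfactual_rules({"income": 70.0}, "deny", rules)
print(crules)
```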

get_falsified_conditions(x_dict: dict, crule: Rule)[source]

Find the conditions of crule that are falsified by the instance.

Parameters
  • x_dict ([dict]) – the instance as a dictionary of feature values

  • crule ([Rule]) – the rule to check

Returns

list of falsified premises
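A minimal sketch of the falsification check, again assuming each premise is an (attribute, operator, threshold) tuple rather than the library's Rule object:

```python
def get_falsified_conditions(x_dict, crule_premises):
    """Return the premises of a rule that the instance does not satisfy."""
    falsified = []
    for attribute, operator, threshold in crule_premises:
        value = x_dict[attribute]
        if operator == "<=" and not value <= threshold:
            falsified.append((attribute, operator, threshold))
        elif operator == ">" and not value > threshold:
            falsified.append((attribute, operator, threshold))
    return falsified

x_dict = {"income": 70.0, "job": 1.0}
premises = [("income", ">", 90.0), ("job", "<=", 1.0)]
print(get_falsified_conditions(x_dict, premises))  # only the income premise
```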

get_rule(x: array, dataset: TabularDataset, encoder: EncDec = None)[source]

Extract the rule as premises and consequence {p -> y}, starting from a Decision Tree

>>> {(income > 90) -> grant,
    (job = employer) -> grant}
Parameters
  • x ([Numpy Array]) – the encoded instance of the dataset from which the rule is extracted

  • dataset ([TabularDataset]) – Neighborhood instances

  • encoder ([EncDec]) –

Returns

A [Rule] object
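Rule extraction amounts to following the instance's decision path and recording one premise per split. The sketch below uses the same array layout as scikit-learn's fitted `tree_` attribute (children hold -1 for leaves); the explicit arrays, feature names, and tiny sample tree are all illustrative, and the real method takes a dataset and encoder instead.

```python
def get_rule(x, children_left, children_right, feature, threshold, feature_names):
    """Walk the decision path of x and collect the premises p of {p -> y};
    the leaf reached determines the consequence y."""
    premises, node = [], 0
    while children_left[node] != -1:  # -1 marks a leaf
        name, thr = feature_names[feature[node]], threshold[node]
        if x[feature[node]] <= thr:
            premises.append((name, "<=", thr))
            node = children_left[node]
        else:
            premises.append((name, ">", thr))
            node = children_right[node]
    return premises, node

# Tiny tree: node 0 splits on income <= 90; nodes 1 and 2 are leaves.
children_left, children_right = [1, -1, -1], [2, -1, -1]
feature, threshold = [0, -2, -2], [90.0, -2.0, -2.0]
premises, leaf = get_rule([120.0], children_left, children_right,
                          feature, threshold, ["income"])
print(premises, leaf)
```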

is_leaf(inner_tree, index)[source]

Check whether a node is a leaf node
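In scikit-learn's array-based tree layout (the `inner_tree` the method expects is a fitted classifier's `tree_` attribute), a leaf stores TREE_LEAF (-1) in both child arrays. A self-contained sketch over plain lists, with the child arrays passed explicitly as an illustrative simplification:

```python
TREE_LEAF = -1  # scikit-learn's marker for "no child"

def is_leaf(children_left, children_right, index):
    """A node is a leaf when it has neither a left nor a right child."""
    return (children_left[index] == TREE_LEAF
            and children_right[index] == TREE_LEAF)

# Root 0 with two leaf children, 1 and 2:
children_left, children_right = [1, -1, -1], [2, -1, -1]
print(is_leaf(children_left, children_right, 1))
```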

prune_duplicate_leaves(dt)[source]

Remove pairs of sibling leaves that make the same prediction

prune_index(inner_tree, decisions, index=0)[source]

Start pruning from the bottom - if we start from the top, we might miss nodes that become leaves during pruning. Do not use this directly - use prune_duplicate_leaves instead.
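The bottom-up pruning can be sketched on the same array layout: recurse into both children first, then collapse a node whose two children are leaves with the same decision. The explicit arrays and the `decisions` list (the predicted class per node, which prune_duplicate_leaves would derive from the tree's values) are illustrative assumptions.

```python
TREE_LEAF = -1  # scikit-learn's marker for "no child"

def prune_index(children_left, children_right, decisions, index=0):
    """Bottom-up pruning sketch: children are pruned before their parent, so
    nodes that become prunable during the pass are not missed."""
    if children_left[index] == TREE_LEAF:
        return  # already a leaf, nothing to prune
    prune_index(children_left, children_right, decisions, children_left[index])
    prune_index(children_left, children_right, decisions, children_right[index])
    left, right = children_left[index], children_right[index]
    if (children_left[left] == TREE_LEAF and children_left[right] == TREE_LEAF
            and decisions[left] == decisions[right]):
        # Both children are leaves with the same prediction: collapse them.
        children_left[index] = TREE_LEAF
        children_right[index] = TREE_LEAF

# Root 0 whose two leaf children both predict 'grant' gets collapsed.
children_left, children_right = [1, -1, -1], [2, -1, -1]
decisions = [None, "grant", "grant"]
prune_index(children_left, children_right, decisions)
print(children_left[0])
```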

train(Z, Yb, weights=None, class_values=None, multi_label: bool = False, one_vs_rest: bool = False, cv=5, prune_tree: bool = False)[source]
Parameters
  • Z – The training input samples

  • Yb – The target values (class labels) as integers or strings.

  • weights – Sample weights.

  • class_values

  • multi_label ([bool]) –

  • one_vs_rest ([bool]) –

  • cv ([int]) –

  • prune_tree ([bool]) –

Returns
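A hedged sketch of what cross-validated surrogate training could look like using scikit-learn directly: the synthetic neighborhood Z, the black-box labels Yb, the uniform sample weights, and the max_depth grid are all illustrative choices, not the library's actual search space or defaults.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# Hypothetical neighborhood Z labelled by a black box Yb.
rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 4))
Yb = (Z[:, 0] > 0).astype(int)
weights = np.ones(len(Z))

# 5-fold cross-validation over tree depth, mirroring the cv parameter.
search = GridSearchCV(DecisionTreeClassifier(random_state=0),
                      param_grid={"max_depth": [2, 4, None]}, cv=5)
search.fit(Z, Yb, sample_weight=weights)
surrogate = search.best_estimator_
print(surrogate.score(Z, Yb))
```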