causalml package

Submodules

causalml.inference.tree module

class causalml.inference.tree.CausalRandomForestRegressor(n_estimators: int = 100, *, control_name: int | str = 0, criterion: str = 'causal_mse', alpha: float = 0.05, max_depth: int | None = None, min_samples_split: int = 60, min_samples_leaf: int = 100, min_weight_fraction_leaf: float = 0.0, max_features: int | float | str = 1.0, max_leaf_nodes: int | None = None, min_impurity_decrease: float = -inf, bootstrap: bool = True, oob_score: bool = False, n_jobs: int | None = None, random_state: int | None = None, verbose: int = 0, warm_start: bool = False, ccp_alpha: float = 0.0, groups_penalty: float = 0.5, max_samples: int | None = None, groups_cnt: bool = True)[source]

Bases: ForestRegressor

calculate_error(X_train: ndarray, X_test: ndarray, inbag: ndarray | None = None, calibrate: bool = True, memory_constrained: bool = False, memory_limit: int | None = None) ndarray[source]

Calculate error bars from scikit-learn RandomForest estimators. Source: https://github.com/scikit-learn-contrib/forest-confidence-interval

Parameters:
  • X_train – (np.ndarray), training subsample of feature matrix, (n_train_sample, n_features)

  • X_test – (np.ndarray), test subsample of feature matrix, (n_test_sample, n_features)

  • inbag – (ndarray, optional), The inbag matrix that fit the data. If set to None (default) it will be inferred from the forest. However, this only works for trees for which bootstrapping was set to True. That is, if sampling was done with replacement. Otherwise, users need to provide their own inbag matrix.

  • calibrate – (boolean, optional) Whether to apply calibration to mitigate Monte Carlo noise. Some variance estimates may be negative due to Monte Carlo effects if the number of trees in the forest is too small. Default: True

  • memory_constrained – (boolean, optional) Whether or not there is a restriction on memory. If False, it is assumed that a ndarray of shape (n_train_sample,n_test_sample) fits in main memory. Setting to True can actually provide a speedup if memory_limit is tuned to the optimal range.

  • memory_limit – (int, optional) An upper bound for how much memory the intermediate matrices will take up in Megabytes. This must be provided if memory_constrained=True.

Returns:

(np.ndarray), An array with the unbiased sampling variance for a RandomForest object.

fit(X: ndarray, treatment: ndarray, y: ndarray)[source]

Fit Causal RandomForest.

Parameters:
  • X – (np.ndarray), feature matrix

  • treatment – (np.ndarray), treatment vector

  • y – (np.ndarray), outcome vector

Returns:

self

predict(X: ndarray, with_outcomes: bool = False) ndarray[source]

Predict individual treatment effects

Parameters:
  • X (np.matrix) – a feature matrix

  • with_outcomes (bool) – include outcomes Y_hat(X|T=0), Y_hat(X|T=1) along with individual treatment effect

Returns:

individual treatment effect (ITE), dim=nx1

or ITE with outcomes [Y_hat(X|T=0), Y_hat(X|T=1), ITE], dim=nx3

Return type:

(np.matrix)
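
The following usage sketch is not part of the original docstrings; the synthetic_data helper from causalml.dataset and the order of its return values are assumptions based on the library's examples.

import numpy as np
from causalml.dataset import synthetic_data
from causalml.inference.tree import CausalRandomForestRegressor

# Simulated data: outcome y, features X, treatment assignment, true effect tau,
# expected outcome b, and propensity e (assumed return order of the helper)
y, X, treatment, tau, b, e = synthetic_data(mode=1, n=5000, p=8, sigma=1.0)
X_train, X_test = X[:4000], X[4000:]
t_train, y_train = treatment[:4000], y[:4000]

crf = CausalRandomForestRegressor(n_estimators=100, control_name=0)
crf.fit(X=X_train, treatment=t_train, y=y_train)

ite = crf.predict(X_test)                       # ITE, shape (n_test,)
full = crf.predict(X_test, with_outcomes=True)  # [Y_hat(T=0), Y_hat(T=1), ITE]

# Per-sample sampling variance via the forest-confidence-interval method
var_ite = crf.calculate_error(X_train=X_train, X_test=X_test)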

class causalml.inference.tree.CausalTreeRegressor(*, criterion: str = 'causal_mse', splitter: str = 'best', alpha: float = 0.05, control_name: int | str = 0, max_depth: int | None = None, min_samples_split: int | float = 60, min_weight_fraction_leaf: float = 0.0, max_features: int | float | str | None = None, max_leaf_nodes: int | None = None, min_impurity_decrease: float = -inf, ccp_alpha: float = 0.0, groups_penalty: float = 0.5, min_samples_leaf: int = 100, random_state: int | None = None, groups_cnt: bool = False, groups_cnt_mode: str = 'nodes')[source]

Bases: RegressorMixin, BaseCausalDecisionTree

A Causal Tree regressor class. The Causal Tree is a decision tree regressor with a split criterion for treatment effects. Details are available at Athey and Imbens (2015) (https://arxiv.org/abs/1504.01132).

bootstrap(X: ndarray, treatment: ndarray, y: ndarray, sample_size: int, seed: int) ndarray[source]

Runs a single bootstrap.

Fits on bootstrapped sample, then predicts on whole population.

Parameters:
  • X (np.ndarray) – a feature matrix

  • treatment (np.ndarray) – a treatment vector

  • y (np.ndarray) – an outcome vector

  • sample_size (int) – bootstrap sample size

  • seed (int) – bootstrap seed

Returns:

bootstrap predictions

Return type:

(np.ndarray)

bootstrap_pool(**kw)

estimate_ate(X: ndarray, treatment: ndarray, y: ndarray) tuple[source]

Estimate the Average Treatment Effect (ATE).

Parameters:
  • X (np.matrix) – a feature matrix

  • treatment (np.array) – a treatment vector

  • y (np.array) – an outcome vector

Returns:

tuple, The mean and confidence interval (LB, UB) of the ATE estimate.

fit(X: ndarray, y: ndarray, treatment: ndarray | None = None, sample_weight: ndarray | None = None, check_input=False)[source]

Fit CausalTreeRegressor.

Parameters:
  • X (np.ndarray) – feature matrix

  • y (np.ndarray) – outcome vector

  • treatment (np.ndarray) – treatment vector

  • sample_weight (np.ndarray) – sample weights

  • check_input (bool) – whether to bypass several input checks

Returns:

self

fit_predict(X: ndarray, treatment: ndarray, y: ndarray, return_ci: bool = False, n_bootstraps: int = 1000, bootstrap_size: int = 10000, n_jobs: int = 1, verbose: bool = False) tuple[source]

Fit the Causal Tree model and predict treatment effects.

Parameters:
  • X (np.matrix) – a feature matrix

  • treatment (np.array) – a treatment vector

  • y (np.array) – an outcome vector

  • return_ci (bool) – whether to return confidence intervals

  • n_bootstraps (int) – number of bootstrap iterations

  • bootstrap_size (int) – number of samples per bootstrap

  • n_jobs (int) – the number of jobs for bootstrap

  • verbose (bool) – whether to output progress logs

Returns:

  • te (numpy.ndarray): Predictions of treatment effects.

  • te_lower (numpy.ndarray, optional): lower bounds of treatment effects

  • te_upper (numpy.ndarray, optional): upper bounds of treatment effects

Return type:

(tuple)

predict(X: ndarray, with_outcomes: bool = False, check_input=True) ndarray[source]

Predict individual treatment effects

Parameters:
  • X (np.matrix) – a feature matrix

  • with_outcomes (bool) – include outcomes Y_hat(X|T=0), Y_hat(X|T=1) along with individual treatment effect

  • check_input (bool) – whether to bypass several input checks.

Returns:

individual treatment effect (ITE), dim=nx1

or ITE with outcomes [Y_hat(X|T=0), Y_hat(X|T=1), ITE], dim=nx3

Return type:

(np.matrix)
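
A brief sketch of the fit_predict and estimate_ate workflow for the causal tree; the synthetic_data helper and its return order are the same assumptions as in the sketch above.

from causalml.dataset import synthetic_data
from causalml.inference.tree import CausalTreeRegressor

y, X, treatment, tau, b, e = synthetic_data(mode=2, n=2000, p=5)

ctree = CausalTreeRegressor(control_name=0, min_samples_leaf=100)

# Point estimates plus bootstrap confidence intervals
te, te_lower, te_upper = ctree.fit_predict(
    X=X, treatment=treatment, y=y,
    return_ci=True, n_bootstraps=100, bootstrap_size=1000, n_jobs=1,
)

# ATE with its confidence bounds, as described in estimate_ate above
ate_result = ctree.estimate_ate(X=X, treatment=treatment, y=y)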

class causalml.inference.tree.DecisionTree(classes_, col=-1, value=None, trueBranch=None, falseBranch=None, results=None, summary=None, maxDiffTreatment=None, maxDiffSign=1.0, nodeSummary=None, backupResults=None, bestTreatment=None, upliftScore=None, matchScore=None)

Bases: object

Tree Node Class

Tree node class to contain all the statistics of the tree node.

Parameters:
  • classes (list of str) – A list of the control and treatment group names.

  • col (int, optional (default = -1)) – The column index for splitting the tree node to children nodes.

  • value (float, optional (default = None)) – The value of the feature column to split the tree node to children nodes.

  • trueBranch (object of DecisionTree) – The true branch tree node (feature > value).

  • falseBranch (object of DecisionTree) – The false branch tree node (feature <= value).

  • results (list of float) – The classification probability P(Y=1|T) for each of the control and treatment groups in the tree node.

  • summary (list of list) – Summary statistics of the tree nodes, including impurity, sample size, uplift score, etc.

  • maxDiffTreatment (int) – The treatment index generating the maximum difference between the treatment and control groups.

  • maxDiffSign (float) – The sign of the maximum difference (1. or -1.).

  • nodeSummary (list of list) – Summary statistics of the tree node [P(Y=1|T), N(T)], where P(Y=1|T) is the positive outcome probability and N(T) is the sample size for each group.

  • backupResults (list of float) – The positive probabilities in each of the control and treatment groups in the parent node. The parent node information serves as a backup for a child node: when no valid statistics can be calculated from the child node, the parent node information is used instead.

  • bestTreatment (int) – The treatment index providing the best uplift (treatment effect).

  • upliftScore (list) – The uplift score of this node: [max_Diff, p_value], where max_Diff stands for the maximum treatment effect, and p_value stands for the p_value of the treatment effect.

  • matchScore (float) – The uplift score obtained by filling a trained tree with a validation or testing dataset.

class causalml.inference.tree.UpliftRandomForestClassifier(control_name, n_estimators=10, max_features=10, random_state=None, max_depth=5, min_samples_leaf=100, min_samples_treatment=10, n_reg=10, early_stopping_eval_diff_scale=1, evaluationFunction='KL', normalization=True, honesty=False, estimation_sample_size=0.5, n_jobs=-1, joblib_prefer: unicode = 'threads')

Bases: object

Uplift Random Forest for Classification Task.

Parameters:
  • n_estimators (integer, optional (default=10)) – The number of trees in the uplift random forest.

  • evaluationFunction (string) – Choose from one of the models: ‘KL’, ‘ED’, ‘Chi’, ‘CTS’, ‘DDP’, ‘IT’, ‘CIT’, ‘IDDP’.

  • max_features (int, optional (default=10)) – The number of features to consider when looking for the best split.

  • random_state (int, RandomState instance or None (default=None)) – A random seed or np.random.RandomState to control randomness in building the trees and forest.

  • max_depth (int, optional (default=5)) – The maximum depth of the tree.

  • min_samples_leaf (int, optional (default=100)) – The minimum number of samples required to be split at a leaf node.

  • min_samples_treatment (int, optional (default=10)) – The minimum number of samples required of the experiment group to be split at a leaf node.

  • n_reg (int, optional (default=10)) – The regularization parameter defined in Rzepakowski et al. 2012, the weight (in terms of sample size) of the parent node influence on the child node, only effective for ‘KL’, ‘ED’, ‘Chi’, ‘CTS’ methods.

  • early_stopping_eval_diff_scale (float, optional (default=1)) – Stop building a tree early if the difference between the training and validation uplift scores exceeds min(train_uplift_score, valid_uplift_score)/early_stopping_eval_diff_scale.

  • control_name (string) – The name of the control group (other experiment groups will be regarded as treatment groups)

  • normalization (boolean, optional (default=True)) – The normalization factor defined in Rzepakowski et al. 2012, correcting for tests with a large number of splits and imbalanced treatment and control splits.

  • honesty (bool (default=False)) – True if the honest approach based on “Athey, S., & Imbens, G. (2016). Recursive partitioning for heterogeneous causal effects.” shall be used.

  • estimation_sample_size (float (default=0.5)) – Sample size for estimating the CATE score in the leaves if honesty == True.

  • n_jobs (int, optional (default=-1)) – The parallelization parameter to define how many parallel jobs need to be created. This is passed on to joblib library for parallelizing uplift-tree creation and prediction.

  • joblib_prefer (str, optional (default="threads")) – The preferred backend for joblib (passed as prefer to joblib.Parallel). See the joblib documentation for valid values.

Outputs:

  • df_res (pandas dataframe) – A user-level results dataframe containing the estimated individual treatment effect.

static bootstrap(X, treatment, y, X_val, treatment_val, y_val, tree)

fit(X, treatment, y, X_val=None, treatment_val=None, y_val=None)

Fit the UpliftRandomForestClassifier.

Parameters:
  • X (ndarray, shape = [num_samples, num_features]) – An ndarray of the covariates used to train the uplift model.

  • treatment (array-like, shape = [num_samples]) – An array containing the treatment group for each unit.

  • y (array-like, shape = [num_samples]) – An array containing the outcome of interest for each unit.

  • X_val (ndarray, shape = [num_samples, num_features]) – An ndarray of the covariates used to validate the uplift model.

  • treatment_val (array-like, shape = [num_samples]) – An array containing the validation treatment group for each unit.

  • y_val (array-like, shape = [num_samples]) – An array containing the validation outcome of interest for each unit.

predict(X, full_output=False)

Returns the recommended treatment group and predicted optimal probability conditional on using the recommended treatment group.

Parameters:
  • X (ndarray, shape = [num_samples, num_features]) – An ndarray of the covariates used to train the uplift model.

  • full_output (bool, optional (default=False)) – Whether the UpliftTree algorithm returns upliftScores, pred_nodes alongside the recommended treatment group and p_hat in the treatment group.

Returns:

  • y_pred_list (ndarray, shape = [num_samples, num_treatments]) – An ndarray containing the predicted treatment effect of each treatment group for each sample.

  • df_res (DataFrame, shape = [num_samples, (num_treatments * 2 + 3)]) – If full_output is True, a DataFrame containing the predicted outcome of each treatment and control group, the treatment effect of each treatment group, the treatment group with the highest treatment effect, and the maximum treatment effect for each sample.
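
A minimal end-to-end sketch; the make_uplift_classification helper and the column names treatment_group_key and conversion are assumptions based on the causalml.dataset module.

from causalml.dataset import make_uplift_classification
from causalml.inference.tree import UpliftRandomForestClassifier

df, x_names = make_uplift_classification()

clf = UpliftRandomForestClassifier(control_name='control', n_estimators=50)
clf.fit(df[x_names].values,
        treatment=df['treatment_group_key'].values,
        y=df['conversion'].values)

y_pred = clf.predict(df[x_names].values)                      # (num_samples, num_treatments)
df_full = clf.predict(df[x_names].values, full_output=True)   # per-sample DataFrame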

class causalml.inference.tree.UpliftTreeClassifier(control_name, max_features=None, max_depth=3, min_samples_leaf=100, min_samples_treatment=10, n_reg=100, early_stopping_eval_diff_scale=1, evaluationFunction='KL', normalization=True, honesty=False, estimation_sample_size=0.5, random_state=None)

Bases: object

Uplift Tree Classifier for Classification Task.

An uplift tree classifier estimates the individual treatment effect by modifying the loss function of classification trees.

The uplift tree classifier is used in the uplift random forest to construct the trees in the forest; a usage sketch follows the parameter list below.

Parameters:
  • evaluationFunction (string) – Choose from one of the models: ‘KL’, ‘ED’, ‘Chi’, ‘CTS’, ‘DDP’, ‘IT’, ‘CIT’, ‘IDDP’.

  • max_features (int, optional (default=None)) – The number of features to consider when looking for the best split.

  • max_depth (int, optional (default=3)) – The maximum depth of the tree.

  • min_samples_leaf (int, optional (default=100)) – The minimum number of samples required to be split at a leaf node.

  • min_samples_treatment (int, optional (default=10)) – The minimum number of samples required of the experiment group to be split at a leaf node.

  • n_reg (int, optional (default=100)) – The regularization parameter defined in Rzepakowski et al. 2012, the weight (in terms of sample size) of the parent node influence on the child node, only effective for ‘KL’, ‘ED’, ‘Chi’, ‘CTS’ methods.

  • early_stopping_eval_diff_scale (float, optional (default=1)) – Stop building a tree early if the difference between the training and validation uplift scores exceeds min(train_uplift_score, valid_uplift_score)/early_stopping_eval_diff_scale.

  • control_name (string) – The name of the control group (other experiment groups will be regarded as treatment groups).

  • normalization (boolean, optional (default=True)) – The normalization factor defined in Rzepakowski et al. 2012, correcting for tests with large number of splits and imbalanced treatment and control splits.

  • honesty (bool (default=False)) – True if the honest approach based on “Athey, S., & Imbens, G. (2016). Recursive partitioning for heterogeneous causal effects.” shall be used. If ‘IDDP’ is used as evaluation function, this parameter is automatically set to true.

  • estimation_sample_size (float (default=0.5)) – Sample size for estimating the CATE score in the leaves if honesty == True.

  • random_state (int, RandomState instance or None (default=None)) – A random seed or np.random.RandomState to control randomness in building a tree.
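
The usage sketch referenced above; the data helper and column names are the same assumptions as in the random forest sketch.

from causalml.dataset import make_uplift_classification
from causalml.inference.tree import UpliftTreeClassifier

df, x_names = make_uplift_classification()

tree = UpliftTreeClassifier(control_name='control', max_depth=4,
                            min_samples_leaf=200, evaluationFunction='KL')
tree.fit(df[x_names].values,
         treatment=df['treatment_group_key'].values,
         y=df['conversion'].values)

pred = tree.predict(df[x_names].values)  # treatment effects per treatment group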

static arr_evaluate_CIT(cur_node_summary_p, cur_node_summary_n, left_node_summary_p, left_node_summary_n, right_node_summary_p, right_node_summary_n)

Calculate likelihood ratio test statistic as split evaluation criterion for a given node.

NOTE: n_class should be 2.

Parameters:
  • cur_node_summary_p (array of shape [n_class]) – Has type numpy.double. The positive probabilities of each of the control and treatment groups of the current node, i.e. [P(Y=1|T=i)…]

  • cur_node_summary_n (array of shape [n_class]) – Has type numpy.int32. The counts of each of the control and treatment groups of the current node, i.e. [N(T=i)…]

  • left_node_summary_p (array of shape [n_class]) – Has type numpy.double. The positive probabilities of each of the control and treatment groups of the left node, i.e. [P(Y=1|T=i)…]

  • left_node_summary_n (array of shape [n_class]) – Has type numpy.int32. The counts of each of the control and treatment groups of the left node, i.e. [N(T=i)…]

  • right_node_summary_p (array of shape [n_class]) – Has type numpy.double. The positive probabilities of each of the control and treatment groups of the right node, i.e. [P(Y=1|T=i)…]

  • right_node_summary_n (array of shape [n_class]) – Has type numpy.int32. The counts of each of the control and treatment groups of the right node, i.e. [N(T=i)…]

Returns:

lrt

Return type:

Likelihood ratio test statistic

static arr_evaluate_CTS(node_summary_p, node_summary_n)

Calculate CTS (conditional treatment selection) as split evaluation criterion for a given node.

Parameters:
  • node_summary_p (array of shape [n_class]) – Has type numpy.double. The positive probabilities of each of the control and treatment groups of the current node, i.e. [P(Y=1|T=i)…]

  • node_summary_n (array of shape [n_class]) – Has type numpy.int32. The counts of each of the control and treatment groups of the current node, i.e. [N(T=i)…]

Returns:

d_res

Return type:

CTS score

static arr_evaluate_Chi(node_summary_p, node_summary_n)

Calculate Chi-Square statistic as split evaluation criterion for a given node.

Parameters:
  • node_summary_p (array of shape [n_class]) – Has type numpy.double. The positive probabilities of each of the control and treatment groups of the current node, i.e. [P(Y=1|T=i)…]

  • node_summary_n (array of shape [n_class]) – Has type numpy.int32. The counts of each of the control and treatment groups of the current node, i.e. [N(T=i)…]

Returns:

d_res

Return type:

Chi-Square

static arr_evaluate_DDP(node_summary_p, node_summary_n)

Calculate Delta P as split evaluation criterion for a given node.

Parameters:
  • node_summary_p (array of shape [n_class]) – Has type numpy.double. The positive probabilities of each of the control and treatment groups of the current node, i.e. [P(Y=1|T=i)…]

  • node_summary_n (array of shape [n_class]) – Has type numpy.int32. The counts of each of the control and treatment groups of the current node, i.e. [N(T=i)…]

Returns:

d_res

Return type:

Delta P

static arr_evaluate_ED(node_summary_p, node_summary_n)

Calculate Euclidean Distance as split evaluation criterion for a given node.

Parameters:
  • node_summary_p (array of shape [n_class]) – Has type numpy.double. The positive probabilities of each of the control and treatment groups of the current node, i.e. [P(Y=1|T=i)…]

  • node_summary_n (array of shape [n_class]) – Has type numpy.int32. The counts of each of the control and treatment groups of the current node, i.e. [N(T=i)…]

Returns:

d_res

Return type:

Euclidean Distance

static arr_evaluate_IDDP(node_summary_p, node_summary_n)

Calculate invariant Delta P (IDDP) as split evaluation criterion for a given node.

Parameters:
  • node_summary_p (array of shape [n_class]) – Has type numpy.double. The positive probabilities of each of the control and treatment groups of the current node, i.e. [P(Y=1|T=i)…]

  • node_summary_n (array of shape [n_class]) – Has type numpy.int32. The counts of each of the control and treatment groups of the current node, i.e. [N(T=i)…]

Returns:

d_res

Return type:

Delta P

static arr_evaluate_IT(left_node_summary_p, left_node_summary_n, right_node_summary_p, right_node_summary_n)

Calculate Squared T-Statistic as split evaluation criterion for a given node.

NOTE: n_class should be 2.

Parameters:
  • left_node_summary_p (array of shape [n_class]) – Has type numpy.double. The positive probabilities of each of the control and treatment groups of the left node, i.e. [P(Y=1|T=i)…]

  • left_node_summary_n (array of shape [n_class]) – Has type numpy.int32. The counts of each of the control and treatment groups of the left node, i.e. [N(T=i)…]

  • right_node_summary_p (array of shape [n_class]) – Has type numpy.double. The positive probabilities of each of the control and treatment groups of the right node, i.e. [P(Y=1|T=i)…]

  • right_node_summary_n (array of shape [n_class]) – Has type numpy.int32. The counts of each of the control and treatment groups of the right node, i.e. [N(T=i)…]

Returns:

g_s

Return type:

Squared T-Statistic

static arr_evaluate_KL(node_summary_p, node_summary_n)

Calculate KL Divergence as split evaluation criterion for a given node. Modified to accept new node summary format.

Parameters:
  • node_summary_p (array of shape [n_class]) – Has type numpy.double. The positive probabilities of each of the control and treatment groups of the current node, i.e. [P(Y=1|T=i)…]

  • node_summary_n (array of shape [n_class]) – Has type numpy.int32. The counts of each of the control and treatment groups of the current node, i.e. [N(T=i)…]

Returns:

d_res

Return type:

KL Divergence

arr_normI(cur_node_summary_n, left_node_summary_n, alpha: float = 0.9, currentDivergence: float = 0.0) float

Normalization factor.

Parameters:
  • cur_node_summary_n (array of shape [n_class]) – Has type numpy.int32. The counts of each of the control and treatment groups of the current node, i.e. [N(T=i)…]

  • left_node_summary_n (array of shape [n_class]) – Has type numpy.int32. The counts of each of the control and treatment groups of the left node, i.e. [N(T=i)…]

  • alpha (float) – The weight used to balance different normalization parts.

Returns:

norm_res – Normalization factor.

Return type:

float

static classify(observations, tree, dataMissing=False)

Classify (predict) the observations according to the tree.

Parameters:
  • observations (list of list) – The internal data format for the training data (combining X, Y, treatment).

  • dataMissing (boolean, optional (default = False)) – An indicator for if data are missing or not.

Returns:

The results in the leaf node.

Return type:

tree.results, tree.upliftScore

static divideSet(X, treatment_idx, y, column, value)

Tree node split.

Parameters:
  • X (ndarray, shape = [num_samples, num_features]) – An ndarray of the covariates used to train the uplift model.

  • treatment_idx (array-like, shape = [num_samples]) – An array containing the treatment group index for each unit.

  • y (array-like, shape = [num_samples]) – An array containing the outcome of interest for each unit.

  • column (int) – The column used to split the data.

  • value (float or int) – The value in the column for splitting the data.

Returns:

(X_l, X_r, treatment_l, treatment_r, y_l, y_r) – The covariates, treatments and outcomes of left node and the right node.

Return type:

list of ndarray

static divideSet_len(X, treatment_idx, y, column, value)

Tree node split.

Modified from divideSet(), but returns len(X_l) and len(X_r) instead of the split arrays X_l and X_r, to avoid some overhead; intended to be used when searching for the best split. After the best split is found, divideSet() can be used to obtain X_l and X_r.

Parameters:
  • X (ndarray, shape = [num_samples, num_features]) – An ndarray of the covariates used to train the uplift model.

  • treatment_idx (array-like, shape = [num_samples]) – An array containing the treatment group index for each unit.

  • y (array-like, shape = [num_samples]) – An array containing the outcome of interest for each unit.

  • column (int) – The column used to split the data.

  • value (float or int) – The value in the column for splitting the data.

Returns:

(len_X_l, len_X_r, treatment_l, treatment_r, y_l, y_r) – The number of rows in the left and right covariate splits, and the treatments and outcomes of the left and right nodes.

Return type:

list of int and ndarray

static evaluate_CIT(currentNodeSummary, leftNodeSummary, rightNodeSummary, y_l, y_r, w_l, w_r, y, w)

Calculate likelihood ratio test statistic as split evaluation criterion for a given node.

Parameters:
  • currentNodeSummary (list of lists) – The parent node summary statistics.

  • leftNodeSummary (list of lists) – The left node summary statistics.

  • rightNodeSummary (list of lists) – The right node summary statistics.

  • y_l (array-like, shape = [num_samples]) – An array containing the outcome of interest for each unit in the left node.

  • y_r (array-like, shape = [num_samples]) – An array containing the outcome of interest for each unit in the right node.

  • w_l (array-like, shape = [num_samples]) – An array containing the treatment for each unit in the left node.

  • w_r (array-like, shape = [num_samples]) – An array containing the treatment for each unit in the right node.

  • y (array-like, shape = [num_samples]) – An array containing the outcome of interest for each unit.

  • w (array-like, shape = [num_samples]) – An array containing the treatment for each unit.

Returns:

lrt

Return type:

Likelihood ratio test statistic

static evaluate_CTS(nodeSummary)

Calculate CTS (conditional treatment selection) as split evaluation criterion for a given node.

Parameters:

nodeSummary (list of list) – The tree node summary statistics, [P(Y=1|T), N(T)], produced by tree_node_summary() method.

Returns:

d_res

Return type:

CTS score

static evaluate_Chi(nodeSummary)

Calculate Chi-Square statistic as split evaluation criterion for a given node.

Parameters:

nodeSummary (dictionary) – The tree node summary statistics, produced by tree_node_summary() method.

Returns:

d_res

Return type:

Chi-Square

static evaluate_DDP(nodeSummary)

Calculate Delta P as split evaluation criterion for a given node.

Parameters:

nodeSummary (list of list) – The tree node summary statistics, [P(Y=1|T), N(T)], produced by tree_node_summary() method.

Returns:

d_res

Return type:

Delta P

static evaluate_ED(nodeSummary)

Calculate Euclidean Distance as split evaluation criterion for a given node.

Parameters:

nodeSummary (dictionary) – The tree node summary statistics, produced by tree_node_summary() method.

Returns:

d_res

Return type:

Euclidean Distance

static evaluate_IDDP(nodeSummary)

Calculate invariant Delta P (IDDP) as split evaluation criterion for a given node.

Parameters:
  • nodeSummary (dictionary) – The tree node summary statistics, produced by tree_node_summary() method.

  • control_name (string) – The control group name.

Returns:

d_res

Return type:

Delta P

static evaluate_IT(leftNodeSummary, rightNodeSummary, w_l, w_r)

Calculate Squared T-Statistic as split evaluation criterion for a given node.

Parameters:
  • leftNodeSummary (list of list) – The left node summary statistics.

  • rightNodeSummary (list of list) – The right node summary statistics.

  • w_l (array-like, shape = [num_samples]) – An array containing the treatment for each unit in the left node

  • w_r (array-like, shape = [num_samples]) – An array containing the treatment for each unit in the right node

Returns:

g_s

Return type:

Squared T-Statistic

static evaluate_KL(nodeSummary)

Calculate KL Divergence as split evaluation criterion for a given node.

Parameters:

nodeSummary (list of list) – The tree node summary statistics, [P(Y=1|T), N(T)], produced by tree_node_summary() method.

Returns:

d_res

Return type:

KL Divergence

fill(X, treatment, y)

Fill the data into an existing tree. This is a higher-level function that transforms the original data inputs into the lower-level data inputs (list of list and tree).

Parameters:
  • X (ndarray, shape = [num_samples, num_features]) – An ndarray of the covariates used to train the uplift model.

  • treatment (array-like, shape = [num_samples]) – An array containing the treatment group for each unit.

  • y (array-like, shape = [num_samples]) – An array containing the outcome of interest for each unit.

Returns:

self

Return type:

object

fillTree(X, treatment_idx, y, tree)

Fill the data into an existing tree. This is a lower-level function to execute on the tree filling task.

Parameters:
  • X (ndarray, shape = [num_samples, num_features]) – An ndarray of the covariates used to train the uplift model.

  • treatment_idx (array-like, shape = [num_samples]) – An array containing the treatment group index for each unit.

  • y (array-like, shape = [num_samples]) – An array containing the outcome of interest for each unit.

  • tree (object) – object of DecisionTree class

Returns:

self

Return type:

object

fit(X, treatment, y, X_val=None, treatment_val=None, y_val=None)

Fit the uplift model.

Parameters:
  • X (ndarray, shape = [num_samples, num_features]) – An ndarray of the covariates used to train the uplift model.

  • treatment (array-like, shape = [num_samples]) – An array containing the treatment group for each unit.

  • y (array-like, shape = [num_samples]) – An array containing the outcome of interest for each unit.

Returns:

self

Return type:

object

group_uniqueCounts(treatment_idx, y)

Count sample size by experiment group.

Parameters:
  • treatment_idx (array-like, shape = [num_samples]) – An array containing the treatment group index for each unit.

  • y (array-like, shape = [num_samples]) – An array containing the outcome of interest for each unit.

Returns:

results – The negative and positive outcome sample sizes for each of the control and treatment groups.

Return type:

list of list

growDecisionTreeFrom(X, treatment_idx, y, X_val, treatment_val_idx, y_val, early_stopping_eval_diff_scale=1, max_depth=10, min_samples_leaf=100, depth=1, min_samples_treatment=10, n_reg=100, parentNodeSummary_p=None)

Train the uplift decision tree.

Parameters:
  • X (ndarray, shape = [num_samples, num_features]) – An ndarray of the covariates used to train the uplift model.

  • treatment_idx (array-like, shape = [num_samples]) – An array containing the treatment group idx for each unit. The dtype should be numpy.int8.

  • y (array-like, shape = [num_samples]) – An array containing the outcome of interest for each unit.

  • X_val (ndarray, shape = [num_samples, num_features]) – An ndarray of the covariates used to validate the uplift model.

  • treatment_val_idx (array-like, shape = [num_samples]) – An array containing the validation treatment group idx for each unit.

  • y_val (array-like, shape = [num_samples]) – An array containing the validation outcome of interest for each unit.

  • max_depth (int, optional (default=10)) – The maximum depth of the tree.

  • min_samples_leaf (int, optional (default=100)) – The minimum number of samples required to be split at a leaf node.

  • depth (int, optional (default = 1)) – The current depth.

  • min_samples_treatment (int, optional (default=10)) – The minimum number of samples required of the experiment group to be split at a leaf node.

  • n_reg (int, optional (default=100)) – The regularization parameter defined in Rzepakowski et al. 2012, the weight (in terms of sample size) of the parent node influence on the child node, only effective for ‘KL’, ‘ED’, ‘Chi’, ‘CTS’ methods.

  • parentNodeSummary_p (array-like, shape [n_class]) – Node summary probability statistics of the parent tree node.

Return type:

object of DecisionTree class

honestApproach(X_est, T_est, Y_est)

Apply the honest approach based on “Athey, S., & Imbens, G. (2016). Recursive partitioning for heterogeneous causal effects.”

Parameters:
  • X_est (ndarray, shape = [num_samples, num_features]) – An ndarray of the covariates used to calculate the unbiased estimates in the leaves of the decision tree.

  • T_est (array-like, shape = [num_samples]) – An array containing the treatment group for each unit.

  • Y_est (array-like, shape = [num_samples]) – An array containing the outcome of interest for each unit.

modifyEstimation(X_est, t_est, y_est, tree)

Modifies the leaves of the current decision tree to contain only unbiased estimates. Applies the honest approach based on “Athey, S., & Imbens, G. (2016). Recursive partitioning for heterogeneous causal effects.”

Parameters:
  • X_est (ndarray, shape = [num_samples, num_features]) – An ndarray of the covariates used to calculate the unbiased estimates in the leaves of the decision tree.

  • t_est (array-like, shape = [num_samples]) – An array containing the treatment group for each unit.

  • y_est (array-like, shape = [num_samples]) – An array containing the outcome of interest for each unit.

  • tree (object) – Object of the DecisionTree class; the current decision tree to be modified.

normI(n_c: int, n_c_left: int, n_t: list, n_t_left: list, alpha: float = 0.9, currentDivergence: float = 0.0) float

Normalization factor.

Parameters:
  • n_c (int) – The sample size of the control group in the current node.

  • n_c_left (int) – The sample size of the control group in the left child node.

  • n_t (list) – The sample sizes of the treatment groups in the current node.

  • n_t_left (list) – The sample sizes of the treatment groups in the left child node.

  • alpha (float) – The weight used to balance different normalization parts.

Returns:

norm_res – Normalization factor.

Return type:

float

predict(X)

Returns the recommended treatment group and predicted optimal probability conditional on using the recommended treatment group.

Parameters:

X (ndarray, shape = [num_samples, num_features]) – An ndarray of the covariates used to train the uplift model.

Returns:

pred – An ndarray of predicted treatment effects across treatments.

Return type:

ndarray, shape = [num_samples, num_treatments]

prune(X, treatment, y, minGain=0.0001, rule='maxAbsDiff')

Prune the uplift model.

Parameters:
  • X (ndarray, shape = [num_samples, num_features]) – An ndarray of the covariates used to train the uplift model.

  • treatment (array-like, shape = [num_samples]) – An array containing the treatment group for each unit.

  • y (array-like, shape = [num_samples]) – An array containing the outcome of interest for each unit.

  • minGain (float, optional (default = 0.0001)) – The minimum gain required to make a tree node split. The children tree branches are trimmed if the actual split gain is less than the minimum gain.

  • rule (string, optional (default = 'maxAbsDiff')) – The prune rules. Supported values are ‘maxAbsDiff’ for optimizing the maximum absolute difference, and ‘bestUplift’ for optimizing the node-size weighted treatment effect.

Returns:

self

Return type:

object

pruneTree(X, treatment_idx, y, tree, rule='maxAbsDiff', minGain=0.0, n_reg=0, parentNodeSummary=None)

Prune one single tree node in the uplift model.

Parameters:
  • X (ndarray, shape = [num_samples, num_features]) – An ndarray of the covariates used to train the uplift model.

  • treatment_idx (array-like, shape = [num_samples]) – An array containing the treatment group index for each unit.

  • y (array-like, shape = [num_samples]) – An array containing the outcome of interest for each unit.

  • rule (string, optional (default = 'maxAbsDiff')) – The prune rules. Supported values are ‘maxAbsDiff’ for optimizing the maximum absolute difference, and ‘bestUplift’ for optimizing the node-size weighted treatment effect.

  • minGain (float, optional (default = 0.)) – The minimum gain required to make a tree node split. The children tree branches are trimmed if the actual split gain is less than the minimum gain.

  • n_reg (int, optional (default=0)) – The regularization parameter defined in Rzepakowski et al. 2012, the weight (in terms of sample size) of the parent node influence on the child node, only effective for ‘KL’, ‘ED’, ‘Chi’, ‘CTS’ methods.

  • parentNodeSummary (list of list, optional (default = None)) – Node summary statistics, [P(Y=1|T), N(T)] of the parent tree node.

Returns:

self

Return type:

object

tree_node_summary(treatment_idx, y, min_samples_treatment=10, n_reg=100, parentNodeSummary=None)

Tree node summary statistics.

Parameters:
  • treatment_idx (array-like, shape = [num_samples]) – An array containing the treatment group index for each unit.

  • y (array-like, shape = [num_samples]) – An array containing the outcome of interest for each unit.

  • min_samples_treatment (int, optional (default=10)) – The minimum number of samples required of the experiment group to be split at a leaf node.

  • n_reg (int, optional (default=100)) – The regularization parameter defined in Rzepakowski et al. 2012, the weight (in terms of sample size) of the parent node influence on the child node, only effective for ‘KL’, ‘ED’, ‘Chi’, ‘CTS’ methods.

  • parentNodeSummary (list of list) – The positive probabilities and sample sizes of each of the control and treatment groups in the parent node.

Returns:

nodeSummary – The positive probabilities and sample sizes of each of the control and treatment groups in the current node.

Return type:

list of list

static tree_node_summary_from_counts(group_count_arr, out_summary_p, out_summary_n, parentNodeSummary_p, has_parent_summary, min_samples_treatment=10, n_reg=100)

Tree node summary statistics.

Modified from tree_node_summary_to_arr to use a different format for the summary and to calculate from already computed group counts. Instead of [[P(Y=1|T=0), N(T=0)], [P(Y=1|T=1), N(T=1)], …], two arrays [N(T=i)…] and [P(Y=1|T=i)…] are used.

Parameters:
  • group_count_arr (array of shape [2*n_class]) – Has type numpy.int32. The group counts, where entry 2*i is N(Y=0, T=i), and entry 2*i+1 is N(Y=1, T=i).

  • out_summary_p (array of shape [n_class]) – Has type numpy.double. To be filled with the positive probabilities of each of the control and treatment groups of the current node.

  • out_summary_n (array of shape [n_class]) – Has type numpy.int32. To be filled with the counts of each of the control and treatment groups of the current node.

  • parentNodeSummary_p (array of shape [n_class]) – The positive probabilities of each of the control and treatment groups in the parent node.

  • has_parent_summary (bool as int) – If True (non-zero), parentNodeSummary_p contains valid parent node summary probabilities. If False (0), no parent node summary is assumed and parentNodeSummary_p is not touched.

  • min_samples_treatment (int, optional (default=10)) – The minimum number of samples required of the experiment group to be split at a leaf node.

  • n_reg (int, optional (default=100)) – The regularization parameter defined in Rzepakowski et al. 2012, the weight (in terms of sample size) of the parent node influence on the child node, only effective for ‘KL’, ‘ED’, ‘Chi’, ‘CTS’ methods.

Return type:

No return value; modifies out_summary_p and out_summary_n in place.

static tree_node_summary_to_arr(treatment_idx, y, out_summary_p, out_summary_n, buf_count_arr, parentNodeSummary_p, has_parent_summary, min_samples_treatment=10, n_reg=100)

Tree node summary statistics. Modified from tree_node_summary, to use different format for the summary. Instead of [[P(Y=1|T=0), N(T=0)], [P(Y=1|T=1), N(T=1)], …], use two arrays [N(T=i)…] and [P(Y=1|T=i)…].

Parameters:
  • treatment_idx (array-like, shape = [num_samples]) – An array containing the treatment group index for each unit. Has type numpy.int8.

  • y (array-like, shape = [num_samples]) – An array containing the outcome of interest for each unit. Has type numpy.int8.

  • out_summary_p (array of shape [n_class]) – Has type numpy.double. To be filled with the positive probabilities of each of the control and treatment groups of the current node.

  • out_summary_n (array of shape [n_class]) – Has type numpy.int32. To be filled with the counts of each of the control and treatment groups of the current node.

  • buf_count_arr (array of shape [2*n_class]) – Has type numpy.int32. To be used as a temporary buffer for group_uniqueCounts_to_arr.

  • parentNodeSummary_p (array of shape [n_class]) – The positive probabilities of each of the control and treatment groups in the parent node.

  • has_parent_summary (bool as int) – If True (non-zero), parentNodeSummary_p contains valid parent node summary probabilities. If False (0), no parent node summary is assumed and parentNodeSummary_p is not touched.

  • min_samples_treatment (int, optional (default=10)) – The minimum number of samples required of the experiment group to be split at a leaf node.

  • n_reg (int, optional (default=100)) – The regularization parameter defined in Rzepakowski et al. 2012, the weight (in terms of sample size) of the parent node influence on the child node, only effective for ‘KL’, ‘ED’, ‘Chi’, ‘CTS’ methods.

Return type:

No return value; modifies out_summary_p and out_summary_n in place.

uplift_classification_results(treatment_idx, y)

Classification probability for each treatment in the tree node.

Parameters:
  • treatment_idx (array-like, shape = [num_samples]) – An array containing the treatment group index for each unit.

  • y (array-like, shape = [num_samples]) – An array containing the outcome of interest for each unit.

Returns:

res – The positive probabilities P(Y = 1) of each of the control and treatment groups

Return type:

list of list

causalml.inference.tree.cat_continuous(x, granularity='Medium')[source]

Categorize (bin) a continuous variable based on percentiles.

Parameters:
  • x (list) – Feature values.

  • granularity (string, optional (default = 'Medium')) – Control the granularity of the bins; possible values are ‘High’, ‘Medium’, and ‘Low’.

Returns:

res – List of percentile bins for the feature value.

Return type:

list
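
A small illustrative call (the feature values are made up):

from causalml.inference.tree import cat_continuous

x = list(range(100))
bins = cat_continuous(x, granularity='Low')  # one percentile-bin label per value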

causalml.inference.tree.cat_group(dfx, kpix, n_group=10)[source]

Category Reduction for Categorical Variables

Parameters:
  • dfx (dataframe) – The inputs data dataframe.

  • kpix (string) – The column of the feature.

  • n_group (int, optional (default = 10)) – The number of top category values to retain; all other category values are grouped into “Other”.

Return type:

The transformed categorical feature value list.

causalml.inference.tree.cat_transform(dfx, kpix, kpi1)[source]

Encode string features.

Parameters:
  • dfx (dataframe) – The inputs data dataframe.

  • kpix (string) – The column of the feature.

  • kpi1 (list) – The list of feature names.

Returns:

  • dfx (DataFrame) – The updated dataframe containing the encoded data.

  • kpi1 (list) – The updated feature names containing the new dummy feature names.

causalml.inference.tree.cv_fold_index(n, i, k, random_seed=2018)[source]

Generate the sample indices for the i-th of k cross-validation folds.

Parameters:
  • n (int) – The total number of samples.

  • i (int) – The index of the fold to generate.

  • k (int) – The total number of folds.

  • random_seed (int, optional (default = 2018)) – The random seed used to shuffle samples before assigning folds.

Returns:

fold_index – The indices of the samples in the i-th fold.

Return type:

np.ndarray

causalml.inference.tree.get_tree_leaves_mask(tree) ndarray[source]

Get the mask array for the tree leaves.

Parameters:

tree (CausalTreeRegressor) – Tree object.

Returns:

Mask array.

Return type:

np.ndarray

causalml.inference.tree.kpi_transform(dfx, kpi_combo, kpi_combo_new)[source]

Transform continuous features into binned features for a list of features.

Parameters:
  • dfx (DataFrame) – DataFrame containing the features.

  • kpi_combo (list of string) – List of feature names to be transformed

  • kpi_combo_new (list of string) – List of new feature names to be assigned to the transformed features.

Returns:

dfx – Updated DataFrame containing the new features.

Return type:

DataFrame
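
A hedged sketch of binning two numeric columns; the column names are hypothetical:

import numpy as np
import pandas as pd
from causalml.inference.tree import kpi_transform

rng = np.random.default_rng(0)
df = pd.DataFrame({'age': rng.integers(18, 70, size=200),
                   'income': rng.normal(50_000, 10_000, size=200)})
df = kpi_transform(df, kpi_combo=['age', 'income'],
                   kpi_combo_new=['age_bin', 'income_bin'])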

causalml.inference.tree.plot_dist_tree_leaves_values(tree: CausalTreeRegressor, title: str = 'Leaves values distribution', figsize: tuple = (5, 5), fontsize: int = 12) None[source]

Create a distribution plot of the tree leaves values.

Parameters:
  • tree (CausalTreeRegressor) – Tree object.

  • title (str) – Plot title.

  • figsize (tuple) – Figure size.

  • fontsize (int) – Title font size.

Returns:

None
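
For example, assuming the CausalTreeRegressor ctree fitted in the earlier sketch:

from causalml.inference.tree import plot_dist_tree_leaves_values

plot_dist_tree_leaves_values(ctree, title='Leaves values distribution')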

causalml.inference.tree.uplift_tree_plot(decisionTree, x_names)[source]

Convert the tree to a dot graph for plotting.

Parameters:
  • decisionTree (object) – object of DecisionTree class

  • x_names (list) – List of feature names

Return type:

Dot class representing the tree graph.
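
A sketch of rendering the uplift tree fitted in the earlier sketch; the fitted_uplift_tree attribute and the pydotplus create_png() call are assumptions based on the library's examples.

from causalml.inference.tree import uplift_tree_plot

graph = uplift_tree_plot(tree.fitted_uplift_tree, x_names)
with open('uplift_tree.png', 'wb') as f:
    f.write(graph.create_png())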

causalml.inference.tree.uplift_tree_string(decisionTree, x_names)[source]

Convert the tree to a string for printing.

Parameters:
  • decisionTree (object) – object of DecisionTree class

  • x_names (list) – List of feature names

Return type:

A string representation of the tree.

causalml.inference.meta module

class causalml.inference.meta.BaseDRLearner(learner=None, control_outcome_learner=None, treatment_outcome_learner=None, treatment_effect_learner=None, ate_alpha=0.05, control_name=0)[source]

Bases: BaseLearner

A parent class for DR-learner classes.

A DR-learner estimates treatment effects with machine learning models.

Details of DR-learner are available at Kennedy (2020) (https://arxiv.org/abs/2004.14497).

estimate_ate(X, treatment, y, p=None, bootstrap_ci=False, n_bootstraps=1000, bootstrap_size=10000, seed=None, pretrain=False)[source]

Estimate the Average Treatment Effect (ATE).

Parameters:
  • X (np.matrix or np.array or pd.Dataframe) – a feature matrix

  • treatment (np.array or pd.Series) – a treatment vector

  • y (np.array or pd.Series) – an outcome vector

  • p (np.ndarray or pd.Series or dict, optional) – an array of propensity scores of float (0,1) in the single-treatment case; or, a dictionary of treatment groups that map to propensity vectors of float (0,1); if None will run ElasticNetPropensityModel() to generate the propensity scores.

  • bootstrap_ci (bool) – whether run bootstrap for confidence intervals

  • n_bootstraps (int) – number of bootstrap iterations

  • bootstrap_size (int) – number of samples per bootstrap

  • seed (int) – random seed for cross-fitting

  • pretrain (bool) – whether a model has been fit, default False.

Returns:

The mean and confidence interval (LB, UB) of the ATE estimate.

fit(X, treatment, y, p=None, seed=None)[source]

Fit the inference model.

Parameters:
  • X (np.matrix or np.array or pd.Dataframe) – a feature matrix

  • treatment (np.array or pd.Series) – a treatment vector

  • y (np.array or pd.Series) – an outcome vector

  • p (np.ndarray or pd.Series or dict, optional) – an array of propensity scores of float (0,1) in the single-treatment case; or, a dictionary of treatment groups that map to propensity vectors of float (0,1); if None will run ElasticNetPropensityModel() to generate the propensity scores.

  • seed (int) – random seed for cross-fitting

fit_predict(X, treatment, y, p=None, return_ci=False, n_bootstraps=1000, bootstrap_size=10000, return_components=False, verbose=True, seed=None)[source]

Fit the treatment effect and outcome models of the DR learner and predict treatment effects.

Parameters:
  • X (np.matrix or np.array or pd.Dataframe) – a feature matrix

  • treatment (np.array or pd.Series) – a treatment vector

  • y (np.array or pd.Series) – an outcome vector

  • p (np.ndarray or pd.Series or dict, optional) – an array of propensity scores of float (0,1) in the single-treatment case; or, a dictionary of treatment groups that map to propensity vectors of float (0,1); if None will run ElasticNetPropensityModel() to generate the propensity scores.

  • return_ci (bool) – whether to return confidence intervals

  • n_bootstraps (int) – number of bootstrap iterations

  • bootstrap_size (int) – number of samples per bootstrap

  • return_components (bool, optional) – whether to return outcome for treatment and control separately

  • verbose (bool) – whether to output progress logs

  • seed (int) – random seed for cross-fitting

Returns:

Predictions of treatment effects. Output dim: [n_samples, n_treatment]

If return_ci, returns CATE [n_samples, n_treatment], LB [n_samples, n_treatment], UB [n_samples, n_treatment]

Return type:

(numpy.ndarray)

predict(X, treatment=None, y=None, p=None, return_components=False, verbose=True)[source]

Predict treatment effects.

Parameters:
  • X (np.matrix or np.array or pd.Dataframe) – a feature matrix

  • treatment (np.array or pd.Series, optional) – a treatment vector

  • y (np.array or pd.Series, optional) – an outcome vector

  • return_components (bool, optional) – whether to return outcome for treatment and control separately

  • verbose (bool, optional) – whether to output progress logs

Returns:

Predictions of treatment effects.

Return type:

(numpy.ndarray)

class causalml.inference.meta.BaseDRRegressor(learner=None, control_outcome_learner=None, treatment_outcome_learner=None, treatment_effect_learner=None, ate_alpha=0.05, control_name=0)[source]

Bases: BaseDRLearner

A parent class for DR-learner regressor classes.
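
A minimal DR-learner sketch; XGBRegressor is an assumed choice of base learner and synthetic_data is the same assumed helper as above.

from xgboost import XGBRegressor
from causalml.dataset import synthetic_data
from causalml.inference.meta import BaseDRRegressor

y, X, treatment, tau, b, e = synthetic_data(mode=1, n=5000, p=8)

dr = BaseDRRegressor(learner=XGBRegressor(), control_name=0)
cate = dr.fit_predict(X, treatment, y)   # shape [n_samples, n_treatment]
ate = dr.estimate_ate(X, treatment, y)   # mean and CI bounds per the docstring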

class causalml.inference.meta.BaseRClassifier(outcome_learner=None, effect_learner=None, propensity_learner=LogisticRegressionCV(Cs=array([1.00230524, 2.15608891, 4.63802765, 9.97700064]), cv=StratifiedKFold(n_splits=4, random_state=42, shuffle=True), l1_ratios=array([0.001, 0.33366667, 0.66633333, 0.999]), penalty='elasticnet', random_state=42, solver='saga'), ate_alpha=0.05, control_name=0, n_fold=5, random_state=None)[source]

Bases: BaseRLearner

A parent class for R-learner classifier classes.

fit(X, treatment, y, p=None, sample_weight=None, verbose=True)[source]

Fit the treatment effect and outcome models of the R learner.

Parameters:
  • X (np.matrix or np.array or pd.Dataframe) – a feature matrix

  • treatment (np.array or pd.Series) – a treatment vector

  • y (np.array or pd.Series) – an outcome vector

  • p (np.ndarray or pd.Series or dict, optional) – an array of propensity scores of float (0,1) in the single-treatment case; or, a dictionary of treatment groups that map to propensity vectors of float (0,1); if None will run ElasticNetPropensityModel() to generate the propensity scores.

  • sample_weight (np.array or pd.Series, optional) – an array of sample weights indicating the weight of each observation for effect_learner. If None, it assumes equal weight.

  • verbose (bool, optional) – whether to output progress logs

predict(X, p=None)[source]

Predict treatment effects.

Parameters:

X (np.matrix or np.array or pd.Dataframe) – a feature matrix

Returns:

Predictions of treatment effects.

Return type:

(numpy.ndarray)

class causalml.inference.meta.BaseRLearner(learner=None, outcome_learner=None, effect_learner=None, propensity_learner=LogisticRegressionCV(Cs=array([1.00230524, 2.15608891, 4.63802765, 9.97700064]), cv=StratifiedKFold(n_splits=4, random_state=42, shuffle=True), l1_ratios=array([0.001, 0.33366667, 0.66633333, 0.999]), penalty='elasticnet', random_state=42, solver='saga'), ate_alpha=0.05, control_name=0, n_fold=5, random_state=None, cv_n_jobs=-1)[source]

Bases: BaseLearner

A parent class for R-learner classes.

An R-learner estimates treatment effects with two machine learning models and the propensity score.

Details of R-learner are available at Nie and Wager (2019) (https://arxiv.org/abs/1712.04912).

estimate_ate(X, treatment=None, y=None, p=None, sample_weight=None, bootstrap_ci=False, n_bootstraps=1000, bootstrap_size=10000, pretrain=False)[source]

Estimate the Average Treatment Effect (ATE).

Parameters:
  • X (np.matrix or np.array or pd.Dataframe) – a feature matrix

  • treatment (np.array or pd.Series) – a treatment vector; only needed when pretrain=False

  • y (np.array or pd.Series) – an outcome vector; only needed when pretrain=False

  • p (np.ndarray or pd.Series or dict, optional) – an array of propensity scores of float (0,1) in the single-treatment case; or, a dictionary of treatment groups that map to propensity vectors of float (0,1); if None will run ElasticNetPropensityModel() to generate the propensity scores.

  • sample_weight (np.array or pd.Series, optional) – an array of sample weights indicating the weight of each observation for effect_learner. If None, it assumes equal weight.

  • bootstrap_ci (bool) – whether run bootstrap for confidence intervals

  • n_bootstraps (int) – number of bootstrap iterations

  • bootstrap_size (int) – number of samples per bootstrap

  • pretrain (bool) – whether a model has been fit, default False.

Returns:

The mean and confidence interval (LB, UB) of the ATE estimate.

fit(X, treatment, y, p=None, sample_weight=None, verbose=True)[source]

Fit the treatment effect and outcome models of the R learner.

Parameters:
  • X (np.matrix or np.array or pd.Dataframe) – a feature matrix

  • treatment (np.array or pd.Series) – a treatment vector

  • y (np.array or pd.Series) – an outcome vector

  • p (np.ndarray or pd.Series or dict, optional) – an array of propensity scores of float (0,1) in the single-treatment case; or, a dictionary of treatment groups that map to propensity vectors of float (0,1); if None will run ElasticNetPropensityModel() to generate the propensity scores.

  • sample_weight (np.array or pd.Series, optional) – an array of sample weights indicating the weight of each observation for effect_learner. If None, it assumes equal weight.

  • verbose (bool, optional) – whether to output progress logs

fit_predict(X, treatment, y, p=None, sample_weight=None, return_ci=False, n_bootstraps=1000, bootstrap_size=10000, verbose=True)[source]

Fit the treatment effect and outcome models of the R learner and predict treatment effects.

Parameters:
  • X (np.matrix or np.array or pd.Dataframe) – a feature matrix

  • treatment (np.array or pd.Series) – a treatment vector

  • y (np.array or pd.Series) – an outcome vector

  • p (np.ndarray or pd.Series or dict, optional) – an array of propensity scores of float (0,1) in the single-treatment case; or, a dictionary of treatment groups that map to propensity vectors of float (0,1); if None will run ElasticNetPropensityModel() to generate the propensity scores.

  • sample_weight (np.array or pd.Series, optional) – an array of sample weights indicating the weight of each observation for effect_learner. If None, it assumes equal weight.

  • return_ci (bool) – whether to return confidence intervals

  • n_bootstraps (int) – number of bootstrap iterations

  • bootstrap_size (int) – number of samples per bootstrap

  • verbose (bool) – whether to output progress logs

Returns:

Predictions of treatment effects. Output dim: [n_samples, n_treatment].

If return_ci, returns CATE [n_samples, n_treatment], LB [n_samples, n_treatment], UB [n_samples, n_treatment]

Return type:

(numpy.ndarray)

predict(X, p=None)[source]

Predict treatment effects.

Parameters:

X (np.matrix or np.array or pd.Dataframe) – a feature matrix

Returns:

Predictions of treatment effects.

Return type:

(numpy.ndarray)

class causalml.inference.meta.BaseRRegressor(learner=None, outcome_learner=None, effect_learner=None, propensity_learner=LogisticRegressionCV(Cs=array([1.00230524, 2.15608891, 4.63802765, 9.97700064]), cv=StratifiedKFold(n_splits=4, random_state=42, shuffle=True), l1_ratios=array([0.001, 0.33366667, 0.66633333, 0.999]), penalty='elasticnet', random_state=42, solver='saga'), ate_alpha=0.05, control_name=0, n_fold=5, random_state=None)[source]

Bases: BaseRLearner

A parent class for R-learner regressor classes.
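
A minimal R-learner sketch under the same assumptions; passing the known propensity e is optional (it is estimated when p=None).

from xgboost import XGBRegressor
from causalml.dataset import synthetic_data
from causalml.inference.meta import BaseRRegressor

y, X, treatment, tau, b, e = synthetic_data(mode=1, n=5000, p=8)

r = BaseRRegressor(learner=XGBRegressor(), control_name=0)
cate = r.fit_predict(X, treatment, y, p=e)  # or omit p to estimate propensity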

class causalml.inference.meta.BaseSClassifier(learner=None, ate_alpha=0.05, control_name=0)[source]

Bases: BaseSLearner

A parent class for S-learner classifier classes.

predict(X, treatment=None, y=None, p=None, return_components=False, verbose=True)[source]

Predict treatment effects.

Parameters:
  • X (np.matrix or np.array or pd.Dataframe) – a feature matrix

  • treatment (np.array or pd.Series, optional) – a treatment vector

  • y (np.array or pd.Series, optional) – an outcome vector

  • return_components (bool, optional) – whether to return outcome for treatment and control separately

  • verbose (bool, optional) – whether to output progress logs

Returns:

Predictions of treatment effects.

Return type:

(numpy.ndarray)

class causalml.inference.meta.BaseSLearner(learner=None, ate_alpha=0.05, control_name=0)[source]

Bases: BaseLearner

A parent class for S-learner classes. An S-learner estimates treatment effects with one machine learning model. Details of S-learner are available at Kunzel et al. (2018) (https://arxiv.org/abs/1706.03461).

estimate_ate(X, treatment, y, p=None, return_ci=False, bootstrap_ci=False, n_bootstraps=1000, bootstrap_size=10000, pretrain=False)[source]

Estimate the Average Treatment Effect (ATE).

Parameters:
  • X (np.matrix, np.array, or pd.Dataframe) – a feature matrix

  • treatment (np.array or pd.Series) – a treatment vector

  • y (np.array or pd.Series) – an outcome vector

  • return_ci (bool, optional) – whether to return confidence intervals

  • bootstrap_ci (bool) – whether to run bootstrap for confidence intervals

  • n_bootstraps (int) – number of bootstrap iterations

  • bootstrap_size (int) – number of samples per bootstrap

  • pretrain (bool) – whether a model has been fit, default False.

Returns:

The mean and confidence interval (LB, UB) of the ATE estimate.

fit(X, treatment, y, p=None)[source]

Fit the inference model.

Parameters:
  • X (np.matrix, np.array, or pd.Dataframe) – a feature matrix

  • treatment (np.array or pd.Series) – a treatment vector

  • y (np.array or pd.Series) – an outcome vector

fit_predict(X, treatment, y, p=None, return_ci=False, n_bootstraps=1000, bootstrap_size=10000, return_components=False, verbose=True)[source]

Fit the inference model of the S learner and predict treatment effects.

Parameters:
  • X (np.matrix, np.array, or pd.Dataframe) – a feature matrix

  • treatment (np.array or pd.Series) – a treatment vector

  • y (np.array or pd.Series) – an outcome vector

  • return_ci (bool, optional) – whether to return confidence intervals

  • n_bootstraps (int, optional) – number of bootstrap iterations

  • bootstrap_size (int, optional) – number of samples per bootstrap

  • return_components (bool, optional) – whether to return outcome for treatment and control separately

  • verbose (bool, optional) – whether to output progress logs

Returns:

Predictions of treatment effects. Output dim: [n_samples, n_treatment].

If return_ci, returns CATE [n_samples, n_treatment], LB [n_samples, n_treatment], UB [n_samples, n_treatment]

Return type:

(numpy.ndarray)

predict(X, treatment=None, y=None, p=None, return_components=False, verbose=True)[source]

Predict treatment effects.

Parameters:
  • X (np.matrix or np.array or pd.Dataframe) – a feature matrix

  • treatment (np.array or pd.Series, optional) – a treatment vector

  • y (np.array or pd.Series, optional) – an outcome vector

  • return_components (bool, optional) – whether to return outcome for treatment and control separately

  • verbose (bool, optional) – whether to output progress logs

Returns:

Predictions of treatment effects.

Return type:

(numpy.ndarray)

class causalml.inference.meta.BaseSRegressor(learner=None, ate_alpha=0.05, control_name=0)[source]

Bases: BaseSLearner

A parent class for S-learner regressor classes.
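
A minimal usage sketch (the GradientBoostingRegressor base learner is an illustrative choice; the three-value return follows the package’s examples):

    from sklearn.ensemble import GradientBoostingRegressor
    from causalml.dataset import synthetic_data
    from causalml.inference.meta import BaseSRegressor

    y, X, w, tau, b, e = synthetic_data(mode=1, n=1000, p=5)

    # S-learner: a single model fit on the features plus the treatment
    # indicator; the effect is read off by toggling the indicator.
    learner = BaseSRegressor(learner=GradientBoostingRegressor())
    ate, lb, ub = learner.estimate_ate(X, w, y)
    cate = learner.fit_predict(X, w, y)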

class causalml.inference.meta.BaseTClassifier(learner=None, control_learner=None, treatment_learner=None, ate_alpha=0.05, control_name=0)[source]

Bases: BaseTLearner

A parent class for T-learner classifier classes.

predict(X, treatment=None, y=None, p=None, return_components=False, verbose=True)[source]

Predict treatment effects.

Parameters:
  • X (np.matrix or np.array or pd.Dataframe) – a feature matrix

  • treatment (np.array or pd.Series, optional) – a treatment vector

  • y (np.array or pd.Series, optional) – an outcome vector

  • return_components (bool, optional) – whether to return outcome for treatment and control separately

  • verbose (bool, optional) – whether to output progress logs

Returns:

Predictions of treatment effects.

Return type:

(numpy.ndarray)

class causalml.inference.meta.BaseTLearner(learner=None, control_learner=None, treatment_learner=None, ate_alpha=0.05, control_name=0)[source]

Bases: BaseLearner

A parent class for T-learner classes.

A T-learner estimates treatment effects with two machine learning models.

Details of T-learner are available at Kunzel et al. (2018) (https://arxiv.org/abs/1706.03461).
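
A minimal usage sketch (the base learner choice is illustrative; the three-value return follows the package’s examples):

    from sklearn.ensemble import GradientBoostingRegressor
    from causalml.dataset import synthetic_data
    from causalml.inference.meta import BaseTRegressor

    y, X, w, tau, b, e = synthetic_data(mode=1, n=1000, p=5)

    # T-learner: separate outcome models for control and treatment.
    learner = BaseTRegressor(learner=GradientBoostingRegressor())
    ate, lb, ub = learner.estimate_ate(X, w, y)
    cate = learner.fit_predict(X, w, y)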

estimate_ate(X, treatment, y, p=None, bootstrap_ci=False, n_bootstraps=1000, bootstrap_size=10000, pretrain=False)[source]

Estimate the Average Treatment Effect (ATE).

Parameters:
  • X (np.matrix or np.array or pd.Dataframe) – a feature matrix

  • treatment (np.array or pd.Series) – a treatment vector

  • y (np.array or pd.Series) – an outcome vector

  • bootstrap_ci (bool) – whether to return confidence intervals

  • n_bootstraps (int) – number of bootstrap iterations

  • bootstrap_size (int) – number of samples per bootstrap

  • pretrain (bool) – whether a model has been fit, default False

Returns:

The mean and confidence interval (LB, UB) of the ATE estimate.

fit(X, treatment, y, p=None)[source]

Fit the inference model

Parameters:
  • X (np.matrix or np.array or pd.Dataframe) – a feature matrix

  • treatment (np.array or pd.Series) – a treatment vector

  • y (np.array or pd.Series) – an outcome vector

fit_predict(X, treatment, y, p=None, return_ci=False, n_bootstraps=1000, bootstrap_size=10000, return_components=False, verbose=True)[source]

Fit the inference model of the T learner and predict treatment effects.

Parameters:
  • X (np.matrix or np.array or pd.Dataframe) – a feature matrix

  • treatment (np.array or pd.Series) – a treatment vector

  • y (np.array or pd.Series) – an outcome vector

  • return_ci (bool) – whether to return confidence intervals

  • n_bootstraps (int) – number of bootstrap iterations

  • bootstrap_size (int) – number of samples per bootstrap

  • return_components (bool, optional) – whether to return outcome for treatment and control separately

  • verbose (bool) – whether to output progress logs

Returns:

Predictions of treatment effects. Output dim: [n_samples, n_treatment].

If return_ci, returns CATE [n_samples, n_treatment], LB [n_samples, n_treatment], UB [n_samples, n_treatment]

Return type:

(numpy.ndarray)

predict(X, treatment=None, y=None, p=None, return_components=False, verbose=True)[source]

Predict treatment effects.

Parameters:
  • X (np.matrix or np.array or pd.Dataframe) – a feature matrix

  • treatment (np.array or pd.Series, optional) – a treatment vector

  • y (np.array or pd.Series, optional) – an outcome vector

  • return_components (bool, optional) – whether to return outcome for treatment and control separately

  • verbose (bool, optional) – whether to output progress logs

Returns:

Predictions of treatment effects.

Return type:

(numpy.ndarray)

class causalml.inference.meta.BaseTRegressor(learner=None, control_learner=None, treatment_learner=None, ate_alpha=0.05, control_name=0)[source]

Bases: BaseTLearner

A parent class for T-learner regressor classes.

class causalml.inference.meta.BaseXClassifier(outcome_learner=None, effect_learner=None, control_outcome_learner=None, treatment_outcome_learner=None, control_effect_learner=None, treatment_effect_learner=None, ate_alpha=0.05, control_name=0)[source]

Bases: BaseXLearner

A parent class for X-learner classifier classes.

fit(X, treatment, y, p=None)[source]

Fit the inference model.

Parameters:
  • X (np.matrix or np.array or pd.Dataframe) – a feature matrix

  • treatment (np.array or pd.Series) – a treatment vector

  • y (np.array or pd.Series) – an outcome vector

  • p (np.ndarray or pd.Series or dict, optional) – an array of propensity scores of float (0,1) in the single-treatment case; or, a dictionary of treatment groups that map to propensity vectors of float (0,1); if None will run ElasticNetPropensityModel() to generate the propensity scores.

predict(X, treatment=None, y=None, p=None, return_components=False, verbose=True)[source]

Predict treatment effects.

Parameters:
  • X (np.matrix or np.array or pd.Dataframe) – a feature matrix

  • treatment (np.array or pd.Series, optional) – a treatment vector

  • y (np.array or pd.Series, optional) – an outcome vector

  • p (np.ndarray or pd.Series or dict, optional) – an array of propensity scores of float (0,1) in the single-treatment case; or, a dictionary of treatment groups that map to propensity vectors of float (0,1); if None will run ElasticNetPropensityModel() to generate the propensity scores.

  • return_components (bool, optional) – whether to return outcome for treatment and control separately

  • return_p_score (bool, optional) – whether to return propensity score

  • verbose (bool, optional) – whether to output progress logs

Returns:

Predictions of treatment effects.

Return type:

(numpy.ndarray)

class causalml.inference.meta.BaseXLearner(learner=None, control_outcome_learner=None, treatment_outcome_learner=None, control_effect_learner=None, treatment_effect_learner=None, ate_alpha=0.05, control_name=0)[source]

Bases: BaseLearner

A parent class for X-learner classes.

An X-learner estimates treatment effects with four machine learning models.

Details of X-learner are available at Kunzel et al. (2018) (https://arxiv.org/abs/1706.03461).
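
A minimal usage sketch (the base learner choice is illustrative; the propensity vector e comes from the simulator and is passed as p, following the package’s examples):

    from sklearn.ensemble import GradientBoostingRegressor
    from causalml.dataset import synthetic_data
    from causalml.inference.meta import BaseXRegressor

    y, X, w, tau, b, e = synthetic_data(mode=1, n=1000, p=5)

    # X-learner: two outcome models plus two imputed-effect models,
    # combined using the propensity score.
    learner = BaseXRegressor(learner=GradientBoostingRegressor())
    ate, lb, ub = learner.estimate_ate(X, w, y, p=e)
    cate = learner.fit_predict(X, w, y, p=e)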

estimate_ate(X, treatment, y, p=None, bootstrap_ci=False, n_bootstraps=1000, bootstrap_size=10000, pretrain=False)[source]

Estimate the Average Treatment Effect (ATE).

Parameters:
  • X (np.matrix or np.array or pd.Dataframe) – a feature matrix

  • treatment (np.array or pd.Series) – a treatment vector

  • y (np.array or pd.Series) – an outcome vector

  • p (np.ndarray or pd.Series or dict, optional) – an array of propensity scores of float (0,1) in the single-treatment case; or, a dictionary of treatment groups that map to propensity vectors of float (0,1); if None will run ElasticNetPropensityModel() to generate the propensity scores.

  • bootstrap_ci (bool) – whether to run bootstrap for confidence intervals

  • n_bootstraps (int) – number of bootstrap iterations

  • bootstrap_size (int) – number of samples per bootstrap

  • pretrain (bool) – whether a model has been fit, default False.

Returns:

The mean and confidence interval (LB, UB) of the ATE estimate.

fit(X, treatment, y, p=None)[source]

Fit the inference model.

Parameters:
  • X (np.matrix or np.array or pd.Dataframe) – a feature matrix

  • treatment (np.array or pd.Series) – a treatment vector

  • y (np.array or pd.Series) – an outcome vector

  • p (np.ndarray or pd.Series or dict, optional) – an array of propensity scores of float (0,1) in the single-treatment case; or, a dictionary of treatment groups that map to propensity vectors of float (0,1); if None will run ElasticNetPropensityModel() to generate the propensity scores.

fit_predict(X, treatment, y, p=None, return_ci=False, n_bootstraps=1000, bootstrap_size=10000, return_components=False, verbose=True)[source]

Fit the treatment effect and outcome models of the X learner and predict treatment effects.

Parameters:
  • X (np.matrix or np.array or pd.Dataframe) – a feature matrix

  • treatment (np.array or pd.Series) – a treatment vector

  • y (np.array or pd.Series) – an outcome vector

  • p (np.ndarray or pd.Series or dict, optional) – an array of propensity scores of float (0,1) in the single-treatment case; or, a dictionary of treatment groups that map to propensity vectors of float (0,1); if None will run ElasticNetPropensityModel() to generate the propensity scores.

  • return_ci (bool) – whether to return confidence intervals

  • n_bootstraps (int) – number of bootstrap iterations

  • bootstrap_size (int) – number of samples per bootstrap

  • return_components (bool, optional) – whether to return outcome for treatment and control separately

  • verbose (bool) – whether to output progress logs

Returns:

Predictions of treatment effects. Output dim: [n_samples, n_treatment]

If return_ci, returns CATE [n_samples, n_treatment], LB [n_samples, n_treatment], UB [n_samples, n_treatment]

Return type:

(numpy.ndarray)

predict(X, treatment=None, y=None, p=None, return_components=False, verbose=True)[source]

Predict treatment effects.

Parameters:
  • X (np.matrix or np.array or pd.Dataframe) – a feature matrix

  • treatment (np.array or pd.Series, optional) – a treatment vector

  • y (np.array or pd.Series, optional) – an outcome vector

  • p (np.ndarray or pd.Series or dict, optional) – an array of propensity scores of float (0,1) in the single-treatment case; or, a dictionary of treatment groups that map to propensity vectors of float (0,1); if None will run ElasticNetPropensityModel() to generate the propensity scores.

  • return_components (bool, optional) – whether to return outcome for treatment and control separately

  • verbose (bool, optional) – whether to output progress logs

Returns:

Predictions of treatment effects.

Return type:

(numpy.ndarray)

class causalml.inference.meta.BaseXRegressor(learner=None, control_outcome_learner=None, treatment_outcome_learner=None, control_effect_learner=None, treatment_effect_learner=None, ate_alpha=0.05, control_name=0)[source]

Bases: BaseXLearner

A parent class for X-learner regressor classes.

class causalml.inference.meta.LRSRegressor(ate_alpha=0.05, control_name=0)[source]

Bases: BaseSRegressor

estimate_ate(X, treatment, y, p=None, pretrain=False)[source]

Estimate the Average Treatment Effect (ATE).

Parameters:
  • X (np.matrix, np.array, or pd.Dataframe) – a feature matrix

  • treatment (np.array or pd.Series) – a treatment vector

  • y (np.array or pd.Series) – an outcome vector

Returns:

The mean and confidence interval (LB, UB) of the ATE estimate.

class causalml.inference.meta.MLPTRegressor(ate_alpha=0.05, control_name=0, *args, **kwargs)[source]

Bases: BaseTRegressor

class causalml.inference.meta.TMLELearner(learner, ate_alpha=0.05, control_name=0, cv=None, calibrate_propensity=True)[source]

Bases: object

Targeted maximum likelihood estimation.

Ref: Gruber, S., & Van Der Laan, M. J. (2009). Targeted maximum likelihood estimation: A gentle introduction.

estimate_ate(X, treatment, y, p, segment=None, return_ci=False)[source]

Estimate the Average Treatment Effect (ATE).

Parameters:
  • X (np.matrix or np.array or pd.Dataframe) – a feature matrix

  • treatment (np.array or pd.Series) – a treatment vector

  • y (np.array or pd.Series) – an outcome vector

  • p (np.ndarray or pd.Series or dict) – an array of propensity scores of float (0,1) in the single-treatment case; or, a dictionary of treatment groups that map to propensity vectors of float (0,1)

  • segment (np.array, optional) – An optional segment vector of int. If given, the ATE and its CI will be estimated for each segment.

  • return_ci (bool, optional) – Whether to return confidence intervals

Returns:

The ATE and its confidence interval (LB, UB) for each treatment t and segment s

Return type:

(tuple)

class causalml.inference.meta.XGBDRRegressor(ate_alpha=0.05, control_name=0, *args, **kwargs)[source]

Bases: BaseDRRegressor

class causalml.inference.meta.XGBRRegressor(early_stopping=True, test_size=0.3, early_stopping_rounds=30, effect_learner_objective='reg:squarederror', effect_learner_n_estimators=500, random_state=42, *args, **kwargs)[source]

Bases: BaseRRegressor

fit(X, treatment, y, p=None, sample_weight=None, verbose=True)[source]

Fit the treatment effect and outcome models of the R learner.

Parameters:
  • X (np.matrix or np.array or pd.Dataframe) – a feature matrix

  • treatment (np.array or pd.Series) – a treatment vector

  • y (np.array or pd.Series) – an outcome vector

  • p (np.ndarray or pd.Series or dict, optional) – an array of propensity scores of float (0,1) in the single-treatment case; or, a dictionary of treatment groups that map to propensity vectors of float (0,1); if None will run ElasticNetPropensityModel() to generate the propensity scores.

  • sample_weight (np.array or pd.Series, optional) – an array of sample weights indicating the weight of each observation for effect_learner. If None, it assumes equal weight.

  • verbose (bool, optional) – whether to output progress logs

class causalml.inference.meta.XGBTRegressor(ate_alpha=0.05, control_name=0, *args, **kwargs)[source]

Bases: BaseTRegressor

causalml.inference.iv module

class causalml.inference.iv.BaseDRIVLearner(learner=None, control_outcome_learner=None, treatment_outcome_learner=None, treatment_effect_learner=None, ate_alpha=0.05, control_name=0)[source]

Bases: object

A parent class for DRIV-learner classes.

A DRIV-learner estimates endogenous treatment effects for compliers with machine learning models.

Details of DR-learner are available at Kennedy (2020) (https://arxiv.org/abs/2004.14497). The DR moment condition for LATE comes from Chernozhukov et al (2018) (https://academic.oup.com/ectj/article/21/1/C1/5056401).
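
A shape-level sketch on simulated data (the two-sided noncompliance setup below is purely illustrative and ignores the monotonicity assumption; a real application needs a genuine instrument):

    import numpy as np
    from xgboost import XGBRegressor
    from causalml.inference.iv import BaseDRIVRegressor

    rng = np.random.default_rng(42)
    n = 1000
    X = rng.normal(size=(n, 5))
    assignment = rng.binomial(1, 0.5, n)
    # Imperfect compliance so both assignment arms mix treated/untreated.
    treatment = np.where(assignment == 1,
                         rng.binomial(1, 0.8, n),
                         rng.binomial(1, 0.2, n))
    y = X[:, 0] + 0.5 * treatment + rng.normal(size=n)

    driv = BaseDRIVRegressor(learner=XGBRegressor(random_state=42))
    ate, lb, ub = driv.estimate_ate(X, assignment, treatment, y)
    cate = driv.fit_predict(X, assignment, treatment, y)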

bootstrap(X, assignment, treatment, y, p, pZ, size=10000, seed=None)[source]

Runs a single bootstrap. Fits on bootstrapped sample, then predicts on whole population.

estimate_ate(X, assignment, treatment, y, p=None, pZ=None, bootstrap_ci=False, n_bootstraps=1000, bootstrap_size=10000, seed=None, calibrate=True)[source]

Estimate the Average Treatment Effect (ATE) for compliers.

Parameters:
  • X (np.matrix or np.array or pd.Dataframe) – a feature matrix

  • assignment (np.array or pd.Series) – an assignment vector. The assignment is the instrumental variable that does not depend on unknown confounders. The assignment status influences treatment in a monotonic way, i.e. one can only be more likely to take the treatment if assigned.

  • treatment (np.array or pd.Series) – a treatment vector

  • y (np.array or pd.Series) – an outcome vector

  • p (2-tuple of np.ndarray or pd.Series or dict, optional) – The first (second) element corresponds to unassigned (assigned) units. Each is an array of propensity scores of float (0,1) in the single-treatment case; or, a dictionary of treatment groups that map to propensity vectors of float (0,1). If None will run ElasticNetPropensityModel() to generate the propensity scores.

  • pZ (np.array or pd.Series, optional) – an array of assignment probability of float (0,1); if None will run ElasticNetPropensityModel() to generate the assignment probability score.

  • bootstrap_ci (bool) – whether run bootstrap for confidence intervals

  • n_bootstraps (int) – number of bootstrap iterations

  • bootstrap_size (int) – number of samples per bootstrap

  • seed (int) – random seed for cross-fitting

Returns:

The mean and confidence interval (LB, UB) of the ATE estimate.

fit(X, assignment, treatment, y, p=None, pZ=None, seed=None, calibrate=True)[source]

Fit the inference model.

Parameters:
  • X (np.matrix or np.array or pd.Dataframe) – a feature matrix

  • assignment (np.array or pd.Series) – a (0,1)-valued assignment vector. The assignment is the instrumental variable that does not depend on unknown confounders. The assignment status influences treatment in a monotonic way, i.e. one can only be more likely to take the treatment if assigned.

  • treatment (np.array or pd.Series) – a treatment vector

  • y (np.array or pd.Series) – an outcome vector

  • p (2-tuple of np.ndarray or pd.Series or dict, optional) – The first (second) element corresponds to unassigned (assigned) units. Each is an array of propensity scores of float (0,1) in the single-treatment case; or, a dictionary of treatment groups that map to propensity vectors of float (0,1). If None will run ElasticNetPropensityModel() to generate the propensity scores.

  • pZ (np.array or pd.Series, optional) – an array of assignment probability of float (0,1); if None will run ElasticNetPropensityModel() to generate the assignment probability score.

  • seed (int) – random seed for cross-fitting

fit_predict(X, assignment, treatment, y, p=None, pZ=None, return_ci=False, n_bootstraps=1000, bootstrap_size=10000, return_components=False, verbose=True, seed=None, calibrate=True)[source]

Fit the treatment effect and outcome models of the DRIV learner and predict treatment effects.

Parameters:
  • X (np.matrix or np.array or pd.Dataframe) – a feature matrix

  • assignment (np.array or pd.Series) – a (0,1)-valued assignment vector. The assignment is the instrumental variable that does not depend on unknown confounders. The assignment status influences treatment in a monotonic way, i.e. one can only be more likely to take the treatment if assigned.

  • treatment (np.array or pd.Series) – a treatment vector

  • y (np.array or pd.Series) – an outcome vector

  • p (2-tuple of np.ndarray or pd.Series or dict, optional) – The first (second) element corresponds to unassigned (assigned) units. Each is an array of propensity scores of float (0,1) in the single-treatment case; or, a dictionary of treatment groups that map to propensity vectors of float (0,1). If None will run ElasticNetPropensityModel() to generate the propensity scores.

  • pZ (np.array or pd.Series, optional) – an array of assignment probability of float (0,1); if None will run ElasticNetPropensityModel() to generate the assignment probability score.

  • return_ci (bool) – whether to return confidence intervals

  • n_bootstraps (int) – number of bootstrap iterations

  • bootstrap_size (int) – number of samples per bootstrap

  • return_components (bool, optional) – whether to return outcome for treatment and control separately

  • verbose (bool) – whether to output progress logs

  • seed (int) – random seed for cross-fitting

Returns:

Predictions of treatment effects for compliers, i.e. those individuals who take the treatment only if they are assigned. Output dim: [n_samples, n_treatment].

If return_ci, returns CATE [n_samples, n_treatment], LB [n_samples, n_treatment], UB [n_samples, n_treatment]

Return type:

(numpy.ndarray)

get_importance(X=None, tau=None, model_tau_feature=None, features=None, method='auto', normalize=True, test_size=0.3, random_state=None)[source]

Builds a model (using X to predict estimated/actual tau), and then calculates feature importances based on a specified method.

Currently supported methods are:
  • auto (calculates importance based on estimator’s default implementation of feature importance; estimator must be tree-based). Note: if none provided, it uses lightgbm’s LGBMRegressor as estimator, and “gain” as importance type

  • permutation (calculates importance based on mean decrease in accuracy when a feature column is permuted; estimator can be any form)

Hint: for permutation, downsample data for better performance, especially if X.shape[1] is large

Parameters:
  • X (np.matrix or np.array or pd.Dataframe) – a feature matrix

  • tau (np.array) – a treatment effect vector (estimated/actual)

  • model_tau_feature (sklearn/lightgbm/xgboost model object) – an unfitted model object

  • features (np.array) – list/array of feature names. If None, an enumerated list will be used

  • method (str) – auto, permutation

  • normalize (bool) – normalize by sum of importances if method=auto (defaults to True)

  • test_size (float/int) – if float, represents the proportion of the dataset to include in the test split. If int, represents the absolute number of test samples (used for estimating permutation importance)

  • random_state (int/RandomState instance/None) – random state used in permutation importance estimation
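
As a sketch, the importance helpers only need a feature matrix and an effect vector; here the driv learner, X, and cate estimates from the sketch above are reused, and the feature names are assumed placeholders:

    # `driv`, `X`, and `cate` come from the estimation sketch above.
    feature_names = ['f%d' % i for i in range(X.shape[1])]
    importance = driv.get_importance(X=X, tau=cate, features=feature_names,
                                     method='permutation', random_state=42)
    driv.plot_importance(X=X, tau=cate, features=feature_names, method='auto')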

get_shap_values(X=None, model_tau_feature=None, tau=None, features=None)[source]

Builds a model (using X to predict estimated/actual tau), and then calculates shapley values.

Parameters:
  • X (np.matrix or np.array or pd.Dataframe) – a feature matrix

  • tau (np.array) – a treatment effect vector (estimated/actual)

  • model_tau_feature (sklearn/lightgbm/xgboost model object) – an unfitted model object

  • features (optional, np.array) – list/array of feature names. If None, an enumerated list will be used.

plot_importance(X=None, tau=None, model_tau_feature=None, features=None, method='auto', normalize=True, test_size=0.3, random_state=None)[source]

Builds a model (using X to predict estimated/actual tau), and then plots feature importances based on a specified method.

Currently supported methods are:
  • auto (calculates importance based on estimator’s default implementation of feature importance; estimator must be tree-based). Note: if none provided, it uses lightgbm’s LGBMRegressor as estimator, and “gain” as importance type

  • permutation (calculates importance based on mean decrease in accuracy when a feature column is permuted; estimator can be any form)

Hint: for permutation, downsample data for better performance, especially if X.shape[1] is large

Parameters:
  • X (np.matrix or np.array or pd.Dataframe) – a feature matrix

  • tau (np.array) – a treatment effect vector (estimated/actual)

  • model_tau_feature (sklearn/lightgbm/xgboost model object) – an unfitted model object

  • features (optional, np.array) – list/array of feature names. If None, an enumerated list will be used

  • method (str) – auto, permutation

  • normalize (bool) – normalize by sum of importances if method=auto (defaults to True)

  • test_size (float/int) – if float, represents the proportion of the dataset to include in the test split. If int, represents the absolute number of test samples (used for estimating permutation importance)

  • random_state (int/RandomState instance/None) – random state used in permutation importance estimation

plot_shap_dependence(treatment_group, feature_idx, X, tau, model_tau_feature=None, features=None, shap_dict=None, interaction_idx='auto', **kwargs)[source]

Plots dependency of shapley values for a specified feature, colored by an interaction feature.

If shapley values have been pre-computed, pass it through the shap_dict parameter. If shap_dict is not provided, this builds a new model (using X to predict estimated/actual tau), and then calculates shapley values.

This plots the value of the feature on the x-axis and the SHAP value of the same feature on the y-axis. This shows how the model depends on the given feature, and is like a richer extension of the classical partial dependence plots. Vertical dispersion of the data points represents interaction effects.

Parameters:
  • treatment_group (str or int) – name of treatment group to create dependency plot on

  • feature_idx (str or int) – feature index / name to create dependency plot on

  • X (np.matrix or np.array or pd.Dataframe) – a feature matrix

  • tau (np.array) – a treatment effect vector (estimated/actual)

  • model_tau_feature (sklearn/lightgbm/xgboost model object) – an unfitted model object

  • features (optional, np.array) – list/array of feature names. If None, an enumerated list will be used.

  • shap_dict (optional, dict) – a dict of shapley value matrices. If None, shap_dict will be computed.

  • interaction_idx (optional, str or int) – feature index / name used in coloring scheme as interaction feature. If “auto” then shap.common.approximate_interactions is used to pick what seems to be the strongest interaction (note that to find the true strongest interaction you need to compute the SHAP interaction values).

plot_shap_values(X=None, tau=None, model_tau_feature=None, features=None, shap_dict=None, **kwargs)[source]

Plots distribution of shapley values.

If shapley values have been pre-computed, pass it through the shap_dict parameter. If shap_dict is not provided, this builds a new model (using X to predict estimated/actual tau), and then calculates shapley values.

Parameters:
  • X (np.matrix or np.array or pd.Dataframe) – a feature matrix. Required if shap_dict is None.

  • tau (np.array) – a treatment effect vector (estimated/actual)

  • model_tau_feature (sklearn/lightgbm/xgboost model object) – an unfitted model object

  • features (optional, np.array) – list/array of feature names. If None, an enumerated list will be used.

  • shap_dict (optional, dict) – a dict of shapley value matrices. If None, shap_dict will be computed.

predict(X, treatment=None, y=None, return_components=False, verbose=True)[source]

Predict treatment effects.

Parameters:
  • X (np.matrix or np.array or pd.Dataframe) – a feature matrix

  • treatment (np.array or pd.Series, optional) – a treatment vector

  • y (np.array or pd.Series, optional) – an outcome vector

  • return_components (bool, optional) – whether to return outcome for treatment and control separately

  • verbose (bool, optional) – whether to output progress logs

Returns:

Predictions of treatment effects for compliers, i.e. those individuals who take the treatment only if they are assigned.

Return type:

(numpy.ndarray)

class causalml.inference.iv.BaseDRIVRegressor(learner=None, control_outcome_learner=None, treatment_outcome_learner=None, treatment_effect_learner=None, ate_alpha=0.05, control_name=0)[source]

Bases: BaseDRIVLearner

A parent class for DRIV-learner regressor classes.

class causalml.inference.iv.IVRegressor[source]

Bases: object

A wrapper class that uses IV2SLS from statsmodels.

A linear 2SLS model that estimates the average treatment effect with endogenous treatment variable.

fit(X, treatment, y, w)[source]

Fits the 2SLS model.

Parameters:
  • X (np.matrix or np.array or pd.Dataframe) – a feature matrix

  • treatment (np.array or pd.Series) – a treatment vector

  • y (np.array or pd.Series) – an outcome vector

  • w (np.array or pd.Series) – an instrument vector

predict()[source]

Returns the average treatment effect and its estimated standard error

Returns:

The average treatment effect and the standard error of the estimate.

Return type:

(float, float)
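
A self-contained sketch (the simulated instrument, coefficients, and the two-value return are illustrative assumptions):

    import numpy as np
    from causalml.inference.iv import IVRegressor

    rng = np.random.default_rng(0)
    n = 1000
    X = rng.normal(size=(n, 3))
    w = rng.binomial(1, 0.5, n)                 # instrument
    treatment = rng.binomial(1, 0.2 + 0.6 * w)  # endogenous treatment
    y = X @ np.array([1.0, -0.5, 0.2]) + 0.8 * treatment + rng.normal(size=n)

    iv = IVRegressor()
    iv.fit(X, treatment, y, w)
    ate, se = iv.predict()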

class causalml.inference.iv.XGBDRIVRegressor(ate_alpha=0.05, control_name=0, *args, **kwargs)[source]

Bases: BaseDRIVRegressor

causalml.inference.nn module

class causalml.inference.nn.CEVAE(outcome_dist='studentt', latent_dim=20, hidden_dim=200, num_epochs=50, num_layers=3, batch_size=100, learning_rate=0.001, learning_rate_decay=0.1, num_samples=1000, weight_decay=0.0001)[source]

Bases: object

fit(X, treatment, y, p=None)[source]

Fits CEVAE.

Parameters:
  • X (np.matrix or np.array or pd.Dataframe) – a feature matrix

  • treatment (np.array or pd.Series) – a treatment vector

  • y (np.array or pd.Series) – an outcome vector

fit_predict(X, treatment, y, p=None)[source]

Fits the CEVAE model and then predicts.

Parameters:
  • X (np.matrix or np.array or pd.Dataframe) – a feature matrix

  • treatment (np.array or pd.Series) – a treatment vector

  • y (np.array or pd.Series) – an outcome vector

Returns:

Predictions of treatment effects.

Return type:

(np.ndarray)

predict(X, treatment=None, y=None, p=None)[source]

Calls predict on the fitted CEVAE model.

Parameters:

X (np.matrix or np.array or pd.Dataframe) – a feature matrix

Returns:

Predictions of treatment effects.

Return type:

(np.ndarray)
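
A minimal sketch following the package’s example notebooks, which pass torch tensors (torch and pyro are required by this module; the hyperparameters and sample size are illustrative):

    import torch
    from causalml.dataset import simulate_hidden_confounder
    from causalml.inference.nn import CEVAE

    y, X, w, tau, b, e = simulate_hidden_confounder(n=2000)

    cevae = CEVAE(num_epochs=5, batch_size=100)  # small run for illustration
    cevae.fit(X=torch.tensor(X, dtype=torch.float),
              treatment=torch.tensor(w, dtype=torch.float),
              y=torch.tensor(y, dtype=torch.float))
    ite = cevae.predict(torch.tensor(X, dtype=torch.float))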

causalml.inference.tf module

causalml.optimize module

class causalml.optimize.CounterfactualUnitSelector(learner, nevertaker_payoff, alwaystaker_payoff, complier_payoff, defier_payoff, organic_conversion=None)[source]

Bases: object

A highly experimental implementation of the counterfactual unit selection model proposed by Li and Pearl (2019).

Parameters:
  • learner (object) – The base learner used to estimate the segment probabilities.

  • nevertaker_payoff (float) – The payoff from targeting a never-taker

  • alwaystaker_payoff (float) – The payoff from targeting an always-taker

  • complier_payoff (float) – The payoff from targeting a complier

  • defier_payoff (float) – The payoff from targeting a defier

  • organic_conversion (float, optional (default=None)) –

    The organic conversion rate in the population without an intervention. If None, the organic conversion rate is obtained from the control group.

    NB: The organic conversion in the control group is not always the same as the organic conversion rate without treatment.

  • data (DataFrame) – A pandas DataFrame containing the features, treatment assignment indicator and the outcome of interest.

  • treatment (string) – A string corresponding to the name of the treatment column. The assumed coding in the column is 1 for treatment and 0 for control.

  • outcome (string) – A string corresponding to the name of the outcome column. The assumed coding in the column is 1 for conversion and 0 for no conversion.

References

Li, Ang, and Judea Pearl. 2019. “Unit Selection Based on Counterfactual Logic.” https://ftp.cs.ucla.edu/pub/stat_ser/r488.pdf.

fit(data, treatment, outcome)[source]

Fits the class.

predict(data, treatment, outcome)[source]

Predicts an individual-level payoff. If gain equality is satisfied, uses the exact function; if not, uses the midpoint between bounds.
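
A minimal sketch; the synthetic DataFrame, column names, and payoff values below are hypothetical:

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegressionCV
    from causalml.optimize import CounterfactualUnitSelector

    rng = np.random.default_rng(0)
    n = 2000
    df = pd.DataFrame(rng.normal(size=(n, 3)), columns=['x1', 'x2', 'x3'])
    df['treatment'] = rng.binomial(1, 0.5, n)
    df['outcome'] = rng.binomial(1, 0.3 + 0.2 * df['treatment'])

    # Hypothetical payoffs: compliers are valuable, defiers costly.
    cus = CounterfactualUnitSelector(learner=LogisticRegressionCV(),
                                     nevertaker_payoff=0,
                                     alwaystaker_payoff=-20,
                                     complier_payoff=60,
                                     defier_payoff=-80)
    cus.fit(data=df, treatment='treatment', outcome='outcome')
    payoff = cus.predict(data=df, treatment='treatment', outcome='outcome')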

class causalml.optimize.CounterfactualValueEstimator(treatment, control_name, treatment_names, y_proba, cate, value, conversion_cost, impression_cost, *args, **kwargs)[source]

Bases: object

Parameters:
  • treatment (array, shape = (num_samples, )) – An array of treatment group indicator values.

  • control_name (string) – The name of the control condition as a string. Must be contained in the treatment array.

  • treatment_names (list, length = cate.shape[1]) – A list of treatment group names. NB: The order of the items in the list must correspond to the order in which the conditional average treatment effect estimates are in the cate array.

  • y_proba (array, shape = (num_samples, )) – The predicted probability of conversion using the Y ~ X model across the total sample.

  • cate (array, shape = (num_samples, len(set(treatment)))) – Conditional average treatment effect estimations from any model.

  • value (array, shape = (num_samples, )) – Value of converting each unit.

  • conversion_cost (shape = (num_samples, len(set(treatment)))) – The cost of a treatment that is triggered if a unit converts after having been in the treatment, such as a promotion code.

  • impression_cost (shape = (num_samples, len(set(treatment)))) – The cost of a treatment that is the same for each unit whether or not they convert, such as a cost associated with a promotion channel.

Notes

Because we get the conditional average treatment effects from cate-learners relative to the control condition, we subtract the cate for the unit in their actual treatment group from y_proba for that unit, in order to recover the control outcome. We then add the cates to the control outcome to obtain y_proba under each condition. These outcomes are counterfactual because just one of them is actually observed.

predict_best()[source]

Predict the best treatment group based on the highest counterfactual value for a treatment.

predict_counterfactuals()[source]

Predict the counterfactual values for each treatment group.
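
A small sketch with hand-written arrays; shapes follow the notebook convention of one CATE column per non-control treatment, and all values are hypothetical:

    import numpy as np
    from causalml.optimize import CounterfactualValueEstimator

    n = 5
    treatment = np.array(['control', 'treatment1', 'control',
                          'treatment1', 'treatment1'])
    y_proba = np.array([0.1, 0.4, 0.3, 0.2, 0.5])   # Y ~ X conversion model
    cate = np.array([[0.05], [0.10], [0.00], [0.15], [0.08]])  # vs control
    value = np.full(n, 20.0)                        # value per conversion
    conversion_cost = np.zeros((n, 2))
    conversion_cost[:, 1] = 2.5                     # cost if converted under t1
    impression_cost = np.zeros((n, 2))

    cve = CounterfactualValueEstimator(treatment=treatment,
                                       control_name='control',
                                       treatment_names=['treatment1'],
                                       y_proba=y_proba, cate=cate, value=value,
                                       conversion_cost=conversion_cost,
                                       impression_cost=impression_cost)
    best_idx = cve.predict_best()             # best condition per unit
    cf_values = cve.predict_counterfactuals() # value under each condition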

class causalml.optimize.PolicyLearner(outcome_learner=GradientBoostingRegressor(), treatment_learner=GradientBoostingClassifier(), policy_learner=DecisionTreeClassifier(), clip_bounds=(0.001, 0.999), n_fold=5, random_state=None, calibration=False)[source]

Bases: object

A learner that learns a treatment assignment policy from observational data, using a doubly robust estimator of the causal effect for binary treatments.

Details of the policy learner are available at Athey and Wager (2018) (https://arxiv.org/abs/1702.02896).

fit(X, treatment, y, p=None, dhat=None)[source]

Fit the treatment assignment policy learner.

Parameters:
  • X (np.matrix) – a feature matrix

  • treatment (np.array) – a treatment vector (1 if treated, otherwise 0)

  • y (np.array) – an outcome vector

  • p (optional, np.array) – user provided propensity score vector between 0 and 1

  • dhat (optional, np.array) – user provided predicted treatment effect vector

Returns:

returns an instance of self.

Return type:

self

predict(X)[source]

Predict treatment assignment that optimizes the outcome.

Parameters:

X (np.matrix) – a feature matrix

Returns:

predictions of treatment assignment.

Return type:

(numpy.ndarray)

predict_proba(X)[source]

Predict treatment assignment score that optimizes the outcome.

Parameters:

X (np.matrix) – a feature matrix

Returns:

predictions of treatment assignment score.

Return type:

(numpy.ndarray)
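
A minimal usage sketch (the policy model depth and data sizes are illustrative):

    from sklearn.tree import DecisionTreeClassifier
    from causalml.dataset import synthetic_data
    from causalml.optimize import PolicyLearner

    y, X, w, tau, b, e = synthetic_data(mode=1, n=1000, p=5)

    policy = PolicyLearner(policy_learner=DecisionTreeClassifier(max_depth=2))
    policy.fit(X, w, y)
    recommended = policy.predict(X)   # recommended treatment assignment
    scores = policy.predict_proba(X)  # assignment scores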

causalml.optimize.get_actual_value(treatment, observed_outcome, conversion_value, conditions, conversion_cost, impression_cost)[source]

Set the conversion and impression costs based on a dict of parameters.

Calculate the actual value of targeting a user with the actual treatment group using the above parameters.

Parameters:
  • treatment (array, shape = (num_samples, )) – Treatment array.

  • observed_outcome (array, shape = (num_samples, )) – Observed outcome array, aka y.

  • conversion_value (array, shape = (num_samples, )) – The value of converting a given user.

  • conditions (list, len = len(set(treatment))) – List of treatment conditions.

  • conversion_cost (array, shape = (num_samples, num_treatment)) – Array of conversion costs for each unit in each treatment.

  • impression_cost (array, shape = (num_samples, num_treatment)) – Array of impression costs for each unit in each treatment.

Returns:
  • actual_value (array, shape = (num_samples, )) – Array of actual values of having a user in their actual treatment group.

  • conversion_value (array, shape = (num_samples, )) – Array of payoffs from converting a user.

causalml.optimize.get_pns_bounds(data_exp, data_obs, T, Y, type='PNS')[source]
Parameters:
  • data_exp (DataFrame) – Data from an experiment.

  • data_obs (DataFrame) – Data from an observational study

  • T (str) – Name of the binary treatment indicator

  • Y (str) – Name of the binary outcome indicator

  • type (str) – Type of probability of causation desired. Acceptable args are: ‘PNS’ (probability of necessary and sufficient causation), ‘PS’ (probability of sufficient causation), ‘PN’ (probability of necessary causation)

Notes

Based on Equation (24) in Tian and Pearl: https://ftp.cs.ucla.edu/pub/stat_ser/r271-A.pdf

To capture the counterfactual notation, we use ‘1’ and ‘0’ to indicate the actual and counterfactual values of a variable, respectively, and we use ‘do’ to indicate the effect of an intervention.

The experimental and observational data are either assumed to come from the same population, or from random samples of the population. If the data are from a sample, the bounds may be incorrectly calculated because the relevant quantities in the Tian-Pearl equations are defined e.g. as P(Y_t), not P(Y_t | S), where S corresponds to sample selection. Bareinboim and Pearl (https://www.pnas.org/doi/10.1073/pnas.1510507113) discuss conditions under which P(Y_t) can be recovered from P(Y_t | S).
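
A shape-level sketch (the simulated DataFrames and the assumption that the function returns a (lower, upper) pair are illustrative):

    import numpy as np
    import pandas as pd
    from causalml.optimize import get_pns_bounds

    rng = np.random.default_rng(1)
    n = 5000
    # Experimental data: randomized binary treatment T, binary outcome Y.
    data_exp = pd.DataFrame({'T': rng.binomial(1, 0.5, n)})
    data_exp['Y'] = rng.binomial(1, 0.2 + 0.3 * data_exp['T'])
    # Observational data: uptake correlated with an unobserved factor u.
    u = rng.binomial(1, 0.5, n)
    data_obs = pd.DataFrame({'T': rng.binomial(1, 0.3 + 0.4 * u)})
    data_obs['Y'] = rng.binomial(1, 0.1 + 0.3 * data_obs['T'] + 0.2 * u)

    pns_lb, pns_ub = get_pns_bounds(data_exp, data_obs, T='T', Y='Y',
                                    type='PNS')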

causalml.optimize.get_treatment_costs(treatment, control_name, cc_dict, ic_dict)[source]

Set the conversion and impression costs based on a dict of parameters.

Calculate the actual cost of targeting a user with the actual treatment group using the above parameters.

Parameters:
  • treatment (array, shape = (num_samples, )) – Treatment array.

  • control_name (str) – Control group name as string.

  • cc_dict (dict) – Dict containing the conversion cost for each treatment.

  • ic_dict (dict) – Dict containing the impression cost for each treatment.

Returns:
  • conversion_cost (ndarray, shape = (num_samples, num_treatments)) – An array of conversion costs for each treatment.

  • impression_cost (ndarray, shape = (num_samples, num_treatments)) – An array of impression costs for each treatment.

  • conditions (list, len = len(set(treatment))) – A list of experimental conditions.

causalml.optimize.get_uplift_best(cate, conditions)[source]

Takes the CATE prediction from a learner, adds the control outcome array and finds the name of the argmax condition.

Parameters:
  • cate (array, shape = (num_samples, )) – The conditional average treatment effect prediction.

  • conditions (list, len = len(set(treatment))) – List of treatment conditions.

Returns:

uplift_recomm_name – The experimental group recommended by the learner.

Return type:

array, shape = (num_samples, )
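
Taken together, the cost and value helpers compose into a small value-based targeting sketch (the dictionaries, arrays, and cost values below are hypothetical):

    import numpy as np
    from causalml.optimize import get_treatment_costs, get_actual_value

    n = 4
    treatment = np.array(['control', 'treatment1', 'treatment1', 'control'])
    observed_outcome = np.array([0, 1, 0, 1])
    conversion_value = np.full(n, 25.0)

    # Hypothetical per-treatment cost dictionaries.
    cc_dict = {'control': 0.0, 'treatment1': 2.5}  # paid only on conversion
    ic_dict = {'control': 0.0, 'treatment1': 0.1}  # paid per impression

    conv_cost, imp_cost, conditions = get_treatment_costs(
        treatment, 'control', cc_dict, ic_dict)
    actual_value, conv_value = get_actual_value(
        treatment, observed_outcome, conversion_value,
        conditions, conv_cost, imp_cost)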

causalml.dataset module

causalml.dataset.bar_plot_summary(synthetic_summary, k, drop_learners=[], drop_cols=[], sort_cols=['MSE', 'Abs % Error of ATE'])[source]

Generates a bar plot comparing learner performance.

Parameters:
  • synthetic_summary (pd.DataFrame) – summary generated by get_synthetic_summary()

  • k (int) – number of simulations (used only for plot title text)

  • drop_learners (list, optional) – list of learners (str) to omit when plotting

  • drop_cols (list, optional) – list of metrics (str) to omit when plotting

  • sort_cols (list, optional) – list of metrics (str) to sort on when plotting

causalml.dataset.bar_plot_summary_holdout(train_summary, validation_summary, k, drop_learners=[], drop_cols=[])[source]

Generates a bar plot comparing learner performance by training and validation

Parameters:
  • train_summary (pd.DataFrame) – summary for training synthetic data generated by get_synthetic_summary_holdout()

  • validation_summary (pd.DataFrame) – summary for validation synthetic data generated by get_synthetic_summary_holdout()

  • k (int) – number of simulations (used only for plot title text)

  • drop_learners (list, optional) – list of learners (str) to omit when plotting

  • drop_cols (list, optional) – list of metrics (str) to omit when plotting

causalml.dataset.distr_plot_single_sim(synthetic_preds, kind='kde', drop_learners=[], bins=50, histtype='step', alpha=1, linewidth=1, bw_method=1)[source]

Plots the distribution of each learner’s predictions (for a single simulation). Kernel Density Estimation (kde) and actual histogram plots supported.

Parameters:
  • synthetic_preds (dict) – dictionary of predictions generated by get_synthetic_preds()

  • kind (str, optional) – ‘kde’ or ‘hist’

  • drop_learners (list, optional) – list of learners (str) to omit when plotting

  • bins (int, optional) – number of bins to plot if kind set to ‘hist’

  • histtype (str, optional) – histogram type if kind set to ‘hist’

  • alpha (float, optional) – alpha (transparency) for plotting

  • linewidth (int, optional) – line width for plotting

  • bw_method (float, optional) – parameter for kde

causalml.dataset.get_synthetic_auuc(synthetic_preds, drop_learners=[], outcome_col='y', treatment_col='w', treatment_effect_col='tau', plot=True)[source]

Get auuc values for cumulative gains of model estimates in quantiles.

For details, reference get_cumgain() and plot_gain().

Parameters:
  • synthetic_preds (dict) – dictionary of predictions generated by get_synthetic_preds() or get_synthetic_preds_holdout()

  • outcome_col (str, optional) – the column name for the actual outcome

  • treatment_col (str, optional) – the column name for the treatment indicator (0 or 1)

  • treatment_effect_col (str, optional) – the column name for the true treatment effect

  • plot (boolean, optional) – whether to plot the cumulative gain chart

Returns:

auuc values by learner for cumulative gains of model estimates

Return type:

(pandas.DataFrame)

causalml.dataset.get_synthetic_preds(synthetic_data_func, n=1000, estimators={})[source]

Generate predictions for synthetic data using specified function (single simulation)

Parameters:
  • synthetic_data_func (function) – synthetic data generation function

  • n (int, optional) – number of samples

  • estimators (dict of object) – dict of names and objects of treatment effect estimators

Returns:

dict of the actual and estimates of treatment effects

Return type:

(dict)

causalml.dataset.get_synthetic_preds_holdout(synthetic_data_func, n=1000, valid_size=0.2, estimators={})[source]

Generate predictions for synthetic data using specified function (single simulation) for train and holdout

Parameters:
  • synthetic_data_func (function) – synthetic data generation function

  • n (int, optional) – number of samples

  • valid_size (float, optional) – validation/hold out data size

  • estimators (dict of object) – dict of names and objects of treatment effect estimators

Returns:

synthetic training and validation data dictionaries:

  • preds_dict_train (dict): synthetic training data dictionary

  • preds_dict_valid (dict): synthetic validation data dictionary

Return type:

(tuple)

causalml.dataset.get_synthetic_summary(synthetic_data_func, n=1000, k=1, estimators={})[source]

Generate a summary for predictions on synthetic data using specified function

Parameters:
  • synthetic_data_func (function) – synthetic data generation function

  • n (int, optional) – number of samples per simulation

  • k (int, optional) – number of simulations

  • estimators (dict of object) – dict of names and objects of treatment effect estimators

causalml.dataset.get_synthetic_summary_holdout(synthetic_data_func, n=1000, valid_size=0.2, k=1)[source]

Generate a summary for predictions on synthetic data for train and holdout using specified function

Parameters:
  • synthetic_data_func (function) – synthetic data generation function

  • n (int, optional) – number of samples per simulation

  • valid_size (float, optional) – validation/hold out data size

  • k (int, optional) – number of simulations

Returns:

summary evaluation metrics of predictions for train and validation:

  • summary_train (pandas.DataFrame): training data evaluation summary

  • summary_validation (pandas.DataFrame): validation data evaluation summary

Return type:

(tuple)
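
A benchmarking sketch tying the simulation, summary, and plotting helpers together (the sample and simulation counts are illustrative):

    from causalml.dataset import (get_synthetic_summary,
                                  simulate_nuisance_and_easy_treatment,
                                  bar_plot_summary, scatter_plot_summary)

    # Compare the default learners over k simulations of Setup A.
    summary = get_synthetic_summary(simulate_nuisance_and_easy_treatment,
                                    n=1000, k=5)
    bar_plot_summary(summary, k=5)
    scatter_plot_summary(summary, k=5)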

causalml.dataset.make_uplift_classification(n_samples=1000, treatment_name=['control', 'treatment1', 'treatment2', 'treatment3'], y_name='conversion', n_classification_features=10, n_classification_informative=5, n_classification_redundant=0, n_classification_repeated=0, n_uplift_increase_dict={'treatment1': 2, 'treatment2': 2, 'treatment3': 2}, n_uplift_decrease_dict={'treatment1': 0, 'treatment2': 0, 'treatment3': 0}, delta_uplift_increase_dict={'treatment1': 0.02, 'treatment2': 0.05, 'treatment3': 0.1}, delta_uplift_decrease_dict={'treatment1': 0.0, 'treatment2': 0.0, 'treatment3': 0.0}, n_uplift_increase_mix_informative_dict={'treatment1': 1, 'treatment2': 1, 'treatment3': 1}, n_uplift_decrease_mix_informative_dict={'treatment1': 0, 'treatment2': 0, 'treatment3': 0}, positive_class_proportion=0.5, random_seed=20190101)[source]

Generate a synthetic dataset for classification uplift modeling problem.

Parameters:
  • n_samples (int, optional (default=1000)) – The number of samples to be generated for each treatment group.

  • treatment_name (list, optional (default = ['control','treatment1','treatment2','treatment3'])) – The list of treatment names.

  • y_name (string, optional (default = 'conversion')) – The name of the outcome variable to be used as a column in the output dataframe.

  • n_classification_features (int, optional (default = 10)) – Total number of features for base classification

  • n_classification_informative (int, optional (default = 5)) – Total number of informative features for base classification

  • n_classification_redundant (int, optional (default = 0)) – Total number of redundant features for base classification

  • n_classification_repeated (int, optional (default = 0)) – Total number of repeated features for base classification

  • n_uplift_increase_dict (dictionary, optional (default: {'treatment1': 2, 'treatment2': 2, 'treatment3': 2})) – Number of features for generating positive treatment effects for corresponding treatment group. Dictionary of {treatment_key: number_of_features_for_increase_uplift}.

  • n_uplift_decrease_dict (dictionary, optional (default: {'treatment1': 0, 'treatment2': 0, 'treatment3': 0})) – Number of features for generating negative treatment effects for corresponding treatment group. Dictionary of {treatment_key: number_of_features_for_decrease_uplift}.

  • delta_uplift_increase_dict (dictionary, optional (default: {'treatment1': .02, 'treatment2': .05, 'treatment3': .1})) – Positive treatment effect created by the positive uplift features on the base classification label. Dictionary of {treatment_key: increase_delta}.

  • delta_uplift_decrease_dict (dictionary, optional (default: {'treatment1': 0., 'treatment2': 0., 'treatment3': 0.})) – Negative treatment effect created by the negative uplift features on the base classification label. Dictionary of {treatment_key: decrease_delta}.

  • n_uplift_increase_mix_informative_dict (dictionary, optional (default: {'treatment1': 1, 'treatment2': 1, 'treatment3': 1})) – Number of positive mix features for each treatment. The positive mix feature is defined as a linear combination of a randomly selected informative classification feature and a randomly selected positive uplift feature. The linear combination is made by two coefficients sampled from a uniform distribution between -1 and 1.

  • n_uplift_decrease_mix_informative_dict (dictionary, optional (default: {'treatment1': 0, 'treatment2': 0, 'treatment3': 0})) – Number of negative mix features for each treatment. The negative mix feature is defined as a linear combination of a randomly selected informative classification feature and a randomly selected negative uplift feature. The linear combination is made by two coefficients sampled from a uniform distribution between -1 and 1.

  • positive_class_proportion (float, optional (default = 0.5)) – The proportion of positive label (1) in the control group.

  • random_seed (int, optional (default = 20190101)) – The random seed to be used in the data generation process.

Returns:

  • df_res (DataFrame) – A data frame containing the treatment label, features, and outcome variable.

  • x_name (list) – The list of feature names generated.

Notes

The algorithm for generating the base classification dataset is adapted from the make_classification method in the sklearn package, which uses the algorithm in Guyon [1] designed to generate the “Madelon” dataset.

References

[1] I. Guyon, “Design of experiments for the NIPS 2003 variable selection benchmark”, 2003.
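
A usage sketch (the treatment_group_key and conversion column names follow the package’s quick-start examples):

    from causalml.dataset import make_uplift_classification

    df, x_names = make_uplift_classification(n_samples=1000, random_seed=123)
    # One row per sample per treatment group; `conversion` is the outcome.
    print(df.groupby('treatment_group_key')['conversion'].mean())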

causalml.dataset.make_uplift_classification_logistic(n_samples=10000, treatment_name=['control', 'treatment1', 'treatment2', 'treatment3'], y_name='conversion', n_classification_features=10, n_classification_informative=5, n_classification_redundant=0, n_classification_repeated=0, n_uplift_dict={'treatment1': 2, 'treatment2': 2, 'treatment3': 3}, n_mix_informative_uplift_dict={'treatment1': 1, 'treatment2': 1, 'treatment3': 0}, delta_uplift_dict={'treatment1': 0.02, 'treatment2': 0.05, 'treatment3': -0.05}, positive_class_proportion=0.1, random_seed=20200101, feature_association_list=['linear', 'quadratic', 'cubic', 'relu', 'sin', 'cos'], random_select_association=True, error_std=0.05)[source]

Generate a synthetic dataset for classification uplift modeling problem.

Parameters:
  • n_samples (int, optional (default=10000)) – The number of samples to be generated for each treatment group.

  • treatment_name (list, optional (default = ['control','treatment1','treatment2','treatment3'])) – The list of treatment names. The first element must be ‘control’ as control group, and the rest are treated as treatment groups.

  • y_name (string, optional (default = 'conversion')) – The name of the outcome variable to be used as a column in the output dataframe.

  • n_classification_features (int, optional (default = 10)) – Total number of features for base classification

  • n_classification_informative (int, optional (default = 5)) – Total number of informative features for base classification

  • n_classification_redundant (int, optional (default = 0)) – Total number of redundant features for base classification

  • n_classification_repeated (int, optional (default = 0)) – Total number of repeated features for base classification

  • n_uplift_dict (dictionary, optional (default: {'treatment1': 2, 'treatment2': 2, 'treatment3': 3})) – Number of features for generating heterogeneous treatment effects for corresponding treatment group. Dictionary of {treatment_key: number_of_features_for_uplift}.

  • n_mix_informative_uplift_dict (dictionary, optional (default: {'treatment1': 1, 'treatment2': 1, 'treatment3': 0})) – Number of mix features for each treatment. The mix feature is defined as a linear combination of a randomly selected informative classification feature and a randomly selected uplift feature. The mixture is made by a weighted sum (p*feature1 + (1-p)*feature2), where the weight p is drawn from a uniform distribution between 0 and 1.

  • delta_uplift_dict (dictionary, optional (default: {'treatment1': .02, 'treatment2': .05, 'treatment3': -.05})) – Treatment effect (delta), can be positive or negative. Dictionary of {treatment_key: delta}.

  • positive_class_proportion (float, optional (default = 0.1)) – The proportion of positive label (1) in the control group, or the mean of outcome variable for control group.

  • random_seed (int, optional (default = 20200101)) – The random seed to be used in the data generation process.

  • feature_association_list (list, optional (default = ['linear','quadratic','cubic','relu','sin','cos'])) – List of uplift feature association patterns to the treatment effect. For example, if the feature pattern is ‘quadratic’, then the treatment effect will increase or decrease quadratically with the feature. The values in the list must be one of (‘linear’,’quadratic’,’cubic’,’relu’,’sin’,’cos’). However, the same value can appear multiple times in the list.

  • random_select_association (boolean, optional (default = True)) – How the feature patterns are selected from the feature_association_list to be applied in the data generation process. If random_select_association = True, then for every uplift feature, a random feature association pattern is selected from the list. If random_select_association = False, then the feature association pattern is selected from the list in turns to be applied to each feature one by one.

  • error_std (float, optional (default = 0.05)) – Standard deviation to be used in the error term of the logistic regression. The error is drawn from a normal distribution with mean 0 and standard deviation specified in this argument.

Returns:

  • df1 (DataFrame) – A data frame containing the treatment label, features, and outcome variable.

  • x_name (list) – The list of feature names generated.

causalml.dataset.scatter_plot_single_sim(synthetic_preds)[source]

Creates a grid of scatter plots comparing each learner’s predictions with the truth (for a single simulation).

Parameters:

synthetic_preds (dict) – dictionary of predictions generated by get_synthetic_preds() or get_synthetic_preds_holdout()

causalml.dataset.scatter_plot_summary(synthetic_summary, k, drop_learners=[], drop_cols=[])[source]

Generates a scatter plot comparing learner performance. Each learner’s performance is plotted as a point in the (Abs % Error of ATE, MSE) space.

Parameters:
  • synthetic_summary (pd.DataFrame) – summary generated by get_synthetic_summary()

  • k (int) – number of simulations (used only for plot title text)

  • drop_learners (list, optional) – list of learners (str) to omit when plotting

  • drop_cols (list, optional) – list of metrics (str) to omit when plotting

causalml.dataset.scatter_plot_summary_holdout(train_summary, validation_summary, k, label=['Train', 'Validation'], drop_learners=[], drop_cols=[])[source]

Generates a scatter plot comparing learner performance by training and validation.

Parameters:
  • train_summary (pd.DataFrame) – summary for training synthetic data generated by get_synthetic_summary_holdout()

  • validation_summary (pd.DataFrame) – summary for validation synthetic data generated by get_synthetic_summary_holdout()

  • label (list, optional) – legend labels for plot

  • k (int) – number of simulations (used only for plot title text)

  • drop_learners (list, optional) – list of learners (str) to omit when plotting

  • drop_cols (list, optional) – list of metrics (str) to omit when plotting

causalml.dataset.simulate_easy_propensity_difficult_baseline(n=1000, p=5, sigma=1.0, adj=0.0)[source]
Synthetic data with easy propensity and a difficult baseline

From Setup C in Nie X. and Wager S. (2018) ‘Quasi-Oracle Estimation of Heterogeneous Treatment Effects’

Parameters:
  • n (int, optional) – number of observations

  • p (int, optional) – number of covariates (>=3)

  • sigma (float) – standard deviation of the error term

  • adj (float) – no effect; added for consistency

Returns:

Synthetically generated samples with the following outputs:
  • y ((n,)-array): outcome variable.

  • X ((n,p)-ndarray): independent variables.

  • w ((n,)-array): treatment flag with value 0 or 1.

  • tau ((n,)-array): individual treatment effect.

  • b ((n,)-array): expected outcome.

  • e ((n,)-array): propensity of receiving treatment.

Return type:

(tuple)

causalml.dataset.simulate_hidden_confounder(n=10000, p=5, sigma=1.0, adj=0.0)[source]
Synthetic dataset with a hidden confounder biasing treatment.

From Louizos et al. (2018) “Causal Effect Inference with Deep Latent-Variable Models”

Parameters:
  • n (int, optional) – number of observations

  • p (int, optional) – number of covariates (>=3)

  • sigma (float) – standard deviation of the error term

  • adj (float) – no effect; added for consistency

Returns:

Synthetically generated samples with the following outputs:
  • y ((n,)-array): outcome variable.

  • X ((n,p)-ndarray): independent variables.

  • w ((n,)-array): treatment flag with value 0 or 1.

  • tau ((n,)-array): individual treatment effect.

  • b ((n,)-array): expected outcome.

  • e ((n,)-array): propensity of receiving treatment.

Return type:

(tuple)

causalml.dataset.simulate_nuisance_and_easy_treatment(n=1000, p=5, sigma=1.0, adj=0.0)[source]
Synthetic data with difficult nuisance components and an easy treatment effect

From Setup A in Nie X. and Wager S. (2018) ‘Quasi-Oracle Estimation of Heterogeneous Treatment Effects’

Parameters:
  • n (int, optional) – number of observations

  • p (int, optional) – number of covariates (>=5)

  • sigma (float) – standard deviation of the error term

  • adj (float) – adjustment term for the distribution of the propensity score e. Higher values shift the distribution toward 0.

Returns:

Synthetically generated samples with the following outputs:
  • y ((n,)-array): outcome variable.

  • X ((n,p)-ndarray): independent variables.

  • w ((n,)-array): treatment flag with value 0 or 1.

  • tau ((n,)-array): individual treatment effect.

  • b ((n,)-array): expected outcome.

  • e ((n,)-array): propensity of receiving treatment.

Return type:

(tuple)

causalml.dataset.simulate_randomized_trial(n=1000, p=5, sigma=1.0, adj=0.0)[source]
Synthetic data for a randomized trial

From Setup B in Nie X. and Wager S. (2018) ‘Quasi-Oracle Estimation of Heterogeneous Treatment Effects’

Parameters:
  • n (int, optional) – number of observations

  • p (int, optional) – number of covariates (>=5)

  • sigma (float) – standard deviation of the error term

  • adj (float) – no effect; included only for API consistency

Returns:

Synthetically generated samples with the following outputs:
  • y ((n,)-array): outcome variable.

  • X ((n,p)-ndarray): independent variables.

  • w ((n,)-array): treatment flag with value 0 or 1.

  • tau ((n,)-array): individual treatment effect.

  • b ((n,)-array): expected outcome.

  • e ((n,)-array): propensity of receiving treatment.

Return type:

(tuple)

causalml.dataset.simulate_unrelated_treatment_control(n=1000, p=5, sigma=1.0, adj=0.0)[source]
Synthetic data with unrelated treatment and control groups.

From Setup D in Nie X. and Wager S. (2018) ‘Quasi-Oracle Estimation of Heterogeneous Treatment Effects’

Parameters:
  • n (int, optional) – number of observations

  • p (int, optional) – number of covariates (>=3)

  • sigma (float) – standard deviation of the error term

  • adj (float) – adjustment term for the distribution of the propensity score e. Higher values shift the distribution toward 0.

Returns:

Synthetically generated samples with the following outputs:
  • y ((n,)-array): outcome variable.

  • X ((n,p)-ndarray): independent variables.

  • w ((n,)-array): treatment flag with value 0 or 1.

  • tau ((n,)-array): individual treatment effect.

  • b ((n,)-array): expected outcome.

  • e ((n,)-array): propensity of receiving treatment.

Return type:

(tuple)

causalml.dataset.synthetic_data(mode=1, n=1000, p=5, sigma=1.0, adj=0.0)[source]

Synthetic data from Nie X. and Wager S. (2018) ‘Quasi-Oracle Estimation of Heterogeneous Treatment Effects’

Parameters:
  • mode (int, optional) – mode of the simulation: 1 for difficult nuisance components and an easy treatment effect, 2 for a randomized trial, 3 for an easy propensity and a difficult baseline, 4 for unrelated treatment and control groups, and 5 for a hidden confounder biasing treatment.

  • n (int, optional) – number of observations

  • p (int, optional) – number of covariates (>=5)

  • sigma (float) – standard deviation of the error term

  • adj (float) – adjustment term for the distribution of the propensity score e. Higher values shift the distribution toward 0. It does not apply to mode 2 or 3.

Returns:

Synthetically generated samples with the following outputs:
  • y ((n,)-array): outcome variable.

  • X ((n,p)-ndarray): independent variables.

  • w ((n,)-array): treatment flag with value 0 or 1.

  • tau ((n,)-array): individual treatment effect.

  • b ((n,)-array): expected outcome.

  • e ((n,)-array): propensity of receiving treatment.

Return type:

(tuple)
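The simulate_* helpers above all share this return contract, so a single call produces a ready-to-use benchmark dataset. A minimal usage sketch (not part of the upstream docstring):

>>> from causalml.dataset import synthetic_data
>>> # mode=1: difficult nuisance components and an easy treatment effect (Setup A)
>>> y, X, w, tau, b, e = synthetic_data(mode=1, n=1000, p=5, sigma=1.0)
>>> X.shape, y.shape
((1000, 5), (1000,))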

causalml.match module

class causalml.match.MatchOptimizer(treatment_col='is_treatment', ps_col='pihat', user_col=None, matching_covariates=['pihat'], max_smd=0.1, max_deviation=0.1, caliper_range=(0.01, 0.5), max_pihat_range=(0.95, 0.999), max_iter_per_param=5, min_users_per_group=1000, smd_cols=['pihat'], dev_cols_transformations={'pihat': <function mean>}, dev_factor=1.0, verbose=True)[source]

Bases: object

check_table_one(tableone, matched, score_cols, pihat_threshold, caliper)[source]
match_and_check(score_cols, pihat_threshold, caliper)[source]
search_best_match(df)[source]
single_match(score_cols, pihat_threshold, caliper)[source]
class causalml.match.NearestNeighborMatch(caliper=0.2, replace=False, ratio=1, shuffle=True, random_state=None, n_jobs=-1)[source]

Bases: object

Propensity score matching based on the nearest neighbor algorithm.

caliper

threshold to be considered as a match.

Type:

float

replace

whether to match with replacement or not

Type:

bool

ratio

ratio of control to treatment units to be matched; used only if replace=True

Type:

int

shuffle

whether to shuffle the treatment group data before matching

Type:

bool

random_state

RandomState or an int seed

Type:

numpy.random.RandomState or int

n_jobs

The number of parallel jobs to run for neighbors search. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors

Type:

int

match(data, treatment_col, score_cols)[source]

Find matches from the control group by matching on specified columns (propensity preferred).

Parameters:
  • data (pandas.DataFrame) – total input data

  • treatment_col (str) – the column name for the treatment

  • score_cols (list) – list of column names for matching (propensity column should be included)

Returns:

The subset of data consisting of matched treatment and control group data.

Return type:

(pandas.DataFrame)
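A minimal matching sketch (not part of the upstream docstring); the column names 'treatment' and 'pihat' and the random data are illustrative:

>>> import numpy as np
>>> import pandas as pd
>>> from causalml.match import NearestNeighborMatch
>>> df = pd.DataFrame({
...     'treatment': np.random.binomial(1, 0.5, 1000),   # treatment flag
...     'pihat': np.random.uniform(0.1, 0.9, 1000),      # pre-computed propensity score
... })
>>> matcher = NearestNeighborMatch(caliper=0.2, replace=False, random_state=42)
>>> matched = matcher.match(data=df, treatment_col='treatment', score_cols=['pihat'])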

match_by_group(data, treatment_col, score_cols, groupby_col)[source]

Find matches from the control group stratified by groupby_col, by matching on specified columns (propensity preferred).

Parameters:
  • data (pandas.DataFrame) – total sample data

  • treatment_col (str) – the column name for the treatment

  • score_cols (list) – list of column names for matching (propensity column should be included)

  • groupby_col (str) – the column name to be used for stratification

Returns:

The subset of data consisting of matched treatment and control group data.

Return type:

(pandas.DataFrame)

causalml.match.create_table_one(data, treatment_col, features)[source]

Report balance in input features between the treatment and control groups.

References

R’s tableone at CRAN: https://github.com/kaz-yos/tableone Python’s tableone at PyPi: https://github.com/tompollard/tableone

Parameters:
  • data (pandas.DataFrame) – total or matched sample data

  • treatment_col (str) – the column name for the treatment

  • features (list of str) – the column names of features

Returns:

A table with the means and standard deviations in the treatment and control groups, and the SMD between the two groups for the features.

Return type:

(pandas.DataFrame)
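A minimal balance-check sketch (the column names and random data are illustrative, not from the upstream docs):

>>> import numpy as np
>>> import pandas as pd
>>> from causalml.match import create_table_one
>>> df = pd.DataFrame({
...     'treatment': np.random.binomial(1, 0.5, 1000),
...     'age': np.random.normal(40, 10, 1000),
... })
>>> # means, standard deviations, and SMD per feature by treatment status
>>> create_table_one(data=df, treatment_col='treatment', features=['age'])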

causalml.match.smd(feature, treatment)[source]

Calculate the standardized mean difference (SMD) of a feature between the treatment and control groups.

The definition is available at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3144483/#s11title

Parameters:
  • feature (pandas.Series) – a column of a feature to calculate SMD for

  • treatment (pandas.Series) – a column that indicates whether a row is in the treatment group

Returns:

The SMD of the feature

Return type:

(float)

causalml.propensity module

class causalml.propensity.ElasticNetPropensityModel(clip_bounds=(0.001, 0.999), **model_kwargs)[source]

Bases: LogisticRegressionPropensityModel

class causalml.propensity.GradientBoostedPropensityModel(early_stop=False, clip_bounds=(0.001, 0.999), **model_kwargs)[source]

Bases: PropensityModel

Gradient boosted propensity score model with optional early stopping.

Notes

Please see the xgboost documentation for more information on gradient boosting tuning parameters: https://xgboost.readthedocs.io/en/latest/python/python_api.html

fit(X, y, early_stopping_rounds=10, stop_val_size=0.2)[source]

Fit a propensity model.

Parameters:
  • X (numpy.ndarray) – a feature matrix

  • y (numpy.ndarray) – a binary target vector

predict(X)[source]

Predict propensity scores.

Parameters:

X (numpy.ndarray) – a feature matrix

Returns:

Propensity scores between 0 and 1.

Return type:

(numpy.ndarray)

class causalml.propensity.LogisticRegressionPropensityModel(clip_bounds=(0.001, 0.999), **model_kwargs)[source]

Bases: PropensityModel

Propensity regression model based on the LogisticRegression algorithm.

class causalml.propensity.PropensityModel(clip_bounds=(0.001, 0.999), **model_kwargs)[source]

Bases: object

fit(X, y)[source]

Fit a propensity model.

Parameters:
  • X (numpy.ndarray) – a feature matrix

  • y (numpy.ndarray) – a binary target vector

fit_predict(X, y)[source]

Fit a propensity model and predict propensity scores.

Parameters:
  • X (numpy.ndarray) – a feature matrix

  • y (numpy.ndarray) – a binary target vector

Returns:

Propensity scores between 0 and 1.

Return type:

(numpy.ndarray)

predict(X)[source]

Predict propensity scores.

Parameters:

X (numpy.ndarray) – a feature matrix

Returns:

Propensity scores between 0 and 1.

Return type:

(numpy.ndarray)
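A minimal sketch of the shared fit/predict workflow, using the ElasticNet subclass (the random data is illustrative):

>>> import numpy as np
>>> from causalml.propensity import ElasticNetPropensityModel
>>> X = np.random.normal(size=(1000, 5))          # feature matrix
>>> treatment = np.random.binomial(1, 0.5, 1000)  # binary target
>>> pm = ElasticNetPropensityModel(clip_bounds=(0.001, 0.999))
>>> p = pm.fit_predict(X, treatment)              # scores clipped to (0.001, 0.999)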

causalml.propensity.calibrate(ps, treatment)[source]

Calibrate propensity scores with logistic GAM.

Ref: https://pygam.readthedocs.io/en/latest/api/logisticgam.html

Parameters:
  • ps (numpy.array) – a propensity score vector

  • treatment (numpy.array) – a binary treatment vector (0: control, 1: treated)

Returns:

a calibrated propensity score vector

Return type:

(numpy.array)

causalml.propensity.compute_propensity_score(X, treatment, p_model=None, X_pred=None, treatment_pred=None, calibrate_p=True)[source]

Generate propensity scores if the user did not provide them.

Parameters:
  • X (np.matrix) – features for training

  • treatment (np.array or pd.Series) – a treatment vector for training

  • p_model (propensity model object, optional) – ElasticNetPropensityModel (default) / GradientBoostedPropensityModel

  • X_pred (np.matrix, optional) – features for prediction

  • treatment_pred (np.array or pd.Series, optional) – a treatment vector for prediction

  • calibrate_p (bool, optional) – whether to calibrate the propensity score

Returns:

(tuple)
  • p (numpy.ndarray): propensity score

  • p_model (PropensityModel): a trained PropensityModel object
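A hedged sketch of the convenience wrapper (random data for illustration); by default it fits an ElasticNetPropensityModel and calibrates the scores:

>>> import numpy as np
>>> from causalml.propensity import compute_propensity_score
>>> X = np.random.normal(size=(1000, 5))
>>> treatment = np.random.binomial(1, 0.5, 1000)
>>> p, p_model = compute_propensity_score(X, treatment)  # (scores, fitted model)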

causalml.metrics module

class causalml.metrics.Sensitivity(df, inference_features, p_col, treatment_col, outcome_col, learner, *args, **kwargs)[source]

Bases: object

A Sensitivity Check class to support Placebo Treatment, Irrelevant Additional Confounder and Subset validation refutation methods to verify causal inference.

Reference: https://github.com/microsoft/dowhy/blob/master/dowhy/causal_refuters/

get_ate_ci(X, p, treatment, y)[source]

Return the confidence intervals for treatment effects prediction.

Parameters:
  • X (np.matrix) – a feature matrix

  • p (np.array) – a propensity score vector between 0 and 1

  • treatment (np.array) – a treatment vector (1 if treated, otherwise 0)

  • y (np.array) – an outcome vector

Returns:

Mean and confidence interval (LB, UB) of the ATE estimate.

Return type:

(numpy.ndarray)

static get_class_object(method_name, *args, **kwargs)[source]

Return a class object based on the input method.

Parameters:

method_name (list of str) – a list of sensitivity analysis methods

Returns:

Sensitivity class

Return type:

(class)

get_prediction(X, p, treatment, y)[source]

Return the treatment effects prediction.

Parameters:
  • X (np.matrix) – a feature matrix

  • p (np.array) – a propensity score vector between 0 and 1

  • treatment (np.array) – a treatment vector (1 if treated, otherwise 0)

  • y (np.array) – an outcome vector

Returns:

Predictions of treatment effects

Return type:

(numpy.ndarray)

sensitivity_analysis(methods, sample_size=None, confound='one_sided', alpha_range=None)[source]

Return the sensitivity data computed by the specified methods.

Parameters:
  • methods (list of str) – a list of sensitivity analysis methods

  • sample_size (float, optional) – ratio used to subset the original data

  • confound (string, optional) – the name of the confounding function

  • alpha_range (np.array, optional) – a parameter to pass to the confounding function

Returns:
  • X (np.matrix): a feature matrix

  • p (np.array): a propensity score vector between 0 and 1

  • treatment (np.array): a treatment vector (1 if treated, otherwise 0)

  • y (np.array): an outcome vector

sensitivity_estimate()[source]
summary(method)[source]

Summary report.

Parameters:

method (str) – sensitivity analysis method

Returns:

a summary dataframe

Return type:

(pd.DataFrame)
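A minimal sketch of the refutation workflow. The method-name strings are assumed from the subclass names documented below, and the data, column names, and learner choice are illustrative:

>>> import numpy as np
>>> import pandas as pd
>>> from sklearn.linear_model import LinearRegression
>>> from causalml.inference.meta import BaseXRegressor
>>> from causalml.metrics import Sensitivity
>>> df = pd.DataFrame(np.random.normal(size=(1000, 2)), columns=['x1', 'x2'])
>>> df['p'] = np.random.uniform(0.2, 0.8, 1000)   # propensity score
>>> df['w'] = np.random.binomial(1, 0.5, 1000)    # treatment flag
>>> df['y'] = np.random.normal(size=1000)         # outcome
>>> sens = Sensitivity(df=df, inference_features=['x1', 'x2'], p_col='p',
...                    treatment_col='w', outcome_col='y',
...                    learner=BaseXRegressor(learner=LinearRegression()))
>>> res = sens.sensitivity_analysis(methods=['Placebo Treatment', 'Random Cause'],
...                                 sample_size=0.5)  # method names assumed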

class causalml.metrics.SensitivityPlaceboTreatment(*args, **kwargs)[source]

Bases: Sensitivity

Replaces the treatment variable with a randomly generated variable.

sensitivity_estimate()[source]

Summary report.

Parameters:

return_ci (bool, optional) – whether to return the confidence interval

Returns:

a summary dataframe

Return type:

(pd.DataFrame)

class causalml.metrics.SensitivityRandomCause(*args, **kwargs)[source]

Bases: Sensitivity

Adds an irrelevant random covariate to the dataframe.

sensitivity_estimate()[source]
class causalml.metrics.SensitivityRandomReplace(*args, **kwargs)[source]

Bases: Sensitivity

Replaces a random covariate with an irrelevant variable.

sensitivity_estimate()[source]

Replaces a random covariate with an irrelevant variable.

class causalml.metrics.SensitivitySelectionBias(*args, confound='one_sided', alpha_range=None, sensitivity_features=None, **kwargs)[source]

Bases: Sensitivity

Reference:

[1] Blackwell, Matthew. “A selection bias approach to sensitivity analysis for causal effects.” Political Analysis 22.2 (2014): 169-182. https://www.mattblackwell.org/files/papers/causalsens.pdf

[2] The confounding parameter alpha_range uses the same range as in: https://github.com/mattblackwell/causalsens/blob/master/R/causalsens.R

causalsens()[source]
static partial_rsqs_confounding(sens_df, feature_name, partial_rsqs_value, range=0.01)[source]

Check the partial R-squared values of a feature against the corresponding confounding amount of the ATE.

Parameters:
  • sens_df (pandas.DataFrame) – a data frame output from causalsens

  • feature_name (str) – feature name to check

  • partial_rsqs_value (float) – partial R-squared value of the feature

  • range (float) – range to search from sens_df

Returns:

min and max values of the confounding amount

static plot(sens_df, partial_rsqs_df=None, type='raw', ci=False, partial_rsqs=False)[source]

Plot the results of a sensitivity analysis against unmeasured confounding.

Parameters:
  • sens_df (pandas.DataFrame) – a data frame output from causalsens

  • partial_rsqs_df (pandas.DataFrame) – a data frame output from causalsens including partial R-squared values

  • type (str, optional) – the type of plot to draw; ‘raw’ or ‘r.squared’ are supported

  • ci (bool, optional) – whether to plot confidence intervals

  • partial_rsqs (bool, optional) – whether to plot partial R-squared results

summary(method='Selection Bias')[source]

Summary report for the Selection Bias method.

Parameters:

method (str) – sensitivity analysis method

Returns:

a summary dataframe

Return type:

(pd.DataFrame)

class causalml.metrics.SensitivitySubsetData(*args, **kwargs)[source]

Bases: Sensitivity

Takes a random subset of size sample_size of the data.

sensitivity_estimate()[source]
causalml.metrics.ape(y, p)[source]

Absolute Percentage Error (APE).

Parameters:
  • y (float) – target

  • p (float) – prediction

Returns:

APE

Return type:

e (float)

causalml.metrics.auuc_score(df, outcome_col='y', treatment_col='w', treatment_effect_col='tau', normalize=True, tmle=False, *args, **kwarg)[source]

Calculate the AUUC (Area Under the Uplift Curve) score.

Parameters:
  • df (pandas.DataFrame) – a data frame with model estimates and actual data as columns

  • outcome_col (str, optional) – the column name for the actual outcome

  • treatment_col (str, optional) – the column name for the treatment indicator (0 or 1)

  • treatment_effect_col (str, optional) – the column name for the true treatment effect

  • normalize (bool, optional) – whether to normalize the y-axis to 1 or not

Returns:

the AUUC score

Return type:

(float)
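A minimal scoring sketch (the column names and random data are illustrative); without a true treatment_effect_col the score falls back to the observed outcomes:

>>> import numpy as np
>>> import pandas as pd
>>> from causalml.metrics import auuc_score
>>> df = pd.DataFrame({
...     'y': np.random.binomial(1, 0.3, 1000),    # observed outcome
...     'w': np.random.binomial(1, 0.5, 1000),    # treatment indicator
...     'model_a': np.random.uniform(size=1000),  # a model's uplift estimates
... })
>>> auuc_score(df, outcome_col='y', treatment_col='w', normalize=True)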

causalml.metrics.classification_metrics(y, p, w=None, metrics={'AUC': <function roc_auc_score>, 'Log Loss': <function logloss>})[source]

Log metrics for classifiers.

Parameters:
  • y (numpy.array) – target

  • p (numpy.array) – prediction

  • w (numpy.array, optional) – a treatment vector (1 or True: treatment, 0 or False: control). If given, log metrics for the treatment and control group separately

  • metrics (dict, optional) – a dictionary of the metric names and functions

causalml.metrics.get_cumgain(df, outcome_col='y', treatment_col='w', treatment_effect_col='tau', normalize=False, random_seed=42)[source]

Get cumulative gains of model estimates in population.

If the true treatment effect is provided (e.g. in synthetic data), it’s calculated as the cumulative gain of the true treatment effect in each population. Otherwise, it’s calculated as the cumulative difference between the mean outcomes of the treatment and control groups in each population.

For details, see Section 4.1 of Gutierrez and Gérardy (2016), Causal Inference and Uplift Modeling: A review of the literature.

For the former, treatment_effect_col should be provided. For the latter, both outcome_col and treatment_col should be provided.

Parameters:
  • df (pandas.DataFrame) – a data frame with model estimates and actual data as columns

  • outcome_col (str, optional) – the column name for the actual outcome

  • treatment_col (str, optional) – the column name for the treatment indicator (0 or 1)

  • treatment_effect_col (str, optional) – the column name for the true treatment effect

  • normalize (bool, optional) – whether to normalize the y-axis to 1 or not

  • random_seed (int, optional) – random seed for numpy.random.rand()

Returns:

cumulative gains of model estimates in population

Return type:

(pandas.DataFrame)

causalml.metrics.get_cumlift(df, outcome_col='y', treatment_col='w', treatment_effect_col='tau', random_seed=42)[source]

Get average uplifts of model estimates in cumulative population.

If the true treatment effect is provided (e.g. in synthetic data), it’s calculated as the mean of the true treatment effect in each cumulative population. Otherwise, it’s calculated as the difference between the mean outcomes of the treatment and control groups in each cumulative population.

For details, see Section 4.1 of Gutierrez and Gérardy (2016), Causal Inference and Uplift Modeling: A review of the literature.

For the former, treatment_effect_col should be provided. For the latter, both outcome_col and treatment_col should be provided.

Parameters:
  • df (pandas.DataFrame) – a data frame with model estimates and actual data as columns

  • outcome_col (str, optional) – the column name for the actual outcome

  • treatment_col (str, optional) – the column name for the treatment indicator (0 or 1)

  • treatment_effect_col (str, optional) – the column name for the true treatment effect

  • random_seed (int, optional) – random seed for numpy.random.rand()

Returns:

average uplifts of model estimates in cumulative population

Return type:

(pandas.DataFrame)

causalml.metrics.get_qini(df, outcome_col='y', treatment_col='w', treatment_effect_col='tau', normalize=False, random_seed=42)[source]

Get Qini of model estimates in population.

If the true treatment effect is provided (e.g. in synthetic data), it’s calculated as the cumulative gain of the true treatment effect in each population. Otherwise, it’s calculated as the cumulative difference between the mean outcomes of the treatment and control groups in each population.

For details, see Radcliffe (2007), Using Control Group to Target on Predicted Lift: Building and Assessing Uplift Models

For the former, treatment_effect_col should be provided. For the latter, both outcome_col and treatment_col should be provided.

Parameters:
  • df (pandas.DataFrame) – a data frame with model estimates and actual data as columns

  • outcome_col (str, optional) – the column name for the actual outcome

  • treatment_col (str, optional) – the column name for the treatment indicator (0 or 1)

  • treatment_effect_col (str, optional) – the column name for the true treatment effect

  • normalize (bool, optional) – whether to normalize the y-axis to 1 or not

  • random_seed (int, optional) – random seed for numpy.random.rand()

Returns:

cumulative gains of model estimates in population

Return type:

(pandas.DataFrame)

causalml.metrics.get_tmlegain(df, inference_col, learner=LGBMRegressor(learning_rate=0.05, n_estimators=300, num_leaves=64), outcome_col='y', treatment_col='w', p_col='p', n_segment=5, cv=None, calibrate_propensity=True, ci=False)[source]

Get TMLE based average uplifts of model estimates of segments.

Parameters:
  • df (pandas.DataFrame) – a data frame with model estimates and actual data as columns

  • inference_col (list of str) – a list of columns used by the learner for inference

  • learner (optional) – a model used by TMLE to estimate the outcome

  • outcome_col (str, optional) – the column name for the actual outcome

  • treatment_col (str, optional) – the column name for the treatment indicator (0 or 1)

  • p_col (str, optional) – the column name for propensity score

  • n_segment (int, optional) – number of segments for which TMLE will estimate the uplift

  • cv (sklearn.model_selection._BaseKFold, optional) – sklearn CV object

  • calibrate_propensity (bool, optional) – whether to calibrate the propensity score

  • ci (bool, optional) – whether to return confidence intervals for the ATE

Returns:

cumulative gains of model estimates based on TMLE

Return type:

(pandas.DataFrame)
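A hedged call sketch (all columns and the random data are illustrative; by default an LGBMRegressor is used as the outcome learner):

>>> import numpy as np
>>> import pandas as pd
>>> from causalml.metrics import get_tmlegain
>>> df = pd.DataFrame(np.random.normal(size=(1000, 2)), columns=['x1', 'x2'])
>>> df['y'] = np.random.normal(size=1000)         # outcome
>>> df['w'] = np.random.binomial(1, 0.5, 1000)    # treatment indicator
>>> df['p'] = np.random.uniform(0.2, 0.8, 1000)   # propensity score
>>> df['model_a'] = np.random.uniform(size=1000)  # a model's uplift estimates
>>> lift = get_tmlegain(df, inference_col=['x1', 'x2'], outcome_col='y',
...                     treatment_col='w', p_col='p', n_segment=5)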

causalml.metrics.get_tmleqini(df, inference_col, learner=LGBMRegressor(learning_rate=0.05, n_estimators=300, num_leaves=64), outcome_col='y', treatment_col='w', p_col='p', n_segment=5, cv=None, calibrate_propensity=True, ci=False, normalize=False)[source]

Get TMLE based Qini of model estimates by segments.

Parameters:
  • df (pandas.DataFrame) – a data frame with model estimates and actual data as columns

  • inference_col (list of str) – a list of columns used by the learner for inference

  • learner (optional) – a model used by TMLE to estimate the outcome

  • outcome_col (str, optional) – the column name for the actual outcome

  • treatment_col (str, optional) – the column name for the treatment indicator (0 or 1)

  • p_col (str, optional) – the column name for propensity score

  • n_segment (int, optional) – number of segments for which TMLE will estimate the uplift

  • cv (sklearn.model_selection._BaseKFold, optional) – sklearn CV object

  • calibrate_propensity (bool, optional) – whether to calibrate the propensity score

  • ci (bool, optional) – whether to return confidence intervals for the ATE

Returns:

cumulative gains of model estimates based on TMLE

Return type:

(pandas.DataFrame)

causalml.metrics.gini(y, p)[source]

Normalized Gini Coefficient.

Parameters:
  • y (numpy.array) – target

  • p (numpy.array) – prediction

Returns:

normalized Gini coefficient

Return type:

e (numpy.float64)

causalml.metrics.logloss(y, p)[source]

Bounded log loss error.

Parameters:
  • y (numpy.array) – target

  • p (numpy.array) – prediction

Returns:

bounded log loss error

causalml.metrics.mae(y_true, y_pred, *, sample_weight=None, multioutput='uniform_average')

Mean absolute error regression loss.

Read more in the User Guide.

Parameters:
  • y_true (array-like of shape (n_samples,) or (n_samples, n_outputs)) – Ground truth (correct) target values.

  • y_pred (array-like of shape (n_samples,) or (n_samples, n_outputs)) – Estimated target values.

  • sample_weight (array-like of shape (n_samples,), default=None) – Sample weights.

  • multioutput ({'raw_values', 'uniform_average'} or array-like of shape (n_outputs,), default='uniform_average') –

    Defines aggregating of multiple output values. Array-like value defines weights used to average errors.

    ’raw_values’ :

    Returns a full set of errors in case of multioutput input.

    ’uniform_average’ :

    Errors of all outputs are averaged with uniform weight.

Returns:

loss – If multioutput is ‘raw_values’, then mean absolute error is returned for each output separately. If multioutput is ‘uniform_average’ or an ndarray of weights, then the weighted average of all output errors is returned.

MAE output is non-negative floating point. The best value is 0.0.

Return type:

float or ndarray of floats

Examples

>>> from sklearn.metrics import mean_absolute_error
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> mean_absolute_error(y_true, y_pred)
0.5
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> mean_absolute_error(y_true, y_pred)
0.75
>>> mean_absolute_error(y_true, y_pred, multioutput='raw_values')
array([0.5, 1. ])
>>> mean_absolute_error(y_true, y_pred, multioutput=[0.3, 0.7])
0.85...
causalml.metrics.mape(y, p)[source]

Mean Absolute Percentage Error (MAPE).

Parameters:
  • y (numpy.array) – target

  • p (numpy.array) – prediction

Returns:

MAPE

Return type:

e (numpy.float64)

causalml.metrics.plot(df, kind='gain', tmle=False, n=100, figsize=(8, 8), *args, **kwarg)[source]

Plot one of the lift/gain/Qini charts of model estimates.

A factory method for plot_lift(), plot_gain(), plot_qini(), plot_tmlegain() and plot_tmleqini(). For details, please see the docstrings of each function.

Parameters:
  • df (pandas.DataFrame) – a data frame with model estimates and actual data as columns.

  • kind (str, optional) – the kind of plot to draw. ‘lift’, ‘gain’, and ‘qini’ are supported.

  • n (int, optional) – the number of samples to be used for plotting.
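A minimal sketch of the factory in use (same data frame layout as in the scoring examples; random data for illustration):

>>> import numpy as np
>>> import pandas as pd
>>> from causalml.metrics import plot
>>> df = pd.DataFrame({
...     'y': np.random.binomial(1, 0.3, 1000),
...     'w': np.random.binomial(1, 0.5, 1000),
...     'model_a': np.random.uniform(size=1000),
... })
>>> plot(df, kind='gain', n=100, figsize=(8, 8))  # dispatches to plot_gain()
>>> plot(df, kind='qini')                         # dispatches to plot_qini()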

causalml.metrics.plot_gain(df, outcome_col='y', treatment_col='w', treatment_effect_col='tau', normalize=False, random_seed=42, n=100, figsize=(8, 8))[source]

Plot the cumulative gain chart (or uplift curve) of model estimates.

If the true treatment effect is provided (e.g. in synthetic data), it’s calculated as the cumulative gain of the true treatment effect in each population. Otherwise, it’s calculated as the cumulative difference between the mean outcomes of the treatment and control groups in each population.

For details, see Section 4.1 of Gutierrez and Gérardy (2016), Causal Inference and Uplift Modeling: A review of the literature.

For the former, treatment_effect_col should be provided. For the latter, both outcome_col and treatment_col should be provided.

Parameters:
  • df (pandas.DataFrame) – a data frame with model estimates and actual data as columns

  • outcome_col (str, optional) – the column name for the actual outcome

  • treatment_col (str, optional) – the column name for the treatment indicator (0 or 1)

  • treatment_effect_col (str, optional) – the column name for the true treatment effect

  • normalize (bool, optional) – whether to normalize the y-axis to 1 or not

  • random_seed (int, optional) – random seed for numpy.random.rand()

  • n (int, optional) – the number of samples to be used for plotting

causalml.metrics.plot_lift(df, outcome_col='y', treatment_col='w', treatment_effect_col='tau', random_seed=42, n=100, figsize=(8, 8))[source]

Plot the lift chart of model estimates in cumulative population.

If the true treatment effect is provided (e.g. in synthetic data), it’s calculated as the mean of the true treatment effect in each cumulative population. Otherwise, it’s calculated as the difference between the mean outcomes of the treatment and control groups in each cumulative population.

For details, see Section 4.1 of Gutierrez and Gérardy (2016), Causal Inference and Uplift Modeling: A review of the literature.

For the former, treatment_effect_col should be provided. For the latter, both outcome_col and treatment_col should be provided.

Parameters:
  • df (pandas.DataFrame) – a data frame with model estimates and actual data as columns

  • outcome_col (str, optional) – the column name for the actual outcome

  • treatment_col (str, optional) – the column name for the treatment indicator (0 or 1)

  • treatment_effect_col (str, optional) – the column name for the true treatment effect

  • random_seed (int, optional) – random seed for numpy.random.rand()

  • n (int, optional) – the number of samples to be used for plotting

causalml.metrics.plot_qini(df, outcome_col='y', treatment_col='w', treatment_effect_col='tau', normalize=False, random_seed=42, n=100, figsize=(8, 8))[source]

Plot the Qini chart (or uplift curve) of model estimates.

If the true treatment effect is provided (e.g. in synthetic data), it’s calculated as the cumulative gain of the true treatment effect in each population. Otherwise, it’s calculated as the cumulative difference between the mean outcomes of the treatment and control groups in each population.

For details, see Radcliffe (2007), Using Control Group to Target on Predicted Lift: Building and Assessing Uplift Models

For the former, treatment_effect_col should be provided. For the latter, both outcome_col and treatment_col should be provided.

Parameters:
  • df (pandas.DataFrame) – a data frame with model estimates and actual data as columns

  • outcome_col (str, optional) – the column name for the actual outcome

  • treatment_col (str, optional) – the column name for the treatment indicator (0 or 1)

  • treatment_effect_col (str, optional) – the column name for the true treatment effect

  • normalize (bool, optional) – whether to normalize the y-axis to 1 or not

  • random_seed (int, optional) – random seed for numpy.random.rand()

  • n (int, optional) – the number of samples to be used for plotting

causalml.metrics.plot_tmlegain(df, inference_col, learner=LGBMRegressor(learning_rate=0.05, n_estimators=300, num_leaves=64), outcome_col='y', treatment_col='w', p_col='tau', n_segment=5, cv=None, calibrate_propensity=True, ci=False, figsize=(8, 8))[source]

Plot the lift chart based on TMLE estimation.

Parameters:
  • df (pandas.DataFrame) – a data frame with model estimates and actual data as columns

  • inference_col (list of str) – a list of columns used by the learner for inference

  • learner (optional) – a model used by TMLE to estimate the outcome

  • outcome_col (str, optional) – the column name for the actual outcome

  • treatment_col (str, optional) – the column name for the treatment indicator (0 or 1)

  • p_col (str, optional) – the column name for propensity score

  • n_segment (int, optional) – number of segments for which TMLE will estimate the uplift

  • cv (sklearn.model_selection._BaseKFold, optional) – sklearn CV object

  • calibrate_propensity (bool, optional) – whether to calibrate the propensity score

  • ci (bool, optional) – whether to return confidence intervals for the ATE

causalml.metrics.plot_tmleqini(df, inference_col, learner=LGBMRegressor(learning_rate=0.05, n_estimators=300, num_leaves=64), outcome_col='y', treatment_col='w', p_col='tau', n_segment=5, cv=None, calibrate_propensity=True, ci=False, figsize=(8, 8))[source]

Plot the Qini chart based on TMLE estimation.

Parameters:
  • df (pandas.DataFrame) – a data frame with model estimates and actual data as columns

  • inference_col (list of str) – a list of columns used by the learner for inference

  • learner (optional) – a model used by TMLE to estimate the outcome

  • outcome_col (str, optional) – the column name for the actual outcome

  • treatment_col (str, optional) – the column name for the treatment indicator (0 or 1)

  • p_col (str, optional) – the column name for propensity score

  • n_segment (int, optional) – number of segments for which TMLE will estimate the uplift

  • cv (sklearn.model_selection._BaseKFold, optional) – sklearn CV object

  • calibrate_propensity (bool, optional) – whether to calibrate the propensity score

  • ci (bool, optional) – whether to return confidence intervals for the ATE

causalml.metrics.qini_score(df, outcome_col='y', treatment_col='w', treatment_effect_col='tau', normalize=True, tmle=False, *args, **kwarg)[source]

Calculate the Qini score: the area between the Qini curves of a model and random.

For details, see Radcliffe (2007), Using Control Group to Target on Predicted Lift: Building and Assessing Uplift Models

Parameters:
  • df (pandas.DataFrame) – a data frame with model estimates and actual data as columns

  • outcome_col (str, optional) – the column name for the actual outcome

  • treatment_col (str, optional) – the column name for the treatment indicator (0 or 1)

  • treatment_effect_col (str, optional) – the column name for the true treatment effect

  • normalize (bool, optional) – whether to normalize the y-axis to 1 or not

Returns:

the Qini score

Return type:

(float)
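The same data frame layout as for auuc_score() works here; a minimal sketch with illustrative columns:

>>> import numpy as np
>>> import pandas as pd
>>> from causalml.metrics import qini_score
>>> df = pd.DataFrame({
...     'y': np.random.binomial(1, 0.3, 1000),
...     'w': np.random.binomial(1, 0.5, 1000),
...     'model_a': np.random.uniform(size=1000),
... })
>>> qini_score(df, outcome_col='y', treatment_col='w', normalize=True)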

causalml.metrics.r2_score(y_true, y_pred, *, sample_weight=None, multioutput='uniform_average')[source]

\(R^2\) (coefficient of determination) regression score function.

Best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0.

Read more in the User Guide.

Parameters:
  • y_true (array-like of shape (n_samples,) or (n_samples, n_outputs)) – Ground truth (correct) target values.

  • y_pred (array-like of shape (n_samples,) or (n_samples, n_outputs)) – Estimated target values.

  • sample_weight (array-like of shape (n_samples,), default=None) – Sample weights.

  • multioutput ({'raw_values', 'uniform_average', 'variance_weighted'}, array-like of shape (n_outputs,) or None, default='uniform_average') –

    Defines aggregating of multiple output scores. Array-like value defines weights used to average scores. Default is “uniform_average”.

    ’raw_values’ :

    Returns a full set of scores in case of multioutput input.

    ’uniform_average’ :

    Scores of all outputs are averaged with uniform weight.

    ’variance_weighted’ :

    Scores of all outputs are averaged, weighted by the variances of each individual output.

    Changed in version 0.19: Default value of multioutput is ‘uniform_average’.

Returns:

z – The \(R^2\) score or ndarray of scores if ‘multioutput’ is ‘raw_values’.

Return type:

float or ndarray of floats

Notes

This is not a symmetric function.

Unlike most other scores, \(R^2\) score may be negative (it need not actually be the square of a quantity R).

This metric is not well-defined for single samples and will return a NaN value if n_samples is less than two.

Examples

>>> from sklearn.metrics import r2_score
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> r2_score(y_true, y_pred)
0.948...
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> r2_score(y_true, y_pred,
...          multioutput='variance_weighted')
0.938...
>>> y_true = [1, 2, 3]
>>> y_pred = [1, 2, 3]
>>> r2_score(y_true, y_pred)
1.0
>>> y_true = [1, 2, 3]
>>> y_pred = [2, 2, 2]
>>> r2_score(y_true, y_pred)
0.0
>>> y_true = [1, 2, 3]
>>> y_pred = [3, 2, 1]
>>> r2_score(y_true, y_pred)
-3.0
causalml.metrics.regression_metrics(y, p, w=None, metrics={'Gini': <function gini>, 'RMSE': <function rmse>, 'sMAPE': <function smape>})[source]

Log metrics for regressors.

Parameters:
  • y (numpy.array) – target

  • p (numpy.array) – prediction

  • w (numpy.array, optional) – a treatment vector (1 or True: treatment, 0 or False: control). If given, log metrics for the treatment and control group separately

  • metrics (dict, optional) – a dictionary of the metric names and functions

causalml.metrics.rmse(y, p)[source]

Root Mean Squared Error (RMSE). :param y: target :type y: numpy.array :param p: prediction :type p: numpy.array

Returns:

RMSE

Return type:

e (numpy.float64)

causalml.metrics.roc_auc_score(y_true, y_score, *, average='macro', sample_weight=None, max_fpr=None, multi_class='raise', labels=None)[source]

Compute Area Under the Receiver Operating Characteristic Curve (ROC AUC) from prediction scores.

Note: this implementation can be used with binary, multiclass and multilabel classification, but some restrictions apply (see Parameters).

Read more in the User Guide.

Parameters:
  • y_true (array-like of shape (n_samples,) or (n_samples, n_classes)) – True labels or binary label indicators. The binary and multiclass cases expect labels with shape (n_samples,) while the multilabel case expects binary label indicators with shape (n_samples, n_classes).

  • y_score (array-like of shape (n_samples,) or (n_samples, n_classes)) –

    Target scores.

    • In the binary case, it corresponds to an array of shape (n_samples,). Both probability estimates and non-thresholded decision values can be provided. The probability estimates correspond to the probability of the class with the greater label, i.e. estimator.classes_[1] and thus estimator.predict_proba(X, y)[:, 1]. The decision values corresponds to the output of estimator.decision_function(X, y). See more information in the User guide;

    • In the multiclass case, it corresponds to an array of shape (n_samples, n_classes) of probability estimates provided by the predict_proba method. The probability estimates must sum to 1 across the possible classes. In addition, the order of the class scores must correspond to the order of labels, if provided, or else to the numerical or lexicographical order of the labels in y_true. See more information in the User guide;

    • In the multilabel case, it corresponds to an array of shape (n_samples, n_classes). Probability estimates are provided by the predict_proba method and the non-thresholded decision values by the decision_function method. The probability estimates correspond to the probability of the class with the greater label for each output of the classifier. See more information in the User guide.

  • average ({'micro', 'macro', 'samples', 'weighted'} or None, default='macro') –

    If None, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data: Note: multiclass ROC AUC currently only handles the ‘macro’ and ‘weighted’ averages.

    'micro':

    Calculate metrics globally by considering each element of the label indicator matrix as a label.

    'macro':

    Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.

    'weighted':

    Calculate metrics for each label, and find their average, weighted by support (the number of true instances for each label).

    'samples':

    Calculate metrics for each instance, and find their average.

    Will be ignored when y_true is binary.

  • sample_weight (array-like of shape (n_samples,), default=None) – Sample weights.

  • max_fpr (float > 0 and <= 1, default=None) – If not None, the standardized partial AUC [2] over the range [0, max_fpr] is returned. For the multiclass case, max_fpr should be either equal to None or 1.0 as AUC ROC partial computation currently is not supported for multiclass.

  • multi_class ({'raise', 'ovr', 'ovo'}, default='raise') –

    Only used for multiclass targets. Determines the type of configuration to use. The default value raises an error, so either 'ovr' or 'ovo' must be passed explicitly.

    'ovr':

    Stands for One-vs-rest. Computes the AUC of each class against the rest [3] [4]. This treats the multiclass case in the same way as the multilabel case. Sensitive to class imbalance even when average == 'macro', because class imbalance affects the composition of each of the ‘rest’ groupings.

    'ovo':

    Stands for One-vs-one. Computes the average AUC of all possible pairwise combinations of classes [5]. Insensitive to class imbalance when average == 'macro'.

  • labels (array-like of shape (n_classes,), default=None) – Only used for multiclass targets. List of labels that index the classes in y_score. If None, the numerical or lexicographical order of the labels in y_true is used.

Returns:

auc

Return type:

float

References

[1] Wikipedia entry for the Receiver operating characteristic.

[2] Donna Katzman McClish. (1989). Analyzing a Portion of the ROC Curve. Medical Decision Making, 9(3), 190-195.

[3] Provost, F., & Domingos, P. (2000). Well-trained PETs: Improving probability estimation trees. CeDER Working Paper #IS-00-04, Stern School of Business, New York University.

[4] Fawcett, T. (2006). An introduction to ROC analysis. Pattern Recognition Letters, 27(8), 861-874.

[5] Hand, D. J., & Till, R. J. (2001). A simple generalisation of the area under the ROC curve for multiple class classification problems. Machine Learning, 45(2), 171-186.

See also

average_precision_score

Area under the precision-recall curve.

roc_curve

Compute Receiver operating characteristic (ROC) curve.

RocCurveDisplay.from_estimator

Plot Receiver Operating Characteristic (ROC) curve given an estimator and some data.

RocCurveDisplay.from_predictions

Plot Receiver Operating Characteristic (ROC) curve given the true and predicted values.

Examples

Binary case:

>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.metrics import roc_auc_score
>>> X, y = load_breast_cancer(return_X_y=True)
>>> clf = LogisticRegression(solver="liblinear", random_state=0).fit(X, y)
>>> roc_auc_score(y, clf.predict_proba(X)[:, 1])
0.99...
>>> roc_auc_score(y, clf.decision_function(X))
0.99...

Multiclass case:

>>> from sklearn.datasets import load_iris
>>> X, y = load_iris(return_X_y=True)
>>> clf = LogisticRegression(solver="liblinear").fit(X, y)
>>> roc_auc_score(y, clf.predict_proba(X), multi_class='ovr')
0.99...

Multilabel case:

>>> import numpy as np
>>> from sklearn.datasets import make_multilabel_classification
>>> from sklearn.multioutput import MultiOutputClassifier
>>> X, y = make_multilabel_classification(random_state=0)
>>> clf = MultiOutputClassifier(clf).fit(X, y)
>>> # get a list of n_output containing probability arrays of shape
>>> # (n_samples, n_classes)
>>> y_pred = clf.predict_proba(X)
>>> # extract the positive columns for each output
>>> y_pred = np.transpose([pred[:, 1] for pred in y_pred])
>>> roc_auc_score(y, y_pred, average=None)
array([0.82..., 0.86..., 0.94..., 0.85... , 0.94...])
>>> from sklearn.linear_model import RidgeClassifierCV
>>> clf = RidgeClassifierCV().fit(X, y)
>>> roc_auc_score(y, clf.decision_function(X), average=None)
array([0.81..., 0.84... , 0.93..., 0.87..., 0.94...])
causalml.metrics.smape(y, p)[source]

Symmetric Mean Absolute Percentage Error (sMAPE).

Parameters:
  • y (numpy.array) – target

  • p (numpy.array) – prediction

Returns:

sMAPE

Return type:

e (numpy.float64)

causalml.feature_selection module

class causalml.feature_selection.FilterSelect[source]

Bases: object

A class for feature importance methods.

filter_D(data, features, y_name, n_bins=10, method='KL', control_group='control', experiment_group_column='treatment_group_key', null_impute=None)[source]

Rank features based on the chosen divergence measure.

Parameters:
  • data (pd.Dataframe) – DataFrame containing outcome, features, and experiment group

  • features (list of string) – list of feature names, that are columns in the data DataFrame

  • y_name (string) – name of the outcome variable

  • method (string, optional, default = 'KL') – taking one of the following values {‘F’, ‘LR’, ‘KL’, ‘ED’, ‘Chi’}. The feature selection method used to rank the features: ‘F’ for F-test, ‘LR’ for likelihood ratio test, and ‘KL’, ‘ED’, ‘Chi’ for the bin-based uplift filter methods using KL divergence, Euclidean distance, and chi-square, respectively

  • experiment_group_column (string, optional, default = 'treatment_group_key') – the experiment column name in the DataFrame, which contains the treatment and control assignment label

  • control_group (string, optional, default = 'control') – name for control group, value in the experiment group column

  • n_bins (int, optional, default = 10) – number of bins to be used for bin-based uplift filter methods

  • null_impute (str, optional, default=None) – impute np.nan values in the data using one of the following strategies {‘mean’, ‘median’, ‘most_frequent’, None}. If the value is None and nulls are present, an exception will be raised

Returns:

all_result – a data frame containing the feature importance statistics

Return type:

(pd.DataFrame)

filter_F(data, treatment_indicator, features, y_name, order=1)[source]

Rank features based on the F-statistics of the interaction.

Parameters:
  • data (pd.Dataframe) – DataFrame containing outcome, features, and experiment group

  • treatment_indicator (string) – the column name for binary indicator of treatment (value 1) or control (value 0)

  • features (list of string) – list of feature names, that are columns in the data DataFrame

  • y_name (string) – name of the outcome variable

  • order (int) – the order of the feature to be evaluated with the treatment effect. order takes one of 3 values: 1, 2, 3. order=1 corresponds to linear importance of the feature, order=2 corresponds to quadratic and linear importance of the feature, and order=3 will calculate feature importance up to cubic forms.

Returns:

all_result – a data frame containing the feature importance statistics

Return type:

(pd.DataFrame)

filter_LR(data, treatment_indicator, features, y_name, order=1, disp=True)[source]

Rank features based on the LRT-statistics of the interaction.

Parameters:
  • data (pd.Dataframe) – DataFrame containing outcome, features, and experiment group

  • treatment_indicator (string) – the column name for binary indicator of treatment (value 1) or control (value 0)

  • features (list of string) – list of feature names, that are columns in the data DataFrame

  • y_name (string) – name of the outcome variable

  • order (int) – the order of the feature to be evaluated with the treatment effect. order takes one of 3 values: 1, 2, 3. order=1 corresponds to linear importance of the feature, order=2 corresponds to quadratic and linear importance of the feature, and order=3 will calculate feature importance up to cubic forms.

Returns:

all_result – a data frame containing the feature importance statistics

Return type:

(pd.DataFrame)

get_importance(data, features, y_name, method, experiment_group_column='treatment_group_key', control_group='control', treatment_group='treatment', n_bins=5, null_impute=None, order=1, disp=False)[source]

Rank features based on the chosen statistic of the interaction.

Parameters:
  • data (pd.Dataframe) – DataFrame containing outcome, features, and experiment group

  • features (list of string) – list of feature names, that are columns in the data DataFrame

  • y_name (string) – name of the outcome variable

  • method (string, optional, default = 'KL') – taking one of the following values {‘F’, ‘LR’, ‘KL’, ‘ED’, ‘Chi’}. The feature selection method used to rank the features: ‘F’ for F-test, ‘LR’ for likelihood ratio test, and ‘KL’, ‘ED’, ‘Chi’ for the bin-based uplift filter methods using KL divergence, Euclidean distance, and chi-square, respectively

  • experiment_group_column (string) – the experiment column name in the DataFrame, which contains the treatment and control assignment label

  • control_group (string) – name for control group, value in the experiment group column

  • treatment_group (string) – name for treatment group, value in the experiment group column

  • n_bins (int, optional) – number of bins to be used for bin-based uplift filter methods

  • null_impute (str, optional, default=None) – impute np.nan values in the data using one of the following strategies {‘mean’, ‘median’, ‘most_frequent’, None}. If the value is None and nulls are present, an exception will be raised

  • order (int) – the order of the feature to be evaluated with the treatment effect for the F filter and LR filter. order takes one of 3 values: 1, 2, 3. order=1 corresponds to linear importance of the feature, order=2 corresponds to quadratic and linear importance of the feature, and order=3 will calculate feature importance up to cubic forms.

  • disp (bool) – set to True to print convergence messages of the logistic regression used in the LR method

Returns:

all_result – a data frame with the following columns: [‘method’, ‘feature’, ‘rank’, ‘score’, ‘p_value’, ‘misc’]

Return type:

(pd.DataFrame)
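A minimal ranking sketch (column names, group labels, and random data are illustrative):

>>> import numpy as np
>>> import pandas as pd
>>> from causalml.feature_selection import FilterSelect
>>> df = pd.DataFrame({
...     'treatment_group_key': np.random.choice(['control', 'treatment'], 1000),
...     'x1': np.random.normal(size=1000),
...     'x2': np.random.normal(size=1000),
...     'conversion': np.random.binomial(1, 0.3, 1000),
... })
>>> fs = FilterSelect()
>>> ranking = fs.get_importance(data=df, features=['x1', 'x2'], y_name='conversion',
...                             method='F', experiment_group_column='treatment_group_key',
...                             control_group='control', treatment_group='treatment')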

causalml.features module

class causalml.features.LabelEncoder(min_obs=10)[source]

Bases: BaseEstimator

Label Encoder that groups infrequent values into one label.

Code from https://github.com/jeongyoonlee/Kaggler/blob/master/kaggler/preprocessing/data.py

min_obs

minimum number of observations required to assign a distinct label.

Type:

int

label_encoders

label encoders for columns

Type:

list of dict

label_maxes

maximum of labels for columns

Type:

list of int

fit(X, y=None)[source]
fit_transform(X, y=None)[source]

Encode categorical columns into label encoded columns

Parameters:

X (pandas.DataFrame) – categorical columns to encode

Returns:

label encoded columns

Return type:

X (pandas.DataFrame)

transform(X)[source]

Encode categorical columns into label encoded columns

Parameters:

X (pandas.DataFrame) – categorical columns to encode

Returns:

label encoded columns

Return type:

X (pandas.DataFrame)

class causalml.features.OneHotEncoder(min_obs=10)[source]

Bases: BaseEstimator

One-Hot-Encoder that groups infrequent values into one dummy variable.

Code from https://github.com/jeongyoonlee/Kaggler/blob/master/kaggler/preprocessing/data.py

min_obs

minimum number of observations required to create a dummy variable

Type:

int

label_encoders

label encoders and their maximums for columns

Type:

list of (dict, int)

fit(X, y=None)[source]
fit_transform(X, y=None)[source]

Encode categorical columns into sparse matrix with one-hot-encoding.

Parameters:

X (pandas.DataFrame) – categorical columns to encode

Returns:

sparse matrix encoding categorical variables into dummy variables

transform(X)[source]

Encode categorical columns into sparse matrix with one-hot-encoding.

Parameters:

X (pandas.DataFrame) – categorical columns to encode

Returns:

sparse matrix encoding categorical variables into dummy variables

Return type:

X_new (scipy.sparse.coo_matrix)
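A minimal encoding sketch for both classes (the toy column is illustrative); values observed fewer than min_obs times collapse into a single grouped label or dummy:

>>> import pandas as pd
>>> from causalml.features import LabelEncoder, OneHotEncoder
>>> df = pd.DataFrame({'city': ['SF', 'SF', 'NY', 'LA', 'NY', 'SF'] * 5})
>>> le = LabelEncoder(min_obs=10)            # 'LA' (5 obs) gets grouped
>>> df_labeled = le.fit_transform(df)        # pandas.DataFrame of integer labels
>>> ohe = OneHotEncoder(min_obs=10)
>>> X_sparse = ohe.fit_transform(df)         # scipy sparse matrix of dummies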

causalml.features.load_data(data, features, transformations={})[source]

Load data and set the feature matrix and label vector.

Parameters:
  • data (pandas.DataFrame) – total input data

  • features (list of str) – column names to be used in the inference model

  • transformations (dict of (str, func)) – transformations to be applied to features

Returns:

a feature matrix

Return type:

X (numpy.matrix)
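A minimal sketch (the frame and the log transform are illustrative):

>>> import numpy as np
>>> import pandas as pd
>>> from causalml.features import load_data
>>> df = pd.DataFrame({'age': [25.0, 40.0, 31.0], 'income': [50e3, 82e3, 61e3]})
>>> X = load_data(data=df, features=['age', 'income'],
...               transformations={'income': np.log1p})  # per-feature transform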

Module contents