# econml

- **Name:** econml
- **Version:** 0.14.1
- **Home page:** https://github.com/py-why/EconML
- **Summary:** This package contains several methods for calculating Conditional Average Treatment Effects
- **Upload time:** 2023-05-22 14:10:01
- **Author:** PyWhy contributors
- **License:** MIT
- **Keywords:** treatment-effect
            [![Build status](https://github.com/py-why/EconML/actions/workflows/ci.yml/badge.svg)](https://github.com/py-why/EconML/actions/workflows/ci.yml)
[![PyPI version](https://img.shields.io/pypi/v/econml.svg)](https://pypi.org/project/econml/)
[![PyPI wheel](https://img.shields.io/pypi/wheel/econml.svg)](https://pypi.org/project/econml/)
[![Supported Python versions](https://img.shields.io/pypi/pyversions/econml.svg)](https://pypi.org/project/econml/)



<h1><img src="doc/econml-logo-icon.png" width="80px" align="left" style="margin-right: 10px;"> EconML: A Python Package for ML-Based Heterogeneous Treatment Effects Estimation</h1>

**EconML** is a Python package for estimating heterogeneous treatment effects from observational data via machine learning. The package was designed and built as part of the [ALICE project](https://www.microsoft.com/en-us/research/project/alice/) at Microsoft Research, with the goal of combining state-of-the-art machine learning
techniques with econometrics to bring automation to complex causal inference problems. The promise of EconML:

* Implement recent techniques in the literature at the intersection of econometrics and machine learning
* Maintain flexibility in modeling the effect heterogeneity (via techniques such as random forests, boosting, lasso and neural nets), while preserving the causal interpretation of the learned model and often offering valid confidence intervals
* Use a unified API
* Build on standard Python packages for Machine Learning and Data Analysis

One of the biggest promises of machine learning is to automate decision making in a multitude of domains. At the core of many data-driven personalized decision scenarios is the estimation of heterogeneous treatment effects: what is the causal effect of an intervention on an outcome of interest for a sample with a particular set of features? In a nutshell, this toolkit is designed to measure the causal effect of some treatment variable(s) `T` on an outcome
variable `Y`, controlling for a set of features `X, W`, and to estimate how that effect varies as a function of `X`. The methods implemented are applicable even with observational (non-experimental or historical) datasets. For the estimation results to have a causal interpretation, some methods assume no unobserved confounders (i.e. there is no unobserved variable not included in `X, W` that simultaneously has an effect on both `T` and `Y`), while others assume access to an instrument `Z` (i.e. an observed variable `Z` that has an effect on the treatment `T` but no direct effect on the outcome `Y`). Most methods provide confidence intervals and inference results.
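To make the roles of these variables concrete, here is a minimal synthetic-data sketch (pure NumPy, not using EconML itself) in which the treatment effect varies with `X`; because `T` is randomized here, a simple difference in means recovers the average effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 1))        # features driving effect heterogeneity
W = rng.normal(size=(n, 3))        # additional controls / confounders
T = rng.binomial(1, 0.5, size=n)   # randomized binary treatment
# True CATE: the effect of T on Y grows linearly in X[:, 0]
tau = 1.0 + 2.0 * X[:, 0]
Y = tau * T + W @ np.array([0.5, -0.3, 0.2]) + rng.normal(scale=0.1, size=n)

# With a randomized treatment, a difference in means estimates the
# average treatment effect E[tau] ≈ 1.0; EconML's estimators go further
# and estimate how tau varies with X.
ate_hat = Y[T == 1].mean() - Y[T == 0].mean()
```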

For detailed information about the package, consult the documentation at https://econml.azurewebsites.net/.

For information on use cases and background material on causal inference and heterogeneous treatment effects, see our webpage at https://www.microsoft.com/en-us/research/project/econml/

<details>
<summary><strong><em>Table of Contents</em></strong></summary>

- [News](#news)
- [Getting Started](#getting-started)
  - [Installation](#installation)
  - [Usage Examples](#usage-examples)
    - [Estimation Methods](#estimation-methods)
    - [Interpretability](#interpretability)
    - [Causal Model Selection and Cross-Validation](#causal-model-selection-and-cross-validation)
    - [Inference](#inference)
    - [Policy Learning](#policy-learning)
- [For Developers](#for-developers)
  - [Running the tests](#running-the-tests)
  - [Generating the documentation](#generating-the-documentation)
- [Blogs and Publications](#blogs-and-publications)
- [Citation](#citation)
- [Contributing and Feedback](#contributing-and-feedback)
- [References](#references)

</details>

# News

**May 19, 2023:** Release v0.14.1, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.14.1)

<details><summary>Previous releases</summary>

**November 16, 2022:** Release v0.14.0, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.14.0)

**June 17, 2022:** Release v0.13.1, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.13.1)

**January 31, 2022:** Release v0.13.0, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.13.0)

**August 13, 2021:** Release v0.12.0, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.12.0)

**August 5, 2021:** Release v0.12.0b6, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.12.0b6)

**August 3, 2021:** Release v0.12.0b5, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.12.0b5)

**July 9, 2021:** Release v0.12.0b4, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.12.0b4)

**June 25, 2021:** Release v0.12.0b3, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.12.0b3)

**June 18, 2021:** Release v0.12.0b2, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.12.0b2)

**June 7, 2021:** Release v0.12.0b1, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.12.0b1)

**May 18, 2021:** Release v0.11.1, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.11.1)

**May 8, 2021:** Release v0.11.0, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.11.0)

**March 22, 2021:** Release v0.10.0, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.10.0)

**March 11, 2021:** Release v0.9.2, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.9.2)

**March 3, 2021:** Release v0.9.1, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.9.1)

**February 20, 2021:** Release v0.9.0, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.9.0)

**January 20, 2021:** Release v0.9.0b1, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.9.0b1)

**November 20, 2020:** Release v0.8.1, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.8.1)

**November 18, 2020:** Release v0.8.0, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.8.0)

**September 4, 2020:** Release v0.8.0b1, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.8.0b1)

**March 6, 2020:** Release v0.7.0, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.7.0)

**February 18, 2020:** Release v0.7.0b1, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.7.0b1)

**January 10, 2020:** Release v0.6.1, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.6.1)

**December 6, 2019:** Release v0.6, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.6)

**November 21, 2019:** Release v0.5, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.5). 

**June 3, 2019:** Release v0.4, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.4). 

**May 3, 2019:** Release v0.3, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.3).

**April 10, 2019:** Release v0.2, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.2).

**March 6, 2019:** Release v0.1. Please try it out and provide feedback.

</details>

# Getting Started

## Installation

Install the latest release from [PyPI](https://pypi.org/project/econml/):
```
pip install econml
```
To install from source, see [For Developers](#for-developers) section below.

## Usage Examples
### Estimation Methods

<details>
  <summary>Double Machine Learning (aka RLearner) (click to expand)</summary>

  * Linear final stage

  ```Python
  from econml.dml import LinearDML
  from sklearn.linear_model import LassoCV
  from econml.inference import BootstrapInference

  est = LinearDML(model_y=LassoCV(), model_t=LassoCV())
  ### Estimate with OLS confidence intervals
  est.fit(Y, T, X=X, W=W) # W -> high-dimensional confounders, X -> features
  treatment_effects = est.effect(X_test)
  lb, ub = est.effect_interval(X_test, alpha=0.05) # OLS confidence intervals

  ### Estimate with bootstrap confidence intervals
  est.fit(Y, T, X=X, W=W, inference='bootstrap')  # with default bootstrap parameters
  est.fit(Y, T, X=X, W=W, inference=BootstrapInference(n_bootstrap_samples=100))  # or customized
  lb, ub = est.effect_interval(X_test, alpha=0.05) # Bootstrap confidence intervals
  ```

  * Sparse linear final stage

  ```Python
  from econml.dml import SparseLinearDML
  from sklearn.linear_model import LassoCV

  est = SparseLinearDML(model_y=LassoCV(), model_t=LassoCV())
  est.fit(Y, T, X=X, W=W) # X -> high dimensional features
  treatment_effects = est.effect(X_test)
  lb, ub = est.effect_interval(X_test, alpha=0.05) # Confidence intervals via debiased lasso
  ```

  * Generic Machine Learning last stage
  
  ```Python
  from econml.dml import NonParamDML
  from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier

  est = NonParamDML(model_y=RandomForestRegressor(),
                    model_t=RandomForestClassifier(),
                    model_final=RandomForestRegressor(),
                    discrete_treatment=True)
  est.fit(Y, T, X=X, W=W) 
  treatment_effects = est.effect(X_test)
  ```

</details>

<details>
  <summary>Dynamic Double Machine Learning (click to expand)</summary>

  ```Python
  from econml.panel.dml import DynamicDML
  from sklearn.linear_model import LassoCV
  # Use defaults
  est = DynamicDML()
  # Or specify hyperparameters
  est = DynamicDML(model_y=LassoCV(cv=3), 
                   model_t=LassoCV(cv=3), 
                   cv=3)
  est.fit(Y, T, X=X, W=None, groups=groups, inference="auto")
  # Effects
  treatment_effects = est.effect(X_test)
  # Confidence intervals
  lb, ub = est.effect_interval(X_test, alpha=0.05)
  ```
</details>

<details>
  <summary>Causal Forests (click to expand)</summary>

  ```Python
  from econml.dml import CausalForestDML
  from sklearn.linear_model import LassoCV
  # Use defaults
  est = CausalForestDML()
  # Or specify hyperparameters
  est = CausalForestDML(criterion='het', n_estimators=500,       
                        min_samples_leaf=10, 
                        max_depth=10, max_samples=0.5,
                        discrete_treatment=False,
                        model_t=LassoCV(), model_y=LassoCV())
  est.fit(Y, T, X=X, W=W)
  treatment_effects = est.effect(X_test)
  # Confidence intervals via Bootstrap-of-Little-Bags for forests
  lb, ub = est.effect_interval(X_test, alpha=0.05)
  ```
</details>


<details>
  <summary>Orthogonal Random Forests (click to expand)</summary>

  ```Python
  from econml.orf import DMLOrthoForest, DROrthoForest
  from econml.sklearn_extensions.linear_model import WeightedLasso, WeightedLassoCV
  # Use defaults
  est = DMLOrthoForest()
  est = DROrthoForest()
  # Or specify hyperparameters
  est = DMLOrthoForest(n_trees=500, min_leaf_size=10,
                       max_depth=10, subsample_ratio=0.7,
                       lambda_reg=0.01,
                       discrete_treatment=False,
                       model_T=WeightedLasso(alpha=0.01), model_Y=WeightedLasso(alpha=0.01),
                       model_T_final=WeightedLassoCV(cv=3), model_Y_final=WeightedLassoCV(cv=3))
  est.fit(Y, T, X=X, W=W)
  treatment_effects = est.effect(X_test)
  # Confidence intervals via Bootstrap-of-Little-Bags for forests
  lb, ub = est.effect_interval(X_test, alpha=0.05)
  ```
</details>

<details>

<summary>Meta-Learners (click to expand)</summary>
  
  * XLearner

  ```Python
  from econml.metalearners import XLearner
  from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
  import numpy as np

  est = XLearner(models=GradientBoostingRegressor(),
                 propensity_model=GradientBoostingClassifier(),
                 cate_models=GradientBoostingRegressor())
  est.fit(Y, T, X=np.hstack([X, W]))
  treatment_effects = est.effect(np.hstack([X_test, W_test]))

  # Fit with bootstrap confidence interval construction enabled
  est.fit(Y, T, X=np.hstack([X, W]), inference='bootstrap')
  treatment_effects = est.effect(np.hstack([X_test, W_test]))
  lb, ub = est.effect_interval(np.hstack([X_test, W_test]), alpha=0.05) # Bootstrap CIs
  ```
  
  * SLearner

  ```Python
  from econml.metalearners import SLearner
  from sklearn.ensemble import GradientBoostingRegressor
  import numpy as np

  est = SLearner(overall_model=GradientBoostingRegressor())
  est.fit(Y, T, X=np.hstack([X, W]))
  treatment_effects = est.effect(np.hstack([X_test, W_test]))
  ```

  * TLearner

  ```Python
  from econml.metalearners import TLearner
  from sklearn.ensemble import GradientBoostingRegressor
  import numpy as np

  est = TLearner(models=GradientBoostingRegressor())
  est.fit(Y, T, X=np.hstack([X, W]))
  treatment_effects = est.effect(np.hstack([X_test, W_test]))
  ```
</details>
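As a concrete illustration of the meta-learner idea, the T-learner simply fits one outcome model per treatment arm and takes the difference of their predictions as the CATE estimate. A minimal self-contained sketch on synthetic data (not EconML's implementation; the data-generating process and model choices are illustrative):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 4000
X = rng.uniform(-1, 1, size=(n, 1))
T = rng.binomial(1, 0.5, size=n)   # randomized binary treatment
tau = 1.0 + X[:, 0]                # true CATE
Y = X[:, 0] + tau * T + rng.normal(scale=0.1, size=n)

# T-learner: one outcome model per arm, CATE = difference of predictions
model_0 = GradientBoostingRegressor().fit(X[T == 0], Y[T == 0])
model_1 = GradientBoostingRegressor().fit(X[T == 1], Y[T == 1])

X_test = np.array([[0.0], [0.5]])
cate_hat = model_1.predict(X_test) - model_0.predict(X_test)  # ≈ [1.0, 1.5]
```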

<details>
<summary>Doubly Robust Learners (click to expand)
</summary>

* Linear final stage

```Python
from econml.dr import LinearDRLearner
from sklearn.ensemble import GradientBoostingRegressor, GradientBoostingClassifier

est = LinearDRLearner(model_propensity=GradientBoostingClassifier(),
                      model_regression=GradientBoostingRegressor())
est.fit(Y, T, X=X, W=W)
treatment_effects = est.effect(X_test)
lb, ub = est.effect_interval(X_test, alpha=0.05)
```

* Sparse linear final stage

```Python
from econml.dr import SparseLinearDRLearner
from sklearn.ensemble import GradientBoostingRegressor, GradientBoostingClassifier

est = SparseLinearDRLearner(model_propensity=GradientBoostingClassifier(),
                            model_regression=GradientBoostingRegressor())
est.fit(Y, T, X=X, W=W)
treatment_effects = est.effect(X_test)
lb, ub = est.effect_interval(X_test, alpha=0.05)
```

* Nonparametric final stage

```Python
from econml.dr import ForestDRLearner
from sklearn.ensemble import GradientBoostingRegressor, GradientBoostingClassifier

est = ForestDRLearner(model_propensity=GradientBoostingClassifier(),
                      model_regression=GradientBoostingRegressor())
est.fit(Y, T, X=X, W=W) 
treatment_effects = est.effect(X_test)
lb, ub = est.effect_interval(X_test, alpha=0.05)
```
</details>

<details>
<summary>Double Machine Learning with Instrumental Variables (click to expand)</summary>

* Orthogonal instrumental variable learner

```Python
from econml.iv.dml import OrthoIV

est = OrthoIV(projection=False, 
              discrete_treatment=True, 
              discrete_instrument=True)
est.fit(Y, T, Z=Z, X=X, W=W)
treatment_effects = est.effect(X_test)
lb, ub = est.effect_interval(X_test, alpha=0.05) # OLS confidence intervals
```
* Nonparametric double machine learning with instrumental variable

```Python
from econml.iv.dml import NonParamDMLIV

est = NonParamDMLIV(projection=False, 
                    discrete_treatment=True, 
                    discrete_instrument=True)
est.fit(Y, T, Z=Z, X=X, W=W) # no analytical confidence interval available
treatment_effects = est.effect(X_test)
```
</details>

<details>
<summary>Doubly Robust Machine Learning with Instrumental Variables (click to expand)</summary>

* Linear final stage
```Python
from econml.iv.dr import LinearDRIV

est = LinearDRIV(discrete_instrument=True, discrete_treatment=True)
est.fit(Y, T, Z=Z, X=X, W=W)
treatment_effects = est.effect(X_test)
lb, ub = est.effect_interval(X_test, alpha=0.05) # OLS confidence intervals
```

* Sparse linear final stage

```Python
from econml.iv.dr import SparseLinearDRIV

est = SparseLinearDRIV(discrete_instrument=True, discrete_treatment=True)
est.fit(Y, T, Z=Z, X=X, W=W)
treatment_effects = est.effect(X_test)
lb, ub = est.effect_interval(X_test, alpha=0.05) # Debiased lasso confidence intervals
```

* Nonparametric final stage
```Python
from econml.iv.dr import ForestDRIV

est = ForestDRIV(discrete_instrument=True, discrete_treatment=True)
est.fit(Y, T, Z=Z, X=X, W=W)
treatment_effects = est.effect(X_test)
# Confidence intervals via Bootstrap-of-Little-Bags for forests
lb, ub = est.effect_interval(X_test, alpha=0.05) 
```

* Linear intent-to-treat (discrete instrument, discrete treatment)

```Python
from econml.iv.dr import LinearIntentToTreatDRIV
from sklearn.ensemble import GradientBoostingRegressor, GradientBoostingClassifier

est = LinearIntentToTreatDRIV(model_y_xw=GradientBoostingRegressor(),
                              model_t_xwz=GradientBoostingClassifier(),
                              flexible_model_effect=GradientBoostingRegressor())
est.fit(Y, T, Z=Z, X=X, W=W)
treatment_effects = est.effect(X_test)
lb, ub = est.effect_interval(X_test, alpha=0.05) # OLS confidence intervals
```
</details>

<details>
<summary>Deep Instrumental Variables (click to expand)</summary>

```Python
import keras
from econml.iv.nnet import DeepIV

treatment_model = keras.Sequential([keras.layers.Dense(128, activation='relu', input_shape=(2,)),
                                    keras.layers.Dropout(0.17),
                                    keras.layers.Dense(64, activation='relu'),
                                    keras.layers.Dropout(0.17),
                                    keras.layers.Dense(32, activation='relu'),
                                    keras.layers.Dropout(0.17)])
response_model = keras.Sequential([keras.layers.Dense(128, activation='relu', input_shape=(2,)),
                                  keras.layers.Dropout(0.17),
                                  keras.layers.Dense(64, activation='relu'),
                                  keras.layers.Dropout(0.17),
                                  keras.layers.Dense(32, activation='relu'),
                                  keras.layers.Dropout(0.17),
                                  keras.layers.Dense(1)])
est = DeepIV(n_components=10, # Number of gaussians in the mixture density network
             m=lambda z, x: treatment_model(keras.layers.concatenate([z, x])), # Treatment model
             h=lambda t, x: response_model(keras.layers.concatenate([t, x])), # Response model
             n_samples=1 # Number of samples used to estimate the response
             )
est.fit(Y, T, X=X, Z=Z) # Z -> instrumental variables
treatment_effects = est.effect(X_test)
```
</details>

See the <a href="#references">References</a> section for more details.

### Interpretability
<details>
  <summary>Tree Interpreter of the CATE model (click to expand)</summary>
  
  ```Python
  from econml.cate_interpreter import SingleTreeCateInterpreter
  import matplotlib.pyplot as plt

  intrp = SingleTreeCateInterpreter(include_model_uncertainty=True, max_depth=2, min_samples_leaf=10)
  # We interpret the CATE model's behavior based on the features used for heterogeneity
  intrp.interpret(est, X)
  # Plot the tree
  plt.figure(figsize=(25, 5))
  intrp.plot(feature_names=['A', 'B', 'C', 'D'], fontsize=12)
  plt.show()
  ```
  ![image](notebooks/images/dr_cate_tree.png)
  
</details>

<details>
  <summary>Policy Interpreter of the CATE model (click to expand)</summary>

  ```Python
  from econml.cate_interpreter import SingleTreePolicyInterpreter
  import matplotlib.pyplot as plt

  # We find a tree-based treatment policy based on the CATE model
  intrp = SingleTreePolicyInterpreter(risk_level=0.05, max_depth=2, min_samples_leaf=1, min_impurity_decrease=.001)
  intrp.interpret(est, X, sample_treatment_costs=0.2)
  # Plot the tree
  plt.figure(figsize=(25, 5))
  intrp.plot(feature_names=['A', 'B', 'C', 'D'], fontsize=12)
  plt.show()
  ```
  ![image](notebooks/images/dr_policy_tree.png)

</details>

<details>
  <summary>SHAP values for the CATE model (click to expand)</summary>

  ```Python
  import shap
  from econml.dml import CausalForestDML
  est = CausalForestDML()
  est.fit(Y, T, X=X, W=W)
  shap_values = est.shap_values(X)
  shap.summary_plot(shap_values['Y0']['T0'])
  ```

</details>


### Causal Model Selection and Cross-Validation


<details>
  <summary>Causal model selection with the `RScorer` (click to expand)</summary>

  ```Python
  from econml.score import RScorer
  from econml.dml import LinearDML, NonParamDML, DML
  from econml.dr import DRLearner
  from econml.metalearners import XLearner, SLearner, DomainAdaptationLearner
  from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
  from sklearn.linear_model import LassoCV
  from sklearn.model_selection import train_test_split
  from sklearn.preprocessing import PolynomialFeatures

  # split data in train-validation
  X_train, X_val, T_train, T_val, Y_train, Y_val = train_test_split(X, T, Y, test_size=.4)

  # define list of CATE estimators to select among
  reg = lambda: RandomForestRegressor(min_samples_leaf=20)
  clf = lambda: RandomForestClassifier(min_samples_leaf=20)
  models = [('ldml', LinearDML(model_y=reg(), model_t=clf(), discrete_treatment=True,
                               linear_first_stages=False, cv=3)),
            ('xlearner', XLearner(models=reg(), cate_models=reg(), propensity_model=clf())),
            ('dalearner', DomainAdaptationLearner(models=reg(), final_models=reg(), propensity_model=clf())),
            ('slearner', SLearner(overall_model=reg())),
            ('drlearner', DRLearner(model_propensity=clf(), model_regression=reg(),
                                    model_final=reg(), cv=3)),
            ('rlearner', NonParamDML(model_y=reg(), model_t=clf(), model_final=reg(),
                                     discrete_treatment=True, cv=3)),
            ('dml3dlasso', DML(model_y=reg(), model_t=clf(),
                               model_final=LassoCV(cv=3, fit_intercept=False),
                               discrete_treatment=True,
                               featurizer=PolynomialFeatures(degree=3),
                               linear_first_stages=False, cv=3))
  ]

  # fit cate models on train data
  models = [(name, mdl.fit(Y_train, T_train, X=X_train)) for name, mdl in models]

  # score cate models on validation data
  scorer = RScorer(model_y=reg(), model_t=clf(),
                   discrete_treatment=True, cv=3, mc_iters=2, mc_agg='median')
  scorer.fit(Y_val, T_val, X=X_val)
  rscore = [scorer.score(mdl) for _, mdl in models]
  # select the best model
  mdl, _ = scorer.best_model([mdl for _, mdl in models])
  # create weighted ensemble model based on score performance
  mdl, _ = scorer.ensemble([mdl for _, mdl in models])
  ```

</details>

<details>
  <summary>First Stage Model Selection (click to expand)</summary>

First stage models can be selected either by passing cross-validated models (e.g. `sklearn.linear_model.LassoCV`) to EconML's estimators, or by performing first stage model selection outside of EconML and passing in the selected model. Unless you are selecting among a large set of hyperparameters, choosing first stage models externally is preferred because of its statistical and computational advantages.

```Python
from econml.dml import LinearDML
from sklearn import clone
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

cv_model = GridSearchCV(
              estimator=RandomForestRegressor(),
              param_grid={
                  "max_depth": [3, None],
                  "n_estimators": (10, 30, 50, 100, 200),
                  "max_features": (2, 4, 6),
              },
              cv=5,
           )
# First stage model selection within EconML
# This is more direct, but computationally and statistically less efficient
est = LinearDML(model_y=cv_model, model_t=cv_model)
# First stage model selection outside of EconML
# This is the most efficient, but requires boilerplate code
model_t = clone(cv_model).fit(W, T).best_estimator_
model_y = clone(cv_model).fit(W, Y).best_estimator_
est = LinearDML(model_y=model_y, model_t=model_t)
```


</details>

### Inference

Whenever inference is enabled, one can get a more structured `InferenceResults` object with more elaborate inference information, such
as p-values and z-statistics. When the CATE model is linear and parametric, a `summary()` method is also available. For instance:

  ```Python
  from econml.dml import LinearDML
  # Use defaults
  est = LinearDML()
  est.fit(Y, T, X=X, W=W)
  # Get the effect inference summary, which includes the standard error, z test score, p value, and confidence interval given each sample X[i]
  est.effect_inference(X_test).summary_frame(alpha=0.05, value=0, decimals=3)
  # Get the population summary for the entire sample X
  est.effect_inference(X_test).population_summary(alpha=0.1, value=0, decimals=3, tol=0.001)
  #  Get the parameter inference summary for the final model
  est.summary()
  ```
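These summaries rest on a normal approximation: from a point estimate and its standard error one forms the z-statistic, the two-sided p-value, and the (1 - alpha) confidence interval. A minimal sketch of that computation (illustrative numbers, not EconML's internals):

```python
from scipy import stats

# Hypothetical point estimate and standard error for one sample's effect
point, stderr, alpha = 1.8, 0.4, 0.05

z = point / stderr                          # z test statistic
p_value = 2 * (1 - stats.norm.cdf(abs(z)))  # two-sided p-value
z_crit = stats.norm.ppf(1 - alpha / 2)      # ~1.96 for alpha = 0.05
lb, ub = point - z_crit * stderr, point + z_crit * stderr
```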
  
  <details><summary>Example Output (click to expand)</summary>
  
  ```Python
  # Get the effect inference summary, which includes the standard error, z test score, p value, and confidence interval given each sample X[i]
  est.effect_inference(X_test).summary_frame(alpha=0.05, value=0, decimals=3)
  ```
  ![image](notebooks/images/summary_frame.png)
  
  ```Python
  # Get the population summary for the entire sample X
  est.effect_inference(X_test).population_summary(alpha=0.1, value=0, decimals=3, tol=0.001)
  ```
  ![image](notebooks/images/population_summary.png)
  
  ```Python
  #  Get the parameter inference summary for the final model
  est.summary()
  ```
  ![image](notebooks/images/summary.png)
  
  </details>
  

### Policy Learning

You can also perform direct policy learning from observational data using doubly robust methods for offline
policy learning. These methods directly predict a recommended treatment without internally fitting an explicit
model of the conditional average treatment effect.
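The core ingredient is a doubly robust score per candidate treatment arm. A heavily simplified NumPy sketch (not EconML's implementation): with a known randomized propensity and the outcome models taken as zero, the scores reduce to inverse-propensity-weighted outcomes, and comparing per-arm scores recovers the policy that treats only where the effect is positive:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000
X = rng.normal(size=(n, 1))
T = rng.binomial(1, 0.5, size=n)         # randomized binary treatment
tau = np.where(X[:, 0] > 0, 2.0, -1.0)   # treatment helps only when X > 0
Y = tau * T + rng.normal(scale=0.5, size=n)

# Doubly robust scores for each arm; with outcome models set to zero and
# the known propensity p = 0.5 plugged in, they are IPW outcomes
p = 0.5
psi_1 = (T == 1) * Y / p          # score for "treat"
psi_0 = (T == 0) * Y / (1 - p)    # score for "don't treat"

# Per-sample policy "gain" from treating; a policy learner fits a tree or
# forest to this signal. Its group means recover the true effects.
gain = psi_1 - psi_0
gain_pos = gain[X[:, 0] > 0].mean()   # ≈ 2.0, so: treat
gain_neg = gain[X[:, 0] <= 0].mean()  # ≈ -1.0, so: don't treat
```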

<details>
  <summary>Doubly Robust Policy Learning (click to expand)</summary>

```Python
from econml.policy import DRPolicyTree, DRPolicyForest
import matplotlib.pyplot as plt

# fit a single binary decision tree policy
policy = DRPolicyTree(max_depth=1, min_impurity_decrease=0.01, honest=True)
policy.fit(Y, T, X=X, W=W)
# predict the recommended treatment
recommended_T = policy.predict(X)
# plot the binary decision tree
plt.figure(figsize=(10,5))
policy.plot()
# get feature importances
importances = policy.feature_importances_

# fit a binary decision forest
policy = DRPolicyForest(max_depth=1, min_impurity_decrease=0.01, honest=True)
policy.fit(Y, T, X=X, W=W)
# predict the recommended treatment
recommended_T = policy.predict(X)
# plot the first tree in the ensemble
plt.figure(figsize=(10,5))
policy.plot(0)
# get feature importances
importances = policy.feature_importances_
```


  ![image](images/policy_tree.png)
</details>

To see more complex examples, go to the [notebooks](https://github.com/py-why/EconML/tree/main/notebooks) section of the repository. For a more detailed description of the treatment effect estimation algorithms, see the EconML [documentation](https://econml.azurewebsites.net/).

# For Developers

You can get started by cloning this repository. We use 
[setuptools](https://setuptools.readthedocs.io/en/latest/index.html) for building and distributing our package.
We rely on some recent features of setuptools, so make sure to upgrade to a recent version with
`pip install setuptools --upgrade`.  Then from your local copy of the repository you can run `pip install -e .` to get started (but depending on what you're doing you might want to install with extras instead, like `pip install -e .[plt]` if you want to use matplotlib integration, or you can use  `pip install -e .[all]` to include all extras).

## Running the tests

This project uses [pytest](https://docs.pytest.org/) for testing.  To run tests locally after installing the package, you can use `pip install pytest-runner` followed by `python setup.py pytest`.

We have added pytest marks to some tests to make it easier to run a subset, and you can set the PYTEST_ADDOPTS environment variable to take advantage of this.  For instance, you can set it to `-m "not (notebook or automl)"` to skip notebook and automl tests that have some additional dependencies. 

## Generating the documentation

This project's documentation is generated via [Sphinx](https://www.sphinx-doc.org/en/main/index.html).  Note that we use [graphviz](https://graphviz.org/)'s 
`dot` application to produce some of the images in our documentation, so you should make sure that `dot` is installed and in your path.

To generate a local copy of the documentation from a clone of this repository, just run `python setup.py build_sphinx -W -E -a`, which will build the documentation and place it under the `build/sphinx/html` path. 

The reStructuredText files that make up the documentation are stored in the [docs directory](https://github.com/py-why/EconML/tree/main/doc); module documentation is automatically generated by the Sphinx build process.

## Release process

We use GitHub Actions to build and publish the package and documentation.  To create a new release, an admin should perform the following steps:

1. Update the version number in `econml/_version.py` and add a mention of the new version in the news section of this file and commit the changes.
2. Manually run the publish_package.yml workflow to build and publish the package to PyPI.
3. Manually run the publish_docs.yml workflow to build and publish the documentation.
4. Under https://github.com/py-why/EconML/releases, create a new release with a corresponding tag, and update the release notes.

# Blogs and Publications

* June 2019: [Treatment Effects with Instruments paper](https://arxiv.org/pdf/1905.10176.pdf)

* May 2019: [Open Data Science Conference Workshop](https://odsc.com/speakers/machine-learning-estimation-of-heterogeneous-treatment-effect-the-microsoft-econml-library/) 

* 2018: [Orthogonal Random Forests paper](http://proceedings.mlr.press/v97/oprescu19a.html)

* 2017: [DeepIV paper](http://proceedings.mlr.press/v70/hartford17a/hartford17a.pdf)

# Citation

If you use EconML in your research, please cite us as follows:

   Keith Battocchi, Eleanor Dillon, Maggie Hei, Greg Lewis, Paul Oka, Miruna Oprescu, Vasilis Syrgkanis. **EconML: A Python Package for ML-Based Heterogeneous Treatment Effects Estimation.** https://github.com/py-why/EconML, 2019. Version 0.x.

BibTex:

```
@misc{econml,
  author={Keith Battocchi and Eleanor Dillon and Maggie Hei and Greg Lewis and Paul Oka and Miruna Oprescu and Vasilis Syrgkanis},
  title={{EconML}: {A Python Package for ML-Based Heterogeneous Treatment Effects Estimation}},
  howpublished={https://github.com/py-why/EconML},
  note={Version 0.x},
  year={2019}
}
```

# Contributing and Feedback

This project welcomes contributions and suggestions.  We use the [DCO bot](https://github.com/apps/dco) to enforce a [Developer Certificate of Origin](https://developercertificate.org/) which requires users to sign-off on their commits.  This is a simple way to certify that you wrote or otherwise have the right to submit the code you are contributing to the project.  Git provides a `-s` command line option to include this automatically when you commit via `git commit`.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the [PyWhy Code of Conduct](https://github.com/py-why/governance/blob/main/CODE-OF-CONDUCT.md).

# References

S. Athey, S. Wager.
**Policy Learning with Observational Data.**
*Econometrica, 89(1), 133-161*, 2021.

X. Nie, S. Wager.
**Quasi-Oracle Estimation of Heterogeneous Treatment Effects.**
[*Biometrika*](https://doi.org/10.1093/biomet/asaa076), 2020.

V. Syrgkanis, V. Lei, M. Oprescu, M. Hei, K. Battocchi, G. Lewis.
**Machine Learning Estimation of Heterogeneous Treatment Effects with Instruments.**
[*Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS)*](https://arxiv.org/abs/1905.10176), 2019
**(Spotlight Presentation)**

D. Foster, V. Syrgkanis.
**Orthogonal Statistical Learning.**
[*Proceedings of the 32nd Annual Conference on Learning Theory (COLT)*](https://arxiv.org/pdf/1901.09036.pdf), 2019
**(Best Paper Award)**

M. Oprescu, V. Syrgkanis and Z. S. Wu.
**Orthogonal Random Forest for Causal Inference.**
[*Proceedings of the 36th International Conference on Machine Learning (ICML)*](http://proceedings.mlr.press/v97/oprescu19a.html), 2019.

S. Künzel, J. Sekhon, J. Bickel and B. Yu.
**Metalearners for estimating heterogeneous treatment effects using machine learning.**
[*Proceedings of the national academy of sciences, 116(10), 4156-4165*](https://www.pnas.org/content/116/10/4156), 2019.

S. Athey, J. Tibshirani, S. Wager.
**Generalized random forests.**
[*Annals of Statistics, 47, no. 2, 1148--1178*](https://projecteuclid.org/euclid.aos/1547197251), 2019.

V. Chernozhukov, D. Nekipelov, V. Semenova, V. Syrgkanis.
**Plug-in Regularized Estimation of High-Dimensional Parameters in Nonlinear Semiparametric Models.**
[*Arxiv preprint arxiv:1806.04823*](https://arxiv.org/abs/1806.04823), 2018.

S. Wager, S. Athey.
**Estimation and Inference of Heterogeneous Treatment Effects using Random Forests.**
[*Journal of the American Statistical Association, 113:523, 1228-1242*](https://www.tandfonline.com/doi/citedby/10.1080/01621459.2017.1319839), 2018.

Jason Hartford, Greg Lewis, Kevin Leyton-Brown, and Matt Taddy. **Deep IV: A flexible approach for counterfactual prediction.** [*Proceedings of the 34th International Conference on Machine Learning, ICML'17*](http://proceedings.mlr.press/v70/hartford17a/hartford17a.pdf), 2017.

V. Chernozhukov, D. Chetverikov, M. Demirer, E. Duflo, C. Hansen, and a. W. Newey. **Double Machine Learning for Treatment and Causal Parameters.** [*ArXiv preprint arXiv:1608.00060*](https://arxiv.org/abs/1608.00060), 2016.

Dudik, M., Erhan, D., Langford, J., & Li, L.
**Doubly robust policy evaluation and optimization.**
Statistical Science, 29(4), 485-511, 2014.

            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/py-why/EconML",
    "name": "econml",
    "maintainer": "",
    "docs_url": null,
    "requires_python": "",
    "maintainer_email": "",
    "keywords": "treatment-effect",
    "author": "PyWhy contributors",
    "author_email": "",
    "download_url": "https://files.pythonhosted.org/packages/2c/5a/cbfd4690a9759d324a821ee354ca1b0fe194c6e12dd6721e20858139f54a/econml-0.14.1.tar.gz",
    "platform": null,
    "description": "[![Build status](https://github.com/py-why/EconML/actions/workflows/ci.yml/badge.svg)](https://github.com/py-why/EconML/actions/workflows/ci.yml)\n[![PyPI version](https://img.shields.io/pypi/v/econml.svg)](https://pypi.org/project/econml/)\n[![PyPI wheel](https://img.shields.io/pypi/wheel/econml.svg)](https://pypi.org/project/econml/)\n[![Supported Python versions](https://img.shields.io/pypi/pyversions/econml.svg)](https://pypi.org/project/econml/)\n\n\n\n<h1><img src=\"doc/econml-logo-icon.png\" width=\"80px\" align=\"left\" style=\"margin-right: 10px;\"> EconML: A Python Package for ML-Based Heterogeneous Treatment Effects Estimation</h1>\n\n**EconML** is a Python package for estimating heterogeneous treatment effects from observational data via machine learning. This package was designed and built as part of the [ALICE project](https://www.microsoft.com/en-us/research/project/alice/) at Microsoft Research with the goal to combine state-of-the-art machine learning \ntechniques with econometrics to bring automation to complex causal inference problems. The promise of EconML:\n\n* Implement recent techniques in the literature at the intersection of econometrics and machine learning\n* Maintain flexibility in modeling the effect heterogeneity (via techniques such as random forests, boosting, lasso and neural nets), while preserving the causal interpretation of the learned model and often offering valid confidence intervals\n* Use a unified API\n* Build on standard Python packages for Machine Learning and Data Analysis\n\nOne of the biggest promises of machine learning is to automate decision making in a multitude of domains. At the core of many data-driven personalized decision scenarios is the estimation of heterogeneous treatment effects: what is the causal effect of an intervention on an outcome of interest for a sample with a particular set of features? 
In a nutshell, this toolkit is designed to measure the causal effect of some treatment variable(s) `T` on an outcome \nvariable `Y`, controlling for a set of features `X, W` and how does that effect vary as a function of `X`. The methods implemented are applicable even with observational (non-experimental or historical) datasets. For the estimation results to have a causal interpretation, some methods assume no unobserved confounders (i.e. there is no unobserved variable not included in `X, W` that simultaneously has an effect on both `T` and `Y`), while others assume access to an instrument `Z` (i.e. an observed variable `Z` that has an effect on the treatment `T` but no direct effect on the outcome `Y`). Most methods provide confidence intervals and inference results.\n\nFor detailed information about the package, consult the documentation at https://econml.azurewebsites.net/.\n\nFor information on use cases and background material on causal inference and heterogeneous treatment effects see our webpage at https://www.microsoft.com/en-us/research/project/econml/\n\n<details>\n<summary><strong><em>Table of Contents</em></strong></summary>\n\n- [News](#news)\n- [Getting Started](#getting-started)\n  - [Installation](#installation)\n  - [Usage Examples](#usage-examples)\n    - [Estimation Methods](#estimation-methods)\n    - [Interpretability](#interpretability)\n    - [Causal Model Selection and Cross-Validation](#causal-model-selection-and-cross-validation)\n    - [Inference](#inference)\n    - [Policy Learning](#policy-learning)\n- [For Developers](#for-developers)\n  - [Running the tests](#running-the-tests)\n  - [Generating the documentation](#generating-the-documentation)\n- [Blogs and Publications](#blogs-and-publications)\n- [Citation](#citation)\n- [Contributing and Feedback](#contributing-and-feedback)\n- [References](#references)\n\n</details>\n\n# News\n\n**May 19, 2023:** Release v0.14.1, see release notes 
[here](https://github.com/py-why/EconML/releases/tag/v0.14.1)\n\n<details><summary>Previous releases</summary>\n\n**November 16, 2022:** Release v0.14.0, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.14.0)\n\n**June 17, 2022:** Release v0.13.1, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.13.1)\n\n**January 31, 2022:** Release v0.13.0, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.13.0)\n\n**August 13, 2021:** Release v0.12.0, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.12.0)\n\n**August 5, 2021:** Release v0.12.0b6, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.12.0b6)\n\n**August 3, 2021:** Release v0.12.0b5, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.12.0b5)\n\n**July 9, 2021:** Release v0.12.0b4, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.12.0b4)\n\n**June 25, 2021:** Release v0.12.0b3, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.12.0b3)\n\n**June 18, 2021:** Release v0.12.0b2, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.12.0b2)\n\n**June 7, 2021:** Release v0.12.0b1, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.12.0b1)\n\n**May 18, 2021:** Release v0.11.1, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.11.1)\n\n**May 8, 2021:** Release v0.11.0, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.11.0)\n\n**March 22, 2021:** Release v0.10.0, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.10.0)\n\n**March 11, 2021:** Release v0.9.2, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.9.2)\n\n**March 3, 2021:** Release v0.9.1, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.9.1)\n\n**February 20, 2021:** Release v0.9.0, see release notes 
[here](https://github.com/py-why/EconML/releases/tag/v0.9.0)\n\n**January 20, 2021:** Release v0.9.0b1, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.9.0b1)\n\n**November 20, 2020:** Release v0.8.1, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.8.1)\n\n**November 18, 2020:** Release v0.8.0, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.8.0)\n\n**September 4, 2020:** Release v0.8.0b1, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.8.0b1)\n\n**March 6, 2020:** Release v0.7.0, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.7.0)\n\n**February 18, 2020:** Release v0.7.0b1, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.7.0b1)\n\n**January 10, 2020:** Release v0.6.1, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.6.1)\n\n**December 6, 2019:** Release v0.6, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.6)\n\n**November 21, 2019:** Release v0.5, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.5). \n\n**June 3, 2019:** Release v0.4, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.4). 
\n\n**May 3, 2019:** Release v0.3, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.3).\n\n**April 10, 2019:** Release v0.2, see release notes [here](https://github.com/py-why/EconML/releases/tag/v0.2).\n\n**March 6, 2019:** Release v0.1, welcome to have a try and provide feedback.\n\n</details>\n\n# Getting Started\n\n## Installation\n\nInstall the latest release from [PyPI](https://pypi.org/project/econml/):\n```\npip install econml\n```\nTo install from source, see [For Developers](#for-developers) section below.\n\n## Usage Examples\n### Estimation Methods\n\n<details>\n  <summary>Double Machine Learning (aka RLearner) (click to expand)</summary>\n\n  * Linear final stage\n\n  ```Python\n  from econml.dml import LinearDML\n  from sklearn.linear_model import LassoCV\n  from econml.inference import BootstrapInference\n\n  est = LinearDML(model_y=LassoCV(), model_t=LassoCV())\n  ### Estimate with OLS confidence intervals\n  est.fit(Y, T, X=X, W=W) # W -> high-dimensional confounders, X -> features\n  treatment_effects = est.effect(X_test)\n  lb, ub = est.effect_interval(X_test, alpha=0.05) # OLS confidence intervals\n\n  ### Estimate with bootstrap confidence intervals\n  est.fit(Y, T, X=X, W=W, inference='bootstrap')  # with default bootstrap parameters\n  est.fit(Y, T, X=X, W=W, inference=BootstrapInference(n_bootstrap_samples=100))  # or customized\n  lb, ub = est.effect_interval(X_test, alpha=0.05) # Bootstrap confidence intervals\n  ```\n\n  * Sparse linear final stage\n\n  ```Python\n  from econml.dml import SparseLinearDML\n  from sklearn.linear_model import LassoCV\n\n  est = SparseLinearDML(model_y=LassoCV(), model_t=LassoCV())\n  est.fit(Y, T, X=X, W=W) # X -> high dimensional features\n  treatment_effects = est.effect(X_test)\n  lb, ub = est.effect_interval(X_test, alpha=0.05) # Confidence intervals via debiased lasso\n  ```\n\n  * Generic Machine Learning last stage\n  \n  ```Python\n  from econml.dml import NonParamDML\n  from 
sklearn.ensemble import RandomForestRegressor, RandomForestClassifier\n\n  est = NonParamDML(model_y=RandomForestRegressor(),\n                    model_t=RandomForestClassifier(),\n                    model_final=RandomForestRegressor(),\n                    discrete_treatment=True)\n  est.fit(Y, T, X=X, W=W) \n  treatment_effects = est.effect(X_test)\n  ```\n\n</details>\n\n<details>\n  <summary>Dynamic Double Machine Learning (click to expand)</summary>\n\n  ```Python\n  from econml.panel.dml import DynamicDML\n  # Use defaults\n  est = DynamicDML()\n  # Or specify hyperparameters\n  est = DynamicDML(model_y=LassoCV(cv=3), \n                   model_t=LassoCV(cv=3), \n                   cv=3)\n  est.fit(Y, T, X=X, W=None, groups=groups, inference=\"auto\")\n  # Effects\n  treatment_effects = est.effect(X_test)\n  # Confidence intervals\n  lb, ub = est.effect_interval(X_test, alpha=0.05)\n  ```\n</details>\n\n<details>\n  <summary>Causal Forests (click to expand)</summary>\n\n  ```Python\n  from econml.dml import CausalForestDML\n  from sklearn.linear_model import LassoCV\n  # Use defaults\n  est = CausalForestDML()\n  # Or specify hyperparameters\n  est = CausalForestDML(criterion='het', n_estimators=500,       \n                        min_samples_leaf=10, \n                        max_depth=10, max_samples=0.5,\n                        discrete_treatment=False,\n                        model_t=LassoCV(), model_y=LassoCV())\n  est.fit(Y, T, X=X, W=W)\n  treatment_effects = est.effect(X_test)\n  # Confidence intervals via Bootstrap-of-Little-Bags for forests\n  lb, ub = est.effect_interval(X_test, alpha=0.05)\n  ```\n</details>\n\n\n<details>\n  <summary>Orthogonal Random Forests (click to expand)</summary>\n\n  ```Python\n  from econml.orf import DMLOrthoForest, DROrthoForest\n  from econml.sklearn_extensions.linear_model import WeightedLasso, WeightedLassoCV\n  # Use defaults\n  est = DMLOrthoForest()\n  est = DROrthoForest()\n  # Or specify hyperparameters\n  
est = DMLOrthoForest(n_trees=500, min_leaf_size=10,\n                       max_depth=10, subsample_ratio=0.7,\n                       lambda_reg=0.01,\n                       discrete_treatment=False,\n                       model_T=WeightedLasso(alpha=0.01), model_Y=WeightedLasso(alpha=0.01),\n                       model_T_final=WeightedLassoCV(cv=3), model_Y_final=WeightedLassoCV(cv=3))\n  est.fit(Y, T, X=X, W=W)\n  treatment_effects = est.effect(X_test)\n  # Confidence intervals via Bootstrap-of-Little-Bags for forests\n  lb, ub = est.effect_interval(X_test, alpha=0.05)\n  ```\n</details>\n\n<details>\n\n<summary>Meta-Learners (click to expand)</summary>\n  \n  * XLearner\n\n  ```Python\n  from econml.metalearners import XLearner\n  from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor\n\n  est = XLearner(models=GradientBoostingRegressor(),\n                propensity_model=GradientBoostingClassifier(),\n                cate_models=GradientBoostingRegressor())\n  est.fit(Y, T, X=np.hstack([X, W]))\n  treatment_effects = est.effect(np.hstack([X_test, W_test]))\n\n  # Fit with bootstrap confidence interval construction enabled\n  est.fit(Y, T, X=np.hstack([X, W]), inference='bootstrap')\n  treatment_effects = est.effect(np.hstack([X_test, W_test]))\n  lb, ub = est.effect_interval(np.hstack([X_test, W_test]), alpha=0.05) # Bootstrap CIs\n  ```\n  \n  * SLearner\n\n  ```Python\n  from econml.metalearners import SLearner\n  from sklearn.ensemble import GradientBoostingRegressor\n\n  est = SLearner(overall_model=GradientBoostingRegressor())\n  est.fit(Y, T, X=np.hstack([X, W]))\n  treatment_effects = est.effect(np.hstack([X_test, W_test]))\n  ```\n\n  * TLearner\n\n  ```Python\n  from econml.metalearners import TLearner\n  from sklearn.ensemble import GradientBoostingRegressor\n\n  est = TLearner(models=GradientBoostingRegressor())\n  est.fit(Y, T, X=np.hstack([X, W]))\n  treatment_effects = est.effect(np.hstack([X_test, W_test]))\n  
```\n</details>\n\n<details>\n<summary>Doubly Robust Learners (click to expand)\n</summary>\n\n* Linear final stage\n\n```Python\nfrom econml.dr import LinearDRLearner\nfrom sklearn.ensemble import GradientBoostingRegressor, GradientBoostingClassifier\n\nest = LinearDRLearner(model_propensity=GradientBoostingClassifier(),\n                      model_regression=GradientBoostingRegressor())\nest.fit(Y, T, X=X, W=W)\ntreatment_effects = est.effect(X_test)\nlb, ub = est.effect_interval(X_test, alpha=0.05)\n```\n\n* Sparse linear final stage\n\n```Python\nfrom econml.dr import SparseLinearDRLearner\nfrom sklearn.ensemble import GradientBoostingRegressor, GradientBoostingClassifier\n\nest = SparseLinearDRLearner(model_propensity=GradientBoostingClassifier(),\n                            model_regression=GradientBoostingRegressor())\nest.fit(Y, T, X=X, W=W)\ntreatment_effects = est.effect(X_test)\nlb, ub = est.effect_interval(X_test, alpha=0.05)\n```\n\n* Nonparametric final stage\n\n```Python\nfrom econml.dr import ForestDRLearner\nfrom sklearn.ensemble import GradientBoostingRegressor, GradientBoostingClassifier\n\nest = ForestDRLearner(model_propensity=GradientBoostingClassifier(),\n                      model_regression=GradientBoostingRegressor())\nest.fit(Y, T, X=X, W=W) \ntreatment_effects = est.effect(X_test)\nlb, ub = est.effect_interval(X_test, alpha=0.05)\n```\n</details>\n\n<details>\n<summary>Double Machine Learning with Instrumental Variables (click to expand)</summary>\n\n* Orthogonal instrumental variable learner\n\n```Python\nfrom econml.iv.dml import OrthoIV\n\nest = OrthoIV(projection=False, \n              discrete_treatment=True, \n              discrete_instrument=True)\nest.fit(Y, T, Z=Z, X=X, W=W)\ntreatment_effects = est.effect(X_test)\nlb, ub = est.effect_interval(X_test, alpha=0.05) # OLS confidence intervals\n```\n* Nonparametric double machine learning with instrumental variable\n\n```Python\nfrom econml.iv.dml import NonParamDMLIV\n\nest = 
NonParamDMLIV(projection=False, \n                    discrete_treatment=True, \n                    discrete_instrument=True)\nest.fit(Y, T, Z=Z, X=X, W=W) # no analytical confidence interval available\ntreatment_effects = est.effect(X_test)\n```\n</details>\n\n<details>\n<summary>Doubly Robust Machine Learning with Instrumental Variables (click to expand)</summary>\n\n* Linear final stage\n```Python\nfrom econml.iv.dr import LinearDRIV\n\nest = LinearDRIV(discrete_instrument=True, discrete_treatment=True)\nest.fit(Y, T, Z=Z, X=X, W=W)\ntreatment_effects = est.effect(X_test)\nlb, ub = est.effect_interval(X_test, alpha=0.05) # OLS confidence intervals\n```\n\n* Sparse linear final stage\n\n```Python\nfrom econml.iv.dr import SparseLinearDRIV\n\nest = SparseLinearDRIV(discrete_instrument=True, discrete_treatment=True)\nest.fit(Y, T, Z=Z, X=X, W=W)\ntreatment_effects = est.effect(X_test)\nlb, ub = est.effect_interval(X_test, alpha=0.05) # Debiased lasso confidence intervals\n```\n\n* Nonparametric final stage\n```Python\nfrom econml.iv.dr import ForestDRIV\n\nest = ForestDRIV(discrete_instrument=True, discrete_treatment=True)\nest.fit(Y, T, Z=Z, X=X, W=W)\ntreatment_effects = est.effect(X_test)\n# Confidence intervals via Bootstrap-of-Little-Bags for forests\nlb, ub = est.effect_interval(X_test, alpha=0.05) \n```\n\n* Linear intent-to-treat (discrete instrument, discrete treatment)\n\n```Python\nfrom econml.iv.dr import LinearIntentToTreatDRIV\nfrom sklearn.ensemble import GradientBoostingRegressor, GradientBoostingClassifier\n\nest = LinearIntentToTreatDRIV(model_y_xw=GradientBoostingRegressor(),\n                              model_t_xwz=GradientBoostingClassifier(),\n                              flexible_model_effect=GradientBoostingRegressor())\nest.fit(Y, T, Z=Z, X=X, W=W)\ntreatment_effects = est.effect(X_test)\nlb, ub = est.effect_interval(X_test, alpha=0.05) # OLS confidence intervals\n```\n</details>\n\n<details>\n<summary>Deep Instrumental Variables (click 
to expand)</summary>\n\n```Python\nimport keras\nfrom econml.iv.nnet import DeepIV\n\ntreatment_model = keras.Sequential([keras.layers.Dense(128, activation='relu', input_shape=(2,)),\n                                    keras.layers.Dropout(0.17),\n                                    keras.layers.Dense(64, activation='relu'),\n                                    keras.layers.Dropout(0.17),\n                                    keras.layers.Dense(32, activation='relu'),\n                                    keras.layers.Dropout(0.17)])\nresponse_model = keras.Sequential([keras.layers.Dense(128, activation='relu', input_shape=(2,)),\n                                  keras.layers.Dropout(0.17),\n                                  keras.layers.Dense(64, activation='relu'),\n                                  keras.layers.Dropout(0.17),\n                                  keras.layers.Dense(32, activation='relu'),\n                                  keras.layers.Dropout(0.17),\n                                  keras.layers.Dense(1)])\nest = DeepIV(n_components=10, # Number of gaussians in the mixture density networks)\n             m=lambda z, x: treatment_model(keras.layers.concatenate([z, x])), # Treatment model\n             h=lambda t, x: response_model(keras.layers.concatenate([t, x])), # Response model\n             n_samples=1 # Number of samples used to estimate the response\n             )\nest.fit(Y, T, X=X, Z=Z) # Z -> instrumental variables\ntreatment_effects = est.effect(X_test)\n```\n</details>\n\nSee the <a href=\"#references\">References</a> section for more details.\n\n### Interpretability\n<details>\n  <summary>Tree Interpreter of the CATE model (click to expand)</summary>\n  \n  ```Python\n  from econml.cate_interpreter import SingleTreeCateInterpreter\n  intrp = SingleTreeCateInterpreter(include_model_uncertainty=True, max_depth=2, min_samples_leaf=10)\n  # We interpret the CATE model's behavior based on the features used for heterogeneity\n  
intrp.interpret(est, X)\n  # Plot the tree\n  plt.figure(figsize=(25, 5))\n  intrp.plot(feature_names=['A', 'B', 'C', 'D'], fontsize=12)\n  plt.show()\n  ```\n  ![image](notebooks/images/dr_cate_tree.png)\n  \n</details>\n\n<details>\n  <summary>Policy Interpreter of the CATE model (click to expand)</summary>\n\n  ```Python\n  from econml.cate_interpreter import SingleTreePolicyInterpreter\n  # We find a tree-based treatment policy based on the CATE model\n  intrp = SingleTreePolicyInterpreter(risk_level=0.05, max_depth=2, min_samples_leaf=1,min_impurity_decrease=.001)\n  intrp.interpret(est, X, sample_treatment_costs=0.2)\n  # Plot the tree\n  plt.figure(figsize=(25, 5))\n  intrp.plot(feature_names=['A', 'B', 'C', 'D'], fontsize=12)\n  plt.show()\n  ```\n  ![image](notebooks/images/dr_policy_tree.png)\n\n</details>\n\n<details>\n  <summary>SHAP values for the CATE model (click to expand)</summary>\n\n  ```Python\n  import shap\n  from econml.dml import CausalForestDML\n  est = CausalForestDML()\n  est.fit(Y, T, X=X, W=W)\n  shap_values = est.shap_values(X)\n  shap.summary_plot(shap_values['Y0']['T0'])\n  ```\n\n</details>\n\n\n### Causal Model Selection and Cross-Validation\n\n\n<details>\n  <summary>Causal model selection with the `RScorer` (click to expand)</summary>\n\n  ```Python\n  from econml.score import RScorer\n\n  # split data in train-validation\n  X_train, X_val, T_train, T_val, Y_train, Y_val = train_test_split(X, T, y, test_size=.4)\n\n  # define list of CATE estimators to select among\n  reg = lambda: RandomForestRegressor(min_samples_leaf=20)\n  clf = lambda: RandomForestClassifier(min_samples_leaf=20)\n  models = [('ldml', LinearDML(model_y=reg(), model_t=clf(), discrete_treatment=True,\n                               linear_first_stages=False, cv=3)),\n            ('xlearner', XLearner(models=reg(), cate_models=reg(), propensity_model=clf())),\n            ('dalearner', DomainAdaptationLearner(models=reg(), final_models=reg(), 
propensity_model=clf())),\n            ('slearner', SLearner(overall_model=reg())),\n            ('drlearner', DRLearner(model_propensity=clf(), model_regression=reg(),\n                                    model_final=reg(), cv=3)),\n            ('rlearner', NonParamDML(model_y=reg(), model_t=clf(), model_final=reg(),\n                                     discrete_treatment=True, cv=3)),\n            ('dml3dlasso', DML(model_y=reg(), model_t=clf(),\n                               model_final=LassoCV(cv=3, fit_intercept=False),\n                               discrete_treatment=True,\n                               featurizer=PolynomialFeatures(degree=3),\n                               linear_first_stages=False, cv=3))\n  ]\n\n  # fit cate models on train data\n  models = [(name, mdl.fit(Y_train, T_train, X=X_train)) for name, mdl in models]\n\n  # score cate models on validation data\n  scorer = RScorer(model_y=reg(), model_t=clf(),\n                   discrete_treatment=True, cv=3, mc_iters=2, mc_agg='median')\n  scorer.fit(Y_val, T_val, X=X_val)\n  rscore = [scorer.score(mdl) for _, mdl in models]\n  # select the best model\n  mdl, _ = scorer.best_model([mdl for _, mdl in models])\n  # create weighted ensemble model based on score performance\n  mdl, _ = scorer.ensemble([mdl for _, mdl in models])\n  ```\n\n</details>\n\n<details>\n  <summary>First Stage Model Selection (click to expand)</summary>\n\nFirst stage models can be selected either by passing in cross-validated models (e.g. `sklearn.linear_model.LassoCV`) to EconML's estimators or perform the first stage model selection outside of EconML and pass in the selected model. 
Unless selecting among a large set of hyperparameters, choosing first stage models externally is the preferred method due to statistical and computational advantages.\n\n```Python\nfrom econml.dml import LinearDML\nfrom sklearn import clone\nfrom sklearn.ensemble import RandomForestRegressor\nfrom sklearn.model_selection import GridSearchCV\n\ncv_model = GridSearchCV(\n              estimator=RandomForestRegressor(),\n              param_grid={\n                  \"max_depth\": [3, None],\n                  \"n_estimators\": (10, 30, 50, 100, 200),\n                  \"max_features\": (2, 4, 6),\n              },\n              cv=5,\n           )\n# First stage model selection within EconML\n# This is more direct, but computationally and statistically less efficient\nest = LinearDML(model_y=cv_model, model_t=cv_model)\n# First stage model selection ouside of EconML\n# This is the most efficient, but requires boilerplate code\nmodel_t = clone(cv_model).fit(W, T).best_estimator_\nmodel_y = clone(cv_model).fit(W, Y).best_estimator_\nest = LinearDML(model_y=model_t, model_t=model_y)\n```\n\n\n</details>\n\n### Inference\n\nWhenever inference is enabled, then one can get a more structure `InferenceResults` object with more elaborate inference information, such\nas p-values and z-statistics. When the CATE model is linear and parametric, then a `summary()` method is also enabled. 
For instance:\n\n  ```Python\n  from econml.dml import LinearDML\n  # Use defaults\n  est = LinearDML()\n  est.fit(Y, T, X=X, W=W)\n  # Get the effect inference summary, which includes the standard error, z test score, p value, and confidence interval given each sample X[i]\n  est.effect_inference(X_test).summary_frame(alpha=0.05, value=0, decimals=3)\n  # Get the population summary for the entire sample X\n  est.effect_inference(X_test).population_summary(alpha=0.1, value=0, decimals=3, tol=0.001)\n  #  Get the parameter inference summary for the final model\n  est.summary()\n  ```\n  \n  <details><summary>Example Output (click to expand)</summary>\n  \n  ```Python\n  # Get the effect inference summary, which includes the standard error, z test score, p value, and confidence interval given each sample X[i]\n  est.effect_inference(X_test).summary_frame(alpha=0.05, value=0, decimals=3)\n  ```\n  ![image](notebooks/images/summary_frame.png)\n  \n  ```Python\n  # Get the population summary for the entire sample X\n  est.effect_inference(X_test).population_summary(alpha=0.1, value=0, decimals=3, tol=0.001)\n  ```\n  ![image](notebooks/images/population_summary.png)\n  \n  ```Python\n  #  Get the parameter inference summary for the final model\n  est.summary()\n  ```\n  ![image](notebooks/images/summary.png)\n  \n  </details>\n  \n\n### Policy Learning\n\nYou can also perform direct policy learning from observational data, using the doubly robust method for offline\npolicy learning. 
These methods directly predict a recommended treatment, without internally fitting an explicit\nmodel of the conditional average treatment effect.\n\n<details>\n  <summary>Doubly Robust Policy Learning (click to expand)</summary>\n\n```Python\nfrom econml.policy import DRPolicyTree, DRPolicyForest\nfrom sklearn.ensemble import RandomForestRegressor\n\n# fit a single binary decision tree policy\npolicy = DRPolicyTree(max_depth=1, min_impurity_decrease=0.01, honest=True)\npolicy.fit(y, T, X=X, W=W)\n# predict the recommended treatment\nrecommended_T = policy.predict(X)\n# plot the binary decision tree\nplt.figure(figsize=(10,5))\npolicy.plot()\n# get feature importances\nimportances = policy.feature_importances_\n\n# fit a binary decision forest\npolicy = DRPolicyForest(max_depth=1, min_impurity_decrease=0.01, honest=True)\npolicy.fit(y, T, X=X, W=W)\n# predict the recommended treatment\nrecommended_T = policy.predict(X)\n# plot the first tree in the ensemble\nplt.figure(figsize=(10,5))\npolicy.plot(0)\n# get feature importances\nimportances = policy.feature_importances_\n```\n\n\n  ![image](images/policy_tree.png)\n</details>\n\nTo see more complex examples, go to the [notebooks](https://github.com/py-why/EconML/tree/main/notebooks) section of the repository. For a more detailed description of the treatment effect estimation algorithms, see the EconML [documentation](https://econml.azurewebsites.net/).\n\n# For Developers\n\nYou can get started by cloning this repository. We use \n[setuptools](https://setuptools.readthedocs.io/en/latest/index.html) for building and distributing our package.\nWe rely on some recent features of setuptools, so make sure to upgrade to a recent version with\n`pip install setuptools --upgrade`.  
Then from your local copy of the repository you can run `pip install -e .` to get started (but depending on what you're doing you might want to install with extras instead, like `pip install -e .[plt]` if you want to use matplotlib integration, or you can use  `pip install -e .[all]` to include all extras).\n\n## Running the tests\n\nThis project uses [pytest](https://docs.pytest.org/) for testing.  To run tests locally after installing the package, you can use `pip install pytest-runner` followed by `python setup.py pytest`.\n\nWe have added pytest marks to some tests to make it easier to run a subset, and you can set the PYTEST_ADDOPTS environment variable to take advantage of this.  For instance, you can set it to `-m \"not (notebook or automl)\"` to skip notebook and automl tests that have some additional dependencies. \n\n## Generating the documentation\n\nThis project's documentation is generated via [Sphinx](https://www.sphinx-doc.org/en/main/index.html).  Note that we use [graphviz](https://graphviz.org/)'s \n`dot` application to produce some of the images in our documentation, so you should make sure that `dot` is installed and in your path.\n\nTo generate a local copy of the documentation from a clone of this repository, just run `python setup.py build_sphinx -W -E -a`, which will build the documentation and place it under the `build/sphinx/html` path. \n\nThe reStructuredText files that make up the documentation are stored in the [docs directory](https://github.com/py-why/EconML/tree/main/doc); module documentation is automatically generated by the Sphinx build process.\n\n## Release process\n\nWe use GitHub Actions to build and publish the package and documentation.  To create a new release, an admin should perform the following steps:\n\n1. Update the version number in `econml/_version.py` and add a mention of the new version in the news section of this file and commit the changes.\n2. 
Manually run the publish_package.yml workflow to build and publish the package to PyPI.\n3. Manually run the publish_docs.yml workflow to build and publish the documentation.\n4. Under https://github.com/py-why/EconML/releases, create a new release with a corresponding tag, and update the release notes.\n\n# Blogs and Publications\n\n* June 2019: [Treatment Effects with Instruments paper](https://arxiv.org/pdf/1905.10176.pdf)\n\n* May 2019: [Open Data Science Conference Workshop](https://odsc.com/speakers/machine-learning-estimation-of-heterogeneous-treatment-effect-the-microsoft-econml-library/) \n\n* 2018: [Orthogonal Random Forests paper](http://proceedings.mlr.press/v97/oprescu19a.html)\n\n* 2017: [DeepIV paper](http://proceedings.mlr.press/v70/hartford17a/hartford17a.pdf)\n\n# Citation\n\nIf you use EconML in your research, please cite us as follows:\n\n   Keith Battocchi, Eleanor Dillon, Maggie Hei, Greg Lewis, Paul Oka, Miruna Oprescu, Vasilis Syrgkanis. **EconML: A Python Package for ML-Based Heterogeneous Treatment Effects Estimation.** https://github.com/py-why/EconML, 2019. Version 0.x.\n\nBibTex:\n\n```\n@misc{econml,\n  author={Keith Battocchi, Eleanor Dillon, Maggie Hei, Greg Lewis, Paul Oka, Miruna Oprescu, Vasilis Syrgkanis},\n  title={{EconML}: {A Python Package for ML-Based Heterogeneous Treatment Effects Estimation}},\n  howpublished={https://github.com/py-why/EconML},\n  note={Version 0.x},\n  year={2019}\n}\n```\n\n# Contributing and Feedback\n\nThis project welcomes contributions and suggestions.  We use the [DCO bot](https://github.com/apps/dco) to enforce a [Developer Certificate of Origin](https://developercertificate.org/) which requires users to sign-off on their commits.  This is a simple way to certify that you wrote or otherwise have the right to submit the code you are contributing to the project.  
Git provides a `-s` command line option to include this automatically when you commit via `git commit`.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the [PyWhy Code of Conduct](https://github.com/py-why/governance/blob/main/CODE-OF-CONDUCT.md).

# References

S. Athey, S. Wager.
**Policy learning with observational data.**
*Econometrica, 89(1), 133-161*, 2021.

X. Nie, S. Wager.
**Quasi-Oracle Estimation of Heterogeneous Treatment Effects.**
[*Biometrika*](https://doi.org/10.1093/biomet/asaa076), 2020.

V. Syrgkanis, V. Lei, M. Oprescu, M. Hei, K. Battocchi, G. Lewis.
**Machine Learning Estimation of Heterogeneous Treatment Effects with Instruments.**
[*Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS)*](https://arxiv.org/abs/1905.10176), 2019.
**(Spotlight Presentation)**

D. Foster, V. Syrgkanis.
**Orthogonal Statistical Learning.**
[*Proceedings of the 32nd Annual Conference on Learning Theory (COLT)*](https://arxiv.org/pdf/1901.09036.pdf), 2019.
**(Best Paper Award)**

M. Oprescu, V. Syrgkanis, Z. S. Wu.
**Orthogonal Random Forest for Causal Inference.**
[*Proceedings of the 36th International Conference on Machine Learning (ICML)*](http://proceedings.mlr.press/v97/oprescu19a.html), 2019.

S. Künzel, J. Sekhon, J. Bickel, B. Yu.
**Metalearners for estimating heterogeneous treatment effects using machine learning.**
[*Proceedings of the National Academy of Sciences, 116(10), 4156-4165*](https://www.pnas.org/content/116/10/4156), 2019.

S. Athey, J. Tibshirani, S. Wager.
**Generalized random forests.**
[*Annals of Statistics, 47(2), 1148-1178*](https://projecteuclid.org/euclid.aos/1547197251), 2019.

V. Chernozhukov, D. Nekipelov, V. Semenova, V. Syrgkanis.
**Plug-in Regularized Estimation of High-Dimensional Parameters in Nonlinear Semiparametric Models.**
[*ArXiv preprint arXiv:1806.04823*](https://arxiv.org/abs/1806.04823), 2018.

S. Wager, S. Athey.
**Estimation and Inference of Heterogeneous Treatment Effects using Random Forests.**
[*Journal of the American Statistical Association, 113(523), 1228-1242*](https://www.tandfonline.com/doi/citedby/10.1080/01621459.2017.1319839), 2018.

J. Hartford, G. Lewis, K. Leyton-Brown, M. Taddy.
**Deep IV: A flexible approach for counterfactual prediction.**
[*Proceedings of the 34th International Conference on Machine Learning (ICML)*](http://proceedings.mlr.press/v70/hartford17a/hartford17a.pdf), 2017.

V. Chernozhukov, D. Chetverikov, M. Demirer, E. Duflo, C. Hansen, W. Newey.
**Double Machine Learning for Treatment and Causal Parameters.**
[*ArXiv preprint arXiv:1608.00060*](https://arxiv.org/abs/1608.00060), 2016.

M. Dudik, D. Erhan, J. Langford, L. Li.
**Doubly robust policy evaluation and optimization.**
*Statistical Science, 29(4), 485-511*, 2014.
    "bugtrack_url": null,
    "license": "MIT",
    "summary": "This package contains several methods for calculating Conditional Average Treatment Effects",
    "version": "0.14.1",
    "project_urls": {
        "Bug Tracker": "https://github.com/py-why/EconML/Issues",
        "Documentation": "https://econml.azurewebsites.net/",
        "Homepage": "https://github.com/py-why/EconML",
        "Source Code": "https://github.com/py-why/EconML"
    },
    "split_keywords": [
        "treatment-effect"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "2fede5ef9932ff73bcbb1180bf62999c8e781c3be8233fa1eea50448beac4d83",
                "md5": "dfa4f65aa263eedb7133035f0c0e6516",
                "sha256": "16557c8202f0f74e20b443f434968cae1cd103cdb9d5f98b363e78450214a6ef"
            },
            "downloads": -1,
            "filename": "econml-0.14.1-cp310-cp310-macosx_10_9_x86_64.whl",
            "has_sig": false,
            "md5_digest": "dfa4f65aa263eedb7133035f0c0e6516",
            "packagetype": "bdist_wheel",
            "python_version": "cp310",
            "requires_python": null,
            "size": 1034124,
            "upload_time": "2023-05-22T14:09:32",
            "upload_time_iso_8601": "2023-05-22T14:09:32.616238Z",
            "url": "https://files.pythonhosted.org/packages/2f/ed/e5ef9932ff73bcbb1180bf62999c8e781c3be8233fa1eea50448beac4d83/econml-0.14.1-cp310-cp310-macosx_10_9_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "0aae207bb80b674e9e6f46e4e6b65bcfa05ec7ccfd99ed9c6f648a249d144b6b",
                "md5": "92cc982b63cc0838aec891b0b3847d67",
                "sha256": "b605b7e77a47fbc00737c407f54230e635537ce97fe3bd1e4794ccce91a935a8"
            },
            "downloads": -1,
            "filename": "econml-0.14.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl",
            "has_sig": false,
            "md5_digest": "92cc982b63cc0838aec891b0b3847d67",
            "packagetype": "bdist_wheel",
            "python_version": "cp310",
            "requires_python": null,
            "size": 3512528,
            "upload_time": "2023-05-22T14:09:34",
            "upload_time_iso_8601": "2023-05-22T14:09:34.970832Z",
            "url": "https://files.pythonhosted.org/packages/0a/ae/207bb80b674e9e6f46e4e6b65bcfa05ec7ccfd99ed9c6f648a249d144b6b/econml-0.14.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "c962d752ab6a11da5ca3a65320613efef32b7a7baad747a7c96160acfe465d20",
                "md5": "fe3415f1c3ac67fe6416583c64b81294",
                "sha256": "f2a2e2f05195ca2639aa7b59e71595058c505eb2959cf42feb6bd0d3667d90f3"
            },
            "downloads": -1,
            "filename": "econml-0.14.1-cp310-cp310-win_amd64.whl",
            "has_sig": false,
            "md5_digest": "fe3415f1c3ac67fe6416583c64b81294",
            "packagetype": "bdist_wheel",
            "python_version": "cp310",
            "requires_python": null,
            "size": 929589,
            "upload_time": "2023-05-22T14:09:37",
            "upload_time_iso_8601": "2023-05-22T14:09:37.751409Z",
            "url": "https://files.pythonhosted.org/packages/c9/62/d752ab6a11da5ca3a65320613efef32b7a7baad747a7c96160acfe465d20/econml-0.14.1-cp310-cp310-win_amd64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "ffcd35d6b570aad40292318a2275dc18f2dbeb14bca4975e61830a26cdb8988a",
                "md5": "6b8df9429db19ded0197cff95c005b1c",
                "sha256": "53879afb3f089944a03d72d21919d43ec6e7d9a59943eeb4b3aa3ff8762e35b1"
            },
            "downloads": -1,
            "filename": "econml-0.14.1-cp37-cp37m-macosx_10_9_x86_64.whl",
            "has_sig": false,
            "md5_digest": "6b8df9429db19ded0197cff95c005b1c",
            "packagetype": "bdist_wheel",
            "python_version": "cp37",
            "requires_python": null,
            "size": 1025955,
            "upload_time": "2023-05-22T14:09:40",
            "upload_time_iso_8601": "2023-05-22T14:09:40.036900Z",
            "url": "https://files.pythonhosted.org/packages/ff/cd/35d6b570aad40292318a2275dc18f2dbeb14bca4975e61830a26cdb8988a/econml-0.14.1-cp37-cp37m-macosx_10_9_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "864df6f6550532796019d4503467e6b4acf15e71ba58e22b262e044334948ade",
                "md5": "c0f6045f2198763038f474a7d64e90ab",
                "sha256": "8fc29769c8af93c7500dacdb592b7d24ec309570fc323ca8afc4fe4578a66337"
            },
            "downloads": -1,
            "filename": "econml-0.14.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl",
            "has_sig": false,
            "md5_digest": "c0f6045f2198763038f474a7d64e90ab",
            "packagetype": "bdist_wheel",
            "python_version": "cp37",
            "requires_python": null,
            "size": 3346511,
            "upload_time": "2023-05-22T14:09:42",
            "upload_time_iso_8601": "2023-05-22T14:09:42.106192Z",
            "url": "https://files.pythonhosted.org/packages/86/4d/f6f6550532796019d4503467e6b4acf15e71ba58e22b262e044334948ade/econml-0.14.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "2ea39536949b6ad0fe8d825e9065e915d82ec49979fb0f27f98dbce359f97a19",
                "md5": "51bcf79b1fa1848fcf80ad2bb6bfbc32",
                "sha256": "f8c18e35ba6edd374c3cdc48d66af5a116417feed381a94d450f325d19a297f5"
            },
            "downloads": -1,
            "filename": "econml-0.14.1-cp37-cp37m-win_amd64.whl",
            "has_sig": false,
            "md5_digest": "51bcf79b1fa1848fcf80ad2bb6bfbc32",
            "packagetype": "bdist_wheel",
            "python_version": "cp37",
            "requires_python": null,
            "size": 932137,
            "upload_time": "2023-05-22T14:09:45",
            "upload_time_iso_8601": "2023-05-22T14:09:45.199028Z",
            "url": "https://files.pythonhosted.org/packages/2e/a3/9536949b6ad0fe8d825e9065e915d82ec49979fb0f27f98dbce359f97a19/econml-0.14.1-cp37-cp37m-win_amd64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "154ffee4217f8f93ba0837022e1d80b0753204b0a397e7c297dace84f5ce2ee1",
                "md5": "9a285615b56d04b117d740a02ccf341f",
                "sha256": "7900cbe509fb077bd8983324f9592e497d01e5f37fbd7ab801023efe7cc0506a"
            },
            "downloads": -1,
            "filename": "econml-0.14.1-cp38-cp38-macosx_10_9_x86_64.whl",
            "has_sig": false,
            "md5_digest": "9a285615b56d04b117d740a02ccf341f",
            "packagetype": "bdist_wheel",
            "python_version": "cp38",
            "requires_python": null,
            "size": 1021931,
            "upload_time": "2023-05-22T14:09:47",
            "upload_time_iso_8601": "2023-05-22T14:09:47.541517Z",
            "url": "https://files.pythonhosted.org/packages/15/4f/fee4217f8f93ba0837022e1d80b0753204b0a397e7c297dace84f5ce2ee1/econml-0.14.1-cp38-cp38-macosx_10_9_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "f0129d6325d06895d429c51d9f2501abd9a16329dd556fd5b3747e546fd247e1",
                "md5": "b107fb88fd41edee521c62f2574b837e",
                "sha256": "016e23340bfebfb0c10ac923fad6729cd66e8fcce0a92fae0b05b6430da68546"
            },
            "downloads": -1,
            "filename": "econml-0.14.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl",
            "has_sig": false,
            "md5_digest": "b107fb88fd41edee521c62f2574b837e",
            "packagetype": "bdist_wheel",
            "python_version": "cp38",
            "requires_python": null,
            "size": 3561934,
            "upload_time": "2023-05-22T14:09:49",
            "upload_time_iso_8601": "2023-05-22T14:09:49.741603Z",
            "url": "https://files.pythonhosted.org/packages/f0/12/9d6325d06895d429c51d9f2501abd9a16329dd556fd5b3747e546fd247e1/econml-0.14.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "6e05a2e4bf9664376cb32eeac362277792ef5d647132c682261e07768588b6b2",
                "md5": "63e3582a348679d4cb831941d97383b3",
                "sha256": "825d73291fb1d12a0540aa929c48cfb0b58295715a0893d977899cc8d59329ed"
            },
            "downloads": -1,
            "filename": "econml-0.14.1-cp38-cp38-win_amd64.whl",
            "has_sig": false,
            "md5_digest": "63e3582a348679d4cb831941d97383b3",
            "packagetype": "bdist_wheel",
            "python_version": "cp38",
            "requires_python": null,
            "size": 938258,
            "upload_time": "2023-05-22T14:09:51",
            "upload_time_iso_8601": "2023-05-22T14:09:51.907221Z",
            "url": "https://files.pythonhosted.org/packages/6e/05/a2e4bf9664376cb32eeac362277792ef5d647132c682261e07768588b6b2/econml-0.14.1-cp38-cp38-win_amd64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "21013dc6e82b78f0fab7c0ff2aca9a5108ae43b12b94f14c6983cfc11e83e9c5",
                "md5": "71694fb115ac3ad7f56a9a9b0a436f82",
                "sha256": "8c904008db3af21d66324be38547aa01b0c6478b7c6b7b846ae22159f6715de8"
            },
            "downloads": -1,
            "filename": "econml-0.14.1-cp39-cp39-macosx_10_9_x86_64.whl",
            "has_sig": false,
            "md5_digest": "71694fb115ac3ad7f56a9a9b0a436f82",
            "packagetype": "bdist_wheel",
            "python_version": "cp39",
            "requires_python": null,
            "size": 1036950,
            "upload_time": "2023-05-22T14:09:54",
            "upload_time_iso_8601": "2023-05-22T14:09:54.310062Z",
            "url": "https://files.pythonhosted.org/packages/21/01/3dc6e82b78f0fab7c0ff2aca9a5108ae43b12b94f14c6983cfc11e83e9c5/econml-0.14.1-cp39-cp39-macosx_10_9_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "36fa1117320d0c3b772629dffdfb6e2b907f21d03b01ed87072bea074465b131",
                "md5": "b239def8220cc4dd6dd077d703ded3ee",
                "sha256": "9316f6c8fec86f4af3505909f1f066c5d9a19d133838e6cf2a73a3d479c5bc63"
            },
            "downloads": -1,
            "filename": "econml-0.14.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl",
            "has_sig": false,
            "md5_digest": "b239def8220cc4dd6dd077d703ded3ee",
            "packagetype": "bdist_wheel",
            "python_version": "cp39",
            "requires_python": null,
            "size": 3547428,
            "upload_time": "2023-05-22T14:09:56",
            "upload_time_iso_8601": "2023-05-22T14:09:56.347454Z",
            "url": "https://files.pythonhosted.org/packages/36/fa/1117320d0c3b772629dffdfb6e2b907f21d03b01ed87072bea074465b131/econml-0.14.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "05016c51f24f5ac6233ccfef8ed4b45438eeb89f959369c0be408af4e4db23a4",
                "md5": "0d8ec28df3f9bf7c5fc0204ceb626273",
                "sha256": "298d87c81d2e134cc37a01fa6b776abf830badaa4c1e03977beb762fce95678c"
            },
            "downloads": -1,
            "filename": "econml-0.14.1-cp39-cp39-win_amd64.whl",
            "has_sig": false,
            "md5_digest": "0d8ec28df3f9bf7c5fc0204ceb626273",
            "packagetype": "bdist_wheel",
            "python_version": "cp39",
            "requires_python": null,
            "size": 937685,
            "upload_time": "2023-05-22T14:09:59",
            "upload_time_iso_8601": "2023-05-22T14:09:59.130808Z",
            "url": "https://files.pythonhosted.org/packages/05/01/6c51f24f5ac6233ccfef8ed4b45438eeb89f959369c0be408af4e4db23a4/econml-0.14.1-cp39-cp39-win_amd64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "2c5acbfd4690a9759d324a821ee354ca1b0fe194c6e12dd6721e20858139f54a",
                "md5": "186587dd83c941d9e51c465afdd95b3f",
                "sha256": "ca5eac709b810f7a423f22510a837795e29373ae6d59a8dfc7b6a743fcca47ae"
            },
            "downloads": -1,
            "filename": "econml-0.14.1.tar.gz",
            "has_sig": false,
            "md5_digest": "186587dd83c941d9e51c465afdd95b3f",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": null,
            "size": 1431396,
            "upload_time": "2023-05-22T14:10:01",
            "upload_time_iso_8601": "2023-05-22T14:10:01.327466Z",
            "url": "https://files.pythonhosted.org/packages/2c/5a/cbfd4690a9759d324a821ee354ca1b0fe194c6e12dd6721e20858139f54a/econml-0.14.1.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2023-05-22 14:10:01",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "py-why",
    "github_project": "EconML",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "econml"
}
        