Name: tarexp
Version: 0.1.4
Summary: A Python framework for Technology-Assisted Review experiments.
Home page: https://github.com/eugene-yang/tarexp
Author: Eugene Yang
Requires Python: >=3.7
Upload time: 2024-03-18 01:53:40

# TARexp: A Python Framework for Technology-Assisted Review Experiments

`TARexp` is an open-source Python framework for conducting TAR experiments, with reference
implementations of commonly used algorithms and methods.

The experiments are fully reproducible, and ablation studies are easy to conduct. 
For components that do not change which documents are selected for review, 
`TARexp` supports replaying existing TAR runs and experimenting with these components offline (see the stopping-rule replay example below). 

Helper functions for analyzing results are also available. 

Please visit our Google Colab Demo to check out the full running example [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/eugene-yang/tarexp/blob/main/examples/exp-demo.ipynb)

Please refer to the documentation for more detail: https://eugene.zone/tarexp. 

## Get Started

You can install `TARexp` from PyPI by running
```bash
pip install tarexp
```

Or install the latest version from GitHub
```bash
pip install git+https://github.com/eugene-yang/tarexp.git
```

If you would like to build it from source, please use
```bash
git clone https://github.com/eugene-yang/tarexp.git
cd tarexp
python setup.py bdist_wheel
pip install dist/*.whl
```

In Python, use the following imports to load both the main package and the components
```python
import tarexp
from tarexp import component
```

## Running a Workflow

The following snippet is an example of creating a `dataset` instance for `TARexp`. 
For `scikit-learn` rankers, a dataset is essentially a sparse `scipy` matrix holding the 
vectorized documents, together with a list or array of binary labels, one per row of the matrix. 

```python
from sklearn import datasets
import pandas as pd

# RCV1: `data` is the sparse document-term matrix; `target` holds binary
# category labels that serve as relevance judgments.
rcv1 = datasets.fetch_rcv1()
X = rcv1['data']
rel_info = pd.DataFrame(rcv1['target'].todense().astype(bool), columns=rcv1['target_names'])
ds = tarexp.SparseVectorDataset.from_sparse(X)
```
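
Note that the relevance labels are attached to the dataset separately through `set_label` (the same call used in the workflow example below). A minimal sketch, assuming the `'GPRO'` RCV1 category:
```python
# Bind the binary relevance labels of one category to the vectorized dataset;
# the returned labeled dataset is what the workflows below take as input.
labeled_ds = ds.set_label(rel_info['GPRO'])
```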

The following snippet defines the set of components to use in a workflow: 
```python
from sklearn.linear_model import LogisticRegression

# Combine a ranker, labeler, sampler, and stopping rule into one setting.
setting = component.combine(component.SklearnRanker(LogisticRegression, solver='liblinear'), 
                            component.PerfectLabeler(), 
                            component.RelevanceSampler(), 
                            component.FixedRoundStoppingRule(max_round=20))()
```

To declare a workflow, pass in your dataset, the component setting, and any other parameters. 
```python
workflow = tarexp.OnePhaseTARWorkflow(
    ds.set_label(rel_info['GPRO']), 
    setting, 
    seed_doc=[1023], 
    batch_size=200, 
    random_seed=123
)
```

Finally, you can execute the workflow by iterating over it. 
Every measure from [`ir-measures`](https://ir-measur.es/en/latest/) is also supported as an evaluation metric.

```python
import ir_measures

recording_metrics = [ir_measures.RPrec, tarexp.OptimisticCost(target_recall=0.8, cost_structure=(25,5,5,1))]
for ledger in workflow:
    print("Round {}: found {} positives in total".format(ledger.n_rounds, ledger.n_pos_annotated)) 
    print("metric:", workflow.getMetrics(recording_metrics))
```
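
If you want to analyze the run afterwards, a simple variation of the loop above (a sketch using only the attributes and calls already shown) records the per-round values instead of printing them:
```python
# Collect per-round statistics for later analysis instead of printing them.
history = []
for ledger in workflow:
    history.append({
        "round": ledger.n_rounds,
        "n_pos_annotated": ledger.n_pos_annotated,
        "metrics": workflow.getMetrics(recording_metrics),
    })
```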

Besides standard IR evaluation metrics, `TARexp` also implements `OptimisticCost`, a cost-based evaluation metric. Please refer to [this paper](https://arxiv.org/abs/2106.09866) for more information, and consider citing it if you use this measure. 
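
As a hedged illustration reusing only the constructor arguments shown above, the same run can be scored under several recall targets and cost structures by recording multiple `OptimisticCost` instances (see the cited paper for the meaning of each entry in the cost-structure tuple):
```python
# Score the run under several recall targets and cost structures; the
# semantics of the cost-structure tuple are defined in the cited paper.
cost_metrics = [
    tarexp.OptimisticCost(target_recall=tr, cost_structure=cs)
    for tr in (0.8, 0.9)
    for cs in [(1, 1, 1, 1), (25, 5, 5, 1)]
]
print(workflow.getMetrics(cost_metrics))
```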

## Running Experiments

### TAR Experiments

`tarexp.TARExperiment` is a wrapper and dispatcher for running TAR experiments with different settings. 
It constructs all combinations of the input settings and dispatches each resulting TAR run for execution.

The following command defines a set of 6 TAR runs: 3 topics, each run twice with batch sizes of 200 and 100.  

```python
from ir_measures import RPrec, P

exp = tarexp.TARExperiment('./my_tar_exp/', random_seed=123, max_round_exec=20,
                            metrics=[RPrec, P@10, tarexp.OptimisticCost(target_recall=0.8, cost_structure=(1,10,1,10))],
                            tasks=tarexp.TaskFeeder(ds, rel_info[['GPRO', 'GOBIT', 'E141']]),
                            components=setting,
                            workflow=tarexp.OnePhaseTARWorkflow, batch_size=[200, 100])
```

To start running the experiment, use the following command, which executes on a single process and resumes any crashed runs found in the output directory. 
```python
results = exp.run(n_processes=1, resume=True, dump_frequency=10)
```

### Testing Stopping Rules

`TARexp` also encourages experimenting with stopping rules. 
A number of stopping rules are built into the package, and we continue to add more. 

The following snippet is an example of running a replay experiment over a set of existing 
TAR runs, with a list of stopping rules passed through the `stopping_rules` argument. 

```python
replay_exp = tarexp.StoppingExperimentOnReplay(
                    './test_stopping_rules', random_seed=123,
                    tasks=tarexp.TaskFeeder(ds, rel_info[['GPRO','GOBIT', 'E141']]),
                    replay=tarexp.OnePhaseTARWorkflowReplay,
                    saved_exp_path='./my_tar_exp',
                    metrics=[tarexp.OptimisticCost(target_recall=0.8, cost_structure=(1,1,1,1)),
                             tarexp.OptimisticCost(target_recall=0.9, cost_structure=(1,1,1,1))],
                    stopping_rules=[
                        component.KneeStoppingRule(), 
                        component.BudgetStoppingRule(), 
                        component.BatchPrecStoppingRule(), 
                        component.ReviewHalfStoppingRule(),
                        component.Rule2399StoppingRule(), 
                        component.QuantStoppingRule(0.4, 0), 
                        component.QuantStoppingRule(0.2, 0),
                        component.QuantStoppingRule(0.8, 0),
                        component.CHMHeuristicsStoppingRule(0.8),
                        component.CHMHeuristicsStoppingRule(0.4),
                        component.CHMHeuristicsStoppingRule(0.2),
                    ]
            )

stopping_results = replay_exp.run(resume=True, dump_frequency=10)
```

### Visualization

`TARexp` also provides visualization tools for TAR runs. 

`createDFfromResults` creates a pandas DataFrame from either the results variable
```python
df = tarexp.helper.createDFfromResults(results, remove_redundant_level=True)
```
or from the output directory
```python
df = tarexp.helper.createDFfromResults('./my_tar_exp', remove_redundant_level=True)
```
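
The returned DataFrame is multi-indexed; the cost-dynamic example below, for instance, selects the `GOBIT` topic with `df.loc[:, 'GOBIT', :]` and groups by the `dataset` level. A quick way to inspect its structure (a sketch using standard pandas calls):
```python
# Inspect the index levels and the first few rows of the experiment DataFrame.
print(df.index.names)
print(df.head())
```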

The following command produces the cost dynamic graph introduced in [this paper](https://arxiv.org/abs/2106.09866). 
```python
tarexp.helper.cost_dynamic(
    df.loc[:, 'GOBIT', :].groupby(level='dataset'),
    recall_targets=[0.8], cost_structures=[(1,1,1,1), (10, 10, 1, 1), (25, 5, 5, 1)],
    with_hatches=True
)
```

![](./examples/cost-dynamic-1.png)

Alternatively, you can create the same graph using the command-line interface
```bash
python -m tarexp.helper.plotting \
       --runs GPRO=./my_tar_exp/GPRO.61b1f31a0a29de634939db77c0dde383/  \
              GOBIT=./my_tar_exp/GOBIT.ae86e0b37809cb139dfa1f4cf914fb9b/  \
       --cost_structures 1-1-1-1 25-5-5-1 --y_thousands --with_hatches
```

![](./examples/cost-dynamic-2.png)

## Feedback

Any feedback is welcome! 
You can reach out to us by emailing the author or raising an issue! 

## Reference

The demo paper of `TARexp` is currently under review. 

If you use the cost measure or the cost dynamic graphs, please consider citing this paper:
```bibtex
@inproceedings{cost-structure,
	author = {Eugene Yang and David D. Lewis and Ophir Frieder},
	title = {On Minimizing Cost in Legal Document Review Workflows},
	booktitle = {Proceedings of the ACM Symposium on Document Engineering (DocEng)},
	year = {2021},
	url = {https://arxiv.org/abs/2106.09866}
}
```