python-terrier

Name: python-terrier
Version: 0.12.1 (PyPI)
Summary: PyTerrier
Upload time: 2024-12-19 12:52:20
Requires Python: >=3.8
Requirements: numpy, pandas, more_itertools, tqdm, requests, ir_datasets, wget, pyjnius, deprecated, scipy, ir_measures, pytrec_eval_terrier, jinja2, statsmodels, dill, joblib, chest

[![Continuous Testing](https://github.com/terrier-org/pyterrier/actions/workflows/push.yml/badge.svg)](https://github.com/terrier-org/pyterrier/actions/workflows/push.yml)
[![PyPI version](https://badge.fury.io/py/python-terrier.svg)](https://badge.fury.io/py/python-terrier)
[![Documentation Status](https://readthedocs.org/projects/pyterrier/badge/?version=latest)](https://pyterrier.readthedocs.io/en/latest/)


# PyTerrier

A Python API for Terrier - v.0.12

# Installation

The easiest way to get started with PyTerrier is to use one of our Colab notebooks - look for the ![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg) badges below.

### Linux or Google Colab or Windows
1. `pip install python-terrier`
2. You may need to set the JAVA_HOME environment variable if Pyjnius cannot find your Java installation.
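
If Pyjnius cannot locate Java, you can point JAVA_HOME at your JDK before importing PyTerrier. The path below is only an example; substitute the location of your own installation:

```shell
# Example only -- replace with the path to your own JDK installation
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64
```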

### macOS

1. You need to have Java installed. Pyjnius/PyTerrier will pick up the location automatically.
2. `pip install python-terrier`

# Indexing

PyTerrier has a number of useful classes for creating indices:

 - You can create an index from a TREC-formatted collection using [TRECCollectionIndexer](https://pyterrier.readthedocs.io/en/latest/terrier-indexing.html#treccollectionindexer).
 - For TXT, PDF, Microsoft Word, etc. files, you can use [FilesIndexer](https://pyterrier.readthedocs.io/en/latest/terrier-indexing.html#filesindexer).
 - For any arbitrary iterable of dictionaries, or a Pandas DataFrame, you can use [IterDictIndexer](https://pyterrier.readthedocs.io/en/latest/terrier-indexing.html#iterdictindexer).

See the [indexing documentation](https://pyterrier.readthedocs.io/en/latest/terrier-indexing.html), or the examples in the [indexing notebook](examples/notebooks/indexing.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/terrier-org/pyterrier/blob/master/examples/notebooks/indexing.ipynb)
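
As a concrete illustration of the input that IterDictIndexer consumes, the sketch below builds a tiny corpus as an iterable of dicts; each dict needs a unique `docno` plus the text field(s) to index. The documents and index path are made up for illustration, and the indexing call itself is shown only as a comment since it needs a PyTerrier installation:

```python
# A minimal corpus as an iterable of dicts -- the input format IterDictIndexer expects.
# Each dict needs a unique 'docno' plus the text field(s) to index.
def corpus_iter():
    docs = [
        {"docno": "d1", "text": "chemical reactions in the body"},
        {"docno": "d2", "text": "professional contact lens fitting"},
    ]
    yield from docs

# With PyTerrier installed, indexing would then look like (not executed here):
#   indexer = pt.IterDictIndexer("./toy-index")
#   indexref = indexer.index(corpus_iter())
docnos = [d["docno"] for d in corpus_iter()]
```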

# Retrieval and Evaluation

```python
topics = pt.io.read_topics(topicsFile)
qrels = pt.io.read_qrels(qrelsFile)
BM25_r = pt.terrier.Retriever(index, wmodel="BM25")
res = BM25_r.transform(topics)
pt.Evaluate(res, qrels, metrics = ['map'])
```
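
For intuition about the `'map'` metric used above: mean average precision is the mean over queries of each query's average precision, which can be computed by hand. A toy sketch (plain Python, not using `pt.Evaluate`; the docnos are invented):

```python
def average_precision(ranked_docnos, relevant):
    """Average precision for one query: mean of precision@k at each relevant hit."""
    hits, precisions = 0, []
    for k, docno in enumerate(ranked_docnos, start=1):
        if docno in relevant:
            hits += 1
            precisions.append(hits / k)  # precision at the rank of this hit
    return sum(precisions) / max(len(relevant), 1)

# Relevant docs d1 and d3: d1 at rank 1 (P=1.0), d3 at rank 3 (P=2/3),
# so AP = (1.0 + 2/3) / 2. MAP is this value averaged over all queries.
ap = average_precision(["d1", "d2", "d3"], {"d1", "d3"})
```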

See also the [retrieval documentation](https://pyterrier.readthedocs.io/en/latest/terrier-retrieval.html), or the worked example in the [retrieval and evaluation notebook](examples/notebooks/retrieval_and_evaluation.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/terrier-org/pyterrier/blob/master/examples/notebooks/retrieval_and_evaluation.ipynb)

# Experiment - Perform Retrieval and Evaluation with a single function
PyTerrier provides an [Experiment](https://pyterrier.readthedocs.io/en/latest/experiments.html) function, which allows you to compare multiple retrieval approaches on the same queries & relevance assessments:

```python
pt.Experiment([BM25_r, PL2_r], topics, qrels, ["map", "ndcg"])
```

There is a worked example in the [experiment notebook](examples/notebooks/experiment.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/terrier-org/pyterrier/blob/master/examples/notebooks/experiment.ipynb)

# Pipelines

PyTerrier makes it easy to develop complex retrieval pipelines using Python operators such as `>>` to chain different retrieval components. Each retrieval approach is a [transformer](https://pyterrier.readthedocs.io/en/latest/transformer.html), having one key method, `transform()`, which takes a single Pandas dataframe as input, and returns another dataframe. For example, the pipelines below apply the sequential dependence model, and a query expansion process:
```python
sdm_bm25 = pt.rewrite.SDM() >> pt.terrier.Retriever(indexref, wmodel="BM25")
bo1_qe = BM25_r >> pt.rewrite.Bo1QueryExpansion() >> BM25_r
```

There is documentation on [transformer operators](https://pyterrier.readthedocs.io/en/latest/operators.html), as well as [example pipelines](https://pyterrier.readthedocs.io/en/latest/pipeline_examples.html) showing other common use cases. For more information, see the [PyTerrier data model](https://pyterrier.readthedocs.io/en/latest/datamodel.html).
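
The dataframe-in, dataframe-out contract above can be illustrated without PyTerrier at all. The sketch below uses hypothetical stand-ins for transformers (a toy query rewriter and a toy retriever) and models `>>` as plain function composition; none of these names come from the PyTerrier API:

```python
import pandas as pd

# Hypothetical stand-ins for transformers: each is just a
# DataFrame -> DataFrame function, which is all transform() requires.
def rewrite_queries(topics: pd.DataFrame) -> pd.DataFrame:
    out = topics.copy()
    out["query"] = out["query"] + " expanded"   # toy query rewrite
    return out

def retrieve(topics: pd.DataFrame) -> pd.DataFrame:
    # Toy "retrieval": one result row per query, with a fixed score
    return pd.DataFrame({
        "qid": topics["qid"],
        "query": topics["query"],
        "docno": ["d" + q for q in topics["qid"]],
        "score": 1.0,
    })

# The >> operator composes transformers; ordinary function composition models it:
def compose(*stages):
    def pipeline(df):
        for stage in stages:
            df = stage(df)
        return df
    return pipeline

pipe = compose(rewrite_queries, retrieve)
res = pipe(pd.DataFrame({"qid": ["1"], "query": ["test"]}))
```

Because every stage speaks the same dataframe language, arbitrary components (rewriters, retrievers, rerankers) can be swapped in and out of a pipeline.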

# Neural Reranking and Dense Retrieval

PyTerrier has additional plugins for BERT (through OpenNIR), T5, ColBERT, doc2query and many more...
 - PyTerrier_DR: [[Github](https://github.com/terrierteam/pyterrier_dr)] - single-representation dense retrieval
 - PyTerrier_ColBERT: [[Github](https://github.com/terrierteam/pyterrier_colbert)] - multiple-representation dense retrieval and/or neural reranking
 - PyTerrier_PISA: [[Github](https://github.com/terrierteam/pyterrier_pisa)] - fast in-memory indexing and retrieval using [PISA](https://github.com/pisa-engine/pisa)
 - PyTerrier_T5: [[Github](https://github.com/terrierteam/pyterrier_t5)] - neural reranking: monoT5, duoT5
 - PyTerrier_GenRank [[Github](https://github.com/emory-irlab/pyterrier_genrank)] - generative listwise reranking: RankVicuna, RankZephyr
 - PyTerrier_doc2query: [[Github](https://github.com/terrierteam/pyterrier_doc2query)] - neural augmented indexing
 - PyTerrier_SPLADE: [[Github](https://github.com/cmacdonald/pyt_splade)] - neural augmented indexing

Older plugins include:
 - PyTerrier_ANCE: [[Github](https://github.com/terrierteam/pyterrier_ance)] - dense retrieval
 - PyTerrier_DeepCT: [[Github](https://github.com/terrierteam/pyterrier_deepct)] - neural augmented indexing
 - OpenNIR: [[Github](https://github.com/Georgetown-IR-Lab/OpenNIR)] [[Documentation](https://opennir.net/)]

You can see examples of how to use these, including notebooks that run on Google Colab, in the contents of our [Search Solutions 2022 tutorial](https://github.com/terrier-org/searchsolutions2022-tutorial).

# Learning to Rank

Complex retrieval pipelines, including for learning-to-rank, can be constructed using PyTerrier's operator language. For example, to combine two features and make them available for learning, we can use the `**` operator.
```python
two_features = BM25_r >> ( 
  pt.terrier.Retriever(indexref, wmodel="DirichletLM") ** 
  pt.terrier.Retriever(indexref, wmodel="PL2") 
)
```

See also the [learning to rank documentation](https://pyterrier.readthedocs.io/en/latest/ltr.html), as well as the worked examples in the [learning-to-rank notebook](examples/notebooks/ltr.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/terrier-org/pyterrier/blob/master/examples/notebooks/ltr.ipynb). Some pipelines can be automatically optimised - more details about pipeline optimisation are included in our ICTIR 2020 paper.
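
Conceptually, the `**` operator scores the same candidate documents with each sub-transformer and collects the scores into a per-document feature vector. The sketch below models that outcome with plain pandas; the column names and scores are invented for illustration, not produced by PyTerrier:

```python
import pandas as pd

# Hypothetical per-system results for the same query/documents
dlm = pd.DataFrame({"qid": ["1", "1"], "docno": ["d1", "d2"], "score": [2.0, 1.0]})
pl2 = pd.DataFrame({"qid": ["1", "1"], "docno": ["d1", "d2"], "score": [0.5, 1.5]})

# Joining on (qid, docno) and stacking the scores models the "features"
# column that a feature-union pipeline yields for a learning-to-rank stage.
merged = dlm.merge(pl2, on=["qid", "docno"], suffixes=("_dlm", "_pl2"))
merged["features"] = merged[["score_dlm", "score_pl2"]].values.tolist()
```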

# Dataset API

PyTerrier allows simple access to standard information retrieval test collections through its [dataset API](https://pyterrier.readthedocs.io/en/latest/datasets.html), which can download the topics, qrels, corpus or, for some test collections, a ready-made Terrier index.

```python
topics = pt.get_dataset("trec-robust-2004").get_topics()
qrels = pt.get_dataset("trec-robust-2004").get_qrels()
pt.Experiment([BM25_r, PL2_r], topics, qrels, eval_metrics)
```

You can index datasets that include a corpus using `IterDictIndexer` and `get_corpus_iter()`:

```python
dataset = pt.get_dataset('irds:cord19/trec-covid')
indexer = pt.IterDictIndexer('./cord19-index')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=('title', 'abstract'))
```

You can use `pt.list_datasets()` to see available test collections - if your favourite test collection is missing, [you can submit a Pull Request](https://github.com/terrier-org/pyterrier/pulls).

All datasets from the [ir_datasets package](https://github.com/allenai/ir_datasets) are available
under the `irds:` prefix. E.g., use `pt.datasets.get_dataset("irds:medline/2004/trec-genomics-2004")`
to get the TREC Genomics 2004 dataset. A full catalogue of ir_datasets is available [here](https://ir-datasets.com/all.html).

# Index API

All of the standard Terrier Index API can be accessed easily from PyTerrier.

For instance, accessing term statistics is a single call on an index:
```python
index.getLexicon()["circuit"].getDocumentFrequency()
```

There are lots of examples in the [index API notebook](examples/notebooks/index_api.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/terrier-org/pyterrier/blob/master/examples/notebooks/index_api.ipynb)

# Documentation

More documentation for PyTerrier is available at https://pyterrier.readthedocs.io/en/latest/.

# Open Source Licence

PyTerrier is subject to the terms detailed in the Mozilla Public License Version 2.0. The Mozilla Public License can be found in the file [LICENSE.txt](LICENSE.txt). By using this software, you have agreed to the licence.

# Citation Licence

The source and binary forms of PyTerrier are subject to the following citation license: 

By downloading and using PyTerrier, you agree to cite the undernoted paper describing PyTerrier in any kind of material you produce where PyTerrier was used to conduct search or experimentation, whether it be a research paper, dissertation, article, poster, presentation, or documentation. By using this software, you have agreed to the citation licence.

[Declarative Experimentation in Information Retrieval using PyTerrier. Craig Macdonald and Nicola Tonellotto. In Proceedings of ICTIR 2020.](https://arxiv.org/abs/2007.14271)

```bibtex
@inproceedings{pyterrier2020ictir,
    author = {Craig Macdonald and Nicola Tonellotto},
    title = {Declarative Experimentation in Information Retrieval using PyTerrier},
    booktitle = {Proceedings of ICTIR 2020},
    year = {2020}
}

```

# Credits

 - Alex Tsolov, University of Glasgow
 - Craig Macdonald, University of Glasgow
 - Nicola Tonellotto, University of Pisa
 - Arthur Câmara, TU Delft
 - Alberto Ueda, Federal University of Minas Gerais
 - Sean MacAvaney, Georgetown University/University of Glasgow
 - Chentao Xu, University of Glasgow
 - Sarawoot Kongyoung, University of Glasgow
 - Zhan Su, Copenhagen University
 - Marcus Schutte, TU Delft
 - Lukas Zeit-Altpeter, Friedrich Schiller University Jena

            
