efaar-benchmarking

- Name: efaar-benchmarking
- Version: 0.1.0
- Upload time: 2023-10-24 10:11:58
- Requires Python: >=3.8
- Keywords: efaar_benchmarking
- Author email: Recursion Pharmaceuticals <devs@recursionpharma.com>
# EFAAR_benchmarking

This library enables computing and retrieving metrics to benchmark a whole-genome perturbative map created by the pipeline.
The metrics available in this repo are pairwise gene-gene recall for the publicly available Reactome, CORUM, and hu.MAP
datasets.

By default, we do not filter on perturbation fingerprint, although filtering is available through the parameters to the `benchmark` function.
We compute the metrics for three different random seeds used to generate empirical null entities.
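The recall computation can be illustrated with a self-contained sketch. This is not the package's actual implementation: the function name `recall_against_null`, the cosine-similarity choice, and all parameter names are illustrative assumptions.

```python
import numpy as np


def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))


def recall_against_null(embeddings, known_pairs, n_null=1000,
                        thr_pair=(0.05, 0.95), seed=0):
    """Recall of annotated gene pairs against an empirical null.

    Illustrative only; `embeddings` maps gene name -> vector, and
    `known_pairs` lists annotated (gene, gene) relationships.
    """
    rng = np.random.default_rng(seed)
    genes = list(embeddings)
    # Empirical null: cosine similarities of randomly drawn gene pairs.
    null = [cosine(embeddings[a], embeddings[b])
            for a, b in (rng.choice(genes, 2, replace=False)
                         for _ in range(n_null))]
    lo, hi = np.quantile(null, thr_pair)
    # A known pair counts as recalled when its similarity falls outside
    # the central null region [lo, hi].
    hits = [(a, b) for a, b in known_pairs
            if not lo <= cosine(embeddings[a], embeddings[b]) <= hi]
    return len(hits) / len(known_pairs)


# Mirroring the approach above: compute the metric once per null seed
# and summarize across runs.
# recalls = [recall_against_null(emb, pairs, seed=s) for s in (0, 1, 2)]
```

Repeating the computation with several null seeds, as described above, gives a sense of how sensitive the recall value is to the sampled null.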

See our bioRxiv paper for the details: <https://www.biorxiv.org/content/10.1101/2022.12.09.519400v1>

The following constants configure and control various aspects of the benchmarking process:

- `BENCHMARK_DATA_DIR`: Directory path to the benchmark annotation data, resolved via `importlib.resources`.
- `BENCHMARK_SOURCES`: List of benchmark sources: "Reactome", "HuMAP", and "CORUM".
- `PERT_LABEL_COL`: Column name for the gene perturbation labels.
- `PERT_SIG_PVAL_COL`: Column name for the perturbation p-value.
- `PERT_SIG_PVAL_THR`: Threshold for the perturbation p-value.
- `PERT_TYPE_COL`: Column name for the perturbation type.
- `PERT_TYPE`: Perturbation type to keep; "GENE" by default.
- `WELL_TYPE_COL`: Column name for the well type.
- `WELL_TYPE`: Well type to keep; "query_guides".
- `RECALL_PERC_THR_PAIR`: Tuple of (lower threshold, upper threshold) used when calculating recall percentages.
- `RANDOM_SEED`: Seed used for random number generation.
- `RANDOM_COUNT`: Number of runs for benchmarking.
- `N_NULL_SAMPLES`: Number of null samples used in benchmarking.
- `MIN_REQ_ENT_CNT`: Minimum number of entities required for benchmarking.
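As a hypothetical illustration of how the `PERT_*` constants could drive the optional perturbation-fingerprint filtering: the column names, threshold value, and toy DataFrame below are made up for the example and may differ from the package's real defaults.

```python
import pandas as pd

# Hypothetical values for the constants described above.
PERT_LABEL_COL = "gene"
PERT_SIG_PVAL_COL = "pval"
PERT_TYPE_COL = "pert_type"
PERT_SIG_PVAL_THR = 0.01
PERT_TYPE = "GENE"

# Toy per-perturbation metadata.
meta = pd.DataFrame({
    "gene": ["BRCA1", "TP53", "NTC"],
    "pval": [0.001, 0.2, 0.5],
    "pert_type": ["GENE", "GENE", "CONTROL"],
})

# Keep only significant gene perturbations (this filtering is off by
# default in the package and enabled via `benchmark` parameters).
sig = meta[(meta[PERT_TYPE_COL] == PERT_TYPE)
           & (meta[PERT_SIG_PVAL_COL] < PERT_SIG_PVAL_THR)]
```

Only rows passing both the type and p-value filters would then enter the recall computation.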

## Installation

This package is installable via `pip`.

```bash
pip install efaar_benchmarking
```

## References
**Reactome:**

_Gillespie, M., Jassal, B., Stephan, R., Milacic, M., Rothfels, K., Senff-Ribeiro, A., Griss, J., Sevilla, C., Matthews, L., Gong, C., et al. (2022). The reactome pathway knowledgebase 2022. Nucleic Acids Res. 50, D687–D692. 10.1093/nar/gkab1028._

**CORUM:**

_Giurgiu, M., Reinhard, J., Brauner, B., Dunger-Kaltenbach, I., Fobo, G., Frishman, G., Montrone, C., and Ruepp, A. (2019). CORUM: the comprehensive resource of mammalian protein complexes-2019. Nucleic Acids Res. 47, D559–D563. 10.1093/nar/gky973._

**HuMAP:**

_Drew, K., Wallingford, J.B., and Marcotte, E.M. (2021). hu.MAP 2.0: integration of over 15,000 proteomic experiments builds a global compendium of human multiprotein assemblies. Mol. Syst. Biol. 17, e10016. 10.15252/msb.202010016._

            
