openpredict

Name: openpredict
Version: 0.2.1
Summary: A package to help serve predictions of biomedical concept associations as a Translator Reasoner API.
Upload time: 2023-01-10 12:34:11
Author: Arif Yilmaz, Elif
Requires Python: <3.11,>=3.8
License: MIT License, Copyright (c) 2020 Vincent Emonet
Keywords: biomedical data translator, predictions, python, trapi
# 🔮🐍 Translator OpenPredict

[![Python versions](https://img.shields.io/pypi/pyversions/openpredict)](https://pypi.org/project/openpredict) [![Version](https://img.shields.io/pypi/v/openpredict)](https://pypi.org/project/openpredict) [![Publish package](https://github.com/MaastrichtU-IDS/translator-openpredict/actions/workflows/publish-package.yml/badge.svg)](https://github.com/MaastrichtU-IDS/translator-openpredict/actions/workflows/publish-package.yml)

[![Test the production API](https://github.com/MaastrichtU-IDS/translator-openpredict/actions/workflows/test-prod.yml/badge.svg)](https://github.com/MaastrichtU-IDS/translator-openpredict/actions/workflows/test-prod.yml) [![Run integration tests for TRAPI](https://github.com/MaastrichtU-IDS/translator-openpredict/actions/workflows/test-integration.yml/badge.svg)](https://github.com/MaastrichtU-IDS/translator-openpredict/actions/workflows/test-integration.yml) [![SonarCloud Coverage](https://sonarcloud.io/api/project_badges/measure?project=MaastrichtU-IDS_translator-openpredict&metric=coverage)](https://sonarcloud.io/dashboard?id=MaastrichtU-IDS_translator-openpredict)

**OpenPredict** is a Python package that helps serve predictions of biomedical associations as a Translator Reasoner API (aka TRAPI).

The [Translator Reasoner API](https://github.com/NCATSTranslator/ReasonerAPI) (TRAPI) defines a standard HTTP API for communicating biomedical questions and answers leveraging the [Biolink model](https://github.com/biolink/biolink-model/).

The package provides:

* a decorator `@trapi_predict` to which the developer can pass all the information required to integrate the prediction function into a Translator Reasoner API
* a `TRAPI` class to deploy a Translator Reasoner API serving a list of prediction functions decorated with `@trapi_predict`
* Helpers to store your models in a FAIR manner, using tools such as [`dvc`](https://dvc.org/) and [`mlem`](https://mlem.ai/)

Predictions are usually generated from machine learning models (e.g. predicting diseases treated by a drug), but the decorator can wrap any generic Python function, as long as the input parameters and return object follow the expected structure.

In addition to the library, this repository contains the code for the **OpenPredict Translator API** available at **[openpredict.semanticscience.org](https://openpredict.semanticscience.org)**, which serves a few prediction models developed at the Institute of Data Science.

## 📦️ Use the package

### Install

```bash
pip install openpredict
```

### Use

The `openpredict` package provides a decorator `@trapi_predict` to annotate your functions that generate predictions. The code for this package is in `src/openpredict/`.

Predictions generated from functions decorated with `@trapi_predict` can easily be imported in the Translator OpenPredict API, exposed as API endpoints to get predictions from the web, and queried through the Translator Reasoner API (TRAPI).

```python
from openpredict import trapi_predict, PredictOptions, PredictOutput

@trapi_predict(path='/predict',
    name="Get predicted targets for a given entity",
    description="Return the predicted targets for a given entity: drug (DrugBank ID) or disease (OMIM ID), with confidence scores.",
    edges=[
        {
            'subject': 'biolink:Drug',
            'predicate': 'biolink:treats',
            'object': 'biolink:Disease',
        },
        {
            'subject': 'biolink:Disease',
            'predicate': 'biolink:treated_by',
            'object': 'biolink:Drug',
        },
    ],
    nodes={
        "biolink:Disease": {
            "id_prefixes": [
                "OMIM"
            ]
        },
        "biolink:Drug": {
            "id_prefixes": [
                "DRUGBANK"
            ]
        }
    }
)
def get_predictions(
        input_id: str, options: PredictOptions
    ) -> PredictOutput:
    # Add the code to load the model and compute predictions here
    predictions = {
        "hits": [
            {
                "id": "DB00001",
                "type": "biolink:Drug",
                "score": 0.12345,
                "label": "Lepirudin",
            }
        ],
        "count": 1,
    }
    return predictions
```

> 🍪 You can use [**our cookiecutter template**](https://github.com/MaastrichtU-IDS/cookiecutter-openpredict-api/) to quickly bootstrap a repository with everything ready to start developing your prediction models, then publish them and integrate them in the Translator ecosystem.

## 🌐 The OpenPredict Translator API

In addition to the library, this repository contains the code for the **OpenPredict Translator API** available at **[openpredict.semanticscience.org](https://openpredict.semanticscience.org)** and the prediction models it exposes:

* the code for the OpenPredict API endpoints in `src/trapi/` defines:
  * a TRAPI endpoint returning predictions for the loaded models
  * individual endpoints for each loaded model, taking an input ID and returning predicted hits
  * endpoints serving metadata about runs, model evaluations, and features for the OpenPredict model, stored as RDF using the [ML Schema ontology](http://ml-schema.github.io/documentation/ML%20Schema.html)
* various folders for **different prediction models** served by the OpenPredict API are available under `src/`:
  * the OpenPredict drug-disease prediction model in `src/openpredict_model/`
  * a model to compile the evidence path between a drug and a disease explaining the predictions of the OpenPredict model in `src/openpredict_evidence_path/`
  * a prediction model trained from the Drug Repurposing Knowledge Graph (aka. DRKG) in `src/drkg_model/`

The data used by the models in this repository is versioned using `dvc` in the `data/` folder, and stored **on DagsHub at https://dagshub.com/vemonet/translator-openpredict**

### Deploy the OpenPredict API locally

Requirements: Python 3.8+ and `pip` installed

1. Clone the repository:

   ```bash
   git clone https://github.com/MaastrichtU-IDS/translator-openpredict.git
   cd translator-openpredict
   ```

2. Pull the data required to run the models in the `data` folder with [`dvc`](https://dvc.org/):

   ```bash
   pip install dvc
   dvc pull
   ```


Start the API in development mode with Docker on http://localhost:8808; the API will automatically reload when you make changes to the code:

```bash
docker-compose up api
# Or with the helper script:
./scripts/api.sh
```

> Contributions are welcome! If you wish to help improve OpenPredict, see the [instructions to contribute :woman_technologist:](/CONTRIBUTING.md) for more details on the development workflow

### Test the OpenPredict API

Run the tests locally with docker:

```bash
docker-compose run tests
# Or with the helper script:
./scripts/test.sh
```

> See the [`TESTING.md`](/TESTING.md) file for more details on testing the API.

You can change the entrypoint of the test container to run other commands, such as training a model:

```bash
docker-compose run --entrypoint "python src/openpredict_model/train.py train-model" tests
# Or with the helper script:
./scripts/run.sh python src/openpredict_model/train.py train-model
```

### Use the OpenPredict API


The user provides a drug or disease identifier as a CURIE (e.g. DRUGBANK:DB00394 or OMIM:246300) and chooses a prediction model (only the `Predict OMIM-DrugBank` classifier is currently implemented).

The API will return predicted targets for the given drug or disease:

* The **potential drugs treating a given disease** :pill:
* The **potential diseases a given drug could treat** :microbe:

> Feel free to try the API at **[openpredict.semanticscience.org](https://openpredict.semanticscience.org)**

#### TRAPI operations

Operations to query OpenPredict using the [Translator Reasoner API](https://github.com/NCATSTranslator/ReasonerAPI) standards.

##### Query operation

The `/query` operation will return the same predictions as the `/predict` operation, using the [ReasonerAPI](https://github.com/NCATSTranslator/ReasonerAPI) format, used within the [Translator project](https://ncats.nih.gov/translator/about).

The user sends a [ReasonerAPI](https://github.com/NCATSTranslator/ReasonerAPI) query asking for the predicted targets, given a source entity and the relation to predict. The query is a graph with nodes and edges defined in JSON, and uses classes from the [BioLink model](https://biolink.github.io/biolink-model).

You can use the default TRAPI query of OpenPredict `/query` operation to try a working example.

Example of TRAPI query to retrieve drugs similar to a specific drug:

```json
{
    "message": {
        "query_graph": {
        "edges": {
            "e01": {
            "object": "n1",
            "predicates": [
                "biolink:similar_to"
            ],
            "subject": "n0"
            }
        },
        "nodes": {
            "n0": {
            "categories": [
                "biolink:Drug"
            ],
            "ids": [
                "DRUGBANK:DB00394"
            ]
            },
            "n1": {
            "categories": [
                "biolink:Drug"
            ]
            }
        }
        }
    },
    "query_options": {
        "n_results": 3
    }
}
```
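A query like the one above can also be assembled programmatically and POSTed to the `/query` endpoint. The helper below is a sketch, not part of the `openpredict` package; it only builds the payload, and the commented lines show how it could be sent with the Python standard library.

```python
import json
import urllib.request


def build_similarity_query(curie: str, n_results: int = 3) -> dict:
    """Build a TRAPI query asking for drugs similar to the given drug CURIE."""
    return {
        "message": {
            "query_graph": {
                "edges": {
                    "e01": {
                        "subject": "n0",
                        "predicates": ["biolink:similar_to"],
                        "object": "n1",
                    }
                },
                "nodes": {
                    "n0": {"categories": ["biolink:Drug"], "ids": [curie]},
                    "n1": {"categories": ["biolink:Drug"]},
                },
            }
        },
        "query_options": {"n_results": n_results},
    }


payload = build_similarity_query("DRUGBANK:DB00394")

# Sending the query (network call, not executed here):
# req = urllib.request.Request(
#     "https://openpredict.semanticscience.org/query",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     results = json.load(resp)
```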

##### Predicates operation

The `/predicates` operation will return the entities and relations provided by this API in a JSON object (following the [ReasonerAPI](https://github.com/NCATSTranslator/ReasonerAPI) specifications).

> Try it at [https://openpredict.semanticscience.org/predicates](https://openpredict.semanticscience.org/predicates)

#### Notebooks examples :notebook_with_decorative_cover:

We provide [Jupyter Notebooks](https://jupyter.org/) with examples to use the OpenPredict API:

1. [Query the OpenPredict API](https://github.com/MaastrichtU-IDS/translator-openpredict/blob/master/docs/openpredict-examples.ipynb)
2. [Generate embeddings with pyRDF2Vec](https://github.com/MaastrichtU-IDS/translator-openpredict/blob/master/docs/openpredict-pyrdf2vec-embeddings.ipynb), and import them in the OpenPredict API

#### Add embedding :station:

The default baseline model is `openpredict_baseline`. You can choose the base model when you post new embeddings using the `/embeddings` call. Then the OpenPredict API will:

1. add embeddings to the provided model
2. train the model with the new embeddings
3. store the features and model using a unique ID for the run (e.g. `7621843c-1f5f-11eb-85ae-48a472db7414`)

Once the embeddings have been added, you can list the previously generated models (including `openpredict_baseline`) and use them as the base model when you ask for predictions or add new embeddings.

#### Predict operation :crystal_ball:

Use this operation if you just want to easily retrieve predictions for a given entity. The `/predict` operation takes 4 parameters (1 required):

* A `drug_id` to get predicted diseases it could treat (e.g. `DRUGBANK:DB00394`)
  * **OR** a `disease_id` to get predicted drugs it could be treated with (e.g. `OMIM:246300`)
* The prediction model to use (default to `Predict OMIM-DrugBank`)
* The minimum score of the returned predictions, from 0 to 1 (optional)
* The limit of results to return, starting from the highest score, e.g. 42 (optional)

The API will return the list of predicted targets for the given entity; labels are resolved using the [Translator Name Resolver API](https://nodenormalization-sri.renci.org).

> Try it at [https://openpredict.semanticscience.org/predict?drug_id=DRUGBANK:DB00394](https://openpredict.semanticscience.org/predict?drug_id=DRUGBANK:DB00394)
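As a sketch of how the `/predict` operation could be called from Python: only the `drug_id` parameter name is confirmed by the example URL above, so the names of the model, minimum-score, and limit parameters should be checked against the API's OpenAPI documentation before use.

```python
import json
import urllib.parse
import urllib.request

BASE_URL = "https://openpredict.semanticscience.org"


def predict_url(drug_id: str) -> str:
    """Build the /predict URL for a given drug CURIE."""
    # Only drug_id is confirmed; other query parameters (model, min score,
    # result limit) are documented in the API's OpenAPI spec.
    return f"{BASE_URL}/predict?" + urllib.parse.urlencode({"drug_id": drug_id})


url = predict_url("DRUGBANK:DB00394")

# Fetching the predictions (network call, not executed here):
# with urllib.request.urlopen(url) as resp:
#     hits = json.load(resp)["hits"]
```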

---

### More about the data model :minidisc:

* The gold standard for drug-disease indications has been retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3159979
* Metadata about runs, models evaluations, features are stored as RDF using the [ML Schema ontology](http://ml-schema.github.io/documentation/ML%20Schema.html).
  * See the [ML Schema documentation](http://ml-schema.github.io/documentation/ML%20Schema.html) for more details on the data model.

Diagram of the data model used for OpenPredict, based on the ML Schema ontology (`mls`):

![OpenPredict datamodel](https://raw.githubusercontent.com/MaastrichtU-IDS/translator-openpredict/master/docs/OpenPREDICT_datamodel.jpg)

---

## Translator application

### Service Summary
Query for drug-disease pairs predicted from pre-computed sets of graph embeddings.

Add new embeddings to improve the predictive models, with versioning and scoring of the models.

### Component List
**API component**

1. Component Name: **OpenPredict API**

2. Component Description: **Python API to serve a pre-computed set of drug-disease pair predictions from graph embeddings**

3. GitHub Repository URL: https://github.com/MaastrichtU-IDS/translator-openpredict

4. Component Framework: **Knowledge Provider**

5. System requirements

    5.1. Specific OS and version if required: **Python 3.8**

    5.2. CPU/Memory (for CI, TEST and PROD): **32 CPUs and 32 GB memory?**

    5.3. Disk size/IO throughput (for CI, TEST and PROD): **20 GB?**

    5.4. Firewall policies: does the team need access to infrastructure components?
    **The NodeNormalization API https://nodenormalization-sri.renci.org**


6. External Dependencies (any components other than current one)

    6.1. External storage solution: **Models and database are stored in `/data/openpredict` in the Docker container**

7. Docker application:

    7.1. Path to the Dockerfile: **`Dockerfile`**

    7.2. Docker build command:

    ```bash
    docker build -t ghcr.io/maastrichtu-ids/openpredict-api .
    ```

    7.3. Docker run command:

    **Replace `${PERSISTENT_STORAGE}` with the path to persistent storage on the host:**

    ```bash
    docker run -d -v ${PERSISTENT_STORAGE}:/data/openpredict -p 8808:8808 ghcr.io/maastrichtu-ids/openpredict-api
    ```

8. Logs of the application

    8.1. Format of the logs: **TODO**

# Acknowledgments

* This service has been built from the [fair-workflows/openpredict](https://github.com/fair-workflows/openpredict) project.
* Predictions made using the [PREDICT method](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3159979/).
* Service funded by the [NIH NCATS Translator project](https://ncats.nih.gov/translator/about).

![Funded by the NIH NCATS Translator project](https://ncats.nih.gov/files/TranslatorGraphic2020_1100x420.jpg)
