COMPREDICT's AI CORE API Client
===============================
![GitHub Workflow Status](https://img.shields.io/github/workflow/status/COMPREDICT-GmbH/ai-sdk-python/ai-sdk-python)
![GitHub release (latest by date)](https://img.shields.io/github/v/release/COMPREDICT-GmbH/ai-sdk-python)
[![PyPI](https://img.shields.io/pypi/v/COMPREDICT-AI-SDK?color=orange)](https://pypi.org/project/COMPREDICT-AI-SDK/)
**Python client for connecting to the COMPREDICT V2 REST API.**
**To find out more, please visit** **[COMPREDICT website](https://compredict.ai/ai-core/)**.
Requirements
------------
**To connect to the API with basic auth you need the following:**
- Token generated with your AI Core username and password
- (Optional) Callback URL to which the results are sent
Installation
------------
You can use `pip` or `easy_install` to install the package:
~~~shell
$ pip install COMPREDICT-AI-SDK
~~~
or
~~~shell
$ easy_install COMPREDICT-AI-SDK
~~~
Configuration
-------------
### Basic Authentication
AI Core requires the user to authenticate with a token generated from the user's AI Core username and password.
**WARNING**: Bear in mind that this type of authentication works only with v2 of the AI Core API.
**There are two ways in which a user can generate the needed token:**
1. **Generate the token directly with the utility function** (this approach requires passing the AI Core URL as well):
~~~python
from compredict.utils.authentications import generate_token

# user_username and user_password are the user's personal credentials
response = generate_token(url="https://core.compredict.ai/api/v2", username=user_username,
                          password=user_password)
response_json = response.json()

# access tokens or errors encountered
if response.status_code == 200:
    token = response_json['access']
    refresh_token = response_json['refresh']
    print(token)
    print(refresh_token)
elif response.status_code == 400:
    print(response_json['errors'])
else:
    print(response_json['error'])
~~~
Now, you can instantiate the Client with the freshly generated token.
~~~python
import compredict
compredict_client = compredict.client.api.get_instance(token=your_new_generated_token_here)
~~~
2. **Instantiate the Client with your AI Core username and password.** In this case, a token, as well as a refresh_token, will be
   generated and assigned to the Client automatically. After this, you don't need to re-instantiate the Client
   with the generated token; you can directly call all Client methods as you like.
~~~python
import compredict
compredict_client = compredict.client.api.get_instance(username=username, password=password, callback_url=None)
~~~
### Accessing a new access token with token refresh
The refresh token is used for generating a new access token (mainly when the previous access token has expired).
**A new access token can be generated with the refresh token in two ways:**
**1. By calling the utility function:**
~~~python
from compredict.utils.authentications import generate_token_from_refresh_token

response = generate_token_from_refresh_token(url="https://core.compredict.ai/api/v2", token=refresh_token)
response_json = response.json()

# access token or errors encountered
if response.status_code == 200:
    token = response_json['access']
    print(token)
elif response.status_code == 400:
    print(response_json['errors'])
else:
    print(response_json['error'])
~~~
Then, you can instantiate the Client with the new access token.
**2. By calling the Client method:**
If you created the Client with your username and password, you don't need to pass
your refresh_token to the generate_token_from_refresh_token() Client method, since your refresh_token is already
stored inside the Client.
~~~python
# look above for the explanation in which cases token_to_refresh is not required
token = compredict_client.generate_token_from_refresh_token(refresh_token)
~~~
### Check token validity
If you would like the Client to automatically check token validity when it is instantiated, **validate** needs to
be enabled.
~~~python
import compredict
compredict_client = compredict.client.api.get_instance(token=your_new_generated_token_here, validate=True)
~~~
**User can manually verify token validity in two ways:**
**1. By calling utility function:**
~~~python
from compredict.utils.authentications import verify_token

response = verify_token(url="https://core.compredict.ai/api/v2", token=token_to_verify)

# check validity
if response.status_code == 200:
    print(True)
else:
    print(False)
~~~
**2. By calling the Client method:**
If you created the Client with your username and password, you don't need to pass
your token to the verify_token() Client method, since your token is already stored inside the Client.
~~~python
# look above for the explanation in which cases token_to_verify is not required
validity = compredict_client.verify_token(token_to_verify)
print(validity)
~~~
If the token is valid, the response will be empty with status code 200.

**We highly advise that the SDK credentials are stored as environment variables.**
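For example, a minimal sketch of loading the credentials from environment variables before instantiating the Client (the variable names `COMPREDICT_AI_CORE_USERNAME` and `COMPREDICT_AI_CORE_PASSWORD` are illustrative, not required by the SDK):

~~~python
import os

import compredict

# Assumes the credentials were exported beforehand, e.g.:
#   export COMPREDICT_AI_CORE_USERNAME=...
#   export COMPREDICT_AI_CORE_PASSWORD=...
# The variable names are placeholders chosen for this example.
username = os.environ["COMPREDICT_AI_CORE_USERNAME"]
password = os.environ["COMPREDICT_AI_CORE_PASSWORD"]

compredict_client = compredict.client.api.get_instance(username=username, password=password)
~~~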
Accessing Algorithms (GET)
--------------------------
To list all the algorithms in a collection:
~~~python
algorithms = compredict_client.get_algorithms()

for algorithm in algorithms:
    print(algorithm.name)
    print(algorithm.version)
~~~
To access a single algorithm:
~~~python
algorithm = compredict_client.get_algorithm('ecolife')
print(algorithm.name)
print(algorithm.description)
~~~
Algorithm RUN (POST)
--------------------
Each algorithm that a user has access to is different. It has its own:
- Input data and structure
- Output data
- Parameters data
- Evaluation set
- Result instance
- Monitoring Tools
**Features data, used for prediction, always needs to be provided as a parquet file, whereas
parameters data is always provided as a JSON file.**
**With this SDK, features can be specified as a dictionary, a list of dictionaries, a DataFrame,
or a string path pointing to a parquet file.**
The `run` function has the following signature:
~~~python
Task|Result = algorithm.run(data, parameters=parameters, evaluate=True, encrypt=False, callback_url=None,
                            callback_param=None, monitor=True)
~~~
- `features`: the data to be processed by the algorithm. It can be:
    - `dict`: will be written into a parquet file
    - `str`: path to the file to be sent (only parquet files are accepted)
    - `pandas`: a DataFrame containing the data, which will also be written into a parquet file
- `parameters`: parameters used to configure the algorithm (specific to each algorithm). Optional; a sketch follows after this list. It can be:
    - `dict`: will be converted into a JSON file
    - `str`: path to a JSON file with parameters data
- `evaluate`: whether to evaluate the result of the algorithm. Check `algorithm.evaluations`, *more in depth later*.
- `callback_url`: if the result is a `Task`, AI Core will send the results back to the provided URL once processed. Multiple callback URLs can be given as a list.
- `callback_param`: additional parameters to pass when results are sent to the callback URL. In case of multiple callbacks, it can be a single set of callback params for all of them, or one set of callback params per callback URL.
- `monitor`: boolean indicating whether the output results of the model should be monitored. By default it is set to True.
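A minimal sketch of passing `parameters` as a dictionary alongside the features; the keys inside the dictionary are purely illustrative, since the accepted parameters depend on the specific algorithm:

~~~python
# Illustrative only: the keys accepted under `parameters` are defined per algorithm.
parameters = {"window_size": 256, "sampling_rate_hz": 100}

X_test = dict(
    feature_1=[1, 2, 3, 4],
    feature_2=[2, 3, 4, 5]
)

algorithm = compredict_client.get_algorithm('algorithm_id')
result = algorithm.run(X_test, parameters=parameters, evaluate=False)
~~~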
Depending on the algorithm's computation requirement (exposed as `algorithm.result`), the returned value can be:
- **compredict.resources.Task**: holds a job id of the task that the user can query later to get the results.
- **compredict.resources.Result**: contains the result of the algorithm + evaluation + monitors
**Create a list of URLs for callbacks:**
~~~python
callback_url = ["https://me.myportal.cloudapp.azure.com", "http://me.mydata.s3.amazonaws.com/my_bucket",
                "http://my_website/my_data.com"]
~~~
After creating the list, use it when running the algorithm:
~~~python
results = algorithm.run(data, callback_url=callback_url, evaluate=False)
~~~
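The `callback_param` argument can be paired with these callbacks. A minimal sketch, assuming one set of parameters per callback URL (the dictionary keys are illustrative placeholders):

~~~python
# One callback_param entry per callback URL (or a single dict shared by all of them).
callback_param = [
    {"job_name": "portal-run"},
    {"job_name": "s3-archive"},
    {"job_name": "website-run"},
]

results = algorithm.run(data, callback_url=callback_url, callback_param=callback_param, evaluate=False)
~~~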
**Example of specifying features data in a dictionary and sending it for prediction:**
~~~python
X_test = dict(
    feature_1=[1, 2, 3, 4],
    feature_2=[2, 3, 4, 5]
)
algorithm = compredict_client.get_algorithm('algorithm_id')
result = algorithm.run(X_test)
~~~
You can identify whether the algorithm dispatches the processing task to a queue
or returns the results instantly by checking:
~~~python
>>> print(algorithm.results)
"The request will be sent to queue for processing"
~~~
or dynamically:
~~~python
from time import sleep

import compredict

results = algorithm.run(X_test, parameters=parameters, evaluate=True)

if isinstance(results, compredict.resources.Task):
    print(results.job_id)

    while results.status != results.STATUS_FINISHED:
        print("task is not done yet.. waiting...")
        sleep(15)
        results.update()

    if results.success is True:
        print(results.predictions)
    else:
        print(results.error)

else:  # not a Task, it is a Result instance
    print(results.predictions)
~~~
**Example of specifying features data in DataFrame and sending it for prediction:**
~~~python
import pandas as pd
X_test = pd.DataFrame(dict(
    feature_1=[1, 2, 3, 4],
    feature_2=[2, 3, 4, 5]
))
algorithm = compredict_client.get_algorithm('algorithm_id')
result = algorithm.run(X_test)
~~~
**Example of specifying features data directly as a parquet file and sending it for prediction:**
~~~python
algorithm = compredict_client.get_algorithm('algorithm_id')
result = algorithm.run("/path/to/file.parquet")
~~~
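**Example of specifying features data as a list of dictionaries and sending it for prediction** (a sketch assuming one dictionary per record; the exact layout expected by a given algorithm may differ):

~~~python
# One record per dictionary; the feature names are illustrative.
X_test = [
    {"feature_1": 1, "feature_2": 2},
    {"feature_1": 2, "feature_2": 3},
    {"feature_1": 3, "feature_2": 4},
]

algorithm = compredict_client.get_algorithm('algorithm_id')
result = algorithm.run(X_test)
~~~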
If you set up ``callback_url`` then the results will be POSTed automatically to you once the
calculation is finished.
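A minimal sketch of a callback receiver, assuming Flask is available and that AI Core POSTs a JSON payload to the callback URL; the route path and the handling of the payload are illustrative, not part of the SDK:

~~~python
from flask import Flask, request

app = Flask(__name__)

@app.route("/ai-core-callback", methods=["POST"])
def ai_core_callback():
    # The payload schema is not documented here; inspect whatever AI Core sends.
    payload = request.get_json(silent=True) or {}
    print(payload)
    return "", 204

if __name__ == "__main__":
    app.run(port=8000)
~~~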
Each algorithm has its own evaluation methods that are used to evaluate the performance of the algorithm given the data. You can identify the evaluation metric
by calling:
~~~python
algorithm.evaluations # associative array.
~~~
When running the algorithm with `evaluate=True`, the algorithm will be evaluated with its default parameters.
In order to tweak these parameters, specify an associative array with the modified parameters. For example:
~~~python
evaluate = {"rainflow-counting": {"hysteresis": 0.2, "N":100000}} # evaluate name and its params
result = algorithm.run(X_test, evaluate=evaluate)
~~~
Handling Errors And Timeouts
----------------------------
For whatever reason, the HTTP requests at the heart of the API may not always
succeed.
Every method will return `False` if an error occurred, and you should always
check for this before acting on the results of the method call.
In some cases, you may also need to check the reason why the request failed.
This would most often be when you tried to save some data that did not validate
correctly.
~~~python
algorithms = compredict_client.get_algorithms()
if not algorithms:
    error = compredict_client.last_error
~~~
Returning `False` on errors and using error objects to provide context is good
for writing quick scripts, but it is not the most robust solution for larger and
longer-term applications.

An alternative approach to error handling is to configure the API client to
throw exceptions when errors occur. Bear in mind that if you do this, you will
need to catch and handle the exceptions in your own code. The exception-throwing
behavior of the client is controlled using the `fail_on_error` method:
~~~python
compredict_client.fail_on_error()

try:
    algorithms = compredict_client.get_algorithms()
except compredict.exceptions.CompredictError as e:
    ...
~~~
The exceptions thrown are subclasses of Error, representing
client errors and server errors. The API documentation for response codes
contains a list of all the possible error conditions the client may encounter.
Verifying SSL certificates
--------------------------
By default, the client will attempt to verify the SSL certificate used by the
COMPREDICT AI Core. In cases where this is undesirable, or where an unsigned
certificate is being used, you can turn off this behavior using the `verify_peer`
switch, which will disable certificate checking on all subsequent requests:
~~~python
compredict_client.verify_peer(False)
~~~