ablt-python-api

Name: ablt-python-api
Version: 0.0.6
Summary: aBLT Python API
Upload time: 2024-08-21 07:49:47
Requires Python: >=3.9
License: MIT License (Copyright (c) 2023 aBLT)
Keywords: ablt, api, chatbot, gpt
Requirements: aiohttp, asyncio, requests, pydantic
# aBLT Python API wrapper

[![PyPI version](https://badge.fury.io/py/ablt-python-api.svg)](https://badge.fury.io/py/ablt-python-api) [![Linters](https://github.com/ablt-ai/ablt_python_api/actions/workflows/master_linters.yml/badge.svg?branch=master)](https://github.com/ablt-ai/ablt_python_api/actions/workflows/master_linters.yml) [![Tests](https://github.com/ablt-ai/ablt_python_api/actions/workflows/master_tests.yml/badge.svg?branch=master)](https://github.com/ablt-ai/ablt_python_api/actions/workflows/master_tests.yml) ![PyPI - License](https://img.shields.io/pypi/l/ablt-python-api) ![PyPI - Python Version](https://img.shields.io/pypi/pyversions/ablt-python-api) [![Downloads](https://static.pepy.tech/badge/ablt-python-api)](https://pepy.tech/project/ablt-python-api) [![Downloads](https://static.pepy.tech/badge/ablt-python-api/month)](https://pepy.tech/project/ablt-python-api)

# About

This is the aBLT API wrapper for Python. It is used to communicate with the aBLT API, and you may use it to build your own aBLT client in asynchronous or synchronous Python. You can find more information about the aBLT API [here](https://docs.ablt.ai/api_docs/overview).

First, you need to obtain an aBLT API token. You can do this [here](https://ablt.ai/) by getting in touch with support about paid plans. Next, you may use any of the existing ready-made bots or templates, or create your own bot via the UI.

# Installation

The API wrapper is available on PyPI. You can install it with pip (Python 3.9 or newer is required):

```bash
pip install ablt-python-api
```

# Usage

Then you can import it and use it:

```python
# For the asynchronous API wrapper, use ABLTApi_async
from ablt_python_api import ABLTApi  # this is the synchronous API wrapper


# Init with an explicit token
api = ABLTApi(bearer_token=YOUR_ABLT_API_TOKEN)

# Init with the token taken from the ABLT_BEARER_TOKEN environment variable
api = ABLTApi()
```
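
The asynchronous wrapper is initialized the same way. A minimal sketch, assuming `ABLTApi_async` accepts the same arguments as `ABLTApi`:

```python
import os

from ablt_python_api import ABLTApi_async


# Assumption: ABLTApi_async takes the same bearer_token argument as ABLTApi
api = ABLTApi_async(bearer_token=os.environ["ABLT_BEARER_TOKEN"])
```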

If you want to use your own logger, you can initialize the API wrapper with it:

```python
# logger is a pre-configured instance of logging.Logger
api = ABLTApi(logger=your_logger)
```
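
For example, a minimal logger setup with the standard `logging` module might look like this (the configuration itself is up to you):

```python
import logging

from ablt_python_api import ABLTApi


# Configure a basic root logger and create a named logger for the wrapper
logging.basicConfig(level=logging.INFO)
your_logger = logging.getLogger("ablt")

api = ABLTApi(logger=your_logger)
```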

# API methods

## Bots

### List of bots

You may get the list of bots:

```python
# Will return a list of bots, see the schema docs: https://docs.ablt.ai/api_docs/bots/get#Response-body
bots = api.get_bots()
```

Or you may validate the response against the actual schema:

```python
from ablt_python_api.schemas import BotsSchema


# Use await with the asynchronous API wrapper
bots = [BotsSchema.model_validate(bot_dict) for bot_dict in api.get_bots()]
```

### Single bot

You may get a bot by UID:

```python
bot = api.find_bot_by_uid(bot_uid='F0b98A09-c5ed-1197-90F3-BBfF1DBb28ee')
```

Alternatively, you may get a bot by slug:

```python
bot = api.find_bot_by_slug(bot_slug='omni')
```

Or by name (not recommended, because names may not be unique):

```python
bot = api.find_bot_by_name(bot_name='Miles Hiker')
```

If no bot is found, `None` is returned.
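
So it is worth guarding against a missing bot before chatting, for example:

```python
bot = api.find_bot_by_slug(bot_slug='omni')
if bot is None:
    raise ValueError("Bot 'omni' not found")
```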

### Chat

To chat with a bot, use the `chat` method:

```python
# Specify the bot UID; returns a generator with bot responses
response = api.chat(bot_uid=BOT_UID, prompt='Hello, bot!')
# Or specify the bot slug; also returns a generator with bot responses
response = api.chat(bot_slug=BOT_SLUG, prompt='Hello, bot!')
# To get the response as a string, iterate over the generator or extract just the first response
str_response = next(response)
# Or use __anext__ in async mode
```

Most likely, you will use a `messages` list instead of `prompt` to preserve context. In that case, call the method as follows:

```python
messages = [
    {"content": "I like to eat pizza", "role": "user"},
    {"content": "Hello, I like pizza too!", "role": "assistant"},
    {"content": "What do I like to eat?", "role": "user"},
]
# Will return a generator with the bot response
response = api.chat(bot_uid=BOT_UID, messages=messages)
```

Make sure your prompt is the last message in the `messages` list.
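
For example, to keep context across turns, you can append each new user prompt and the bot's reply to the same list. A minimal sketch, assuming `chat` still returns a generator when `stream=False`:

```python
def ask(api, bot_uid, messages, text):
    """Hypothetical helper: send the whole history and record the reply."""
    messages.append({"content": text, "role": "user"})
    reply = next(api.chat(bot_uid=bot_uid, messages=messages, stream=False))
    messages.append({"content": reply, "role": "assistant"})
    return reply


history = []
ask(api, BOT_UID, history, "I like to eat pizza")
print(ask(api, BOT_UID, history, "What do I like to eat?"))
```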

### More options

Additionally, you may extend or override the system context by passing a `system` instruction to the bot:

```python
# Use with caution: it may conflict with or replace settings stored in the UI
# and may lead to unexpected results
messages = [
    {"content": "You are a bibliophile bot, you know everything about literature", "role": "system"},
    {"content": "Who is author of 'The Sirens of Titan'?", "role": "user"},
]
```
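
Then pass the list to `chat` as usual:

```python
response = api.chat(bot_uid=BOT_UID, messages=messages)
```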

You may also call the `chat` method with parameters that override the system settings:

```python
response = api.chat(bot_uid=BOT_UID,
                    prompt='Hello, bot!',
                    language='Arabic',  # may be: "Arabic", "French", "English", "Spanish", "Russian"
                    max_words=100,  # any integer; values below 100 or above 2000 are not recommended
                    user_id=42,  # unique user ID, used to split usage statistics per user
                    use_search=False)  # if True, the bot will try to find the answer on the internet
```

_Notes_:
In general, you may try unusual values for:
* `language` - e.g. "Serbian", but it is not guaranteed to work as expected.
* `max_words` - values below 100 words save tokens but may lead to cut-offs; values above 2000 words may cause timeouts or errors on regular (non-32k/128k) models.
* `user_id` - any integer works, but it is recommended to use your own unique user IDs, because they are used to split usage statistics per user.
* `use_search` - a special feature for premium plans; you may manage it from the API rather than the UI, but it is highly discouraged with small `max_words` values, so while using search, set `max_words` to at least 100.

### Streaming mode

By default, bots work in streaming mode (as in the UI), so the `chat` method streams responses. You may switch streaming off with:

```python
response = api.chat(bot_uid=BOT_UID, prompt='Hello, bot!', stream=False)
```

If you prefer streaming mode, read the response from the generator:

```python
import sys

from ablt_python_api import DoneException


try:
    for response in api.chat(bot_uid=BOT_UID, prompt='Hello, bot!', stream=True):
        # Write directly to stdout so the output is printed on the fly
        sys.stdout.write(response)
        # Flush after each chunk to get a typewriter effect
        sys.stdout.flush()
except DoneException:
    pass  # DoneException is raised when the bot finishes the conversation
```
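
With the asynchronous wrapper, the same pattern uses `async for`. A sketch, assuming `ABLTApi_async.chat` yields chunks as an async generator:

```python
import asyncio
import sys

from ablt_python_api import ABLTApi_async, DoneException


async def stream_chat():
    api = ABLTApi_async()  # assumes the token is read from ABLT_BEARER_TOKEN
    try:
        async for chunk in api.chat(bot_uid=BOT_UID, prompt='Hello, bot!', stream=True):
            sys.stdout.write(chunk)
            sys.stdout.flush()
    except DoneException:
        pass  # raised when the bot finishes the conversation


asyncio.run(stream_chat())
```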

## Statistics

Statistics provide word and token usage data for a period of time.

### Full statistics

```python
# Will return statistics for the current date for the default user = -1
statistics_for_today = api.get_usage_statistics()
# Will return statistics for a date range
statistics_for_range = api.get_usage_statistics(start_date='2022-02-24',
                                                end_date='2023-11-17')
# Will return statistics for the user with ID 42
statistics_for_user = api.get_usage_statistics(user_id=42)
```

You may use the schema to validate statistics:

```python
from ablt_python_api.schemas import StatisticsSchema


statistics = StatisticsSchema.model_validate(api.get_usage_statistics())
```
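
Once validated, the result is a regular pydantic model, so standard pydantic v2 calls apply if you need a plain dict again:

```python
statistics = StatisticsSchema.model_validate(api.get_usage_statistics())
print(statistics.model_dump())  # convert the validated model back to a dict
```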

### Statistics for a day

```python
# Will return statistics for the current date for user = -1
statistics_for_today = api.get_usage_statistics_for_day()
# Will return statistics for the specified date
statistics_for_specific_day = api.get_usage_statistics_for_day(date='2022-02-24')
# Will return statistics for the user with ID 42
statistics_for_user = api.get_usage_statistics_for_day(user_id=42)
```

With schema:

```python
from ablt_python_api.schemas import StatisticItemSchema


statistics = StatisticItemSchema.model_validate(api.get_usage_statistics_for_day())
```

### Total statistics

```python
# Will return statistics for the current date for user = -1
statistics_for_today = api.get_total_usage_statistics()
# Will return statistics for the specified date
statistics_for_specific_day = api.get_total_usage_statistics(date='2022-02-24')
# Will return statistics for the user with ID 42
statistics_for_user = api.get_total_usage_statistics(user_id=42)
```

With schema:

```python
from ablt_python_api.schemas import StatisticTotalSchema


statistics = StatisticTotalSchema.model_validate(api.get_total_usage_statistics())
```

# Troubleshooting

You can always [contact support](mailto:contact@aBLT.ai) or reach us in our [Discord channel](https://discord.com/channels/1097998898506760392/1104055996302766120).

## SSL errors

In some cases, you may experience SSL errors; you can then disable certificate verification:

```python
import ssl

from ablt_python_api import ABLTApi, ABLTApi_async


# Note: disabling certificate verification is insecure; use it only as a last resort
sslcontext = ssl.create_default_context()
sslcontext.check_hostname = False
sslcontext.verify_mode = ssl.CERT_NONE

api = ABLTApi_async(ssl_context=sslcontext)  # for async
api = ABLTApi(ssl_verify=False)  # for sync
```

## Rate limit errors

Never flood the API with requests, or you may hit rate-limit errors. In that case, wait for some time and retry your request. In particular, never send simultaneous requests for more users than your plan allows.
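
A simple wait-and-retry loop is usually enough. A sketch with a hypothetical helper; the README does not name a specific rate-limit exception, so it catches broadly:

```python
import time


def chat_with_retry(api, bot_uid, prompt, attempts=5, delay=5.0):
    """Retry the request with a linearly growing pause between attempts."""
    for attempt in range(1, attempts + 1):
        try:
            return next(api.chat(bot_uid=bot_uid, prompt=prompt, stream=False))
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(delay * attempt)
```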

## Timeout errors

In some cases, you may experience timeout errors; try decreasing the `max_words` value or using `stream=True` to get the response on the fly.

## Other errors

In very rare cases, you may experience other errors, such as the API itself restarting. To check API health, use the `health_check` method:

```python
# Will return True if the API is healthy, otherwise False
api.health_check()
```
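
For example, a simple polling loop can wait for the API to become healthy again before retrying requests:

```python
import time

while not api.health_check():
    time.sleep(10)  # poll every 10 seconds until the API reports healthy
```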

# Best practices

Always check and follow the [Guides](https://docs.ablt.ai/guides/overview), [References](https://docs.ablt.ai/reference/overview), and [Examples](https://docs.ablt.ai/examples) from the aBLT documentation.

            
