buelon

- Name: buelon
- Version: 1.0.68
- Home page: https://github.com/daniel-olson-code/buelon
- Summary: A scripting language for simply managing very large amounts of I/O-heavy workloads, such as API calls for your ETL or ELT, or any program needing Python and/or SQL
- Upload time: 2025-03-19 23:57:34
- Author: Daniel Olson
- Requires Python: >=3.10
- Keywords: buelon, etl, pipeline, asynchronous, data-processing, api
- Requirements: psycopg2-binary, orjson, python-dotenv, cython, asyncio-pool, psutil, unsync, redis, persist-queue, persistqueue, PyYAML, kazoo, asyncpg
# Buelon

A scripting language for simply managing very large amounts of I/O-heavy workloads, such as API calls for your ETL or ELT, or any program needing Python and/or SQL.

## Table of Contents
<!--
- [Features](#features)
-->
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Supported Languages](#supported-languages) <!-- - [Configuration](#configuration) - [Usage](#usage) -->
- [Learn by Example](#learn-by-example) <!-- - [Performance](#performance)   - [Contributing](#contributing) -->
- [Future Plans](#future-plans)
- [License](#license)

<!--
## Features
- Asynchronous execution of code across multiple servers
- Custom scripting language for defining ETL pipelines
- Support for Python, SQLite3, and PostgreSQL
- Efficient handling of APIs with long wait times
- Optimized for I/O-heavy workloads
- Scalable architecture for processing large amounts of data
-->

## Installation

`pip install buelon`. That's it!

This installs the CLI command `bue`. Verify the install by running `bue --version` or `bue -v`.

### Note:

This package uses Cython, so you may need to install `python3-dev` with
`sudo apt-get install python3-dev` [[more commands and information](https://stackoverflow.com/a/21530768/19907524)].
If you would like to use this repository without Cython, you may `git clone` it instead,
since the project is not technically dependent on the Cython scripts,
but they do provide a significant performance boost.



## Quick Start

1. Run bucket server: `bue bucket -b 0.0.0.0:61535`
2. Run hub: `bue hub -b 0.0.0.0:65432 -k localhost:61535`
3. Run n worker(s): `bue worker -b localhost:65432 -k localhost:61535`
4. Upload code: `bue upload -b localhost:65432 -f path/to/file.bue`

## Production Start

**Security:** Make sure the bucket, hub and workers run on a private network **only**
(you will need a web server or something similar on the same private network
to access this tool using `bue upload -f path/to/file.bue`).

### With Postgres (under 1,000,000 jobs at once)

1. Create a `.env` file
```properties
PIPE_WORKER_SCOPES=production-very-heavy,production-heavy,production-medium,production-small,testing-heavy,testing-medium,testing-small,default
PIPE_WORKER_SUBPROCESS_JOBS=false
N_WORKER_PROCESSES="25"

USING_POSTGRES_HUB=true
USING_POSTGRES_BUCKET="true"
POSTGRES_HOST="123.45.67.89"
POSTGRES_PORT="5432"
POSTGRES_USER="daniel"
POSTGRES_PASSWORD="Password123"
POSTGRES_DATABASE="my_db"
```

2. Run n worker(s): `bue worker -b localhost:65432 -k localhost:61535`
3. Upload code: `bue upload -b localhost:65432 -f ./example.bue`
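
The processes read these settings from the environment; `python-dotenv` is among the package requirements, so you can sanity-check that your `.env` loads with a minimal sketch like this (a hypothetical helper, not part of buelon's API):

```python
# Minimal, hypothetical sketch: confirm the .env values load, using
# python-dotenv from the requirements list. Not part of buelon's API.
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory

# These keys mirror the .env example above.
for key in ('POSTGRES_HOST', 'POSTGRES_PORT', 'POSTGRES_USER',
            'POSTGRES_DATABASE', 'PIPE_WORKER_SCOPES'):
    print(f'{key}={os.getenv(key)}')
```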

### Without Postgres (under 10,000 jobs at once)

1. Create a `.env` file
```properties
PIPE_WORKER_SCOPES=production-very-heavy,production-heavy,production-medium,production-small,testing-heavy,testing-medium,testing-small,default
PIPE_WORKER_SUBPROCESS_JOBS=false
N_WORKER_PROCESSES="15"
PIPE_WORKER_HOST="123.45.67.89"
PIPE_WORKER_PORT="65432"

PIPELINE_HOST="0.0.0.0"
PIPELINE_PORT="65432"

BUCKET_SERVER_HOST="0.0.0.0"
BUCKET_SERVER_PORT="61535"
BUCKET_CLIENT_HOST="123.45.67.89"
BUCKET_CLIENT_PORT="61535"
```
2. Run bucket server: `bue bucket`
3. Run hub: `bue hub`
4. Run n worker(s): `bue worker`
5. Upload code: `bue upload -f ./example.bue`

## Supported Languages
- Python
- SQLite3
- PostgreSQL

## Learn by Example

(see below for `example.py` contents)

```python
# IMPORTANT: tabs are 4 spaces. white_space == "    "
# [Optional] change the tab size like this
TAB = '    '

# set config values globally
!scope production-small  # job scope (see below)
!priority 0  # higher-priority jobs are run first
!timeout 20 * 60  # a job's max run time in seconds
!retries 0  # how many times a job may re-run after an error

# setting scopes is how you keep new jobs that hit errors
# from interfering with every server's job queue,
# and/or how you run heavy processes on large machines
# and small processes on small machines
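
# e.g. (a hedged illustration): a worker on a small machine might be started
# with PIPE_WORKER_SCOPES=production-small,testing-small,default in its .env,
# so it only picks up jobs whose !scope is in that list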

# define a single job called `accounts`
accounts:
    python  # <-- select the language to be run. currently only python, sqlite3 and postgres are available
    accounts  # select the function (for python) or table (for sql) name that will be used
    example.py  # either provide a file or write code directly using the "`" char (see the example below)

# or

# define multiple jobs with:
import python (
    request_report 
        as request,
    get_status 
        as status 
        !scope testing-small,
    get_report 
        as download 
        !priority 9
        !timeout 60**2 * 5 / (1 % 2) // (1 + 1 - 1),  # 5 hrs
    transform_data 
        as py_transform 
        !scope production-heavy,
    upload_to_db as upload
) example.py  # <-- file path or using "`" like sql below


manipulate_data:
    sqlite3
    some_table  # *vvvv* see below for writing code directly *vvvv*
    `
SELECT
    *,
    CASE
        WHEN sales = 0
        THEN 0.0
        ELSE spend / sales
    END AS acos
FROM some_table
`

## this one's just to show postgres as well
#manipulate_data_again:
#    postgres
#    another_table
#    `
#select
#    *,
#    case
#        when spend = 0
#        then 0.0
#        else sales / spend
#    end AS roas
#from another_table
#`

# these are pipes: they tell the server what order to run the steps in
# and transfer the returned data between steps.
# each step is run individually and could run on a different computer each time
accounts_pipe = | accounts  # a single-step pipe currently needs a `|` before or after the value
# api_pipe = request | status | download | manipulate_data | py_transform | upload
# # or
api_pipe = (
    request | status | download 
    | manipulate_data | py_transform | upload
)


# currently there are only two syntaxes for "running" pipes.
# either by itself:
# pipe()
#
# or in a loop:
# for value in pipe1():
#     pipe2(value)

# # Another Example:
# v = pipe()  # <-- single call
# pipe2(v)

for account in accounts_pipe():
    api_pipe(account)
```
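
Conceptually, a pipe is ordered function application, with the final loop fanning each account out through the steps. A rough Python analogy of the data flow above (an illustration only, not buelon code; it ignores scheduling, scopes, and the SQL step running on a worker):

```python
# Rough analogy of the pipes above. In buelon, each step may run on a
# different worker, and pending/reset Results alter scheduling instead
# of passing data onward.
def run_api_pipe(account):
    data = request_report(account)
    data = get_status(data)   # in buelon, a pending Result re-queues the step
    data = get_report(data)   # in buelon, a reset Result signals a restart
    # the manipulate_data (sqlite3) job would run here against the table data
    transform_data(data)      # mutates rows in place
    upload_to_db(data)

for account in accounts():
    run_api_pipe(account)
```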

#### example.py
```python
import time
import random
import uuid
import logging
from typing import List, Dict, Union

from buelon.core.step import Result, StepStatus

# Set up logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)


def accounts(*args) -> List[Dict[str, Union[int, str]]]:
    """Returns a list of sample account dictionaries.

    Returns:
        List[Dict[str, Union[int, str]]]: A list of dictionaries containing account information.
    """
    account_list = [
        {'id': 0, 'name': 'Account 1'},
        {'id': 2, 'name': 'Account 2'},
        {'id': 3, 'name': 'Account 4'},
    ]
    logger.info(f"Retrieved {len(account_list)} accounts")
    return account_list


def request_report(config: Dict[str, Union[int, str]]) -> Dict[str, Union[Dict, uuid.UUID, float]]:
    """Simulates a report request for a given account.

    Args:
        config (Dict[str, Union[int, str]]): A dictionary containing account information.

    Returns:
        Dict[str, Union[Dict, uuid.UUID, float]]: A dictionary with account data and request details.
    """
    account_id = config['id']
    
    request = {
        'report_id': uuid.uuid4(),
        'time': time.time(),
        'account_id': account_id
    }
    
    logger.info(f"Requested report for account ID: {account_id}, Report ID: {request['report_id']}")
    return {
        'account': config,
        'request': request
    }


def get_status(config: Dict[str, Union[Dict, uuid.UUID, float]]) -> Union[Dict, Result]:
    """Checks the status of a report request.

    Args:
        config (Dict[str, Union[Dict, uuid.UUID, float]]): A dictionary containing request information.

    Returns:
        Union[Dict, Result]: Either the input config if successful, or a Result object if pending.
    """
    requested_time = config['request']['time']
    account_id = config['account']['id']
    
    status = 'success' if requested_time + random.randint(10, 15) < time.time() else 'pending'
    
    if status == 'pending':
        logger.info(f"Report status for account ID {account_id} is pending")
        return Result(status=StepStatus.pending)
    
    logger.info(f"Report status for account ID {account_id} is success")
    return config
    

def get_report(config: Dict[str, Union[Dict, uuid.UUID, float]]) -> Union[Dict, Result]:
    """Retrieves a report or simulates an error.

    Args:
        config (Dict[str, Union[Dict, uuid.UUID, float]]): A dictionary containing request configuration.

    Returns:
        Union[Dict, Result]: Either a dictionary with report data or a Result object for reset.

    Raises:
        ValueError: If an unexpected error occurs.
    """
    account_id = config['account']['id']
    
    if random.randint(0, 10) == 0:
        report_data = {'status': 'error', 'msg': 'timeout error'}
    else:
        report_data = [
            {'sales': i * 10, 'spend': i % 10, 'clicks': i * 13}
            for i in range(random.randint(25, 100))
        ]
    
    if not isinstance(report_data, list):
        if isinstance(report_data, dict):
            if (report_data.get('status') == 'error' 
                and report_data.get('msg') == 'timeout error'):
                logger.warning(f"Timeout error for account ID {account_id}. Resetting.")
                return Result(status=StepStatus.reset)
        error_msg = f'Unexpected error: {report_data}'
        logger.error(f"Error getting report for account ID {account_id}: {error_msg}")
        raise ValueError(error_msg)
    
    logger.info(f"Successfully retrieved report for account ID {account_id} with {len(report_data)} rows")
    return {
        'config': config,
        'table_data': report_data
    }


def transform_data(data: Dict[str, Union[Dict, List[Dict]]]) -> None:
    """Transforms the report data by adding account information to each row.

    Args:
        data (Dict[str, Union[Dict, List[Dict]]]): A dictionary containing config and table data.
    """
    config = data['config']
    table_data = data['table_data']
    account_name = config['account']['name']
    
    for row in table_data:
        row['account'] = account_name
    
    logger.info(f"Transformed {len(table_data)} rows of data for account: {account_name}")

    
def upload_to_db(data: Dict[str, Union[Dict, List[Dict]]]) -> None:
    """Handles table upload to database.

    Args:
        data (Dict[str, Union[Dict, List[Dict]]]): A dictionary containing table data to be uploaded.
    """    
    table_data = data['table_data']
    account_name = data['config']['account']['name']
    # Implementation for database upload
    logger.info(f"Uploaded {len(table_data)} rows to the database for account: {account_name}")
```
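
To smoke-test these functions outside buelon before uploading, a hypothetical local run (assuming `example.py` is importable from the working directory) could look like:

```python
# Hypothetical local smoke test for example.py, run without buelon.
import time

from example import (accounts, request_report, get_status, get_report,
                     transform_data, upload_to_db)
from buelon.core.step import Result

for account in accounts():
    data = request_report(account)
    time.sleep(16)  # get_status simulates a 10-15 s report turnaround
    data = get_status(data)
    if isinstance(data, Result):  # still pending; a buelon worker would retry
        continue
    report = get_report(data)
    if isinstance(report, Result):  # simulated timeout; a worker would reset
        continue
    transform_data(report)
    upload_to_db(report)
```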

## Known Defects

Error handling and logging are currently lacking.


## Future Plans

If this project sees some love, or I just find more free time, I'd like to support more languages like `node` or `deno`, and even compiled languages such as `rust`, `go` and `c++`, allowing teams that write in different languages to work on the same program.

Other plans:

- A web app for logging, execution and worker management
- A scheduler process to allow scheduled pipelines

<!--
## Contributing
[Contributing guidelines]
-->

## License
* MIT License

            
