sqlalchemy-celery-beat

- Name: sqlalchemy-celery-beat
- Version: 0.7.1
- Home page: https://github.com/farahats9/sqlalchemy-celery-beat
- Summary: A Scheduler Based SQLalchemy For Celery
- Upload time: 2023-11-28 23:00:53
- Author: Mohamed Farahat
- License: MIT
- Keywords: celery, scheduler, sqlalchemy, beat
# sqlalchemy-celery-beat



A scheduler based on SQLAlchemy for Celery.



> NOTE: This project was originally developed by [AngelLiang](https://github.com/AngelLiang/celery-sqlalchemy-scheduler) to use SQLAlchemy as the database scheduler for Flask or FastAPI, much like [django-celery-beat](https://github.com/celery/django-celery-beat) does for Django. I am continuing his work and maintaining a working solution.





### Prerequisites



- Python 3

- celery >= 5.0

- sqlalchemy >= 1.4



First install `celery` and `sqlalchemy`; `celery` must be >= 5.0:



```

$ pip install sqlalchemy celery

```



### Installing



Install from PyPi:



```

$ pip install sqlalchemy-celery-beat

```



Install from source by cloning this repository:



```

$ git clone git@github.com:farahats9/sqlalchemy-celery-beat.git

$ cd sqlalchemy-celery-beat

$ python setup.py install

```



## Usage



After you have installed `sqlalchemy_celery_beat`, you can get started with the following steps:



This is a quick demo; see the `examples` directory for the full code.



1. Start a celery worker:



   ```

   $ celery -A tasks worker -l info

   ```



2. Start celery beat with `DatabaseScheduler` as the scheduler:



   ```

   $ celery -A tasks beat -S sqlalchemy_celery_beat.schedulers:DatabaseScheduler -l info

   ```

    You can also use the shorthand argument `-S sqlalchemy`.
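
For reference, a minimal `tasks.py` along these lines is enough for the two commands above. This is only a sketch: the broker URL and the `hello` task are illustrative, and `beat_dburi` is the setting described under Database Configuration below.

```python
# tasks.py -- minimal sketch; broker URL and task are illustrative
from celery import Celery

celery = Celery('tasks', broker='redis://localhost:6379/0')

celery.conf.update(
    {
        'beat_dburi': 'sqlite:///schedule.db',  # where the scheduler tables live
    }
)


@celery.task
def hello():
    print('Hello from a periodic task!')
```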

## Description



After celery beat is started, it creates a SQLite database (`schedule.db`) in the current folder by default. You can use a tool such as SQLiteStudio to inspect it.



Sample from the `PeriodicTask` model's table



![sqlite](screenshot/sqlite.png)



When you want to change the schedule, you can update the data in `schedule.db` directly. However, `sqlalchemy_celery_beat` does not pick up such changes immediately: you also have to set the `last_update` field of the first row in the `celery_periodic_task_changed` table to the current datetime. Celery beat will then reload the schedule at its next wake-up.
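
If you edit the database by hand, you can also trigger that reload programmatically. Below is a minimal sketch with SQLAlchemy, using the table and column names described above (the SQLite URI is just the default one):

```python
from datetime import datetime, timezone

from sqlalchemy import create_engine, text

# Bump last_update so celery beat reloads the schedule at its next wake-up.
engine = create_engine('sqlite:///schedule.db')
with engine.begin() as conn:
    conn.execute(
        text("UPDATE celery_periodic_task_changed SET last_update = :now"),
        {"now": datetime.now(timezone.utc)},
    )
```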



### Database Configuration



You can configure the SQLAlchemy database URI when you configure Celery, for example:



```python

from celery import Celery



celery = Celery('tasks')



beat_dburi = 'sqlite:///schedule.db'



celery.conf.update(

    {

        'beat_dburi': beat_dburi,

        'beat_schema': None  # you can put the scheduler tables under a different schema (tested for PostgreSQL, not available in SQLite)

    }

)

```



You can also use MySQL or PostgreSQL:



```python

# MySQL: `pip install mysql-connector`

beat_dburi = 'mysql+mysqlconnector://root:root@127.0.0.1:3306/celery-schedule'



# PostgreSQL: `pip install psycopg2`

beat_dburi = 'postgresql+psycopg2://postgres:postgres@127.0.0.1:5432/celery-schedule'

```
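
If you prefer not to pass `-S` on the command line every time, Celery's standard `beat_scheduler` setting can point at the scheduler class instead. A sketch combining it with `beat_dburi` (the PostgreSQL URI is the example from above):

```python
from celery import Celery

celery = Celery('tasks')

celery.conf.update(
    {
        'beat_dburi': 'postgresql+psycopg2://postgres:postgres@127.0.0.1:5432/celery-schedule',
        'beat_scheduler': 'sqlalchemy_celery_beat.schedulers:DatabaseScheduler',
    }
)
```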



## Example Code 1



View `examples/base/tasks.py` for details.





Run the worker in console 1:



    $ cd examples/base



    # Celery < 5.0

    $ celery worker -A tasks:celery -l info



    # Celery >= 5.0

    $ celery -A tasks:celery worker -l info



Run beat in console 2:



    $ cd examples/base



    # Celery < 5.0

    $ celery beat -A tasks:celery -S tasks:DatabaseScheduler -l info



    # Celery >= 5.0

    $ celery -A tasks:celery beat -S tasks:DatabaseScheduler -l info



## Example Code 2



### Example creating interval-based periodic task



To create a periodic task executing at an interval you must first

create the interval object:



```python

>>> from sqlalchemy_celery_beat.models import PeriodicTask, IntervalSchedule

>>> from sqlalchemy_celery_beat.session import SessionManager

>>> from celeryconfig import beat_dburi

>>> session_manager = SessionManager()

>>> engine, Session = session_manager.create_session(beat_dburi)

>>> session = Session()



# executes every 10 seconds.

>>> schedule = session.query(IntervalSchedule).filter_by(every=10, period=IntervalSchedule.SECONDS).first()

>>> if not schedule:

...     schedule = IntervalSchedule(every=10, period=IntervalSchedule.SECONDS)

...     session.add(schedule)

...     session.commit()

```



That's all the fields you need: a period type and the frequency.



You can choose between a specific set of periods:



- `IntervalSchedule.DAYS`

- `IntervalSchedule.HOURS`

- `IntervalSchedule.MINUTES`

- `IntervalSchedule.SECONDS`

- `IntervalSchedule.MICROSECONDS`



_note_:



    If you have multiple periodic tasks executing every 10 seconds,

    then they should all point to the same schedule object.



Now that we have defined the schedule object, we can create the periodic task

entry:



```python

>>> task = PeriodicTask(

...     schedule_model=schedule,            # we created this above.

...     name='Importing contacts',          # simply describes this periodic task.

...     task='proj.tasks.import_contacts',  # name of task.

... )

>>> session.add(task)

>>> session.commit()

```



Note that this is a very basic example; you can also specify the

arguments and keyword arguments used to execute the task, the `queue` to

send it to, and an expiry time.



Here's an example specifying the arguments; note how JSON serialization

is required:



    >>> import json

    >>> from datetime import datetime, timedelta



    >>> periodic_task = PeriodicTask(

    ...     schedule_model=schedule,                  # we created this above.

    ...     name='Importing contacts',          # simply describes this periodic task.

    ...     task='proj.tasks.import_contacts',  # name of task.

    ...     args=json.dumps(['arg1', 'arg2']),

    ...     kwargs=json.dumps({

    ...        'be_careful': True,

    ...     }),

    ...     expires=datetime.utcnow() + timedelta(seconds=30)

    ... )

    >>> session.add(periodic_task)

    >>> session.commit()
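
The `queue` option mentioned above can be set in the same way. A short sketch, assuming the model exposes a `queue` column as the text implies (the queue name is illustrative):

```python
periodic_task = PeriodicTask(
    schedule_model=schedule,
    name='Importing contacts (high priority)',
    task='proj.tasks.import_contacts',
    queue='hipri',  # illustrative queue name
)
session.add(periodic_task)
session.commit()
```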



### Example creating crontab-based periodic task



A crontab schedule has the fields `minute`, `hour`, `day_of_week`,

`day_of_month` and `month_of_year`, so if you want the equivalent of a

`30 * * * *` crontab entry (run at minute 30 of every hour), you specify:



    >>> from sqlalchemy_celery_beat.models import PeriodicTask, CrontabSchedule

    >>> schedule = CrontabSchedule(

    ...     minute='30',

    ...     hour='*',

    ...     day_of_week='*',

    ...     day_of_month='*',

    ...     month_of_year='*',

    ...     timezone='UTC',

    ... )



The crontab schedule is linked to a specific timezone using the

`timezone` input parameter.



Then, to create a periodic task using this schedule, use the same

approach as for the interval-based periodic task earlier in this document;

`schedule_model` is a generic foreign-key implementation, which makes things very easy and efficient:



    >>> periodic_task = PeriodicTask(

    ...     schedule_model=schedule,

    ...     name='Importing contacts',

    ...     task='proj.tasks.import_contacts',

    ... )



What the previous code actually does is this:



    >>> periodic_task = PeriodicTask(

    ...     schedule_id=schedule.id,

    ...     discriminator=schedule.discriminator,

    ...     name='Importing contacts',

    ...     task='proj.tasks.import_contacts',

    ... )

So you can either set `discriminator` + `schedule_id` directly, or use the convenient `schedule_model` property, which populates them for you behind the scenes.
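
Whichever form you use, remember that the schedule and the task still need to be persisted, exactly as in the interval example earlier. A minimal sketch, reusing the session from above:

```python
session.add(schedule)
session.add(periodic_task)
session.commit()
```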



### Temporarily disable a periodic task



You can use the `enabled` flag to temporarily disable a periodic task:



    >>> periodic_task.enabled = False

    >>> session.add(periodic_task)

    >>> session.commit()



If you use a bulk operation to update or delete multiple tasks at once, the changes won't be noticed by the scheduler until you call `PeriodicTaskChanged.update_changed()` or `.update_from_session()`.



Example:

``` python

from sqlalchemy import update

from sqlalchemy_celery_beat.models import PeriodicTask, PeriodicTaskChanged

session = get_beat_session()  # helper (not shown) assumed to return a session bound to the beat database

stmt = update(PeriodicTask).where(PeriodicTask.name == 'task-123').values(enabled=False)

session.execute(stmt)  # a bulk UPDATE fires no ORM events: the change is in the database, but the scheduler has no idea
session.commit()

PeriodicTaskChanged.update_from_session(session)
# now the scheduler reloads the tasks and all is good

```

This is not needed when you update a specific object with `session.add(task)`, because that triggers the `after_update`, `after_delete` or `after_insert` events.
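
For example, updating a single task through the ORM fires those events, so no manual notification is needed (a sketch, reusing the session from the examples above):

```python
task = session.query(PeriodicTask).filter_by(name='task-123').first()
task.enabled = False
session.add(task)
session.commit()  # after_update fires and the scheduler picks up the change
```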



### Example running periodic tasks



The periodic tasks still need workers to execute them, so make sure

the **Celery** package is installed. (If it is not, please follow the

installation instructions here:

<https://github.com/celery/celery>)



Both the worker and beat services need to be running at the same time.



1.  Start a Celery worker service (specify your project name):



        $ celery -A [project-name] worker --loglevel=info



2.  As a separate process, start the beat service (specify the

    scheduler):



        $ celery -A [project-name] beat -l info --scheduler sqlalchemy_celery_beat.schedulers:DatabaseScheduler



## Working on adding the following features



- ✅ Add `ClockedSchedule` model

- ✅ Implement a generic foreign key

- ✅ More robust attribute validation on models

- ✅ Add Tests

- Add more examples

- Support for Async drivers like asyncpg and psycopg3 async mode

- Use Alembic migrations



Any help with the tasks above or feedback is appreciated 🙂



## Acknowledgments



- [django-celery-beat](https://github.com/celery/django-celery-beat)

- [celerybeatredis](https://github.com/liuliqiang/celerybeatredis)

- [celery](https://github.com/celery/celery)

- [SQLAlchemy](https://www.sqlalchemy.org)


            
