patio

- Name: patio
- Version: 0.1.4
- Home page: https://github.com/patio-python/patio/
- Summary: Python Asynchronous Task for AsyncIO
- Upload time: 2023-06-09 07:08:02
- Author: Dmitry Orlov
- Requires Python: >=3.8,<4.0
- License: MIT

            [![PyPI - License](https://img.shields.io/pypi/l/patio)](https://pypi.org/project/patio) [![Wheel](https://img.shields.io/pypi/wheel/patio)](https://pypi.org/project/patio) [![Mypy](http://www.mypy-lang.org/static/mypy_badge.svg)]() [![PyPI](https://img.shields.io/pypi/v/patio)](https://pypi.org/project/patio) [![PyPI](https://img.shields.io/pypi/pyversions/patio)](https://pypi.org/project/patio) [![Coverage Status](https://coveralls.io/repos/github/patio-python/patio/badge.svg?branch=master)](https://coveralls.io/github/patio-python/patio?branch=master) ![tox](https://github.com/patio-python/patio/workflows/tests/badge.svg?branch=master)

PATIO
=====

PATIO is an acronym for **P**ython **A**synchronous **T**asks for Async**IO**.

Motivation
----------

I wanted to create an easily extensible library for distributed task execution,
like [`celery`](https://docs.celeryq.dev/), but with asyncio as the main
design approach.

By design, the library should suit both small projects and really large
distributed ones. The general idea is that the user simply splits the
project's code base into functions of two roles: background tasks and triggers
of those background tasks. This also helps your project scale horizontally:
workers or callers can be made available across the network using the embedded
TCP broker, or through plugins that communicate over your existing messaging
infrastructure.

Quickstart
----------

A minimal example that executes tasks in a thread pool:

```python
import asyncio
from functools import reduce

from patio import Registry
from patio.broker import MemoryBroker
from patio.executor import ThreadPoolExecutor


rpc = Registry()


# Register the function in the registry under the name "mul"
@rpc("mul")
def multiply(*args: int) -> int:
    return reduce(lambda x, y: x * y, args)


async def main():
    # The executor runs registered functions in a thread pool;
    # the broker dispatches calls to it within this single process.
    async with ThreadPoolExecutor(rpc, max_workers=4) as executor:
        async with MemoryBroker(executor) as broker:
            print(
                await asyncio.gather(
                    *[broker.call("mul", 1, 2, 3) for _ in range(100)]
                )
            )


if __name__ == '__main__':
    asyncio.run(main())
```

The `ThreadPoolExecutor` in this example is the entity that executes the
tasks. If the tasks in your project are asynchronous, you can use
`AsyncExecutor` instead, and the code looks like this:

```python
import asyncio
from functools import reduce

from patio import Registry
from patio.broker import MemoryBroker
from patio.executor import AsyncExecutor


rpc = Registry()


@rpc("mul")
async def multiply(*args: int) -> int:
    # do something asynchronously
    await asyncio.sleep(0)
    return reduce(lambda x, y: x * y, args)


async def main():
    async with AsyncExecutor(rpc, max_workers=4) as executor:
        async with MemoryBroker(executor) as broker:
            print(
                await asyncio.gather(
                    *[broker.call("mul", 1, 2, 3) for _ in range(100)]
                )
            )


if __name__ == '__main__':
    asyncio.run(main())
```

These examples may seem involved, but don't worry: the next section covers
the general concepts, and hopefully a lot will become clearer.

The main concepts
-----------------

The main idea in developing this library was to create a maximally
modular and extensible system that can be expanded with third-party
integrations or directly in the user's code.

The basic elements from which everything is built are:

* `Registry` - A key-value-like store for the functions
* `Executor` - The component that executes functions from the registry
* `Broker` - The actor that distributes tasks in your distributed
  (or local) system.

Registry
--------

This is a container of functions for their subsequent execution.
You can register a function under a specific name or without one;
in the latter case the function is assigned a unique name derived from
its source code.

The registry does not necessarily have to match on the calling and called
sides, but for functions registered without a name it must, and in that
case you pass the function itself, not its name, when calling it.
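
For instance, here is a minimal sketch of calling an unnamed function that
way (assuming, per the statement above, that `broker.call` accepts the
registered callable in place of a name):

```python
import asyncio

from patio import Registry
from patio.broker import MemoryBroker
from patio.executor import ThreadPoolExecutor

rpc = Registry()


@rpc  # registered under an auto-generated name
def add(a: int, b: int) -> int:
    return a + b


async def main():
    async with ThreadPoolExecutor(rpc, max_workers=2) as executor:
        async with MemoryBroker(executor) as broker:
            # Pass the function object itself instead of a name, since the
            # auto-generated name is not meant to be written by hand.
            print(await broker.call(add, 1, 2))


if __name__ == "__main__":
    asyncio.run(main())
```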

An instance of the registry must be passed to the broker; the first broker,
while setting itself up, locks the registry for writing, so registering new
functions after that is impossible.

The optional `project` parameter is essentially a namespace that helps
avoid clashes between functions of different projects that share the same
name. It is recommended to specify it; the broker uses this parameter as
well, so it should have the same value throughout the same project.

You can register functions by using the registry instance as a decorator:

```python
from patio import Registry

rpc = Registry(project="example")

# Will be registered with an auto-generated name
@rpc
def mul(a, b):
    return a * b

# Registered under the explicit name "div"
@rpc('div')
def div(a, b):
    return a / b
```

Alternatively, register functions with the `register` method:

```python
from patio import Registry

rpc = Registry(project="example")

def pow(a, b):
    return a ** b

def sub(a, b):
    return a - b

# Register with auto generated name
rpc.register(pow)

rpc.register(sub, "sub")
```

Finally, you can register functions explicitly, as if the registry were
just a dictionary:

```python
from patio import Registry

rpc = Registry(project="example")

def mul(a, b):
    return a * b

rpc['mul'] = mul
```

Executor
--------

An `Executor` is an entity that executes local functions from the registry.
The following executors are implemented in the package:

* `AsyncExecutor` - Implements a pool of asynchronous tasks
* `ThreadPoolExecutor` - Implements a pool of threads
* `ProcessPoolExecutor` - Implements a pool of processes
* `NullExecutor` - Implements nothing and exists only to explicitly forbid
  executing anything.

Its role is to execute jobs reliably without taking on too much at once,
so as not to cause a denial of service or excessive memory consumption.

The executor instance is passed to the broker, which usually applies it to
the whole registry. Therefore, consider what kinds of functions the registry
will contain when choosing an executor.
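
For example, a CPU-bound registry could be paired with `ProcessPoolExecutor`
instead of the thread pool from the Quickstart. This is a rough sketch,
assuming it accepts the same arguments as `ThreadPoolExecutor`:

```python
import asyncio

from patio import Registry
from patio.broker import MemoryBroker
from patio.executor import ProcessPoolExecutor

rpc = Registry()


def fib(n: int) -> int:
    # CPU-bound work benefits from running in separate processes
    return n if n < 2 else fib(n - 1) + fib(n - 2)


async def main():
    rpc.register(fib, "fib")

    async with ProcessPoolExecutor(rpc, max_workers=4) as executor:
        async with MemoryBroker(executor) as broker:
            print(
                await asyncio.gather(
                    *[broker.call("fib", 25) for _ in range(4)]
                )
            )


if __name__ == "__main__":
    asyncio.run(main())
```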

Broker
------

The basic approach to distributing tasks is to shift the responsibility
for distribution onto the user's implementation. This way, task
distribution can be implemented through third-party brokers, databases,
or something else.

The following brokers are implemented in this package:

* `MemoryBroker` - Distributes tasks within a single process. A very
   simple implementation for when you don't yet know how your application
   will develop and want to defer the choice of broker, while laying the
   foundation for switching to another one later.
* `TCPBroker` - A simple broker implementation built directly on TCP.
   Both server and client mode are supported for both the task executor
   and the task provider.

### `MemoryBroker`

It's a good start if you don't need to assign tasks in a distributed
fashion right now.

In fact, it's a simple way to run tasks in the executor from other
places in your project.

### `TCPBroker`

It allows you to make your tasks distributed without resorting to external
message brokers.

The basic idea of the TCP broker implementation is that, in terms of
performing tasks, there is no difference between server and client: the
roles are just a way to establish a connection. Both the server and the
client can be the one who performs tasks or the one who submits them, and
a mixed mode is also possible.

In other words, deciding who will be the server and who will be the
client is just a matter of how the parts of your distributed system
connect and find each other.
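
For instance, here is a rough sketch of a mixed-mode peer: a server that
serves a local function and, at the same time, submits tasks to whoever
connects. The `remote_job` name is made up for illustration; it would have
to be registered on a connected client:

```python
import asyncio

from patio import Registry
from patio.broker.tcp import TCPServerBroker
from patio.executor import ThreadPoolExecutor

rpc = Registry(project="test", auto_naming=False)


def ping() -> str:
    # Served locally for any connected peer that calls "ping"
    return "pong"


async def main():
    rpc.register(ping, "ping")

    async with ThreadPoolExecutor(rpc) as executor:
        async with TCPServerBroker(executor) as broker:
            await broker.listen(address="127.0.0.1")

            # Periodically submit work to a task registered by a
            # connected client ("remote_job" is a hypothetical name).
            while True:
                print(await broker.call("remote_job"))
                await asyncio.sleep(1)


if __name__ == "__main__":
    asyncio.run(main())
```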

Here are the ways of organizing communication between the
server and the clients.

#### Server centric scheme example

![server centric](https://raw.githubusercontent.com/patio-python/patio/feature/tcp-broker/images/server-centric.svg "Server Centric")

This diagram describes the simplest case: one server and one client
exchanging messages via TCP.

#### One client multiple servers example

![multiple servers](https://raw.githubusercontent.com/patio-python/patio/feature/tcp-broker/images/multiple-servers.svg "One client multiple servers")

This is an example of how a client establishes connections to a set of servers.

#### Full mesh example

![full mesh](https://raw.githubusercontent.com/patio-python/patio/feature/tcp-broker/images/full-mesh.svg "Full mesh")

Full mesh scheme: all clients are connected to all servers.

#### Authorization

Authorization takes place when the connection is established; for this,
the `key=` parameter (`b''` by default) must contain the same key on both
the client and the server.
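
A minimal sketch of that shared-key setup, assuming `key=` is passed to the
broker constructors in the same way as `ssl_context=` in the examples below:

```python
import asyncio

from patio import Registry
from patio.broker.tcp import TCPServerBroker
from patio.executor import ThreadPoolExecutor

rpc = Registry(project="test", auto_naming=False)

# Both sides must use the same key: a client constructed as
# TCPClientBroker(executor, key=SHARED_KEY) would be accepted,
# while any other key is rejected at connection time.
SHARED_KEY = b"change-me"


async def main():
    async with ThreadPoolExecutor(rpc) as executor:
        async with TCPServerBroker(executor, key=SHARED_KEY) as broker:
            await broker.listen(address="127.0.0.1")
            await broker.join()


if __name__ == "__main__":
    asyncio.run(main())
```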

**It is important to understand that this is not 100% protection against
attacks like MITM etc.**

This approach should only be used if the client and server are on a trusted
network. To secure traffic that traverses the Internet, pass the
`ssl_context=` parameter to both the server and the client
(see the examples below).

#### Examples

The examples below will hopefully help you figure this out.

##### Server executing tasks

```python
from functools import reduce

import asyncio

from patio import Registry
from patio.broker.tcp import TCPServerBroker
from patio.executor import ThreadPoolExecutor

rpc = Registry(project="test", auto_naming=False)


def mul(*args):
    return reduce(lambda x, y: x * y, args)


async def main():
    rpc.register(mul, "mul")

    async with ThreadPoolExecutor(rpc) as executor:
        async with TCPServerBroker(executor) as broker:
            # Start IPv4 server
            await broker.listen(address='127.0.0.1')

            # Start IPv6 server
            await broker.listen(address='::1', port=12345)

            await broker.join()


if __name__ == "__main__":
    asyncio.run(main())
```

##### Client calling tasks remotely

```python
import asyncio

from patio import Registry
from patio.broker.tcp import TCPClientBroker
from patio.executor import NullExecutor

rpc = Registry(project="test", auto_naming=False)


async def main():
    async with NullExecutor(rpc) as executor:
        async with TCPClientBroker(executor) as broker:
            # Connect to the IPv4 address
            await broker.connect(address='127.0.0.1')

            # Connect to the IPv6 address (optional)
            await broker.connect(address='::1', port=12345)

            print(
                await asyncio.gather(*[
                    broker.call('mul', i, i) for i in range(10)
                ]),
            )


if __name__ == "__main__":
    asyncio.run(main())
```

##### Examples with SSL

The task comes down to passing an SSL context to both the server and the client.

Below is an example of how to create a self-signed certificate authority
and a server certificate signed by it. The original post is
[here](https://gist.github.com/fntlnz/cf14feb5a46b2eda428e000157447309).

This is just an example; if you want to use your own certificates, simply
create an SSL context as required by your security policy.

###### Certificate authority creation

**Attention:** this key is used to sign certificate requests; anyone
holding it can sign certificates on your behalf, so keep it in a safe place!

```shell
openssl req -x509 \
  -sha256 -days 3650 \
  -nodes \
  -newkey rsa:2048 \
  -subj "/CN=Patio Example CA/C=CC/L=West Island" \
  -keyout CA.key -out CA.pem
```

###### Server certificate creation

First, create the server private key:

```shell
openssl genrsa -out server.key 2048
```

Then create a certificate signing request with this key:

```shell
openssl req \
  -new -sha256 \
  -key server.key \
  -subj "/CN=server.example.net/C=CC/L=West Island" \
  -out server.csr
```

Sign the request with the CA:

```shell
openssl x509 -req \
  -days 365 -sha256 \
  -in server.csr \
  -CA CA.pem \
  -CAkey CA.key \
  -CAcreateserial \
  -out server.pem
```

This should be enough to encrypt the traffic.

##### Server with SSL executing tasks

```python
from functools import reduce

import asyncio
import ssl

from patio import Registry
from patio.broker.tcp import TCPServerBroker
from patio.executor import ThreadPoolExecutor

rpc = Registry(project="test", auto_naming=False)


def mul(*args):
    return reduce(lambda x, y: x * y, args)


async def main():
    rpc.register(mul, "mul")

    # Create a server-side TLS context with the CA and the server certificate
    ssl_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ssl_context.load_verify_locations("path/to/CA.pem")
    ssl_context.load_cert_chain("path/to/server.pem", "path/to/server.key")

    async with ThreadPoolExecutor(rpc) as executor:
        async with TCPServerBroker(executor, ssl_context=ssl_context) as broker:
            # Start IPv4 server
            await broker.listen(address='127.0.0.1')

            # Start IPv6 server
            await broker.listen(address='::1', port=12345)

            await broker.join()


if __name__ == "__main__":
    asyncio.run(main())
```

##### Client with SSL calling tasks remotely

```python
import asyncio
import ssl

from patio import Registry
from patio.broker.tcp import TCPClientBroker
from patio.executor import NullExecutor


rpc = Registry(project="test", auto_naming=False)


async def main():
    ssl_context = ssl.create_default_context(cafile="path/to/CA.pem")

    async with NullExecutor(rpc) as executor:
        async with TCPClientBroker(executor, ssl_context=ssl_context) as broker:
            # Connect to the IPv4 address
            await broker.connect(address='127.0.0.1')

            # Connect to the IPv6 address (optional)
            await broker.connect(address='::1', port=12345)

            print(
                await asyncio.gather(*[
                    broker.call('mul', i, i) for i in range(10)
                ]),
            )


if __name__ == "__main__":
    asyncio.run(main())
```

            
