# naeural_client SDK
This is the Python SDK package that allows interaction with the Naeural Edge Protocol network and the development and deployment of jobs on it. The SDK enables low-code development and deployment of end-to-end AI (and not only) cooperative application pipelines within the Naeural Edge Protocol Execution Engine processing-node ecosystem. For further information, please see [Naeural Edge Protocol AI OS - Decentralized ubiquitous computing MLOps execution engine](https://arxiv.org/pdf/2306.08708).
## Dependencies
This package depends on the following packages: `pika`, `paho-mqtt`, `numpy`, `pyopenssl>=23.0.0`, `cryptography>=39.0.0`, `python-dateutil`, `pyaml`.
## Installation
```shell
python -m pip install naeural_client
```
## Documentation
Minimal documentation is presented here. The complete documentation is a work in progress.
Code examples are located in the `tutorials` folder in the project's repository.
## Quick start guides
Here you will find a selection of guides and documentation snippets to get
you started with the `naeural_client` SDK. These cover only the most important aspects,
selected from the documentation and from the code examples. For more
in-depth information, please consult the examples in the repository
and the documentation.
### Naming conventions & FAQs
The following are the same:
- `Signature == Plugin's name`
- `Plugin ~ Instance` (only when talking about a running plugin instance; people tend to omit the word `instance`)
- `Node == Worker` (outside the context of a distributed job, the two words refer to the same thing)
## Hello world tutorial
Below is a simple "Hello world!"-style application that shows how straightforward it is to distribute existing Python code to multiple edge node workers.
To execute this code, you can check [tutorials/video_presentation/1. hello_world.ipynb](./tutorials/video_presentation/1.%20hello_world.ipynb)
### 1. Create `.env` file
Copy the `tutorials/.example_env` file to your project directory and rename it to `.env`.
Fill in the empty variables with appropriate values.
### 2. Create new / Use test private key
**Disclaimer: You should never publish sensitive information such as private keys.**
To experiment on our test net, you can use the provided private key to communicate with the 3 nodes in the test network.
#### Create new private key
When first connecting to our network, the SDK searches the current working directory for an existing private key. If none is found, the SDK creates one at `$(cwd)/_local_cache/_data/_pk_sdk.pem`.
#### Using an existing private key
To use an existing private key, create the directory tree `_local_cache/_data/` in the working directory and place the `_pk_sdk.pem` file there.
To use the provided key, copy it from `tutorials/_example_pk_sdk.pem` to `_local_cache/_data/` and rename it to `_pk_sdk.pem`.
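If you prefer to do this from Python instead of by hand, the following is a minimal sketch that uses only the paths mentioned above (it assumes you run it from your project directory and that the `tutorials` folder is available locally):
```python
# Minimal sketch: copy the example key into the location where the SDK looks for it.
# Assumes the `tutorials` folder is available relative to the current directory.
import shutil
from pathlib import Path

src = Path("tutorials/_example_pk_sdk.pem")
dst_dir = Path.cwd() / "_local_cache" / "_data"
dst_dir.mkdir(parents=True, exist_ok=True)   # create _local_cache/_data/ if it does not exist
shutil.copy(src, dst_dir / "_pk_sdk.pem")    # the SDK expects this exact file name
```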
### 3. Local Execution
We want to find all $168$ prime numbers in the interval from $1$ to $1000$. For this, we can run the following code on our local machine.
This code runs parts of the work on multiple threads using a `ThreadPoolExecutor`.
```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor


def local_brute_force_prime_number_generator():
    def is_prime(n):
        if n <= 1:
            return False
        for i in range(2, int(np.sqrt(n)) + 1):
            if n % i == 0:
                return False
        return True

    random_numbers = np.random.randint(1, 1000, 20)

    thread_pool = ThreadPoolExecutor(max_workers=4)
    are_primes = list(thread_pool.map(is_prime, random_numbers))

    prime_numbers = []
    for i in range(len(random_numbers)):
        if are_primes[i]:
            prime_numbers.append(random_numbers[i])

    return prime_numbers


if __name__ == "__main__":
    found_so_far = []

    print_step = 0

    while len(found_so_far) < 168:
        # compute a batch of prime numbers
        prime_numbers = local_brute_force_prime_number_generator()

        # keep only the new prime numbers
        for prime_number in prime_numbers:
            if prime_number not in found_so_far:
                found_so_far.append(prime_number)
        # end for

        # show progress
        if print_step % 50 == 0:
            print("Found so far: {}: {}\n".format(len(found_so_far), sorted(found_so_far)))

        print_step += 1
    # end while

    # show final result
    print("Found so far: {}: {}\n".format(len(found_so_far), sorted(found_so_far)))
```
The `local_brute_force_prime_number_generator` function generates a random sample of $20$ numbers and checks each of them for primality.
The rest of the code handles how the numbers returned by this function are collected.
Because we want to find $168$ unique primes, we append to the list of found primes only the numbers that are not already present.
At the end, we print the list of all the primes found.
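As a quick sanity check of the $168$ target (not part of the tutorial code), a short Sieve of Eratosthenes confirms that there are exactly $168$ primes below $1000$:
```python
# Sieve of Eratosthenes over 0..999, counting the primes below 1000.
def count_primes_below(limit: int) -> int:
    sieve = [True] * limit
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            for multiple in range(i * i, limit, i):
                sieve[multiple] = False
    return sum(sieve)

print(count_primes_below(1000))  # 168
```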
### 4. Remote Execution
For this example, we would like to use multiple edge nodes to find the prime numbers faster.
To execute this code on our network, a few changes must be made to the `local_brute_force_prime_number_generator` function.
These are the only changes a developer has to make to deploy their own custom code on the network.
To this end, we will create a new function, `remote_brute_force_prime_number_generator`, which uses the edge node API methods exposed by the SDK.
```python
from naeural_client import CustomPluginTemplate

# through the `plugin` object we get access to the edge node API
# the CustomPluginTemplate class acts as documentation for all the available methods and attributes
# since we do not allow imports in the custom code for security reasons, the `plugin` object
# exposes common modules to the user
def remote_brute_force_prime_number_generator(plugin: CustomPluginTemplate):
    def is_prime(n):
        if n <= 1:
            return False
        # we use `plugin.np` instead of the `np` module
        for i in range(2, int(plugin.np.sqrt(n)) + 1):
            if n % i == 0:
                return False
        return True

    # we use `plugin.np` instead of the `np` module
    random_numbers = plugin.np.random.randint(1, 1000, 20)

    # we use `plugin.threadapi_map` instead of `ThreadPoolExecutor.map`
    are_primes = plugin.threadapi_map(is_prime, random_numbers, n_threads=4)

    prime_numbers = []
    for i in range(len(random_numbers)):
        if are_primes[i]:
            prime_numbers.append(random_numbers[i])

    return prime_numbers
```
These are all the changes we have to make to deploy this code on the network.
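If you want to smoke-test the remote function locally before sending it to the network, one option (purely illustrative, not part of the SDK) is to call it with a small stand-in object that mimics only the two members the function uses, `np` and `threadapi_map`; the real `CustomPluginTemplate` API may differ in its details.
```python
# Illustrative local smoke test: a minimal stand-in for the `plugin` object.
# It mimics only the two members used above (`np` and `threadapi_map`);
# it is NOT the SDK's CustomPluginTemplate, whose real API may differ.
import numpy as _np
from concurrent.futures import ThreadPoolExecutor


class FakePlugin:
    np = _np  # expose numpy the way the remote code accesses `plugin.np`

    def threadapi_map(self, func, iterable, n_threads=4):
        # approximate the thread API with a plain ThreadPoolExecutor
        with ThreadPoolExecutor(max_workers=n_threads) as pool:
            return list(pool.map(func, iterable))


if __name__ == "__main__":
    print(remote_brute_force_prime_number_generator(FakePlugin()))
```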
Now let's connect to the network and see which nodes are online.
We will use the `on_heartbeat` callback to print them.
```python
from naeural_client import Session
from time import sleep


def on_heartbeat(session: Session, node: str, heartbeat: dict):
    # the `.P` method is used to print messages in the console and store them in the log file
    session.P("{} is online".format(node))
    return


if __name__ == '__main__':
    # create a session
    # the network credentials are read from the .env file automatically
    session = Session(
        on_heartbeat=on_heartbeat
    )

    # run the program for 15 seconds to show all the nodes that are online
    sleep(15)
```
Next, we will select an online node. This node will be our entry point into the network.
The available nodes in our testnet are:
```
0xai_A8SY7lEqBtf5XaGyB6ipdk5C30vSf3HK4xELp3iplwLe naeural-1
0xai_Amfnbt3N-qg2-qGtywZIPQBTVlAnoADVRmSAsdDhlQ-6 naeural-2
0xai_ApltAljEgWk3g8x2QcSa0sS3hT1P4dyCchd04zFSMy5e naeural-3
```
We will send a task to this node. Since we want to distribute the task of finding prime numbers to multiple nodes, this selected node will handle distribution of tasks and collection of the results.
```python
node = "0xai_A8SY7lEqBtf5XaGyB6ipdk5C30vSf3HK4xELp3iplwLe" # naeural-1
# we usually wait for the node to be online before sending the task
# but in this case we are sure that the node is online because we
# have received heartbeats from it during the sleep period
# session.wait_for_node(node)
```
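If you prefer not to hard-code the address, one illustrative alternative (not taken from the tutorial) is to collect the addresses seen by the heartbeat callback and then pick one of them:
```python
# Illustrative alternative: remember every node address seen in the heartbeats
# and pick one as the entry point, instead of hard-coding it.
from time import sleep
from naeural_client import Session

online_nodes = set()

def on_heartbeat(session: Session, node: str, heartbeat: dict):
    online_nodes.add(node)                  # remember every node we have heard from
    session.P("{} is online".format(node))
    return

if __name__ == '__main__':
    session = Session(on_heartbeat=on_heartbeat)
    sleep(15)                               # give the nodes time to send heartbeats
    node = sorted(online_nodes)[0]          # pick any online node as the entry point
```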
Our selected node will periodically output partial results with the prime numbers found so far by the worker nodes. We want to consume these results.
Thus, we need to implement a callback method that will handle this.
```python
from naeural_client import Pipeline

# a flag used to close the session when the task is finished
finished = False


def locally_process_partial_results(pipeline: Pipeline, full_payload):
    global finished
    found_so_far = full_payload.get("DATA")

    if found_so_far:
        pipeline.P("Found so far: {}: {}\n\n".format(len(found_so_far), sorted(found_so_far)))

    progress = full_payload.get("PROGRESS")
    if progress == 100:
        pipeline.P("FINISHED\n\n")
        finished = True

    return
```
Now we are ready to deploy our job to the network.
```python
from naeural_client import DistributedCustomCodePresets as Presets

_, _ = session.create_chain_dist_custom_job(
    # this is the main node, our entry point
    node=node,

    # this function is executed on the main node
    # it handles what we want to do with the primes found by a worker node after an iteration
    # we want to store only the unique prime numbers
    # we can either write custom code to pass here or use a preset
    main_node_process_real_time_collected_data=Presets.PROCESS_REAL_TIME_COLLECTED_DATA__KEEP_UNIQUES_IN_AGGREGATED_COLLECTED_DATA,

    # this function is executed on the main node
    # it handles the finish condition of our distributed job
    # we want to finish when we have found 168 prime numbers,
    # i.e. more than 167 prime numbers
    # we can either write custom code to pass here or use a preset
    main_node_finish_condition=Presets.FINISH_CONDITION___AGGREGATED_DATA_MORE_THAN_X,
    main_node_finish_condition_kwargs={
        "X": 167
    },

    # this function is executed on the main node
    # it handles the final processing of the results and prepares the data
    # for the final result of the distributed job
    # we want to aggregate all the prime numbers found by the worker nodes into a single list
    # we can either write custom code to pass here or use a preset
    main_node_aggregate_collected_data=Presets.AGGREGATE_COLLECTED_DATA___AGGREGATE_COLLECTED_DATA,

    # how many worker nodes we want to use for this task
    nr_remote_worker_nodes=2,

    # this is the function that will be executed on the worker nodes
    # it generates prime numbers using brute force
    # we simply pass the function reference
    worker_node_code=remote_brute_force_prime_number_generator,

    # this is the callback function executed on the client
    # it processes the partial results; in our case we print them
    on_data=locally_process_partial_results,

    # we want to deploy the job immediately
    deploy=True
)
```
Last but not least, we want to close the session when the distributed job has finished.
```python
# we wait until the finished flag is set to True
# we want to release the resources allocated on the selected node when the job is finished
session.run(wait=lambda: not finished, close_pipelines=True)
```
# Project Financing Disclaimer
This project includes open-source components that have been developed with the support of financing grants SMIS 143488 and SMIS 156084, provided by the Romanian Competitiveness Operational Programme. We are grateful for this support, which has enabled us to advance our work and share these resources with the community.
The content and information provided within this repository are solely the responsibility of the authors and do not necessarily reflect the views of the funding agencies. The funding received under these grants has been instrumental in supporting specific parts of this open source project, allowing for broader dissemination and collaborative development.
For any inquiries related to the funding and its impact on this project, please contact the authors directly.
# Citation
```bibtex
@misc{naeural_client,
  author = {Saraev, Stefan and Damian, Andrei},
  title = {naeural_client: Python SDK for the Naeural Edge Protocol},
  year = {2024},
  howpublished = {\url{https://github.com/NaeuralEdgeProtocol/naeural_client}},
}
```
```bibtex
@misc{project_funding_acknowledgment1,
  author = {Damian and Bleotiu and Saraev and Constantinescu},
  title = {SOLIS – Sistem Omogen multi-Locație cu funcționalități Inteligente și Sustenabile, SMIS 143488},
  howpublished = {\url{https://github.com/NaeuralEdgeProtocol/}},
  note = {This project includes open-source components developed with support from the Romanian Competitiveness Operational Programme under grant SMIS 143488. The content is solely the responsibility of the authors and does not necessarily reflect the views of the funding agencies.},
  year = {2021-2022}
}
```
```bibtex
@misc{project_funding_acknowledgment2,
  author = {Damian and Bleotiu and Saraev and Constantinescu and Milik and Lupaescu},
  title = {ReDeN – Rețea Descentralizată Neurală, SMIS 156084},
  howpublished = {\url{https://github.com/NaeuralEdgeProtocol/}},
  note = {This project includes open-source components developed with support from the Romanian Competitiveness Operational Programme under grant SMIS 156084. The content is solely the responsibility of the authors and does not necessarily reflect the views of the funding agencies.},
  year = {2023-2024}
}
```