<p align="center">
<img width="400" src="https://github.com/conect2ai/Conect2Py-Package/assets/56210040/60055d32-77f0-4381-bfc1-c9300eb30920" />
</p>
<p align="center">
<img width="800" src="https://drive.google.com/uc?export=view&id=1JlRnq5IG1ZMwfzu2-Wr9rKw5Hd-MidIn" />
</p>
# TensorFlores: An Enhanced Python-based TinyML Framework
The TensorFlores framework is a Python-based solution designed for optimizing machine learning deployment in resource-constrained environments. It introduces evolving clustering-based quantization, enabling quantization-aware training (QAT) and post-training quantization (PTQ) while preserving model accuracy. TensorFlores seamlessly converts TensorFlow models into optimized formats and generates platform-agnostic C++ code for embedded systems. Its modular architecture minimizes memory usage and computational overhead, ensuring efficient real-time inference. By integrating clustering-based quantization and automated code generation, TensorFlores enhances the feasibility of TinyML applications, particularly in low-power and edge AI scenarios. This framework provides a robust and scalable solution for deploying machine learning models in embedded and IoT systems.
<p align="right">
<img alt="version" src="https://img.shields.io/badge/version-0.1.4-blue">
</p>
- [Software description](#software-description)
- [Installation](#installation)
- [Usage Examples](#usage-example)
- [References](#literature-reference)
- [License](#license)
---
### Dependencies
**Python v3.9.6**
```bash
pip install -r requirements.txt
```
---
## Software description
The TensorFlores framework is a Python-based solution designed for optimizing machine learning deployment in resource-constrained environments.
### Software architecture
The architecture of TensorFlores can be divided into four primary layers:
- **Model Training:** A high-level API for the streamlined creation and training of multilayer perceptrons (MLPs), supporting evolutionary vector quantization during training;
- **Json Handle:** Responsible for interpreting TensorFlow models and generating structured JSON files, serving as an intermediary representation for both TensorFlow and TensorFlores models;
- **Quantization:** Dedicated to processing the structured JSON model representation and applying PTQ techniques;
- **Code Generation:** Responsible for processing the structured JSON model representation and generating the machine learning model as C++ code to be embedded in the microcontroller, whether quantized or not.
### Software structure
The project directory is divided into key components, as illustrated below:
```plaintext
tensorflores/
├── models/
│   └── multilayer_perceptron.py
├── utils/
│   ├── autocloud/
│   │   ├── auto_cloud_bias.py
│   │   ├── auto_cloud_weight.py
│   │   ├── data_cloud_bias.py
│   │   ├── data_cloud_weight.py
│   │   └── __init__.py
│   ├── array_manipulation.py
│   ├── clustering.py
│   ├── cpp_generation.py
│   ├── json_handle.py
│   ├── quantization.py
│   └── __init__.py
```
### Software functionalities
The pipeline illustrated in the figure below outlines a workflow for optimizing and deploying machine learning models, specifically designed for resource-constrained environments such as microcontrollers. The software structure is divided into four main blocks: model training (with or without quantization-aware training), post-training quantization, TensorFlow model conversion, and code generation, which translates the optimized model into platform-agnostic C++ code.
<p align="center">
<img width="800" src="https://drive.google.com/uc?export=view&id=173u4BWHWPMw0BBa4GHxRtmP1RetHZ5gD" />
</p>
The parameters are highly customizable, as shown in Table 1, which lists the class parameters and their corresponding default input values.
| **Class Parameters** | **Type** | **Input Values** |
|--------------------------------|----------|----------------------------------------------------------|
| `input_size` | int | 5 |
| `hidden_layer_sizes` | list | [64, 32] |
| `output_size` | int | 1 |
| `activation_functions` | list | 'sigmoid', 'relu', 'leaky_relu', 'tanh', 'elu', 'softmax', 'softplus', 'swish', 'linear' |
| `weight_bias_init` | str | 'RandomNormal', 'RandomUniform', 'GlorotUniform', 'HeNormal' |
| `training_with_quantization` | bool | True or False |
**Table 1 -** MLP Initialization Parameters.
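As a sketch, the parameters from Table 1 could be assembled as follows. The keyword names mirror the table; the `MultilayerPerceptron` class name and import path are assumptions based on the project layout shown above, so the instantiation is left commented out:

```python
# Hypothetical configuration based on Table 1 (one activation per hidden
# layer plus the output layer is assumed here).
mlp_config = dict(
    input_size=5,                      # number of input features
    hidden_layer_sizes=[64, 32],       # two hidden layers
    output_size=1,                     # single output neuron
    activation_functions=['relu', 'relu', 'linear'],
    weight_bias_init='GlorotUniform',
    training_with_quantization=True,   # enable QAT
)

# Assumed API, from tensorflores/models/multilayer_perceptron.py:
# from tensorflores.models.multilayer_perceptron import MultilayerPerceptron
# model = MultilayerPerceptron(**mlp_config)
```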
The `train` method has the following main parameters:
| **Parameter** | **Type** | **Input Values** |
|-------------------------------|----------|-----------------------------------------------------------------------------------------------|
| `X` | list | List of input data for training |
| `y` | list | List of corresponding labels |
| `epochs` | int | Default: 100 |
| `learning_rate` | float | Default: 0.001 |
| `loss_function` | str | 'mean_squared_error', 'cross_entropy', 'mean_absolute_error', 'binary_cross_entropy' |
| `optimizer` | str | 'sgd', 'adam', 'adamax' |
| `batch_size` | int | Default: 36 |
| `beta1` | float | Default: 0.9 (Adam first moment) |
| `beta2` | float | Default: 0.999 (Adam second moment) |
| `epsilon` | float | Default: 1e-7 (Avoid division by zero in Adam) |
| `epochs_quantization` | int | Default: 50 |
| `distance_metric` | str | 'euclidean', 'manhattan', 'minkowski', 'chebyshev', 'cosine', 'hamming', 'bray_curtis', 'jaccard', 'wasserstein', 'dtw' and 'mahalanobis' |
| `bias_clustering_method` | | Clustering method for biases |
| `weight_clustering_method` | | Clustering method for weights |
| `validation_split` | float | Default: 0.2 (Validation data percentage) |
**Table 2 -** Configurable `train` Method Parameters.
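The `beta1`, `beta2`, and `epsilon` defaults in Table 2 follow the standard Adam formulation. As a self-contained illustration (not TensorFlores code), one Adam update step for a scalar parameter looks like this:

```python
def adam_step(theta, grad, m, v, t,
              lr=0.001, beta1=0.9, beta2=0.999, epsilon=1e-7):
    """One standard Adam update for a scalar parameter theta at step t."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta -= lr * m_hat / (v_hat ** 0.5 + epsilon)  # epsilon avoids /0
    return theta, m, v

theta, m, v = adam_step(theta=1.0, grad=0.5, m=0.0, v=0.0, t=1)
```

On the first step the bias-corrected moments cancel, so the parameter moves by roughly the learning rate in the direction opposite the gradient.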
Table 3 presents a summary of the clustering algorithms and their respective configuration parameters.
| **Algorithm** | **Parameter** | **Value** |
|-------------------------|---------------------------|------------|
| **AutoCloud** | Threshold ($m$) | 1.414 |
| **MeanShift** | Bandwidth ($b$) | 0.005 |
| | Maximum iterations | 300 |
| | Bin seeding | True |
| **Affinity Propagation** | Damping ($d$) | 0.7 |
| | Maximum iterations | 500 |
| | Convergence iterations | 20 |
| **DBStream** | Clustering threshold ($\tau$) | 0.1 |
| | Fading factor ($\lambda$) | 0.05 |
| | Cleanup interval | 4 |
| | Intersection factor | 0.5 |
| | Minimum weight | 1 |
**Table 3 -** Clustering Algorithms and Their Respective Parameters.
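To make the clustering-based quantization idea concrete, the following is a minimal, self-contained sketch (not the AutoCloud implementation itself): weights are grouped by greedy incremental 1-D clustering and each weight is replaced by its cluster centroid, so the network only needs to store a small codebook plus indices. The function name and threshold value are illustrative:

```python
def cluster_quantize(values, threshold=0.1):
    """Greedy incremental 1-D clustering: each value joins the nearest
    centroid within `threshold`, otherwise it opens a new cluster."""
    centroids, counts, assign = [], [], []
    for v in values:
        j = None
        if centroids:
            j = min(range(len(centroids)), key=lambda i: abs(v - centroids[i]))
        if j is not None and abs(v - centroids[j]) <= threshold:
            counts[j] += 1
            centroids[j] += (v - centroids[j]) / counts[j]  # running mean
        else:
            centroids.append(float(v))
            counts.append(1)
            j = len(centroids) - 1
        assign.append(j)
    # replace every weight with its cluster's final centroid
    return [centroids[j] for j in assign], centroids

weights = [0.10, 0.12, 0.50, 0.52, 0.11, -0.30]
quantized, codebook = cluster_quantize(weights, threshold=0.05)
```

Here six distinct weights collapse to a three-entry codebook; the real framework applies the same principle per-layer to weights and biases with the evolving clustering algorithms listed in Table 3.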
## Installation
#### You can download our package from the PyPI repository using the following command:
```bash
pip install tensorflores
```
#### If you want to install it locally, download the Wheel distribution from [Build Distribution](https://pypi.org/project/tensorflores/).
*First, navigate to the folder where you downloaded the file and run the following command:*
```bash
pip install tensorflores-0.1.4-py3-none-any.whl
```
---
## Usage Example
The following four examples are provided:
### Example 01
Implementation and Training of a Neural Network Using TensorFlores:
[Open notebook](https://github.com/conect2ai/TensorFlores/blob/main/examples/01_example_01/Example_01.ipynb)
### Example 02
Implementation and Training of a Neural Network with Quantization-Aware Training (QAT) Using TensorFlores:
[Open notebook](https://github.com/conect2ai/TensorFlores/blob/main/examples/02_example_02/Example_02.ipynb)
### Example 03
Post-Training Quantization with TensorFlores:
[Open notebook](https://github.com/conect2ai/TensorFlores/blob/main/examples/03_example_03/Example_03.ipynb)
### Example 04
Converting a TensorFlow Model Using TensorFlores:
[Open notebook](https://github.com/conect2ai/TensorFlores/blob/main/examples/04_example_04/Example_04.ipynb)
### Auxiliary
This section provides example code that converts the input arrays (`X_test` and `y_test`) into C++ array format.
[Open notebook](https://github.com/conect2ai/TensorFlores/blob/main/examples/05_auxiliary/auxiliary.ipynb)
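A minimal sketch of such a conversion is shown below. The function name and number formatting are illustrative, not the notebook's actual code:

```python
def to_cpp_array(name, matrix, dtype="float"):
    """Render a 2-D Python list as a C++ static array initializer."""
    rows, cols = len(matrix), len(matrix[0])
    body = ",\n  ".join(
        "{" + ", ".join(f"{v:.6f}" for v in row) + "}" for row in matrix
    )
    return f"{dtype} {name}[{rows}][{cols}] = {{\n  {body}\n}};"

# Example: emit a 2x2 test matrix ready to paste into a sketch (.ino) file
print(to_cpp_array("X_test", [[1.0, 2.5], [3.0, 4.0]]))
```

The generated string can be pasted directly into the Arduino sketch so the test data is compiled into the firmware alongside the exported model.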
The Arduino code for deployment is available [here](https://github.com/conect2ai/TensorFlores/blob/main/examples/05_auxiliary/arduino_code/MLP/MLP.ino).
## Other Models
Please check the [README](https://github.com/conect2ai/TensorFlores/blob/main/README.md) for more information about the other models implemented in this package.
# Literature reference
1. T. K. S. Flores, M. Medeiros, M. Silva, D. G. Costa, I. Silva, Enhanced Vector Quantization for Embedded Machine Learning: A Post-Training Approach With Incremental Clustering, IEEE Access 13 (2025) 17440-17456. [doi:10.1109/ACCESS.2025.3532849](https://doi.org/10.1109/ACCESS.2025.3532849).
2. T. K. S. Flores, I. Silva, M. B. Azevedo, T. d. A. de Medeiros, M. d. A. Medeiros, D. G. Costa, P. Ferrari, E. Sisinni, Advancing TinyMLOps: Robust Model Updates in the Internet of Intelligent Vehicles, IEEE Micro (2024). [doi:10.1109/MM.2024.3354323](https://doi.org/10.1109/MM.2024.3354323).
# License
This package is licensed under the [MIT License](https://github.com/conect2ai/Conect2Py-Package/blob/main/LICENSE) - © 2023 Conect2ai.