[DDesigner API] Deep-learning Designer API
==========================================
# 1. About
## 1.1. What is DDesignerAPI?
DDesignerAPI is an API for deep-learning training and inference, and for building applications across multiple platforms.
## 1.2. Functions
### 1.2.1. Layers and Blocks
* Accelerator-enabled layers, plus the ability to define special layers that are not available in Keras and other frameworks
* The ability to define a combination of layers as a block and compose blocks easily (e.g. CONV + BN + ACT + DROPOUT = ConvBlock); see the sketch below
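For orientation, the plain tf.keras stack below is the combination that a single ConvBlock bundles. It is only an illustration of the CONV + BN + ACT + DROPOUT pattern, not the DDesignerAPI implementation itself, and the filter count, kernel size, and dropout rate are arbitrary.
>>> import tensorflow as tf
>>> inputs = tf.keras.Input(shape=(32, 32, 3))
>>> x = tf.keras.layers.Conv2D(64, 3, padding='same')(inputs)  # CONV
>>> x = tf.keras.layers.BatchNormalization()(x)                # BN
>>> x = tf.keras.layers.ReLU()(x)                              # ACT
>>> outputs = tf.keras.layers.Dropout(0.1)(x)                  # DROPOUT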
### 1.2.2. Optimization for Accelerator Usage (XWN)
* Functions optimized for running models on the accelerator
<br/><br/><br/>
# 2. Support
## 2.1. Platforms
* TensorFlow 2.6.0
* PyTorch 1.13.1
## 2.2. Components of Network
### 2.2.1. Layers
* Accelerator-enabled layers and custom layers that perform specific functions
#### 2.2.1.1. Summary
|Operation|Support Train Platform|Support TACHY Accelerator|
|:---:|:---:|:---:|
|**Convolution**|TF / Keras / PyTorch|O|
|**TransposeConvolution**|TF / Keras / PyTorch|O|
|**CascadeConvolution**|Keras / PyTorch|O|
#### 2.2.1.2. Detail
* Convolution : 1D and 2D, with XWN optimization
* TransposeConvolution : 1D and 2D, with XWN optimization
* CascadeConvolution : a layer that decomposes a layer with a large kernel into multiple layers with smaller kernels to lighten the model; 1D and 2D, with XWN optimization (see the illustration below)
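To make the decomposition concrete, here is a plain-Keras illustration (not the actual CascadeConvolution implementation): a 7x7 convolution is replaced by a cascade of three 3x3 convolutions, which covers the same 7x7 receptive field with 3x3x3 = 27 kernel weights per filter/channel pair instead of 7x7 = 49.
>>> import tensorflow as tf
>>> inputs = tf.keras.Input(shape=(64, 64, 16))
>>> # Single convolution with a large 7x7 kernel
>>> y_single = tf.keras.layers.Conv2D(16, 7, padding='same')(inputs)
>>> # Cascade of three 3x3 convolutions: same receptive field, fewer weights
>>> x = tf.keras.layers.Conv2D(16, 3, padding='same')(inputs)
>>> x = tf.keras.layers.Conv2D(16, 3, padding='same')(x)
>>> y_cascade = tf.keras.layers.Conv2D(16, 3, padding='same')(x)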
<br/><br/>
### 2.2.2. Blocks
* Predefined combinations of layers, provided for user convenience
#### 2.2.2.1. Summary
|Platform|ConvBlock|TConvBlock|FCBlock|
|:---:|:---:|:---:|:---:|
|**TF-Keras**|1D/2D|2D|TODO|
|**PyTorch**|TODO|TODO|TODO|
#### 2.2.2.2. Detail
* ConvBlock : Convolution N-D block (CONV + BN + ACT + DROPOUT); supports Conv1DBlock, Conv2DBlock
* TConvBlock : Transpose convolution 2D block (TCONV + BN + ACT + DROPOUT); supports TConv2DBlock
* CascadeConvBlock : Cascade convolution N-D block (CONV + BN + ACT + DROPOUT); supports CascadeConv1DBlock, CascadeConv2DBlock
<br/><br/>
## 2.3. XWN (**Applies only to convolution operations**)
### 2.3.1. Transform Configuration
|Parameter|Type|Default|Description|
|:---:|:---:|:---:|:---:|
|**transform**|bool|False|Whether to apply the XWN transform|
|**bit**|int|4|Quantization bit width (range 2^(bit-1))|
|**max_scale**|float|4.0|Maximum scale value|
### 2.3.2. Pruning Configuration
|Parameter|Type|Default|Description|
|:---:|:---:|:---:|:---:|
|**pruning**|bool|False|Whether to apply pruning|
|**prun_weight**|float|0.5|Weight used for pruning edge generation|
### 2.3.3. Summary
|Platform|Conv|TransposeConv|CascadeConv|
|:---:|:---:|:---:|:---:|
|**TF**|1D/2D|1D/2D|TODO|
|**Keras**|1D/2D|1D/2D|TODO|
|**PyTorch**|1D/2D|1D/2D|1D/2D|
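Tying the tables above to code, the sketch below enables both the transform and the pruning options on the Keras layer from section 3.1.1.2, using the default values listed in 2.3.1 and 2.3.2; the filter count and kernel size are arbitrary.
>>> from ddesigner_api.tensorflow.xwn import keras_layers as klayers
>>> klayers.Conv2D(
    64, (3,3),
    use_transform=True,   # 2.3.1: transform
    bit=4,                # 2.3.1: bit
    max_scale=4.0,        # 2.3.1: max_scale
    use_pruning=True,     # 2.3.2: pruning
    prun_weight=0.5       # 2.3.2: prun_weight
    )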
<br/><br/>
# 3. Command Usage
## 3.1. XWN
### 3.1.1. Single Convolution
#### 3.1.1.1. TensorFlow
>>> from ddesigner_api.tensorflow.xwn import tf_nn as nn
>>> nn.conv2d(
x,
kernel,
...
use_transform=True,
bit=4,
max_scale=4.0,
use_pruning=False
)
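A runnable sketch of the call above, assuming tf_nn.conv2d mirrors the standard tf.nn.conv2d signature (NHWC input, HWIO filter, strides, padding) with the XWN options added as keyword arguments:
>>> import tensorflow as tf
>>> from ddesigner_api.tensorflow.xwn import tf_nn as nn
>>> x = tf.random.normal([1, 32, 32, 3])      # NHWC input (assumed layout)
>>> kernel = tf.random.normal([3, 3, 3, 16])  # HWIO filter (assumed layout)
>>> y = nn.conv2d(
    x, kernel,
    strides=1, padding='SAME',                # assumed to follow tf.nn.conv2d
    use_transform=True, bit=4, max_scale=4.0,
    use_pruning=False
    )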
#### 3.1.1.2. Keras
>>> from ddesigner_api.tensorflow.xwn import keras_layers as klayers
>>> klayers.Conv2D(
2, 3,
...
use_transform=True,
bit=4,
max_scale=4.0,
use_pruning=True,
prun_weight=0.5
)
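As a usage note, the layer takes the same positional arguments as tf.keras.layers.Conv2D (filters, kernel size), so it can be dropped into an ordinary Keras model; a minimal sketch, assuming drop-in compatibility with the standard Keras layer:
>>> import tensorflow as tf
>>> from ddesigner_api.tensorflow.xwn import keras_layers as klayers
>>> model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    klayers.Conv2D(8, 3, use_transform=True, bit=4, max_scale=4.0),  # XWN convolution
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10)
    ])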
#### 3.1.1.3. PyTorch
>>> from ddesigner_api.pytorch.xwn import torch_nn as nn
>>> nn.Conv2d(
in_channels=1,
out_channels=2,
...
use_transform=True,
bit=4,
max_scale=4.0,
use_pruning=False
)
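A usage sketch for the layer above, assuming torch_nn.Conv2d behaves as a drop-in replacement for torch.nn.Conv2d in the forward pass; the kernel size and input shape here are arbitrary.
>>> import torch
>>> from ddesigner_api.pytorch.xwn import torch_nn as nn
>>> conv = nn.Conv2d(
    in_channels=1, out_channels=2, kernel_size=3,
    use_transform=True, bit=4, max_scale=4.0,
    use_pruning=False
    )
>>> x = torch.randn(1, 1, 28, 28)  # (N, C, H, W) input
>>> y = conv(x)                    # forward pass, as with torch.nn.Conv2d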
### 3.1.2. Custom Layers and Blocks (CascadeConv, ...)
#### 3.1.2.1. Keras
>>> from ddesigner_api.tensorflow import dpi_layers as dlayers
>>> dlayers.CascadeConv2d(
2, 3,
...
transform=4,
max_scale=4.0,
pruning=None,
)
#### 3.1.2.2. PyTorch
>>> from ddesigner_api.pytorch import dpi_nn as dnn
>>> dnn.CascadeConv2d(
16, # in_channels
32, # out_channels
7, # kernel_size
stride=(1,1),
bias=False,
...
transform=4,
max_scale=4.0,
pruning=None,
)
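As a usage note, the sketch below runs a forward pass through the cascade layer defined above; it assumes the layer consumes standard (N, C, H, W) tensors like torch.nn.Conv2d and that the arguments shown are sufficient to construct it.
>>> import torch
>>> from ddesigner_api.pytorch import dpi_nn as dnn
>>> cascade = dnn.CascadeConv2d(
    16, 32, 7,                       # in_channels, out_channels, kernel_size
    stride=(1,1), bias=False,
    transform=4, max_scale=4.0, pruning=None
    )
>>> x = torch.randn(1, 16, 64, 64)  # (N, C, H, W) input with 16 channels
>>> y = cascade(x)                  # 7x7 kernel realized as a cascade of smaller kernels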
<br/>
## 3.2. Blocks
### 3.2.1. Keras
#### 3.2.1.1. Conv1DBlock
>>> from ddesigner_api.tensorflow import dpi_blocks as db
>>> dtype='mixed_float16'
>>> db.Conv1DBlock(
64, 3, strides=1, padding='SAME', use_bias=False,
activation=tf.keras.layers.ReLU(dtype=dtype),
batchnormalization=tf.keras.layers.BatchNormalization(dtype=dtype),
dtype=dtype,
transform=4, max_scale=4.0,
pruning=0.5
)
#### 3.2.1.2. Conv2DBlock
>>> from ddesigner_api.tensorflow import dpi_blocks as db
>>> dtype='mixed_float16'
>>> db.Conv2DBlock(
64, (3,3), strides=(1,1), padding='SAME', use_bias=False,
activation=tf.keras.layers.ReLU(dtype=dtype),
batchnormalization=tf.keras.layers.BatchNormalization(dtype=dtype),
dtype=dtype,
transform=4, max_scale=4.0,
pruning=0.5
)
#### 3.2.1.3. TConv2DBlock
>>> from ddesigner_api.tensorflow import dpi_blocks as db
>>> dtype='mixed_float16'
>>> db.TConv2DBlock(
64, (3,3), strides=(2,2), padding='SAME', use_bias=False,
activation=tf.keras.layers.ReLU(dtype=dtype),
batchnormalization=tf.keras.layers.BatchNormalization(dtype=dtype),
dtype=dtype,
transform=4, max_scale=4.0,
pruning=0.5
)
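For context, blocks are intended to be used like ordinary Keras layers; the sketch below wires a Conv2DBlock and a TConv2DBlock into a small functional model, assuming the blocks are callable on tensors as shown.
>>> import tensorflow as tf
>>> from ddesigner_api.tensorflow import dpi_blocks as db
>>> dtype='mixed_float16'
>>> inputs = tf.keras.Input(shape=(64, 64, 3))
>>> x = db.Conv2DBlock(
    64, (3,3), strides=(2,2), padding='SAME', use_bias=False,
    activation=tf.keras.layers.ReLU(dtype=dtype),
    batchnormalization=tf.keras.layers.BatchNormalization(dtype=dtype),
    dtype=dtype, transform=4, max_scale=4.0, pruning=0.5
    )(inputs)   # downsampling convolution block
>>> x = db.TConv2DBlock(
    64, (3,3), strides=(2,2), padding='SAME', use_bias=False,
    activation=tf.keras.layers.ReLU(dtype=dtype),
    batchnormalization=tf.keras.layers.BatchNormalization(dtype=dtype),
    dtype=dtype, transform=4, max_scale=4.0, pruning=0.5
    )(x)        # upsampling transpose-convolution block
>>> model = tf.keras.Model(inputs, x)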
<br/>
## 3.3. Examples
* Examples that compare and print results before and after XWN optimization for the same input on each supported platform.
### 3.3.1. TensorFlow
>>> import ddesigner_api.tensorflow.examples.examples_tensorflow as ex
>>> ex.main()
>>> ====== TENSORFLOW Examples======
>>> 1: Fixed Float32 Input Conv2D
>>> q: Quit
>>> Select Case: ...
### 3.3.2. Keras
>>> import ddesigner_api.tensorflow.examples.examples_keras as ex
>>> ex.main()
>>> ====== KERAS Examples======
>>> 1: Fixed Float32 Input Conv2D
>>> 2: Random Float32 Input Conv2D
>>> 3: Random Float32 Input Conv2DTranspose
>>> 4: Random Float16 Input Conv2D
>>> q: Quit
>>> Select Case: ...
### 3.3.3. PyTorch
>>> import ddesigner_api.pytorch.examples.examples_pytorch as ex
>>> ex.main()
>>> ====== PYTORCH Examples======
>>> 1: Fixed Float32 Input Conv2D
>>> 2: Random Float32 Input Conv2D
>>> 3: Fixed Float32 Input Conv1D
>>> 4: Fixed Float32 Input Conv1DTranspose
>>> 5: Random Float32 Input CascadeConv2D
>>> 6: Random Float32 Input CascadeConv1D
>>> q: Quit
>>> Select Case: ...
### 3.3.4. Numpy
>>> import ddesigner_api.numpy.examples.examples_numpy as ex
>>> ex.main()
>>> ====== NUMPY Examples======
>>> 1: XWN Transform
>>> 2: XWN Transform and Pruning
>>> q: Quit
>>> Select Case: ...