![trident](trident_logo.png)
**Make PyTorch and TensorFlow two become one.**
| version | pytorch | tensorflow |
|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------|
| [![version](https://img.shields.io/static/v1?label=&message=0.7.6&color=377EF0&style=for-the-badge)](https://img.shields.io/static/v1?label=&message=0.7.6&color=377EF0&style=for-the-badge) | ![pytorch](https://img.shields.io/static/v1?label=&message=>1.4&color=377EF0&style=for-the-badge) | ![tensorflow](https://img.shields.io/static/v1?label=&message=>2.2.0&color=377EF0&style=for-the-badge) |
**Trident** is a dynamic-computation-graph deep learning API built on PyTorch and TensorFlow (pure eager mode, no Keras dependency). With Trident you get the same developer experience on both backends (more than 99% shared code), and it is designed to simplify a deep learning developer's routine work. Its features not only cover computer vision, natural language understanding, and reinforcement learning, but also include simpler network structure declaration, more powerful yet easier training process control, and intuitive data access and data augmentation.
## Key Features
- Unified PyTorch and TensorFlow experience (from ops operations and network structure declaration to loss function and evaluation function calls...)
- Automatic transposition of tensor layout according to the backend (PyTorch (CHW) or TensorFlow (HWC))
- A single neural block meets many modeling needs. For example, Conv2d_Block integrates five functions: convolution layer, normalization, activation function, dropout, and noise; layer fusion can also be performed inside the block (see the sketch after this list).
- Padding amounts can be computed automatically via auto_pad when designing layers; shape inference can be deferred even in PyTorch, and summary() shows model structure and compute-cost information.
- Rich built-in visualizations, evaluation functions, and internal diagnostics that can be inserted into a training plan.
- A TrainingPlan can be assembled as flexibly as stacking building blocks to design the training process you want, while fluent-style syntax keeps the code readable and easy to manage.
- Provides recent optimizers (Ranger, LARS, RangerLars, AdaBelief...) and optimization techniques (gradient centralization).
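A minimal sketch of the block idea under the PyTorch backend. The keyword arguments below (num_filters, auto_pad, normalization, activation, dropout_rate) follow the feature list above; exact signatures may differ across versions, so treat this as illustrative rather than canonical:

```python
import os
os.environ['TRIDENT_BACKEND'] = 'pytorch'  # the same lines run under 'tensorflow'

from trident import *

# One block bundles convolution + normalization + activation + dropout;
# auto_pad computes the padding needed to preserve spatial size at stride 1.
block = Conv2d_Block((3, 3), num_filters=64, strides=1, auto_pad=True,
                     normalization='batch', activation='leaky_relu',
                     dropout_rate=0.2)
```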
## New Release version 0.7.4
- Experimental: Keras models (TensorFlow backend) and native PyTorch models (PyTorch backend) supported in TrainingPlan
- print_gpu_utilization in TrainingPlan
- Experimental: Layer fusion (conv+norm => conv) in ConvXd_Block and FullConnect_Block (see the folding sketch after this list)
- Experimental: Automatic in-place execution of Relu and LeakyRelu, falling back to non-in-place when a leaf layer is detected
- Experimental: MLFlow support
- New optimizers: LAMB, Ranger_AdaBelief
- Rewrote many loss functions.
- List[String] (image paths) supported as output of ImageDataset.
- More stable and reliable.
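The layer fusion above relies on the standard folding identity: a frozen BatchNorm y = gamma * (Wx + b - mean) / sqrt(var + eps) + beta collapses into a single convolution with rescaled weights and a shifted bias. A plain-PyTorch sketch of that identity (illustrative only; Trident applies it internally inside ConvXd_Block):

```python
import torch

def fuse_conv_bn(conv: torch.nn.Conv2d, bn: torch.nn.BatchNorm2d) -> torch.nn.Conv2d:
    """Fold a frozen BatchNorm into the preceding convolution (inference only)."""
    fused = torch.nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                            stride=conv.stride, padding=conv.padding, bias=True)
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)   # per output channel
    fused.weight.data = conv.weight.data * scale.reshape(-1, 1, 1, 1)
    bias = conv.bias.data if conv.bias is not None else torch.zeros(conv.out_channels)
    fused.bias.data = bn.bias.data + (bias - bn.running_mean) * scale
    return fused
```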
## New Release version 0.7.3
![Alt text](images/text_process.png)
- New with_accumulate_grad for gradient accumulation (see the sketch after this list).
- Enhancements for TextSequenceDataset and TextSequenceDataprovider.
- New TextTransforms: RandomMask, BopomofoConvert, ChineseConvert, RandomHomophonicTypo, RandomHomomorphicTypo
- New VisionTransforms: ImageMosaic, SaltPepperNoise
- Transformer, BERT, and ViT support in the PyTorch backend.
- New layers and blocks: FullConnect_Block, TemporalConv1d_Block
- Differentiable color-space conversion functions: rgb2hsv, rgb2xyz, rgb2lab...
- Enhancements to GANBuilder: conditional GANs and skip-connection networks are now supported.
- LSTM supports attention in the PyTorch backend, and LSTM is now available in the TensorFlow backend.
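Conceptually, gradient accumulation sums gradients over several minibatches before each optimizer step, simulating a larger effective batch. A plain-PyTorch sketch of what with_accumulate_grad automates (the helper below is hypothetical, for illustration):

```python
import torch

def train_with_accumulation(model, optimizer, loss_fn, batches, accum_steps=4):
    # Effective batch size = minibatch size * accum_steps.
    optimizer.zero_grad()
    for i, (x, y) in enumerate(batches):
        loss = loss_fn(model(x), y) / accum_steps  # scale so gradients average
        loss.backward()                            # grads accumulate in .grad
        if (i + 1) % accum_steps == 0:
            optimizer.step()                       # apply the averaged gradient
            optimizer.zero_grad()
```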
## New Release version 0.7.1
![Alt text](images/vision_transform.png)
- New Vision Transform.
## New Release version 0.7.0
![Alt text](images/tensorboard.png)
- Tensorboard support.
- New optimizers: AdaBelief, DiffGrad
- Initializers support.
## How To Use
#### Step 0: Install
Simple installation from PyPI
```bash
pip install tridentx --upgrade
```
#### Step 1: Add these imports
```python
import os
os.environ['TRIDENT_BACKEND'] = 'pytorch'  # set the backend before the first trident import

import trident as T
from trident import *
from trident.models.pytorch_densenet import DenseNetFcn
```
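Because both backends share the same surface API, switching to TensorFlow only changes the environment variable and the model module path. The module name below is assumed by analogy with pytorch_densenet; verify it in the package:

```python
import os
os.environ['TRIDENT_BACKEND'] = 'tensorflow'  # everything else stays the same

import trident as T
from trident import *
from trident.models.tensorflow_densenet import DenseNetFcn  # assumed module name
```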
#### Step 2: A simple case in both PyTorch and TensorFlow
```python
data_provider = load_examples_data('dogs-vs-cats')
data_provider.image_transform_funcs = [
    random_rescale_crop(224, 224, scale=(0.9, 1.1)),
    random_adjust_gamma(gamma=(0.9, 1.1)),
    normalize(0, 255),                                        # scale pixels to [0, 1]
    normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]  # ImageNet mean/std

# Fluent-style model definition: optimizer, loss, metric, and
# gradual-unfreezing schedules are chained onto the pretrained backbone.
model = resnet.ResNet50(include_top=True, pretrained=True, freeze_features=True, classes=2) \
    .with_optimizer(optimizer=Ranger, lr=1e-3, betas=(0.9, 0.999), gradient_centralization='all') \
    .with_loss(CrossEntropyLoss) \
    .with_metric(accuracy, name='accuracy') \
    .unfreeze_model_scheduling(200, 'batch', 5, None) \
    .unfreeze_model_scheduling(1, 'epoch', 4, None) \
    .summary()

# The TrainingPlan stacks scheduling items like building blocks.
plan = TrainingPlan() \
    .add_training_item(model) \
    .with_data_loader(data_provider) \
    .repeat_epochs(10) \
    .within_minibatch_size(32) \
    .print_progress_scheduling(10, unit='batch') \
    .display_loss_metric_curve_scheduling(200, 'batch') \
    .print_gradients_scheduling(200, 'batch') \
    .start_now()
```
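The two normalize calls compose. Assuming normalize(mean, std) computes (x - mean) / std per channel, raw 0-255 pixels are first scaled to [0, 1] and then standardized with ImageNet statistics:

```python
import numpy as np

x = np.array([0.0, 127.5, 255.0])  # raw intensities, one (red) channel
x = (x - 0) / 255                  # normalize(0, 255)      -> [0.0, 0.5, 1.0]
x = (x - 0.485) / 0.229            # ImageNet mean/std (R)  -> approx. [-2.12, 0.07, 2.25]
```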
#### Step 3: Examples
- MNIST classification [pytorch](https://github.com/AllanYiin/DeepBelief_Course5_Examples/blob/master/epoch001_%E5%8F%A6%E4%B8%80%E7%A8%AE%E8%A7%92%E5%BA%A6%E7%9C%8Bmnist/HelloWorld_mnist_pytorch.ipynb) [tensorflow](https://github.com/AllanYiin/DeepBelief_Course5_Examples/blob/master/epoch001_%E5%8F%A6%E4%B8%80%E7%A8%AE%E8%A7%92%E5%BA%A6%E7%9C%8Bmnist/HelloWorld_mnist_tf.ipynb)
- activation function [pytorch](https://github.com/AllanYiin/DeepBelief_Course5_Examples/blob/master/epoch002_%E6%B4%BB%E5%8C%96%E5%87%BD%E6%95%B8%E5%A4%A7%E6%B8%85%E9%BB%9E/%20Activation_Function_AllStar_Pytorch.ipynb) [tensorflow](https://github.com/AllanYiin/DeepBelief_Course5_Examples/blob/master/epoch002_%E6%B4%BB%E5%8C%96%E5%87%BD%E6%95%B8%E5%A4%A7%E6%B8%85%E9%BB%9E/Activation_Function_AllStar_tf.ipynb)
- auto-encoder [pytorch](https://github.com/AllanYiin/DeepBelief_Course5_Examples/blob/master/epoch003_%E8%87%AA%E5%8B%95%E5%AF%B6%E5%8F%AF%E5%A4%A2%E7%B7%A8%E7%A2%BC%E5%99%A8/Pokemon_Autoencoder_pytorch.ipynb) [tensorflow](https://github.com/AllanYiin/DeepBelief_Course5_Examples/blob/master/epoch003_%E8%87%AA%E5%8B%95%E5%AF%B6%E5%8F%AF%E5%A4%A2%E7%B7%A8%E7%A2%BC%E5%99%A8/Pokemon_Autoencoder_tf.ipynb)
## BibTeX
If you want to cite the framework, feel free to use this:
```bibtex
@article{AllanYiin2020Trident,
title={Trident},
author={AllanYiin, Taiwan},
journal={GitHub. Note: https://github.com/AllanYiin/trident},
volume={1},
year={2020}
}
```