| Field | Value |
| --- | --- |
| Name | unitorch |
| Version | 0.0.0.24 |
| home_page | None |
| Summary | unitorch provides efficient implementation of popular unified NLU / NLG / CV / CTR / MM / RL models with PyTorch. |
| upload_time | 2025-01-15 07:13:00 |
| maintainer | None |
| docs_url | None |
| author | None |
| requires_python | >=3.8 |
| license | MIT |
| keywords | pytorch |
| VCS | |
| bugtrack_url | |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
<div align="Center">
![unitorch](https://raw.githubusercontent.com/fuliucansheng/unitorch/master/unitorch.png)
[Documentation](https://fuliucansheng.github.io/unitorch) •
[Installation Instructions](https://fuliucansheng.github.io/unitorch/installation/) •
[Reporting Issues](https://github.com/fuliucansheng/unitorch/issues/new?assignees=&labels=&template=bug-report.yml)
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/unitorch)](https://pypi.org/project/unitorch/)
[![PyPI Version](https://badge.fury.io/py/unitorch.svg)](https://badge.fury.io/py/unitorch)
[![PyPI Downloads](https://pepy.tech/badge/unitorch)](https://pepy.tech/project/unitorch)
[![Github Downloads](https://img.shields.io/github/downloads/fuliucansheng/unitorch/total?color=blue&label=downloads&logo=github&logoColor=lightgrey)](https://github.com/fuliucansheng/unitorch/releases)
[![License](https://img.shields.io/github/license/fuliucansheng/unitorch?color=dfd)](LICENSE)
[![PRs Welcome](https://img.shields.io/badge/PRs-welcome-pink.svg)](https://github.com/fuliucansheng/unitorch/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22)
</div>
# Introduction
🔥 unitorch is a library that simplifies and accelerates the development of unified models for natural language understanding, natural language generation, computer vision, click-through rate prediction, multimodal learning, and reinforcement learning. It is built on top of PyTorch and integrates seamlessly with popular frameworks such as transformers, peft, diffusers, and fastseq. With unitorch, you can use a single command-line tool or a one-line `import unitorch` to leverage state-of-the-art models and datasets without sacrificing performance or accuracy.
------------------------------------
# What's New
* **SDXL** released with the paper [SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis](https://arxiv.org/abs/2307.01952) by Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, Robin Rombach.
* **LLaMA** released with the paper [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971) by Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample.
* **ControlNet** released with the paper [Adding Conditional Control to Text-to-Image Diffusion Models](https://arxiv.org/abs/2302.05543) by Lvmin Zhang, Anyi Rao, Maneesh Agrawala.
* **BLOOM** released with the paper [BLOOM: A 176B-Parameter Open-Access Multilingual Language Model](https://arxiv.org/abs/2211.05100) by BigScience Workshop: Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilić, Daniel Hesslow...
* **PEGASUS-X** released with the paper [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao, Peter J. Liu.
* **BLIP** released with the paper [BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation](https://arxiv.org/abs/2201.12086) by Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi.
* **BEiT** released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Songhao Piao, Furu Wei.
* **Swin Transformer** released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
* **CLIP** released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
* **mT5** released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
* **Vision Transformer (ViT)** released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
* **DeBERTa-V2** released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
* **DeBERTa** released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
* **MBart** released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
* **PEGASUS** released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh, Peter J. Liu.
* **BART** released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
* **T5** released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu.
* **VisualBERT** released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/abs/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang.
* **RoBERTa** released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
* **BERT** released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
------------------------------------
# Features
* User-Friendly Python Package
* Faster & Streamlined Training/Inference
* DeepSpeed Integration for Large-Scale Models
* CUDA Optimization
* Extensive State-of-the-Art Model & Task Support
# Installation
```bash
pip3 install unitorch
```
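To confirm the install succeeded, here is a minimal check using only the standard library (no unitorch-specific attributes are assumed beyond the package being importable):
```python
# Post-install sanity check: import the package and read the installed
# version from package metadata (stdlib only, Python 3.8+).
import importlib.metadata

import unitorch  # verifies the package imports cleanly

print(importlib.metadata.version("unitorch"))  # e.g. 0.0.0.24
```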
# Quick Examples
### Source Code
```python
import unitorch

# import the BART generation model
from unitorch.models.bart import BartForGeneration
model = BartForGeneration("path/to/bart/config.json")

# use the configuration class
from unitorch.cli import CoreConfigureParser
config = CoreConfigureParser("path/to/config.ini")
```
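Since unitorch is built on top of PyTorch, its models should behave as ordinary `nn.Module`s; the following is a minimal sketch under that assumption (the README documents only the constructor, so device placement and checkpointing below use plain PyTorch APIs):
```python
import torch
from unitorch.models.bart import BartForGeneration

# build the model exactly as in the snippet above
model = BartForGeneration("path/to/bart/config.json")

# assumed: unitorch models are standard nn.Modules, so the usual
# device placement and eval-mode handling apply
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device).eval()

# persist / restore weights with plain torch utilities
torch.save(model.state_dict(), "bart-checkpoint.bin")
model.load_state_dict(torch.load("bart-checkpoint.bin", map_location=device))
```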
### Multi-GPU Training
```bash
torchrun --no_python --nproc_per_node 4 \
    unitorch-train examples/configs/generation/bart.ini \
    --train_file path/to/train.tsv --dev_file path/to/dev.tsv
```
### Single-GPU Inference
```bash
unitorch-infer examples/configs/generation/bart.ini --test_file path/to/test.tsv
```
> **Find more details in the Tutorials section of the [documentation](https://fuliucansheng.github.io/unitorch).**
# License
Code is released under the MIT license.
Raw data
```json
{
"_id": null,
"home_page": null,
"name": "unitorch",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.8",
"maintainer_email": null,
"keywords": "PyTorch",
"author": null,
"author_email": "fuliucansheng <fuliucansheng@gmail.com>",
"download_url": "https://files.pythonhosted.org/packages/a8/6c/3cf65b3246bcc381c72aedf7f6db1e602038229b14cae63f2797c375d360/unitorch-0.0.0.24.tar.gz",
"platform": null,
"description": "<div align=\"Center\"> \n\n![unitorch](https://raw.githubusercontent.com/fuliucansheng/unitorch/master/unitorch.png)\n\n\n[Documentation](https://fuliucansheng.github.io/unitorch) \u2022\n[Installation Instructions](https://fuliucansheng.github.io/unitorch/installation/) \u2022\n[Reporting Issues](https://github.com/fuliucansheng/unitorch/issues/new?assignees=&labels=&template=bug-report.yml)\n\n[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/unitorch)](https://pypi.org/project/unitorch/)\n[![PyPI Version](https://badge.fury.io/py/unitorch.svg)](https://badge.fury.io/py/unitorch)\n[![PyPI Downloads](https://pepy.tech/badge/unitorch)](https://pepy.tech/project/unitorch)\n[![Github Downloads](https://img.shields.io/github/downloads/fuliucansheng/unitorch/total?color=blue&label=downloads&logo=github&logoColor=lightgrey)](https://img.shields.io/github/downloads/fuliucansheng/unitorch/total?color=blue&label=Downloads&logo=github&logoColor=lightgrey)\n\n[![License](https://img.shields.io/github/license/fuliucansheng/unitorch?color=dfd)](LICENSE)\n[![PRs Welcome](https://img.shields.io/badge/PRs-welcome-pink.svg)](https://github.com/fuliucansheng/unitorch/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22)\n\n</div>\n\n# Introduction\n \n\ud83d\udd25 unitorch is a library that simplifies and accelerates the development of unified models for natural language understanding, natural language generation, computer vision, click-through rate prediction, multimodal learning and reinforcement learning. It is built on top of PyTorch and integrates seamlessly with popular frameworks such as transformers, peft, diffusers, and fastseq. With unitorch, you can use a single command line tool or a one-line code ` import unitorch` import to leverage the state-of-the-art models and datasets without sacrificing performance or accuracy.\n\n------------------------------------\n\n# What's New Model\n\n* **SDXL** released with the paper [SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis](https://arxiv.org/abs/2307.01952) by Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas M\u00fcller, Joe Penna, Robin Rombach.\n* **LLaMA** released with the paper [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971) by Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth\u00e9e Lacroix, Baptiste Rozi\u00e8re, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample.\n* **ControlNet** released with the paper [Adding Conditional Control to Text-to-Image Diffusion Models](https://arxiv.org/abs/2302.05543) by Lvmin Zhang, Anyi Rao, Maneesh Agrawala.\n* **BLOOM** released with the paper [BLOOM: A 176B-Parameter Open-Access Multilingual Language Model](https://arxiv.org/abs/2211.05100) by BigScience Workshop: Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ili\u0107, Daniel Hesslow...\n* **PEGASUS-X** released with the paper [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao, Peter J. 
Liu.\n* **BLIP** released with the paper [BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation](https://arxiv.org/abs/2201.12086) by Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi.\n* **BEiT** released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Songhao Piao, Furu Wei.\n* **Swin Transformer** released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.\n* **CLIP** released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.\n* **mT5** released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.\n* **Vision Transformer (ViT)** released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.\n* **DeBERTa-V2** released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.\n* **DeBERTa** released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.\n* **MBart** released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.\n* **PEGASUS** released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh, Peter J. Liu.\n* **BART** released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.\n* **T5** released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. 
Liu.\n* **VisualBERT** released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/abs/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang.\n* **RoBERTa** released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.\n* **BERT** released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.\n\n------------------------------------\n\n# Features\n\n* User-Friendly Python Package\n* Faster & Streamlined Train/Inference\n* Deepspeed Integration for Large-Scale Models\n* CUDA Optimization\n* Extensive STOA Model & Task Supports\n\n# Installation\n\n```bash\npip3 install unitorch\n```\n\n# Quick Examples\n\n### Source Code\n```python\nimport unitorch\n\n# import bart model\nfrom unitorch.models.bart import BartForGeneration\nmodel = BartForGeneration(\"path/to/bart/config.json\")\n\n# use the configuration class\nfrom unitorch.cli import CoreConfigureParser\nconfig = CoreConfigureParser(\"path/to/config.ini\")\n```\n\n### Multi-GPU Training\n```bash\ntorchrun --no_python --nproc_per_node 4 \\\n\tunitorch-train examples/configs/generation/bart.ini \\\n\t--train_file path/to/train.tsv --dev_file path/to/dev.tsv\n```\n\n### Single-GPU Inference\n```bash\nunitorch-infer examples/configs/generation/bart.ini --test_file path/to/test.tsv\n```\n\n> **Find more details in the Tutorials section of the [documentation](https://fuliucansheng.github.io/unitorch).**\n\n\n# License\n\nCode released under MIT license.\n",
"bugtrack_url": null,
"license": "MIT",
"summary": "unitorch provides efficient implementation of popular unified NLU / NLG / CV / CTR / MM / RL models with PyTorch.",
"version": "0.0.0.24",
"project_urls": null,
"split_keywords": [
"pytorch"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "755feb979c2b50050f8465104a281a8eedc0832a56fdff90fea7051fcd4fc8f4",
"md5": "8d2fc212a2d70b30543b610625853f59",
"sha256": "c5e572c8171f2c897f1c32608f13f3d5c970e9e6e0a79ecd0aed1248b9e33bdd"
},
"downloads": -1,
"filename": "unitorch-0.0.0.24-py3-none-any.whl",
"has_sig": false,
"md5_digest": "8d2fc212a2d70b30543b610625853f59",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.8",
"size": 766716,
"upload_time": "2025-01-15T07:12:53",
"upload_time_iso_8601": "2025-01-15T07:12:53.491718Z",
"url": "https://files.pythonhosted.org/packages/75/5f/eb979c2b50050f8465104a281a8eedc0832a56fdff90fea7051fcd4fc8f4/unitorch-0.0.0.24-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "a86c3cf65b3246bcc381c72aedf7f6db1e602038229b14cae63f2797c375d360",
"md5": "276b84895d679e7b09a793442517cc2f",
"sha256": "095a0c00a35ea2c08ece234499d53d6b28f0be1b9788e45002d5a09eebcf406d"
},
"downloads": -1,
"filename": "unitorch-0.0.0.24.tar.gz",
"has_sig": false,
"md5_digest": "276b84895d679e7b09a793442517cc2f",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.8",
"size": 1084607,
"upload_time": "2025-01-15T07:13:00",
"upload_time_iso_8601": "2025-01-15T07:13:00.008112Z",
"url": "https://files.pythonhosted.org/packages/a8/6c/3cf65b3246bcc381c72aedf7f6db1e602038229b14cae63f2797c375d360/unitorch-0.0.0.24.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-01-15 07:13:00",
"github": false,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"lcname": "unitorch"
}
```
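The raw data above publishes SHA-256 digests for both release artifacts, so a downloaded file can be verified with the standard library alone. A short sketch (the local filename is illustrative; the expected digest is the sdist digest from the record above):
```python
# Verify a downloaded unitorch artifact against the sha256 digest
# published in the raw data above (stdlib only).
import hashlib

# digest for unitorch-0.0.0.24.tar.gz, copied from the "digests" entry
EXPECTED_SHA256 = "095a0c00a35ea2c08ece234499d53d6b28f0be1b9788e45002d5a09eebcf406d"

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks to avoid loading it into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

assert sha256_of("unitorch-0.0.0.24.tar.gz") == EXPECTED_SHA256, "digest mismatch"
print("sdist digest verified")
```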