# transformers-stream-generator
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/transformers-stream-generator.svg)](https://pypi.org/project/transformers-stream-generator/)
[![PyPI](https://img.shields.io/pypi/v/transformers-stream-generator.svg)](https://pypi.org/project/transformers-stream-generator/)
[![GitHub license badge](https://img.shields.io/github/license/LowinLi/transformers-stream-generator)](https://github.com/LowinLi/transformers-stream-generator/blob/main/LICENSE)
[![Blog](https://img.shields.io/badge/blog-LowinLi-important)](https://lowin.li)
### Description
A text generation method, built on Hugging Face Transformers, that returns a generator and streams out each token in real time during inference.
### Web Demo
+ original
![](./pic/original.gif)
+ stream
![](./pic/stream.gif)
### Installation
```bash
pip install transformers-stream-generator
```
### Usage
1. Add two lines of code before your original code:
```python
from transformers_stream_generator import init_stream_support
init_stream_support()
```
2. Pass `do_stream=True` to `model.generate` and keep `do_sample=True`; the call then returns a generator:
```python
generator = model.generate(input_ids, do_stream=True, do_sample=True)
for token in generator:
    word = tokenizer.decode(token)
    print(word)
```
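Conceptually, the streamed `generate` call behaves like an ordinary Python generator: each decoding step produces one token ID and yields it to the caller immediately, instead of returning the full sequence at the end. A minimal sketch of that pattern, with a hypothetical `fake_stream_generate` standing in for the patched `model.generate` (no model or library required):

```python
def fake_stream_generate(prompt_ids, max_new_tokens=3):
    # Hypothetical stand-in for model.generate(..., do_stream=True):
    # each step "produces" the next token and yields it right away.
    next_id = max(prompt_ids)
    for _ in range(max_new_tokens):
        next_id += 1
        yield next_id

tokens = []
for token in fake_stream_generate([10, 20, 30]):
    # A real caller would run tokenizer.decode(token) here and print it.
    tokens.append(token)

print(tokens)  # [31, 32, 33]
```

The caller can start displaying output after the first `yield`, which is what makes the web demo above feel responsive.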
### Example
+ Run the Python script [example](./example/run.py) with GPT-2
+ Run the web [example](./example/run_web.py) with GPT-2 and test it with the client [example](./example/test_client.py)