curated-transformers

Name: curated-transformers
Version: 2.0.1
Home page: https://github.com/explosion/curated-transformers
Summary: A PyTorch library of transformer models and components
Upload time: 2024-04-17 17:14:44
Author: Explosion
Requires Python: >=3.9
License: MIT
Requirements: catalogue (>=2.0.4,<2.1.0), curated-tokenizers (>=2.0.0,<3.0.0), huggingface-hub (>=0.14), tokenizers (>=0.13.3), torch (>=1.12.0), mypy (>=1.5.0,<1.6.0), pytest
<img src="docs/source/logo.png" width="100" align="right"/>

# Curated Transformers

[![Documentation Status](https://readthedocs.org/projects/curated-transformers/badge/?version=latest)](https://curated-transformers.readthedocs.io/en/latest/?badge=latest)
[![pypi Version](https://img.shields.io/pypi/v/curated-transformers.svg?style=flat-square&logo=pypi&logoColor=white)](https://pypi.org/project/curated-transformers/)

**State-of-the-art transformers, brick by brick**

Curated Transformers is a transformer library for PyTorch. It provides
state-of-the-art models composed from a set of reusable
components. The stand-out features of Curated Transformers are:

- ⚡️ Supports state-of-the-art transformer models, including LLMs such
  as Falcon, Llama, and Dolly v2.
- 👩‍🎨 Each model is composed from a set of reusable building blocks,
  providing many benefits:
  - Implementing a feature or bugfix benefits all models. For example,
    all models support 4/8-bit inference through the
    [`bitsandbytes`](https://github.com/TimDettmers/bitsandbytes) library,
    and each model can use the PyTorch `meta` device to avoid unnecessary
    allocations and initialization (see the sketch after this list).
  - Adding new models to the library is low-effort.
  - Do you want to try a new transformer architecture? A BERT encoder
    with rotary embeddings? You can make it in a pinch.
- 💎 Consistent type annotations of all public APIs:
  - Get great coding support from your IDE.
  - Integrates well with your existing type-checked code.
- 🎓 Great for education, because the building blocks are easy to study.
- 📦 Minimal dependencies.
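
The `meta` device point is easy to demonstrate in plain PyTorch; the sketch
below is generic PyTorch rather than Curated Transformers API:

```python
import torch
import torch.nn as nn

# Construct a layer on the `meta` device: parameter shapes and dtypes are
# tracked, but no memory is allocated and no initialization kernels run.
layer = nn.Linear(4096, 4096, device="meta")
assert layer.weight.is_meta  # no storage behind this tensor yet

# Later, allocate real (uninitialized) storage on a concrete device, e.g.
# just before copying checkpoint weights into the module.
layer = layer.to_empty(device="cpu")
```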

Curated Transformers has been production-tested by [Explosion](http://explosion.ai/)
and will be used as the default transformer implementation in spaCy 3.7.

## 🧰 Supported Model Architectures

Supported encoder-only models:

- ALBERT
- BERT
- CamemBERT
- RoBERTa
- XLM-RoBERTa

Supported decoder-only models:

- Falcon
- GPT-NeoX
- Llama 1/2
- MPT

Generator wrappers:

- Dolly v2
- Falcon
- Llama 1/2
- MPT

All types of models can be loaded from the Hugging Face Hub.
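
For example, loading an encoder checkpoint might look like this (a minimal
sketch following the `from_hf_hub` pattern from the usage docs; treat the
exact import path and signature as assumptions and check the API reference):

```python
import torch
from curated_transformers.models import AutoEncoder

# Download a BERT checkpoint from the Hugging Face Hub and construct the
# matching curated encoder on the first CUDA device.
encoder = AutoEncoder.from_hf_hub(
    name="bert-base-uncased",
    device=torch.device("cuda", index=0),
)
```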

spaCy integration for Curated Transformers is provided by the
[`spacy-curated-transformers`](https://github.com/explosion/spacy-curated-transformers)
package.

## ⏳ Install

```bash
pip install curated-transformers
```

### CUDA support

The default Linux build of PyTorch ships with CUDA 11.7 support. You should
explicitly install a CUDA build in the following cases:

- If you want to use Curated Transformers on Windows.
- If you want to use Curated Transformers on Linux with Ada-generation GPUs.
  The standard PyTorch build supports Ada GPUs, but you can get considerable
  performance improvements by installing PyTorch with CUDA 11.8 support.

In both cases, you can install PyTorch with:

```bash
pip install torch --index-url https://download.pytorch.org/whl/cu118
```
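
You can check which build you ended up with by querying PyTorch directly
(plain PyTorch, independent of Curated Transformers):

```python
import torch

print(torch.version.cuda)         # e.g. "11.8" for the cu118 build
print(torch.cuda.is_available())  # True if a usable GPU and driver are present
```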

## 🏃‍♀️ Usage Example

```python-console
>>> import torch
>>> from curated_transformers.generation import AutoGenerator, GreedyGeneratorConfig
>>> generator = AutoGenerator.from_hf_hub(name="tiiuae/falcon-7b-instruct", device=torch.device("cuda"))
>>> generator(["What is Python in one sentence?", "What is Rust in one sentence?"], GreedyGeneratorConfig())
['Python is a high-level programming language that is easy to learn and widely used for web development, data analysis, and automation.',
 'Rust is a programming language that is designed to be a safe, concurrent, and efficient replacement for C++.']
```

You can find more [usage examples](https://curated-transformers.readthedocs.io/en/latest/usage.html)
in the documentation. You can also find example programs that use Curated Transformers in the
[`examples`](examples/) directory.

## 📚 Documentation

You can read more about how to use Curated Transformers here:

- [Overview](https://curated-transformers.readthedocs.io/en/v1.2.x/) ([Development](https://curated-transformers.readthedocs.io/en/latest/))
- [Usage](https://curated-transformers.readthedocs.io/en/v1.2.x/usage.html) ([Development](https://curated-transformers.readthedocs.io/en/latest/usage.html))
- [API](https://curated-transformers.readthedocs.io/en/v1.2.x/api.html) ([Development](https://curated-transformers.readthedocs.io/en/latest/api.html))

## 🗜️ Quantization

`curated-transformers` supports dynamic 8-bit and 4-bit quantization of models by leveraging the [`bitsandbytes` library](https://github.com/TimDettmers/bitsandbytes).

Install the `quantization` extra to automatically pull in the necessary dependencies:

```bash
pip install curated-transformers[quantization]
```
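
A minimal sketch of loading a quantized generator, assuming the
`BitsAndBytesConfig` helper from the quantization module (treat the import
path and the `for_8bit` signature as assumptions; see the API docs):

```python
import torch
from curated_transformers.generation import AutoGenerator
from curated_transformers.quantization import BitsAndBytesConfig

# Load the model with dynamic 8-bit quantization; bitsandbytes must be
# installed (e.g. through the `quantization` extra above).
generator = AutoGenerator.from_hf_hub(
    name="tiiuae/falcon-7b-instruct",
    device=torch.device("cuda"),
    quantization_config=BitsAndBytesConfig.for_8bit(),
)
```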

            
