            <!---
Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# Optimum Neuron

🤗 Optimum Neuron is the interface between the 🤗 Transformers library and AWS accelerators, including [AWS Trainium](https://aws.amazon.com/machine-learning/trainium/?nc1=h_ls) and [AWS Inferentia](https://aws.amazon.com/machine-learning/inferentia/?nc1=h_ls).
It provides a set of tools enabling easy model loading, training and inference in single- and multi-accelerator settings for different downstream tasks.
The list of officially validated models and tasks is available [here](TODO:). Users can try other models and tasks with only a few changes.

## Install
To install the latest release of this package:

* For AWS Trainium (trn1) or AWS Inferentia2 (inf2)

```bash
pip install optimum[neuronx]
```

* For AWS Inferentia (inf1)

```bash
pip install optimum[neuron]
```

Optimum Neuron is a fast-moving project, and you may want to install it from source:

```bash
pip install git+https://github.com/huggingface/optimum-neuron.git
```

> Alternatively, you can install the package without pip as follows:
> ```bash
> git clone https://github.com/huggingface/optimum-neuron.git
> cd optimum-neuron
> python setup.py install
> ```

*Make sure that you have installed the Neuron driver and tools before installing `optimum-neuron`, [more extensive guide here](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/setup/torch-neuronx.html#setup-torch-neuronx).*
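
Once the driver and tools are set up, you can sanity-check that the Neuron devices are visible with `neuron-ls`, the device-listing utility that ships with AWS's `aws-neuronx-tools` package:

```bash
# List the Neuron devices available on this instance
neuron-ls
```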

Last but not least, don't forget to install the requirements for every example:

```bash
cd <example-folder>
pip install -r requirements.txt
```


## Quick Start

🤗 Optimum Neuron was designed with one goal in mind: **to make training and inference straightforward for any 🤗 Transformers user while leveraging the complete power of AWS accelerators**.

### Training

There are two main classes one needs to know:
- `TrainiumArgumentParser`: inherits from the original [HfArgumentParser](https://huggingface.co/docs/transformers/main/en/internal/trainer_utils#transformers.HfArgumentParser) in Transformers, adding checks on the argument values to make sure that they will work well with AWS Trainium instances (see the sketch after this list).
- [NeuronTrainer](https://huggingface.co/docs/optimum/neuron/package_reference/trainer): this version of the Trainer takes care of the proper checks and changes to the supported models to make them trainable on AWS Trainium instances.
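
Below is a minimal sketch of how `TrainiumArgumentParser` can be used; the import path and the `HfArgumentParser`-style interface are assumptions based on the description above, so check the package reference for the exact location:

```python
# A minimal sketch, assuming TrainiumArgumentParser is importable from
# optimum.neuron and mirrors HfArgumentParser's interface.
from dataclasses import dataclass, field

from transformers import TrainingArguments
from optimum.neuron import TrainiumArgumentParser  # import path assumed


@dataclass
class ModelArguments:
    model_name_or_path: str = field(default="bert-base-uncased")


# Parse CLI flags into the dataclasses, with extra validation that the
# resulting configuration is compatible with Trainium instances.
parser = TrainiumArgumentParser((ModelArguments, TrainingArguments))
model_args, training_args = parser.parse_args_into_dataclasses()
```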

The [NeuronTrainer](https://huggingface.co/docs/optimum/neuron/package_reference/trainer) is very similar to the [🤗 Transformers Trainer](https://huggingface.co/docs/transformers/main_classes/trainer): adapting a script that uses the Trainer to work on Trainium mostly consists of swapping the `Trainer` class for the `NeuronTrainer` one.
That's how most of the [example scripts](https://github.com/huggingface/optimum-neuron/tree/main/examples) were adapted from their [original counterparts](https://github.com/huggingface/transformers/tree/main/examples/pytorch).

```diff
from transformers import TrainingArguments
+from optimum.neuron import NeuronTrainer as Trainer

training_args = TrainingArguments(
  # training arguments...
)

# A lot of code here

# Initialize our Trainer
trainer = Trainer(
    model=model,
    args=training_args,  # Original training arguments.
    train_dataset=train_dataset if training_args.do_train else None,
    eval_dataset=eval_dataset if training_args.do_eval else None,
    compute_metrics=compute_metrics,
    tokenizer=tokenizer,
    data_collator=data_collator,
)
```
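
Since `NeuronTrainer` keeps the `Trainer` API, the rest of the script stays unchanged: calling `trainer.train()` (and `trainer.evaluate()` when `--do_eval` is set) works exactly as it does with the stock Trainer.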

### Inference

You can compile and export your 🤗 Transformers models to a serialized format before running inference on Neuron devices:

```bash
optimum-cli export neuron \
  --model distilbert-base-uncased-finetuned-sst-2-english \
  --batch_size 1 \
  --sequence_length 32 \
  --auto_cast matmul \
  --auto_cast_type bf16 \
  distilbert_base_uncased_finetuned_sst2_english_neuron/
```

The command above will export `distilbert-base-uncased-finetuned-sst-2-english` with static shapes: `batch_size=1` and `sequence_length=32`, and cast all `matmul` operations from FP32 to BF16. Check out the [exporter guide](https://huggingface.co/docs/optimum-neuron/guides/export_model) for more compilation options.
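The same export can also be done from Python. Below is a sketch mirroring the CLI command above; the `export=True` code path and the keyword names follow the exporter guide, so treat them as assumptions if your version of the library differs:

```python
# A sketch of the Python-side export, mirroring the CLI command above.
from optimum.neuron import NeuronModelForSequenceClassification

model = NeuronModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english",
    export=True,            # compile the PyTorch checkpoint for Neuron
    batch_size=1,           # static shapes, as with --batch_size / --sequence_length
    sequence_length=32,
    auto_cast="matmul",     # cast matmul operations...
    auto_cast_type="bf16",  # ...from FP32 to BF16
)
model.save_pretrained("distilbert_base_uncased_finetuned_sst2_english_neuron/")
```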

Then you can run the exported Neuron model on Neuron devices with the `NeuronModelForXXX` classes, which are similar to the `AutoModelForXXX` classes in 🤗 Transformers:

```diff
from transformers import AutoTokenizer
-from transformers import AutoModelForSequenceClassification
+from optimum.neuron import NeuronModelForSequenceClassification

# PyTorch checkpoint
-model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
+model = NeuronModelForSequenceClassification.from_pretrained("distilbert_base_uncased_finetuned_sst2_english_neuron")

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
inputs = tokenizer("Hamilton is considered to be the best musical of past years.", return_tensors="pt")

logits = model(**inputs).logits
print(model.config.id2label[logits.argmax().item()])
# 'POSITIVE'
```
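
Exported models can also be wrapped in a pipeline-style API. The sketch below assumes the `pipeline` helper lives under `optimum.neuron.pipelines` and mirrors `transformers.pipeline`; check the documentation linked below for the exact entry point:

```python
# A sketch using the pipeline helper (import path assumed) with the
# directory produced by the export command above.
from optimum.neuron.pipelines import pipeline

classifier = pipeline(
    "text-classification",
    model="distilbert_base_uncased_finetuned_sst2_english_neuron",
)
print(classifier("Hamilton is considered to be the best musical of past years."))
# e.g. [{'label': 'POSITIVE', 'score': ...}]
```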

### Documentation

Check out [the documentation of Optimum Neuron](https://huggingface.co/docs/optimum-neuron/index) for more advanced usage.

<!---

## Validated Models

The following model architectures, tasks and device distributions have been validated for 🤗 Optimum Neuron:

<div align="center">

| Architecture     | State | <center>Tasks</center>                                                                                                                                                                                                                                                                                                                                 |
| ---------------- | ----- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| BERT             | ✅     | <li>[text classification](https://github.com/huggingface/optimum-neuron/tree/main/examples/text-classification)</li><li>[question answering](https://github.com/huggingface/optimum-neuron/tree/main/examples/question-answering)</li><li>[language modeling](https://github.com/huggingface/optimum-neuron/tree/main/examples/language-modeling)</li> |
| RoBERTa          | ❌     | <li>[question answering](https://github.com/huggingface/optimum-neuron/tree/main/examples/question-answering)</li><li>[language modeling](https://github.com/huggingface/optimum-neuron/tree/main/examples/language-modeling)</li> |
| ALBERT           | ❌     | <li>[question answering](https://github.com/huggingface/optimum-neuron/tree/main/examples/question-answering)</li><li>[language modeling](https://github.com/huggingface/optimum-neuron/tree/main/examples/language-modeling)</li> |
| DistilBERT       | ❌     | <li>[question answering](https://github.com/huggingface/optimum-neuron/tree/main/examples/question-answering)</li><li>[language modeling](https://github.com/huggingface/optimum-neuron/tree/main/examples/language-modeling)</li> |
| GPT2             | ❌     | <li>[language modeling](https://github.com/huggingface/optimum-neuron/tree/main/examples/language-modeling)</li> |
| T5               | ❌     | <li>[summarization](https://github.com/huggingface/optimum-neuron/tree/main/examples/summarization)</li><li>[translation](https://github.com/huggingface/optimum-neuron/tree/main/examples/translation)</li> |
| ViT              | ❌     | <li>[image classification](https://github.com/huggingface/optimum-neuron/tree/main/examples/image-classification)</li> |
| Swin             | ❌     | <li>[image classification](https://github.com/huggingface/optimum-neuron/tree/main/examples/image-classification)</li> |
| Wav2Vec2         | ❌     | <li>[audio classification](https://github.com/huggingface/optimum-neuron/tree/main/examples/audio-classification)</li><li>[speech recognition](https://github.com/huggingface/optimum-neuron/tree/main/examples/speech-recognition)</li> |
| Stable Diffusion | ❌     | <li>[text-to-image generation](https://github.com/huggingface/optimum-neuron/tree/main/examples/stable-diffusion)</li> |
| CLIP             | ❌     | <li>[contrastive image-text training](https://github.com/huggingface/optimum-neuron/tree/main/examples/contrastive-image-text)</li> |

</div>

Other models and tasks supported by the 🤗 Transformers library may also work. You can refer to this [section](https://github.com/huggingface/optimum-neuron#how-to-use-it) for using them with 🤗 Optimum Neuron. In addition, [this page](https://github.com/huggingface/optimum-neuron/tree/main/examples) explains how to modify any [example](https://github.com/huggingface/transformers/tree/main/examples/pytorch) from the 🤗 Transformers library to make it work with 🤗 Optimum Neuron.

-->

If you run into any issues while using the library, please open an issue or a pull request.

## Text-generation-inference

This repository maintains a [text-generation-inference (TGI)](https://github.com/huggingface/optimum-neuron/tree/main/text-generation-inference) Docker image for deployment on AWS Inferentia2.
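
As a sketch of how such an image is typically launched on an inf2 instance (the image name, port and device flags below are assumptions; see the linked TGI folder for the exact invocation):

```bash
# A sketch: serve an exported model with the Neuron TGI image.
# Image name and flags are assumptions; check the TGI folder above.
docker run -p 8080:80 \
  -v $(pwd)/data:/data \
  --device=/dev/neuron0 \
  ghcr.io/huggingface/neuronx-tgi:latest \
  --model-id <exported-model-id-or-path>
```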

            
