optimum-neuron

- Name: optimum-neuron
- Version: 0.4.1 (PyPI)
- Summary: Optimum Neuron serves as the bridge between Hugging Face libraries, such as Transformers, Diffusers, and PEFT, and AWS Trainium and Inferentia accelerators. It provides a set of tools enabling easy model loading, training, and inference on both single and multiple Neuron core configurations, across a wide range of downstream tasks.
- Upload time: 2025-10-23 15:53:22
- Requires Python: >=3.10
- License: Apache-2.0
- Keywords: transformers, diffusers, mixed-precision training, fine-tuning, inference, trainium, inferentia, aws
- Homepage: https://huggingface.co/docs/optimum-neuron/index
            <!---
Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# Optimum Neuron

πŸ€— Optimum Neuron is the interface between the πŸ€— Transformers library and AWS Accelerators, including [AWS Trainium](https://aws.amazon.com/machine-learning/trainium/?nc1=h_ls) and [AWS Inferentia](https://aws.amazon.com/machine-learning/inferentia/?nc1=h_ls).

**Key Features:**
- πŸ”„ **Drop-in replacement** for standard Transformers training and inference
- ⚑ **Distributed training** support with minimal code changes
- 🎯 **Optimized models** for AWS accelerators
- πŸ“ˆ **Production-ready** inference with compiled models

## Install

To install the latest release of this package:

* For AWS Trainium (trn1) or AWS Inferentia2 (inf2)

```bash
pip install --upgrade-strategy eager optimum-neuron[neuronx]
```

* For AWS Inferentia (inf1)

```bash
pip install --upgrade-strategy eager optimum-neuron[neuron]
```

Optimum Neuron is a fast-moving project, and you may want to install it from source:

```bash
pip install git+https://github.com/huggingface/optimum-neuron.git
```

*Make sure you have installed the Neuron driver and tools before installing `optimum-neuron`; see the [extensive setup guide](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/setup/torch-neuronx.html#setup-torch-neuronx) for details.*
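
As a quick sanity check before going further, you can confirm that PyTorch/XLA sees a Neuron device. This is a minimal sketch, assuming you are on a Neuron instance (trn1/inf2) with `torch-neuronx` installed:

```python
# Minimal sanity check; only works on a Neuron instance with torch-neuronx installed.
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()  # XLA device backed by a NeuronCore
x = torch.ones(2, 2, device=device)
print(device, (x + x).cpu())  # run a trivial op to confirm the device is usable
```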

## Quick Start

Optimum Neuron makes AWS accelerator adoption seamless for Transformers users.

### Training

Training on AWS Trainium requires minimal changes to your existing code:

```python
import torch
import torch_xla.runtime as xr

from datasets import load_dataset
from transformers import AutoTokenizer

# Optimum Neuron's drop-in replacements for standard training components
from optimum.neuron import NeuronSFTConfig, NeuronSFTTrainer, NeuronTrainingArguments
from optimum.neuron.models.training import NeuronModelForCausalLM


def format_dolly_dataset(example):
    """Format Dolly dataset into instruction-following format."""
    instruction = f"### Instruction\n{example['instruction']}"
    context = f"### Context\n{example['context']}" if example["context"] else None
    response = f"### Answer\n{example['response']}"

    # Combine all parts with double newlines
    parts = [instruction, context, response]
    return "\n\n".join(part for part in parts if part)


def main():
    # Load instruction-following dataset
    dataset = load_dataset("databricks/databricks-dolly-15k", split="train")

    # Model configuration
    model_id = "Qwen/Qwen3-1.7B"
    output_dir = "qwen3-1.7b-finetuned"

    # Setup tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    tokenizer.pad_token = tokenizer.eos_token

    # Configure training for Trainium
    training_args = NeuronTrainingArguments(
        learning_rate=1e-4,
        tensor_parallel_size=8,  # Split model across 8 accelerators
        per_device_train_batch_size=1,  # Batch size per device
        gradient_accumulation_steps=8,
        logging_steps=1,
        output_dir=output_dir,
    )

    # Load model optimized for Trainium
    model = NeuronModelForCausalLM.from_pretrained(
        model_id,
        training_args.trn_config,
        torch_dtype=torch.bfloat16,
        attn_implementation="flash_attention_2", # Enable flash attention
    )

    # Setup supervised fine-tuning
    sft_config = NeuronSFTConfig(
        max_seq_length=2048,
        packing=True,  # Pack multiple samples for efficiency
        **training_args.to_dict(),
    )

    # Initialize trainer and start training
    trainer = NeuronSFTTrainer(
        model=model,
        args=sft_config,
        tokenizer=tokenizer,
        train_dataset=dataset,
        formatting_func=format_dolly_dataset,
    )

    trainer.train()

    # Share your model with the community
    trainer.push_to_hub(
        commit_message="Fine-tuned on Databricks Dolly dataset",
        blocking=True,
        model_name=output_dir,
    )

    if xr.local_ordinal() == 0:
        print(f"Training complete! Model saved to {output_dir}")


if __name__ == "__main__":
    main()
```

This example demonstrates supervised fine-tuning on the [Databricks Dolly dataset](https://huggingface.co/datasets/databricks/databricks-dolly-15k) using `NeuronSFTTrainer` and `NeuronModelForCausalLM`, the Trainium-optimized versions of the standard Transformers components.


**Compilation** (optional, but speeds up the first training run by pre-compiling the model graphs):
```bash
NEURON_CC_FLAGS="--model-type transformer" neuron_parallel_compile torchrun --nproc_per_node 32 sft_finetune_qwen3.py
```

**Training** (here `--nproc_per_node 32` targets the 32 Neuron cores of a trn1.32xlarge):
```bash
NEURON_CC_FLAGS="--model-type transformer" torchrun --nproc_per_node 32 sft_finetune_qwen3.py
```


### Inference

You can compile and export your πŸ€— Transformers models to a serialized format before running inference on Neuron devices:

```bash
optimum-cli export neuron \
  --model distilbert-base-uncased-finetuned-sst-2-english \
  --batch_size 1 \
  --sequence_length 32 \
  --auto_cast matmul \
  --auto_cast_type bf16 \
  distilbert_base_uncased_finetuned_sst2_english_neuron/
```

The command above will export `distilbert-base-uncased-finetuned-sst-2-english` with static shapes: `batch_size=1` and `sequence_length=32`, and cast all `matmul` operations from FP32 to BF16. Check out the [exporter guide](https://huggingface.co/docs/optimum-neuron/guides/export_model) for more compilation options.
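
If you prefer to stay in Python, the same export can be done programmatically by passing `export=True` together with the static input shapes to `from_pretrained`. A minimal sketch mirroring the CLI command above (the `auto_cast` options are omitted here for brevity):

```python
from optimum.neuron import NeuronModelForSequenceClassification

# Compile the PyTorch checkpoint with the same static shapes as the CLI example
model = NeuronModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english",
    export=True,         # trigger Neuron compilation on load
    batch_size=1,
    sequence_length=32,
)
model.save_pretrained("distilbert_base_uncased_finetuned_sst2_english_neuron/")
```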

Then you can run the exported Neuron model on Neuron devices with the `NeuronModelForXXX` classes, which mirror the `AutoModelForXXX` classes in πŸ€— Transformers:

```diff
from transformers import AutoTokenizer
-from transformers import AutoModelForSequenceClassification
+from optimum.neuron import NeuronModelForSequenceClassification

# PyTorch checkpoint
-model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
+model = NeuronModelForSequenceClassification.from_pretrained("distilbert_base_uncased_finetuned_sst2_english_neuron")

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
inputs = tokenizer("Hamilton is considered to be the best musical of past years.", return_tensors="pt")

logits = model(**inputs).logits
print(model.config.id2label[logits.argmax().item()])
# 'POSITIVE'
```
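
For an even shorter path, Optimum Neuron also exposes a `pipeline` helper mirroring the Transformers API. A brief sketch, assuming the export directory created above:

```python
from optimum.neuron import pipeline

# Load the exported Neuron model with a Transformers-style pipeline
classifier = pipeline(
    "text-classification",
    model="distilbert_base_uncased_finetuned_sst2_english_neuron/",
)
print(classifier("Hamilton is considered to be the best musical of past years."))
# [{'label': 'POSITIVE', 'score': ...}]
```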

### Documentation

Check out [the documentation of Optimum Neuron](https://huggingface.co/docs/optimum-neuron/index) for more advanced usage.

<!---

## Validated Models

The following model architectures, tasks and device distributions have been validated for πŸ€— Optimum Neuron:

<div align="center">

| Architecture     | State | <center>Tasks</center>                                                                                                                                                                                                                                                                                                                                 |
| ---------------- | ----- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| BERT             | βœ…     | <li>[text classification](https://github.com/huggingface/optimum-neuron/tree/main/examples/text-classification)</li><li>[question answering](https://github.com/huggingface/optimum-neuron/tree/main/examples/question-answering)</li><li>[language modeling](https://github.com/huggingface/optimum-neuron/tree/main/examples/language-modeling)</li> |
| RoBERTa          | ❌     | <li>[question answering](https://github.com/huggingface/optimum-neuron/tree/main/examples/question-answering)</li><li>[language modeling](https://github.com/huggingface/optimum-neuron/tree/main/examples/language-modeling)</li>                                                                                                                     |
| ALBERT           | ❌     | <li>[question answering](https://github.com/huggingface/optimum-neuron/tree/main/examples/question-answering)</li><li>[language modeling](https://github.com/huggingface/optimum-neuron/tree/main/examples/language-modeling)</li>                                                                                                                     |
| DistilBERT       | ❌     | <li>[question answering](https://github.com/huggingface/optimum-neuron/tree/main/examples/question-answering)</li><li>[language modeling](https://github.com/huggingface/optimum-neuron/tree/main/examples/language-modeling)</li>                                                                                                                     |
| GPT2             | ❌     | <li>[language modeling](https://github.com/huggingface/optimum-neuron/tree/main/examples/language-modeling)</li>                                                                                                                                                                                                                                       |
| T5               | ❌     | <li>[summarization](https://github.com/huggingface/optimum-neuron/tree/main/examples/summarization)</li><li>[translation](https://github.com/huggingface/optimum-neuron/tree/main/examples/translation)</li>                                                                                                                                           |
| ViT              | ❌     | <li>[image classification](https://github.com/huggingface/optimum-neuron/tree/main/examples/image-classification)</li>                                                                                                                                                                                                                                 |
| Swin             | ❌     | <li>[image classification](https://github.com/huggingface/optimum-neuron/tree/main/examples/image-classification)</li>                                                                                                                                                                                                                                 |
| Wav2Vec2         | ❌     | <li>[audio classification](https://github.com/huggingface/optimum-neuron/tree/main/examples/audio-classification)</li><li>[speech recognition](https://github.com/huggingface/optimum-neuron/tree/main/examples/speech-recognition)</li>                                                                                                               |
| Stable Diffusion | ❌     | <li>[text-to-image generation](https://github.com/huggingface/optimum-neuron/tree/main/examples/stable-diffusion)</li>                                                                                                                                                                                                                                 |
| CLIP             | ❌     | <li>[contrastive image-text training](https://github.com/huggingface/optimum-neuron/tree/main/examples/contrastive-image-text)</li>                                                                                                                                                                                                                    |

</div>

Other models and tasks supported by the πŸ€— Transformers library may also work. You can refer to this [section](https://github.com/huggingface/optimum-neuron#how-to-use-it) for using them with πŸ€— Optimum Neuron. Besides, [this page](https://github.com/huggingface/optimum-neuron/tree/main/examples) explains how to modify any [example](https://github.com/huggingface/transformers/tree/main/examples/pytorch) from the πŸ€— Transformers library to make it work with πŸ€— Optimum Neuron.

-->

If you run into any issues while using Optimum Neuron, please open an issue or submit a pull request.

            
