<!---
Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-dark.svg">
<source media="(prefers-color-scheme: light)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg">
<img alt="Hugging Face Transformers Library" src="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg" width="352" height="59" style="max-width: 100%;">
</picture>
<br/>
<br/>
</p>
<p align="center">
<a href="https://huggingface.com/models"><img alt="Checkpoints on Hub" src="https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen"></a>
<a href="https://circleci.com/gh/huggingface/transformers"><img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main"></a>
<a href="https://github.com/huggingface/transformers/blob/main/LICENSE"><img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue"></a>
<a href="https://huggingface.co/docs/transformers/index"><img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online"></a>
<a href="https://github.com/huggingface/transformers/releases"><img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg"></a>
<a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md"><img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"></a>
<a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
</p>
<h4 align="center">
<p>
<b>English</b> |
<a href="https://github.com/huggingface/transformers/blob/main/i18n/README_zh-hans.md">简体中文</a> |
<a href="https://github.com/huggingface/transformers/blob/main/i18n/README_zh-hant.md">繁體中文</a> |
<a href="https://github.com/huggingface/transformers/blob/main/i18n/README_ko.md">한국어</a> |
<a href="https://github.com/huggingface/transformers/blob/main/i18n/README_es.md">Español</a> |
<a href="https://github.com/huggingface/transformers/blob/main/i18n/README_ja.md">日本語</a> |
<a href="https://github.com/huggingface/transformers/blob/main/i18n/README_hd.md">हिन्दी</a> |
<a href="https://github.com/huggingface/transformers/blob/main/i18n/README_ru.md">Русский</a> |
<a href="https://github.com/huggingface/transformers/blob/main/i18n/README_pt-br.md">Português</a> |
<a href="https://github.com/huggingface/transformers/blob/main/i18n/README_te.md">తెలుగు</a> |
<a href="https://github.com/huggingface/transformers/blob/main/i18n/README_fr.md">Français</a> |
<a href="https://github.com/huggingface/transformers/blob/main/i18n/README_de.md">Deutsch</a> |
<a href="https://github.com/huggingface/transformers/blob/main/i18n/README_vi.md">Tiếng Việt</a> |
<a href="https://github.com/huggingface/transformers/blob/main/i18n/README_ar.md">العربية</a> |
<a href="https://github.com/huggingface/transformers/blob/main/i18n/README_ur.md">اردو</a> |
</p>
</h4>
<h3 align="center">
<p>Enhanced state-of-the-art pretrained models with Omega3 support</p>
</h3>
<h3 align="center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/transformers_as_a_model_definition.png"/>
</h3>
## Transformers-USF
**Transformers-USF** is an enhanced version of the Hugging Face Transformers library that includes **Omega3 model support** alongside all original transformers functionality.
This package acts as the model-definition framework for state-of-the-art machine learning models for text, computer vision, audio, video, and multimodal tasks, for both inference and training, **now with Omega3 capabilities**.
It centralizes the model definition so that this definition is agreed upon across the ecosystem. `transformers-usf` is the
pivot across frameworks: if a model definition is supported, it will be compatible with the majority of training
frameworks (Axolotl, Unsloth, DeepSpeed, FSDP, PyTorch-Lightning, ...), inference engines (vLLM, SGLang, TGI, ...),
and adjacent modeling libraries (llama.cpp, mlx, ...) which leverage the model definition from `transformers`.
### Key Features:
- **🔥 Omega3 Model Support**: Advanced transformer architecture with enhanced capabilities
- **🎯 Drop-in Replacement**: Use `from transformers import ...` syntax unchanged (see the sketch after this list)
- **🚀 Full Compatibility**: All original HuggingFace models and features included
- **⚡ Latest Base**: Built on transformers 4.56.0 with all recent improvements
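As a quick illustration of the drop-in claim above, existing Transformers code should run unchanged against this package. This is a minimal sketch using an ordinary Hub checkpoint (not an Omega3 model):
```py
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Existing `from transformers import ...` code keeps working unchanged;
# the checkpoint below is a regular Hub model, not an Omega3 one.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")

inputs = tokenizer("Transformers-USF is a drop-in replacement.", return_tensors="pt")
predicted = model(**inputs).logits.argmax(dim=-1)
print(model.config.id2label[predicted.item()])
```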
We pledge to help support new state-of-the-art models and democratize their usage by having their model definition be
simple, customizable, and efficient.
There are over 1M Transformers [model checkpoints](https://huggingface.co/models?library=transformers&sort=trending) on the [Hugging Face Hub](https://huggingface.com/models) you can use, plus our enhanced Omega3 models.
Explore the [Hub](https://huggingface.com/) today to find a model and use Transformers-USF to help you get started right away.
## Installation
Transformers-USF works with Python 3.9+, [PyTorch](https://pytorch.org/get-started/locally/) 2.1+, [TensorFlow](https://www.tensorflow.org/install/pip) 2.6+, and [Flax](https://flax.readthedocs.io/en/latest/) 0.4.1+.
Create and activate a virtual environment with [venv](https://docs.python.org/3/library/venv.html) or [uv](https://docs.astral.sh/uv/), a fast Rust-based Python package and project manager.
```shell
# venv
python -m venv .my-env
source .my-env/bin/activate
# uv
uv venv .my-env
source .my-env/bin/activate
```
Install Transformers-USF in your virtual environment.
```shell
# pip
pip install "transformers-usf[torch]"
# uv
uv pip install "transformers-usf[torch]"
```
Install Transformers-USF from source if you want the latest changes in the library or are interested in contributing. However, the *latest* version may not be stable. Feel free to open an [issue](https://github.com/apt-team-018/transformers-usf/issues) if you encounter an error.
```shell
git clone https://github.com/apt-team-018/transformers-usf.git
cd transformers-usf
# pip
pip install ".[torch]"

# uv
uv pip install ".[torch]"
```
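To confirm the installation, a quick version check works since the package exposes itself under the usual `transformers` import name (per the drop-in claim above):
```shell
python -c "import transformers; print(transformers.__version__)"
```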
## Using Omega3 Models
The enhanced Transformers-USF library includes powerful **Omega3 model support** with advanced transformer architecture capabilities:
### Basic Omega3 Usage
```py
from transformers import AutoModel, AutoTokenizer, Omega3Config
# Load Omega3 model with configuration
config = Omega3Config.from_pretrained("omega3-base")
model = AutoModel.from_pretrained("omega3-base", config=config)
tokenizer = AutoTokenizer.from_pretrained("omega3-base")
# Use the model for inference (request attention weights and hidden states explicitly)
inputs = tokenizer("Advanced natural language processing with Omega3 architecture", return_tensors="pt")
outputs = model(**inputs, output_attentions=True, output_hidden_states=True)

# Access advanced Omega3 features
attention_weights = outputs.attentions  # Enhanced attention mechanisms
hidden_states = outputs.hidden_states   # Improved representations
```
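Both outputs are tuples with one tensor per layer, following standard Transformers conventions, so a quick sanity check might look like this:
```py
# hidden_states: embeddings + one tensor per layer, each (batch, seq_len, hidden_size)
print(len(hidden_states), hidden_states[-1].shape)
# attentions: one tensor per layer, each (batch, num_heads, seq_len, seq_len)
print(len(attention_weights), attention_weights[0].shape)
```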
### Advanced Omega3 Features
```py
from transformers import AutoTokenizer, Omega3ForSequenceClassification, Omega3ForCausalLM

# Text classification with Omega3
tokenizer = AutoTokenizer.from_pretrained("omega3-classifier")
classifier = Omega3ForSequenceClassification.from_pretrained("omega3-classifier")
inputs = tokenizer("This transformer architecture is revolutionary!", return_tensors="pt")
predicted_class = classifier(**inputs).logits.argmax(dim=-1)

# Text generation with Omega3
tokenizer = AutoTokenizer.from_pretrained("omega3-generator")
generator = Omega3ForCausalLM.from_pretrained("omega3-generator")
inputs = tokenizer("The future of AI is powered by", return_tensors="pt")
generated = generator.generate(
    inputs.input_ids,
    max_length=100,
    do_sample=True,
    temperature=0.7,
    omega3_enhanced_sampling=True  # Omega3-specific sampling option
)
```
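`generate` returns token ids, so decoding them back to text uses the matching tokenizer (here the one loaded for the generator checkpoint):
```py
# Turn the generated token ids back into text
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```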
## Quickstart
Get started with Transformers-USF and Omega3 models using the [Pipeline](https://huggingface.co/docs/transformers/pipeline_tutorial) API. The `Pipeline` supports all standard tasks plus enhanced Omega3 capabilities.
### Text Generation with Omega3
```py
from transformers import pipeline
# Create pipeline with Omega3 model
pipeline = pipeline(task="text-generation", model="omega3-base")
result = pipeline("The future of AI is powered by ")
print(result[0]['generated_text'])
# Expected: "The future of AI is powered by advanced transformer architectures like Omega3..."
```
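Standard generation arguments can be passed directly on the pipeline call to control output length and sampling, for example:
```py
result = pipeline(
    "The future of AI is powered by ",
    max_new_tokens=50,
    do_sample=True,
    temperature=0.7,
)
print(result[0]["generated_text"])
```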
### Conversational AI with Omega3
```py
import torch
from transformers import pipeline
chat = [
{"role": "system", "content": "You are an AI assistant powered by Omega3 architecture."},
{"role": "user", "content": "What makes Omega3 models special?"}
]
# Use Omega3 for enhanced conversational AI
pipeline = pipeline(
task="text-generation",
model="omega3-chat",
model_kwargs={"torch_dtype": torch.bfloat16, "device_map": "auto"}
)
response = pipeline(chat, max_new_tokens=512)
print(response[0]["generated_text"][-1]["content"])
```
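Because `generated_text` contains the full chat history (including the new assistant turn), the conversation can be continued by appending another user message and calling the pipeline again:
```py
chat = response[0]["generated_text"]
chat.append({"role": "user", "content": "Can I load one of these models with AutoModel as well?"})
response = pipeline(chat, max_new_tokens=512)
print(response[0]["generated_text"][-1]["content"])
```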
> [!TIP]
> You can chat with Omega3 models directly from the command line:
> ```shell
> transformers chat omega3-base --model-type omega3
> ```
Expand the examples below to see how `Pipeline` works for different modalities and tasks.
<details>
<summary>Automatic speech recognition</summary>
```py
from transformers import pipeline
pipeline = pipeline(task="automatic-speech-recognition", model="openai/whisper-large-v3")
pipeline("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
{'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its creed.'}
```
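Local files work the same way; for longer recordings, Whisper-based pipelines also accept `return_timestamps=True` (the path below is only a placeholder):
```py
# Placeholder path: any local audio file readable by ffmpeg works
pipeline("path/to/recording.flac", return_timestamps=True)
```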
</details>
<details>
<summary>Image classification</summary>
<h3 align="center">
<a><img src="https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png"></a>
</h3>
```py
from transformers import pipeline
pipeline = pipeline(task="image-classification", model="facebook/dinov2-small-imagenet1k-1-layer")
pipeline("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png")
[{'label': 'macaw', 'score': 0.997848391532898},
{'label': 'sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita',
'score': 0.0016551691805943847},
{'label': 'lorikeet', 'score': 0.00018523589824326336},
{'label': 'African grey, African gray, Psittacus erithacus',
'score': 7.85409429227002e-05},
{'label': 'quail', 'score': 5.502637941390276e-05}]
```
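The number of returned labels can be limited with the standard `top_k` argument:
```py
# Keep only the two highest-scoring labels
pipeline("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png", top_k=2)
```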
</details>
<details>
<summary>Visual question answering</summary>
<h3 align="center">
<a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-few-shot.jpg"></a>
</h3>
```py
from transformers import pipeline
pipeline = pipeline(task="visual-question-answering", model="Salesforce/blip-vqa-base")
pipeline(
image="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-few-shot.jpg",
question="What is in the image?",
)
[{'answer': 'statue of liberty'}]
```
</details>
## Why should I use Transformers?
1. Easy-to-use state-of-the-art models:
- High performance on natural language understanding & generation, computer vision, audio, video, and multimodal tasks.
- Low barrier to entry for researchers, engineers, and developers.
- Few user-facing abstractions with just three classes to learn.
- A unified API for using all our pretrained models.
1. Lower compute costs, smaller carbon footprint:
- Share trained models instead of training from scratch.
- Reduce compute time and production costs.
- Dozens of model architectures with 1M+ pretrained checkpoints across all modalities.
1. Choose the right framework for every part of a model's lifetime:
- Train state-of-the-art models in 3 lines of code.
- Move a single model between PyTorch/JAX/TF2.0 frameworks at will.
- Pick the right framework for training, evaluation, and production.
1. Easily customize a model or an example to your needs:
- We provide examples for each architecture to reproduce the results published by its original authors.
- Model internals are exposed as consistently as possible.
- Model files can be used independently of the library for quick experiments.
<a target="_blank" href="https://huggingface.co/enterprise">
<img alt="Hugging Face Enterprise Hub" src="https://github.com/user-attachments/assets/247fb16d-d251-4583-96c4-d3d76dda4925">
</a><br>
## Why shouldn't I use Transformers?
- This library is not a modular toolbox of building blocks for neural nets. The code in the model files is not refactored with additional abstractions on purpose, so that researchers can quickly iterate on each of the models without diving into additional abstractions/files.
- The training API is optimized to work with PyTorch models provided by Transformers. For generic machine learning loops, you should use another library like [Accelerate](https://huggingface.co/docs/accelerate).
- The [example scripts](https://github.com/huggingface/transformers/tree/main/examples) are only *examples*. They may not necessarily work out-of-the-box on your specific use case and you'll need to adapt the code for it to work.
## 100 projects using Transformers
Transformers is more than a toolkit for using pretrained models: it's a community of projects built around it and the
Hugging Face Hub. We want Transformers to enable developers, researchers, students, professors, engineers, and anyone
else to build their dream projects.
To celebrate Transformers reaching 100,000 stars, we wanted to put the spotlight on the
community with the [awesome-transformers](./awesome-transformers.md) page, which lists 100
incredible projects built with Transformers.
If you own or use a project that you believe should be part of the list, please open a PR to add it!
## Example models
You can test most of our models directly on their [Hub model pages](https://huggingface.co/models).
Expand each modality below to see a few example models for various use cases.
<details>
<summary>Audio</summary>
- Audio classification with [Whisper](https://huggingface.co/openai/whisper-large-v3-turbo)
- Automatic speech recognition with [Moonshine](https://huggingface.co/UsefulSensors/moonshine)
- Keyword spotting with [Wav2Vec2](https://huggingface.co/superb/wav2vec2-base-superb-ks)
- Speech to speech generation with [Moshi](https://huggingface.co/kyutai/moshiko-pytorch-bf16)
- Text to audio with [MusicGen](https://huggingface.co/facebook/musicgen-large)
- Text to speech with [Bark](https://huggingface.co/suno/bark)
</details>
<details>
<summary>Computer vision</summary>
- Automatic mask generation with [SAM](https://huggingface.co/facebook/sam-vit-base)
- Depth estimation with [DepthPro](https://huggingface.co/apple/DepthPro-hf)
- Image classification with [DINO v2](https://huggingface.co/facebook/dinov2-base)
- Keypoint detection with [SuperPoint](https://huggingface.co/magic-leap-community/superpoint)
- Keypoint matching with [SuperGlue](https://huggingface.co/magic-leap-community/superglue_outdoor)
- Object detection with [RT-DETRv2](https://huggingface.co/PekingU/rtdetr_v2_r50vd)
- Pose Estimation with [VitPose](https://huggingface.co/usyd-community/vitpose-base-simple)
- Universal segmentation with [OneFormer](https://huggingface.co/shi-labs/oneformer_ade20k_swin_large)
- Video classification with [VideoMAE](https://huggingface.co/MCG-NJU/videomae-large)
</details>
<details>
<summary>Omega3 Specialized Tasks</summary>
- **Advanced Text Generation** with [Omega3-Large](omega3-large) - Enhanced contextual understanding
- **Multimodal Reasoning** with [Omega3-Vision](omega3-vision) - Integrated text and image processing
- **Conversational AI** with [Omega3-Chat](omega3-chat) - Superior dialogue capabilities
- **Code Generation** with [Omega3-Code](omega3-code) - Programming language understanding
- **Scientific Text Processing** with [Omega3-Science](omega3-science) - Domain-specific reasoning
- **Creative Writing** with [Omega3-Creative](omega3-creative) - Enhanced narrative generation
- **Technical Documentation** with [Omega3-Tech](omega3-tech) - Structured content creation
- **Multilingual Translation** with [Omega3-Translate](omega3-translate) - Cross-language understanding
</details>
<details>
<summary>NLP with Omega3</summary>
- **Advanced Text Classification** with [Omega3-Classifier](omega3-classifier) - Enhanced semantic understanding
- **Named Entity Recognition** with [Omega3-NER](omega3-ner) - Improved entity extraction
- **Sentiment Analysis** with [Omega3-Sentiment](omega3-sentiment) - Nuanced emotional understanding
- **Question Answering** with [Omega3-QA](omega3-qa) - Context-aware response generation
- **Text Summarization** with [Omega3-Summarize](omega3-summarize) - Intelligent content distillation
- **Language Translation** with [Omega3-Translate](omega3-translate) - High-quality cross-language conversion
- **Text Generation** with [Omega3-Generator](omega3-generator) - Creative and coherent text production
</details>
## Citation
We now have a [paper](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) you can cite for the 🤗 Transformers library:
```bibtex
@inproceedings{wolf-etal-2020-transformers,
title = "Transformers: State-of-the-Art Natural Language Processing",
author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = oct,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
pages = "38--45"
}
```