[![Multi-Modality](images/agorabanner.png)](https://discord.gg/qUtxnK2NMf)
# Andromeda: Ultra-Fast and Ultra-Intelligent SOTA Language Model 🚀🌌
<div align="center">
[![Open Bounties](https://img.shields.io/endpoint?url=https%3A%2F%2Fconsole.algora.io%2Fapi%2Fshields%2Fkyegomez%2Fbounties%3Fstatus%3Dopen)](https://console.algora.io/org/kyegomez/bounties?status=open)
[![Rewarded Bounties](https://img.shields.io/endpoint?url=https%3A%2F%2Fconsole.algora.io%2Fapi%2Fshields%2Fkyegomez%2Fbounties%3Fstatus%3Dcompleted)](https://console.algora.io/org/kyegomez/bounties?status=completed)
[![GitHub issues](https://img.shields.io/github/issues/kyegomez/Andromeda)](https://github.com/kyegomez/Andromeda/issues)
[![GitHub forks](https://img.shields.io/github/forks/kyegomez/Andromeda)](https://github.com/kyegomez/Andromeda/network)
[![GitHub stars](https://img.shields.io/github/stars/kyegomez/Andromeda)](https://github.com/kyegomez/Andromeda/stargazers)
[![GitHub license](https://img.shields.io/github/license/kyegomez/Andromeda)](https://github.com/kyegomez/Andromeda/blob/main/LICENSE)
[![Share on Twitter](https://img.shields.io/twitter/url/https/twitter.com/cloudposse.svg?style=social&label=Share%20%40kyegomez/Andromeda)](https://twitter.com/intent/tweet?text=Check%20out%20this%20amazing%20AI%20project:%20Andromeda&url=https%3A%2F%2Fgithub.com%2Fkyegomez%2FAndromeda)
[![Share on Facebook](https://img.shields.io/badge/Share-%20facebook-blue)](https://www.facebook.com/sharer/sharer.php?u=https%3A%2F%2Fgithub.com%2Fkyegomez%2FAndromeda)
[![Share on LinkedIn](https://img.shields.io/badge/Share-%20linkedin-blue)](https://www.linkedin.com/shareArticle?mini=true&url=https%3A%2F%2Fgithub.com%2Fkyegomez%2FAndromeda&title=&summary=&source=)
![Discord](https://img.shields.io/discord/999382051935506503)
[![Share on Reddit](https://img.shields.io/badge/-Share%20on%20Reddit-orange)](https://www.reddit.com/submit?url=https%3A%2F%2Fgithub.com%2Fkyegomez%2FAndromeda&title=Andromeda%20-%20the%20next%20generation%20AI%20shields)
[![Share on Hacker News](https://img.shields.io/badge/-Share%20on%20Hacker%20News-orange)](https://news.ycombinator.com/submitlink?u=https%3A%2F%2Fgithub.com%2Fkyegomez%2FAndromeda&t=Andromeda%20-%20the%20next%20generation%20AI%20shields)
[![Share on Pinterest](https://img.shields.io/badge/-Share%20on%20Pinterest-red)](https://pinterest.com/pin/create/button/?url=https%3A%2F%2Fgithub.com%2Fkyegomez%2FAndromeda&media=https%3A%2F%2Fexample.com%2Fimage.jpg&description=Andromeda%20-%20the%20next%20generation%20AI%20shields)
[![Share on WhatsApp](https://img.shields.io/badge/-Share%20on%20WhatsApp-green)](https://api.whatsapp.com/send?text=Check%20out%20Andromeda%20-%20the%20next%20generation%20AI%20shields%20%23Andromeda%20%23AI%0A%0Ahttps%3A%2F%2Fgithub.com%2Fkyegomez%2FAndromeda)
</div>
Welcome to Andromeda, the fastest, most creative, and most reliable language model ever built. Train your own version, run inference, and fine-tune it with simple plug-and-play scripts. Get started in 10 seconds:
## Features
- 💼 Handle Ultra Long Sequences (32,000-200,000+ context lengths)
- ⚡ Ultra Fast Processing (32,000+ tokens in under 100ms)
- 🎓 Superior Reasoning Capabilities
## 🎯 Principles
- **Efficiency**: Optimized with techniques such as flash attention, rotary positional embeddings, and deep normalization.
- **Flexibility**: Adapts to a wide range of tasks and domains.
- **Scalability**: Designed to scale with resources and data sizes.
- **Community-Driven**: Thrives on contributions from the open-source community.
---
## 💻 Install
`python3.11 -m pip install --upgrade andromeda-torch`
## Usage
- Forward pass with random inputs
```python
import torch

from andromeda_torch.configs import Andromeda1Billion

# Instantiate the 1B-parameter configuration
model = Andromeda1Billion()

# Batch of 1 sequence of 1024 random token IDs; .cuda() assumes a GPU is available
# (move the model to the same device if it is not placed there by default)
x = torch.randint(0, 256, (1, 1024)).cuda()

out = model(x)  # logits of shape (1, 1024, 20000)
print(out)
```
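
The forward pass returns per-position vocabulary logits. Here is a minimal sketch of greedy next-token selection, assuming `out` above holds raw logits of shape `(batch, seq_len, vocab_size)`:

```python
import torch

# Greedy next-token selection (a sketch; assumes `out` from the example above
# holds raw logits of shape (batch, seq_len, vocab_size)).
logits = out[:, -1, :]                     # logits at the last position
probs = torch.softmax(logits, dim=-1)      # optional: convert to probabilities
next_token = torch.argmax(probs, dim=-1)   # most likely next token ID, shape (batch,)
print(next_token)
```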
- Tokenized inputs
```python
from andromeda_torch import Tokenizer
from andromeda_torch.configs import Andromeda1Billion
model = Andromeda1Billion()
tokenizer = Tokenizer()
encoded_text = tokenizer.encode("Hello world!")
out = model(encoded_text)
print(out)
```
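
Depending on the tokenizer build, `encode` may return a plain Python list of token IDs rather than a tensor; the sketch below (the return type is an assumption, not a documented guarantee) wraps the IDs in a batched `LongTensor` before the forward pass:

```python
import torch

# If `encode` returns a plain list of token IDs (an assumption; it may already
# return a tensor), add a batch dimension and convert to a LongTensor.
ids = tokenizer.encode("Hello world!")
if not torch.is_tensor(ids):
    ids = torch.tensor([ids], dtype=torch.long)  # shape (1, seq_len)

out = model(ids)  # logits of shape (1, seq_len, vocab_size)
print(out.shape)
```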
## 📚 Training
1. Set the environment variables:
    - `ENTITY_NAME`: Your wandb entity or project name
    - `OUTPUT_DIR`: Directory to save the weights (e.g., `./weights`)
    - `MASTER_ADDR`: Address of the master node for distributed training
    - `MASTER_PORT`: Port of the master node for distributed training
    - `RANK`: Rank of the current node
    - `WORLD_SIZE`: Total number of GPUs (processes)
2. Configure and launch the training:
    - Run `accelerate config`
    - Enable DeepSpeed stage 3
    - Launch with `accelerate launch train_distributed_accelerate.py`
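
A minimal sketch of the environment setup (the values below are placeholders; `MASTER_ADDR`, `MASTER_PORT`, `RANK`, and `WORLD_SIZE` follow the usual `torch.distributed` conventions):

```python
import os

# Placeholder values for a single-node, 8-GPU run (adjust for your cluster).
os.environ["ENTITY_NAME"] = "my-wandb-entity"   # wandb entity/project name
os.environ["OUTPUT_DIR"] = "./weights"          # where checkpoints are written
os.environ["MASTER_ADDR"] = "127.0.0.1"         # address of the rank-0 node
os.environ["MASTER_PORT"] = "29500"             # open port on the rank-0 node
os.environ["RANK"] = "0"                        # rank of this node
os.environ["WORLD_SIZE"] = "8"                  # total number of GPUs/processes

# Then, from a shell:
#   accelerate config       # enable DeepSpeed stage 3 when prompted
#   accelerate launch train_distributed_accelerate.py
```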
For more information, refer to the [Training SOP](DOCs/TRAINING.md).
---
## Todo
- [ ] Add YaRN embeddings from zeta
## 📈 Benchmarks
### Speed
- Andromeda uses Flash Attention 2.0 (Triton), one of the most reliable attention implementations available. It consumes 50x less memory than GPT-3 and 10x less than LLaMA.
![AndromedaBanner](images/andromeda_performance.png)
- We can speed this up even more with dynamic sparse flash attention 2.0.
## License
Apache License