[![Multi-Modality](agorabanner.png)](https://discord.gg/qUtxnK2NMf)
# Exa
Boost your LLM's performance by 300% on everyday GPU hardware, validated by renowned developers, with just 5 minutes of setup and no additional hardware costs.
-----
## Principles
- Radical Simplicity (use super-powerful LLMs with as few lines of code as possible)
- Ultra-Optimized Performance (high-performance code that extracts the full power of these LLMs)
- Fluidity & Shapelessness (plug and play, and re-architect as you please)
---
## 📦 Install 📦
```bash
$ pip3 install exxa
```
-----
## Usage
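The public API is still evolving and is not documented here yet. As a rough illustration of the kind of workflow this toolkit targets (loading a popular model quantized to 4 bits and running inference on everyday GPU hardware), here is a minimal sketch using the Hugging Face `transformers` and `bitsandbytes` APIs rather than Exa's own interface; the model name and generation settings are placeholder examples.

```python
# Sketch only: 4-bit quantized inference with Hugging Face transformers +
# bitsandbytes, shown to illustrate the workflow; this is not Exa's own API.
# Requires: pip install transformers accelerate bitsandbytes
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "tiiuae/falcon-7b"  # placeholder model; any causal LM works

# NF4 quantization stores weights in 4 bits while computing in bfloat16
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # spread layers across available GPUs/CPU
)

inputs = tokenizer("Explain quantization in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```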
## 🎉 Features 🎉
- **World-Class Quantization**: Get the most out of your models with top-tier performance and preserved accuracy! 🏋️♂️
- **Automated PEFT**: Simplify your workflow! Let our toolkit handle the optimizations. 🛠️
- **LoRA Configuration**: Dive into the potential of flexible LoRA configurations, a game-changer for performance (see the sketch after this list)! 🌌
- **Seamless Integration**: Designed to work seamlessly with popular models like LLaMA, Falcon, and more! 🤖
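To make the PEFT and LoRA features concrete, the sketch below attaches a LoRA adapter to a causal LM with the Hugging Face `peft` library; it illustrates the underlying technique rather than Exa's own interface, and the rank, alpha, and target modules are example values.

```python
# Sketch only: standard LoRA setup via the Hugging Face peft library,
# shown to illustrate the technique; this is not Exa's own interface.
# Requires: pip install transformers peft
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b")  # placeholder model

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                     # adapter rank: lower means fewer trainable params
    lora_alpha=16,           # scaling factor applied to the adapter output
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # Falcon's fused attention projection
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base model
```

Only the adapter weights are trained, which is what keeps fine-tuning affordable on consumer GPUs.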
----
## 💌 Feedback & Contributions 💌
We're excited about the journey ahead and would love to have you with us! For feedback, suggestions, or contributions, feel free to open an issue or a pull request. Let's shape the future of fine-tuning together! 🌱
[Check out our project board for our current backlog and features we're implementing](https://github.com/users/kyegomez/projects/8/views/2)
# License
MIT
# Todo
- Set up utility logger classes for metric logging with useful metadata such as tokens generated per second, latency, and memory consumption (a rough sketch follows below)
- Add CUDA C++ extensions with radically optimized classes for high-performance quantization and inference on the edge
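As a starting point for the metric-logging item above, here is a minimal sketch, assuming a standard `transformers`-style model/tokenizer pair and using `loguru` (already listed in the requirements); `log_generation_metrics` is a hypothetical helper name, not an existing Exa API.

```python
# Minimal sketch of the planned metric logging; assumes a transformers-style
# generate() interface. log_generation_metrics is a hypothetical helper.
import time
import torch
from loguru import logger

def log_generation_metrics(model, tokenizer, prompt: str, max_new_tokens: int = 64):
    """Run one generation and log latency, tokens/sec, and peak GPU memory."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    if torch.cuda.is_available():
        torch.cuda.reset_peak_memory_stats()
    start = time.perf_counter()
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    latency = time.perf_counter() - start
    new_tokens = outputs.shape[-1] - inputs["input_ids"].shape[-1]
    peak_mib = (
        torch.cuda.max_memory_allocated() / 2**20 if torch.cuda.is_available() else 0.0
    )
    logger.info(
        "tokens/s={:.2f} latency={:.3f}s new_tokens={} peak_memory={:.1f}MiB",
        new_tokens / latency,
        latency,
        new_tokens,
        peak_mib,
    )
    return outputs
```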
# Raw data

```json
{
"_id": null,
"home_page": "https://github.com/kyegomez/Exa",
"name": "exxa",
"maintainer": null,
"docs_url": null,
"requires_python": "<4.0,>=3.10",
"maintainer_email": null,
"keywords": "artificial intelligence, deep learning, optimizers, Prompt Engineering",
"author": "Kye Gomez",
"author_email": "kye@apac.ai",
"download_url": "https://files.pythonhosted.org/packages/8b/4b/48a979864938f8d22028d2774ffc0e59d94fe07548189eb4b6f793e10f26/exxa-0.6.4.tar.gz",
"platform": null,
"description": "[![Multi-Modality](agorabanner.png)](https://discord.gg/qUtxnK2NMf)\n\n# Exa\nBoost your GPU's LLM performance by 300% on everyday GPU hardware, as validated by renowned developers, in just 5 minutes of setup and with no additional hardware costs.\n\n-----\n\n## Principles\n- Radical Simplicity (Utilizing super-powerful LLMs with as minimal lines of code as possible)\n- Ultra-Optimizated Peformance (High Performance code that extract all the power from these LLMs)\n- Fludity & Shapelessness (Plug in and play and re-architecture as you please)\n\n---\n\n## \ud83d\udce6 Install \ud83d\udce6\n```bash\n$ pip3 install exxa\n```\n-----\n\n\n## Usage\n\n\n\n\n\n\n## \ud83c\udf89 Features \ud83c\udf89\n\n- **World-Class Quantization**: Get the most out of your models with top-tier performance and preserved accuracy! \ud83c\udfcb\ufe0f\u200d\u2642\ufe0f\n \n- **Automated PEFT**: Simplify your workflow! Let our toolkit handle the optimizations. \ud83d\udee0\ufe0f\n\n- **LoRA Configuration**: Dive into the potential of flexible LoRA configurations, a game-changer for performance! \ud83c\udf0c\n\n- **Seamless Integration**: Designed to work seamlessly with popular models like LLAMA, Falcon, and more! \ud83e\udd16\n\n----\n\n## \ud83d\udc8c Feedback & Contributions \ud83d\udc8c\n\nWe're excited about the journey ahead and would love to have you with us! For feedback, suggestions, or contributions, feel free to open an issue or a pull request. Let's shape the future of fine-tuning together! \ud83c\udf31\n\n[Check out our project board for our current backlog and features we're implementing](https://github.com/users/kyegomez/projects/8/views/2)\n\n\n# License\nMIT\n\n# Todo\n\n- Setup utils logger classes for metric logging with useful metadata such as token inference per second, latency, memory consumption\n- Add cuda c++ extensions for radically optimized classes for high performance quantization + inference on the edge\n\n\n\n",
"bugtrack_url": null,
"license": "MIT",
"summary": "Exa - Pytorch",
"version": "0.6.4",
"project_urls": {
"Documentation": "https://github.com/kyegomez/Exa",
"Homepage": "https://github.com/kyegomez/Exa",
"Repository": "https://github.com/kyegomez/Exa"
},
"split_keywords": [
"artificial intelligence",
" deep learning",
" optimizers",
" prompt engineering"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "5a7614ededdfd1fc7b6f849c6a795bb0d2d695f764153166ed934ef9e3ee5312",
"md5": "87b7c92a2c5d9143f9f66ee57ad22af0",
"sha256": "edd63879d41b2f405b402745aa41ed148ebe951b22e394fc1bc51f7f47551fd8"
},
"downloads": -1,
"filename": "exxa-0.6.4-py3-none-any.whl",
"has_sig": false,
"md5_digest": "87b7c92a2c5d9143f9f66ee57ad22af0",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": "<4.0,>=3.10",
"size": 13387,
"upload_time": "2024-04-05T04:36:39",
"upload_time_iso_8601": "2024-04-05T04:36:39.989721Z",
"url": "https://files.pythonhosted.org/packages/5a/76/14ededdfd1fc7b6f849c6a795bb0d2d695f764153166ed934ef9e3ee5312/exxa-0.6.4-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "8b4b48a979864938f8d22028d2774ffc0e59d94fe07548189eb4b6f793e10f26",
"md5": "006623d26a0b6b985dbf2c08fca8f869",
"sha256": "299e8aca1f40748d78c13e4c1f2c92c845528345399afc7db5ff8631bf34f42b"
},
"downloads": -1,
"filename": "exxa-0.6.4.tar.gz",
"has_sig": false,
"md5_digest": "006623d26a0b6b985dbf2c08fca8f869",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "<4.0,>=3.10",
"size": 11545,
"upload_time": "2024-04-05T04:36:41",
"upload_time_iso_8601": "2024-04-05T04:36:41.881668Z",
"url": "https://files.pythonhosted.org/packages/8b/4b/48a979864938f8d22028d2774ffc0e59d94fe07548189eb4b6f793e10f26/exxa-0.6.4.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-04-05 04:36:41",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "kyegomez",
"github_project": "Exa",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"requirements": [
{
"name": "torch",
"specs": []
},
{
"name": "pytest",
"specs": []
},
{
"name": "loguru",
"specs": []
},
{
"name": "mkdocs",
"specs": []
},
{
"name": "mkdocs-material",
"specs": []
},
{
"name": "mkdocs-glightbox",
"specs": []
}
],
"lcname": "exxa"
}
```