# OmniHuman
AI model that understands text and humanoids.
<div align="center">
[![PyPi](https://img.shields.io/pypi/v/omnihuman?logo=pypi)](https://pypi.org/project/omnihuman/)
[![Documentation Status](https://readthedocs.org/projects/omnihuman/badge/?version=latest)](https://omnihuman.readthedocs.io/en/latest/?badge=latest)
[![python](https://img.shields.io/pypi/pyversions/omnihuman?logo=python)](https://pypi.org/project/omnihuman/)
[![GitHub Repo stars](https://img.shields.io/github/stars/mdsrqbl/omnihuman?logo=github)](https://github.com/mdsrqbl/omnihuman/stargazers)
[![Downloads](https://img.shields.io/pepy/dt/omnihuman?color=purple&logoColor=white&logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI4MDAiIGhlaWdodD0iODAwIiBmaWxsPSJub25lIiB2aWV3Qm94PSIwIDAgMjQgMjQiPjxwYXRoIGZpbGw9IiMwMDAiIGZpbGwtcnVsZT0iZXZlbm9kZCIgZD0iTTkuMiAyLjhjLS4yLjMtLjIuOC0uMiAxLjZWMTFINy44Yy0uOSAwLTEuMyAwLTEuNS4yYS44LjggMCAwIDAtLjMuNmMwIC4zLjMuNiAxIDEuMmw0LjEgNC40LjcuNmEuNy43IDAgMCAwIC40IDBsLjctLjZMMTcgMTNjLjYtLjYuOS0xIC45LTEuMmEuOC44IDAgMCAwLS4zLS42Yy0uMi0uMi0uNi0uMi0xLjUtLjJIMTVWNC40YzAtLjggMC0xLjMtLjItMS42YTEuNSAxLjUgMCAwIDAtLjYtLjZjLS4zLS4yLS44LS4yLTEuNi0uMmgtMS4yYy0uOCAwLTEuMyAwLTEuNi4yYTEuNSAxLjUgMCAwIDAtLjYuNnpNNSAyMWExIDEgMCAwIDAgMSAxaDEyYTEgMSAwIDEgMCAwLTJINmExIDEgMCAwIDAtMSAxeiIgY2xpcC1ydWxlPSJldmVub2RkIi8+PC9zdmc+)](https://pepy.tech/projects/omnihuman/)<br>
| **Support Us** ❤️ | [![PayPal](https://img.shields.io/badge/PayPal-00457C?logo=paypal&logoColor=white)](https://www.paypal.com/donate/?hosted_button_id=7SNGNSKUQXQW2) |
| ---------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- |
</div>
---
1. [OmniHuman](#omnihuman)
1. [Installation](#installation)
2. [Usage](#usage)
3. [Citation \& License](#citation--license)
## Installation
```bash
pip install omnihuman
```
or install in editable mode from source:
```bash
git clone https://github.com/mdsrqbl/omnihuman.git
cd omnihuman
pip install -e .
```
## Usage
```python
import omnihuman
import PIL.Image

text = "Raise both hands and clap overhead."
frames = omnihuman.read_frames("path/to/image.jpg")  # (n_frames, channels, height, width)

# model = omnihuman.OmniHuman()
# frames = model.video_generation(text, frames)

# Display the last frame (convert channel-first tensor to height-width-channels).
PIL.Image.fromarray(frames[-1].permute(1, 2, 0).numpy()).show()
```
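Beyond displaying a single frame, the returned stack can be written out as an animated GIF with Pillow. A minimal sketch, assuming the frames arrive as a channel-first `uint8` array of shape `(n_frames, channels, height, width)` (a random array stands in for `omnihuman.read_frames` output here):

```python
import numpy as np
import PIL.Image

# Stand-in for frames returned by omnihuman.read_frames / video_generation:
# (n_frames, channels, height, width), uint8 in [0, 255].
frames = np.random.randint(0, 256, size=(8, 3, 64, 64), dtype=np.uint8)

# Pillow expects channel-last (height, width, channels) arrays.
images = [PIL.Image.fromarray(f.transpose(1, 2, 0)) for f in frames]

# Save all frames as a looping animated GIF.
images[0].save(
    "clip.gif",
    save_all=True,
    append_images=images[1:],
    duration=40,  # milliseconds per frame (~25 fps)
    loop=0,       # loop forever
)
```

If the model returns a torch tensor instead of a NumPy array, replace `f.transpose(1, 2, 0)` with `f.permute(1, 2, 0).numpy()`.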
Full documentation is available at [omnihuman.readthedocs.io](https://omnihuman.readthedocs.io/en/latest/).
## Citation & License
```bibtex
@misc{mdsr2024omnihuman,
    author       = {Mudassar Iqbal},
    title        = {OmniHuman: AI model that understands text and humanoids.},
    year         = {2024},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/mdsrqbl/omnihuman}}
}
```
This project is licensed under the Apache License 2.0; see the [LICENSE](https://github.com/mdsrqbl/omnihuman/blob/main/LICENSE) file for details.
You may use the library and models, create modified versions, or incorporate pieces of the code into your own work. Any product or research built on it, whether commercial or non-commercial, must credit the original author(s) by citing this repository and the associated research papers.
Stay tuned for research papers!