# VortexGPT
VortexGPT provides free access to text and image generation models.
## Getting Started:
```shell
python -m pip install -U VortexGPT
```
Join my [Discord server](https://discord.gg/hf99YvT8qX) for live chat, support, or help with any issues you run into with this package.
## Sources:
| Model | Website |
| ------------ | ----------------------------------------------------- |
| gpt3 | [chat9.yqcloud.top](https://chat9.yqcloud.top/) |
| gpt4 | [you.com](https://you.com/) |
| alpaca_7b | [chatllama.baseten.co](https://chatllama.baseten.co/) |
| falcon_40b | [gpt-gm.h2o.ai](https://gpt-gm.h2o.ai/) |
| prodia | [prodia.com](https://prodia.com/) |
| pollinations | [pollinations.ai](https://pollinations.ai/) |
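The names in the **Model** column are the identifiers passed as the first argument to the client calls in the Examples section below. A minimal sketch, assuming the `Client.create_completion` API shown there:

```python
from VortexGPT import Client

# "falcon_40b" is one entry from the Model column above; any of the
# listed text models can be substituted the same way.
reply = Client.create_completion("falcon_40b", "Summarise what a transformer is in one sentence.")
print(reply)
```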
## Support this repository:
- 🎉 **Join my Discord Server:** Chat with me and others. [Join here](https://discord.gg/hf99YvT8qX)
[![DiscordWidget](https://discordapp.com/api/guilds/1076407776403787796/widget.png?style=banner2)](https://discord.gg/hf99YvT8qX)
## Examples:
### Text Completion:
**Async:**
```python
from VortexGPT import AsyncClient
from asyncio import run


async def main():
    while True:
        prompt = input("👦: ")
        try:
            resp = await AsyncClient.create_completion("MODEL", prompt)
            print(f"🤖: {resp}")
        except Exception as e:
            print(f"🤖: {e}")


run(main())
```
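Because `AsyncClient.create_completion` is awaited in the loop above, independent prompts can also be sent concurrently. A minimal sketch using `asyncio.gather`, assuming the same coroutine signature as above (the model name is taken from the Sources table):

```python
from VortexGPT import AsyncClient
from asyncio import gather, run


async def main():
    prompts = ["What is Python?", "Name three prime numbers."]
    # Issue both requests at once and wait for all replies.
    replies = await gather(
        *(AsyncClient.create_completion("gpt3", p) for p in prompts)
    )
    for prompt, reply in zip(prompts, replies):
        print(f"👦: {prompt}\n🤖: {reply}")


run(main())
```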
**Non-Async:**
```python
from VortexGPT import Client

while True:
    prompt = input("👦: ")
    try:
        resp = Client.create_completion("MODEL", prompt)
        print(f"🤖: {resp}")
    except Exception as e:
        print(f"🤖: {e}")
```
### Image Generation:
**Async:**
```python
from VortexGPT import AsyncClient
from PIL import Image
from io import BytesIO
from asyncio import run


async def main():
    while True:
        prompt = input("👦: ")
        try:
            resp = await AsyncClient.create_generation("MODEL", prompt)
            Image.open(BytesIO(resp)).show()
            print("🤖: Image shown.")
        except Exception as e:
            print(f"🤖: {e}")


run(main())
```
**Non-Async:**
```python
from VortexGPT import Client
from PIL import Image
from io import BytesIO

while True:
    prompt = input("👦: ")
    try:
        resp = Client.create_generation("MODEL", prompt)
        Image.open(BytesIO(resp)).show()
        print("🤖: Image shown.")
    except Exception as e:
        print(f"🤖: {e}")
```
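The loops above display each result with PIL; since `create_generation` appears to return raw image bytes (they are wrapped in `BytesIO` before being opened), the response can also be written straight to disk. A minimal sketch, assuming that return type; the file name and `.png` extension are illustrative:

```python
from VortexGPT import Client

# "prodia" is one of the image models from the Sources table.
data = Client.create_generation("prodia", "a watercolor lighthouse at dusk")

# Assumes the returned bytes are a complete encoded image, as the examples above imply.
with open("generation.png", "wb") as f:
    f.write(data)
print("🤖: Saved generation.png")
```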