<p align="center">
<img src="https://github.com/Simatwa/python-tgpt/blob/main/assets/py-tgpt.png?raw=true" width='40%'>
</p>
<!-- <h1 align="center"> python-tgpt </h1> -->
<p align="center">
<!--
<a href="https://github.com/Simatwa/python-tgpt/actions/workflows/python-test.yml"><img src="https://github.com/Simatwa/python-tgpt/actions/workflows/python-test.yml/badge.svg" alt="Python Test"/></a>
-->
<a href="https://github.com/Simatwa/python-tgpt/blob/main/LICENSE"><img alt="License" src="https://img.shields.io/static/v1?logo=GPL&color=Blue&message=MIT&label=License"/></a>
<a href=""><img alt="Python version" src="https://img.shields.io/pypi/pyversions/python-tgpt"/></a>
<a href="https://pypi.org/project/python-tgpt"><img alt="PyPi" src="https://img.shields.io/pypi/v/python-tgpt?color=green"/></a>
<a href="https://github.com/psf/black"><img alt="Black" src="https://img.shields.io/badge/code%20style-black-000000.svg"/></a>
<a href="https://python-tgpt.onrender.com"><img alt="Website status" src="https://img.shields.io/website?url=https://python-tgpt.onrender.com"/></a>
<a href="https://github.com/Simatwa/python-tgpt/actions/workflows/python-package.yml"><img alt="Python Package flow" src="https://github.com/Simatwa/python-tgpt/actions/workflows/python-package.yml/badge.svg?branch=master"/></a>
<a href="https://pepy.tech/project/python-tgpt"><img src="https://static.pepy.tech/personalized-badge/python-tgpt?period=total&units=international_system&left_color=grey&right_color=blue&left_text=Downloads" alt="Downloads"></a>
<a href="https://github.com/Simatwa/python-tgpt/releases/latest"><img src="https://img.shields.io/github/downloads/Simatwa/python-tgpt/total?label=Asset%20Downloads&color=success" alt="Downloads"></img></a>
<a href="https://github.com/Simatwa/python-tgpt/releases"><img src="https://img.shields.io/github/v/release/Simatwa/python-tgpt?color=success&label=Release&logo=github" alt="Latest release"></img></a>
<a href="https://github.com/Simatwa/python-tgpt/releases"><img src="https://img.shields.io/github/release-date/Simatwa/python-tgpt?label=Release date&logo=github" alt="release date"></img></a>
<a href="https://hits.seeyoufarm.com"><img src="https://hits.seeyoufarm.com/api/count/incr/badge.svg?url=https%3A%2F%2Fgithub.com/Simatwa/python-tgpt"/></a>
<a href="https://wakatime.com/badge/github/Simatwa/tgpt2"><img src="https://wakatime.com/badge/github/Simatwa/tgpt2.svg" alt="wakatime"></a>
</p>
<h3 align="center">
python-tgpt
</h3>
<p align="center">
<img src="https://github.com/Simatwa/python-tgpt/blob/main/assets/demo-1.gif?raw=True" width='80%'/>
</p>
```python
>>> import pytgpt.phind as phind
>>> bot = phind.PHIND()
>>> bot.chat('hello there')
'Hello! How can I assist you today?'
```
```python
from pytgpt.imager import Imager
img = Imager()
generated_images = img.generate(prompt="Cyberpunk", amount=3, stream=True)
img.save(generated_images)
```
This project enables seamless interaction with over **45 free LLM providers** without requiring an API key, and it can generate images as well.
The name *python-tgpt* draws inspiration from its parent project [tgpt](https://github.com/aandrew-me/tgpt), which operates on [Golang](https://go.dev/). Through this Python adaptation, users can effortlessly engage with a number of free LLMs available, fostering a smoother AI interaction experience.
### Features
- 🐍 Python package
- [🌐 FastAPI for web integration](https://python-tgpt.onrender.com)
- ⌨️ Command-line interface
- 🧠 Multiple LLM providers - **45+**
- 🌊 Stream and non-stream response
- 🚀 Ready to use (No API key required)
- 🎯 Customizable script generation and execution
- 🔌 Offline support for Large Language Models
- 🎨 Image generation capabilities
- 🎤 Text-to-audio conversion capabilities
- ⛓️ Chained requests via proxy
- 🗨️ Enhanced conversational chat experience
- 💾 Capability to save prompts and responses (Conversation)
- 🔄 Ability to load previous conversations
- 🚀 Pass [awesome-chatgpt prompts](https://github.com/f/awesome-chatgpt-prompts) easily
- 🤖 [Telegram bot](https://t.me/pytgpt_bot) - interface
- 🔄 Asynchronous support for all major operations.
## Providers
These are simply the hosts of the LLMs; they include:
- [Koboldai](https://koboldai-koboldcpp-tiefighter.hf.space)
- [OpenGPTs](https://opengpts-example-vz4y4ooboq-uc.a.run.app/)
- [OpenAI](https://chat.openai.com) *(API key required)*
- [Phind](https://www.phind.com)
- [Blackboxai](https://www.blackbox.ai)
- [gpt4all](https://gpt4all.io) *(Offline)*
- [Poe](https://poe.com) - Poe|Quora *(Session ID required)*
- [Groq](https://console.groq.com/playground) *(API Key required)*
- [Perplexity](https://www.perplexity.ai)
- [YepChat](https://yep.com)
<details>
<summary>
41+ providers proudly offered by [gpt4free](https://github.com/xtekky/gpt4free).
</summary>
- To list working providers run:
```sh
$ pytgpt gpt4free test -y
```
</details>
## Prerequisites
- [x] [Python>=3.10](https://python.org) *(Optional)*
## Installation and Usage
### Installation
Download binaries for your system from [here.](https://github.com/Simatwa/python-tgpt/releases/latest/)
Alternatively, you can install non-binaries. *(Recommended)*
1. Developers:
```sh
pip install --upgrade python-tgpt
```
2. Commandline:
```sh
pip install --upgrade "python-tgpt[cli]"
```
3. Full installation:
```sh
pip install --upgrade "python-tgpt[all]"
```
> `pip install -U "python-tgpt[api]"` will install REST API dependencies.
#### Termux extras
1. Developers:
```sh
pip install --upgrade "python-tgpt[termux]"
```
2. Commandline:
```sh
pip install --upgrade "python-tgpt[termux-cli]"
```
3. Full installation:
```sh
pip install --upgrade "python-tgpt[termux-all]"
```
> `pip install -U "python-tgpt[termux-api]"` will install REST API dependencies.
## Usage
This package offers a convenient command-line interface.
> [!NOTE]
> `phind` is the default *provider*.
- For a quick response:
```bash
python -m pytgpt generate "<Your prompt>"
```
- For interactive mode:
```bash
python -m pytgpt interactive "<Kickoff prompt (though not mandatory)>"
```
Use the `--provider` flag followed by the provider name of your choice, e.g. `--provider koboldai`.
> To list all providers offered by gpt4free, use the following command: `pytgpt gpt4free list providers`
You can also simply use `pytgpt` instead of `python -m pytgpt`.
Starting from version 0.2.7, running `$ pytgpt` without any other command or option will automatically enter the `interactive` mode. Otherwise, you'll need to explicitly declare the desired action, for example, by running `$ pytgpt generate`.
<details>
<summary>
<h3>Developer Docs</h3>
</summary>
1. Generate a quick response
```python
from pytgpt.phind import PHIND
bot = PHIND()
resp = bot.chat('<Your prompt>')
print(resp)
# Output : How can I assist you today?
```
2. Get back the whole response
```python
from pytgpt.phind import PHIND
bot = PHIND()
resp = bot.chat('<Your prompt>')
print(resp)
# Output
"""
{'id': 'chatcmpl-gp6cwu2e5ez3ltoyti4z', 'object': 'chat.completion.chunk', 'created': 1731257890, 'model': 'phind-instant-llama-3_1-08-31-2024-checkpoint-1500', 'choices': [{'index': 0, 'delta': {'content': "Hello! I'm an AI assistant created by Phind to help with programming tasks. How can I assist you today?"}, 'finish_reason': None}]}
"""
```
#### Stream Response
Just add the `stream` parameter with value `True`.
1. Generated text only
```python
from pytgpt.phind import PHIND
bot = PHIND()
response = bot.chat('hello', stream=True)
for chunk in response:
    print(chunk)
# output
"""
Hello
Hello!
Hello! How
Hello! How can
Hello! How can I
Hello! How can I assist
Hello! How can I assist you
Hello! How can I assist you today
Hello! How can I assist you today?
"""
```
2. Whole Response
```python
from pytgpt.phind import PHIND
bot = PHIND()
resp = bot.ask('<Your Prompt>', stream=True)
for value in resp:
    print(value)
# Output
"""
{'id': 'chatcmpl-icz6a4m1nbbclw9hhgol', 'object': 'chat.completion.chunk', 'created': 1731258032, 'model': 'phind-instant-llama-3_1-08-31-2024-checkpoint-1500', 'choices': [{'index': 0, 'delta': {'content': 'Hello'}, 'finish_reason': None}]}
{'id': 'chatcmpl-icz6a4m1nbbclw9hhgol', 'object': 'chat.completion.chunk', 'created': 1731258032, 'model': 'phind-instant-llama-3_1-08-31-2024-checkpoint-1500', 'choices': [{'index': 0, 'delta': {'content': "Hello! I'm an AI"}, 'finish_reason': None}]}
{'id': 'chatcmpl-icz6a4m1nbbclw9hhgol', 'object': 'chat.completion.chunk', 'created': 1731258032, 'model': 'phind-instant-llama-3_1-08-31-2024-checkpoint-1500', 'choices': [{'index': 0, 'delta': {'content': "Hello! I'm an AI assistant created by Phind to help with coding and technical tasks. How"}, 'finish_reason': None}]}
{'id': 'chatcmpl-icz6a4m1nbbclw9hhgol', 'object': 'chat.completion.chunk', 'created': 1731258032, 'model': 'phind-instant-llama-3_1-08-31-2024-checkpoint-1500', 'choices': [{'index': 0, 'delta': {'content': "Hello! I'm an AI assistant created by Phind to help with coding and technical tasks. How can I assist you today?"}, 'finish_reason': None}]}
"""
```
<details>
<summary>
Auto - *(selects any working provider)*
</summary>
```python
import pytgpt.auto as auto
bot = auto.AUTO()
print(bot.chat("<Your-prompt>"))
```
</details>
<details>
<summary>
Openai
</summary>
```python
import pytgpt.openai as openai
bot = openai.OPENAI("<OPENAI-API-KEY>")
print(bot.chat("<Your-prompt>"))
```
</details>
<details>
<summary>
Koboldai
</summary>
```python
import pytgpt.koboldai as koboldai
bot = koboldai.KOBOLDAI()
print(bot.chat("<Your-prompt>"))
```
</details>
<details>
<summary>
Opengpt
</summary>
```python
import pytgpt.opengpt as opengpt
bot = opengpt.OPENGPT()
print(bot.chat("<Your-prompt>"))
```
</details>
<details>
<summary>
phind
</summary>
```python
import pytgpt.phind as phind
bot = phind.PHIND()
print(bot.chat("<Your-prompt>"))
```
</details>
<details>
<summary>
Gpt4free providers
</summary>
```python
import pytgpt.gpt4free as gpt4free
bot = gpt4free.GPT4FREE(provider="Koala")
print(bot.chat("<Your-prompt>"))
```
</details>
### Asynchronous
**Version 0.7.0** introduces asynchronous implementations for almost all providers, except a few such as *perplexity* that rely on other libraries lacking such support.
To make it easier, you just have to prefix `Async` to the common synchronous class name. For instance `OPENGPT` will be accessed as `AsyncOPENGPT`:
#### Streaming the whole AI response
```python
import asyncio
from pytgpt.phind import AsyncPHIND
async def main():
    async_ask = await AsyncPHIND(False).ask(
        "Critique that python is cool.",
        stream=True
    )
    async for streaming_response in async_ask:
        print(streaming_response)

asyncio.run(main())
```
#### Streaming just the text
```python
import asyncio
from pytgpt.phind import AsyncPHIND
async def main():
    async_ask = await AsyncPHIND(False).chat(
        "Critique that python is cool.",
        stream=True
    )
    async for streaming_text in async_ask:
        print(streaming_text)

asyncio.run(main())
```
</details>
<details>
<summary>
To obtain more tailored responses, consider utilizing [optimizers](pytgpt/utils.py) using the `optimizer` parameter. Its values can be set to either `code` or `system_command`.
</summary>
```python
from pytgpt.phind import PHIND
bot = PHIND()
resp = bot.ask('<Your Prompt>', optimizer='code')
print(resp)
```
</details>
</details>
> [!IMPORTANT]
> Commencing from [v0.1.0](https://github.com/Simatwa/python-tgpt/releases/), the default mode of interaction is conversational. This mode enhances the interactive experience, offering better control over the chat history. By associating previous prompts and responses, it tailors conversations for a more engaging experience.
You can still disable the mode:
```python
bot = koboldai.KOBOLDAI(is_conversation=False)
```
Utilize the `--disable-conversation` flag in the console to achieve the same functionality.
> [!CAUTION]
> **Bard** handles context automatically, so the `is_conversation` parameter is unnecessary when initializing the class. Also note that the majority of providers offered by *gpt4free* require *Google Chrome* in order to function.
### Image Generation
This has been made possible by [pollinations.ai](https://pollination.ai).
```sh
$ pytgpt imager "<prompt>"
# e.g. pytgpt imager "Coding bot"
```
<details>
<summary>
Developers
</summary>
```python
from pytgpt.imager import Imager
img = Imager()
generated_img = img.generate('Coding bot') # [bytes]
img.save(generated_img)
```
<details>
<summary>
Download Multiple Images
</summary>
```python
from pytgpt.imager import Imager
img = Imager()
img_generator = img.generate('Coding bot', amount=3, stream=True)
img.save(img_generator)
# RAM friendly
```
</details>
#### Using **Prodia** provider
```python
from pytgpt.imager import Prodia
img = Prodia()
img_generator = img.generate('Coding bot', amount=3, stream=True)
img.save(img_generator)
```
</details>
### Advanced Usage of Placeholders
The `generate` functionality has been enhanced starting from *v0.3.0* to enable comprehensive utilization of the `--with-copied` option and support for accepting piped inputs. This improvement introduces placeholders, offering dynamic values for more versatile interactions.
| Placeholder | Represents |
| ------------ | ----------- |
| `{{stream}}` | The piped input |
| `{{copied}}` | The last copied text |
This feature is particularly beneficial for intricate operations. For example:
```bash
$ git diff | pytgpt generate "Here is a diff file: {{stream}} Make a concise commit message from it, aligning with my commit message history: {{copied}}" --new
```
> In this illustration, `{{stream}}` denotes the result of the `$ git diff` operation, while `{{copied}}` signifies the content copied from the output of the `$ git log` command.
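Conceptually, placeholder resolution is a simple string substitution performed before the prompt is sent to the provider; a minimal sketch (not pytgpt's actual implementation) might look like:

```python
def fill_placeholders(prompt: str, stream_input: str, copied_text: str) -> str:
    """Resolve the dynamic placeholders in a prompt template."""
    return (
        prompt.replace("{{stream}}", stream_input)
              .replace("{{copied}}", copied_text)
    )

filled = fill_placeholders(
    "Summarize this diff: {{stream}} Match my style: {{copied}}",
    "diff --git a/app.py ...",   # would come from stdin
    "fix: handle empty input",   # would come from the clipboard
)
print(filled)
```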
### Awesome Prompts
[These prompts](https://github.com/Simatwa/gpt-cli/blob/main/assets/all-acts.pdf?raw=True) are designed to guide the AI's behavior or responses in a particular direction, encouraging it to exhibit certain characteristics or behaviors. The term "awesome-prompt" is not a formal term in AI or machine learning literature, but it encapsulates the idea of crafting prompts that are effective in achieving desired outcomes. Let's say you want it to behave like a *Linux Terminal*, *PHP Interpreter*, or just to [**JAIL BREAK.**](https://gist.github.com/coolaj86/6f4f7b30129b0251f61fa7baaa881516)
Instances :
```sh
$ pytgpt interactive --awesome-prompt "Linux Terminal"
# Act like a Linux Terminal
$ pytgpt interactive -ap DAN
# Jailbreak
```
> [!NOTE]
> Awesome prompts are an alternative to `--intro`.
> Run `$ pytgpt awesome whole` to list available prompts (*200+*).
> Run `$ pytgpt awesome --help` for more info.
### Introducing RawDog
RawDog is a powerful feature that exploits the versatile capabilities of Python to command and control your system as per your needs. You can do almost anything with it, since it generates and executes Python code driven by **your prompts**! To have a bite of *rawdog*, simply append the `--rawdog` flag (shortform `-rd`) in *generate/interactive* mode. This introduces a feature never seen before in the *tgpt ecosystem*. Thanks to [AbanteAI/rawdog](https://github.com/AbanteAI/rawdog) for the idea.
This can be useful in many ways. For instance:
```sh
$ pytgpt generate -n -q "Visualize the disk usage using pie chart" --rawdog
```
This will pop up a window showing the system disk usage, as shown below.
<p align="center">
<img src="https://github.com/Simatwa/python-tgpt/blob/main/assets/Figure_1.png?raw=true" width='60%'>
</p>
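At its core, the rawdog idea is that the LLM returns Python source and pytgpt executes it locally. A minimal sketch of that loop (with a hard-coded string standing in for the LLM output; this is an illustration, not pytgpt's actual implementation):

```python
# Stand-in for code that the LLM would generate in response to a prompt.
generated_code = "result = sum(range(5))"

# Execute the generated source in an isolated namespace and read back the result.
# Note: executing arbitrary generated code is inherently risky.
namespace = {}
exec(generated_code, namespace)
print(namespace["result"])  # -> 10
```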
## Passing Environment Variables
Pytgpt **v0.4.6** introduces a convenient way of taking variables from the environment.
To achieve that, set the environment variables in your operating system or script with prefix `PYTGPT_` followed by the option name in uppercase, replacing dashes with underscores.
For example, for the option `--provider`, you would set an environment variable `PYTGPT_PROVIDER` to provide a default value for that option. The same applies to boolean flags such as `--rawdog`, whose environment variable will be `PYTGPT_RAWDOG` with a value of either `true` or `false`. Finally, `--awesome-prompt` will take the environment variable `PYTGPT_AWESOME_PROMPT`.
> [!NOTE]
> This is **NOT** limited to any command
The environment variables can be overridden by explicitly declaring a new value.
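The naming rule itself is mechanical; a small sketch of how an option name maps to its environment variable (illustration only, not pytgpt's actual code):

```python
def option_to_env(option: str) -> str:
    """Map a CLI option name to its pytgpt environment variable name."""
    # Strip leading dashes, swap dashes for underscores, uppercase, add the prefix.
    return "PYTGPT_" + option.lstrip("-").replace("-", "_").upper()

print(option_to_env("--provider"))        # -> PYTGPT_PROVIDER
print(option_to_env("--awesome-prompt"))  # -> PYTGPT_AWESOME_PROMPT
```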
> [!TIP]
> Save the variables in a `.env` file in your current directory or export them in your `~/.zshrc` file.
> To load previous conversations from a `.txt` file, use the `-fp` or `--filepath` flag. If no flag is passed, the default one will be used. To load context from a file without altering its content, use the `--retain-file` flag.
## Dynamic Provider & Further Interfaces
Version **0.4.6** also introduces a dynamic provider called `g4fauto`, which represents the fastest working g4f-based provider.
> [!TIP]
> To launch web interface for g4f-based providers simply run `$ pytgpt gpt4free gui`.
> `$ pytgpt api run` will start the REST-API. Access docs and redoc at */docs* and */redoc* respectively.
To launch the web interface for g4f-based providers, execute the following command in your terminal:
```bash
$ pytgpt gpt4free gui
```
This command initializes the Web-user interface for interacting with g4f-based providers.
To start the REST-API:
```bash
$ pytgpt api run
```
This command starts the RESTful API server, enabling you to interact with the service programmatically.
For accessing the documentation and redoc, navigate to the following paths in your web browser:
- Documentation: `/docs`
- ReDoc: `/redoc`
## Speech Synthesis
To enable speech synthesis of responses, ensure you have either the [VLC player](https://www.videolan.org/vlc/index.html) installed on your system or, if you are a [Termux](https://termux.org) user, the [Termux:API](https://wiki.termux.com/wiki/Termux:API) package.
To activate speech synthesis, use the `--talk-to-me` flag or its shorthand `-ttm` when running your commands. For example:
```bash
$ pytgpt generate "Generate an ogre story" --talk-to-me
```
or
```bash
$ pytgpt interactive -ttm
```
This flag instructs the system to vocalize the AI responses and play them aloud, enhancing the user experience by providing auditory feedback.
Version **0.6.4** introduces another dynamic provider, `auto`, which denotes whichever provider is found to be working **overall**. This relieves you of the workload of manually checking for a working provider each time you fire up pytgpt. However, `auto` does not work well with streaming responses, so you may need to sacrifice performance for the sake of reliability.
## [Telegram Bot](https://github.com/Simatwa/pytgpt-bot)
If you're not satisfied with the existing interfaces, [pytgpt-bot](https://github.com/Simatwa/pytgpt-bot) could be the solution you're seeking. This bot is designed to enhance your experience by offering a wide range of functionalities. Whether you're interested in engaging in AI-driven conversations, creating images and audio from text, or exploring other innovative features, [pytgpt-bot is equipped to meet your needs.](https://github.com/Simatwa/pytgpt-bot)
The bot is maintained as a separate project, so you just have to execute a command to install it:
```
$ pip install pytgpt-bot
```
Usage: `pytgpt bot run <bot-api-token>`
Or you can simply interact with the one currently running, [@pytgpt-bot](https://t.me/pytgpt_bot).
<details>
<summary>
For more usage info run `$ pytgpt --help`
</summary>
```
Usage: pytgpt [OPTIONS] COMMAND [ARGS]...
Options:
-v, --version Show the version and exit.
-h, --help Show this message and exit.
Commands:
api FastAPI control endpoint
awesome Perform CRUD operations on awesome-prompts
bot Telegram bot interface control
generate Generate a quick response with AI
gpt4free Discover gpt4free models, providers etc
imager Generate images with pollinations.ai
interactive Chat with AI interactively (Default)
utils Utility endpoint for pytgpt
```
</details>
### API Health Status
| No. | API | Status |
|--------|-----|--------|
| 1. | [On-render](https://python-tgpt.onrender.com) | [cron-job](https://pqfzhmvz.status.cron-job.org/) |
## [CHANGELOG](https://github.com/Simatwa/python-tgpt/blob/main/docs/CHANGELOG.md)
## Acknowledgements
1. [x] [tgpt](https://github.com/aandrew-me/tgpt)
2. [x] [gpt4free](https://github.com/xtekky/gpt4free)
Raw data
{
"_id": null,
"home_page": "https://github.com/Simatwa/python-tgpt",
"name": "python-tgpt",
"maintainer": "Smartwa",
"docs_url": null,
"requires_python": ">=3.10",
"maintainer_email": null,
"keywords": "chatgpt, gpt, tgpt, pytgpt, chatgpt-cli, chatgpt-sdk, chatgpt-api, llama-api, leo, llama2, blackboxai, opengpt, koboldai, openai, bard, gpt4free, gpt4all-cli, gptcli, poe-api, perplexity, novita, gpt4free",
"author": "Smartwa",
"author_email": "simatwacaleb@proton.me",
"download_url": "https://files.pythonhosted.org/packages/23/b8/42f8a04b0aee5b3a716cbb74e7ad1ea32969bff492474ddcb0c7fd1c0ad4/python_tgpt-0.8.1.tar.gz",
"platform": null,
"description": "<p align=\"center\">\n<img src=\"https://github.com/Simatwa/python-tgpt/blob/main/assets/py-tgpt.png?raw=true\" width='40%'>\n</p>\n\n<!-- <h1 align=\"center\"> python-tgpt </h1> -->\n<p align=\"center\">\n<!--\n<a href=\"https://github.com/Simatwa/python-tgpt/actions/workflows/python-test.yml\"><img src=\"https://github.com/Simatwa/python-tgpt/actions/workflows/python-test.yml/badge.svg\" alt=\"Python Test\"/></a>\n-->\n<a href=\"https://github.com/Simatwa/python-tgpt/blob/main/LICENSE\"><img alt=\"License\" src=\"https://img.shields.io/static/v1?logo=GPL&color=Blue&message=MIT&label=License\"/></a>\n<a href=\"\"><img alt=\"Python version\" src=\"https://img.shields.io/pypi/pyversions/python-tgpt\"/></a>\n<a href=\"https://pypi.org/project/python-tgpt\"><img alt=\"PyPi\" src=\"https://img.shields.io/pypi/v/python-tgpt?color=green\"/></a>\n<a href=\"https://github.com/psf/black\"><img alt=\"Black\" src=\"https://img.shields.io/badge/code%20style-black-000000.svg\"/></a>\n<a href=\"https://python-tgpt.onrender.com\"><img alt=\"Website status\" src=\"https://img.shields.io/website?url=https://python-tgpt.onrender.com\"/></a>\n<a href=\"https://github.com/Simatwa/python-tgpt/actions/workflows/python-package.yml\"><img alt=\"Python Package flow\" src=\"https://github.com/Simatwa/python-tgpt/actions/workflows/python-package.yml/badge.svg?branch=master\"/></a>\n<a href=\"https://pepy.tech/project/python-tgpt\"><img src=\"https://static.pepy.tech/personalized-badge/python-tgpt?period=total&units=international_system&left_color=grey&right_color=blue&left_text=Downloads\" alt=\"Downloads\"></a>\n<a href=\"https://github.com/Simatwa/python-tgpt/releases/latest\"><img src=\"https://img.shields.io/github/downloads/Simatwa/python-tgpt/total?label=Asset%20Downloads&color=success\" alt=\"Downloads\"></img></a>\n<a href=\"https://github.com/Simatwa/python-tgpt/releases\"><img 
src=\"https://img.shields.io/github/v/release/Simatwa/python-tgpt?color=success&label=Release&logo=github\" alt=\"Latest release\"></img></a>\n<a href=\"https://github.com/Simatwa/python-tgpt/releases\"><img src=\"https://img.shields.io/github/release-date/Simatwa/python-tgpt?label=Release date&logo=github\" alt=\"release date\"></img></a>\n<a href=\"https://hits.seeyoufarm.com\"><img src=\"https://hits.seeyoufarm.com/api/count/incr/badge.svg?url=https%3A%2F%2Fgithub.com/Simatwa/python-tgpt\"/></a> \n<a href=\"https://wakatime.com/badge/github/Simatwa/tgpt2\"><img src=\"https://wakatime.com/badge/github/Simatwa/tgpt2.svg\" alt=\"wakatime\"></a>\n</p>\n\n<h3 align=\"center\">\npython-tgpt\n</h3> \n\n<p align=\"center\">\n<img src=\"https://github.com/Simatwa/python-tgpt/blob/main/assets/demo-1.gif?raw=True\" width='80%'/>\n</p>\n\n\n```python\n>>> import pytgpt.phind as phind\n>>> bot = phind.PHIND()\n>>> bot.chat('hello there')\n'Hello! How can I assist you today?'\n\n```\n\n```python\nfrom pytgpt.imager import Imager\nimg = Imager()\ngenerated_images = img.generate(prompt=\"Cyberpunk\", amount=3, stream=True)\nimg.save(generated_images)\n```\n\nThis project enables seamless interaction with over **45 free LLM providers** without requiring an API Key and generating images as well.\n\nThe name *python-tgpt* draws inspiration from its parent project [tgpt](https://github.com/aandrew-me/tgpt), which operates on [Golang](https://go.dev/). 
Through this Python adaptation, users can effortlessly engage with a number of free LLMs available, fostering a smoother AI interaction experience.\n\n### Features\n\n- \ud83d\udc0d Python package\n- [\ud83c\udf10 FastAPI for web integration](https://python-tgpt.onrender.com)\n- \u2328\ufe0f Command-line interface\n- \ud83e\udde0 Multiple LLM providers - **45+**\n- \ud83c\udf0a Stream and non-stream response\n- \ud83d\ude80 Ready to use (No API key required)\n- \ud83c\udfaf Customizable script generation and execution\n- \ud83d\udd0c Offline support for Large Language Models\n- \ud83c\udfa8 Image generation capabilities\n- \ud83c\udfa4 Text-to-audio conversion capabilities\n- \u26d3\ufe0f Chained requests via proxy\n- \ud83d\udde8\ufe0f Enhanced conversational chat experience\n- \ud83d\udcbe Capability to save prompts and responses (Conversation)\n- \ud83d\udd04 Ability to load previous conversations\n- \ud83d\ude80 Pass [awesome-chatgpt prompts](https://github.com/f/awesome-chatgpt-prompts) easily\n- \ud83e\udd16 [Telegram bot](https://t.me/pytgpt_bot) - interface\n- \ud83d\udd04 Asynchronous support for all major operations.\n\n\n## Providers\n\nThese are simply the hosts of the LLMs, they include:\n\n- [Koboldai](https://koboldai-koboldcpp-tiefighter.hf.space)\n- [OpenGPTs](https://opengpts-example-vz4y4ooboq-uc.a.run.app/)\n- [OpenAI](https://chat.openai.com) *(API key required)*\n- [Phind](https://www.phind.com)\n- [Blackboxai](https://www.blackbox.ai)\n- [gpt4all](https://gpt4all.io) *(Offline)*\n- [Poe](https://poe.com) - Poe|Quora *(Session ID required)*\n- [Groq](https://console.groq.com/playground) *(API Key required)*\n- [Perplexity](https://www.perplexity.ai)\n- [YepChat](https://yep.com)\n\n\n<details>\n\n<summary>\n\n41+ providers proudly offered by [gpt4free](https://github.com/xtekky/gpt4free).\n\n</summary>\n\n- To list working providers run:\n ```sh\n $ pytgpt gpt4free test -y\n ```\n</details>\n\n## Prerequisites\n\n- [x] 
[Python>=3.10](https://python.org) *(Optional)*\n\n## Installation and Usage\n\n### Installation\n\nDownload binaries for your system from [here.](https://github.com/Simatwa/python-tgpt/releases/latest/)\n\nAlternatively, you can install non-binaries. *(Recommended)*\n\n1. Developers:\n\n ```sh\n pip install --upgrade python-tgpt\n ```\n\n2. Commandline:\n\n ```sh\n pip install --upgrade \"python-tgpt[cli]\"\n ```\n\n3. Full installation:\n\n ```sh\n pip install --upgrade \"python-tgpt[all]\"\n ```\n\n> `pip install -U \"python-tgt[api]\"` will install REST API dependencies.\n\n#### Termux extras\n\n1. Developers:\n\n ```sh\n pip install --upgrade \"python-tgpt[termux]\"\n ```\n\n2. Commandline:\n\n ```sh\n pip install --upgrade \"python-tgpt[termux-cli]\"\n ```\n\n3. Full installation:\n\n ```sh\n pip install --upgrade \"python-tgpt[termux-all]\"\n ```\n\n> `pip install -U \"python-tgt[termux-api]\"` will install REST API dependencies\n\n\n## Usage\n\nThis package offers a convenient command-line interface.\n\n> [!NOTE]\n> `phind` is the default *provider*.\n\n- For a quick response:\n ```bash\n python -m pytgpt generate \"<Your prompt>\"\n ```\n\n- For interactive mode:\n ```bash\n python -m pytgpt interactive \"<Kickoff prompt (though not mandatory)>\"\n ```\n\nMake use of flag `--provider` followed by the provider name of your choice. e.g `--provider koboldai`\n\n> To list all providers offered by gpt4free, use following commands: ```pytgpt gpt4free list providers```\n\nYou can also simply use `pytgpt` instead of `python -m pytgpt`.\n\nStarting from version 0.2.7, running `$ pytgpt` without any other command or option will automatically enter the `interactive` mode. Otherwise, you'll need to explicitly declare the desired action, for example, by running `$ pytgpt generate`.\n\n\n<details>\n\n<summary>\n<h3>Developer Docs</h3>\n</summary>\n\n1. 
Generate a quick response\n\n```python\nfrom pytgpt.phind import PHIND\nbot = PHIND()\nresp = bot.chat('<Your prompt>')\nprint(resp)\n# Output : How can I assist you today?\n```\n\n2. Get back whole response\n\n```python\nfrom pytgpt.phind import PHIND\nbot = PHIND()\nresp = bot.chat('<Your prompt>')\nprint(resp)\n# Output\n\"\"\"\n{'id': 'chatcmpl-gp6cwu2e5ez3ltoyti4z', 'object': 'chat.completion.chunk', 'created': 1731257890, 'model': 'phind-instant-llama-3_1-08-31-2024-checkpoint-1500', 'choices': [{'index': 0, 'delta': {'content': \"Hello! I'm an AI assistant created by Phind to help with programming tasks. How can I assist you today?\"}, 'finish_reason': None}]}\n\"\"\"\n```\n\n#### Stream Response \n\nJust add parameter `stream` with value `true`.\n\n1. Text Generated only \n\n```python\nfrom pytgpt.phind import PHIND\nbot = PHIND()\nresponse = bot.chat('hello', stream=True)\nfor chunk in response:\n print(chunk)\n# output\n\"\"\"\nHello\nHello!\nHello! How\nHello! How can\nHello! How can I\nHello! How can I assist\nHello! How can I assist you\nHello! How can I assist you today\nHello! How can I assist you today?\n\"\"\"\n```\n\n2. Whole Response\n\n```python\nfrom pytgpt.leo import LEO\nbot = LEO()\nresp = bot.ask('<Your Prompt>', stream=True)\nfor value in resp:\n print(value)\n# Output\n\"\"\"\n{'id': 'chatcmpl-icz6a4m1nbbclw9hhgol', 'object': 'chat.completion.chunk', 'created': 1731258032, 'model': 'phind-instant-llama-3_1-08-31-2024-checkpoint-1500', 'choices': [{'index': 0, 'delta': {'content': 'Hello'}, 'finish_reason': None}]}\n{'id': 'chatcmpl-icz6a4m1nbbclw9hhgol', 'object': 'chat.completion.chunk', 'created': 1731258032, 'model': 'phind-instant-llama-3_1-08-31-2024-checkpoint-1500', 'choices': [{'index': 0, 'delta': {'content': \"Hello! 
I'm an AI\"}, 'finish_reason': None}]}\n{'id': 'chatcmpl-icz6a4m1nbbclw9hhgol', 'object': 'chat.completion.chunk', 'created': 1731258032, 'model': 'phind-instant-llama-3_1-08-31-2024-checkpoint-1500', 'choices': [{'index': 0, 'delta': {'content': \"Hello! I'm an AI assistant created by Phind to help with coding and technical tasks. How\"}, 'finish_reason': None}]}\n{'id': 'chatcmpl-icz6a4m1nbbclw9hhgol', 'object': 'chat.completion.chunk', 'created': 1731258032, 'model': 'phind-instant-llama-3_1-08-31-2024-checkpoint-1500', 'choices': [{'index': 0, 'delta': {'content': \"Hello! I'm an AI assistant created by Phind to help with coding and technical tasks. How can I assist you today?\"}, 'finish_reason': None}]}\n\"\"\"\n```\n\n<details>\n\n<summary>\nAuto - *(selects any working provider)*\n\n</summary>\n\n```python\nimport pytgpt.auto import auto\nbot = auto.AUTO()\nprint(bot.chat(\"<Your-prompt>\"))\n```\n\n</details>\n\n<details>\n\n<summary>\nOpenai\n\n</summary>\n\n```python\nimport pytgpt.openai as openai\nbot = openai.OPENAI(\"<OPENAI-API-KEY>\")\nprint(bot.chat(\"<Your-prompt>\"))\n```\n\n</details>\n\n\n<details>\n\n<summary>\nKoboldai\n\n</summary>\n\n```python\nimport pytgpt.koboldai as koboldai\nbot = koboldai.KOBOLDAI()\nprint(bot.chat(\"<Your-prompt>\"))\n```\n\n</details>\n\n\n<details>\n\n<summary>\nOpengpt\n\n</summary>\n\n```python\nimport pytgpt.opengpt as opengpt\nbot = opengpt.OPENGPT()\nprint(bot.chat(\"<Your-prompt>\"))\n```\n\n</details>\n\n<details>\n\n<summary>\nphind\n\n</summary>\n\n```python\nimport pytgpt.phind as phind\nbot = phind.PHIND()\nprint(bot.chat(\"<Your-prompt>\"))\n```\n\n</details>\n\n<details>\n\n<summary>\nGpt4free providers\n\n</summary>\n\n```python\nimport pytgpt.gpt4free as gpt4free\nbot = gpt4free.GPT4FREE(provider=\"Koala\")\nprint(bot.chat(\"<Your-prompt>\"))\n```\n\n</details>\n\n### Asynchronous\n\n**Version 0.7.0** introduces asynchronous implementation to almost all providers except a few such as *perplexity*, 
which relies on other libraries that lack such an implementation.\n\nTo make it easier, simply prefix `Async` to the usual synchronous class name. For instance, `OPENGPT` becomes `AsyncOPENGPT`:\n\n#### Streaming the whole AI response\n\n```python\nimport asyncio\nfrom pytgpt.phind import AsyncPHIND\n\nasync def main():\n    async_ask = await AsyncPHIND(False).ask(\n        \"Critique that python is cool.\",\n        stream=True\n    )\n    async for streaming_response in async_ask:\n        print(streaming_response)\n\nasyncio.run(main())\n```\n\n#### Streaming just the text\n\n```python\nimport asyncio\nfrom pytgpt.phind import AsyncPHIND\n\nasync def main():\n    async_ask = await AsyncPHIND(False).chat(\n        \"Critique that python is cool.\",\n        stream=True\n    )\n    async for streaming_text in async_ask:\n        print(streaming_text)\n\nasyncio.run(main())\n```\n\n</details>\n\n<details>\n\n<summary>\n\nTo obtain more tailored responses, consider utilizing [optimizers](pytgpt/utils.py) via the `optimizer` parameter. Its value can be set to either `code` or `system_command`.\n\n</summary>\n\n```python\nfrom pytgpt.phind import PHIND\nbot = PHIND()\nresp = bot.ask('<Your Prompt>', optimizer='code')\nprint(resp)\n```\n\n</details>\n\n</details>\n\n> [!IMPORTANT]\n> Starting from [v0.1.0](https://github.com/Simatwa/python-tgpt/releases/), the default mode of interaction is conversational. This mode enhances the interactive experience, offering better control over the chat history. By associating previous prompts and responses, it tailors conversations for a more engaging experience.\n\nYou can still disable the mode:\n\n```python\nbot = koboldai.KOBOLDAI(is_conversation=False)\n```\n\nUtilize the `--disable-conversation` flag in the console to achieve the same functionality.\n\n> [!CAUTION]\n> **Bard** handles context automatically, so the `is_conversation` parameter is not required when initializing the class. 
> Also note that the majority of providers offered by *gpt4free* require *Google Chrome* in order to function.\n\n### Image Generation\n\nThis has been made possible by [pollinations.ai](https://pollinations.ai).\n\n```sh\n$ pytgpt imager \"<prompt>\"\n# e.g. pytgpt imager \"Coding bot\"\n```\n\n<details>\n\n<summary>\nDevelopers\n</summary>\n\n```python\nfrom pytgpt.imager import Imager\n\nimg = Imager()\n\ngenerated_img = img.generate('Coding bot')  # bytes\n\nimg.save(generated_img)\n```\n\n<details>\n\n<summary>\nDownload Multiple Images\n</summary>\n\n```python\nfrom pytgpt.imager import Imager\n\nimg = Imager()\n\nimg_generator = img.generate('Coding bot', amount=3, stream=True)\n\nimg.save(img_generator)\n\n# RAM friendly\n```\n\n</details>\n\n#### Using the **Prodia** provider\n\n```python\nfrom pytgpt.imager import Prodia\n\nimg = Prodia()\n\nimg_generator = img.generate('Coding bot', amount=3, stream=True)\n\nimg.save(img_generator)\n```\n\n</details>\n\n### Advanced Usage of Placeholders\n\nThe `generate` functionality has been enhanced starting from *v0.3.0* to enable comprehensive utilization of the `--with-copied` option and support for accepting piped inputs. This improvement introduces placeholders, offering dynamic values for more versatile interactions.\n\n| Placeholder | Represents |\n| ------------ | ----------- |\n| `{{stream}}` | The piped input |\n| `{{copied}}` | The last copied text |\n\nThis feature is particularly beneficial for intricate operations. 
For example:\n\n```bash\n$ git diff | pytgpt generate \"Here is a diff file: {{stream}} Make a concise commit message from it, aligning with my commit message history: {{copied}}\" --new\n```\n\n> In this illustration, `{{stream}}` denotes the result of the `$ git diff` operation, while `{{copied}}` signifies the content copied from the output of the `$ git log` command.\n\n### Awesome Prompts\n\n[These prompts](https://github.com/Simatwa/gpt-cli/blob/main/assets/all-acts.pdf?raw=True) are designed to guide the AI's behavior or responses in a particular direction, encouraging it to exhibit certain characteristics or behaviors. The term \"awesome-prompt\" is not a formal term in AI or machine learning literature, but it encapsulates the idea of crafting prompts that are effective in achieving desired outcomes. Let's say you want it to behave like a *Linux Terminal*, a *PHP Interpreter*, or just to [**JAIL BREAK.**](https://gist.github.com/coolaj86/6f4f7b30129b0251f61fa7baaa881516)\n\nExamples:\n\n```sh\n$ pytgpt interactive --awesome-prompt \"Linux Terminal\"\n# Act like a Linux Terminal\n\n$ pytgpt interactive -ap DAN\n# Jailbreak\n```\n\n> [!NOTE]\n> Awesome prompts are an alternative to `--intro`.\n> Run `$ pytgpt awesome whole` to list the available prompts (*200+*).\n> Run `$ pytgpt awesome --help` for more info.\n\n### Introducing RawDog\n\nRawDog is a powerful feature that exploits the versatile capabilities of Python to command and control your system as per your needs. You can do virtually anything with it, since it generates and executes Python code driven by **your prompts**! To have a taste of *RawDog*, simply append the flag `--rawdog` (short form `-rd`) in *generate/interactive* mode. This introduces a never-before-seen feature in the *tgpt ecosystem*. Thanks to [AbanteAI/rawdog](https://github.com/AbanteAI/rawdog) for the idea.\n\nThis can be useful in several ways. 
For instance:\n\n```sh\n$ pytgpt generate -n -q \"Visualize the disk usage using pie chart\" --rawdog\n```\n\nThis will pop up a window showing the system disk usage, as shown below.\n\n<p align=\"center\">\n<img src=\"https://github.com/Simatwa/python-tgpt/blob/main/assets/Figure_1.png?raw=true\" width='60%'>\n</p>\n\n## Passing Environment Variables\n\nPytgpt **v0.4.6** introduces a convenient way of taking variables from the environment.\nTo achieve that, set the environment variables in your operating system or script with the prefix `PYTGPT_` followed by the option name in uppercase, replacing dashes with underscores.\n\nFor example, for the option `--provider`, you would set an environment variable `PYTGPT_PROVIDER` to provide a default value for that option. The same applies to boolean flags such as `--rawdog`, whose environment variable will be `PYTGPT_RAWDOG` with its value being either `true` or `false`. Finally, `--awesome-prompt` will take the environment variable `PYTGPT_AWESOME_PROMPT`.\n\n> [!NOTE]\n> This is **NOT** limited to any particular command.\n\nThe environment variables can be overridden by explicitly declaring a new value.\n\n> [!TIP]\n> Save the variables in a `.env` file in your current directory or export them in your `~/.zshrc` file.\n> To load previous conversations from a `.txt` file, use the `-fp` or `--filepath` flag. If no flag is passed, the default one will be used. To load context from a file without altering its content, use the `--retain-file` flag.\n\n## Dynamic Provider & Further Interfaces\n\nVersion **0.4.6** also introduces a dynamic provider called `g4fauto`, which represents the fastest working g4f-based provider.\n\n> [!TIP]\n> To launch the web interface for g4f-based providers, simply run `$ pytgpt gpt4free gui`.\n> `$ pytgpt api run` will start the REST-API. 
> Access the docs and redoc at */docs* and */redoc*, respectively.\n\n## Speech Synthesis\n\nTo enable speech synthesis of responses, ensure you have either the [VLC player](https://www.videolan.org/vlc/index.html) installed on your system or, if you are a [Termux](https://termux.org) user, the [Termux:API](https://wiki.termux.com/wiki/Termux:API) package.\n\nTo activate speech synthesis, use the `--talk-to-me` flag or its shorthand `-ttm` when running your commands. For example:\n\n```bash\n$ pytgpt generate \"Generate an ogre story\" --talk-to-me\n```\n\nor\n\n```bash\n$ pytgpt interactive -ttm\n```\n\nThis flag instructs the system to vocalize the AI responses and play them back, enhancing the user experience with auditory feedback.\n\nVersion **0.6.4** introduces another dynamic provider, `auto`, which denotes the first working provider **overall**. This relieves you of the workload of manually checking for a working provider each time you fire up pytgpt. However, the `auto` provider does not work very well with streaming responses, so you may need to sacrifice performance for the sake of reliability.\n\n## [Telegram Bot](https://github.com/Simatwa/pytgpt-bot)\n\nIf you're not satisfied with the existing interfaces, [pytgpt-bot](https://github.com/Simatwa/pytgpt-bot) could be the solution you're seeking. This bot is designed to enhance your experience by offering a wide range of functionalities. 
Whether you're interested in engaging in AI-driven conversations, creating images and audio from text, or exploring other innovative features, [pytgpt-bot is equipped to meet your needs.](https://github.com/Simatwa/pytgpt-bot)\n\nThe bot is maintained as a separate project, so you just have to run a single command to install it:\n\n```bash\n$ pip install pytgpt-bot\n```\n\nUsage: `pytgpt bot run <bot-api-token>`\n\nOr you can simply interact with the instance currently running as [@pytgpt-bot](https://t.me/pytgpt_bot).\n\n<details>\n\n<summary>\n\nFor more usage info run `$ pytgpt --help`\n\n</summary>\n\n```\nUsage: pytgpt [OPTIONS] COMMAND [ARGS]...\n\nOptions:\n  -v, --version  Show the version and exit.\n  -h, --help     Show this message and exit.\n\nCommands:\n  api          FastAPI control endpoint\n  awesome      Perform CRUD operations on awesome-prompts\n  bot          Telegram bot interface control\n  generate     Generate a quick response with AI\n  gpt4free     Discover gpt4free models, providers etc\n  imager       Generate images with pollinations.ai\n  interactive  Chat with AI interactively (Default)\n  utils        Utility endpoint for pytgpt\n```\n\n</details>\n\n### API Health Status\n\n| No. | API | Status |\n|-----|-----|--------|\n| 1. | [On-render](https://python-tgpt.onrender.com) | [cron-job](https://pqfzhmvz.status.cron-job.org/) |\n\n## [CHANGELOG](https://github.com/Simatwa/python-tgpt/blob/main/docs/CHANGELOG.md)\n\n## Acknowledgements\n\n1. [x] [tgpt](https://github.com/aandrew-me/tgpt)\n2. [x] [gpt4free](https://github.com/xtekky/gpt4free)\n",
"bugtrack_url": null,
"license": "MIT",
"summary": "Interact with AI without API key",
"version": "0.8.1",
"project_urls": {
"Bug Report": "https://github.com/Simatwa/python-tgpt/issues/new",
"Documentation": "https://github.com/Simatwa/python-tgpt/blob/main/docs",
"Download": "https://github.com/Simatwa/python-tgpt/releases",
"Homepage": "https://github.com/Simatwa/python-tgpt",
"Issue Tracker": "https://github.com/Simatwa/python-tgpt/issues",
"Source Code": "https://github.com/Simatwa/python-tgpt"
},
"split_keywords": [
"chatgpt",
" gpt",
" tgpt",
" pytgpt",
" chatgpt-cli",
" chatgpt-sdk",
" chatgpt-api",
" llama-api",
" leo",
" llama2",
" blackboxai",
" opengpt",
" koboldai",
" openai",
" bard",
" gpt4free",
" gpt4all-cli",
" gptcli",
" poe-api",
" perplexity",
" novita",
" gpt4free"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "23b842f8a04b0aee5b3a716cbb74e7ad1ea32969bff492474ddcb0c7fd1c0ad4",
"md5": "0db59cc0d9ef7ba95dd730122ff6a8e6",
"sha256": "5cdae032aeb0bd90425463ace351b99d4403768d57e7970d6faef0ab8e895fb4"
},
"downloads": -1,
"filename": "python_tgpt-0.8.1.tar.gz",
"has_sig": false,
"md5_digest": "0db59cc0d9ef7ba95dd730122ff6a8e6",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.10",
"size": 81359,
"upload_time": "2024-11-21T10:37:41",
"upload_time_iso_8601": "2024-11-21T10:37:41.328829Z",
"url": "https://files.pythonhosted.org/packages/23/b8/42f8a04b0aee5b3a716cbb74e7ad1ea32969bff492474ddcb0c7fd1c0ad4/python_tgpt-0.8.1.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-11-21 10:37:41",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "Simatwa",
"github_project": "python-tgpt",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"requirements": [
{
"name": "requests",
"specs": [
[
">=",
"2.32.3"
]
]
},
{
"name": "click",
"specs": [
[
"==",
"8.1.3"
]
]
},
{
"name": "rich",
"specs": [
[
"==",
"13.3.4"
]
]
},
{
"name": "clipman",
"specs": [
[
"==",
"3.1.0"
]
]
},
{
"name": "pyperclip",
"specs": [
[
"==",
"1.8.2"
]
]
},
{
"name": "appdirs",
"specs": [
[
"==",
"1.4.4"
]
]
},
{
"name": "colorama",
"specs": [
[
"==",
"0.4.6"
]
]
},
{
"name": "g4f",
"specs": [
[
">=",
"0.3.3.3"
]
]
},
{
"name": "pyyaml",
"specs": [
[
"==",
"6.0.1"
]
]
},
{
"name": "matplotlib",
"specs": []
},
{
"name": "gpt4all",
"specs": [
[
"==",
"2.2.0"
]
]
},
{
"name": "poe-api-wrapper",
"specs": [
[
"==",
"1.7.0"
]
]
},
{
"name": "python-dotenv",
"specs": [
[
"==",
"1.0.0"
]
]
},
{
"name": "brotli",
"specs": [
[
"==",
"1.1.0"
]
]
},
{
"name": "Helpingai-T2",
"specs": [
[
"==",
"0.5"
]
]
},
{
"name": "fastapi",
"specs": [
[
"==",
"0.115.4"
]
]
},
{
"name": "python-vlc",
"specs": [
[
">=",
"3.0.20"
]
]
},
{
"name": "httpx",
"specs": [
[
">=",
"0.27.2"
]
]
},
{
"name": "prompt-toolkit",
"specs": [
[
"==",
"3.0.48"
]
]
}
],
"lcname": "python-tgpt"
}