pygpt-net


Name: pygpt-net
Version: 2.2.18
Home page: https://github.com/szczyglis-dev/py-gpt
Summary: Desktop AI Assistant powered by GPT-4, GPT-4V, GPT-3.5, DALL-E 3, Langchain LLMs, Llama-index, Whisper with chatbot, assistant, text completion, vision and image generation, internet access, chat with files, commands and code execution, file upload and download and more
Upload time: 2024-05-05 16:07:30
Author: Marcin Szczyglinski
Requires Python: <3.12,>=3.10
License: MIT
Keywords: py_gpt py-gpt pygpt desktop app gpt gpt4 gpt4-v gpt3.5 gpt-4 gpt-4v gpt-3.5 tts whisper vision chatgpt dall-e chat chatbot assistant text completion image generation ai api openai api key langchain llama-index presets ui qt pyside
requirements aiohttp aiosignal altgraph annotated-types anyio asgiref async-timeout asyncio attrs azure-core azure-identity backoff bcrypt beautifulsoup4 bleach bs4 build cachetools certifi cffi charset-normalizer chroma-hnswlib chromadb chromedriver-autoinstaller click coloredlogs croniter cryptography cssselect dataclasses-json defusedxml Deprecated dirtyjson distro docker docutils docx2txt EbookLib elastic-transport elasticsearch exceptiongroup fastapi fastjsonschema feedfinder2 feedparser filelock flatbuffers frozenlist fsspec future gkeepapi google-api-core google-api-python-client google-auth google-auth-httplib2 google-auth-oauthlib googleapis-common-protos gpsoauth greenlet grpcio h11 html2text httpcore httplib2 httptools httpx huggingface-hub humanfriendly idna importlib-metadata importlib-resources iniconfig jaraco.classes jeepney jieba3k Jinja2 joblib jsonpatch jsonpointer jsonschema jsonschema-specifications jupyter_client jupyter_core jupyterlab_pygments keyring kubernetes langchain langchain-community langchain-core langchain-experimental langchain-openai langsmith llama-index llama-index-agent-openai llama-index-cli llama-index-core llama-index-embeddings-azure-openai llama-index-embeddings-openai llama-index-indices-managed-llama-cloud llama-index-legacy llama-index-llms-azure-openai llama-index-llms-openai llama-index-multi-modal-llms-openai llama-index-program-openai llama-index-question-gen-openai llama-index-readers-chatgpt-plugin llama-index-readers-database llama-index-readers-file llama-index-readers-github llama-index-readers-google llama-index-readers-llama-parse llama-index-readers-microsoft-onedrive llama-index-readers-twitter llama-index-readers-web llama-index-vector-stores-chroma llama-index-vector-stores-elasticsearch llama-index-vector-stores-pinecone llama-index-vector-stores-redis llama-parse llamaindex-py-client lxml Markdown markdown-it-py MarkupSafe marshmallow mdurl mistune mmh3 monotonic more-itertools mpmath msal msal-extensions multidict mypy-extensions nbclient nbconvert nbformat nest-asyncio networkx newspaper3k nltk numpy oauth2client oauthlib onnxruntime openai opencv-python opentelemetry-api opentelemetry-exporter-otlp-proto-common opentelemetry-exporter-otlp-proto-grpc opentelemetry-instrumentation opentelemetry-instrumentation-asgi opentelemetry-instrumentation-fastapi opentelemetry-proto opentelemetry-sdk opentelemetry-semantic-conventions opentelemetry-util-http orjson outcome overrides packaging pandas pandocfilters pillow pinecone-client pip-tools pkginfo platformdirs playwright pluggy plumbum ply portalocker posthog protobuf psutil pulsar-client pyaml pyasn1 pyasn1-modules PyAudio pycparser pycryptodomex pydantic pydantic_core PyDrive pydub pyee pygame Pygments pyinstaller pyinstaller-hooks-contrib PyJWT PyMuPDF PyMuPDFb pyparsing pypdf PyPika pyproject_hooks pyserial PySide6 PySide6-Addons PySide6-Essentials PySocks pytest python-dateutil python-dotenv pytz pyxdg PyYAML pyzmq qt-material readme-renderer redis referencing regex requests requests-file requests-oauthlib requests-toolbelt retrying rfc3986 rich rpds-py rsa SecretStorage selenium sgmllib3k shiboken6 show-in-file-manager six sniffio sortedcontainers soupsieve SpeechRecognition SQLAlchemy starlette sympy tenacity tiktoken tinycss2 tinysegmenter tldextract tokenizers tomli tornado tqdm traitlets trio trio-websocket tweepy twine typer typing-inspect typing_extensions tzdata uritemplate urllib3 uvicorn watchfiles webencodings websocket-client websockets wikipedia wrapt wsproto 
yarl youtube-transcript-api zipp
# PyGPT - Desktop AI Assistant

[![pygpt](https://snapcraft.io/pygpt/badge.svg)](https://snapcraft.io/pygpt)

Release: **2.2.18** | build: **2024.05.05** | Python: **>=3.10, <3.12**

Official website: https://pygpt.net | Documentation: https://pygpt.readthedocs.io

Snap Store: https://snapcraft.io/pygpt | PyPi: https://pypi.org/project/pygpt-net

Compiled version for Linux (`tar.gz`) and Windows 10/11 (`msi`) 64-bit: https://pygpt.net/#download

## Overview

**PyGPT** is an **all-in-one** desktop AI assistant that provides direct interaction with OpenAI language models, including `GPT-4`, `GPT-4 Vision`, and `GPT-3.5`, through the `OpenAI API`. The application also integrates with alternative LLMs, such as those available on `HuggingFace`, via `Langchain`.

This assistant offers multiple modes of operation such as chat, assistants, completions, and image-related tasks using `DALL-E 3` for generation and `GPT-4 Vision` for image analysis. **PyGPT** has filesystem capabilities for file I/O, can generate and run Python code, execute system commands, execute custom commands and manage file transfers. It also allows models to perform web searches using `Google` and `Microsoft Bing`.

For audio interactions, **PyGPT** includes speech synthesis using the `Microsoft Azure`, `Google`, `Eleven Labs` and `OpenAI` Text-To-Speech services. Additionally, it features speech recognition capabilities provided by `OpenAI Whisper`, `Google` and `Bing`, enabling the application to understand spoken commands and transcribe audio inputs into text. It features context memory with save and load functionality, enabling users to resume interactions from predefined points in the conversation. Prompt creation and management are streamlined through an intuitive preset system.

**PyGPT**'s functionality extends through plugin support, allowing for custom enhancements. Its multi-modal capabilities make it an adaptable tool for a range of AI-assisted operations, such as text-based interactions, system automation, daily assisting, vision applications, natural language processing, code generation and image creation.

Multiple operation modes are included, such as chat, text completion, assistant, vision, Langchain, Chat with files (via `Llama-index`), commands execution, external API calls and image generation, making **PyGPT** a multi-tool for many AI-driven tasks.

**Video** (mp4, version `2.2.0`, build `2024-04-28`):

https://github.com/szczyglis-dev/py-gpt/assets/61396542/7140ded4-1639-4c12-ac33-201b68b99a16

**Screenshot** (version `2.2.0`, build `2024-04-28`):

![v2_main](https://github.com/szczyglis-dev/py-gpt/assets/61396542/d9f2e67f-919c-4faa-b059-6e2f5efd23e6)

You can download compiled 64-bit versions for Windows and Linux here: https://pygpt.net/#download

## Features

- Desktop AI Assistant for `Linux`, `Windows` and `Mac`, written in Python.
- Works similarly to `ChatGPT`, but locally (on a desktop computer).
- 9 modes of operation: Chat, Vision, Completion, Assistant, Image generation, Langchain, Chat with files, Experts and Agent (autonomous).
- Supports multiple models: `GPT-4`, `GPT-3.5`, and any model accessible through `Langchain`.
- Includes accessibility features for individuals with disabilities: customizable keyboard shortcuts, voice control, and translation of on-screen actions into audio via speech synthesis.
- Handles and stores the full context of conversations (short-term memory).
- Real-time video camera capture in Vision mode.
- Internet access via `Google` and `Microsoft Bing`.
- Speech synthesis via `Microsoft Azure`, `Google`, `Eleven Labs` and `OpenAI` Text-To-Speech services.
- Speech recognition via `OpenAI Whisper`, `Google`, `Google Cloud` and `Microsoft Bing`.
- Image analysis via `GPT-4 Vision`.
- Crontab / Task scheduler included.
- Integrated `Langchain` support (you can connect to any LLM, e.g., on `HuggingFace`).
- Integrated `Llama-index` support: chat with `txt`, `pdf`, `csv`, `html`, `md`, `docx`, `json`, `epub`, `xlsx`, `xml`, webpages, `Google`, `GitHub`, video/audio, images and other data types, or use conversation history as additional context provided to the model.
- Integrated calendar, day notes and search in contexts by selected date.
- Commands execution (via plugins: access to the local filesystem, Python code interpreter, system commands execution).
- Custom commands creation and execution.
- Manages files and attachments with options to upload, download, and organize.
- Context history with the capability to revert to previous contexts (long-term memory).
- Allows you to easily manage prompts with handy editable presets.
- Provides an intuitive operation and interface.
- Includes a notepad.
- Includes a simple painter / drawing tool.
- Includes optional Autonomous Mode (Agents).
- Supports multiple languages.
- Enables the use of all the powerful features of `GPT-4`, `GPT-4V`, and `GPT-3.5`.
- Requires no previous knowledge of using AI models.
- Simplifies image generation using `DALL-E 3` and `DALL-E 2`.
- Possesses the potential to support future OpenAI models.
- Fully configurable.
- Themes support.
- Real-time code syntax highlighting.
- Plugins support.
- Built-in token usage calculation.
- It's open source; source code is available on `GitHub`.
- Utilizes the user's own API key.

The application is free, open-source, and runs on PCs with `Linux`, `Windows 10`, `Windows 11` and `Mac`. 
Full Python source code is available on `GitHub`.

**PyGPT uses the user's API key - to use the application, you must have a registered OpenAI account and your own API key.**

You can also use built-in Langchain support to connect to other Large Language Models (LLMs), 
such as those on HuggingFace. Additional API keys may be required.

# Installation

## Compiled versions (Linux, Windows 10 and 11)

You can download compiled versions for `Linux` and `Windows` (10/11). 

Download the `.msi` or `tar.gz` for the appropriate OS from the download page at https://pygpt.net and then extract files from the archive and run the application. 64-bit only.

## Snap Store

You can install **PyGPT** directly from Snap Store:

```commandline
sudo snap install pygpt
```

To manage future updates just use:

```commandline
sudo snap refresh pygpt
```

[![Get it from the Snap Store](https://snapcraft.io/static/images/badges/en/snap-store-black.svg)](https://snapcraft.io/pygpt)

**Using camera:** to use the camera in the Snap version, you must connect the camera with:

```commandline
sudo snap connect pygpt:camera
```

**Using microphone:** to use the microphone in the Snap version, you must connect the microphone with:

```commandline
sudo snap connect pygpt:audio-record :audio-record
```

## PyPi (pip)

The application can also be installed from `PyPi` using `pip install`:

1. Create virtual environment:

```commandline
python3 -m venv venv
source venv/bin/activate
```

2. Install from PyPi:

``` commandline
pip install pygpt-net
```

3. Once installed run the command to start the application:

``` commandline
pygpt
```

## Source Code

An alternative method is to download the source code from `GitHub` and execute the application using the Python interpreter (>=3.10, <3.12). 

### Running from GitHub source code

1. Clone git repository or download .zip file:

```commandline
git clone https://github.com/szczyglis-dev/py-gpt.git
cd py-gpt
```

2. Create virtual environment:

```commandline
python3 -m venv venv
source venv/bin/activate
```

3. Install requirements:

```commandline
pip install -r requirements.txt
```

4. Run the application:

```commandline
python3 run.py
```

**Install with Poetry**

1. Clone git repository or download .zip file:

```commandline
git clone https://github.com/szczyglis-dev/py-gpt.git
cd py-gpt
```

2. Install Poetry (if not installed):

```commandline
pip install poetry
```

3. Create a new virtual environment that uses Python 3.10:

```commandline
poetry env use python3.10
poetry shell
```

4. Install requirements:

```commandline
poetry install
```

5. Run the application:

```commandline
poetry run python3 run.py
```

**Tip**: you can use `PyInstaller` to create a compiled version of
the application for your system (`PyInstaller` version >= `6.0.0` is required).

### Troubleshooting

If you have problems with the `xcb` plugin with newer versions of PySide on Linux, e.g. an error like this:

```commandline
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. 
Reinstalling the application may fix this problem.
```

...then install `libxcb`:

```commandline
sudo apt install libxcb-cursor0
```

If you have problems with audio on Linux, try installing `portaudio19-dev` and/or `libasound2`:

```commandline
sudo apt install portaudio19-dev
```

```commandline
sudo apt install libasound2
sudo apt install libasound2-data 
sudo apt install libasound2-plugins
```

**Access to camera in Snap version:**

To use the camera in Vision mode in the Snap version, you must connect the camera with:

```commandline
sudo snap connect pygpt:camera
```

**Access to microphone in Snap version:**

To use the microphone in the Snap version, you must connect the microphone with:

```commandline
sudo snap connect pygpt:audio-record :audio-record
```

**Windows and VC++ Redistributable**

On Windows, proper functioning of the application requires the installation of the `VC++ Redistributable`, which can be found on the Microsoft website:

https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist

The libraries from this environment are used by `PySide6` - one of the base packages used by PyGPT. 
The absence of the installed libraries may cause display errors or completely prevent the application from running.

It may also be necessary to add the path `C:\path\to\venv\Lib\python3.x\site-packages\PySide6` to the `PATH` variable.

**WebEngine/Chromium renderer and OpenGL problems**

If you have problems with the `WebEngine / Chromium` renderer, you can force legacy mode by launching the app with command-line arguments:

```commandline
python3 run.py --legacy=1
```

and to force disable OpenGL hardware acceleration:

```commandline
python3 run.py --disable-gpu=1
```

You can also manually enable legacy mode by editing the config file - open `%WORKDIR%/config.json` in an editor and set the following options:

``` json
"render.engine": "legacy",
"render.open_gl": false,
```

## Other requirements

For operation, an internet connection is needed (for API connectivity), a registered OpenAI account, 
and an active API key that must be input into the program.

## Debugging and logging

**Tip:** Go to the `Debugging and Logging` section for instructions on how to log and diagnose issues in a more detailed manner.


# Quick Start

## Setting-up OpenAI API KEY

**Tip:** The API key is required to work with the OpenAI API. If you wish to use custom API endpoints or a local API that does not require an API key, simply enter anything into the API key field to avoid a prompt about the API key being empty.

During the initial launch, you must configure your API key within the application.

To do so, navigate to the menu:

``` ini
Config -> Settings...
```

and then paste the API key into the `OpenAI API KEY` field.

![v2_settings](https://github.com/szczyglis-dev/py-gpt/assets/61396542/43622c58-6cdb-4ed8-b47d-47729763db04)

The API key can be obtained by registering on the OpenAI website:

<https://platform.openai.com>

Your API keys will be available here:

<https://platform.openai.com/account/api-keys>

**Note:** The ability to use models within the application depends on the API user's access to a given model!

# Working modes

## Chat

**+ inline Vision and Image generation**

This mode in **PyGPT** mirrors `ChatGPT`, allowing you to chat with models such as `GPT-4`, `GPT-4 Turbo` and `GPT-3.5`. It's easy to switch models whenever you want. It works by using the `ChatCompletion API`.

The main part of the interface is a chat window where conversations appear. Right below that is where you type your messages. On the right side of the screen, there's a section to set up or change your system prompts. You can also save these setups as presets to quickly switch between different models or tasks.

Above where you type your messages, the interface shows you the number of tokens your message will use up as you type it – this helps to keep track of usage. There's also a feature to upload files in this area. Go to the `Files` tab to manage your uploads or add attachments to send to the OpenAI API (but this only takes effect in `Assistant` and `Vision` modes).

![v2_mode_chat](https://github.com/szczyglis-dev/py-gpt/assets/61396542/f573ee22-8539-4259-b180-f97e54bc0d94)

**Vision:** If you want to send photos or camera images for analysis, you must enable the **GPT-4 Vision Inline** plugin in the Plugins menu.
The plugin allows you to send photos or camera images for analysis in any Chat mode:

![v3_vision_plugins](https://github.com/szczyglis-dev/py-gpt/assets/61396542/104b0a80-7cf8-4a02-aa74-27e89ad2e409)

With this plugin, you can capture an image with your camera or attach an image and send it for analysis to discuss the photograph:

![v3_vision_chat](https://github.com/szczyglis-dev/py-gpt/assets/61396542/3fbd99e2-5bbf-4bd4-81d8-fd4d7db9d8eb)

**Image generation:** If you want to generate images (using DALL-E) directly in chat, you must enable the **DALL-E 3 Inline** plugin in the Plugins menu.
The plugin allows you to generate images in Chat mode:

![v3_img_chat](https://github.com/szczyglis-dev/py-gpt/assets/61396542/c288a4b3-c932-4201-b5a3-8452aea49817)

## Completion

This mode provides in-depth access to a broader range of capabilities offered by Large Language Models (LLMs). While it maintains a chat-like interface for user interaction, it introduces additional settings and functional richness beyond typical chat exchanges. Users can leverage this mode to prompt models for complex text completions, role-play dialogues between different characters, perform text analysis, and execute a variety of other sophisticated tasks. It supports any model provided by the OpenAI API as well as other models through `Langchain`.

Similar to chat mode, on the right-hand side of the interface, there are convenient presets. These allow you to fine-tune instructions and swiftly transition between varied configurations and pre-made prompt templates.

Additionally, this mode offers options for labeling the AI and the user, making it possible to simulate dialogues between specific characters - for example, you could create a conversation between Batman and the Joker, as predefined in the prompt. This feature presents a range of creative possibilities for setting up different conversational scenarios in an engaging and exploratory manner.

![v2_mode_completion](https://github.com/szczyglis-dev/py-gpt/assets/61396542/045ecb99-edcb-4eb1-9ff0-0b493dee0e27)

From version `2.0.107`, the `davinci` models are deprecated and have been replaced with the `gpt-3.5-turbo-instruct` model in Completion mode.

## Assistants

This mode uses OpenAI's **Assistants API**.

This mode expands on the basic chat functionality by including additional external tools like a `Code Interpreter` for executing code, `Retrieval Files` for accessing files, and custom `Functions` for enhanced interaction and integration with other APIs or services. In this mode, you can easily upload and download files. **PyGPT** streamlines file management, enabling you to quickly upload documents and manage files created by the model.

Setting up new assistants is simple - a single click is all it takes, and they instantly sync with the `OpenAI API`. Importing assistants you've previously created with OpenAI into **PyGPT** is also a seamless process.

![v2_mode_assistant](https://github.com/szczyglis-dev/py-gpt/assets/61396542/5c3b5604-928d-4f29-940a-21cc83c8dc34)

In Assistant mode, you can store your files (per assistant) and manage them easily from the app:

![v2_mode_assistant_upload](https://github.com/szczyglis-dev/py-gpt/assets/61396542/b2c835ea-2816-4b85-bb6f-e08874e758f7)

Please note that token usage calculation is unavailable in this mode. Nonetheless, file (attachment) 
uploads are supported. Simply navigate to the `Files` tab to effortlessly manage files and attachments which 
can be sent to the OpenAI API.

### Vector stores (via Assistants API)

Assistant mode supports the use of external vector databases offered by the OpenAI API. This feature allows you to store your files in a database and then search them using the Assistant's API. Each assistant can be linked to one vector database—if a database is linked, all files uploaded in this mode will be stored in the linked vector database. If an assistant does not have a linked vector database, a temporary database is automatically created during the file upload, which is accessible only in the current thread. Files from temporary databases are automatically deleted after 7 days.

To enable the use of vector stores, enable the `Chat with files` checkbox in the Assistant settings. This enables the `File search` tool in Assistants API.

To manage external vector databases, click the DB icon next to the vector database selection list in the Assistant creation and editing window. In this management window, you can create a new database, edit an existing one, or import a list of all existing databases from the OpenAI server:

![v2_assistant_stores](https://github.com/szczyglis-dev/py-gpt/assets/61396542/2f605326-5bf5-4c82-8dfd-cb1c0edf6724)

You can define, using `Expire days`, how long files should be automatically kept in the database before deletion (as storing files on OpenAI incurs costs). If the value is set to 0, files will not be automatically deleted.

The vector database in use will be displayed in the list of uploaded files, on the field to the right—if a file is stored in a database, the name of the database will be displayed there; if not, information will be shown indicating that the file is only accessible within the thread:

![v2_assistant_stores_upload](https://github.com/szczyglis-dev/py-gpt/assets/61396542/8f13c2eb-f961-4eae-b08b-0b4937f06ca9)

## Vision (GPT-4 Vision)

**INFO:** From version `2.2.6` (2024-04-30) Vision is available directly in Chat mode, without any plugins - if the model supports Vision (currently: `gpt-4-turbo` and `gpt-4-turbo-2024-04-09`).

This mode enables image analysis using the `GPT-4 Vision` model. Functioning much like the chat mode, 
it also allows you to upload images or provide URLs to images. The vision feature can analyze both local 
images and those found online. 

Vision is integrated into any chat mode via the `GPT-4 Vision (inline)` plugin. Just enable the plugin and use Vision in the standard modes.

Vision mode also includes real-time video capture from the camera. To enable capture, check the `Camera` option in the bottom-right corner. It will enable real-time capturing from your camera. To capture an image from the camera and append it to the chat, just click on the video preview on the left side. You can also enable `Auto capture` - an image will be captured and appended to the chat message every time you send a message.

![v2_capture_enable](https://github.com/szczyglis-dev/py-gpt/assets/61396542/c40ce0b4-57c8-4643-9982-25d15e68377e)

**1) Video camera real-time image capture**

![v2_capture1](https://github.com/szczyglis-dev/py-gpt/assets/61396542/477bb7fa-4639-42bb-8466-937e88e4a835)

![v3_vision_chat](https://github.com/szczyglis-dev/py-gpt/assets/61396542/3fbd99e2-5bbf-4bd4-81d8-fd4d7db9d8eb)

**2) you can also provide an image URL**

![v2_mode_vision](https://github.com/szczyglis-dev/py-gpt/assets/61396542/d1b68225-bf7f-4aa5-9562-b973211b57d7)

**3) or you can just upload your local images or use the inline Vision in the standard chat mode:**

![v2_mode_vision_upload](https://github.com/szczyglis-dev/py-gpt/assets/61396542/7885d7d0-072e-4053-a81b-6374711fd348)

**Tip:** When using `Vision (inline)` by utilizing a plugin in standard mode, such as `Chat` (not `Vision` mode), the `+ Vision` special checkbox will appear at the bottom of the Chat window. It will be automatically enabled any time you provide content for analysis (like an uploaded photo). When the checkbox is enabled, the vision model is used. If you wish to exit the vision model after image analysis, simply uncheck the checkbox. It will activate again automatically when the next image content for analysis is provided.

## Langchain

This mode enables you to work with models that are supported by `Langchain`. The Langchain support is integrated 
into the application, allowing you to interact with any LLM by simply supplying a configuration 
file for the specific model. You can add as many models as you like; just list them in the configuration 
file named `models.json`.

LLM providers supported by **PyGPT**:

```
- OpenAI
- Azure OpenAI
- HuggingFace
- Anthropic
- Llama 2
- Ollama
```

![v2_mode_langchain](https://github.com/szczyglis-dev/py-gpt/assets/61396542/0471b6f9-7953-42cc-92bd-007f2c2e59d0)

You have the ability to add custom model wrappers for models that are not available by default in **PyGPT**. 
To integrate a new model, you can create your own wrapper and register it with the application. 
Detailed instructions for this process are provided in the section titled `Managing models / Adding models via Langchain`.

##  Chat with files (Llama-index)

This mode enables chat interaction with your documents and entire context history through conversation. 
It seamlessly incorporates `Llama-index` into the chat interface, allowing for immediate querying of your indexed documents.

**Querying single files**

You can also query individual files "on the fly" using the `query_file` command from the `Files I/O` plugin. This allows you to query any file by simply asking a question about that file. A temporary index will be created in memory for the file being queried, and an answer will be returned from it. From version `2.1.9`, a similar command is available for querying web and external content: `Directly query web content with Llama-index`.

For example:

If you have a file: `data/my_cars.txt` with content `My car is red.`

You can ask for: `Query the file my_cars.txt about what color my car is.`

And you will receive the response: `Red`.

Note: this command indexes the file only for the current query and does not persist it in the database. To also store queried files in the standard index, you must enable the "Auto-index readed files" option in the plugin settings. Remember to enable the "Execute commands" checkbox to allow the usage of query commands.

**Using Chat with files mode**

In this mode, you are querying the whole index, stored in a vector store database.
To start, you need to index (embed) the files you want to use as additional context.
Embedding transforms your text data into vectors. If you're unfamiliar with embeddings and how they work, check out this article:

https://stackoverflow.blog/2023/11/09/an-intuitive-introduction-to-text-embeddings/

For a visualization from OpenAI's page, see this picture:

![vectors](https://github.com/szczyglis-dev/py-gpt/assets/61396542/4bbb3860-58a0-410d-b5cb-3fbfadf1a367)

Source: https://cdn.openai.com/new-and-improved-embedding-model/draft-20221214a/vectors-3.svg
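
To make the idea concrete, here is a minimal, self-contained sketch of what "turning text into a vector" means (this is only an illustration using the OpenAI Python client, not how **PyGPT** performs indexing internally; the embedding model name is just an example):

```python
# Illustration only: convert a piece of text into an embedding vector.
# Requires the `openai` package and a valid OpenAI API key.
from openai import OpenAI

client = OpenAI(api_key="sk-...")  # your own API key

response = client.embeddings.create(
    model="text-embedding-3-small",  # example embedding model
    input="My car is red.",
)

vector = response.data[0].embedding  # a plain list of floats
print(len(vector), vector[:5])       # dimensionality and first few values
```

Similar texts produce vectors that lie close to each other, which is what makes semantic search over your indexed files possible.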

To index your files, simply copy or upload them into the `data` directory and initiate indexing (embedding) by clicking the `Index all` button, or right-click on a file and select `Index...`. Additionally, you have the option to utilize data from indexed files in any Chat mode by activating the `Chat with files (Llama-index, inline)` plugin.

![v2_idx1](https://github.com/szczyglis-dev/py-gpt/assets/61396542/c3dfbc89-cbfe-4ae3-b7e7-821401d755cd)

After the file(s) are indexed (embedded in vector store), you can use context from them in chat mode:

![v2_idx2](https://github.com/szczyglis-dev/py-gpt/assets/61396542/70c9ab66-82d9-4f61-81ed-268743bfa6b4)

Built-in file loaders: 

**Files:**

- CSV files (csv)
- Epub files (epub)
- Excel .xlsx spreadsheets (xlsx)
- HTML files (html, htm)
- IPYNB Notebook files (ipynb)
- Image (vision) (jpg, jpeg, png, gif, bmp, tiff, webp)
- JSON files (json)
- Markdown files (md)
- PDF documents (pdf)
- Txt/raw files (txt)
- Video/audio (mp4, avi, mov, mkv, webm, mp3, mpeg, mpga, m4a, wav)
- Word .docx documents (docx)
- XML files (xml)

**Web/external content:**

- Bitbucket
- ChatGPT Retrieval Plugin
- GitHub Issues
- GitHub Repository
- Google Calendar
- Google Docs
- Google Drive 
- Google Gmail
- Google Keep
- Google Sheets
- Microsoft OneDrive
- RSS
- SQL Database
- Sitemap (XML)
- Twitter/X posts
- Webpages (crawling any webpage content)
- YouTube (transcriptions)

You can configure data loaders in `Settings / Llama-index / Data Loaders` by providing a list of keyword arguments for the specified loaders.
You can also develop and provide your own custom loader and register it within the application.

Llama-index is also integrated with the context database - you can use data from the database (your context history) as additional context in the discussion. 
Options for indexing the existing context history or enabling real-time indexing of new entries (from the database) are available in the `Settings / Llama-index` section.

**WARNING:** remember that when indexing content, API calls to the embedding model are used. Each indexing consumes additional tokens. Always monitor the number of tokens used on the OpenAI page.

**Tip:** when using `Chat with files`, you are using additional context from the database and from files indexed from the `data` directory, not the files sent via the `Attachments` tab. 
The `Attachments` tab in `Chat with files` mode can be used only to provide images to the `Vision (inline)` plugin.

**Token limit:** When you use `Chat with files` in non-query mode, Llama-index adds extra context to the system prompt. If you use plugins (which also add more instructions to the system prompt), you might go over the maximum number of tokens allowed. If you get a warning that says you've used too many tokens, turn off plugins you're not using or turn off the "Execute commands" option to reduce the number of tokens used by the system prompt.

**Available vector stores** (provided by `Llama-index`):

```
- ChromaVectorStore
- ElasticsearchStore
- PinecodeVectorStore
- RedisVectorStore
- SimpleVectorStore
```

You can configure the selected vector store by providing config options like `api_key`, etc. in the `Settings -> Llama-index` window. 
Arguments provided here (in the `Vector Store (**kwargs)` list in `Advanced settings`) will be passed to the selected vector store provider. 
You can check the keyword arguments needed by the selected provider on the Llama-index API reference page: 

https://docs.llamaindex.ai/en/stable/api_reference/storage/vector_store.html

Which keyword arguments are passed to providers?

For `ChromaVectorStore` and `SimpleVectorStore`, all arguments are set by PyGPT and passed internally (you do not need to configure anything).
For the other providers, you can provide these arguments:

**ElasticsearchStore**

Keyword arguments for ElasticsearchStore(`**kwargs`):

- `index_name` (default: current index ID, already set, not required)
- any other keyword arguments provided on the list

**PinecodeVectorStore**

Keyword arguments for Pinecone(`**kwargs`):

- `api_key`
- `index_name` (default: current index ID, already set, not required)

**RedisVectorStore**

Keyword arguments for RedisVectorStore(`**kwargs`):

- `index_name` (default: current index ID, already set, not required)
- any other keyword arguments provided on the list

You can extend the list of available providers by creating a custom provider and registering it at app launch.

By default, you are using chat-based mode when using `Chat with files`.
If you want to only query the index (without chat), you can enable the `Query index only (without chat)` option.

### Adding custom vector stores and data loaders

You can create a custom vector store provider or data loader for your data and develop a custom launcher for the application. To register your custom vector store provider or data loader, simply pass the vector store provider instance in the `vector_stores` keyword argument and the loader instance in the `loaders` keyword argument:

```python

# custom_launcher.py

from pygpt_net.app import run
from plugins import CustomPlugin, OtherCustomPlugin
from llms import CustomLLM
from vector_stores import CustomVectorStore
from loaders import CustomLoader

plugins = [
    CustomPlugin(),
    OtherCustomPlugin(),
]
llms = [
    CustomLLM(),
]
vector_stores = [
    CustomVectorStore(),
]
loaders = [
    CustomLoader(),
]

run(
    plugins=plugins,
    llms=llms,
    vector_stores=vector_stores,  # <--- list with custom vector store providers
    loaders=loaders  # <--- list with custom data loaders
)
```
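
Assuming the snippet above is saved as `custom_launcher.py` next to your custom providers, you would then start the application with your launcher instead of `run.py`:

```commandline
python3 custom_launcher.py
```
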
The vector store provider must be an instance of `pygpt_net.provider.vector_stores.base.BaseStore`. 
You can review the code of the built-in providers in `pygpt_net.provider.vector_stores` and use them as examples when creating a custom provider.

The data loader must be an instance of `pygpt_net.provider.loaders.base.BaseLoader`. 
You can review the code of the built-in loaders in `pygpt_net.provider.loaders` and use them as examples when creating a custom loader.

**Configuring data loaders**

In the `Settings -> Llama-index -> Data loaders` section you can define additional keyword arguments to pass into the data loader instance.

In most cases, the internal Llama-index loaders are used. 
You can check these base loaders, e.g., here:

File: https://github.com/run-llama/llama_index/tree/main/llama-index-integrations/readers/llama-index-readers-file/llama_index/readers/file

Web: https://github.com/run-llama/llama_index/tree/main/llama-index-integrations/readers/llama-index-readers-web

**Tip:** to index external data or data from the Web, just ask for it by using the `Command: Web Search` plugin, e.g. you can ask the model with `Please index the youtube video: URL to video`, etc. The data loader for the specified content will be chosen automatically.

Allowed additional keyword arguments for built-in data loaders (files):

**CSV Files**  (file_csv)

- `concat_rows` - bool, default: `True`
- `encoding` - str, default: `utf-8`

**HTML Files**  (file_html)

- `tag` - str, default: `section`
- `ignore_no_id` - bool, default: `False`

**Image (vision)**  (file_image_vision)

This loader can operate in two modes: local model and API.
If the local mode is enabled, then the local model will be used. The local mode requires a Python/PyPi version of the application and is not available in the compiled or Snap versions.
If the API mode (default) is selected, then the OpenAI API and the standard vision model will be used. 

**Note:** Usage of API mode consumes additional tokens in OpenAI API (for `GPT-4 Vision` model)!

Local mode requires `torch`, `transformers`, `sentencepiece` and `Pillow` to be installed and uses the `Salesforce/blip2-opt-2.7b` model to describe images.

- `keep_image` - bool, default: `False`
- `local_prompt` - str, default: `Question: describe what you see in this image. Answer:`
- `api_prompt` - str, default: `Describe what you see in this image` - Prompt to use in API
- `api_model` - str, default: `gpt-4-vision-preview` - Model to use in API
- `api_tokens` - int, default: `1000` - Max output tokens in API

**IPYNB Notebook files**  (file_ipynb)

- `parser_config` - dict, default: `None`
- `concatenate` - bool, default: `False`

**Markdown files**  (file_md)

- `remove_hyperlinks` - bool, default: `True`
- `remove_images` - bool, default: `True`

**PDF documents**  (file_pdf)

- `return_full_document` - bool, default: `False`

**Video/Audio**  (file_video_audio)

This loader can operate in two modes: local model and API.
If the local mode is enabled, then the local `Whisper` model will be used. The local mode requires a Python/PyPi version of the application and is not available in the compiled or Snap versions.
If the API mode (default) is selected, then the currently selected provider in the `Audio Input` plugin will be used. If `OpenAI Whisper` is chosen, then the OpenAI API and the API Whisper model will be used. 

**Note:** Usage of Whisper via API consumes additional tokens in OpenAI API (for `Whisper` model)!

Local mode requires `torch` and `openai-whisper` to be installed and uses the `Whisper` model locally to transcribe video and audio.

- `model_version` - str, default: `base` - Whisper model to use, available models: https://github.com/openai/whisper

**XML files**  (file_xml)

- `tree_level_split` - int, default: `0`

Allowed additional keyword arguments for built-in data loaders (Web and external content):

**Bitbucket**  (web_bitbucket)

- `username` - str, default: `None`
- `api_key` - str, default: `None`
- `extensions_to_skip` - list, default: `[]`

**ChatGPT Retrieval**  (web_chatgpt_retrieval)

- `endpoint_url` - str, default: `None`
- `bearer_token` - str, default: `None`
- `retries` - int, default: `None`
- `batch_size` - int, default: `100`

**Google Calendar** (web_google_calendar)

- `credentials_path` - str, default: `credentials.json`
- `token_path` - str, default: `token.json`

**Google Docs** (web_google_docs)

- `credentials_path` - str, default: `credentials.json`
- `token_path` - str, default: `token.json`

**Google Drive** (web_google_drive)

- `credentials_path` - str, default: `credentials.json`
- `token_path` - str, default: `token.json`
- `pydrive_creds_path` - str, default: `creds.txt`

**Google Gmail** (web_google_gmail)

- `credentials_path` - str, default: `credentials.json`
- `token_path` - str, default: `token.json`
- `use_iterative_parser` - bool, default: `False`
- `max_results` - int, default: `10`
- `results_per_page` - int, default: `None`

**Google Keep** (web_google_keep)

- `credentials_path` - str, default: `keep_credentials.json`

**Google Sheets** (web_google_sheets)

- `credentials_path` - str, default: `credentials.json`
- `token_path` - str, default: `token.json`

**GitHub Issues**  (web_github_issues)

- `token` - str, default: `None`
- `verbose` - bool, default: `False`

**GitHub Repository**  (web_github_repository)

- `token` - str, default: `None`
- `verbose` - bool, default: `False`
- `concurrent_requests` - int, default: `5`
- `timeout` - int, default: `5`
- `retries` - int, default: `0`
- `filter_dirs_include` - list, default: `None`
- `filter_dirs_exclude` - list, default: `None`
- `filter_file_ext_include` - list, default: `None`
- `filter_file_ext_exclude` - list, default: `None`

**Microsoft OneDrive**  (web_microsoft_onedrive)

- `client_id` - str, default: `None`
- `client_secret` - str, default: `None`
- `tenant_id` - str, default: `consumers`

**Sitemap (XML)**  (web_sitemap)

- `html_to_text` - bool, default: `False`
- `limit` - int, default: `10`

**SQL Database**  (web_database)

- `engine` - str, default: `None`
- `uri` - str, default: `None`
- `scheme` - str, default: `None`
- `host` - str, default: `None`
- `port` - str, default: `None`
- `user` - str, default: `None`
- `password` - str, default: `None`
- `dbname` - str, default: `None`

**Twitter/X posts**  (web_twitter)

- `bearer_token` - str, default: `None`
- `num_tweets` - int, default: `100`

##  Agent (autonomous) 

**This mode is experimental.**

**WARNING: Please use this mode with caution** - autonomous mode, when connected with other plugins, may produce unexpected results!

This mode activates an autonomous mode, in which the AI begins a conversation with itself. 
You can set this loop to run for any number of iterations. Throughout this sequence, the model will engage
in self-dialogue, answering its own questions and comments, in order to find the best possible solution, subjecting previously generated steps to criticism.

![v2_agent_toolbox](https://github.com/szczyglis-dev/py-gpt/assets/61396542/a0ae5d13-942e-4a18-9c53-33e7ad1886ff)

**WARNING:** Setting the number of run steps (iterations) to `0` activates an infinite loop which can generate a large number of requests and cause very high token consumption, so use this option with caution! Confirmation will be displayed every time you run the infinite loop.

This mode is similar to `Auto-GPT` - it can be used to create more advanced inferences and to solve problems by breaking them down into subtasks that the model will autonomously perform one after another until the goal is achieved.

You can create presets with custom instructions for multiple agents, incorporating various workflows, instructions, and goals to achieve.

All plugins are available for agents, so you can enable features such as file access, command execution, web searching, image generation, vision analysis, etc., for your agents. Connecting agents with plugins can create a fully autonomous, self-sufficient system. All currently enabled plugins are automatically available to the Agent.

When the `Auto-stop` option is enabled, the agent will attempt to stop once the goal has been reached.

**Options**

The agent is essentially a **virtual** mode that internally sequences the execution of a selected underlying mode. 
You can choose which internal mode the agent should use in the settings:

```Settings / Agent (autonomous) / Sub-mode to use```

Available choices include: `chat`, `completion`, `langchain`, `vision`, `llama_index` (Chat with files).

Default is: `chat`.

If you want to use the Llama-index mode when running the agent, you can also specify which index `Llama-index` should use with the option:

```Settings / Agent (autonomous) / Index to use```

![v2_agent_settings](https://github.com/szczyglis-dev/py-gpt/assets/61396542/c577d219-eb25-4f0e-9ea5-adf20a6b6b81)


##  Experts (co-op, co-operation mode)

Added in version 2.2.7 (2024-05-01).

**This mode is experimental.**

Expert mode allows for the creation of experts (using presets) and then consulting them during a conversation. In this mode, a primary base context is created for conducting the conversation. From within this context, the model can make requests to an expert to perform a task and return the results to the main thread. When an expert is called in the background, a separate context is created for them with their own memory. This means that each expert, during the life of one main context, also has access to their own memory via their separate, isolated context.

**In simple terms - you can imagine an expert as a separate, additional instance of the model running in the background, which can be called at any moment for assistance, with its own context and memory, as well as its own specialized instructions in a given subject.**

Experts do not share contexts with one another, and the only point of contact between them is the main conversation thread. In this main thread, the model acts as a manager of experts, who can exchange data between them as needed.

An expert is selected based on the name in the presets; for example, naming your expert as: ID = python_expert, name = "Python programmer" will create an expert whom the model will attempt to invoke for matters related to Python programming. You can also manually request to refer to a given expert:

```bash
Call the Python expert to generate some code.
```

Experts can be activated or deactivated - to enable or disable them, use the RMB context menu and select the `Enable/Disable` options from the presets list. Only enabled experts are available for use in the thread.

Experts can also be used in `Agent (autonomous)` mode - by creating a new agent using a preset. Simply move the appropriate experts to the active list to automatically make them available for use by the agent.

You can also use experts in "inline" mode - by activating the `Experts (inline)` plugin. This allows for the use of experts in any mode, such as normal chat.

Expert mode, like agent mode, is a "virtual" mode - you need to select a target mode of operation for it, which can be done in the settings at `Settings / Agent (autonomous) / Sub-mode for experts`.

You can also ask for a list of active experts at any time:

```bash
Give me a list of active experts.
```

# Accessibility

Since version `2.2.8`, PyGPT includes beta support for accessibility and voice control. This may be very useful for blind users.


In the `Config / Accessibility` menu, you can turn on accessibility features such as:


- activating voice control

- translating actions and events on the screen with audio speech

- setting up keyboard shortcuts for actions.


**Using voice control**

Voice control can be turned on in two ways: globally, through settings in `Config -> Accessibility`, and by using the `Voice control (inline)` plugin. Both options let you use the same voice commands, but they work a bit differently - the global option allows you to run commands outside of a conversation, anywhere, while the plugin option lets you execute commands directly during a conversation – allowing you to interact with the model and execute commands at the same time, within the conversation.

In the plugin (inline) option, you can also turn on a special trigger word that will be needed for content to be recognized as a voice command. You can set this up by going to `Plugins -> Settings -> Voice Control (inline)`:

```bash
Magic prefix for voice commands
```

**Tip:** When voice control is enabled via the plugin, simply speak your commands along with the conversation content using the standard `Microphone` button.


**Enabling voice control globally**


Turn on the voice control option in `Config / Accessibility`:


```bash
Enable voice control (using microphone)
```

Once you enable this option, a `Voice Control` button will appear in the bottom-right corner of the window. When you click on this button, the microphone will start listening; clicking it again stops listening and starts recognizing the voice command you said. You can cancel voice recording at any time with the `ESC` key. You can also set a keyboard shortcut to turn voice recording on/off.


Voice command recognition works based on a model, so you don't have to worry about saying things perfectly.


**Here's a list of commands you can ask for by voice:**

- Get the current application status
- Exit the application
- Enable audio output
- Disable audio output
- Enable audio input
- Disable audio input
- Add a memo to the calendar
- Clear memos from calendar
- Read the calendar memos
- Enable the camera
- Disable the camera
- Capture image from camera
- Create a new context
- Go to the previous context
- Go to the next context
- Go to the latest context
- Focus on the input
- Send the input
- Clear the input
- Get current conversation info
- Get available commands list
- Stop executing current action
- Clear the attachments
- Read the last conversation entry
- Read the whole conversation
- Rename current context
- Search for a conversation
- Clear the search results
- Send the message to input
- Append message to current input without sending it
- Switch to chat mode
- Switch to chat with files (llama-index) mode
- Switch to the next mode
- Switch to the previous mode
- Switch to the next model
- Switch to the previous model
- Add note to notepad
- Clear notepad contents
- Read current notepad contents
- Switch to the next preset
- Switch to the previous preset
- Switch to the chat tab
- Switch to the calendar tab
- Switch to the draw (painter) tab
- Switch to the files tab
- Switch to the notepad tab
- Switch to the next tab
- Switch to the previous tab
- Start listening for voice input
- Stop listening for voice input
- Toggle listening for voice input

More commands coming soon.

Just ask for an action that matches one of the descriptions above. These descriptions are also known to the model, and relevant commands are assigned to them. When you voice a command that fits one of those patterns, the model will trigger the appropriate action.


For convenience, you can enable a short sound to play when voice recording starts and stops. To do this, turn on the option:


```bash
Audio notify microphone listening start/stop
```

To enable a sound notification when a voice command is recognized and command execution begins, turn on the option:


```bash
Audio notify voice command execution
```

For voice translation of on-screen events and information about completed commands via speech synthesis, you can turn on the option:

```bash
Use voice synthesis to describe events on the screen.
```

![v2_access](https://github.com/szczyglis-dev/py-gpt/assets/61396542/02dd161b-6fb1-48f9-9217-40c658888833)


# Files and attachments

## Input attachments (upload)

**PyGPT** makes it simple for users to upload files to the server and send them to the model for tasks like analysis, similar to attaching files in `ChatGPT`. There's a separate `Files` tab next to the text input area specifically for managing file uploads. Users can opt to have files automatically deleted after each upload or keep them on the list for repeated use.

![v2_file_input](https://github.com/szczyglis-dev/py-gpt/assets/61396542/bd3d9840-2bc4-4ba8-a603-69724f9eb620)

The attachment feature is available in both the `Assistant` and `Vision` modes by default.
In `Assistant` mode, you can send documents and files to analyze, while in `Vision` mode, you can send images.
In other modes, you can enable attachments by activating the `Vision (inline)` plugin (for providing images only).

## Files (download, code generation)

**PyGPT** enables the automatic download and saving of files created by the model. This is carried out in the background, with the files being saved to a `data` folder located within the user's working directory. To view or manage these files, users can navigate to the `Files` tab which features a file browser for this specific directory. Here, users have the interface to handle all files sent by the AI.

This `data` directory is also where the application stores files that are generated locally by the AI, such as code files or any other data requested from the model. Users have the option to execute code directly from the stored files and read their contents, with the results fed back to the AI. This hands-off process is managed by the built-in plugin system and model-triggered commands. You can also index files from this directory (using the integrated `Llama-index`) and use their contents as additional context provided to the discussion.

The `Command: Files I/O` plugin takes care of file operations in the `data` directory, while the `Command: Code Interpreter` plugin allows for the execution of code from these files.

![v2_file_output](https://github.com/szczyglis-dev/py-gpt/assets/61396542/2ada219d-68c9-45e3-96af-86ac5bc73593)

To allow the model to manage files or execute Python code, the `Execute commands` option must be active, along with the above-mentioned plugins:

![v2_code_execute](https://github.com/szczyglis-dev/py-gpt/assets/61396542/d5181eeb-6ab4-426f-93f0-037d256cb078)
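
For example, with the `Command: Files I/O` and `Command: Code Interpreter` plugins enabled together with `Execute commands`, you can simply ask the model in plain language (the file name below is just an example):

```bash
Create a file named hello.py that prints "Hello, world!", save it in the data directory, then execute it and show me the output.
```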

# Draw (paint)

Using the `Draw` tool, you can create quick sketches and submit them to the model for analysis. You can also edit images opened from disk or captured from the camera, for example, by adding elements like arrows or outlines to objects. Additionally, you can capture screenshots from the system - the captured image is placed in the drawing tool and attached to the query being sent.

![v2_draw](https://github.com/szczyglis-dev/py-gpt/assets/61396542/09c1de36-1241-4330-9fd7-67c6e09888fa)

To capture the screenshot just click on the `Ask with screenshot` option in a tray-icon dropdown:

![v2_screenshot](https://github.com/szczyglis-dev/py-gpt/assets/61396542/7305a814-ca76-4f8f-8908-47f6a9496fa5)

# Calendar

Using the calendar, you can go back to selected conversations from a specific day and add daily notes. After adding a note, it will be marked on the list, and you can change the color of its label by right-clicking and selecting `Set label color`. By clicking on a particular day of the week, conversations from that day will be displayed.

![v2_calendar](https://github.com/szczyglis-dev/py-gpt/assets/61396542/c7d17375-b61f-452c-81bc-62a7d466fc18)

# Context and memory

## Short and long-term memory

**PyGPT** features a continuous chat mode that maintains a long context of the ongoing dialogue. It preserves the entire conversation history and automatically appends it to each new message (prompt) you send to the AI. Additionally, you have the flexibility to revisit past conversations whenever you choose. The application keeps a record of your chat history, allowing you to resume discussions from the exact point you stopped.

## Handling multiple contexts

On the left side of the application interface, there is a panel that displays a list of saved conversations. You can save numerous contexts and switch between them with ease. This feature allows you to revisit and continue from any point in a previous conversation. **PyGPT** automatically generates a summary for each context, akin to the way `ChatGPT` operates, and gives you the option to modify these titles yourself.

![v2_context_list](https://github.com/szczyglis-dev/py-gpt/assets/61396542/9228ea4c-f30c-4b02-ba85-da10b4e2eb7b)

You can disable context support in the settings by using the following option:

``` ini
Config -> Settings -> Use context 
```

## Clearing history

You can clear the entire memory (all contexts) by selecting the menu option:

``` ini
File -> Clear history...
```

## Context storage

On the application side, the context is stored in the `SQLite` database located in the working directory (`db.sqlite`).
In addition, all history is also saved to `.txt` files for easy reading.
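
If you want to peek at the stored context yourself, the sketch below lists the tables in the database (read-only; the path is a placeholder for `db.sqlite` in your working directory, and the exact schema may differ between versions):

```python
# Inspect the PyGPT context database (read-only peek at the table names).
# The path below is a placeholder - point it at db.sqlite in your working directory.
import sqlite3

db_path = "/path/to/pygpt-workdir/db.sqlite"

with sqlite3.connect(db_path) as conn:
    cursor = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
    )
    for (table_name,) in cursor.fetchall():
        print(table_name)
```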

Once a conversation begins, a title for the chat is generated and displayed on the list to the left. This process is similar to `ChatGPT`, where the subject of the conversation is summarized, and a title for the thread is created based on that summary. You can change the name of the thread at any time.

# Presets

## What is a preset?

Presets in **PyGPT** are essentially templates used to store and quickly apply different configurations. Each preset includes settings for the mode you want to use (such as chat, completion, or image generation), an initial system message, an assigned name for the AI, a username for the session, and the desired "temperature" for the conversation. A warmer "temperature" setting allows the AI to provide more creative responses, while a cooler setting encourages more predictable replies. These presets can be used across various modes and with models accessed via the `OpenAI API` or `Langchain`.

The system lets you create as many presets as needed and easily switch among them. Additionally, you can clone an existing preset, which is useful for creating variations based on previously set configurations and experimentation.

![v2_preset](https://github.com/szczyglis-dev/py-gpt/assets/61396542/88167631-feb6-45ca-a006-25a21ec2339e)

## Example usage

The application includes several sample presets that help you become acquainted with the mechanism of their use.


# Image generation (DALL-E 3 and DALL-E 2)

## DALL-E 3

**PyGPT** enables quick and easy image creation with `DALL-E 3`. 
The older model version, `DALL-E 2`, is also accessible. Generating images is akin to a chat conversation - a user's prompt triggers the generation, followed by downloading, saving to the computer, 
and displaying the image onscreen. You can send a raw prompt to `DALL-E` in `Image generation` mode or ask the model for the best prompt.

Image generation using DALL-E is available in every mode via the `DALL-E 3 Image Generation (inline)` plugin. Just ask any model, in any mode, e.g. GPT-4, to generate an image and it will do it inline, without the need to change mode.

If you want to generate images (using DALL-E) directly in chat, you must enable the **DALL-E 3 Inline** plugin in the Plugins menu.
The plugin allows you to generate images in Chat mode:

![v3_img_chat](https://github.com/szczyglis-dev/py-gpt/assets/61396542/c288a4b3-c932-4201-b5a3-8452aea49817)

## Multiple variants

You can generate up to **4 different variants** (DALL-E 2) for a given prompt in one session. DALL-E 3 allows one image.
To select the desired number of variants to create, use the slider located in the right-hand corner at 
the bottom of the screen. This replaces the conversation temperature slider when you switch to image generation mode.

## Raw mode

There is an option for switching prompt generation mode.

If **Raw Mode** is enabled, DALL-E will receive the prompt exactly as you have provided it.
If **Raw Mode** is disabled, GPT will generate the best prompt for you based on your instructions.

![v2_dalle2](https://github.com/szczyglis-dev/py-gpt/assets/61396542/e1c30292-8ed0-4346-8b85-6d7a02d65fb6)

## Image storage

Once you've generated an image, you can easily save it anywhere on your disk by right-clicking on it. 
You also have the options to delete it or view it in full size in your web browser.

**Tip:** Use presets to save your prepared prompts. 
This lets you quickly use them again for generating new images later on.

The app keeps a history of all your prompts, allowing you to revisit any session and reuse previous 
prompts for creating new images.

Images are stored in the ``img`` directory in the **PyGPT** user data folder.

# Managing models

All models are specified in the configuration file `models.json`, which you can customize. 
This file is located in your working directory. You can add new models provided directly by the `OpenAI API`
and those supported by `Langchain` to this file. Configuration for the Langchain wrapper is placed under the `langchain` key.

## Adding custom LLMs via Langchain

To add a new model using the Langchain wrapper in **PyGPT**, insert the model's configuration details into the `models.json` file. This should include the model's name, its supported modes (either `chat`, `completion`, or both), the LLM provider (e.g. `OpenAI` or `HuggingFace`), and, if you are using a `HuggingFace` model, an optional `API KEY`.

Example of models configuration - `models.json`:

```
"gpt-3.5-turbo": {
    "id": "gpt-3.5-turbo",
    "name": "gpt-3.5-turbo",
    "mode": [
        "chat",
        "assistant",
        "langchain",
        "llama_index"
    ],
    "langchain": {
        "provider": "openai",
        "mode": [
            "chat"
        ],
        "args": [
            {
                "name": "model_name",
                "value": "gpt-3.5-turbo",
                "type": "str"
            }
        ],
        "env": [
            {
                "name": "OPENAI_API_KEY",
                "value": "{api_key}"
            }
        ]
    },
    "llama_index": {
        "provider": "openai",
        "mode": [
            "chat"
        ],
        "args": [
            {
                "name": "model",
                "value": "gpt-3.5-turbo",
                "type": "str"
            }
        ],
        "env": [
            {
                "name": "OPENAI_API_KEY",
                "value": "{api_key}"
            }
        ]
    },
    "ctx": 4096,
    "tokens": 4096,
    "default": false
},
```

There is built-in support for these LLM providers:

```
- OpenAI (openai)
- Azure OpenAI (azure_openai)
- HuggingFace (huggingface)
- Anthropic (anthropic)
- Llama 2 (llama2)
- Ollama (ollama)
```

## Adding custom LLM providers

Handling LLMs with Langchain is implemented through separate wrappers. This allows for the addition of support for any provider and model available via Langchain. All built-in wrappers for the models and their providers are placed in `pygpt_net.provider.llms`.

These wrappers are loaded into the application during startup using the `launcher.add_llm()` method:

```python
# app.py

from pygpt_net.provider.llms.openai import OpenAILLM
from pygpt_net.provider.llms.azure_openai import AzureOpenAILLM
from pygpt_net.provider.llms.anthropic import AnthropicLLM
from pygpt_net.provider.llms.hugging_face import HuggingFaceLLM
from pygpt_net.provider.llms.llama import Llama2LLM
from pygpt_net.provider.llms.ollama import OllamaLLM


def run(**kwargs):
    """Runs the app."""
    # Initialize the app
    launcher = Launcher()
    launcher.init()

    # Register plugins
    ...

    # Register langchain LLMs wrappers
    launcher.add_llm(OpenAILLM())
    launcher.add_llm(AzureOpenAILLM())
    launcher.add_llm(AnthropicLLM())
    launcher.add_llm(HuggingFaceLLM())
    launcher.add_llm(Llama2LLM())
    launcher.add_llm(OllamaLLM())

    # Launch the app
    launcher.run()
```

To add support for providers not included by default, you can create your own wrapper that returns a custom model to the application and then pass this custom wrapper to the launcher.

Extending **PyGPT** with custom plugins and LLM wrappers is straightforward:

- Pass instances of custom plugins and LLM wrappers directly to the launcher.

To register custom LLM wrappers:

- Provide a list of LLM wrapper instances as `llms` keyword argument.

**Example:**


```python
# launcher.py

from pygpt_net.app import run
from plugins import CustomPlugin, OtherCustomPlugin
from llms import CustomLLM

plugins = [
    CustomPlugin(),
    OtherCustomPlugin(),
]
llms = [
    CustomLLM(),
]
vector_stores = []

run(
    plugins=plugins, 
    llms=llms, 
    vector_stores=vector_stores
)
```

**Examples (tutorial files)** 

See the `examples` directory in this repository with examples of custom launcher, plugin, vector store, LLM (Langchain and Llama-index) provider and data loader:

- `examples/custom_launcher.py`

- `examples/example_audio_input.py`

- `examples/example_audio_output.py`

- `examples/example_data_loader.py`

- `examples/example_llm.py`  <-- use it as an example

- `examples/example_plugin.py`

- `examples/example_vector_store.py`

- `examples/example_web_search.py`

These example files can be used as a starting point for creating your own extensions for **PyGPT**.

To integrate your own model or provider into **PyGPT**, you can also reference the classes located in `pygpt_net.provider.llms`. These samples can act as a more complex example for your custom class. Ensure that your custom wrapper class includes two essential methods: `chat` and `completion`. These methods should return the respective objects required for the model to operate in `chat` and `completion` modes.
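
Below is a minimal sketch of such a wrapper. The base class and exact method signatures are assumptions here; use `examples/example_llm.py` and the classes in `pygpt_net.provider.llms` as the authoritative reference:

```python
# my_custom_llm.py - a minimal, hypothetical sketch (not the actual base API);
# see examples/example_llm.py for the real reference implementation
from langchain_openai import ChatOpenAI, OpenAI


class MyCustomLLM:
    def __init__(self):
        self.id = "my_custom_llm"  # provider ID referenced in models.json (assumed attribute)

    def chat(self, *args, **kwargs):
        # return the object used when the model runs in "chat" mode
        return ChatOpenAI(model="gpt-3.5-turbo")

    def completion(self, *args, **kwargs):
        # return the object used when the model runs in "completion" mode
        return OpenAI(model="gpt-3.5-turbo-instruct")
```

Such an instance can then be passed to the launcher as shown earlier, either with `launcher.add_llm(MyCustomLLM())` or via the `llms` keyword argument.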


## Adding custom Vector Store providers

**From version 2.0.114 you can also register your own Vector Store provider**:

```python
# app.py

# vector stores
from pygpt_net.provider.vector_stores.chroma import ChromaProvider
from pygpt_net.provider.vector_stores.elasticsearch import ElasticsearchProvider
from pygpt_net.provider.vector_stores.pinecode import PinecodeProvider
from pygpt_net.provider.vector_stores.redis import RedisProvider
from pygpt_net.provider.vector_stores.simple import SimpleProvider

def run(**kwargs):
    # ...
    # register base vector store providers (llama-index)
    launcher.add_vector_store(ChromaProvider())
    launcher.add_vector_store(ElasticsearchProvider())
    launcher.add_vector_store(PinecodeProvider())
    launcher.add_vector_store(RedisProvider())
    launcher.add_vector_store(SimpleProvider())

    # register custom vector store providers (llama-index)
    vector_stores = kwargs.get('vector_stores', None)
    if isinstance(vector_stores, list):
        for store in vector_stores:
            launcher.add_vector_store(store)

    # ...
```

To register your custom vector store provider, simply pass the provider instance in the `vector_stores` keyword argument:

```python

# custom_launcher.py

from pygpt_net.app import run
from plugins import CustomPlugin, OtherCustomPlugin
from llms import CustomLLM
from vector_stores import CustomVectorStore

plugins = [
    CustomPlugin(),
    OtherCustomPlugin(),
]
llms = [
    CustomLLM(),
]
vector_stores = [
    CustomVectorStore(),
]

run(
    plugins=plugins,
    llms=llms,
    vector_stores=vector_stores
)
```

# Plugins

**PyGPT** can be enhanced with plugins to add new features.

**Tip:** Plugins work best with GPT-4 models.

The following plugins are currently available, and the model can use them instantly:

- `Audio Input` - provides speech recognition.

- `Audio Output` - provides voice synthesis.

- `Autonomous Agent (inline)` - enables autonomous conversation (AI to AI), manages loop, and connects output back to input. This is the inline Agent mode.

- `Chat with files (Llama-index, inline)` - the plugin integrates `Llama-index` storage in any chat and provides additional knowledge into the context (from indexed files and from previous contexts in the database).

- `Command: API calls` - the plugin lets you connect the model to external services using custom-defined API calls.

- `Command: Code Interpreter` - responsible for generating and executing Python code, functioning much like 
the Code Interpreter in ChatGPT, but locally. This means GPT can interface with any script, application, or code. 
The plugin can also execute system commands, allowing GPT to integrate with your operating system. 
Plugins can work in conjunction to perform sequential tasks; for example, the `Files` plugin can write generated 
Python code to a file, which the `Code Interpreter` can then execute and return its result to GPT.

- `Command: Custom Commands` - allows you to create and execute custom commands on your system.

- `Command: Files I/O` - provides access to the local filesystem, enabling GPT to read and write files, 
as well as list and create directories.

- `Command: Web Search` - provides the ability to connect to the Web, search web pages for current data, and index external content using Llama-index data loaders.

- `Command: Serial port / USB` - the plugin provides commands for reading data from and sending data to USB/serial ports.

- `Context history (calendar, inline)` - provides access to context history database.

- `Crontab / Task scheduler` - plugin provides cron-based job scheduling - you can schedule tasks/prompts to be sent at any time using cron-based syntax for task setup.

- `DALL-E 3: Image Generation (inline)` - integrates DALL-E 3 image generation with any chat and mode. Just enable and ask for image in Chat mode, using standard model like GPT-4. The plugin does not require the `Execute commands` option to be enabled.

- `Experts (inline)` - allows calling experts in any chat mode. This is the inline Experts (co-op) mode.

- `GPT-4 Vision (inline)` - integrates Vision capabilities with any chat mode, not just Vision mode. When the plugin is enabled, the model temporarily switches to vision in the background when an image attachment or vision capture is provided.

- `Real Time` - automatically appends the current date and time to the system prompt, informing the model about the current time.

- `System Prompt Extra (append)` - appends additional system prompts (extra data) from a list to every current system prompt. You can enhance every system prompt with extra instructions that will be automatically appended to the system prompt.

- `Voice Control (inline)` - provides voice control command execution within a conversation.


## Audio Input

The plugin facilitates speech recognition (by default using the `Whisper` model from OpenAI, `Google` and `Bing` are also available). It allows for voice commands to be relayed to the AI using your own voice. Whisper doesn't require any extra API keys or additional configurations; it uses the main OpenAI key. In the plugin's configuration options, you should adjust the volume level (min energy) at which the plugin will respond to your microphone. Once the plugin is activated, a new `Speak` option will appear at the bottom near the `Send` button  -  when this is enabled, the application will respond to the voice received from the microphone.

The plugin can be extended with other speech recognition providers.

Options:

- `Provider` *provider*

Choose the provider. *Default:* `Whisper`

Available providers:

- Whisper (via `OpenAI API`)
- Whisper (local model) - not available in compiled and Snap versions, only Python/PyPi version
- Google (via `SpeechRecognition` library)
- Google Cloud (via `SpeechRecognition` library)
- Microsoft Bing (via `SpeechRecognition` library)

**Whisper (API)**

- `Model` *whisper_model*

Choose the model. *Default:* `whisper-1`

**Whisper (local)**

- `Model` *whisper_local_model*

Choose the local model. *Default:* `base`

Available models: https://github.com/openai/whisper

**Google**

- `Additional keyword arguments` *google_args*

Additional keyword arguments for r.recognize_google(audio, **kwargs)
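
For example, a minimal sketch of what these extra keyword arguments correspond to when calling the `SpeechRecognition` library directly (the plugin handles the microphone and recognizer for you; `language` and `show_all` are real `recognize_google()` parameters):

```python
# illustrative only - shows what the extra kwargs configured here end up doing
import speech_recognition as sr

r = sr.Recognizer()
with sr.Microphone() as source:
    audio = r.listen(source)

# the plugin forwards the configured kwargs to the recognizer, e.g.:
text = r.recognize_google(audio, language="en-US", show_all=False)
print(text)
```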

**Google Cloud**

- `Additional keyword arguments` *google_cloud_args*

Additional keyword arguments for r.recognize_google_cloud(audio, **kwargs)

**Bing**

- `Additional keyword arguments` *bing_args*

Additional keyword arguments for r.recognize_bing(audio, **kwargs)


**General options**

- `Auto send` *auto_send*

Automatically send recognized speech as input text after recognition. *Default:* `True`

- `Advanced mode` *advanced*

Enable only if you want to use advanced mode and the settings below. Do not enable this option if you just want to use the simplified mode (default). *Default:* `False`


**Advanced mode options**

- `Timeout` *timeout*

The duration in seconds that the application waits for voice input from the microphone. *Default:* `5`

- `Phrase max length` *phrase_length*

Maximum duration for a voice sample (in seconds). *Default:* `10`

- `Min energy` *min_energy*

Minimum threshold multiplier above the noise level to begin recording. *Default:* `1.3`

- `Adjust for ambient noise` *adjust_noise*

Enables adjustment to ambient noise levels. *Default:* `True`

- `Continuous listen` *continuous_listen*

Experimental: continuous listening - do not stop listening after a single input. 
Warning: This feature may lead to unexpected results and requires fine-tuning with 
the rest of the options! If disabled, listening must be started manually 
by enabling the `Speak` option. *Default:* `False`


- `Wait for response` *wait_response*

Wait for a response before initiating listening for the next input. *Default:* `True`

- `Magic word` *magic_word*

Activate listening only after the magic word is provided. *Default:* `False`

- `Reset Magic word` *magic_word_reset*

Reset the magic word status after it is received (the magic word will need to be provided again). *Default:* `True`

- `Magic words` *magic_words*

List of magic words to initiate listening (Magic word mode must be enabled). *Default:* `OK, Okay, Hey GPT, OK GPT`

- `Magic word timeout` *magic_word_timeout*

The number of seconds the application waits for magic word. *Default:* `1`

- `Magic word phrase max length` *magic_word_phrase_length*

The minimum phrase duration for magic word. *Default:* `2`

- `Prefix words` *prefix_words*

List of words that must initiate each phrase to be processed. For example, you can define words like "OK" or "GPT"—if set, any phrases not starting with those words will be ignored. Insert multiple words or phrases separated by commas. Leave empty to deactivate.  *Default:* `empty`

- `Stop words` *stop_words*

List of words that will stop the listening process. *Default:* `stop, exit, quit, end, finish, close, terminate, kill, halt, abort`

Options related to Speech Recognition internals:

- `energy_threshold` *recognition_energy_threshold*

Represents the energy level threshold for sounds. *Default:* `300`

- `dynamic_energy_threshold` *recognition_dynamic_energy_threshold*

Represents whether the energy level threshold (see recognizer_instance.energy_threshold) for sounds 
should be automatically adjusted based on the currently ambient noise level while listening. *Default:* `True`

- `dynamic_energy_adjustment_damping` *recognition_dynamic_energy_adjustment_damping*

Represents approximately the fraction of the current energy threshold that is retained after one second 
of dynamic threshold adjustment. *Default:* `0.15`

- `pause_threshold` *recognition_pause_threshold*

Represents the minimum length of silence (in seconds) that will register as the end of a phrase. *Default:* `0.8`

- `adjust_for_ambient_noise: duration` *recognition_adjust_for_ambient_noise_duration*

The duration parameter is the maximum number of seconds that it will dynamically adjust the threshold 
for before returning. *Default:* `1`

Options reference: https://pypi.org/project/SpeechRecognition/1.3.1/

## Audio Output

The plugin lets you turn text into speech using the TTS model from OpenAI or other services like ``Microsoft Azure``, ``Google``, and ``Eleven Labs``. You can add more text-to-speech providers to it too. `OpenAI TTS` does not require any additional API keys or extra configuration; it utilizes the main OpenAI key. 
Microsoft Azure requires an Azure API key. Before using speech synthesis via `Microsoft Azure`, `Google` or `Eleven Labs`, you must configure the audio plugin with your API keys, regions and voices if required.

![v2_azure](https://github.com/szczyglis-dev/py-gpt/assets/61396542/8035e9a5-5a01-44a1-85da-6e44c52459e4)

Through the available options, you can select the voice that you want the model to use. More voice synthesis providers coming soon.

To enable voice synthesis, activate the `Audio Output` plugin in the `Plugins` menu or turn on the `Audio Output` option in the `Audio / Voice` menu (both options in the menu achieve the same outcome).

**Options**

- `Provider` *provider*

Choose the provider. *Default:* `OpenAI TTS`

Available providers:

- OpenAI TTS
- Microsoft Azure TTS
- Google TTS
- Eleven Labs TTS

**OpenAI Text-To-Speech**

- `Model` *openai_model*

Choose the model. Available options:

```
  - tts-1
  - tts-1-hd
```
*Default:* `tts-1`

- `Voice` *openai_voice*

Choose the voice. Available voices to choose from:

```
  - alloy
  - echo
  - fable
  - onyx
  - nova
  - shimmer
```

*Default:* `alloy`

**Microsoft Azure Text-To-Speech**

- `Azure API Key` *azure_api_key*

Here, you should enter the API key, which can be obtained by registering for free on the following website: https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech

- `Azure Region` *azure_region*

You must also provide the appropriate region for Azure here. *Default:* `eastus`

- `Voice (EN)` *azure_voice_en*

Here you can specify the name of the voice used for speech synthesis for English. *Default:* `en-US-AriaNeural`

- `Voice (non-English)` *azure_voice_pl*

Here you can specify the name of the voice used for speech synthesis for other, non-English languages. *Default:* `pl-PL-AgnieszkaNeural`

**Google Text-To-Speech**

- `Google Cloud Text-to-speech API Key` *google_api_key*

You can obtain your own API key at: https://console.cloud.google.com/apis/library/texttospeech.googleapis.com

- `Voice` *google_voice*

Specify voice. Voices: https://cloud.google.com/text-to-speech/docs/voices

- `Language code` *google_api_key*

Language code. Language codes: https://cloud.google.com/speech-to-text/docs/speech-to-text-supported-languages

**Eleven Labs Text-To-Speech**

- `Eleven Labs API Key` *eleven_labs_api_key*

You can obtain your own API key at: https://elevenlabs.io/speech-synthesis

- `Voice ID` *eleven_labs_voice*

Voice ID. Voices: https://elevenlabs.io/voice-library

- `Model` *eleven_labs_model*

Specify model. Models: https://elevenlabs.io/docs/speech-synthesis/models


If speech synthesis is enabled, a voice will be additionally generated in the background while generating a response via GPT.

Both `OpenAI TTS` and `OpenAI Whisper` use the same single API key provided for the OpenAI API, with no additional keys required.

## Autonomous Agent (inline)

**WARNING: Please use autonomous mode with caution!** - this mode, when connected with other plugins, may produce unexpected results!

The plugin activates autonomous mode in standard chat modes, where the AI begins a conversation with itself. 
You can set this loop to run for any number of iterations. Throughout this sequence, the model will engage
in self-dialogue, answering its own questions and comments, in order to find the best possible solution, subjecting previously generated steps to criticism.

This mode is similar to `Auto-GPT` - it can be used to create more advanced inferences and to solve problems by breaking them down into subtasks that the model will autonomously perform one after another until the goal is achieved. The plugin is capable of working in cooperation with other plugins, thus it can utilize tools such as web search, access to the file system, or image generation using `DALL-E`.

You can adjust the number of iterations for the self-conversation in the `Plugins / Settings...` menu under the following option:

- `Iterations` *iterations*

*Default:* `3`

**WARNING**: Setting this option to `0` activates an **infinite loop**, which can generate a large number of requests and cause very high token consumption, so use this option with caution!

- `Prompts` *prompts*

Editable list of prompts used to instruct how to handle autonomous mode; you can create as many prompts as you want. 
The first active prompt on the list will be used to handle autonomous mode. **INFO:** At least one active prompt is required!

- `Auto-stop after goal is reached` *auto_stop*

If enabled, the plugin will stop after the goal is reached. *Default:* `True`

- `Reverse roles between iterations` *reverse_roles*

Only for Completion/Langchain modes. 
If enabled, this option reverses the roles (AI <> user) with each iteration. For example, 
if in the previous iteration the response was generated for "Batman," the next iteration will use that 
response to generate an input for "Joker." *Default:* `True`

## Chat with files (Llama-index, inline)

The plugin integrates `Llama-index` storage in any chat and provides additional knowledge into the context.

- `Ask Llama-index first` *ask_llama_first*

When enabled, `Llama-index` will be queried first, and the response will be used as additional knowledge in the prompt. When disabled, `Llama-index` will be queried only when needed. **INFO: Disabled in autonomous mode (via plugin)!** *Default:* `False`

- `Auto-prepare question before asking Llama-index first` *prepare_question*

When enabled, the question will be prepared before asking Llama-index first, in order to create the best query. *Default:* `False`

- `Model for question preparation` *model_prepare_question*

Model used to prepare question before asking Llama-index. *Default:* `gpt-3.5-turbo`

- `Max output tokens for question preparation` *prepare_question_max_tokens*

Max tokens in output when preparing question before asking Llama-index. *Default:* `500`

- `Prompt for question preparation` *syntax_prepare_question*

System prompt for question preparation.

- `Max characters in question` *max_question_chars*

Max characters in question when querying Llama-index, 0 = no limit. *Default:* `1000`

- `Append metadata to context` *append_meta*

If enabled, then metadata from Llama-index will be appended to additional context. *Default:* `False`

- `Model` *model_query*

Model used for querying `Llama-index`. *Default:* `gpt-3.5-turbo`

- `Indexes IDs` *idx*

Indexes to use. If you want to use multiple indexes at once then separate them by comma. *Default:* `base`


## Command: API calls

**PyGPT** lets you connect the model to external services using custom-defined API calls.

To activate this feature, turn on the `Command: API calls` plugin found in the `Plugins` menu.

In this plugin, you can provide a list of allowed API calls, their parameters, and request types. The model will replace the provided placeholders with the required params and make an API call to the external service.

- `Your custom API calls` *cmds*

You can provide custom API calls on the list here.

Params to specify for API call:

- **Enabled** (True / False)
- **Name:** unique API call name (ID)
- **Instruction:** description for model when and how to use this API call
- **GET params:** list, separated by comma, GET params to append to endpoint URL
- **POST params:** list, separated by comma, POST params to send in POST request
- **POST JSON:** provide the JSON object, template to send in POST JSON request, use `%param%` as POST param placeholders
- **Headers:** provide the JSON object with dictionary of extra request headers, like Authorization, API keys, etc.
- **Request type:** use GET for basic GET request, POST to send encoded POST params or POST_JSON to send JSON-encoded object as body
- **Endpoint:** API endpoint URL, use `{param}` as GET param placeholders

An example API call is provided with plugin by default, it calls the Wikipedia API:

- Name: `search_wiki`
- Instruction: `send API call to Wikipedia to search pages by query`
- GET params: `query, limit`
- Type: `GET`
- API endpoint: https://en.wikipedia.org/w/api.php?action=opensearch&limit={limit}&format=json&search={query}

In the above example, every time you ask the model to query Wikipedia for a given query (e.g. `Call the Wikipedia API for query: Nikola Tesla`), it will replace the placeholders in the provided API endpoint URL with the generated query and call the prepared API endpoint URL, like below:

https://en.wikipedia.org/w/api.php?action=opensearch&limit=5&format=json&search=Nikola%20Tesla

You can specify type of request: `GET`, `POST` and `POST JSON`.

In the `POST` request you can provide POST params; they will be encoded and sent as POST data.

In the `POST JSON` request you must provide a JSON object template to be sent, using `%param%` placeholders in the JSON object, which will be replaced by the model.

You can also provide any required credentials, like Authorization headers, API keys, tokens, etc. using the `headers` field - you can provide a JSON object here with a `key => value` dictionary - the provided JSON object will be converted to a headers dictionary and sent with the request.
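
For illustration, a rough sketch of what a `POST JSON` call with a `%param%` placeholder and an extra `Authorization` header boils down to (this uses the `requests` library directly against a hypothetical endpoint; it is not the plugin's internal code):

```python
# rough equivalent of a "POST JSON" custom API call - illustrative only
import requests

json_template = '{"query": "%query%", "limit": 5}'   # POST JSON template with %param% placeholders
headers = {"Authorization": "Bearer YOUR_API_KEY"}    # extra headers from the "Headers" field
params = {"query": "Nikola Tesla"}                    # params filled in by the model

body = json_template
for name, value in params.items():
    body = body.replace(f"%{name}%", str(value))      # replace %param% placeholders

response = requests.post(
    "https://example.com/api/search",                 # hypothetical endpoint URL
    data=body,
    headers={**headers, "Content-Type": "application/json"},
    timeout=5,
)
print(response.status_code, response.text)
```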

- `Disable SSL verify` *disable_ssl*

Disables SSL verification when making requests. *Default:* `False`

- `Timeout` *timeout*

Connection timeout (seconds). *Default:* `5`

- `User agent` *user_agent*

User agent to use when making requests. *Default:* `Mozilla/5.0`


## Command: Code Interpreter

### Executing Code

The plugin operates similarly to the `Code Interpreter` in `ChatGPT`, with the key difference that it works locally on the user's system. It allows for the execution of any Python code on the computer that the model may generate. When combined with the `Command: Files I/O` plugin, it facilitates running code from files saved in the `data` directory. You can also prepare your own code files and enable the model to use them or add your own plugin for this purpose. You can execute commands and code on the host machine or in a Docker container.

**Code interpreter:** a real-time Python code interpreter is built-in. Click the `<>` icon to open the interpreter window. Both the input and output of the interpreter are connected to the plugin. Any output generated by the executed code will be displayed in the interpreter. Additionally, you can request the model to retrieve contents from the interpreter window output.

![v2_python](https://github.com/szczyglis-dev/py-gpt/assets/61396542/793e554c-7619-402a-8370-ab89c7464fec)

### Executing system commands

Another feature is the ability to execute system commands and return their results. With this functionality, the plugin can run any system command, retrieve the output, and then feed the result back to the model. When used with other features, this provides extensive integration capabilities with the system.



**Tip:** always remember to enable the `Execute commands` option to allow the execution of commands from the plugins.


**Options:**

- `Python command template` *python_cmd_tpl*

Python command template (use {filename} as path to file placeholder). *Default:* `python3 {filename}`

- `Enable: code_execute` *cmd.code_execute*

Allows `code_execute` command execution. If enabled, provides Python code execution (generate and execute from file). *Default:* `True`

- `Enable: code_execute_all` *cmd.code_execute_all*

Allows `code_execute_all` command execution. If enabled, provides execution of all the Python code in interpreter window. *Default:* `True`

- `Enable: code_execute_file` *cmd.code_execute_file*

Allows `code_execute_file` command execution. If enabled, provides Python code execution from existing .py file. *Default:* `True`
 
- `Enable: sys_exec` *cmd.sys_exec*

Allows `sys_exec` command execution. If enabled, provides system commands execution. *Default:* `True`

- `Enable: get_python_output` *cmd.get_python_output*

Allows `get_python_output` command execution. If enabled, it allows retrieval of the output from the Python code interpreter window. *Default:* `True`

- `Enable: get_python_input` *cmd.get_python_input*

Allows `get_python_input` command execution. If enabled, it allows retrieval of all the input code (from the edit section) from the Python code interpreter window. *Default:* `True`

- `Enable: clear_python_output` *cmd.clear_python_output*

Allows `clear_python_output` command execution. If enabled, it allows clearing the output of the Python code interpreter window. *Default:* `True`

- `Sandbox (docker container)` *sandbox_docker*

Execute commands in sandbox (docker container). Docker must be installed and running. *Default:* `False`

- `Docker image` *sandbox_docker_image*

Docker image to use for sandbox *Default:* `python:3.8-alpine`

- `Auto-append CWD to sys_exec` *auto_cwd*

Automatically append current working directory to `sys_exec` command. *Default:* `True`

- `Connect to the Python code interpreter window` *attach_output*

Automatically attach code input/output to the Python code interpreter window. *Default:* `True`


## Command: Custom Commands

With the `Custom Commands` plugin, you can integrate **PyGPT** with your operating system and scripts or applications. You can define an unlimited number of custom commands and instruct GPT on when and how to execute them. Configuration is straightforward, and **PyGPT** includes a simple tutorial command for testing and learning how it works:

![v2_custom_cmd](https://github.com/szczyglis-dev/py-gpt/assets/61396542/b30b8724-9ca1-44b1-abc7-78241588e1f6)

To add a new custom command, click the **ADD** button and then:

1. Provide a name for your command: this is a unique identifier for GPT.
2. Provide an `instruction` explaining what this command does; GPT will know when to use the command based on this instruction.
3. Define `params`, separated by commas - GPT will send data to your commands using these params. These params will be placed into placeholders you have defined in the `cmd` field. For example:

If you want to instruct GPT to execute your Python script named `smart_home_lights.py` with an argument, such as `1` to turn the light ON, and `0` to turn it OFF, define it as follows:

- **name**: lights_cmd
- **instruction**: turn lights on/off; use 1 as 'arg' to turn ON, or 0 as 'arg' to turn OFF
- **params**: arg
- **cmd**: `python /path/to/smart_home_lights.py {arg}`

The setup defined above will work as follows:

When you ask GPT to turn your lights ON, GPT will locate this command and prepare the command `python /path/to/smart_home_lights.py {arg}` with `{arg}` replaced with `1`. On your system, it will execute the command:

```python /path/to/smart_home_lights.py 1```

And that's all. GPT will take care of the rest when you ask to turn ON the lights.

You can define as many placeholders and parameters as you desire.

Here are some predefined system placeholders for use:

- `{_time}` - current time in `H:M:S` format
- `{_date}` - current date in `Y-m-d` format
- `{_datetime}` - current date and time in `Y-m-d H:M:S` format
- `{_file}` - path to the file from which the command is invoked
- `{_home}` - path to **PyGPT**'s home/working directory

You can connect predefined placeholders with your own params.

*Example:*

- **name**: song_cmd
- **instruction**: store the generated song on hard disk
- **params**: song_text, title
- **cmd**: `echo "{song_text}" > {_home}/{title}.txt`


With the setup above, every time you ask GPT to generate a song for you and save it to the disk, it will:

1. Generate a song.
2. Locate your command.
3. Execute the command by sending the song's title and text.
4. The command will save the song text into a file named with the song's title in the PyGPT working directory.

**Example tutorial command**

**PyGPT** provides a simple tutorial command to show how this works. To run it, just ask GPT to execute the `tutorial test command` and it will show you how it works:

```> please execute tutorial test command```

![v2_custom_cmd_example](https://github.com/szczyglis-dev/py-gpt/assets/61396542/97cbc5b9-0dd9-487e-9182-d9873dea42ab)

## Command: Files I/O

The plugin allows for file management within the local filesystem. It enables the model to create, read, write and query files located in the `data` directory, which can be found in the user's work directory. With this plugin, the AI can also generate Python code files and thereafter execute that code within the user's system.

Plugin capabilities include:

- Sending files as attachments
- Reading files
- Appending to files
- Writing files
- Deleting files and directories
- Listing files and directories
- Creating directories
- Downloading files
- Copying files and directories
- Moving (renaming) files and directories
- Reading file info
- Indexing files and directories using Llama-index
- Querying files using Llama-index
- Searching for files and directories

If a file being created (with the same name) already exists, a prefix including the date and time is added to the file name.

**Options:**

**General**

- `Enable: send (upload) file as attachment` *cmd.send_file*

Allows `cmd.send_file` command execution. *Default:* `True`

- `Enable: read file` *cmd.read_file*

Allows `read_file` command execution. *Default:* `True`

- `Enable: append to file` *cmd.append_file*

Allows `append_file` command execution. Text-based files only (plain text, JSON, CSV, etc.) *Default:* `True`

- `Enable: save file` *cmd.save_file*

Allows `save_file` command execution. Text-based files only (plain text, JSON, CSV, etc.) *Default:* `True`

- `Enable: delete file` *cmd.delete_file*

Allows `delete_file` command execution. *Default:* `True`

- `Enable: list files (ls)` *cmd.list_files*

Allows `list_dir` command execution. *Default:* `True`

- `Enable: list files in directory (ls)` *cmd.list_dir*

Allows `mkdir` command execution. *Default:* `True`

- `Enable: downloading files` *cmd.download_file*

Allows `download_file` command execution. *Default:* `True`

- `Enable: removing directories` *cmd.rmdir*

Allows `rmdir` command execution. *Default:* `True`

- `Enable: copying files` *cmd.copy_file*

Allows `copy_file` command execution. *Default:* `True`

- `Enable: copying directories (recursive)` *cmd.copy_dir*

Allows `copy_dir` command execution. *Default:* `True`

- `Enable: move files and directories (rename)` *cmd.move*

Allows `move` command execution. *Default:* `True`

- `Enable: check if path is directory` *cmd.is_dir*

Allows `is_dir` command execution. *Default:* `True`

- `Enable: check if path is file` *cmd.is_file*

Allows `is_file` command execution. *Default:* `True`

- `Enable: check if file or directory exists` *cmd.file_exists*

Allows `file_exists` command execution. *Default:* `True`

- `Enable: get file size` *cmd.file_size*

Allows `file_size` command execution. *Default:* `True`

- `Enable: get file info` *cmd.file_info*

Allows `file_info` command execution. *Default:* `True`

- `Enable: find file or directory` *cmd.find*

Allows `find` command execution. *Default:* `True`

- `Enable: get current working directory` *cmd.cwd*

Allows `cwd` command execution. *Default:* `True`

- `Use data loaders` *use_loaders*

Use data loaders from Llama-index for file reading (`read_file` command). *Default:* `True`

**Indexing**

- `Enable: quick query the file with Llama-index` *cmd.query_file*

Allows `query_file` command execution (in-memory index). If enabled, the model will be able to quickly index a file into memory and query it for data (in-memory index). *Default:* `True`

- `Model for query in-memory index` *model_tmp_query*

Model used for query temporary index for `query_file` command (in-memory index). *Default:* `gpt-3.5-turbo`

- `Enable: indexing files to persistent index` *cmd.file_index*

Allows `file_index` command execution. If enabled, the model will be able to index a file or directory using Llama-index (persistent index). *Default:* `True`

- `Index to use when indexing files` *idx*

ID of index to use for indexing files (persistent index). *Default:* `base`

- `Auto index reading files` *auto_index*

If enabled, every time a file is read, it will be automatically indexed (persistent index). *Default:* `False`

- `Only index reading files` *only_index*

If enabled, the file will be indexed without returning its content on file read (persistent index). *Default:* `False`


## Command: Web Search

**PyGPT** lets you connect GPT to the internet and carry out web searches in real time as you make queries.

To activate this feature, turn on the `Command: Web Search` plugin found in the `Plugins` menu.

Web searches are provided by `Google Custom Search Engine` and `Microsoft Bing` APIs and can be extended with other search engine providers. 

**Options**

- `Provider` *provider*

Choose the provider. *Default:* `Google`

Available providers:

- Google
- Microsoft Bing

**Google**

To use this provider, you need an API key, which you can obtain by registering an account at:

https://developers.google.com/custom-search/v1/overview

After registering an account, create a new project and select it from the list of available projects:

https://programmablesearchengine.google.com/controlpanel/all

After selecting your project, you need to enable the `Whole Internet Search` option in its settings. 
Then, copy the following two items into **PyGPT**:

- `Api Key`
- `CX ID`

These data must be configured in the appropriate fields in the `Plugins / Settings...` menu:

![v2_plugin_google](https://github.com/szczyglis-dev/py-gpt/assets/61396542/f2e0df62-caaa-40ef-9b1e-239b2f912ec8)

- `Google Custom Search API KEY` *google_api_key*

You can obtain your own API key at https://developers.google.com/custom-search/v1/overview

- `Google Custom Search CX ID` *google_api_cx*

You will find your CX ID at https://programmablesearchengine.google.com/controlpanel/all - remember to enable "Search on ALL internet pages" option in project settings.

**Microsoft Bing**

- `Bing Search API KEY` *bing_api_key*

You can obtain your own API key at https://www.microsoft.com/en-us/bing/apis/bing-web-search-api

- `Bing Search API endpoint` *bing_endpoint*

API endpoint for Bing Search API, default: https://api.bing.microsoft.com/v7.0/search

**General options**

- `Number of pages to search` *num_pages*

Maximum number of pages to search per query. *Default:* `10`

- `Max content characters` *max_page_content_length*

Max characters of page content to get (0 = unlimited). *Default:* `0`

- `Per-page content chunk size` *chunk_size*

Per-page content chunk size (max characters per chunk). *Default:* `20000`

- `Disable SSL verify` *disable_ssl*

Disables SSL verification when crawling web pages. *Default:* `False`

- `Timeout` *timeout*

Connection timeout (seconds). *Default:* `5`

- `User agent` *user_agent*

User agent to use when making requests. *Default:* `Mozilla/5.0`.

- `Max result length` *max_result_length*

Max length of summarized result (characters). *Default:* `1500`

- `Max summary tokens` *summary_max_tokens*

Max tokens in output when generating summary. *Default:* `1500`

- `Enable: search the Web` *cmd.web_search*

Allows `web_search` command execution. If enabled, model will be able to search the Web. *Default:* `True`

- `Enable: opening URLs` *cmd.web_url_open*

Allows `web_url_open` command execution. If enabled, model will be able to open specified URL and summarize content. *Default:* `True`

- `Enable: reading the raw content from URLs` *cmd.web_url_raw*

Allows `web_url_raw` command execution. If enabled, model will be able to open specified URL and get the raw content. *Default:* `True`

- `Enable: getting a list of URLs from search results` *cmd.web_urls*

Allows `web_urls` command execution. If enabled, the model will be able to search the Web and get a list of found URLs. *Default:* `True`

- `Enable: indexing web and external content` *cmd.web_index*

Allows `web_index` command execution. If enabled, model will be able to index pages and external content using Llama-index (persistent index). *Default:* `True`

- `Enable: quick query the web and external content` *cmd.web_index_query*

Allows `web_index_query` command execution. If enabled, model will be able to quick index and query web content using Llama-index (in-memory index). *Default:* `True`

- `Auto-index all used URLs using Llama-index` *auto_index*

If enabled, every URL used by the model will be automatically indexed using Llama-index (persistent index). *Default:* `False`

- `Index to use` *idx*

ID of index to use for web page indexing (persistent index). *Default:* `base`

- `Model used for web page summarization` *summary_model*

Model used for web page summarization. *Default:* `gpt-3.5-turbo-1106`

- `Summarize prompt` *prompt_summarize*

Prompt used for summarizing web search results; use {query} as a placeholder for the search query.

- `Summarize prompt (URL open)` *prompt_summarize_url*

Prompt used for summarizing the content of the specified URL.

## Command: Serial port / USB

Provides commands for reading data from and sending data to USB/serial ports.

**Tip:** in Snap version you must connect the interface first: https://snapcraft.io/docs/serial-port-interface

You can send commands to, for example, an Arduino or any other controllers using the serial port for communication.

![v2_serial](https://github.com/szczyglis-dev/py-gpt/assets/61396542/386d46fa-2e7c-43a6-918c-17eeef9344e0)

Above is an example of cooperation with the following code uploaded to an `Arduino Uno` connected via USB:

```cpp
// example.ino

void setup() {
  Serial.begin(9600);
}

void loop() {
  if (Serial.available() > 0) {
    String input = Serial.readStringUntil('\n');
    if (input.length() > 0) {
      Serial.println("OK, response for: " + input);
    }
  }
}
```
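
On the host side, sending a text command over the serial port (roughly what the `serial_send` command does) can be sketched with `pyserial` as follows; this is illustrative only and not the plugin's actual code:

```python
# rough illustration of sending a text command over the serial port with pyserial
import time
import serial

port = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # matches the default plugin options
time.sleep(2)                                           # "Sleep" option: wait after connecting
port.write(b"hello\n")                                  # send a text command
response = port.readline().decode(errors="ignore")      # read the controller's reply
print(response)                                         # e.g. "OK, response for: hello"
port.close()
```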

**Options**

- `USB port` *serial_port*

USB port name, e.g. `/dev/ttyUSB0`, `/dev/ttyACM0`, `COM3`. *Default:* `/dev/ttyUSB0`

- `Connection speed (baudrate, bps)` *serial_bps*

Port connection speed, in bps. *Default:* `9600`

- `Timeout` *timeout*

Timeout in seconds. *Default:* `1`

- `Sleep` *sleep*

Sleep in seconds after connection. *Default:* `2`

- `Enable: Send text commands to USB port` *cmd.serial_send*

Allows `serial_send` command execution. *Default:* `True`

- `Enable: Send raw bytes to USB port` *cmd.serial_send_bytes*

Allows `serial_send_bytes` command execution. *Default:* `True`

- `Enable: Read data from USB port` *cmd.serial_read*

Allows `serial_read` command execution. *Default:* `True`

## Context history (calendar, inline)

Provides access to the context history database.
The plugin also provides access to reading and creating day notes.

Examples of use - you can ask, for example, for the following:

```Give me today's day note```

```Save a new note for today```

```Update my today note with...```

```Get the list of yesterday's conversations```

```Get contents of conversation ID 123```

etc.

You can also use `@` ID tags to automatically use summary of previous contexts in current discussion.
To use context from a previous discussion with a specified ID, use the following syntax in your query:

```@123```

Where `123` is the ID of a previous context (conversation) in the database. Example of use:

```Let's talk about discussion @123```


**Options**

- `Enable: using context @ ID tags` *use_tags*

When enabled, it allows automatic retrieval of context history using @ tags, e.g. use @123 in a question to use the summary of the context with ID 123 as additional context. *Default:* `False`

- `Enable: get date range context list` *cmd.get_ctx_list_in_date_range*

Allows `get_ctx_list_in_date_range` command execution. If enabled, it allows getting the list of context history (previous conversations). *Default:* `True`

- `Enable: get context content by ID` *cmd.get_ctx_content_by_id*

Allows `get_ctx_content_by_id` command execution. If enabled, it allows getting summarized content of context with defined ID. *Default:* `True`

- `Enable: count contexts in date range` *cmd.count_ctx_in_date*

Allows `count_ctx_in_date` command execution. If enabled, it allows counting contexts in date range. *Default:* `True`

- `Enable: get day note` *cmd.get_day_note*

Allows `get_day_note` command execution. If enabled, it allows retrieving day note for specific date. *Default:* `True`

- `Enable: add day note` *cmd.add_day_note*

Allows `add_day_note` command execution. If enabled, it allows adding day note for specific date. *Default:* `True`

- `Enable: update day note` *cmd.update_day_note*

Allows `update_day_note` command execution. If enabled, it allows updating day note for specific date. *Default:* `True`

- `Enable: remove day note` *cmd.remove_day_note*

Allows `remove_day_note` command execution. If enabled, it allows removing day note for specific date. *Default:* `True`

- `Model` *model_summarize*

Model used for summarization. *Default:* `gpt-3.5-turbo`

- `Max summary tokens` *summary_max_tokens*

Max tokens in output when generating summary. *Default:* `1500`

- `Max contexts to retrieve` *ctx_items_limit*

Max items in context history list to retrieve in one query. 0 = no limit. *Default:* `30`

- `Per-context items content chunk size` *chunk_size*

Per-context content chunk size (max characters per chunk). *Default:* `100000 chars`

**Options (advanced)**

- `Prompt: @ tags (system)` *prompt_tag_system*

Prompt for use @ tag (system).

- `Prompt: @ tags (summary)` *prompt_tag_summary*

Prompt for use @ tag (summary).


## Crontab / Task scheduler

The plugin provides cron-based job scheduling - you can schedule tasks/prompts to be sent at any time, using cron-based syntax for task setup.

![v2_crontab](https://github.com/szczyglis-dev/py-gpt/assets/61396542/9fe8b25e-bbd2-4f03-9e5b-438e6f04d784)

- `Your tasks` *crontab*

Add your cron-style tasks here. 
They will be executed automatically at the times you specify in the cron-based job format. 
If you are unfamiliar with Cron, consider visiting the Cron Guru page for assistance: https://crontab.guru

The number of active tasks is always displayed in the tray dropdown menu:

![v2_crontab_tray](https://github.com/szczyglis-dev/py-gpt/assets/61396542/f9d1825f-4511-4b7f-bdce-45ee18408021)

- `Create a new context on job run` *new_ctx*

If enabled, then a new context will be created on every run of the job. *Default:* `True`

- `Show notification on job run` *show_notify*

If enabled, then a tray notification will be shown on every run of the job. *Default:* `True`


## DALL-E 3: Image Generation (inline)

The plugin integrates `DALL-E 3` image generation with any chat mode. Simply enable it and request an image in Chat mode, using a standard model such as `GPT-4`. The plugin does not require the `Execute commands` option to be enabled.

**Options**

- `Prompt` *prompt*

The prompt is used to generate a query for the `DALL-E` image generation model, which runs in the background.

##  Experts (inline)

The plugin allows calling experts in any chat mode. This is the inline Experts (co-op) mode.

See the `Mode -> Experts` section for more details.

## GPT-4 Vision (inline)

The plugin integrates vision capabilities across all chat modes, not just Vision mode. Once enabled, it allows the model to seamlessly switch to vision processing in the background whenever an image attachment or vision capture is detected.

**Tip:** When using `Vision (inline)` by utilizing a plugin in standard mode, such as `Chat` (not `Vision` mode), the `+ Vision` special checkbox will appear at the bottom of the Chat window. It will be automatically enabled any time you provide content for analysis (like an uploaded photo). When the checkbox is enabled, the vision model is used. If you wish to exit the vision model after image analysis, simply uncheck the checkbox. It will activate again automatically when the next image content for analysis is provided.

**Options**

- `Model` *model*

The model used to temporarily provide vision capabilities. *Default:* `gpt-4-vision-preview`.

- `Prompt` *prompt*

The prompt used for vision mode. It will be appended to, or will replace, the current system prompt when the vision model is used.

- `Replace prompt` *replace_prompt*

Replace the whole system prompt with the vision prompt instead of appending it to the current prompt. *Default:* `False`

- `Enable: capturing images from camera` *cmd.camera_capture*

Allows `capture` command execution. If enabled, the model will be able to capture images from the camera itself. The `Execute commands` option must be enabled. *Default:* `False`

- `Enable: making screenshots` *cmd.make_screenshot*

Allows `screenshot` command execution. If enabled, the model will be able to make screenshots itself. The `Execute commands` option must be enabled. *Default:* `False`

## Real Time

This plugin automatically adds the current date and time to each system prompt you send. 
You have the option to include just the date, just the time, or both.

When enabled, it quietly enhances each system prompt with current time information before sending it to GPT.

**Options**

- `Append time` *hour*

If enabled, it appends the current time to the system prompt. *Default:* `True`

- `Append date` *date*

If enabled, it appends the current date to the system prompt.  *Default:* `True`

- `Template` *tpl*

Template to append to the system prompt. The placeholder `{time}` will be replaced with the 
current date and time in real-time. *Default:* `Current time is {time}.`

## System Prompt Extra (append)

The plugin appends additional system prompts (extra data) from a list to every current system prompt. 
You can enhance every system prompt with extra instructions that will be automatically appended to the system prompt.

**Options**

- `Prompts` *prompts*

List of extra prompts - prompts that will be appended to the system prompt. 
All active extra prompts defined on the list will be appended to the system prompt in the order they are listed here.


## Voice Control (inline)

The plugin provides voice control command execution within a conversation.

See the ``Accessibility`` section for more details.


# Creating Your Own Plugins

You can create your own plugin for **PyGPT** at any time. The plugin can be written in Python and then registered with the application just before launching it. All plugins included with the app are stored in the `plugin` directory - you can use them as coding examples for your own plugins.

PyGPT can be extended with:

- Custom plugins

- Custom LLM wrappers

- Custom vector store providers

- Custom data loaders

- Custom audio input providers

- Custom audio output providers

- Custom web search engine providers


**Examples (tutorial files)** 

See the `examples` directory in this repository with examples of custom launcher, plugin, vector store, LLM (Langchain and Llama-index) provider and data loader:

- `examples/custom_launcher.py`

- `examples/example_audio_input.py`

- `examples/example_audio_output.py`

- `examples/example_data_loader.py`

- `examples/example_llm.py`

- `examples/example_plugin.py`

- `examples/example_vector_store.py`

- `examples/example_web_search.py`

These example files can be used as a starting point for creating your own extensions for **PyGPT**.

Extending PyGPT with custom plugins, LLM wrappers and vector stores:

- You can pass custom plugin instances, LLM wrappers and vector store providers to the launcher.

- This is useful if you want to extend PyGPT with your own plugins, vector stores and LLMs.

To register custom plugins:

- Pass a list with the plugin instances as `plugins` keyword argument.

To register custom LLM wrappers:

- Pass a list with the LLM wrapper instances as `llms` keyword argument.

To register custom vector store providers:

- Pass a list with the vector store provider instances as `vector_stores` keyword argument.

To register custom data loaders:

- Pass a list with the data loader instances as `loaders` keyword argument.

To register custom audio input providers:

- Pass a list with the audio input provider instances as `audio_input` keyword argument.

To register custom audio output providers:

- Pass a list with the audio output provider instances as `audio_output` keyword argument.

To register custom web providers:

- Pass a list with the web provider instances as `web` keyword argument.

**Example:**


```python
# custom_launcher.py

from pygpt_net.app import run
from plugins import CustomPlugin, OtherCustomPlugin
from llms import CustomLLM
from vector_stores import CustomVectorStore

plugins = [
    CustomPlugin(),
    OtherCustomPlugin(),
]
llms = [
    CustomLLM(),
]
vector_stores = [
    CustomVectorStore(),
]

run(
    plugins=plugins,
    llms=llms,
    vector_stores=vector_stores
)
```

## Handling events

In the plugin, you can receive and modify dispatched events.
To do this, create a method named `handle(self, event, *args, **kwargs)` and handle the received events like here:

```python
# custom_plugin.py

from pygpt_net.core.dispatcher import Event


def handle(self, event: Event, *args, **kwargs):
    """
    Handle dispatched events

    :param event: event object
    """
    name = event.name
    data = event.data
    ctx = event.ctx

    if name == Event.INPUT_BEFORE:
        self.some_method(data['value'])
    elif name == Event.CTX_BEGIN:
        self.some_other_method(ctx)
    else:
        pass  # ...
```

**List of Events**

Event names are defined in `Event` class in `pygpt_net.core.dispatcher.Event`.

Syntax: `event name` - when the event is triggered, `event data` *(data type)*:

- `AI_NAME` - when preparing an AI name, `data['value']` *(string, name of the AI assistant)*

- `AUDIO_INPUT_STOP` - force stop audio input

- `AUDIO_INPUT_TOGGLE` - when speech input is enabled or disabled, `data['value']` *(bool, True/False)*

- `AUDIO_OUTPUT_STOP` - force stop audio output

- `AUDIO_OUTPUT_TOGGLE` - when speech output is enabled or disabled, `data['value']` *(bool, True/False)*

- `AUDIO_READ_TEXT` - on text read with speech synthesis, `data['value']` *(str)*

- `CMD_EXECUTE` - when a command is executed, `data['commands']` *(list, commands and arguments)*

- `CMD_INLINE` - when an inline command is executed, `data['commands']` *(list, commands and arguments)*

- `CMD_SYNTAX` - when appending syntax for commands, `data['prompt'], data['syntax']` *(string, list, prompt and list with commands usage syntax)*

- `CMD_SYNTAX_INLINE` - when appending syntax for commands (inline mode), `data['prompt'], data['syntax']` *(string, list, prompt and list with commands usage syntax)*

- `CTX_AFTER` - after the context item is sent, `ctx`

- `CTX_BEFORE` - before the context item is sent, `ctx`

- `CTX_BEGIN` - when a context item is created, `ctx`

- `CTX_END` - when context item handling is finished, `ctx`

- `CTX_SELECT` - when context is selected on list, `data['value']` *(int, ctx meta ID)*

- `DISABLE` - when the plugin is disabled, `data['value']` *(string, plugin ID)*

- `ENABLE` - when the plugin is enabled, `data['value']` *(string, plugin ID)*

- `FORCE_STOP` - on force stop plugins

- `INPUT_BEFORE` - upon receiving input from the textarea, `data['value']` *(string, text to be sent)*

- `MODE_BEFORE` - before the mode is selected `data['value'], data['prompt']` *(string, string, mode ID)*

- `MODE_SELECT` - on mode select `data['value']` *(string, mode ID)*

- `MODEL_BEFORE` - before the model is selected `data['value']` *(string, model ID)*

- `MODEL_SELECT` - on model select `data['value']` *(string, model ID)*

- `PLUGIN_SETTINGS_CHANGED` - on plugin settings update

- `PLUGIN_OPTION_GET` - on request for plugin option value `data['name'], data['value']` *(string, any, name of requested option, value)*

- `POST_PROMPT` - after preparing a system prompt, `data['value']` *(string, system prompt)*

- `PRE_PROMPT` - before preparing a system prompt, `data['value']` *(string, system prompt)*

- `SYSTEM_PROMPT` - when preparing a system prompt, `data['value']` *(string, system prompt)*

- `UI_ATTACHMENTS` - when the attachment upload elements are rendered, `data['value']` *(bool, show True/False)*

- `UI_VISION` - when the vision elements are rendered, `data['value']` *(bool, show True/False)*

- `USER_NAME` - when preparing a user's name, `data['value']` *(string, name of the user)*

- `USER_SEND` - just before the input text is sent, `data['value']` *(string, input text)*


You can stop the propagation of a received event at any time by setting `stop` to `True`:

```
event.stop = True
```

# Functions and commands execute

**Tip:** `gpt-4-1106-preview` is the best model to use for command handling; the `gpt-4-turbo-preview` model can sometimes refuse to execute commands.

**PyGPT** uses an internal syntax to define commands and their parameters, which can then be used by the model and executed on the application side or even directly in the system. This syntax looks as follows (example command below):

```~###~{"cmd": "send_email", "params": {"quote": "Why don't skeletons fight each other? They don't have the guts!"}}~###~```

It is JSON wrapped between `~###~`. The application extracts the JSON object from such formatted text and executes the appropriate function based on the provided parameters and command name. Many of these types of commands are defined in plugins (e.g., those used for file operations or internet searches). You can also define your own commands using the `Custom Commands` plugin, or simply by creating your own plugin and adding it to the application.
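
For illustration only (this is not the application's actual parser), extracting such a wrapped command from a model response could look roughly like this:

```python
import json
import re

# a model response containing a wrapped command, as in the example above
response = '~###~{"cmd": "send_email", "params": {"quote": "Why don\'t skeletons fight each other? They don\'t have the guts!"}}~###~'

# grab every JSON object wrapped between ~###~ markers
for match in re.findall(r"~###~(.*?)~###~", response, flags=re.DOTALL):
    command = json.loads(match)
    print(command["cmd"])     # -> send_email
    print(command["params"])  # -> {'quote': "Why don't skeletons fight each other? ..."}
```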

**Tip:** The `Execute commands` option checkbox must be enabled to allow the execution of commands from plugins. Disable the option if you do not want to use commands, to prevent additional token usage (as the command execution system prompt consumes additional tokens).

![v2_code_execute](https://github.com/szczyglis-dev/py-gpt/assets/61396542/d5181eeb-6ab4-426f-93f0-037d256cb078)

A special system prompt responsible for invoking commands is added to the main system prompt if the `Execute commands` option is active.

However, there is an additional way to define your own commands and execute them with the help of GPT.
These are functions, defined on the OpenAI API side and described using JSON objects. You can find a complete guide on how to define functions here:

https://platform.openai.com/docs/guides/function-calling

https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models


PyGPT offers compatibility of these functions with commands used in the application. All you need to do is define the appropriate functions using the syntax required by OpenAI, and PyGPT will do the rest, translating such syntax on the fly into its own internal format.

You can define functions for the `Chat` and `Assistants` modes.
Note that in `Chat` mode they should be defined in `Presets`, and for `Assistants`, in the `Assistant` settings.

**Example of usage:**

1) Chat

Create a new Preset, open the Preset edit dialog and add a new function using the `+ Function` button with the following content:

**Name:** `send_email`

**Description:** `Sends a quote using email`

**Params (JSON):**

```json
{
    "type": "object",
    "properties": {
        "quote": {
            "type": "string",
            "description": "A generated funny quote"
        }
    },
    "required": [
        "quote"
    ]
}
```
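
For reference, the `Params (JSON)` field above corresponds to the `parameters` schema of an OpenAI function definition. Assembled in the raw OpenAI `tools` format (per the function-calling guide linked above), the whole definition would look roughly like this, shown here as a Python literal; PyGPT handles the translation between its fields and the API for you, as described above:

```python
# Illustrative only: the raw OpenAI "tools" entry corresponding to the function above.
tools = [
    {
        "type": "function",
        "function": {
            "name": "send_email",
            "description": "Sends a quote using email",
            "parameters": {
                "type": "object",
                "properties": {
                    "quote": {
                        "type": "string",
                        "description": "A generated funny quote",
                    }
                },
                "required": ["quote"],
            },
        },
    }
]
```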

Then, in the `Custom Commands` plugin, create a new command with the same name and the same parameters:

**Command name:** `send_email`

**Instruction/prompt:** `send mail` *(not needed here, because the function will be called on the OpenAI side)*

**Params list:** `quote`

**Command to execute:** `echo "OK. Email sent: {quote}"`

Next, enable the `Execute commands` option and enable the plugin.

Ask GPT in Chat mode:

```Create a funny quote and email it```

In response, you will receive a prepared command, like this:

```~###~{"cmd": "send_email", "params": {"quote": "Why do we tell actors to 'break a leg?' Because every play has a cast!"}}~###~```

After receiving this, PyGPT will execute the system `echo` command with the parameters from the `params` field, replacing the `{quote}` placeholder with the value of the `quote` parameter.

As a result, a response like this will be sent to the model:

```[{"request": {"cmd": "send_email"}, "result": "OK. Email sent: Why do we tell actors to 'break a leg?' Because every play has a cast!"}]```
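
A simplified, purely illustrative sketch of this flow (not the plugin's actual code; it assumes a POSIX shell with `echo` available):

```python
import json
import subprocess

# command template and params, as configured above
template = 'echo "OK. Email sent: {quote}"'
params = {"quote": "Why do we tell actors to 'break a leg?' Because every play has a cast!"}

# replace the {quote} placeholder with the param value and run the command
output = subprocess.check_output(template.format(**params), shell=True, text=True).strip()

# build the result that is sent back to the model
result = [{"request": {"cmd": "send_email"}, "result": output}]
print(json.dumps(result))
```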


2) Assistant

In this mode (via the Assistants API), the process is similar, with the difference that the functions must be defined in the assistant's settings.

With this flow you can use both forms - OpenAI and PyGPT - to define and execute commands and functions in the application. They will cooperate with each other and you can use them interchangeably.

# Tools

PyGPT features several useful tools, including:

- Indexer
- Media Player
- Image viewer
- Text editor
- Transcribe audio/video files
- Python code interpreter

![v2_tool_menu](https://github.com/szczyglis-dev/py-gpt/assets/61396542/fb3f44af-f0de-4e18-bcac-e20389a651c9)


### Indexer


This tool allows indexing of local files or directories and external web content to a vector database, which can then be used with the `Chat with Files` mode. Using this tool, you can manage local indexes and add new data with built-in `Llama-index` integration.

![v2_tool_indexer](https://github.com/szczyglis-dev/py-gpt/assets/61396542/1caeab6e-6119-44e2-a7cb-ed34f8fe9e30)

### Media Player


A simple video/audio player that allows you to play video files directly from within the app.


### Image Viewer


A simple image browser that lets you preview images directly within the app.


### Text Editor


A simple text editor that enables you to edit text files directly within the app.


### Transcribe Audio/Video Files


An audio transcription tool with which you can prepare a transcript from a video or audio file. It will use a speech recognition plugin to generate the text from the file.


### Python Code Interpreter


This tool allows you to run Python code directly from within the app. It is integrated with the `Code Interpreter` plugin, ensuring that code generated by the model is automatically available from the interpreter. In the plugin settings, you can enable the execution of code in a Docker environment.

# Token usage calculation

## Input tokens

The application features a token calculator. It attempts to forecast the number of tokens that 
a particular query will consume and displays this estimate in real time. This gives you improved 
control over your token usage. The app provides detailed information about the tokens used for the user's prompt, 
the system prompt, any additional data, and those used within the context (the memory of previous entries).

**Remember that these are only approximate calculations and do not include, for example, the number of tokens consumed by some plugins. You can find the exact number of tokens used on the OpenAI website.**
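
If you want to reproduce a similar rough estimate outside the app, you can use the `tiktoken` library (one of the project's dependencies). Keep in mind this is only an approximation, just like the in-app calculator:

```python
import tiktoken

# pick the encoding for the model you plan to use
encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")

system_prompt = "You are a helpful assistant."
user_prompt = "Summarize the plot of Hamlet in two sentences."

# rough token estimate for the prompts (excludes per-message API overhead)
total = len(encoding.encode(system_prompt)) + len(encoding.encode(user_prompt))
print(f"Estimated prompt tokens: {total}")
```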

![v2_tokens1](https://github.com/szczyglis-dev/py-gpt/assets/61396542/29b610be-9e96-41cc-84f0-1b946886f801)

## Total tokens

After receiving a response from the model, the application displays the actual total number of tokens used for the query (received from the API).

![v2_tokens2](https://github.com/szczyglis-dev/py-gpt/assets/61396542/c81e95b5-7c33-41a6-8910-21d674db37e5)

# Configuration

## Settings

The following basic options can be modified directly within the application:

``` ini
Config -> Settings...
```

![v2_settings](https://github.com/szczyglis-dev/py-gpt/assets/61396542/43622c58-6cdb-4ed8-b47d-47729763db04)

**General**

- `OpenAI API KEY`: The personal API key you'll need to enter into the application for it to function.

- `OpenAI ORGANIZATION KEY`: The organization's API key, which is optional for use within the application.

- `API Endpoint`: OpenAI API endpoint URL, default: https://api.openai.com/v1.

- `Number of notepads`: Number of notepad tabs. Restart of the application is required for this option to take effect.

- `Minimize to tray on exit`: Minimize to tray icon on exit. Tray icon enabled is required for this option to work. Default: False.

- `Render engine`: Chat output render engine: `WebEngine / Chromium` for full HTML/CSS rendering, or `Legacy (markdown)` for simple legacy markdown/CSS output. Default: WebEngine / Chromium.

- `OpenGL hardware acceleration`: Enables hardware acceleration in the `WebEngine / Chromium` renderer. Default: False.

- `Application environment (os.environ)`: Additional environment vars to set on application start.

**Layout**

- `Zoom`: Adjusts the zoom in chat window (web render view). `WebEngine / Chromium` render mode only.

- `Code syntax highlight`: Syntax highlight theme in code blocks. `WebEngine / Chromium` render mode only.

- `Font Size (chat window)`: Adjusts the font size in the chat window (plain-text) and notepads.

- `Font Size (input)`: Adjusts the font size in the input window.

- `Font Size (ctx list)`: Adjusts the font size in contexts list.

- `Font Size (toolbox)`: Adjusts the font size in toolbox on right.

- `Layout density`: Adjusts layout elements density. Default: -1. 

- `DPI scaling`: Enable/disable DPI scaling. Restart of the application is required for this option to take effect. Default: True. 

- `DPI factor`: DPI factor. Restart of the application is required for this option to take effect. Default: 1.0. 

- `Display tips (help descriptions)`: Display help tips, Default: True.

- `Store dialog window positions`: Enable or disable dialogs positions store/restore, Default: True.

- `Use theme colors in chat window`: Use color theme in chat window, Default: True.

- `Disable markdown formatting in output`: Enables plain-text display in output window, Default: False.

**Files and attachments**

- `Store attachments in the workdir upload directory`: Enable to store a local copy of uploaded attachments for future use. Default: True

- `Store images, capture and upload in data directory`: Enable to store everything in single data directory. Default: False

- `Directory for file downloads`: Subdirectory for downloaded files, e.g. in Assistants mode, inside "data". Default: "download"

**Context**

- `Context Threshold`: Sets the number of tokens reserved for the model to respond to the next prompt.

- `Limit of last contexts on list to show  (0 = unlimited)`: Limit of the last contexts on list, default: 0 (unlimited)

- `Use Context`: Toggles the use of conversation context (memory of previous inputs).

- `Store History`: Toggles conversation history store.

- `Store Time in History`: Chooses whether timestamps are added to the .txt files.

- `Context Auto-summary`: Enables automatic generation of titles for contexts, Default: True.

- `Lock incompatible modes`: If enabled, the app will create a new context when switched to an incompatible mode within an existing context.

- `Search also in conversation content, not only in titles`: When enabled, context search will also consider the content of conversations, not just the titles of conversations.

- `Show Llama-index sources`: If enabled, sources utilized will be displayed in the response (if available, it will not work in streamed chat).

- `Show code interpreter output`: If enabled, output from the code interpreter in the Assistant API will be displayed in real-time (in stream mode), Default: True.

- `Use extra context output`: If enabled, plain text output (if available) from command results will be displayed alongside the JSON output, Default: True.

- `Convert lists to paragraphs`: If enabled, lists (ul, ol) will be converted to paragraphs (p), Default: True.

- `Model used for auto-summary`: Model used for context auto-summary (default: *gpt-3.5-turbo-1106*).

**Models**

- `Max Output Tokens`: Sets the maximum number of tokens the model can generate for a single response.

- `Max Total Tokens`: Sets the maximum token count that the application can send to the model, including the conversation context.

- `RPM limit`: Sets the limit of maximum requests per minute (RPM), 0 = no limit.

- `Temperature`: Sets the randomness of the conversation. A lower value makes the model's responses more deterministic, while a higher value increases creativity and abstraction.

- `Top-p`: A parameter that influences the model's response diversity, similar to temperature. For more information, please check the OpenAI documentation.

- `Frequency Penalty`: Decreases the likelihood of repetition in the model's responses.

- `Presence Penalty`: Discourages the model from mentioning topics that have already been brought up in the conversation.

**Prompts**

- `Command execute: instruction`: Prompt for appending command execution instructions. Placeholders: {schema}, {extra}

- `Command execute: extra footer (non-Assistant modes)`: Extra footer to append after commands JSON schema.

- `Command execute: extra footer (Assistant mode only)`: Additional instructions to separate local commands from the remote environment that is already configured in the Assistants.

- `Context: auto-summary (system prompt)`: System prompt for context auto-summary.

- `Context: auto-summary (user message)`: User message for context auto-summary. Placeholders: {input}, {output}

- `Agent: system instruction`: Prompt to instruct how to handle autonomous mode.

- `Agent: continue`: Prompt sent to automatically continue the conversation.

- `Agent: goal update`: Prompt to instruct how to update current goal status.

- `Experts: Master prompt`: Prompt to instruct how to handle experts.

- `DALL-E: image generate`: Prompt for generating prompts for DALL-E (if raw-mode is disabled).

**Images**

- `DALL-E Image size`: The resolution of the generated images (DALL-E). Default: 1792x1024.

- `DALL-E Image quality`: The image quality of the generated images (DALL-E). Default: standard.

- `Open image dialog after generate`: Enable the image dialog to open after an image is generated in Image mode.

- `DALL-E: prompt generation model`: Model used for generating prompts for DALL-E (if raw-mode is disabled).

**Vision**

- `Vision: Camera capture width (px)`: Video capture resolution (width).

- `Vision: Camera capture height (px)`: Video capture resolution (height).

- `Vision: Camera IDX (number)`: Video capture camera index (number of camera).

- `Vision: Image capture quality`: Video capture image JPEG quality (%).

**Indexes (Llama-index)**

- `Indexes`: List of created indexes.

- `Vector Store`: Vector store to use (vector database provided by Llama-index).

- `Vector Store (**kwargs)`: Keyword arguments for vector store provider (api_key, index_name, etc.).

- `Embeddings provider`: Embeddings provider.

- `Embeddings provider (ENV)`: ENV vars to embeddings provider (API keys, etc.).

- `Embeddings provider (**kwargs)`: Keyword arguments for embeddings provider (model name, etc.).

- `RPM limit for embeddings API calls`: Specify the limit of maximum requests per minute (RPM), 0 = no limit.

- `Recursive directory indexing`: Enables recursive directory indexing, default is False.

- `Replace old document versions in the index during re-indexing`: If enabled, previous versions of documents will be deleted from the index when the newest versions are indexed, default is True.

- `Excluded file extensions`: File extensions to exclude if there is no data loader for the extension, separated by commas.

- `Force exclude files`: If enabled, the exclusion list will be applied even when the data loader for the extension is active. Default: False.

- `Custom metadata to append/replace to indexed documents (file)`: Define custom metadata key => value fields for specified file extensions, separate extensions by comma. Allowed placeholders: {path}, {relative_path}, {filename}, {dirname}, {relative_dir}, {ext}, {size}, {mtime}, {date}, {date_time}, {time}, {timestamp}. Use * (asterisk) as the extension if you want to apply the field to all files. Set an empty value to remove a field with the specified key from the metadata.

- `Custom metadata to append/replace to indexed documents (web)`: Define custom metadata key => value fields for specified external data loaders. Allowed placeholders: {date}, {date_time}, {time}, {timestamp} + {data loader args}

- `Additional keyword arguments (**kwargs) for data loaders`: Additional keyword arguments, such as settings, API keys, for the data loader. These arguments will be passed to the loader; please refer to the Llama-index or LlamaHub loaders reference for a list of allowed arguments for the specified data loader.

- `Use local models in Video/Audio and Image (vision) loaders`: Enables usage of local models in Video/Audio and Image (vision) loaders. If disabled, API models will be used (GPT-4 Vision and Whisper). Note: local models will work only in the Python version (not compiled/Snap). Default: False.

- `Auto-index DB in real time`: Enables conversation context auto-indexing in defined modes.

- `ID of index for auto-indexing`: Index to use if auto-indexing of conversation context is enabled.

- `Enable auto-index in modes`: List of modes with enabled context auto-index, separated by comma.

- `DB (ALL), DB (UPDATE), FILES (ALL)`: Index the data – batch indexing is available here.

**Agent and experts**

- `Sub-mode to use`: Sub-mode to use in Agent mode (chat, completion, langchain, llama_index, etc.). Default: chat.

- `Sub-mode for experts`: Sub-mode to use in Experts mode (chat, completion, langchain, llama_index, etc.). Default: chat.

- `Index to use`: Only if sub-mode is llama_index (Chat with files), choose the index to use in Agent mode.

- `Display a tray notification when the goal is achieved.`: If enabled, a notification will be displayed when the goal is achieved / the run is finished.

**Accessibility**

- `Enable voice control (using microphone)`: enables voice control (using microphone and defined commands).

- `Model`: model used for voice command recognition.

- `Use voice synthesis to describe events on the screen.`: enables audio description of on-screen events.

- `Use audio output cache`: If enabled, all static audio outputs will be cached on the disk instead of being generated every time. Default: True.

- `Audio notify microphone listening start/stop`: enables an audio "tick" notification when microphone listening starts/stops.

- `Audio notify voice command execution`: enables an audio "tick" notification when a voice command is executed.

- `Control shortcut keys`: configuration of keyboard shortcuts for specific actions.

- `Blacklist for voice synthesis events describe (ignored events)`: list of events muted for the 'Use voice synthesis to describe events on the screen' option.

- `Voice control actions blacklist`: Disable actions in voice control; add actions to the blacklist to prevent execution through voice commands.

**Updates**

- `Check for updates on start`: Enables checking for updates on start. Default: True.

- `Check for updates in background`: Enables checking for updates in background (checking every 5 minutes). Default: True.

**Developer**

- `Show debug menu`: Enables debug (developer) menu.

- `Log and debug context`: Enables logging of context input/output.

- `Log and debug events`: Enables logging of event dispatch.

- `Log plugin usage to console`: Enables logging of plugin usage to console.

- `Log DALL-E usage to console`: Enables logging of DALL-E usage to console.

- `Log Llama-index usage to console`: Enables logging of Llama-index usage to console.

- `Log Assistants usage to console`: Enables logging of Assistants API usage to console.

- `Log level`: toggle log level (ERROR|WARNING|INFO|DEBUG)


## JSON files

The configuration is stored in JSON files for easy manual modification outside of the application. 
These configuration files are located in the user's work directory within the following subdirectory:

``` ini
{HOME_DIR}/.config/pygpt-net/
```
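
As a minimal illustration (not an official API), you can inspect these files with any JSON-aware tool, for example from Python; the `render.engine` key used below is one of the options shown later in the `Compatibility (legacy) mode` section:

```python
import json
from pathlib import Path

# path to the main configuration file in the user's work directory
config_path = Path.home() / ".config" / "pygpt-net" / "config.json"

with open(config_path, "r", encoding="utf-8") as f:
    config = json.load(f)

# e.g. check which chat output render engine is configured
print(config.get("render.engine"))
```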

# Notepad

The application has a built-in notepad, divided into several tabs. This can be useful for storing information in a convenient way, without the need to open an external text editor. The content of the notepad is automatically saved whenever the content changes.

![v2_notepad](https://github.com/szczyglis-dev/py-gpt/assets/61396542/f6aa0126-bad1-4e6c-ace6-72e979186433)

# Profiles

You can create multiple "profiles" for the app and switch between them. Each profile uses its own configuration, settings, history of contexts, and a separate folder for user files. This allows you to maintain many setups and quickly switch between them, changing the whole setup with one click.

The app allows you to make new profiles, edit existing ones, and duplicate current ones.

To make a new profile, select the option from the menu `Config -> Profile -> New profile...`

To edit saved profiles, choose the option from the menu `Config -> Profile -> Edit profiles...`

To switch to a created profile, pick the profile from the menu: `Config -> Profile -> (profile name)`

Each profile uses its own user directory (workdir). You can link a newly created (or edited) profile to an already existing workdir with its configuration.

The name of the currently active profile is shown as (Profile Name) in the window title.

# Advanced configuration

## Manual configuration


You can manually edit the configuration files in this directory (this is your work directory):

``` ini
{HOME_DIR}/.config/pygpt-net/
```

- `assistants.json` - stores the list of assistants.
- `attachments.json` - stores the list of current attachments.
- `config.json` - stores the main configuration settings.
- `models.json` - stores models configurations.
- `cache` - a directory for audio cache.
- `capture` - a directory for captured images from camera and screenshots.
- `css` - a directory for CSS stylesheets (user override).
- `history` - a directory for context history in `.txt` format.
- `idx` - a directory for `Llama-index` indexes.
- `img` - a directory for images generated with `DALL-E 3` and `DALL-E 2`, saved as `.png` files.
- `locale` - a directory for locales (user override).
- `data` - a directory for data files and files downloaded/generated by GPT.
- `presets` - a directory for presets stored as `.json` files.
- `upload` - a directory for local copies of attachments coming from outside the workdir.
- `db.sqlite` - a database with contexts, notepads and indexes data records.
- `app.log` - a file with error and debug logs.

---

## Translations / Locale

Locale `.ini` files are located in the app directory:

``` ini
./data/locale
```

This directory is automatically scanned when the application launches. To add a new translation, 
create and save the file with the appropriate name, for example:

``` ini
locale.es.ini   
```

This will add Spanish as a selectable language in the application's language menu.

**Overwriting CSS and locales with Your Own Files:**

You can also overwrite files in the `locale` and `css` app directories with your own files in the user directory. 
This allows you to overwrite language files or CSS styles in a very simple way - by just creating files in your working directory.


``` ini
{HOME_DIR}/.config/pygpt-net/
```

- `locale` - a directory for locales in `.ini` format.
- `css` - a directory for CSS styles in `.css` format.

**Adding Your Own Fonts**

You can add your own fonts and use them in CSS files. To load your own fonts, you should place them in the `%workdir%/fonts` directory. Supported font types include: `otf`, `ttf`.
You can see the list of loaded fonts in `Debug / Config`.

**Example:**

```
%workdir%
|_css
|_data
|_fonts
   |_MyFont
     |_MyFont-Regular.ttf
     |_MyFont-Bold.ttf
     |...
```

```css
pre {
    font-family: 'MyFont';
}
```

## Debugging and Logging

In `Settings -> Developer` dialog, you can enable the `Show debug menu` option to turn on the debugging menu. The menu allows you to inspect the status of application elements. In the debugging menu, there is a `Logger` option that opens a log window. In the window, the program's operation is displayed in real-time.

**Logging levels**:

By default, all errors and exceptions are logged to the file:

```ini
{HOME_DIR}/.config/pygpt-net/app.log
```

To increase the logging level (the `ERROR` level is the default), run the application with the `--debug` argument:

``` ini
python3 run.py --debug=1
```

or

```ini
python3 run.py --debug=2
```

The value `1` enables the `INFO` logging level.

The value `2` enables the `DEBUG` logging level (most information).

## Compatibility (legacy) mode

If you have problems with the `WebEngine / Chromium` renderer, you can force legacy mode by launching the app with command line arguments:

``` ini
python3 run.py --legacy=1
```

and to force disable OpenGL hardware acceleration:

``` ini
python3 run.py --disable-gpu=1
```

You can also manually enable legacy mode by editing the config file: open `%WORKDIR%/config.json` in an editor and set the following options:

``` json
"render.engine": "legacy",
"render.open_gl": false,
```

## Updates

### Updating PyGPT

**PyGPT** comes with an integrated update notification system. When a new version with additional features is released, you'll receive an alert within the app. 

To get the new version, simply download it and start using it in place of the old one. All your custom settings like configuration, presets, indexes, and past conversations will be kept and ready to use right away in the new version.


## Coming soon

- Enhanced integration with Langchain
- More vector databases support
- Development of autonomous agents

## DISCLAIMER

This application is not officially associated with OpenAI. The author shall not be held liable for any damages 
resulting from the use of this application. It is provided "as is," without any form of warranty. 
Users are reminded to be mindful of token usage - always verify the number of tokens utilized by the model on 
the OpenAI website and engage with the application responsibly. Activating plugins, such as Web Search,
may consume additional tokens that are not displayed in the main window. 

**Always monitor your actual token usage on the OpenAI website.**

---

# CHANGELOG

## Recent changes:

**2.2.18 (2024-05-05)**

- Fix: prevent crash if no audio to play.

**2.2.17 (2024-05-05)**

- Fix: Added prevention of trying to play audio if the output is empty.
- Disabled playing the finish event when audio or voice control is enabled.

**2.2.16 (2024-05-05)**

- Escape key now stops response generation and audio output (if playing).
- Voice control options added to the Audio menu.
- Added cache on disk for generated static audio content.
- Added plugin translations for other languages.

**2.2.15 (2024-05-04)**

- Added audio output stop on audio input start.
- Added notification about unrecognized commands.
- Voice control improvements.

**2.2.14 (2024-05-04)**

- Added a 'Voice Control (inline)' plugin that allows for voice command control directly during a conversation.
- Added configuration in 'Settings -> Accessibility' for a blacklist of actions available as voice commands.

**2.2.13 (2024-05-03)**

- Added stretch to dictionary config fields.
- Removed redundant attachments clear event.

**2.2.12 (2024-05-03)**

- Improved speech recognition.
- Added minimum required length of audio input.
- Added missing translations.
- Fixed settings hooks triggering on profile switch.

**2.2.11 (2024-05-03)**

- Added a blacklist for events for the voice event description in settings.
- Added a delay to playing audio when describing events.
- Sorted the list of events in the configuration.

**2.2.10 (2024-05-03)**

- Extended voice control commands list.
- Extended actions and keyboard shortcuts.

**2.2.9 (2024-05-02)**

- Added more commands to voice control: search for contexts, clear search, add, read and clear calendar memos, context rename.

**2.2.8 (2024-05-02)**

- Added support for disabled people, including voice control and screen event translation with audio synthesis.
- A new section in Settings called 'Accessibility' has been added with options for assistance: voice control, keyboard shortcut definitions for actions, and screen event translation using audio synthesis.
- A new section called 'Accessibility' has been added to the Documentation.

The full changelog is located in the [CHANGELOG.md](https://github.com/szczyglis-dev/py-gpt/blob/master/CHANGELOG.md) file in the main folder of this repository.


# Credits and links

**Official website:** <https://pygpt.net>

**Documentation:** <https://pygpt.readthedocs.io>

**Support and donate:** <https://pygpt.net/#donate>

**GitHub:** <https://github.com/szczyglis-dev/py-gpt>

**Snap Store:** <https://snapcraft.io/pygpt>

**PyPI:** <https://pypi.org/project/pygpt-net>

**Author:** Marcin Szczygliński (Poland, EU)

**Contact:** <info@pygpt.net>

**License:** MIT License

# Special thanks

GitHub's community:

- [@BillionShields](https://github.com/BillionShields)

- [@gfsysa](https://github.com/gfsysa)

- [@glinkot](https://github.com/glinkot)

- [@kaneda2004](https://github.com/kaneda2004)

- [@linnflux](https://github.com/linnflux)

- [@moritz-t-w](https://github.com/moritz-t-w)

- [@oleksii-honchar](https://github.com/oleksii-honchar)

- [@yf007](https://github.com/yf007)

## Third-party libraries

Full list of external libraries used in this project is located in the [requirements.txt](https://github.com/szczyglis-dev/py-gpt/blob/master/requirements.txt) file in the main folder of the repository.

All used SVG icons are from `Material Design Icons` provided by Google:

https://github.com/google/material-design-icons

https://fonts.google.com/icons

Monaspace fonts provided by GitHub: https://github.com/githubnext/monaspace

Code of the Llama-index offline loaders integrated into the app is taken from LlamaHub: https://llamahub.ai

Awesome ChatGPT Prompts (used in templates): https://github.com/f/awesome-chatgpt-prompts/

            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/szczyglis-dev/py-gpt",
    "name": "pygpt-net",
    "maintainer": null,
    "docs_url": null,
    "requires_python": "<3.12,>=3.10",
    "maintainer_email": null,
    "keywords": "py_gpt, py-gpt, pygpt, desktop, app, gpt, gpt4, gpt4-v, gpt3.5, gpt-4, gpt-4V, gpt-3.5, tts, whisper, vision, chatgpt, dall-e, chat, chatbot, assistant, text completion, image generation, ai, api, openai, api key, langchain, llama-index, presets, ui, qt, pyside",
    "author": "Marcin Szczyglinski",
    "author_email": "info@pygpt.net",
    "download_url": "https://files.pythonhosted.org/packages/4a/ed/fec6da641016678c1362f61817a14b695509fdd4c25d1931a38aa49b2f78/pygpt_net-2.2.18.tar.gz",
    "platform": null,
    "description": "# PyGPT - Desktop AI Assistant\n\n[![pygpt](https://snapcraft.io/pygpt/badge.svg)](https://snapcraft.io/pygpt)\n\nRelease: **2.2.18** | build: **2024.05.05** | Python: **>=3.10, <3.12**\n\nOfficial website: https://pygpt.net | Documentation: https://pygpt.readthedocs.io\n\nSnap Store: https://snapcraft.io/pygpt | PyPi: https://pypi.org/project/pygpt-net\n\nCompiled version for Linux (`tar.gz`) and Windows 10/11 (`msi`) 64-bit: https://pygpt.net/#download\n\n## Overview\n\n**PyGPT** is **all-in-one** Desktop AI Assistant that provides direct interaction with OpenAI language models, including `GPT-4`, `GPT-4 Vision`, and `GPT-3.5`, through the `OpenAI API`. The application also integrates with alternative LLMs, like those available on `HuggingFace`, by utilizing `Langchain`.\n\nThis assistant offers multiple modes of operation such as chat, assistants, completions, and image-related tasks using `DALL-E 3` for generation and `GPT-4 Vision` for image analysis. **PyGPT** has filesystem capabilities for file I/O, can generate and run Python code, execute system commands, execute custom commands and manage file transfers. It also allows models to perform web searches with the `Google` and `Microsoft Bing`.\n\nFor audio interactions, **PyGPT** includes speech synthesis using the `Microsoft Azure`, `Google`, `Eleven Labs` and `OpenAI` Text-To-Speech services. Additionally, it features speech recognition capabilities provided by `OpenAI Whisper`, `Google` and `Bing` enabling the application to understand spoken commands and transcribe audio inputs into text. It features context memory with save and load functionality, enabling users to resume interactions from predefined points in the conversation. Prompt creation and management are streamlined through an intuitive preset system.\n\n**PyGPT**'s functionality extends through plugin support, allowing for custom enhancements. 
Its multi-modal capabilities make it an adaptable tool for a range of AI-assisted operations, such as text-based interactions, system automation, daily assisting, vision applications, natural language processing, code generation and image creation.\n\nMultiple operation modes are included, such as chat, text completion, assistant, vision, Langchain, Chat with files (via `Llama-index`), commands execution, external API calls and image generation, making **PyGPT** a multi-tool for many AI-driven tasks.\n\n**Video** (mp4, version `2.2.0`, build `2024-04-28`):\n\nhttps://github.com/szczyglis-dev/py-gpt/assets/61396542/7140ded4-1639-4c12-ac33-201b68b99a16\n\n**Screenshot** (version `2.2.0`, build `2024-04-28`):\n\n![v2_main](https://github.com/szczyglis-dev/py-gpt/assets/61396542/d9f2e67f-919c-4faa-b059-6e2f5efd23e6)\n\nYou can download compiled 64-bit versions for Windows and Linux here: https://pygpt.net/#download\n\n## Features\n\n- Desktop AI Assistant for `Linux`, `Windows` and `Mac`, written in Python.\n- Works similarly to `ChatGPT`, but locally (on a desktop computer).\n- 9 modes of operation: Chat, Vision, Completion, Assistant, Image generation, Langchain, Chat with files, Experts and Agent (autonomous).\n- Supports multiple models: `GPT-4`, `GPT-3.5`, and any model accessible through `Langchain`.\n- Included support features for individuals with disabilities: customizable keyboard shortcuts, voice control, and translation of on-screen actions into audio via speech synthesis.\n- Handles and stores the full context of conversations (short-term memory).\n- Real-time video camera capture in Vision mode.\n- Internet access via `Google` and `Microsoft Bing`.\n- Speech synthesis via `Microsoft Azure`, `Google`, `Eleven Labs` and `OpenAI` Text-To-Speech services.\n- Speech recognition via `OpenAI Whisper`, `Google`, `Google Cloud` and `Microsoft Bing`.\n- Image analysis via `GPT-4 Vision`.\n- Crontab / Task scheduler included.\n- Integrated `Langchain` support (you can connect to any LLM, e.g., on `HuggingFace`).\n- Integrated `Llama-index` support: chat with `txt`, `pdf`, `csv`, `html`, `md`, `docx`, `json`, `epub`, `xlsx`, `xml`, webpages, `Google`, `GitHub`, video/audio, images and other data types, or use conversation history as additional context provided to the model.\n- Integrated calendar, day notes and search in contexts by selected date.\n- Commands execution (via plugins: access to the local filesystem, Python code interpreter, system commands execution).\n- Custom commands creation and execution.\n- Manages files and attachments with options to upload, download, and organize.\n- Context history with the capability to revert to previous contexts (long-term memory).\n- Allows you to easily manage prompts with handy editable presets.\n- Provides an intuitive operation and interface.\n- Includes a notepad.\n- Includes simple painter / drawing tool.\n- Includes optional Autonomous Mode (Agents).\n- Supports multiple languages.\n- Enables the use of all the powerful features of `GPT-4`, `GPT-4V`, and `GPT-3.5`.\n- Requires no previous knowledge of using AI models.\n- Simplifies image generation using `DALL-E 3` and `DALL-E 2`.\n- Possesses the potential to support future OpenAI models.\n- Fully configurable.\n- Themes support.\n- Real-time code syntax highlighting.\n- Plugins support.\n- Built-in token usage calculation.\n- It's open source; source code is available on `GitHub`.\n- Utilizes the user's own API key.\n\nThe application is free, open-source, and runs on PCs with `Linux`, 
`Windows 10`, `Windows 11` and `Mac`. \nFull Python source code is available on `GitHub`.\n\n**PyGPT uses the user's API key  -  to use the application, \nyou must have a registered OpenAI account and your own API key.**\n\nYou can also use built-it Langchain support to connect to other Large Language Models (LLMs), \nsuch as those on HuggingFace. Additional API keys may be required.\n\n# Installation\n\n## Compiled versions (Linux, Windows 10 and 11)\n\nYou can download compiled versions for `Linux` and `Windows` (10/11). \n\nDownload the `.msi` or `tar.gz` for the appropriate OS from the download page at https://pygpt.net and then extract files from the archive and run the application. 64-bit only.\n\n## Snap Store\n\nYou can install **PyGPT** directly from Snap Store:\n\n```commandline\nsudo snap install pygpt\n```\n\nTo manage future updates just use:\n\n```commandline\nsudo snap refresh pygpt\n```\n\n[![Get it from the Snap Store](https://snapcraft.io/static/images/badges/en/snap-store-black.svg)](https://snapcraft.io/pygpt)\n\n**Using camera:** to use camera in Snap version you must connect the camera with:\n\n```commandline\nsudo snap connect pygpt:camera\n```\n\n**Using microphone:** to use microphone in Snap version you must connect the microphone with:\n\n```commandline\nsudo snap connect pygpt:audio-record :audio-record\n```\n\n## PyPi (pip)\n\nThe application can also be installed from `PyPi` using `pip install`:\n\n1. Create virtual environment:\n\n```commandline\npython3 -m venv venv\nsource venv/bin/activate\n```\n\n2. Install from PyPi:\n\n``` commandline\npip install pygpt-net\n```\n\n3. Once installed run the command to start the application:\n\n``` commandline\npygpt\n```\n\n## Source Code\n\nAn alternative method is to download the source code from `GitHub` and execute the application using the Python interpreter (>=3.10, <3.12). \n\n### Running from GitHub source code\n\n1. Clone git repository or download .zip file:\n\n```commandline\ngit clone https://github.com/szczyglis-dev/py-gpt.git\ncd py-gpt\n```\n\n2. Create virtual environment:\n\n```commandline\npython3 -m venv venv\nsource venv/bin/activate\n```\n\n3. Install requirements:\n\n```commandline\npip install -r requirements.txt\n```\n\n4. Run the application:\n\n```commandline\npython3 run.py\n```\n\n**Install with Poetry**\n\n1. Clone git repository or download .zip file:\n\n```commandline\ngit clone https://github.com/szczyglis-dev/py-gpt.git\ncd py-gpt\n```\n\n2. Install Poetry (if not installed):\n\n```commandline\npip install poetry\n```\n\n3. Create a new virtual environment that uses Python 3.10:\n\n```commandline\npoetry env use python3.10\npoetry shell\n```\n\n4. Install requirements:\n\n```commandline\npoetry install\n```\n\n5. Run the application:\n\n```commandline\npoetry run python3 run.py\n```\n\n**Tip**: you can use `PyInstaller` to create a compiled version of\nthe application for your system (required version >= `6.0.0`).\n\n### Troubleshooting\n\nIf you have a problems with `xcb` plugin with newer versions of PySide on Linux, e.g. like this:\n\n```commandline\nqt.qpa.plugin: Could not load the Qt platform plugin \"xcb\" in \"\" even though it was found.\nThis application failed to start because no Qt platform plugin could be initialized. 
\nReinstalling the application may fix this problem.\n```\n\n...then install `libxcb`:\n\n```commandline\nsudo apt install libxcb-cursor0\n```\n\nIf you have a problems with audio on Linux, then try to install `portaudio19-dev` and/or `libasound2`:\n\n```commandline\nsudo apt install portaudio19-dev\n```\n\n```commandline\nsudo apt install libasound2\nsudo apt install libasound2-data \nsudo apt install libasound2-plugins\n```\n\n**Access to camera in Snap version:**\n\nTo use camera in Vision mode in Snap version you must connect the camera with:\n\n```commandline\nsudo snap connect pygpt:camera\n```\n\n**Access to microphone in Snap version:**\n\nTo use microphone in Snap version you must connect the microphone with:\n\n```commandline\nsudo snap connect pygpt:audio-record :audio-record\n```\n\n**Windows and VC++ Redistributable**\n\nOn Windows, the proper functioning requires the installation of the `VC++ Redistributable`, which can be found on the Microsoft website:\n\nhttps://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist\n\nThe libraries from this environment are used by `PySide6` - one of the base packages used by PyGPT. \nThe absence of the installed libraries may cause display errors or completely prevent the application from running.\n\nIt may also be necessary to add the path `C:\\path\\to\\venv\\Lib\\python3.x\\site-packages\\PySide6` to the `PATH` variable.\n\n**WebEngine/Chromium renderer and OpenGL problems**\n\nIf you have a problems with `WebEngine / Chromium` renderer you can force the legacy mode by launching the app with command line arguments:\n\n``` ini\npython3 run.py --legacy=1\n```\n\nand to force disable OpenGL hardware acceleration:\n\n``` ini\npython3 run.py --disable-gpu=1\n```\n\nYou can also manualy enable legacy mode by editing config file - open the `%WORKDIR%/config.json` config file in editor and set the following options:\n\n``` json\n\"render.engine\": \"legacy\",\n\"render.open_gl\": false,\n```\n\n## Other requirements\n\nFor operation, an internet connection is needed (for API connectivity), a registered OpenAI account, \nand an active API key that must be input into the program.\n\n## Debugging and logging\n\n**Tip:** Go to `Debugging and Logging` section for instructions on how to log and diagnose issues in a more detailed manner.\n\n\n# Quick Start\n\n## Setting-up OpenAI API KEY\n\n**Tip:** The API key is required to work with the OpenAI API. If you wish to use custom API endpoints or local API that do not require API keys, simply enter anything into the API key field to avoid a prompt about the API key being empty.\n\nDuring the initial launch, you must configure your API key within the application.\n\nTo do so, navigate to the menu:\n\n``` ini\nConfig -> Settings...\n```\n\nand then paste the API key into the `OpenAI API KEY` field.\n\n![v2_settings](https://github.com/szczyglis-dev/py-gpt/assets/61396542/43622c58-6cdb-4ed8-b47d-47729763db04)\n\nThe API key can be obtained by registering on the OpenAI website:\n\n<https://platform.openai.com>\n\nYour API keys will be available here:\n\n<https://platform.openai.com/account/api-keys>\n\n**Note:** The ability to use models within the application depends on the API user's access to a given model!\n\n# Working modes\n\n## Chat\n\n**+ inline Vision and Image generation**\n\nThis mode in **PyGPT** mirrors `ChatGPT`, allowing you to chat with models such as `GPT-4`, `GPT-4 Turbo` and `GPT-3.5`. It's easy to switch models whenever you want. 
It works by using the `ChatCompletion API`.\n\nThe main part of the interface is a chat window where conversations appear. Right below that is where you type your messages. On the right side of the screen, there's a section to set up or change your system prompts. You can also save these setups as presets to quickly switch between different models or tasks.\n\nAbove where you type your messages, the interface shows you the number of tokens your message will use up as you type it \u2013 this helps to keep track of usage. There's also a feature to upload files in this area. Go to the `Files` tab to manage your uploads or add attachments to send to the OpenAI API (but this makes effect only in `Assisant` and `Vision` modes).\n\n![v2_mode_chat](https://github.com/szczyglis-dev/py-gpt/assets/61396542/f573ee22-8539-4259-b180-f97e54bc0d94)\n\n**Vision:** If you want to send photos or image from camera to analysis you must enable plugin **GPT-4 Vision Inline** in the Plugins menu.\nPlugin allows you to send photos or image from camera to analysis in any Chat mode:\n\n![v3_vision_plugins](https://github.com/szczyglis-dev/py-gpt/assets/61396542/104b0a80-7cf8-4a02-aa74-27e89ad2e409)\n\nWith this plugin, you can capture an image with your camera or attach an image and send it for analysis to discuss the photograph:\n\n![v3_vision_chat](https://github.com/szczyglis-dev/py-gpt/assets/61396542/3fbd99e2-5bbf-4bd4-81d8-fd4d7db9d8eb)\n\n**Image generation:** If you want to generate images (using DALL-E) directly in chat you must enable plugin **DALL-E 3 Inline** in the Plugins menu.\nPlugin allows you to generate images in Chat mode:\n\n![v3_img_chat](https://github.com/szczyglis-dev/py-gpt/assets/61396542/c288a4b3-c932-4201-b5a3-8452aea49817)\n\n## Completion\n\nThis mode provides in-depth access to a broader range of capabilities offered by Large Language Models (LLMs). While it maintains a chat-like interface for user interaction, it introduces additional settings and functional richness beyond typical chat exchanges. Users can leverage this mode to prompt models for complex text completions, role-play dialogues between different characters, perform text analysis, and execute a variety of other sophisticated tasks. It supports any model provided by the OpenAI API as well as other models through `Langchain`.\n\nSimilar to chat mode, on the right-hand side of the interface, there are convenient presets. These allow you to fine-tune instructions and swiftly transition between varied configurations and pre-made prompt templates.\n\nAdditionally, this mode offers options for labeling the AI and the user, making it possible to simulate dialogues between specific characters - for example, you could create a conversation between Batman and the Joker, as predefined in the prompt. This feature presents a range of creative possibilities for setting up different conversational scenarios in an engaging and exploratory manner.\n\n![v2_mode_completion](https://github.com/szczyglis-dev/py-gpt/assets/61396542/045ecb99-edcb-4eb1-9ff0-0b493dee0e27)\n\nFrom version `2.0.107` the `davinci` models are deprecated and has been replaced with `gpt-3.5-turbo-instruct` model in Completion mode.\n\n## Assistants\n\nThis mode uses the new OpenAI's **Assistants API**.\n\nThis mode expands on the basic chat functionality by including additional external tools like a `Code Interpreter` for executing code, `Retrieval Files` for accessing files, and custom `Functions` for enhanced interaction and integration with other APIs or services. 
In this mode, you can easily upload and download files. **PyGPT** streamlines file management, enabling you to quickly upload documents and manage files created by the model.\n\nSetting up new assistants is simple - a single click is all it takes, and they instantly sync with the `OpenAI API`. Importing assistants you've previously created with OpenAI into **PyGPT** is also a seamless process.\n\n![v2_mode_assistant](https://github.com/szczyglis-dev/py-gpt/assets/61396542/5c3b5604-928d-4f29-940a-21cc83c8dc34)\n\nIn Assistant mode you are allowed to storage your files (per Assistant) and manage them easily from app:\n\n![v2_mode_assistant_upload](https://github.com/szczyglis-dev/py-gpt/assets/61396542/b2c835ea-2816-4b85-bb6f-e08874e758f7)\n\nPlease note that token usage calculation is unavailable in this mode. Nonetheless, file (attachment) \nuploads are supported. Simply navigate to the `Files` tab to effortlessly manage files and attachments which \ncan be sent to the OpenAI API.\n\n### Vector stores (via Assistants API)\n\nAssistant mode supports the use of external vector databases offered by the OpenAI API. This feature allows you to store your files in a database and then search them using the Assistant's API. Each assistant can be linked to one vector database\u2014if a database is linked, all files uploaded in this mode will be stored in the linked vector database. If an assistant does not have a linked vector database, a temporary database is automatically created during the file upload, which is accessible only in the current thread. Files from temporary databases are automatically deleted after 7 days.\n\nTo enable the use of vector stores, enable the `Chat with files` checkbox in the Assistant settings. This enables the `File search` tool in Assistants API.\n\nTo manage external vector databases, click the DB icon next to the vector database selection list in the Assistant creation and editing window. In this management window, you can create a new database, edit an existing one, or import a list of all existing databases from the OpenAI server:\n\n![v2_assistant_stores](https://github.com/szczyglis-dev/py-gpt/assets/61396542/2f605326-5bf5-4c82-8dfd-cb1c0edf6724)\n\nYou can define, using `Expire days`, how long files should be automatically kept in the database before deletion (as storing files on OpenAI incurs costs). If the value is set to 0, files will not be automatically deleted.\n\nThe vector database in use will be displayed in the list of uploaded files, on the field to the right\u2014if a file is stored in a database, the name of the database will be displayed there; if not, information will be shown indicating that the file is only accessible within the thread:\n\n![v2_assistant_stores_upload](https://github.com/szczyglis-dev/py-gpt/assets/61396542/8f13c2eb-f961-4eae-b08b-0b4937f06ca9)\n\n## Vision (GPT-4 Vision)\n\n**INFO:** From version `2.2.6` (2024-04-30) Vision is available directly in Chat mode, without any plugins - if the model supports Vision (currently: `gpt-4-turbo` and `gpt-4-turbo-2024-04-09`).\n\nThis mode enables image analysis using the `GPT-4 Vision` model. Functioning much like the chat mode, \nit also allows you to upload images or provide URLs to images. The vision feature can analyze both local \nimages and those found online. \n\nVision is integrated into any chat mode via plugin `GPT-4 Vision (inline)`. Just enable the plugin and use Vision in standard modes.\n\nVision mode also includes real-time video capture from camera. 
To enable capture check the option `Camera` on the right-bottom corner. It will enable real-time capturing from your camera. To capture image from camera and append it to chat just click on video at left side. You can also enable `Auto capture` - image will be captured and appended to chat message every time you send message.\n\n![v2_capture_enable](https://github.com/szczyglis-dev/py-gpt/assets/61396542/c40ce0b4-57c8-4643-9982-25d15e68377e)\n\n**1) Video camera real-time image capture**\n\n![v2_capture1](https://github.com/szczyglis-dev/py-gpt/assets/61396542/477bb7fa-4639-42bb-8466-937e88e4a835)\n\n![v3_vision_chat](https://github.com/szczyglis-dev/py-gpt/assets/61396542/3fbd99e2-5bbf-4bd4-81d8-fd4d7db9d8eb)\n\n**2) you can also provide an image URL**\n\n![v2_mode_vision](https://github.com/szczyglis-dev/py-gpt/assets/61396542/d1b68225-bf7f-4aa5-9562-b973211b57d7)\n\n**3) or you can just upload your local images or use the inline Vision in the standard chat mode:**\n\n![v2_mode_vision_upload](https://github.com/szczyglis-dev/py-gpt/assets/61396542/7885d7d0-072e-4053-a81b-6374711fd348)\n\n**Tip:** When using `Vision (inline)` by utilizing a plugin in standard mode, such as `Chat` (not `Vision` mode), the `+ Vision` special checkbox will appear at the bottom of the Chat window. It will be automatically enabled any time you provide content for analysis (like an uploaded photo). When the checkbox is enabled, the vision model is used. If you wish to exit the vision model after image analysis, simply uncheck the checkbox. It will activate again automatically when the next image content for analysis is provided.\n\n## Langchain\n\nThis mode enables you to work with models that are supported by `Langchain`. The Langchain support is integrated \ninto the application, allowing you to interact with any LLM by simply supplying a configuration \nfile for the specific model. You can add as many models as you like; just list them in the configuration \nfile named `models.json`.\n\nAvailable LLMs providers supported by **PyGPT**:\n\n```\n- OpenAI\n- Azure OpenAI\n- HuggingFace\n- Anthropic\n- Llama 2\n- Ollama\n```\n\n![v2_mode_langchain](https://github.com/szczyglis-dev/py-gpt/assets/61396542/0471b6f9-7953-42cc-92bd-007f2c2e59d0)\n\nYou have the ability to add custom model wrappers for models that are not available by default in **PyGPT**. \nTo integrate a new model, you can create your own wrapper and register it with the application. \nDetailed instructions for this process are provided in the section titled `Managing models / Adding models via Langchain`.\n\n##  Chat with files (Llama-index)\n\nThis mode enables chat interaction with your documents and entire context history through conversation. \nIt seamlessly incorporates `Llama-index` into the chat interface, allowing for immediate querying of your indexed documents.\n\n**Querying single files**\n\nYou can also query individual files \"on the fly\" using the `query_file` command from the `Files I/O` plugin. This allows you to query any file by simply asking a question about that file. A temporary index will be created in memory for the file being queried, and an answer will be returned from it. 
From version `2.1.9` similar command is available for querying web and external content: `Directly query web content with Llama-index`.\n\nFor example:\n\nIf you have a file: `data/my_cars.txt` with content `My car is red.`\n\nYou can ask for: `Query the file my_cars.txt about what color my car is.`\n\nAnd you will receive the response: `Red`.\n\nNote: this command indexes the file only for the current query and does not persist it in the database. To store queried files also in the standard index you must enable the option \"Auto-index readed files\" in plugin settings. Remember to enable \"Execute commands\" checkbox to allow usage of query commands. \n\n**Using Chat with files mode**\n\nIn this mode, you are querying the whole index, stored in a vector store database.\nTo start, you need to index (embed) the files you want to use as additional context.\nEmbedding transforms your text data into vectors. If you're unfamiliar with embeddings and how they work, check out this article:\n\nhttps://stackoverflow.blog/2023/11/09/an-intuitive-introduction-to-text-embeddings/\n\nFor a visualization from OpenAI's page, see this picture:\n\n![vectors](https://github.com/szczyglis-dev/py-gpt/assets/61396542/4bbb3860-58a0-410d-b5cb-3fbfadf1a367)\n\nSource: https://cdn.openai.com/new-and-improved-embedding-model/draft-20221214a/vectors-3.svg\n\nTo index your files, simply copy or upload them  into the `data` directory and initiate indexing (embedding) by clicking the `Index all` button, or right-click on a file and select `Index...`. Additionally, you have the option to utilize data from indexed files in any Chat mode by activating the `Chat with files (Llama-index, inline)` plugin.\n\n![v2_idx1](https://github.com/szczyglis-dev/py-gpt/assets/61396542/c3dfbc89-cbfe-4ae3-b7e7-821401d755cd)\n\nAfter the file(s) are indexed (embedded in vector store), you can use context from them in chat mode:\n\n![v2_idx2](https://github.com/szczyglis-dev/py-gpt/assets/61396542/70c9ab66-82d9-4f61-81ed-268743bfa6b4)\n\nBuilt-in file loaders: \n\n**Files:**\n\n- CSV files (csv)\n- Epub files (epub)\n- Excel .xlsx spreadsheets (xlsx)\n- HTML files (html, htm)\n- IPYNB Notebook files (ipynb)\n- Image (vision) (jpg, jpeg, png, gif, bmp, tiff, webp)\n- JSON files (json)\n- Markdown files (md)\n- PDF documents (pdf)\n- Txt/raw files (txt)\n- Video/audio (mp4, avi, mov, mkv, webm, mp3, mpeg, mpga, m4a, wav)\n- Word .docx documents (docx)\n- XML files (xml)\n\n**Web/external content:**\n\n- Bitbucket\n- ChatGPT Retrieval Plugin\n- GitHub Issues\n- GitHub Repository\n- Google Calendar\n- Google Docs\n- Google Drive \n- Google Gmail\n- Google Keep\n- Google Sheets\n- Microsoft OneDrive\n- RSS\n- SQL Database\n- Sitemap (XML)\n- Twitter/X posts\n- Webpages (crawling any webpage content)\n- YouTube (transcriptions)\n\nYou can configure data loaders in `Settings / Llama-index / Data Loaders` by providing list of keyword arguments for specified loaders.\nYou can also develop and provide your own custom loader and register it within the application.\n\nLlama-index is also integrated with context database - you can use data from database (your context history) as additional context in discussion. \nOptions for indexing existing context history or enabling real-time indexing new ones (from database) are available in `Settings / Llama-index` section.\n\n**WARNING:** remember that when indexing content, API calls to the embedding model are used. Each indexing consumes additional tokens. 
Always control the number of tokens used on the OpenAI page.\n\n**Tip:** when using `Chat with files` you are using additional context from db data and files indexed from `data` directory, not the files sending via `Attachments` tab. \nAttachments tab in `Chat with files` mode can be used to provide images to `Vision (inline)` plugin only.\n\n**Token limit:** When you use `Chat with files` in non-query mode, Llama-index adds extra context to the system prompt. If you use a plugins (which also adds more instructions to system prompt), you might go over the maximum number of tokens allowed. If you get a warning that says you've used too many tokens, turn off plugins you're not using or turn off the \"Execute commands\" option to reduce the number of tokens used by the system prompt.\n\n**Available vector stores** (provided by `Llama-index`):\n\n```\n- ChromaVectorStore\n- ElasticsearchStore\n- PinecodeVectorStore\n- RedisVectorStore\n- SimpleVectorStore\n```\n\nYou can configure selected vector store by providing config options like `api_key`, etc. in `Settings -> Llama-index` window. \nArguments provided here (on list: `Vector Store (**kwargs)` in `Advanced settings` will be passed to selected vector store provider. \nYou can check keyword arguments needed by selected provider on Llama-index API reference page: \n\nhttps://docs.llamaindex.ai/en/stable/api_reference/storage/vector_store.html\n\nWhich keyword arguments are passed to providers?\n\nFor `ChromaVectorStore` and `SimpleVectorStore` all arguments are set by PyGPT and passed internally (you do not need to configure anything).\nFor other providers you can provide these arguments:\n\n**ElasticsearchStore**\n\nKeyword arguments for ElasticsearchStore(`**kwargs`):\n\n- `index_name` (default: current index ID, already set, not required)\n- any other keyword arguments provided on list\n\n**PinecodeVectorStore**\n\nKeyword arguments for Pinecone(`**kwargs`):\n\n- `api_key`\n- index_name (default: current index ID, already set, not required)\n\n**RedisVectorStore**\n\nKeyword arguments for RedisVectorStore(`**kwargs`):\n\n- `index_name` (default: current index ID, already set, not required)\n- any other keyword arguments provided on list\n\nYou can extend list of available providers by creating custom provider and registering it on app launch.\n\nBy default, you are using chat-based mode when using `Chat with files`.\nIf you want to only query index (without chat) you can enable `Query index only (without chat)` option.\n\n### Adding custom vector stores and data loaders\n\nYou can create a custom vector store provider or data loader for your data and develop a custom launcher for the application. 
To register your custom vector store provider or data loader, simply pass the vector store provider instance in the `vector_stores` keyword argument and the loader instance in the `loaders` keyword argument:\n\n```python\n\n# custom_launcher.py\n\nfrom pygpt_net.app import run\nfrom plugins import CustomPlugin, OtherCustomPlugin\nfrom llms import CustomLLM\nfrom vector_stores import CustomVectorStore\nfrom loaders import CustomLoader\n\nplugins = [\n    CustomPlugin(),\n    OtherCustomPlugin(),\n]\nllms = [\n    CustomLLM(),\n]\nvector_stores = [\n    CustomVectorStore(),\n]\nloaders = [\n    CustomLoader(),\n]\n\nrun(\n    plugins=plugins,\n    llms=llms,\n    vector_stores=vector_stores,  # <--- list with custom vector store providers\n    loaders=loaders  # <--- list with custom data loaders\n)\n```\nThe vector store provider must be an instance of `pygpt_net.provider.vector_stores.base.BaseStore`. \nYou can review the code of the built-in providers in `pygpt_net.provider.vector_stores` and use them as examples when creating a custom provider.\n\nThe data loader must be an instance of `pygpt_net.provider.loaders.base.BaseLoader`. \nYou can review the code of the built-in loaders in `pygpt_net.provider.loaders` and use them as examples when creating a custom loader.\n\n**Configuring data loaders**\n\nIn the `Settings -> Llama-index -> Data loaders` section, you can define additional keyword arguments to pass into the data loader instance.\n\nIn most cases, the internal Llama-index loaders are used. \nYou can check these base loaders e.g. here:\n\nFile: https://github.com/run-llama/llama_index/tree/main/llama-index-integrations/readers/llama-index-readers-file/llama_index/readers/file\n\nWeb: https://github.com/run-llama/llama_index/tree/main/llama-index-integrations/readers/llama-index-readers-web\n\n**Tip:** to index external data or data from the Web, just ask for it using the `Command: Web Search` plugin, e.g. you can ask the model with `Please index the youtube video: URL to video`, etc. The data loader for the specified content will be chosen automatically.\n\nAllowed additional keyword arguments for built-in data loaders (files):\n\n**CSV Files**  (file_csv)\n\n- `concat_rows` - bool, default: `True`\n- `encoding` - str, default: `utf-8`\n\n**HTML Files**  (file_html)\n\n- `tag` - str, default: `section`\n- `ignore_no_id` - bool, default: `False`\n\n**Image (vision)**  (file_image_vision)\n\nThis loader can operate in two modes: local model and API.\nIf the local mode is enabled, then the local model will be used. The local mode requires a Python/PyPi version of the application and is not available in the compiled or Snap versions.\nIf the API mode (default) is selected, then the OpenAI API and the standard vision model will be used. \n\n**Note:** Usage of API mode consumes additional tokens in the OpenAI API (for the `GPT-4 Vision` model)!\n\nLocal mode requires `torch`, `transformers`, `sentencepiece` and `Pillow` to be installed and uses the `Salesforce/blip2-opt-2.7b` model to describe images.\n\n- `keep_image` - bool, default: `False`\n- `local_prompt` - str, default: `Question: describe what you see in this image. 
Answer:`\n- `api_prompt` - str, default: `Describe what you see in this image` - Prompt to use in API\n- `api_model` - str, default: `gpt-4-vision-preview` - Model to use in API\n- `api_tokens` - int, default: `1000` - Max output tokens in API\n\n**IPYNB Notebook files**  (file_ipynb)\n\n- `parser_config` - dict, default: `None`\n- `concatenate` - bool, default: `False`\n\n**Markdown files**  (file_md)\n\n- `remove_hyperlinks` - bool, default: `True`\n- `remove_images` - bool, default: `True`\n\n**PDF documents**  (file_pdf)\n\n- `return_full_document` - bool, default: `False`\n\n**Video/Audio**  (file_video_audio)\n\nThis loader can operate in two modes: local model and API.\nIf the local mode is enabled, then the local `Whisper` model will be used. The local mode requires a Python/PyPi version of the application and is not available in the compiled or Snap versions.\nIf the API mode (default) is selected, then the currently selected provider in `Audio Input` plugin will be used. If the `OpenAI Whisper` is chosen then the OpenAI API and the API Whisper model will be used. \n\n**Note:** Usage of Whisper via API consumes additional tokens in OpenAI API (for `Whisper` model)!\n\nLocal mode requires `torch` and `openai-whisper` to be installed and uses the `Whisper` model locally to transcribing video and audio.\n\n- `model_version` - str, default: `base` - Whisper model to use, available models: https://github.com/openai/whisper\n\n**XML files**  (file_xml)\n\n- `tree_level_split` - int, default: `0`\n\nAllowed additional keyword arguments for built-in data loaders (Web and external content):\n\n**Bitbucket**  (web_bitbucket)\n\n- `username` - str, default: `None`\n- `api_key` - str, default: `None`\n- `extensions_to_skip` - list, default: `[]`\n\n**ChatGPT Retrieval**  (web_chatgpt_retrieval)\n\n- `endpoint_url` - str, default: `None`\n- `bearer_token` - str, default: `None`\n- `retries` - int, default: `None`\n- `batch_size` - int, default: `100`\n\n**Google Calendar** (web_google_calendar)\n\n- `credentials_path` - str, default: `credentials.json`\n- `token_path` - str, default: `token.json`\n\n**Google Docs** (web_google_docs)\n\n- `credentials_path` - str, default: `credentials.json`\n- `token_path` - str, default: `token.json`\n\n**Google Drive** (web_google_drive)\n\n- `credentials_path` - str, default: `credentials.json`\n- `token_path` - str, default: `token.json`\n- `pydrive_creds_path` - str, default: `creds.txt`\n\n**Google Gmail** (web_google_gmail)\n\n- `credentials_path` - str, default: `credentials.json`\n- `token_path` - str, default: `token.json`\n- `use_iterative_parser` - bool, default: `False`\n- `max_results` - int, default: `10`\n- `results_per_page` - int, default: `None`\n\n**Google Keep** (web_google_keep)\n\n- `credentials_path` - str, default: `keep_credentials.json`\n\n**Google Sheets** (web_google_sheets)\n\n- `credentials_path` - str, default: `credentials.json`\n- `token_path` - str, default: `token.json`\n\n**GitHub Issues**  (web_github_issues)\n\n- `token` - str, default: `None`\n- `verbose` - bool, default: `False`\n\n**GitHub Repository**  (web_github_repository)\n\n- `token` - str, default: `None`\n- `verbose` - bool, default: `False`\n- `concurrent_requests` - int, default: `5`\n- `timeout` - int, default: `5`\n- `retries` - int, default: `0`\n- `filter_dirs_include` - list, default: `None`\n- `filter_dirs_exclude` - list, default: `None`\n- `filter_file_ext_include` - list, default: `None`\n- `filter_file_ext_exclude` - list, default: 
`None`\n\n**Microsoft OneDrive**  (web_microsoft_onedrive)\n\n- `client_id` - str, default: `None`\n- `client_secret` - str, default: `None`\n- `tenant_id` - str, default: `consumers`\n\n**Sitemap (XML)**  (web_sitemap)\n\n- `html_to_text` - bool, default: `False`\n- `limit` - int, default: `10`\n\n**SQL Database**  (web_database)\n\n- `engine` - str, default: `None`\n- `uri` - str, default: `None`\n- `scheme` - str, default: `None`\n- `host` - str, default: `None`\n- `port` - str, default: `None`\n- `user` - str, default: `None`\n- `password` - str, default: `None`\n- `dbname` - str, default: `None`\n\n**Twitter/X posts**  (web_twitter)\n\n- `bearer_token` - str, default: `None`\n- `num_tweets` - int, default: `100`\n\n##  Agent (autonomous) \n\n**This mode is experimental.**\n\n**WARNING: Please use this mode with caution** - autonomous mode, when connected with other plugins, may produce unexpected results!\n\nThe mode activates autonomous mode, where AI begins a conversation with itself. \nYou can set this loop to run for any number of iterations. Throughout this sequence, the model will engage\nin self-dialogue, answering his own questions and comments, in order to find the best possible solution, subjecting previously generated steps to criticism.\n\n![v2_agent_toolbox](https://github.com/szczyglis-dev/py-gpt/assets/61396542/a0ae5d13-942e-4a18-9c53-33e7ad1886ff)\n\n**WARNING:** Setting the number of run steps (iterations) to `0` activates an infinite loop which can generate a large number of requests and cause very high token consumption, so use this option with caution! Confirmation will be displayed every time you run the infinite loop.\n\nThis mode is similar to `Auto-GPT` - it can be used to create more advanced inferences and to solve problems by breaking them down into subtasks that the model will autonomously perform one after another until the goal is achieved.\n\nYou can create presets with custom instructions for multiple agents, incorporating various workflows, instructions, and goals to achieve.\n\nAll plugins are available for agents, so you can enable features such as file access, command execution, web searching, image generation, vision analysis, etc., for your agents. Connecting agents with plugins can create a fully autonomous, self-sufficient system. All currently enabled plugins are automatically available to the Agent.\n\nWhen the `Auto-stop` option is enabled, the agent will attempt to stop once the goal has been reached.\n\n**Options**\n\nThe agent is essentially a **virtual** mode that internally sequences the execution of a selected underlying mode. \nYou can choose which internal mode the agent should use in the settings:\n\n```Settings / Agent (autonomous) / Sub-mode to use```\n\nAvailable choices include: `chat`, `completion`, `langchain`, `vision`, `llama_index` (Chat with files).\n\nDefault is: `chat`.\n\nIf you want to use the Llama-index mode when running the agent, you can also specify which index `Llama-index` should use with the option:\n\n```Settings / Agent (autonomous) / Index to use```\n\n![v2_agent_settings](https://github.com/szczyglis-dev/py-gpt/assets/61396542/c577d219-eb25-4f0e-9ea5-adf20a6b6b81)\n\n\n##  Experts (co-op, co-operation mode)\n\nAdded in version 2.2.7 (2024-05-01).\n\n**This mode is experimental.**\n\nExpert mode allows for the creation of experts (using presets) and then consulting them during a conversation. In this mode, a primary base context is created for conducting the conversation. 
From within this context, the model can make requests to an expert to perform a task and return the results to the main thread. When an expert is called in the background, a separate context is created for them with their own memory. This means that each expert, during the life of one main context, also has access to their own memory via their separate, isolated context.\n\n**In simple terms - you can imagine an expert as a separate, additional instance of the model running in the background, which can be called at any moment for assistance, with its own context and memory, as well as its own specialized instructions in a given subject.**\n\nExperts do not share contexts with one another, and the only point of contact between them is the main conversation thread. In this main thread, the model acts as a manager of experts, who can exchange data between them as needed.\n\nAn expert is selected based on the name in the presets; for example, naming your expert as: ID = python_expert, name = \"Python programmer\" will create an expert whom the model will attempt to invoke for matters related to Python programming. You can also manually request to refer to a given expert:\n\n```bash\nCall the Python expert to generate some code.\n```\n\nExperts can be activated or deactivated - to enable or disable use RMB context menu to select the `Enable/Disable` options from the presets list. Only enabled experts are available to use in the thread.\n\nExperts can also be used in `Agent (autonomous)` mode - by creating a new agent using a preset. Simply move the appropriate experts to the active list to automatically make them available for use by the agent.\n\nYou can also use experts in \"inline\" mode - by activating the `Experts (inline)` plugin. This allows for the use of experts in any mode, such as normal chat.\n\nExpert mode, like agent mode, is a \"virtual\" mode - you need to select a target mode of operation for it, which can be done in the settings at `Settings / Agent (autonomous) / Sub-mode for experts`.\n\nYou can also ask for a list of active experts at any time:\n\n```bash\nGive me a list of active experts.\n```\n\n# Accessibility\n\nSince version `2.2.8`, PyGPT has added beta support for disabled people and voice control. This may be very useful for blind people.\n\n\nIn the `Config / Accessibility` menu, you can turn on accessibility features such as:\n\n\n- activating voice control\n\n- translating actions and events on the screen with audio speech\n\n- setting up keyboard shortcuts for actions.\n\n\n**Using voice control**\n\nVoice control can be turned on in two ways: globally, through settings in `Config -> Accessibility`, and by using the `Voice control (inline)` plugin. Both options let you use the same voice commands, but they work a bit differently - the global option allows you to run commands outside of a conversation, anywhere, while the plugin option lets you execute commands directly during a conversation \u2013 allowing you to interact with the model and execute commands at the same time, within the conversation.\n\nIn the plugin (inline) option, you can also turn on a special trigger word that will be needed for content to be recognized as a voice command. 
You can set this up by going to `Plugins -> Settings -> Voice Control (inline)`:\n\n```bash\nMagic prefix for voice commands\n```\n\n**Tip:** When the voice control is enabled via a plugin, simply provide commands while providing the content of the conversation by using the standard `Microphone` button.\n\n\n**Enabling voice control globally**\n\n\nTurn on the voice control option in `Config / Accessibility`:\n\n\n```bash\nEnable voice control (using microphone)\n```\n\nOnce you enable this option, an `Voice Control` button will appear at the bottom right corner of the window. When you click on this button, the microphone will start listening; clicking it again stops listening and starts recognizing the voice command you said. You can cancel voice recording at any time with the `ESC` key. You can also set a keyboard shortcut to turn voice recording on/off.\n\n\nVoice command recognition works based on a model, so you don't have to worry about saying things perfectly.\n\n\n**Here's a list of commands you can ask for by voice:**\n\n- Get the current application status\n- Exit the application\n- Enable audio output\n- Disable audio output\n- Enable audio input\n- Disable audio input\n- Add a memo to the calendar\n- Clear memos from calendar\n- Read the calendar memos\n- Enable the camera\n- Disable the camera\n- Capture image from camera\n- Create a new context\n- Go to the previous context\n- Go to the next context\n- Go to the latest context\n- Focus on the input\n- Send the input\n- Clear the input\n- Get current conversation info\n- Get available commands list\n- Stop executing current action\n- Clear the attachments\n- Read the last conversation entry\n- Read the whole conversation\n- Rename current context\n- Search for a conversation\n- Clear the search results\n- Send the message to input\n- Append message to current input without sending it\n- Switch to chat mode\n- Switch to chat with files (llama-index) mode\n- Switch to the next mode\n- Switch to the previous mode\n- Switch to the next model\n- Switch to the previous model\n- Add note to notepad\n- Clear notepad contents\n- Read current notepad contents\n- Switch to the next preset\n- Switch to the previous preset\n- Switch to the chat tab\n- Switch to the calendar tab\n- Switch to the draw (painter) tab\n- Switch to the files tab\n- Switch to the notepad tab\n- Switch to the next tab\n- Switch to the previous tab\n- Start listening for voice input\n- Stop listening for voice input\n- Toggle listening for voice input\n\nMore commands coming soon.\n\nJust ask for an action that matches one of the descriptions above. These descriptions are also known to the model, and relevant commands are assigned to them. When you voice a command that fits one of those patterns, the model will trigger the appropriate action.\n\n\nFor convenience, you can enable a short sound to play when voice recording starts and stops. 
To do this, turn on the option:\n\n\n```bash\nAudio notify microphone listening start/stop\n```\n\nTo enable a sound notification when a voice command is recognized and command execution begins, turn on the option:\n\n\n```bash\nAudio notify voice command execution\n```\n\nFor voice translation of on-screen events and information about completed commands via speech synthesis, you can turn on the option:\n\n```bash\nUse voice synthesis to describe events on the screen.\n```\n\n![v2_access](https://github.com/szczyglis-dev/py-gpt/assets/61396542/02dd161b-6fb1-48f9-9217-40c658888833)\n\n\n# Files and attachments\n\n## Input attachments (upload)\n\n**PyGPT** makes it simple for users to upload files to the server and send them to the model for tasks like analysis, similar to attaching files in `ChatGPT`. There's a separate `Files` tab next to the text input area specifically for managing file uploads. Users can opt to have files automatically deleted after each upload or keep them on the list for repeated use.\n\n![v2_file_input](https://github.com/szczyglis-dev/py-gpt/assets/61396542/bd3d9840-2bc4-4ba8-a603-69724f9eb620)\n\nThe attachment feature is available in both the `Assistant` and `Vision` modes by default.\nIn `Assistant` mode, you can send documents and files to analyze, while in `Vision` mode, you can send images.\nIn other modes, you can enable attachments by activating the `Vision (inline)` plugin (for providing images only).\n\n## Files (download, code generation)\n\n**PyGPT** enables the automatic download and saving of files created by the model. This is carried out in the background, with the files being saved to a `data` folder located within the user's working directory. To view or manage these files, users can navigate to the `Files` tab, which features a file browser for this specific directory. Here, users have the interface to handle all files sent by the AI.\n\nThis `data` directory is also where the application stores files that are generated locally by the AI, such as code files or any other data requested from the model. Users have the option to execute code directly from the stored files and read their contents, with the results fed back to the AI. This hands-off process is managed by the built-in plugin system and model-triggered commands. You can also index files from this directory (using the integrated `Llama-index`) and use their contents as additional context in the discussion.\n\nThe `Command: Files I/O` plugin takes care of file operations in the `data` directory, while the `Command: Code Interpreter` plugin allows for the execution of code from these files.\n\n![v2_file_output](https://github.com/szczyglis-dev/py-gpt/assets/61396542/2ada219d-68c9-45e3-96af-86ac5bc73593)\n\nTo allow the model to manage files or Python code execution, the `Execute commands` option must be active, along with the above-mentioned plugins:\n\n![v2_code_execute](https://github.com/szczyglis-dev/py-gpt/assets/61396542/d5181eeb-6ab4-426f-93f0-037d256cb078)\n\n# Draw (paint)\n\nUsing the `Draw` tool, you can create quick sketches and submit them to the model for analysis. You can also edit images opened from disk or captured from the camera, for example, by adding elements like arrows or outlines to objects. 
Additionally, you can capture screenshots from the system - the captured image is placed in the drawing tool and attached to the query being sent.\n\n![v2_draw](https://github.com/szczyglis-dev/py-gpt/assets/61396542/09c1de36-1241-4330-9fd7-67c6e09888fa)\n\nTo capture the screenshot just click on the `Ask with screenshot` option in a tray-icon dropdown:\n\n![v2_screenshot](https://github.com/szczyglis-dev/py-gpt/assets/61396542/7305a814-ca76-4f8f-8908-47f6a9496fa5)\n\n# Calendar\n\nUsing the calendar, you can go back to selected conversations from a specific day and add daily notes. After adding a note, it will be marked on the list, and you can change the color of its label by right-clicking and selecting `Set label color`. By clicking on a particular day of the week, conversations from that day will be displayed.\n\n![v2_calendar](https://github.com/szczyglis-dev/py-gpt/assets/61396542/c7d17375-b61f-452c-81bc-62a7d466fc18)\n\n# Context and memory\n\n## Short and long-term memory\n\n**PyGPT** features a continuous chat mode that maintains a long context of the ongoing dialogue. It preserves the entire conversation history and automatically appends it to each new message (prompt) you send to the AI. Additionally, you have the flexibility to revisit past conversations whenever you choose. The application keeps a record of your chat history, allowing you to resume discussions from the exact point you stopped.\n\n## Handling multiple contexts\n\nOn the left side of the application interface, there is a panel that displays a list of saved conversations. You can save numerous contexts and switch between them with ease. This feature allows you to revisit and continue from any point in a previous conversation. **PyGPT** automatically generates a summary for each context, akin to the way `ChatGPT` operates and gives you the option to modify these titles itself.\n\n![v2_context_list](https://github.com/szczyglis-dev/py-gpt/assets/61396542/9228ea4c-f30c-4b02-ba85-da10b4e2eb7b)\n\nYou can disable context support in the settings by using the following option:\n\n``` ini\nConfig -> Settings -> Use context \n```\n\n## Clearing history\n\nYou can clear the entire memory (all contexts) by selecting the menu option:\n\n``` ini\nFile -> Clear history...\n```\n\n## Context storage\n\nOn the application side, the context is stored in the `SQLite` database located in the working directory (`db.sqlite`).\nIn addition, all history is also saved to `.txt` files for easy reading.\n\nOnce a conversation begins, a title for the chat is generated and displayed on the list to the left. This process is similar to `ChatGPT`, where the subject of the conversation is summarized, and a title for the thread is created based on that summary. You can change the name of the thread at any time.\n\n# Presets\n\n## What is preset?\n\nPresets in **PyGPT** are essentially templates used to store and quickly apply different configurations. Each preset includes settings for the mode you want to use (such as chat, completion, or image generation), an initial system message, an assigned name for the AI, a username for the session, and the desired \"temperature\" for the conversation. A warmer \"temperature\" setting allows the AI to provide more creative responses, while a cooler setting encourages more predictable replies. These presets can be used across various modes and with models accessed via the `OpenAI API` or `Langchain`.\n\nThe system lets you create as many presets as needed and easily switch among them. 
Additionally, you can clone an existing preset, which is useful for creating variations based on previously set configurations and experimentation.\n\n![v2_preset](https://github.com/szczyglis-dev/py-gpt/assets/61396542/88167631-feb6-45ca-a006-25a21ec2339e)\n\n## Example usage\n\nThe application includes several sample presets that help you become acquainted with the mechanism of their use.\n\n\n# Image generation (DALL-E 3 and DALL-E 2)\n\n## DALL-E 3\n\n**PyGPT** enables quick and easy image creation with `DALL-E 3`. \nThe older model version, `DALL-E 2`, is also accessible. Generating images is akin to a chat conversation  -  a user's prompt triggers the generation, followed by downloading, saving to the computer, \nand displaying the image onscreen. You can send raw prompt to `DALL-E` in `Image generation` mode or ask the model for the best prompt.\n\nImage generation using DALL-E is available in every mode via plugin `DALL-E 3 Image Generation (inline)`. Just ask any model, in any mode, like e.g. GPT-4 to generate an image and it will do it inline, without need to mode change.\n\nIf you want to generate images (using DALL-E) directly in chat you must enable plugin **DALL-E 3 Inline** in the Plugins menu.\nPlugin allows you to generate images in Chat mode:\n\n![v3_img_chat](https://github.com/szczyglis-dev/py-gpt/assets/61396542/c288a4b3-c932-4201-b5a3-8452aea49817)\n\n## Multiple variants\n\nYou can generate up to **4 different variants** (DALL-E 2) for a given prompt in one session. DALL-E 3 allows one image.\nTo select the desired number of variants to create, use the slider located in the right-hand corner at \nthe bottom of the screen. This replaces the conversation temperature slider when you switch to image generation mode.\n\n## Raw mode\n\nThere is an option for switching prompt generation mode.\n\nIf **Raw Mode** is enabled, DALL-E will receive the prompt exactly as you have provided it.\nIf **Raw Mode** is disabled, GPT will generate the best prompt for you based on your instructions.\n\n![v2_dalle2](https://github.com/szczyglis-dev/py-gpt/assets/61396542/e1c30292-8ed0-4346-8b85-6d7a02d65fb6)\n\n## Image storage\n\nOnce you've generated an image, you can easily save it anywhere on your disk by right-clicking on it. \nYou also have the options to delete it or view it in full size in your web browser.\n\n**Tip:** Use presets to save your prepared prompts. \nThis lets you quickly use them again for generating new images later on.\n\nThe app keeps a history of all your prompts, allowing you to revisit any session and reuse previous \nprompts for creating new images.\n\nImages are stored in ``img`` directory in **PyGPT** user data folder.\n\n# Managing models\n\nAll models are specified in the configuration file `models.json`, which you can customize. \nThis file is located in your working directory. You can add new models provided directly by `OpenAI API`\nand those supported by `Langchain` to this file. Configuration for Langchain wrapper is placed in `langchain` key.\n\n## Adding custom LLMs via Langchain\n\nTo add a new model using the Langchain wrapper in **PyGPT**, insert the model's configuration details into the `models.json` file. This should include the model's name, its supported modes (either `chat`, `completion`, or both), the LLM provider (which can be either e.g. 
`OpenAI` or `HuggingFace`), and, if you are using a `HuggingFace` model, an optional `API KEY`.\n\nExample of models configuration - `models.json`:\n\n```\n\"gpt-3.5-turbo\": {\n    \"id\": \"gpt-3.5-turbo\",\n    \"name\": \"gpt-3.5-turbo\",\n    \"mode\": [\n        \"chat\",\n        \"assistant\",\n        \"langchain\",\n        \"llama_index\"\n    ],\n    \"langchain\": {\n        \"provider\": \"openai\",\n        \"mode\": [\n            \"chat\"\n        ],\n        \"args\": [\n            {\n                \"name\": \"model_name\",\n                \"value\": \"gpt-3.5-turbo\",\n                \"type\": \"str\"\n            }\n        ],\n        \"env\": [\n            {\n                \"name\": \"OPENAI_API_KEY\",\n                \"value\": \"{api_key}\"\n            }\n        ]\n    },\n    \"llama_index\": {\n        \"provider\": \"openai\",\n        \"mode\": [\n            \"chat\"\n        ],\n        \"args\": [\n            {\n                \"name\": \"model\",\n                \"value\": \"gpt-3.5-turbo\",\n                \"type\": \"str\"\n            }\n        ],\n        \"env\": [\n            {\n                \"name\": \"OPENAI_API_KEY\",\n                \"value\": \"{api_key}\"\n            }\n        ]\n    },\n    \"ctx\": 4096,\n    \"tokens\": 4096,\n    \"default\": false\n},\n```\n\nThere is bult-in support for those LLMs providers:\n\n```\n- OpenAI (openai)\n- Azure OpenAI (azure_openai)\n- HuggingFace (huggingface)\n- Anthropic (anthropic)\n- Llama 2 (llama2)\n- Ollama (ollama)\n```\n\n## Adding custom LLM providers\n\nHandling LLMs with Langchain is implemented through separated wrappers. This allows for the addition of support for any provider and model available via Langchain. All built-in wrappers for the models and its providers are placed in the `pygpt_net.provider.llms`.\n\nThese wrappers are loaded into the application during startup using `launcher.add_llm()` method:\n\n```python\n# app.py\n\nfrom pygpt_net.provider.llms.openai import OpenAILLM\nfrom pygpt_net.provider.llms.azure_openai import AzureOpenAILLM\nfrom pygpt_net.provider.llms.anthropic import AnthropicLLM\nfrom pygpt_net.provider.llms.hugging_face import HuggingFaceLLM\nfrom pygpt_net.provider.llms.llama import Llama2LLM\nfrom pygpt_net.provider.llms.ollama import OllamaLLM\n\n\ndef run(**kwargs):\n    \"\"\"Runs the app.\"\"\"\n    # Initialize the app\n    launcher = Launcher()\n    launcher.init()\n\n    # Register plugins\n    ...\n\n    # Register langchain LLMs wrappers\n    launcher.add_llm(OpenAILLM())\n    launcher.add_llm(AzureOpenAILLM())\n    launcher.add_llm(AnthropicLLM())\n    launcher.add_llm(HuggingFaceLLM())\n    launcher.add_llm(Llama2LLM())\n    launcher.add_llm(OllamaLLM())\n\n    # Launch the app\n    launcher.run()\n```\n\nTo add support for providers not included by default, you can create your own wrapper that returns a custom model to the application and then pass this custom wrapper to the launcher.\n\nExtending **PyGPT** with custom plugins and LLM wrappers is straightforward:\n\n- Pass instances of custom plugins and LLM wrappers directly to the launcher.\n\nTo register custom LLM wrappers:\n\n- Provide a list of LLM wrapper instances as `llms` keyword argument.\n\n**Example:**\n\n\n```python\n# launcher.py\n\nfrom pygpt_net.app import run\nfrom plugins import CustomPlugin, OtherCustomPlugin\nfrom llms import CustomLLM\n\nplugins = [\n    CustomPlugin(),\n    OtherCustomPlugin(),\n]\nllms = [\n    CustomLLM(),\n]\nvector_stores = 
[]\n\nrun(\n    plugins=plugins, \n    llms=llms, \n    vector_stores=vector_stores\n)\n```\n\n**Examples (tutorial files)** \n\nSee the `examples` directory in this repository with examples of a custom launcher, plugin, vector store, LLM (Langchain and Llama-index) provider, and data loader:\n\n- `examples/custom_launcher.py`\n\n- `examples/example_audio_input.py`\n\n- `examples/example_audio_output.py`\n\n- `examples/example_data_loader.py`\n\n- `examples/example_llm.py`  <-- use it as an example\n\n- `examples/example_plugin.py`\n\n- `examples/example_vector_store.py`\n\n- `examples/example_web_search.py`\n\nThese example files can be used as a starting point for creating your own extensions for **PyGPT**.\n\nTo integrate your own model or provider into **PyGPT**, you can also reference the classes located in `pygpt_net.provider.llms`. These samples can act as a more complex example for your custom class. Ensure that your custom wrapper class includes two essential methods: `chat` and `completion`. These methods should return the respective objects required for the model to operate in `chat` and `completion` modes.\n\n\n## Adding custom Vector Store providers\n\n**From version 2.0.114 you can also register your own Vector Store provider**:\n\n```python\n# app.py\n\n# vector stores\nfrom pygpt_net.provider.vector_stores.chroma import ChromaProvider\nfrom pygpt_net.provider.vector_stores.elasticsearch import ElasticsearchProvider\nfrom pygpt_net.provider.vector_stores.pinecode import PinecodeProvider\nfrom pygpt_net.provider.vector_stores.redis import RedisProvider\nfrom pygpt_net.provider.vector_stores.simple import SimpleProvider\n\ndef run(**kwargs):\n    # ...\n    # register base vector store providers (llama-index)\n    launcher.add_vector_store(ChromaProvider())\n    launcher.add_vector_store(ElasticsearchProvider())\n    launcher.add_vector_store(PinecodeProvider())\n    launcher.add_vector_store(RedisProvider())\n    launcher.add_vector_store(SimpleProvider())\n\n    # register custom vector store providers (llama-index)\n    vector_stores = kwargs.get('vector_stores', None)\n    if isinstance(vector_stores, list):\n        for store in vector_stores:\n            launcher.add_vector_store(store)\n\n    # ...\n```\n\nTo register your custom vector store provider, simply pass the provider instance in the `vector_stores` keyword argument:\n\n```python\n\n# custom_launcher.py\n\nfrom pygpt_net.app import run\nfrom plugins import CustomPlugin, OtherCustomPlugin\nfrom llms import CustomLLM\nfrom vector_stores import CustomVectorStore\n\nplugins = [\n    CustomPlugin(),\n    OtherCustomPlugin(),\n]\nllms = [\n    CustomLLM(),\n]\nvector_stores = [\n    CustomVectorStore(),\n]\n\nrun(\n    plugins=plugins,\n    llms=llms,\n    vector_stores=vector_stores\n)\n```\n\n# Plugins\n\n**PyGPT** can be enhanced with plugins to add new features.\n\n**Tip:** Plugins work best with GPT-4 models.\n\nThe following plugins are currently available, and the model can use them instantly:\n\n- `Audio Input` - provides speech recognition.\n\n- `Audio Output` - provides voice synthesis.\n\n- `Autonomous Agent (inline)` - enables autonomous conversation (AI to AI), manages the loop, and connects output back to input. 
This is the inline Agent mode.\n\n- `Chat with files (Llama-index, inline)` - plugin integrates `Llama-index` storage in any chat and provides additional knowledge in the context (from indexed files and previous context from the database).\n\n- `Command: API calls` - plugin lets you connect the model to external services using custom-defined API calls.\n\n- `Command: Code Interpreter` - responsible for generating and executing Python code, functioning much like \nthe Code Interpreter on ChatGPT, but locally. This means GPT can interface with any script, application, or code. \nThe plugin can also execute system commands, allowing GPT to integrate with your operating system. \nPlugins can work in conjunction to perform sequential tasks; for example, the `Files` plugin can write generated \nPython code to a file, which the `Code Interpreter` can then execute and return its result to GPT.\n\n- `Command: Custom Commands` - allows you to create and execute custom commands on your system.\n\n- `Command: Files I/O` - provides access to the local filesystem, enabling GPT to read and write files, \nas well as list and create directories.\n\n- `Command: Web Search` - provides the ability to connect to the Web, search web pages for current data, and index external content using Llama-index data loaders.\n\n- `Command: Serial port / USB` - plugin provides commands for reading and sending data to USB ports.\n\n- `Context history (calendar, inline)` - provides access to the context history database.\n\n- `Crontab / Task scheduler` - plugin provides cron-based job scheduling - you can schedule tasks/prompts to be sent at any time using cron-based syntax for task setup.\n\n- `DALL-E 3: Image Generation (inline)` - integrates DALL-E 3 image generation with any chat and mode. Just enable it and ask for an image in Chat mode, using a standard model like GPT-4. The plugin does not require the `Execute commands` option to be enabled.\n\n- `Experts (inline)` - allows calling experts in any chat mode. This is the inline Experts (co-op) mode.\n\n- `GPT-4 Vision (inline)` - integrates Vision capabilities with any chat mode, not just Vision mode. When the plugin is enabled, the model temporarily switches to vision in the background when an image attachment or vision capture is provided.\n\n- `Real Time` - automatically appends the current date and time to the system prompt, informing the model about the current time.\n\n- `System Prompt Extra (append)` - appends additional system prompts (extra data) from a list to every current system prompt. You can enhance every system prompt with extra instructions that will be automatically appended to the system prompt.\n\n- `Voice Control (inline)` - provides voice control command execution within a conversation.\n\n\n## Audio Input\n\nThe plugin facilitates speech recognition (by default using the `Whisper` model from OpenAI; `Google` and `Bing` are also available). It allows for voice commands to be relayed to the AI using your own voice. Whisper doesn't require any extra API keys or additional configurations; it uses the main OpenAI key. In the plugin's configuration options, you should adjust the volume level (min energy) at which the plugin will respond to your microphone. 
Once the plugin is activated, a new `Speak` option will appear at the bottom near the `Send` button  -  when this is enabled, the application will respond to the voice received from the microphone.\n\nThe plugin can be extended with other speech recognition providers.\n\nOptions:\n\n- `Provider` *provider*\n\nChoose the provider. *Default:* `Whisper`\n\nAvailable providers:\n\n- Whisper (via `OpenAI API`)\n- Whisper (local model) - not available in compiled and Snap versions, only Python/PyPi version\n- Google (via `SpeechRecognition` library)\n- Google Cloud (via `SpeechRecognition` library)\n- Microsoft Bing (via `SpeechRecognition` library)\n\n**Whisper (API)**\n\n- `Model` *whisper_model*\n\nChoose the model. *Default:* `whisper-1`\n\n**Whisper (local)**\n\n- `Model` *whisper_local_model*\n\nChoose the local model. *Default:* `base`\n\nAvailable models: https://github.com/openai/whisper\n\n**Google**\n\n- `Additional keywords arguments` *google_args*\n\nAdditional keywords arguments for r.recognize_google(audio, **kwargs)\n\n**Google Cloud**\n\n- `Additional keywords arguments` *google_cloud_args*\n\nAdditional keywords arguments for r.recognize_google_cloud(audio, **kwargs)\n\n**Bing**\n\n- `Additional keywords arguments` *bing_args*\n\nAdditional keywords arguments for r.recognize_bing(audio, **kwargs)\n\n\n**General options**\n\n- `Auto send` *auto_send*\n\nAutomatically send recognized speech as input text after recognition. *Default:* `True`\n\n- `Advanced mode` *advanced*\n\nEnable only if you want to use advanced mode and the settings below. Do not enable this option if you just want to use the simplified mode (default). *Default:* `False`\n\n\n**Advanced mode options**\n\n- `Timeout` *timeout*\n\nThe duration in seconds that the application waits for voice input from the microphone. *Default:* `5`\n\n- `Phrase max length` *phrase_length*\n\nMaximum duration for a voice sample (in seconds). *Default:* `10`\n\n- `Min energy` *min_energy*\n\nMinimum threshold multiplier above the noise level to begin recording. *Default:* `1.3`\n\n- `Adjust for ambient noise` *adjust_noise*\n\nEnables adjustment to ambient noise levels. *Default:* `True`\n\n- `Continuous listen` *continuous_listen*\n\nExperimental: continuous listening - do not stop listening after a single input. \nWarning: This feature may lead to unexpected results and requires fine-tuning with \nthe rest of the options! If disabled, listening must be started manually \nby enabling the `Speak` option. *Default:* `False`\n\n\n- `Wait for response` *wait_response*\n\nWait for a response before initiating listening for the next input. *Default:* `True`\n\n- `Magic word` *magic_word*\n\nActivate listening only after the magic word is provided. *Default:* `False`\n\n- `Reset Magic word` *magic_word_reset*\n\nReset the magic word status after it is received (the magic word will need to be provided again). *Default:* `True`\n\n- `Magic words` *magic_words*\n\nList of magic words to initiate listening (Magic word mode must be enabled). *Default:* `OK, Okay, Hey GPT, OK GPT`\n\n- `Magic word timeout` *magic_word_timeout*\n\nThe number of seconds the application waits for magic word. *Default:* `1`\n\n- `Magic word phrase max length` *magic_word_phrase_length*\n\nThe minimum phrase duration for magic word. *Default:* `2`\n\n- `Prefix words` *prefix_words*\n\nList of words that must initiate each phrase to be processed. 
For example, you can define words like \"OK\" or \"GPT\"\u2014if set, any phrases not starting with those words will be ignored. Insert multiple words or phrases separated by commas. Leave empty to deactivate.  *Default:* `empty`\n\n- `Stop words` *stop_words*\n\nList of words that will stop the listening process. *Default:* `stop, exit, quit, end, finish, close, terminate, kill, halt, abort`\n\nOptions related to Speech Recognition internals:\n\n- `energy_threshold` *recognition_energy_threshold*\n\nRepresents the energy level threshold for sounds. *Default:* `300`\n\n- `dynamic_energy_threshold` *recognition_dynamic_energy_threshold*\n\nRepresents whether the energy level threshold (see recognizer_instance.energy_threshold) for sounds \nshould be automatically adjusted based on the currently ambient noise level while listening. *Default:* `True`\n\n- `dynamic_energy_adjustment_damping` *recognition_dynamic_energy_adjustment_damping*\n\nRepresents approximately the fraction of the current energy threshold that is retained after one second \nof dynamic threshold adjustment. *Default:* `0.15`\n\n- `pause_threshold` *recognition_pause_threshold*\n\nRepresents the minimum length of silence (in seconds) that will register as the end of a phrase. *Default:* `0.8`\n\n- `adjust_for_ambient_noise: duration` *recognition_adjust_for_ambient_noise_duration*\n\nThe duration parameter is the maximum number of seconds that it will dynamically adjust the threshold \nfor before returning. *Default:* `1`\n\nOptions reference: https://pypi.org/project/SpeechRecognition/1.3.1/\n\n## Audio Output\n\nThe plugin lets you turn text into speech using the TTS model from OpenAI or other services like ``Microsoft Azure``, ``Google``, and ``Eleven Labs``. You can add more text-to-speech providers to it too. `OpenAI TTS` does not require any additional API keys or extra configuration; it utilizes the main OpenAI key. \nMicrosoft Azure requires to have an Azure API Key. Before using speech synthesis via `Microsoft Azure`, `Google` or `Eleven Labs`, you must configure the audio plugin with your API keys, regions and voices if required.\n\n![v2_azure](https://github.com/szczyglis-dev/py-gpt/assets/61396542/8035e9a5-5a01-44a1-85da-6e44c52459e4)\n\nThrough the available options, you can select the voice that you want the model to use. More voice synthesis providers coming soon.\n\nTo enable voice synthesis, activate the `Audio Output` plugin in the `Plugins` menu or turn on the `Audio Output` option in the `Audio / Voice` menu (both options in the menu achieve the same outcome).\n\n**Options**\n\n- `Provider` *provider*\n\nChoose the provider. *Default:* `OpenAI TTS`\n\nAvailable providers:\n\n- OpenAI TTS\n- Microsoft Azure TTS\n- Google TTS\n- Eleven Labs TTS\n\n**OpenAI Text-To-Speech**\n\n- `Model` *openai_model*\n\nChoose the model. Available options:\n\n```\n  - tts-1\n  - tts-1-hd\n```\n*Default:* `tts-1`\n\n- `Voice` *openai_voice*\n\nChoose the voice. Available voices to choose from:\n\n```\n  - alloy\n  - echo\n  - fable\n  - onyx\n  - nova\n  - shimmer\n```\n\n*Default:* `alloy`\n\n**Microsoft Azure Text-To-Speech**\n\n- `Azure API Key` *azure_api_key*\n\nHere, you should enter the API key, which can be obtained by registering for free on the following website: https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech\n\n- `Azure Region` *azure_region*\n\nYou must also provide the appropriate region for Azure here. 
*Default:* `eastus`\n\n- `Voice (EN)` *azure_voice_en*\n\nHere you can specify the name of the voice used for speech synthesis for English. *Default:* `en-US-AriaNeural`\n\n- `Voice (non-English)` *azure_voice_pl*\n\nHere you can specify the name of the voice used for speech synthesis for other non-English languages. *Default:* `pl-PL-AgnieszkaNeural`\n\n**Google Text-To-Speech**\n\n- `Google Cloud Text-to-speech API Key` *google_api_key*\n\nYou can obtain your own API key at: https://console.cloud.google.com/apis/library/texttospeech.googleapis.com\n\n- `Voice` *google_voice*\n\nSpecify voice. Voices: https://cloud.google.com/text-to-speech/docs/voices\n\n- `Language code` *google_api_key*\n\nLanguage code. Language codes: https://cloud.google.com/speech-to-text/docs/speech-to-text-supported-languages\n\n**Eleven Labs Text-To-Speech**\n\n- `Eleven Labs API Key` *eleven_labs_api_key*\n\nYou can obtain your own API key at: https://elevenlabs.io/speech-synthesis\n\n- `Voice ID` *eleven_labs_voice*\n\nVoice ID. Voices: https://elevenlabs.io/voice-library\n\n- `Model` *eleven_labs_model*\n\nSpecify model. Models: https://elevenlabs.io/docs/speech-synthesis/models\n\n\nIf speech synthesis is enabled, a voice will be additionally generated in the background while generating a response via GPT.\n\nBoth `OpenAI TTS` and `OpenAI Whisper` use the same single API key provided for the OpenAI API, with no additional keys required.\n\n## Autonomous Agent (inline)\n\n**WARNING: Please use autonomous mode with caution!** - this mode, when connected with other plugins, may produce unexpected results!\n\nThe plugin activates autonomous mode in standard chat modes, where the AI begins a conversation with itself. \nYou can set this loop to run for any number of iterations. Throughout this sequence, the model will engage\nin self-dialogue, answering its own questions and comments, in order to find the best possible solution, subjecting previously generated steps to criticism.\n\nThis mode is similar to `Auto-GPT` - it can be used to create more advanced inferences and to solve problems by breaking them down into subtasks that the model will autonomously perform one after another until the goal is achieved. The plugin is capable of working in cooperation with other plugins, thus it can utilize tools such as web search, access to the file system, or image generation using `DALL-E`.\n\nYou can adjust the number of iterations for the self-conversation in the `Plugins / Settings...` menu under the following option:\n\n- `Iterations` *iterations*\n\n*Default:* `3`\n\n**WARNING**: Setting this option to `0` activates an **infinite loop** which can generate a large number of requests and cause very high token consumption, so use this option with caution!\n\n- `Prompts` *prompts*\n\nEditable list of prompts used to instruct how to handle autonomous mode; you can create as many prompts as you want. \nThe first active prompt on the list will be used to handle autonomous mode. **INFO:** At least one active prompt is required!\n\n- `Auto-stop after goal is reached` *auto_stop*\n\nIf enabled, the plugin will stop after the goal is reached. *Default:* `True`\n\n- `Reverse roles between iterations` *reverse_roles*\n\nOnly for Completion/Langchain modes. \nIf enabled, this option reverses the roles (AI <> user) with each iteration. 
For example, \nif in the previous iteration the response was generated for \"Batman,\" the next iteration will use that \nresponse to generate an input for \"Joker.\" *Default:* `True`\n\n## Chat with files (Llama-index, inline)\n\nThe plugin integrates `Llama-index` storage in any chat and provides additional knowledge in the context.\n\n- `Ask Llama-index first` *ask_llama_first*\n\nWhen enabled, `Llama-index` will be asked first, and the response will be used as additional knowledge in the prompt. When disabled, `Llama-index` will be asked only when needed. **INFO: Disabled in autonomous mode (via plugin)!** *Default:* `False`\n\n- `Auto-prepare question before asking Llama-index first` *prepare_question*\n\nWhen enabled, the question will be prepared before asking Llama-index first to create the best query. *Default:* `False`\n\n- `Model for question preparation` *model_prepare_question*\n\nModel used to prepare the question before asking Llama-index. *Default:* `gpt-3.5-turbo`\n\n- `Max output tokens for question preparation` *prepare_question_max_tokens*\n\nMax tokens in output when preparing the question before asking Llama-index. *Default:* `500`\n\n- `Prompt for question preparation` *syntax_prepare_question*\n\nSystem prompt for question preparation.\n\n- `Max characters in question` *max_question_chars*\n\nMax characters in the question when querying Llama-index; 0 = no limit. *Default:* `1000`\n\n- `Append metadata to context` *append_meta*\n\nIf enabled, metadata from Llama-index will be appended to the additional context. *Default:* `False`\n\n- `Model` *model_query*\n\nModel used for querying `Llama-index`. *Default:* `gpt-3.5-turbo`\n\n- `Indexes IDs` *idx*\n\nIndexes to use. If you want to use multiple indexes at once, separate them with commas. *Default:* `base`\n\n\n## Command: API calls\n\n**PyGPT** lets you connect the model to external services using custom-defined API calls.\n\nTo activate this feature, turn on the `Command: API calls` plugin found in the `Plugins` menu.\n\nIn this plugin, you can provide a list of allowed API calls, their parameters, and request types. The model will replace the provided placeholders with the required params and make an API call to the external service.\n\n- `Your custom API calls` *cmds*\n\nYou can provide custom API calls on the list here.\n\nParams to specify for API call:\n\n- **Enabled** (True / False)\n- **Name:** unique API call name (ID)\n- **Instruction:** description for the model of when and how to use this API call\n- **GET params:** list, separated by commas, of GET params to append to the endpoint URL\n- **POST params:** list, separated by commas, of POST params to send in the POST request\n- **POST JSON:** provide the JSON object template to send in a POST JSON request; use `%param%` as POST param placeholders\n- **Headers:** provide a JSON object with a dictionary of extra request headers, like Authorization, API keys, etc.\n- **Request type:** use GET for a basic GET request, POST to send encoded POST params, or POST_JSON to send a JSON-encoded object as the body\n- **Endpoint:** API endpoint URL, use `{param}` as GET param placeholders\n\nAn example API call is provided with the plugin by default; it calls the Wikipedia API:\n\n- Name: `search_wiki`\n- Instruction: `send API call to Wikipedia to search pages by query`\n- GET params: `query, limit`\n- Type: `GET`\n- API endpoint: https://en.wikipedia.org/w/api.php?action=opensearch&limit={limit}&format=json&search={query}\n\nIn the above example, every time you ask the model to query Wikipedia for a provided query (e.g. 
`Call the Wikipedia API for query: Nikola Tesla`), it will replace the placeholders in the provided API endpoint URL with the generated query and call the prepared API endpoint URL, like below:\n\nhttps://en.wikipedia.org/w/api.php?action=opensearch&limit=5&format=json&search=Nikola%20Tesla\n\nYou can specify the type of request: `GET`, `POST` or `POST JSON`.\n\nIn the `POST` request you can provide POST params; they will be encoded and sent as POST data.\n\nIn the `POST JSON` request you must provide a JSON object template to be sent, using `%param%` placeholders in the JSON object to be replaced by the model.\n\nYou can also provide any required credentials, like Authorization headers, API keys, tokens, etc. using the `headers` field - you can provide a JSON object here with a dictionary `key => value` - the provided JSON object will be converted to a headers dictionary and sent with the request.\n\n- `Disable SSL verify` *disable_ssl*\n\nDisables SSL verification when making requests. *Default:* `False`\n\n- `Timeout` *timeout*\n\nConnection timeout (seconds). *Default:* `5`\n\n- `User agent` *user_agent*\n\nUser agent to use when making requests. *Default:* `Mozilla/5.0`\n\n\n## Command: Code Interpreter\n\n### Executing Code\n\nThe plugin operates similarly to the `Code Interpreter` in `ChatGPT`, with the key difference that it works locally on the user's system. It allows for the execution of any Python code on the computer that the model may generate. When combined with the `Command: Files I/O` plugin, it facilitates running code from files saved in the `data` directory. You can also prepare your own code files and enable the model to use them or add your own plugin for this purpose. You can execute commands and code on the host machine or in a Docker container.\n\n**Code interpreter:** a real-time Python code interpreter is built-in. Click the `<>` icon to open the interpreter window. Both the input and output of the interpreter are connected to the plugin. Any output generated by the executed code will be displayed in the interpreter. Additionally, you can request the model to retrieve contents from the interpreter window output.\n\n![v2_python](https://github.com/szczyglis-dev/py-gpt/assets/61396542/793e554c-7619-402a-8370-ab89c7464fec)\n\n### Executing system commands\n\nAnother feature is the ability to execute system commands and return their results. With this functionality, the plugin can run any system command, retrieve the output, and then feed the result back to the model. When used with other features, this provides extensive integration capabilities with the system.\n\n\n\n**Tip:** always remember to enable the `Execute commands` option to allow executing commands from the plugins.\n\n\n**Options:**\n\n- `Python command template` *python_cmd_tpl*\n\nPython command template (use {filename} as path to file placeholder). *Default:* `python3 {filename}`\n\n- `Enable: code_execute` *cmd.code_execute*\n\nAllows `code_execute` command execution. If enabled, provides Python code execution (generate and execute from file). *Default:* `True`\n\n- `Enable: code_execute_all` *cmd.code_execute_all*\n\nAllows `code_execute_all` command execution. If enabled, provides execution of all the Python code in the interpreter window. *Default:* `True`\n\n- `Enable: code_execute_file` *cmd.code_execute_file*\n\nAllows `code_execute_file` command execution. If enabled, provides Python code execution from an existing .py file. 
*Default:* `True`\n \n- `Enable: sys_exec` *cmd.sys_exec*\n\nAllows `sys_exec` command execution. If enabled, provides system commands execution. *Default:* `True`\n\n- `Enable: get_python_output` *cmd.get_python_output*\n\nAllows `get_python_output` command execution. If enabled, it allows retrieval of the output from the Python code interpreter window. *Default:* `True`\n\n- `Enable: get_python_input` *cmd.get_python_input*\n\nAllows `get_python_input` command execution. If enabled, it allows retrieval all input code (from edit section) from the Python code interpreter window. *Default:* `True`\n\n- `Enable: clear_python_output` *cmd.clear_python_output*\n\nAllows `clear_python_output` command execution. If enabled, it allows clear the output of the Python code interpreter window. *Default:* `True`\n\n- `Sandbox (docker container)` *sandbox_docker*\n\nExecute commands in sandbox (docker container). Docker must be installed and running. *Default:* `False`\n\n- `Docker image` *sandbox_docker_image*\n\nDocker image to use for sandbox *Default:* `python:3.8-alpine`\n\n- `Auto-append CWD to sys_exec` *auto_cwd*\n\nAutomatically append current working directory to `sys_exec` command. *Default:* `True`\n\n- `Connect to the Python code interpreter window` *attach_output*\n\nAutomatically attach code input/output to the Python code interpreter window. *Default:* `True`\n\n\n## Command: Custom Commands\n\nWith the `Custom Commands` plugin, you can integrate **PyGPT** with your operating system and scripts or applications. You can define an unlimited number of custom commands and instruct GPT on when and how to execute them. Configuration is straightforward, and **PyGPT** includes a simple tutorial command for testing and learning how it works:\n\n![v2_custom_cmd](https://github.com/szczyglis-dev/py-gpt/assets/61396542/b30b8724-9ca1-44b1-abc7-78241588e1f6)\n\nTo add a new custom command, click the **ADD** button and then:\n\n1. Provide a name for your command: this is a unique identifier for GPT.\n2. Provide an `instruction` explaining what this command does; GPT will know when to use the command based on this instruction.\n3. Define `params`, separated by commas - GPT will send data to your commands using these params. These params will be placed into placeholders you have defined in the `cmd` field. For example:\n\nIf you want instruct GPT to execute your Python script named `smart_home_lights.py` with an argument, such as `1` to turn the light ON, and `0` to turn it OFF, define it as follows:\n\n- **name**: lights_cmd\n- **instruction**: turn lights on/off; use 1 as 'arg' to turn ON, or 0 as 'arg' to turn OFF\n- **params**: arg\n- **cmd**: `python /path/to/smart_home_lights.py {arg}`\n\nThe setup defined above will work as follows:\n\nWhen you ask GPT to turn your lights ON, GPT will locate this command and prepare the command `python /path/to/smart_home_lights.py {arg}` with `{arg}` replaced with `1`. On your system, it will execute the command:\n\n```python /path/to/smart_home_lights.py 1```\n\nAnd that's all. 
GPT will take care of the rest when you ask to turn ON the lights.\n\nYou can define as many placeholders and parameters as you desire.\n\nHere are some predefined system placeholders for use:\n\n- `{_time}` - current time in `H:M:S` format\n- `{_date}` - current date in `Y-m-d` format\n- `{_datetime}` - current date and time in `Y-m-d H:M:S` format\n- `{_file}` - path to the file from which the command is invoked\n- `{_home}` - path to **PyGPT**'s home/working directory\n\nYou can connect predefined placeholders with your own params.\n\n*Example:*\n\n- **name**: song_cmd\n- **instruction**: store the generated song on hard disk\n- **params**: song_text, title\n- **cmd**: `echo \"{song_text}\" > {_home}/{title}.txt`\n\n\nWith the setup above, every time you ask GPT to generate a song for you and save it to the disk, it will:\n\n1. Generate a song.\n2. Locate your command.\n3. Execute the command by sending the song's title and text.\n4. The command will save the song text into a file named with the song's title in the PyGPT working directory.\n\n**Example tutorial command**\n\n**PyGPT** provides simple tutorial command to show how it works, to run it just ask GPT for execute `tutorial test command` and it will show you how it works:\n\n```> please execute tutorial test command```\n\n![v2_custom_cmd_example](https://github.com/szczyglis-dev/py-gpt/assets/61396542/97cbc5b9-0dd9-487e-9182-d9873dea42ab)\n\n## Command: Files I/O\n\nThe plugin allows for file management within the local filesystem. It enables the model to create, read, write and query files located in the `data` directory, which can be found in the user's work directory. With this plugin, the AI can also generate Python code files and thereafter execute that code within the user's system.\n\nPlugin capabilities include:\n\n- Sending files as attachments\n- Reading files\n- Appending to files\n- Writing files\n- Deleting files and directories\n- Listing files and directories\n- Creating directories\n- Downloading files\n- Copying files and directories\n- Moving (renaming) files and directories\n- Reading file info\n- Indexing files and directories using Llama-index\n- Querying files using Llama-index\n- Searching for files and directories\n\nIf a file being created (with the same name) already exists, a prefix including the date and time is added to the file name.\n\n**Options:**\n\n**General**\n\n- `Enable: send (upload) file as attachment` *cmd.send_file*\n\nAllows `cmd.send_file` command execution. *Default:* `True`\n\n- `Enable: read file` *cmd.read_file*\n\nAllows `read_file` command execution. *Default:* `True`\n\n- `Enable: append to file` *cmd.append_file*\n\nAllows `append_file` command execution. Text-based files only (plain text, JSON, CSV, etc.) *Default:* `True`\n\n- `Enable: save file` *cmd.save_file*\n\nAllows `save_file` command execution. Text-based files only (plain text, JSON, CSV, etc.) *Default:* `True`\n\n- `Enable: delete file` *cmd.delete_file*\n\nAllows `delete_file` command execution. *Default:* `True`\n\n- `Enable: list files (ls)` *cmd.list_files*\n\nAllows `list_dir` command execution. *Default:* `True`\n\n- `Enable: list files in dirs in directory (ls)` *cmd.list_dir*\n\nAllows `mkdir` command execution. *Default:* `True`\n\n- `Enable: downloading files` *cmd.download_file*\n\nAllows `download_file` command execution. *Default:* `True`\n\n- `Enable: removing directories` *cmd.rmdir*\n\nAllows `rmdir` command execution. 
*Default:* `True`\n\n- `Enable: copying files` *cmd.copy_file*\n\nAllows `copy_file` command execution. *Default:* `True`\n\n- `Enable: copying directories (recursive)` *cmd.copy_dir*\n\nAllows `copy_dir` command execution. *Default:* `True`\n\n- `Enable: move files and directories (rename)` *cmd.move*\n\nAllows `move` command execution. *Default:* `True`\n\n- `Enable: check if path is directory` *cmd.is_dir*\n\nAllows `is_dir` command execution. *Default:* `True`\n\n- `Enable: check if path is file` *cmd.is_file*\n\nAllows `is_file` command execution. *Default:* `True`\n\n- `Enable: check if file or directory exists` *cmd.file_exists*\n\nAllows `file_exists` command execution. *Default:* `True`\n\n- `Enable: get file size` *cmd.file_size*\n\nAllows `file_size` command execution. *Default:* `True`\n\n- `Enable: get file info` *cmd.file_info*\n\nAllows `file_info` command execution. *Default:* `True`\n\n- `Enable: find file or directory` *cmd.find*\n\nAllows `find` command execution. *Default:* `True`\n\n- `Enable: get current working directory` *cmd.cwd*\n\nAllows `cwd` command execution. *Default:* `True`\n\n- `Use data loaders` *use_loaders*\n\nUse data loaders from Llama-index for file reading (`read_file` command). *Default:* `True`\n\n**Indexing**\n\n- `Enable: quick query the file with Llama-index` *cmd.query_file*\n\nAllows `query_file` command execution (in-memory index). If enabled, model will be able to quick index file into memory and query it for data (in-memory index) *Default:* `True`\n\n- `Model for query in-memory index` *model_tmp_query*\n\nModel used for query temporary index for `query_file` command (in-memory index). *Default:* `gpt-3.5-turbo`\n\n- `Enable: indexing files to persistent index` *cmd.file_index*\n\nAllows `file_index` command execution. If enabled, model will be able to index file or directory using Llama-index (persistent index). *Default:* `True`\n\n- `Index to use when indexing files` *idx*\n\nID of index to use for indexing files (persistent index). *Default:* `base`\n\n- `Auto index reading files` *auto_index*\n\nIf enabled, every time file is read, it will be automatically indexed (persistent index). *Default:* `False`\n\n- `Only index reading files` *only_index*\n\nIf enabled, file will be indexed without return its content on file read (persistent index). *Default:* `False`\n\n\n## Command: Web Search\n\n**PyGPT** lets you connect GPT to the internet and carry out web searches in real time as you make queries.\n\nTo activate this feature, turn on the `Command: Web Search` plugin found in the `Plugins` menu.\n\nWeb searches are provided by `Google Custom Search Engine` and `Microsoft Bing` APIs and can be extended with other search engine providers. \n\n**Options**\n\n- `Provider` *provider*\n\nChoose the provider. *Default:* `Google`\n\nAvailable providers:\n\n- Google\n- Microsoft Bing\n\n**Google**\n\nTo use this provider, you need an API key, which you can obtain by registering an account at:\n\nhttps://developers.google.com/custom-search/v1/overview\n\nAfter registering an account, create a new project and select it from the list of available projects:\n\nhttps://programmablesearchengine.google.com/controlpanel/all\n\nAfter selecting your project, you need to enable the `Whole Internet Search` option in its settings. 
\nThen, copy the following two items into **PyGPT**:\n\n- `Api Key`\n- `CX ID`\n\nThese data must be configured in the appropriate fields in the `Plugins / Settings...` menu:\n\n![v2_plugin_google](https://github.com/szczyglis-dev/py-gpt/assets/61396542/f2e0df62-caaa-40ef-9b1e-239b2f912ec8)\n\n- `Google Custom Search API KEY` *google_api_key*\n\nYou can obtain your own API key at https://developers.google.com/custom-search/v1/overview\n\n- `Google Custom Search CX ID` *google_api_cx*\n\nYou will find your CX ID at https://programmablesearchengine.google.com/controlpanel/all - remember to enable \"Search on ALL internet pages\" option in project settings.\n\n**Microsoft Bing**\n\n- `Bing Search API KEY` *bing_api_key*\n\nYou can obtain your own API key at https://www.microsoft.com/en-us/bing/apis/bing-web-search-api\n\n- `Bing Search API endpoint` *bing_endpoint*\n\nAPI endpoint for Bing Search API, default: https://api.bing.microsoft.com/v7.0/search\n\n**General options**\n\n- `Number of pages to search` *num_pages*\n\nNumber of max pages to search per query. *Default:* `10`\n\n- `Max content characters` *max_page_content_length*\n\nMax characters of page content to get (0 = unlimited). *Default:* `0`\n\n- `Per-page content chunk size` *chunk_size*\n\nPer-page content chunk size (max characters per chunk). *Default:* `20000`\n\n- `Disable SSL verify` *disable_ssl*\n\nDisables SSL verification when crawling web pages. *Default:* `False`\n\n- `Timeout` *timeout*\n\nConnection timeout (seconds). *Default:* `5`\n\n- `User agent` *user_agent*\n\nUser agent to use when making requests. *Default:* `Mozilla/5.0`.\n\n- `Max result length` *max_result_length*\n\nMax length of summarized result (characters). *Default:* `1500`\n\n- `Max summary tokens` *summary_max_tokens*\n\nMax tokens in output when generating summary. *Default:* `1500`\n\n- `Enable: search the Web` *cmd.web_search*\n\nAllows `web_search` command execution. If enabled, model will be able to search the Web. *Default:* `True`\n\n- `Enable: opening URLs` *cmd.web_url_open*\n\nAllows `web_url_open` command execution. If enabled, model will be able to open specified URL and summarize content. *Default:* `True`\n\n- `Enable: reading the raw content from URLs` *cmd.web_url_raw*\n\nAllows `web_url_raw` command execution. If enabled, model will be able to open specified URL and get the raw content. *Default:* `True`\n\n- `Enable: getting a list of URLs from search results` *cmd.web_urls*\n\nAllows `web_urls` command execution. If enabled, model will be able to search the Web and get founded URLs list. *Default:* `True`\n\n- `Enable: indexing web and external content` *cmd.web_index*\n\nAllows `web_index` command execution. If enabled, model will be able to index pages and external content using Llama-index (persistent index). *Default:* `True`\n\n- `Enable: quick query the web and external content` *cmd.web_index_query*\n\nAllows `web_index_query` command execution. If enabled, model will be able to quick index and query web content using Llama-index (in-memory index). *Default:* `True`\n\n- `Auto-index all used URLs using Llama-index` *auto_index*\n\nIf enabled, every URL used by the model will be automatically indexed using Llama-index (persistent index). *Default:* `False`\n\n- `Index to use` *idx*\n\nID of index to use for web page indexing (persistent index). *Default:* `base`\n\n- `Model used for web page summarize` *summary_model*\n\nModel used for web page summarize. 
*Default:* `gpt-3.5-turbo-1106`\n\n- `Summarize prompt` *prompt_summarize*\n\nPrompt used for web search results summarize, use {query} as a placeholder for search query.\n\n- `Summarize prompt (URL open)` *prompt_summarize_url*\n\nPrompt used for specified URL page summarize.\n\n## Command: Serial port / USB\n\nProvides commands for reading and sending data to USB ports.\n\n**Tip:** in Snap version you must connect the interface first: https://snapcraft.io/docs/serial-port-interface\n\nYou can send commands to, for example, an Arduino or any other controllers using the serial port for communication.\n\n![v2_serial](https://github.com/szczyglis-dev/py-gpt/assets/61396542/386d46fa-2e7c-43a6-918c-17eeef9344e0)\n\nAbove is an example of co-operation with the following code uploaded to `Arduino Uno` and connected via USB:\n\n```cpp\n// example.ino\n\nvoid setup() {\n  Serial.begin(9600);\n}\n\nvoid loop() {\n  if (Serial.available() > 0) {\n    String input = Serial.readStringUntil('\\n');\n    if (input.length() > 0) {\n      Serial.println(\"OK, response for: \" + input);\n    }\n  }\n}\n```\n\n**Options**\n\n- `USB port` *serial_port*\n\nUSB port name, e.g. `/dev/ttyUSB0`, `/dev/ttyACM0`, `COM3`. *Default:* `/dev/ttyUSB0`\n\n- `Connection speed (baudrate, bps)` *serial_bps*\n\nPort connection speed, in bps. *Default:* `9600`\n\n- `Timeout` *timeout*\n\nTimeout in seconds. *Default:* `1`\n\n- `Sleep` *sleep*\n\nSleep in seconds after connection *Default:* `2`\n\n- `Enable: Send text commands to USB port` *cmd.serial_send*\n\nAllows `serial_send` command execution. *Default:* `True`\n\n- `Enable: Send raw bytes to USB port` *cmd.serial_send_bytes*\n\nAllows `serial_send_bytes` command execution. *Default:* `True`\n\n- `Enable: Read data from USB port` *cmd.serial_read*\n\nAllows `serial_read` command execution. *Default:* `True`\n\n## Context history (calendar, inline)\n\nProvides access to context history database.\nPlugin also provides access to reading and creating day notes.\n\nExamples of use, you can ask e.g. for the following:\n\n```Give me today day note```\n\n```Save a new note for today```\n\n```Update my today note with...```\n\n```Get the list of yesterday conversations```\n\n```Get contents of conversation ID 123```\n\netc.\n\nYou can also use `@` ID tags to automatically use summary of previous contexts in current discussion.\nTo use context from previous discussion with specified ID use following syntax in your query:\n\n```@123```\n\nWhere `123` is the ID of previous context (conversation) in database, example of use:\n\n```Let's talk about discussion @123```\n\n\n**Options**\n\n- `Enable: using context @ ID tags` *use_tags*\n\nWhen enabled, it allows to automatically retrieve context history using @ tags, e.g. use @123 in question to use summary of context with ID 123 as additional context. *Default:* `False`\n\n- `Enable: get date range context list` *cmd.get_ctx_list_in_date_range*\n\nAllows `get_ctx_list_in_date_range` command execution. If enabled, it allows getting the list of context history (previous conversations). *Default:* `True\n\n- `Enable: get context content by ID` *cmd.get_ctx_content_by_id*\n\nAllows `get_ctx_content_by_id` command execution. If enabled, it allows getting summarized content of context with defined ID. *Default:* `True`\n\n- `Enable: count contexts in date range` *cmd.count_ctx_in_date*\n\nAllows `count_ctx_in_date` command execution. If enabled, it allows counting contexts in date range. 
*Default:* `True`\n\n- `Enable: get day note` *cmd.get_day_note*\n\nAllows `get_day_note` command execution. If enabled, it allows retrieving day note for specific date. *Default:* `True`\n\n- `Enable: add day note` *cmd.add_day_note*\n\nAllows `add_day_note` command execution. If enabled, it allows adding day note for specific date. *Default:* `True`\n\n- `Enable: update day note` *cmd.update_day_note*\n\nAllows `update_day_note` command execution. If enabled, it allows updating day note for specific date. *Default:* `True`\n\n- `Enable: remove day note` *cmd.remove_day_note*\n\nAllows `remove_day_note` command execution. If enabled, it allows removing day note for specific date. *Default:* `True`\n\n- `Model` *model_summarize*\n\nModel used for summarize. *Default:* `gpt-3.5-turbo`\n\n- `Max summary tokens` *summary_max_tokens*\n\nMax tokens in output when generating summary. *Default:* `1500`\n\n- `Max contexts to retrieve` *ctx_items_limit*\n\nMax items in context history list to retrieve in one query. 0 = no limit. *Default:* `30`\n\n- `Per-context items content chunk size` *chunk_size*\n\nPer-context content chunk size (max characters per chunk). *Default:* `100000 chars`\n\n**Options (advanced)**\n\n- `Prompt: @ tags (system)` *prompt_tag_system*\n\nPrompt for use @ tag (system).\n\n- `Prompt: @ tags (summary)` *prompt_tag_summary*\n\nPrompt for use @ tag (summary).\n\n\n## Crontab / Task scheduler\n\nPlugin provides cron-based job scheduling - you can schedule tasks/prompts to be sent at any time using cron-based syntax for task setup.\n\n![v2_crontab](https://github.com/szczyglis-dev/py-gpt/assets/61396542/9fe8b25e-bbd2-4f03-9e5b-438e6f04d784)\n\n- `Your tasks` *crontab*\n\nAdd your cron-style tasks here. \nThey will be executed automatically at the times you specify in the cron-based job format. \nIf you are unfamiliar with Cron, consider visiting the Cron Guru page for assistance: https://crontab.guru\n\nNumber of active tasks is always displayed in a tray dropdown menu:\n\n![v2_crontab_tray](https://github.com/szczyglis-dev/py-gpt/assets/61396542/f9d1825f-4511-4b7f-bdce-45ee18408021)\n\n- `Create a new context on job run` *new_ctx*\n\nIf enabled, then a new context will be created on every run of the job. *Default:* `True`\n\n- `Show notification on job run` *show_notify*\n\nIf enabled, then a tray notification will be shown on every run of the job. *Default:* `True`\n\n\n## DALL-E 3: Image Generation (inline)\n\nThe plugin integrates `DALL-E 3` image generation with any chat mode. Simply enable it and request an image in Chat mode, using a standard model such as `GPT-4`. The plugin does not require the `Execute commands` option to be enabled.\n\n**Options**\n\n- `Prompt` *prompt*\n\nThe prompt is used to generate a query for the `DALL-E` image generation model, which runs in the background.\n\n##  Experts (inline)\n\nThe plugin allows calling experts in any chat mode. This is the inline Experts (co-op) mode.\n\nSee the `Mode -> Experts` section for more details.\n\n## GPT-4 Vision (inline)\n\nThe plugin integrates vision capabilities across all chat modes, not just Vision mode. Once enabled, it allows the model to seamlessly switch to vision processing in the background whenever an image attachment or vision capture is detected.\n\n**Tip:** When using `Vision (inline)` by utilizing a plugin in standard mode, such as `Chat` (not `Vision` mode), the `+ Vision` special checkbox will appear at the bottom of the Chat window. 
It will be automatically enabled any time you provide content for analysis (like an uploaded photo). When the checkbox is enabled, the vision model is used. If you wish to exit the vision model after image analysis, simply uncheck the checkbox. It will activate again automatically when the next image content for analysis is provided.\n\n**Options**\n\n- `Model` *model*\n\nThe model used to temporarily provide vision capabilities. *Default:* `gpt-4-vision-preview`.\n\n- `Prompt` *prompt*\n\nThe prompt used for vision mode. It will append or replace current system prompt when using vision model.\n\n- `Replace prompt` *replace_prompt*\n\nReplace whole system prompt with vision prompt against appending it to the current prompt. *Default:* `False`\n\n- `Enable: capturing images from camera` *cmd.camera_capture*\n\nAllows `capture` command execution. If enabled, model will be able to capture images from camera itself. The `Execute commands` option must be enabled. *Default:* `False`\n\n- `Enable: making screenshots` *cmd.make_screenshot*\n\nAllows `screenshot` command execution. If enabled, model will be able to making screenshots itself. The `Execute commands` option must be enabled. *Default:* `False`\n\n## Real Time\n\nThis plugin automatically adds the current date and time to each system prompt you send. \nYou have the option to include just the date, just the time, or both.\n\nWhen enabled, it quietly enhances each system prompt with current time information before sending it to GPT.\n\n**Options**\n\n- `Append time` *hour*\n\nIf enabled, it appends the current time to the system prompt. *Default:* `True`\n\n- `Append date` *date*\n\nIf enabled, it appends the current date to the system prompt.  *Default:* `True`\n\n- `Template` *tpl*\n\nTemplate to append to the system prompt. The placeholder `{time}` will be replaced with the \ncurrent date and time in real-time. *Default:* `Current time is {time}.`\n\n## System Prompt Extra (append)\n\nThe plugin appends additional system prompts (extra data) from a list to every current system prompt. \nYou can enhance every system prompt with extra instructions that will be automatically appended to the system prompt.\n\n**Options**\n\n- `Prompts` *prompts*\n\nList of extra prompts - prompts that will be appended to system prompt. \nAll active extra prompts defined on list will be appended to the system prompt in the order they are listed here.\n\n\n## Voice Control (inline)\n\nThe plugin provides voice control command execution within a conversation.\n\nSee the ``Accessibility`` section for more details.\n\n\n# Creating Your Own Plugins\n\nYou can create your own plugin for **PyGPT** at any time. The plugin can be written in Python and then registered with the application just before launching it. 
All plugins included with the app are stored in the `plugin` directory - you can use them as coding examples for your own plugins.\n\nPyGPT can be extended with:\n\n- Custom plugins\n\n- Custom LLMs wrappers\n\n- Custom vector store providers\n\n- Custom data loaders\n\n- Custom audio input providers\n\n- Custom audio output providers\n\n- Custom web search engine providers\n\n\n**Examples (tutorial files)**\n\nSee the `examples` directory in this repository with examples of a custom launcher, plugin, vector store, LLM (Langchain and Llama-index) provider and data loader:\n\n- `examples/custom_launcher.py`\n\n- `examples/example_audio_input.py`\n\n- `examples/example_audio_output.py`\n\n- `examples/example_data_loader.py`\n\n- `examples/example_llm.py`\n\n- `examples/example_plugin.py`\n\n- `examples/example_vector_store.py`\n\n- `examples/example_web_search.py`\n\nThese example files can be used as a starting point for creating your own extensions for **PyGPT**.\n\nExtending PyGPT with custom plugins, LLMs wrappers and vector stores:\n\n- You can pass custom plugin instances, LLMs wrappers and vector store providers to the launcher.\n\n- This is useful if you want to extend PyGPT with your own plugins, vector storage and LLMs.\n\nTo register custom plugins:\n\n- Pass a list with the plugin instances as the `plugins` keyword argument.\n\nTo register custom LLMs wrappers:\n\n- Pass a list with the LLMs wrapper instances as the `llms` keyword argument.\n\nTo register custom vector store providers:\n\n- Pass a list with the vector store provider instances as the `vector_stores` keyword argument.\n\nTo register custom data loaders:\n\n- Pass a list with the data loader instances as the `loaders` keyword argument.\n\nTo register custom audio input providers:\n\n- Pass a list with the audio input provider instances as the `audio_input` keyword argument.\n\nTo register custom audio output providers:\n\n- Pass a list with the audio output provider instances as the `audio_output` keyword argument.\n\nTo register custom web providers:\n\n- Pass a list with the web provider instances as the `web` keyword argument.\n\n**Example:**\n\n\n```python\n# custom_launcher.py\n\nfrom pygpt_net.app import run\nfrom plugins import CustomPlugin, OtherCustomPlugin\nfrom llms import CustomLLM\nfrom vector_stores import CustomVectorStore\n\nplugins = [\n    CustomPlugin(),\n    OtherCustomPlugin(),\n]\nllms = [\n    CustomLLM(),\n]\nvector_stores = [\n    CustomVectorStore(),\n]\n\nrun(\n    plugins=plugins,\n    llms=llms,\n    vector_stores=vector_stores\n)\n```\n\n## Handling events\n\nIn the plugin, you can receive and modify dispatched events.\nTo do this, create a method named `handle(self, event, *args, **kwargs)` and handle the received events as shown here:\n\n```python\n# custom_plugin.py\n\nfrom pygpt_net.core.dispatcher import Event\n\n\ndef handle(self, event: Event, *args, **kwargs):\n    \"\"\"\n    Handle dispatched events\n\n    :param event: event object\n    \"\"\"\n    name = event.name\n    data = event.data\n    ctx = event.ctx\n\n    if name == Event.INPUT_BEFORE:\n        self.some_method(data['value'])\n    elif name == Event.CTX_BEGIN:\n        self.some_other_method(ctx)\n    else:\n        pass  # handle any other events here\n```\n\n**List of Events**\n\nEvent names are defined in the `Event` class in `pygpt_net.core.dispatcher`.\n\nSyntax: `event name` - triggered on, `event data` *(data type)*:\n\n- `AI_NAME` - when preparing an AI name, `data['value']` *(string, name of the AI assistant)*\n\n- `AUDIO_INPUT_STOP` - force stop audio input\n\n- 
`AUDIO_INPUT_TOGGLE` - when speech input is enabled or disabled, `data['value']` *(bool, True/False)*\n\n- `AUDIO_OUTPUT_STOP` - force stop audio output\n\n- `AUDIO_OUTPUT_TOGGLE` - when speech output is enabled or disabled, `data['value']` *(bool, True/False)*\n\n- `AUDIO_READ_TEXT` - on text read with speech synthesis, `data['value']` *(str)*\n\n- `CMD_EXECUTE` - when a command is executed, `data['commands']` *(list, commands and arguments)*\n\n- `CMD_INLINE` - when an inline command is executed, `data['commands']` *(list, commands and arguments)*\n\n- `CMD_SYNTAX` - when appending syntax for commands, `data['prompt'], data['syntax']` *(string, list, prompt and list with commands usage syntax)*\n\n- `CMD_SYNTAX_INLINE` - when appending syntax for commands (inline mode), `data['prompt'], data['syntax']` *(string, list, prompt and list with commands usage syntax)*\n\n- `CTX_AFTER` - after the context item is sent, `ctx`\n\n- `CTX_BEFORE` - before the context item is sent, `ctx`\n\n- `CTX_BEGIN` - when a context item is created, `ctx`\n\n- `CTX_END` - when context item handling is finished, `ctx`\n\n- `CTX_SELECT` - when a context is selected on the list, `data['value']` *(int, ctx meta ID)*\n\n- `DISABLE` - when the plugin is disabled, `data['value']` *(string, plugin ID)*\n\n- `ENABLE` - when the plugin is enabled, `data['value']` *(string, plugin ID)*\n\n- `FORCE_STOP` - on force stop plugins\n\n- `INPUT_BEFORE` - upon receiving input from the textarea, `data['value']` *(string, text to be sent)*\n\n- `MODE_BEFORE` - before the mode is selected, `data['value'], data['prompt']` *(string, string, mode ID)*\n\n- `MODE_SELECT` - on mode select, `data['value']` *(string, mode ID)*\n\n- `MODEL_BEFORE` - before the model is selected, `data['value']` *(string, model ID)*\n\n- `MODEL_SELECT` - on model select, `data['value']` *(string, model ID)*\n\n- `PLUGIN_SETTINGS_CHANGED` - on plugin settings update\n\n- `PLUGIN_OPTION_GET` - on request for plugin option value, `data['name'], data['value']` *(string, any, name of requested option, value)*\n\n- `POST_PROMPT` - after preparing a system prompt, `data['value']` *(string, system prompt)*\n\n- `PRE_PROMPT` - before preparing a system prompt, `data['value']` *(string, system prompt)*\n\n- `SYSTEM_PROMPT` - when preparing a system prompt, `data['value']` *(string, system prompt)*\n\n- `UI_ATTACHMENTS` - when the attachment upload elements are rendered, `data['value']` *(bool, show True/False)*\n\n- `UI_VISION` - when the vision elements are rendered, `data['value']` *(bool, show True/False)*\n\n- `USER_NAME` - when preparing a user's name, `data['value']` *(string, name of the user)*\n\n- `USER_SEND` - just before the input text is sent, `data['value']` *(string, input text)*\n\n\nYou can stop the propagation of a received event at any time by setting `stop` to `True`:\n\n```\nevent.stop = True\n```\n\n# Functions and commands execute\n\n**Tip:** `gpt-4-1106-preview` is the best model to use for command handling. The `gpt-4-turbo-preview` model can sometimes refuse to execute commands.\n\n**PyGPT** uses an internal syntax to define commands and their parameters, which can then be used by the model and executed on the application side or even directly in the system. This syntax looks as follows (example command below):\n\n```~###~{\"cmd\": \"send_email\", \"params\": {\"quote\": \"Why don't skeletons fight each other? They don't have the guts!\"}}~###~```\n\nIt is JSON wrapped between `~###~`. The application extracts the JSON object from such formatted text and executes the appropriate function based on the provided parameters and command name. Many of these types of commands are defined in plugins (e.g., those used for file operations or internet searches). You can also define your own commands using the `Custom Commands` plugin, or simply by creating your own plugin and adding it to the application.
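\n\nAs a rough illustration of how such a wrapped command could be extracted and dispatched, here is a minimal, self-contained sketch. It is not PyGPT's actual parser; the regular expression, the helper names and the simple handler registry are assumptions made only for this example:\n\n```python\n# command_parse_sketch.py - illustrative only, not PyGPT's internal implementation\nimport json\nimport re\n\n# Matches a JSON object wrapped between ~###~ markers\nCMD_PATTERN = re.compile(r\"~###~(\\{.*?\\})~###~\", re.DOTALL)\n\n\ndef extract_commands(text: str) -> list:\n    # Return every {\"cmd\": ..., \"params\": ...} object found in the model output\n    return [json.loads(raw) for raw in CMD_PATTERN.findall(text)]\n\n\n# Hypothetical handler registry mapping command names to callables\nHANDLERS = {\n    \"send_email\": lambda params: \"OK. Email sent: \" + params[\"quote\"],\n}\n\noutput = 'Sure! ~###~{\"cmd\": \"send_email\", \"params\": {\"quote\": \"A funny quote goes here\"}}~###~'\n\nfor command in extract_commands(output):\n    print(HANDLERS[command[\"cmd\"]](command[\"params\"]))  # -> OK. Email sent: A funny quote goes here\n```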
\n\n**Tip:** The `Execute commands` option checkbox must be enabled to allow the execution of commands from plugins. Disable the option if you do not want to use commands, to prevent additional token usage (as the command execution system prompt consumes additional tokens).\n\n![v2_code_execute](https://github.com/szczyglis-dev/py-gpt/assets/61396542/d5181eeb-6ab4-426f-93f0-037d256cb078)\n\nA special system prompt responsible for invoking commands is added to the main system prompt if the `Execute commands` option is active.\n\nHowever, there is an additional way to define your own commands and execute them with the help of GPT.\nThese are functions - defined on the OpenAI API side and described using JSON objects. You can find a complete guide on how to define functions here:\n\nhttps://platform.openai.com/docs/guides/function-calling\n\nhttps://cookbook.openai.com/examples/how_to_call_functions_with_chat_models\n\n\nPyGPT offers compatibility of these functions with commands used in the application. All you need to do is define the appropriate functions using the syntax required by OpenAI, and PyGPT will do the rest, translating such syntax on the fly into its own internal format.\n\nYou can define functions for the following modes: `Chat` and `Assistants`.\nNote that in Chat mode they should be defined in `Presets`, and for Assistants, in the `Assistant` settings.\n\n**Example of usage:**\n\n1) Chat\n\nCreate a new Preset, open the Preset edit dialog and add a new function using the `+ Function` button with the following content:\n\n**Name:** `send_email`\n\n**Description:** `Sends a quote using email`\n\n**Params (JSON):**\n\n```json\n{\n        \"type\": \"object\",\n        \"properties\": {\n            \"quote\": {\n                \"type\": \"string\",\n                \"description\": \"A generated funny quote\"\n            }\n        },\n        \"required\": [\n            \"quote\"\n        ]\n}\n```\n\nThen, in the `Custom Commands` plugin, create a new command with the same name and the same parameters:\n\n**Command name:** `send_email`\n\n**Instruction/prompt:** `send mail` *(not needed, because it will be handled on the OpenAI side)*\n\n**Params list:** `quote`\n\n**Command to execute:** `echo \"OK. Email sent: {quote}\"`\n\nNext, enable the `Execute commands` option and enable the plugin.\n\nAsk GPT in Chat mode:\n\n```Create a funny quote and email it```\n\nIn response, you will receive a prepared command, like this:\n\n```~###~{\"cmd\": \"send_email\", \"params\": {\"quote\": \"Why do we tell actors to 'break a leg?' Because every play has a cast!\"}}~###~```\n\nAfter receiving this, PyGPT will execute the system `echo` command with the params given in the `params` field, replacing the `{quote}` placeholder with the `quote` param value.\n\nAs a result, a response like this will be sent to the model:\n\n```[{\"request\": {\"cmd\": \"send_email\"}, \"result\": \"OK. Email sent: Why do we tell actors to 'break a leg?' 
Because every play has a cast!\"}]```\n\n\n2) Assistant\n\nIn this mode (via Assistants API), it should be done similarly, with the difference that here the functions should be defined in the assistant's settings.\n\nWith this flow you can use both forms - OpenAI and PyGPT - to define and execute commands and functions in the application. They will cooperate with each other and you can use them interchangeably.\n\n# Tools\n\nPyGPT features several useful tools, including:\n\n- Indexer\n- Media Player\n- Image viewer\n- Text editor\n- Transcribe audio/video files\n- Python code interpreter\n\n![v2_tool_menu](https://github.com/szczyglis-dev/py-gpt/assets/61396542/fb3f44af-f0de-4e18-bcac-e20389a651c9)\n\n\n### Indexer\n\n\nThis tool allows indexing of local files or directories and external web content to a vector database, which can then be used with the `Chat with Files` mode. Using this tool, you can manage local indexes and add new data with built-in `Llama-index` integration.\n\n![v2_tool_indexer](https://github.com/szczyglis-dev/py-gpt/assets/61396542/1caeab6e-6119-44e2-a7cb-ed34f8fe9e30)\n\n### Media Player\n\n\nA simple video/audio player that allows you to play video files directly from within the app.\n\n\n### Image Viewer\n\n\nA simple image browser that lets you preview images directly within the app.\n\n\n### Text Editor\n\n\nA simple text editor that enables you to edit text files directly within the app.\n\n\n### Transcribe Audio/Video Files\n\n\nAn audio transcription tool with which you can prepare a transcript from a video or audio file. It will use a speech recognition plugin to generate the text from the file.\n\n\n### Python Code Interpreter\n\n\nThis tool allows you to run Python code directly from within the app. It is integrated with the `Code Interpreter` plugin, ensuring that code generated by the model is automatically available from the interpreter. In the plugin settings, you can enable the execution of code in a Docker environment.\n\n# Token usage calculation\n\n## Input tokens\n\nThe application features a token calculator. It attempts to forecast the number of tokens that \na particular query will consume and displays this estimate in real time. This gives you improved \ncontrol over your token usage. The app provides detailed information about the tokens used for the user's prompt, \nthe system prompt, any additional data, and those used within the context (the memory of previous entries).\n\n**Remember that these are only approximate calculations and do not include, for example, the number of tokens consumed by some plugins. 
You can find the exact number of tokens used on the OpenAI website.**\n\n![v2_tokens1](https://github.com/szczyglis-dev/py-gpt/assets/61396542/29b610be-9e96-41cc-84f0-1b946886f801)\n\n## Total tokens\n\nAfter receiving a response from the model, the application displays the actual total number of tokens used for the query (received from the API).\n\n![v2_tokens2](https://github.com/szczyglis-dev/py-gpt/assets/61396542/c81e95b5-7c33-41a6-8910-21d674db37e5)\n\n# Configuration\n\n## Settings\n\nThe following basic options can be modified directly within the application:\n\n``` ini\nConfig -> Settings...\n```\n\n![v2_settings](https://github.com/szczyglis-dev/py-gpt/assets/61396542/43622c58-6cdb-4ed8-b47d-47729763db04)\n\n**General**\n\n- `OpenAI API KEY`: The personal API key you'll need to enter into the application for it to function.\n\n- `OpenAI ORGANIZATION KEY`: The organization's API key, which is optional for use within the application.\n\n- `API Endpoint`: OpenAI API endpoint URL, default: https://api.openai.com/v1.\n\n- `Number of notepads`: Number of notepad tabs. Restart of the application is required for this option to take effect.\n\n- `Minimize to tray on exit`: Minimize to tray icon on exit. Tray icon enabled is required for this option to work. Default: False.\n\n- `Render engine`: chat output render engine: `WebEngine / Chromium` - for full HTML/CSS and `Legacy (markdown)` for legacy, simple markdown CSS output. Default: WebEngine / Chromium.\n\n- `OpenGL hardware acceleration`: enables hardware acceleration in `WebEngine / Chromium` renderer.  Default: False.\n\n- `Application environment (os.environ)`: Additional environment vars to set on application start.\n\n**Layout**\n\n- `Zoom`: Adjusts the zoom in chat window (web render view). `WebEngine / Chromium` render mode only.\n\n- `Code syntax highlight`: Syntax highlight theme in code blocks. `WebEngine / Chromium` render mode only.\n\n- `Font Size (chat window)`: Adjusts the font size in the chat window (plain-text) and notepads.\n\n- `Font Size (input)`: Adjusts the font size in the input window.\n\n- `Font Size (ctx list)`: Adjusts the font size in contexts list.\n\n- `Font Size (toolbox)`: Adjusts the font size in toolbox on right.\n\n- `Layout density`: Adjusts layout elements density. Default: -1. \n\n- `DPI scaling`: Enable/disable DPI scaling. Restart of the application is required for this option to take effect. Default: True. \n\n- `DPI factor`: DPI factor. Restart of the application is required for this option to take effect. Default: 1.0. \n\n- `Display tips (help descriptions)`: Display help tips, Default: True.\n\n- `Store dialog window positions`: Enable or disable dialogs positions store/restore, Default: True.\n\n- `Use theme colors in chat window`: Use color theme in chat window, Default: True.\n\n- `Disable markdown formatting in output`: Enables plain-text display in output window, Default: False.\n\n**Files and attachments**\n\n- `Store attachments in the workdir upload directory`: Enable to store a local copy of uploaded attachments for future use. Default: True\n\n- `Store images, capture and upload in data directory`: Enable to store everything in single data directory. Default: False\n\n- `Directory for file downloads`: Subdirectory for downloaded files, e.g. in Assistants mode, inside \"data\". 
Default: \"download\"\n\n**Context**\n\n- `Context Threshold`: Sets the number of tokens reserved for the model to respond to the next prompt.\n\n- `Limit of last contexts on list to show  (0 = unlimited)`: Limit of the last contexts on list, default: 0 (unlimited)\n\n- `Use Context`: Toggles the use of conversation context (memory of previous inputs).\n\n- `Store History`: Toggles conversation history store.\n\n- `Store Time in History`: Chooses whether timestamps are added to the .txt files.\n\n- `Context Auto-summary`: Enables automatic generation of titles for contexts, Default: True.\n\n- `Lock incompatible modes`: If enabled, the app will create a new context when switched to an incompatible mode within an existing context.\n\n- `Search also in conversation content, not only in titles`: When enabled, context search will also consider the content of conversations, not just the titles of conversations.\n\n- `Show Llama-index sources`: If enabled, sources utilized will be displayed in the response (if available, it will not work in streamed chat).\n\n- `Show code interpreter output`: If enabled, output from the code interpreter in the Assistant API will be displayed in real-time (in stream mode), Default: True.\n\n- `Use extra context output`: If enabled, plain text output (if available) from command results will be displayed alongside the JSON output, Default: True.\n\n- `Convert lists to paragraphs`: If enabled, lists (ul, ol) will be converted to paragraphs (p), Default: True.\n\n- `Model used for auto-summary`: Model used for context auto-summary (default: *gpt-3.5-turbo-1106*).\n\n**Models**\n\n- `Max Output Tokens`: Sets the maximum number of tokens the model can generate for a single response.\n\n- `Max Total Tokens`: Sets the maximum token count that the application can send to the model, including the conversation context.\n\n- `RPM limit`: Sets the limit of maximum requests per minute (RPM), 0 = no limit.\n\n- `Temperature`: Sets the randomness of the conversation. A lower value makes the model's responses more deterministic, while a higher value increases creativity and abstraction.\n\n- `Top-p`: A parameter that influences the model's response diversity, similar to temperature. For more information, please check the OpenAI documentation.\n\n- `Frequency Penalty`: Decreases the likelihood of repetition in the model's responses.\n\n- `Presence Penalty`: Discourages the model from mentioning topics that have already been brought up in the conversation.\n\n**Prompts**\n\n- `Command execute: instruction`: Prompt for appending command execution instructions. Placeholders: {schema}, {extra}\n\n- `Command execute: extra footer (non-Assistant modes)`: Extra footer to append after commands JSON schema.\n\n- `Command execute: extra footer (Assistant mode only)`: PAdditional instructions to separate local commands from the remote environment that is already configured in the Assistants.\n\n- `Context: auto-summary (system prompt)`: System prompt for context auto-summary.\n\n- `Context: auto-summary (user message)`: User message for context auto-summary. 
Placeholders: {input}, {output}\n\n- `Agent: system instruction`: Prompt to instruct how to handle autonomous mode.\n\n- `Agent: continue`: Prompt sent to automatically continue the conversation.\n\n- `Agent: goal update`: Prompt to instruct how to update current goal status.\n\n- `Experts: Master prompt`: Prompt to instruct how to handle experts.\n\n- `DALL-E: image generate`: Prompt for generating prompts for DALL-E (if raw-mode is disabled).\n\n**Images**\n\n- `DALL-E Image size`: The resolution of the generated images (DALL-E). Default: 1792x1024.\n\n- `DALL-E Image quality`: The image quality of the generated images (DALL-E). Default: standard.\n\n- `Open image dialog after generate`: Enable the image dialog to open after an image is generated in Image mode.\n\n- `DALL-E: prompt generation model`: Model used for generating prompts for DALL-E (if raw-mode is disabled).\n\n**Vision**\n\n- `Vision: Camera capture width (px)`: Video capture resolution (width).\n\n- `Vision: Camera capture height (px)`: Video capture resolution (height).\n\n- `Vision: Camera IDX (number)`: Video capture camera index (number of camera).\n\n- `Vision: Image capture quality`: Video capture image JPEG quality (%).\n\n**Indexes (Llama-index)**\n\n- `Indexes`: List of created indexes.\n\n- `Vector Store`: Vector store to use (vector database provided by Llama-index).\n\n- `Vector Store (**kwargs)`: Keyword arguments for vector store provider (api_key, index_name, etc.).\n\n- `Embeddings provider`: Embeddings provider.\n\n- `Embeddings provider (ENV)`: ENV vars to embeddings provider (API keys, etc.).\n\n- `Embeddings provider (**kwargs)`: Keyword arguments for embeddings provider (model name, etc.).\n\n- `RPM limit for embeddings API calls`: Specify the limit of maximum requests per minute (RPM), 0 = no limit.\n\n- `Recursive directory indexing`: Enables recursive directory indexing, default is False.\n\n- `Replace old document versions in the index during re-indexing`: If enabled, previous versions of documents will be deleted from the index when the newest versions are indexed, default is True.\n\n- `Excluded file extensions`: File extensions to exclude if no data loader for this extension, separated by comma.\n\n- `Force exclude files`: If enabled, the exclusion list will be applied even when the data loader for the extension is active. Default: False.\n\n- `Custom metadata to append/replace to indexed documents (file)`: Define custom metadata key => value fields for specified file extensions, separate extensions by comma.\\nAllowed placeholders: {path}, {relative_path} {filename}, {dirname}, {relative_dir} {ext}, {size}, {mtime}, {date}, {date_time}, {time}, {timestamp}. Use * (asterisk) as extension if you want to apply field to all files. Set empty value to remove field with specified key from metadata.\n\n- `Custom metadata to append/replace to indexed documents (web)`: Define custom metadata key => value fields for specified external data loaders.\\nAllowed placeholders: {date}, {date_time}, {time}, {timestamp} + {data loader args}\n\n- `Additional keyword arguments (**kwargs) for data loaders`: Additional keyword arguments, such as settings, API keys, for the data loader. These arguments will be passed to the loader; please refer to the Llama-index or LlamaHub loaders reference for a list of allowed arguments for the specified data loader.\n\n- `Use local models in Video/Audio and Image (vision) loaders`: Enables usage of local models in Video/Audio and Image (vision) loaders. 
If disabled then API models will be used (GPT-4 Vision and Whisper). Note: local models will work only in Python version (not compiled/Snap). Default: False.\n\n- `Auto-index DB in real time`: Enables conversation context auto-indexing in defined modes.\n\n- `ID of index for auto-indexing`: Index to use if auto-indexing of conversation context is enabled.\n\n- `Enable auto-index in modes`: List of modes with enabled context auto-index, separated by comma.\n\n- `DB (ALL), DB (UPDATE), FILES (ALL)`: Index the data \u2013 batch indexing is available here.\n\n**Agent and experts**\n\n- `Sub-mode to use`: Sub-mode to use in Agent mode (chat, completion, langchain, llama_index, etc.). Default: chat.\n\n- `Sub-mode for experts`: Sub-mode to use in Experts mode (chat, completion, langchain, llama_index, etc.). Default: chat.\n\n- `Index to use`: Only if sub-mode is llama_index (Chat with files), choose the index to use in Agent mode.\n\n- `Display a tray notification when the goal is achieved.`: If enabled, a notification will be displayed after goal achieved / finished run.\n\n**Accessibility**\n\n- `Enable voice control (using microphone)`: enables voice control (using microphone and defined commands).\n\n- `Model`: model used for voice command recognition.\n\n- `Use voice synthesis to describe events on the screen.`: enables audio description of on-screen events.\n\n- `Use audio output cache`: If enabled, all static audio outputs will be cached on the disk instead of being generated every time. Default: True.\n\n- `Audio notify microphone listening start/stop`: enables audio \"tick\" notify when microphone listening started/ended.\n\n- `Audio notify voice command execution`: enables audio \"tick\" notify when voice command is executed.\n\n- `Control shortcut keys`: configuration for keyboard shortcuts for a specified actions.\n\n- `Blacklist for voice synthesis events describe (ignored events)`: list of muted events for 'Use voice synthesis to describe event' option.\n\n- `Voice control actions blacklist`: Disable actions in voice control; add actions to the blacklist to prevent execution through voice commands.\n\n**Updates**\n\n- `Check for updates on start`: Enables checking for updates on start. Default: True.\n\n- `Check for updates in background`: Enables checking for updates in background (checking every 5 minutes). Default: True.\n\n**Developer**\n\n- `Show debug menu`: Enables debug (developer) menu.\n\n- `Log and debug context`: Enables logging of context input/output.\n\n- `Log and debug events`: Enables logging of event dispatch.\n\n- `Log plugin usage to console`: Enables logging of plugin usage to console.\n\n- `Log DALL-E usage to console`: Enables logging of DALL-E usage to console.\n\n- `Log Llama-index usage to console`: Enables logging of Llama-index usage to console.\n\n- `Log Assistants usage to console`: Enables logging of Assistants API usage to console.\n\n- `Log level`: toggle log level (ERROR|WARNING|INFO|DEBUG)\n\n\n## JSON files\n\nThe configuration is stored in JSON files for easy manual modification outside of the application. \nThese configuration files are located in the user's work directory within the following subdirectory:\n\n``` ini\n{HOME_DIR}/.config/pygpt-net/\n```\n\n# Notepad\n\nThe application has a built-in notepad, divided into several tabs. This can be useful for storing information in a convenient way, without the need to open an external text editor. 
The content of the notepad is automatically saved whenever the content changes.\n\n![v2_notepad](https://github.com/szczyglis-dev/py-gpt/assets/61396542/f6aa0126-bad1-4e6c-ace6-72e979186433)\n\n# Profiles\n\nYou can create many \"profiles\" for an app and switch between them. Each profile uses its own configuration, settings, history of contexts, and a separate folder for user files. This allows you to make many setups and quickly switch between them, changing the whole setting with one click.\n\nThe app allows you to make new profiles, edit existing ones, and duplicate current ones.\n\nTo make a new profile, select the option from the menu `Config -> Profile -> New profile...`\n\nTo edit saved profiles, choose the option from the menu `Config -> Profile -> Edit profiles...`\n\nTo switch to a created profile, pick the profile from the menu: `Config -> Profile -> (profile name)`\n\nEach profile uses its own user directory (workdir). You can link a newly created (or edited) profile to an already existing workdir with its configuration.\n\nThe name of the currently active profile is shown in (Profile Name) in the window title.\n\n# Advanced configuration\n\n## Manual configuration\n\n\nYou can manually edit the configuration files in this directory (this is your work directory):\n\n``` ini\n{HOME_DIR}/.config/pygpt-net/\n```\n\n- `assistants.json` - stores the list of assistants.\n- `attachments.json` - stores the list of current attachments.\n- `config.json` - stores the main configuration settings.\n- `models.json` - stores models configurations.\n- `cache` - a directory for audio cache.\n- `capture` - a directory for captured images from camera and screenshots\n- `css` - a directory for CSS stylesheets (user override)\n- `history` - a directory for context history in `.txt` format.\n- `idx` - `Llama-index` indexes\n- `img` - a directory for images generated with `DALL-E 3` and `DALL-E 2`, saved as `.png` files.\n- `locale` - a directory for locales (user override)\n- `data` - a directory for data files and files downloaded/generated by GPT.\n- `presets` - a directory for presets stored as `.json` files.\n- `upload` - a directory for local copies of attachments coming from outside the workdir\n- `db.sqlite` - a database with contexts, notepads and indexes data records\n- `app.log` - a file with error and debug log\n\n---\n\n## Translations / Locale\n\nLocale `.ini` files are located in the app directory:\n\n``` ini\n./data/locale\n```\n\nThis directory is automatically scanned when the application launches. To add a new translation, \ncreate and save the file with the appropriate name, for example:\n\n``` ini\nlocale.es.ini   \n```\n\nThis will add Spanish as a selectable language in the application's language menu.\n\n**Overwriting CSS and locales with Your Own Files:**\n\nYou can also overwrite files in the `locale` and `css` app directories with your own files in the user directory. \nThis allows you to overwrite language files or CSS styles in a very simple way - by just creating files in your working directory.\n\n\n``` ini\n{HOME_DIR}/.config/pygpt-net/\n```\n\n- `locale` - a directory for locales in `.ini` format.\n- `css` - a directory for CSS styles in `.css` format.\n\n**Adding Your Own Fonts**\n\nYou can add your own fonts and use them in CSS files. To load your own fonts, you should place them in the `%workdir%/fonts` directory. 
Supported font types include: `otf`, `ttf`.\nYou can see the list of loaded fonts in `Debug / Config`.\n\n**Example:**\n\n```\n%workdir%\n|_css\n|_data\n|_fonts\n   |_MyFont\n     |_MyFont-Regular.ttf\n     |_MyFont-Bold.ttf\n     |...\n```\n\n```css\npre {\n    font-family: 'MyFont';\n}\n```\n\n## Debugging and Logging\n\nIn the `Settings -> Developer` dialog, you can enable the `Show debug menu` option to turn on the debugging menu. The menu allows you to inspect the status of application elements. In the debugging menu, there is a `Logger` option that opens a log window. In the window, the program's operation is displayed in real-time.\n\n**Logging levels**:\n\nBy default, all errors and exceptions are logged to the file:\n\n```ini\n{HOME_DIR}/.config/pygpt-net/app.log\n```\n\nTo increase the logging level (`ERROR` level is default), run the application with the `--debug` argument:\n\n``` ini\npython3 run.py --debug=1\n```\n\nor\n\n```ini\npython3 run.py --debug=2\n```\n\nThe value `1` enables the `INFO` logging level.\n\nThe value `2` enables the `DEBUG` logging level (most information).\n\n## Compatibility (legacy) mode\n\nIf you have problems with the `WebEngine / Chromium` renderer, you can force legacy mode by launching the app with command line arguments:\n\n``` ini\npython3 run.py --legacy=1\n```\n\nand to force disable OpenGL hardware acceleration:\n\n``` ini\npython3 run.py --disable-gpu=1\n```\n\nYou can also manually enable legacy mode by editing the config file - open the `%WORKDIR%/config.json` file in an editor and set the following options:\n\n``` json\n\"render.engine\": \"legacy\",\n\"render.open_gl\": false,\n```\n\n## Updates\n\n### Updating PyGPT\n\n**PyGPT** comes with an integrated update notification system. When a new version with additional features is released, you'll receive an alert within the app. \n\nTo get the new version, simply download it and start using it in place of the old one. All your custom settings like configuration, presets, indexes, and past conversations will be kept and ready to use right away in the new version.\n\n\n## Coming soon\n\n- Enhanced integration with Langchain\n- More vector databases support\n- Development of autonomous agents\n\n## DISCLAIMER\n\nThis application is not officially associated with OpenAI. The author shall not be held liable for any damages \nresulting from the use of this application. It is provided \"as is,\" without any form of warranty. \nUsers are reminded to be mindful of token usage - always verify the number of tokens utilized by the model on \nthe OpenAI website and engage with the application responsibly. Activating plugins, such as Web Search,\nmay consume additional tokens that are not displayed in the main window. 
\n\n**Always monitor your actual token usage on the OpenAI website.**\n\n---\n\n# CHANGELOG\n\n## Recent changes:\n\n**2.2.18 (2024-05-05)**\n\n- Fix: prevent crash if no audio to play.\n\n**2.2.17 (2024-05-05)**\n\n- Fix: Added prevent try to play audio if empty output.\n- Disabled playing finish event on audio or voice control enabled.\n\n**2.2.16 (2024-05-05)**\n\n- Escape key now stops response generation and audio output (if playing).\n- Voice control options added to the Audio menu.\n- Added cache on disk for generated static audio content.\n- Added plugin translations for other languages.\n\n**2.2.15 (2024-05-04)**\n\n- Added audio output stop on audio input start.\n- Added notify about unrecognized command.\n- Voice control improvements.\n\n**2.2.14 (2024-05-04)**\n\n- Added a 'Voice Control (inline)' plugin that allows for voice command control directly during a conversation.\n- Added configuration in 'Settings -> Accessibility' for a blacklist of actions available as voice commands.\n\n**2.2.13 (2024-05-03)**\n\n- Added stretch to dictionary config fields.\n- Removed redundant attachments clear event.\n\n**2.2.12 (2024-05-03)**\n\n- Improved speech recognition.\n- Added minimum required length of audio input.\n- Added missing translations.\n- Fixed settings hooks triggering on profile switch.\n\n**2.2.11 (2024-05-03)**\n\n- Added a blacklist for events for the voice event description in settings.\n- Added a delay to playing audio when describing events.\n- Sorted the list of events in the configuration.\n\n**2.2.10 (2024-05-03)**\n\n- Extended voice control commands list.\n- Extended actions and keyboard shortcuts.\n\n**2.2.9 (2024-05-02)**\n\n- Added more commands to voice control: search for contexts, clear search, add, read and clear calendar memos, context rename.\n\n**2.2.8 (2024-05-02)**\n\n- Added support for disabled people, including voice control and screen event translation with audio synthesis.\n- A new section in Settings called 'Accessibility' has been added with options for assistance: voice control, keyboard shortcut definitions for actions, and screen event translation using audio synthesis.\n- A new section called 'Accessibility' has been added to the Documentation.\n\nThe full changelog is located in the [CHANGELOG.md](https://github.com/szczyglis-dev/py-gpt/blob/master/CHANGELOG.md) file in the main folder of this repository.\n\n\n# Credits and links\n\n**Official website:** <https://pygpt.net>\n\n**Documentation:** <https://pygpt.readthedocs.io>\n\n**Support and donate:** <https://pygpt.net/#donate>\n\n**GitHub:** <https://github.com/szczyglis-dev/py-gpt>\n\n**Snap Store:** <https://snapcraft.io/pygpt>\n\n**PyPI:** <https://pypi.org/project/pygpt-net>\n\n**Author:** Marcin Szczygli\u0144ski (Poland, EU)\n\n**Contact:** <info@pygpt.net>\n\n**License:** MIT License\n\n# Special thanks\n\nGitHub's community:\n\n- [@BillionShields](https://github.com/BillionShields)\n\n- [@gfsysa](https://github.com/gfsysa)\n\n- [@glinkot](https://github.com/glinkot)\n\n- [@kaneda2004](https://github.com/kaneda2004)\n\n- [@linnflux](https://github.com/linnflux)\n\n- [@moritz-t-w](https://github.com/moritz-t-w)\n\n- [@oleksii-honchar](https://github.com/oleksii-honchar)\n\n- [@yf007](https://github.com/yf007)\n\n## Third-party libraries\n\nFull list of external libraries used in this project is located in the [requirements.txt](https://github.com/szczyglis-dev/py-gpt/blob/master/requirements.txt) file in the main folder of the repository.\n\nAll used SVG icons are from `Material 
Design Icons` provided by Google:\n\nhttps://github.com/google/material-design-icons\n\nhttps://fonts.google.com/icons\n\nMonaspace fonts provided by GitHub: https://github.com/githubnext/monaspace\n\nCode of the Llama-index offline loaders integrated into the app is taken from LlamaHub: https://llamahub.ai\n\nAwesome ChatGPT Prompts (used in templates): https://github.com/f/awesome-chatgpt-prompts/\n",
    "bugtrack_url": null,
    "license": "MIT",
    "summary": "Desktop AI Assistant powered by GPT-4, GPT-4V, GPT-3.5, DALL-E 3, Langchain LLMs, Llama-index, Whisper with chatbot, assistant, text completion, vision and image generation, internet access, chat with files, commands and code execution, file upload and download and more",
    "version": "2.2.18",
    "project_urls": {
        "Documentation": "https://pygpt.readthedocs.io/",
        "Homepage": "https://github.com/szczyglis-dev/py-gpt",
        "Repository": "https://github.com/szczyglis-dev/py-gpt"
    },
    "split_keywords": [
        "py_gpt",
        " py-gpt",
        " pygpt",
        " desktop",
        " app",
        " gpt",
        " gpt4",
        " gpt4-v",
        " gpt3.5",
        " gpt-4",
        " gpt-4v",
        " gpt-3.5",
        " tts",
        " whisper",
        " vision",
        " chatgpt",
        " dall-e",
        " chat",
        " chatbot",
        " assistant",
        " text completion",
        " image generation",
        " ai",
        " api",
        " openai",
        " api key",
        " langchain",
        " llama-index",
        " presets",
        " ui",
        " qt",
        " pyside"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "3907b56bb986cd803416877db845f8393d328626c0fb71cbee26a78d57dc1b98",
                "md5": "bc8bbc8f6782a1c73cbb6e6b942edf8b",
                "sha256": "3d6e9055f8319450ba28aeef01bba4795a79dd2f65fad44ae72f95e1f2998298"
            },
            "downloads": -1,
            "filename": "pygpt_net-2.2.18-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "bc8bbc8f6782a1c73cbb6e6b942edf8b",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": "<3.12,>=3.10",
            "size": 3242304,
            "upload_time": "2024-05-05T16:07:22",
            "upload_time_iso_8601": "2024-05-05T16:07:22.455804Z",
            "url": "https://files.pythonhosted.org/packages/39/07/b56bb986cd803416877db845f8393d328626c0fb71cbee26a78d57dc1b98/pygpt_net-2.2.18-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "4aedfec6da641016678c1362f61817a14b695509fdd4c25d1931a38aa49b2f78",
                "md5": "27e04a4d2ed9910bc571fc9970dea9ad",
                "sha256": "fe7fb693260f68d320ff8dc44d49295c51c70c50e4b5b19dab83488d5fd57598"
            },
            "downloads": -1,
            "filename": "pygpt_net-2.2.18.tar.gz",
            "has_sig": false,
            "md5_digest": "27e04a4d2ed9910bc571fc9970dea9ad",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": "<3.12,>=3.10",
            "size": 2753650,
            "upload_time": "2024-05-05T16:07:30",
            "upload_time_iso_8601": "2024-05-05T16:07:30.767214Z",
            "url": "https://files.pythonhosted.org/packages/4a/ed/fec6da641016678c1362f61817a14b695509fdd4c25d1931a38aa49b2f78/pygpt_net-2.2.18.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-05-05 16:07:30",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "szczyglis-dev",
    "github_project": "py-gpt",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": false,
    "requirements": [
        {
            "name": "aiohttp",
            "specs": [
                [
                    "==",
                    "3.9.3"
                ]
            ]
        },
        {
            "name": "aiosignal",
            "specs": [
                [
                    "==",
                    "1.3.1"
                ]
            ]
        },
        {
            "name": "altgraph",
            "specs": [
                [
                    "==",
                    "0.17.4"
                ]
            ]
        },
        {
            "name": "annotated-types",
            "specs": [
                [
                    "==",
                    "0.6.0"
                ]
            ]
        },
        {
            "name": "anyio",
            "specs": [
                [
                    "==",
                    "4.3.0"
                ]
            ]
        },
        {
            "name": "asgiref",
            "specs": [
                [
                    "==",
                    "3.7.2"
                ]
            ]
        },
        {
            "name": "async-timeout",
            "specs": [
                [
                    "==",
                    "4.0.3"
                ]
            ]
        },
        {
            "name": "asyncio",
            "specs": [
                [
                    "==",
                    "3.4.3"
                ]
            ]
        },
        {
            "name": "attrs",
            "specs": [
                [
                    "==",
                    "23.2.0"
                ]
            ]
        },
        {
            "name": "azure-core",
            "specs": [
                [
                    "==",
                    "1.30.1"
                ]
            ]
        },
        {
            "name": "azure-identity",
            "specs": [
                [
                    "==",
                    "1.15.0"
                ]
            ]
        },
        {
            "name": "backoff",
            "specs": [
                [
                    "==",
                    "2.2.1"
                ]
            ]
        },
        {
            "name": "bcrypt",
            "specs": [
                [
                    "==",
                    "4.1.2"
                ]
            ]
        },
        {
            "name": "beautifulsoup4",
            "specs": [
                [
                    "==",
                    "4.12.3"
                ]
            ]
        },
        {
            "name": "bleach",
            "specs": [
                [
                    "==",
                    "6.0.0"
                ]
            ]
        },
        {
            "name": "bs4",
            "specs": [
                [
                    "==",
                    "0.0.2"
                ]
            ]
        },
        {
            "name": "build",
            "specs": [
                [
                    "==",
                    "1.0.3"
                ]
            ]
        },
        {
            "name": "cachetools",
            "specs": [
                [
                    "==",
                    "5.3.2"
                ]
            ]
        },
        {
            "name": "certifi",
            "specs": [
                [
                    "==",
                    "2024.2.2"
                ]
            ]
        },
        {
            "name": "cffi",
            "specs": [
                [
                    "==",
                    "1.15.1"
                ]
            ]
        },
        {
            "name": "charset-normalizer",
            "specs": [
                [
                    "==",
                    "3.3.2"
                ]
            ]
        },
        {
            "name": "chroma-hnswlib",
            "specs": [
                [
                    "==",
                    "0.7.3"
                ]
            ]
        },
        {
            "name": "chromadb",
            "specs": [
                [
                    "==",
                    "0.4.23"
                ]
            ]
        },
        {
            "name": "chromedriver-autoinstaller",
            "specs": [
                [
                    "==",
                    "0.6.4"
                ]
            ]
        },
        {
            "name": "click",
            "specs": [
                [
                    "==",
                    "8.1.7"
                ]
            ]
        },
        {
            "name": "coloredlogs",
            "specs": [
                [
                    "==",
                    "15.0.1"
                ]
            ]
        },
        {
            "name": "croniter",
            "specs": [
                [
                    "==",
                    "2.0.1"
                ]
            ]
        },
        {
            "name": "cryptography",
            "specs": [
                [
                    "==",
                    "42.0.4"
                ]
            ]
        },
        {
            "name": "cssselect",
            "specs": [
                [
                    "==",
                    "1.2.0"
                ]
            ]
        },
        {
            "name": "dataclasses-json",
            "specs": [
                [
                    "==",
                    "0.6.4"
                ]
            ]
        },
        {
            "name": "defusedxml",
            "specs": [
                [
                    "==",
                    "0.7.1"
                ]
            ]
        },
        {
            "name": "Deprecated",
            "specs": [
                [
                    "==",
                    "1.2.14"
                ]
            ]
        },
        {
            "name": "dirtyjson",
            "specs": [
                [
                    "==",
                    "1.0.8"
                ]
            ]
        },
        {
            "name": "distro",
            "specs": [
                [
                    "==",
                    "1.9.0"
                ]
            ]
        },
        {
            "name": "docker",
            "specs": [
                [
                    "==",
                    "7.0.0"
                ]
            ]
        },
        {
            "name": "docutils",
            "specs": [
                [
                    "==",
                    "0.19"
                ]
            ]
        },
        {
            "name": "docx2txt",
            "specs": [
                [
                    "==",
                    "0.8"
                ]
            ]
        },
        {
            "name": "EbookLib",
            "specs": [
                [
                    "==",
                    "0.18"
                ]
            ]
        },
        {
            "name": "elastic-transport",
            "specs": [
                [
                    "==",
                    "8.12.0"
                ]
            ]
        },
        {
            "name": "elasticsearch",
            "specs": [
                [
                    "==",
                    "8.12.1"
                ]
            ]
        },
        {
            "name": "exceptiongroup",
            "specs": [
                [
                    "==",
                    "1.2.0"
                ]
            ]
        },
        {
            "name": "fastapi",
            "specs": [
                [
                    "==",
                    "0.109.2"
                ]
            ]
        },
        {
            "name": "fastjsonschema",
            "specs": [
                [
                    "==",
                    "2.19.1"
                ]
            ]
        },
        {
            "name": "feedfinder2",
            "specs": [
                [
                    "==",
                    "0.0.4"
                ]
            ]
        },
        {
            "name": "feedparser",
            "specs": [
                [
                    "==",
                    "6.0.11"
                ]
            ]
        },
        {
            "name": "filelock",
            "specs": [
                [
                    "==",
                    "3.13.1"
                ]
            ]
        },
        {
            "name": "flatbuffers",
            "specs": [
                [
                    "==",
                    "23.5.26"
                ]
            ]
        },
        {
            "name": "frozenlist",
            "specs": [
                [
                    "==",
                    "1.4.1"
                ]
            ]
        },
        {
            "name": "fsspec",
            "specs": [
                [
                    "==",
                    "2024.2.0"
                ]
            ]
        },
        {
            "name": "future",
            "specs": [
                [
                    "==",
                    "1.0.0"
                ]
            ]
        },
        {
            "name": "gkeepapi",
            "specs": [
                [
                    "==",
                    "0.15.1"
                ]
            ]
        },
        {
            "name": "google-api-core",
            "specs": [
                [
                    "==",
                    "2.17.1"
                ]
            ]
        },
        {
            "name": "google-api-python-client",
            "specs": [
                [
                    "==",
                    "2.120.0"
                ]
            ]
        },
        {
            "name": "google-auth",
            "specs": [
                [
                    "==",
                    "2.28.0"
                ]
            ]
        },
        {
            "name": "google-auth-httplib2",
            "specs": [
                [
                    "==",
                    "0.2.0"
                ]
            ]
        },
        {
            "name": "google-auth-oauthlib",
            "specs": [
                [
                    "==",
                    "1.2.0"
                ]
            ]
        },
        {
            "name": "googleapis-common-protos",
            "specs": [
                [
                    "==",
                    "1.62.0"
                ]
            ]
        },
        {
            "name": "gpsoauth",
            "specs": [
                [
                    "==",
                    "1.0.4"
                ]
            ]
        },
        {
            "name": "greenlet",
            "specs": [
                [
                    "==",
                    "3.0.3"
                ]
            ]
        },
        {
            "name": "grpcio",
            "specs": [
                [
                    "==",
                    "1.60.1"
                ]
            ]
        },
        {
            "name": "h11",
            "specs": [
                [
                    "==",
                    "0.14.0"
                ]
            ]
        },
        {
            "name": "html2text",
            "specs": [
                [
                    "==",
                    "2020.1.16"
                ]
            ]
        },
        {
            "name": "httpcore",
            "specs": [
                [
                    "==",
                    "1.0.4"
                ]
            ]
        },
        {
            "name": "httplib2",
            "specs": [
                [
                    "==",
                    "0.22.0"
                ]
            ]
        },
        {
            "name": "httptools",
            "specs": [
                [
                    "==",
                    "0.6.1"
                ]
            ]
        },
        {
            "name": "httpx",
            "specs": [
                [
                    "==",
                    "0.27.0"
                ]
            ]
        },
        {
            "name": "huggingface-hub",
            "specs": [
                [
                    "==",
                    "0.20.3"
                ]
            ]
        },
        {
            "name": "humanfriendly",
            "specs": [
                [
                    "==",
                    "10.0"
                ]
            ]
        },
        {
            "name": "idna",
            "specs": [
                [
                    "==",
                    "3.6"
                ]
            ]
        },
        {
            "name": "importlib-metadata",
            "specs": [
                [
                    "==",
                    "6.11.0"
                ]
            ]
        },
        {
            "name": "importlib-resources",
            "specs": [
                [
                    "==",
                    "6.1.1"
                ]
            ]
        },
        {
            "name": "iniconfig",
            "specs": [
                [
                    "==",
                    "2.0.0"
                ]
            ]
        },
        {
            "name": "jaraco.classes",
            "specs": [
                [
                    "==",
                    "3.2.3"
                ]
            ]
        },
        {
            "name": "jeepney",
            "specs": [
                [
                    "==",
                    "0.8.0"
                ]
            ]
        },
        {
            "name": "jieba3k",
            "specs": [
                [
                    "==",
                    "0.35.1"
                ]
            ]
        },
        {
            "name": "Jinja2",
            "specs": [
                [
                    "==",
                    "3.1.3"
                ]
            ]
        },
        {
            "name": "joblib",
            "specs": [
                [
                    "==",
                    "1.3.2"
                ]
            ]
        },
        {
            "name": "jsonpatch",
            "specs": [
                [
                    "==",
                    "1.33"
                ]
            ]
        },
        {
            "name": "jsonpointer",
            "specs": [
                [
                    "==",
                    "2.4"
                ]
            ]
        },
        {
            "name": "jsonschema",
            "specs": [
                [
                    "==",
                    "4.21.1"
                ]
            ]
        },
        {
            "name": "jsonschema-specifications",
            "specs": [
                [
                    "==",
                    "2023.12.1"
                ]
            ]
        },
        {
            "name": "jupyter_client",
            "specs": [
                [
                    "==",
                    "8.6.0"
                ]
            ]
        },
        {
            "name": "jupyter_core",
            "specs": [
                [
                    "==",
                    "5.7.1"
                ]
            ]
        },
        {
            "name": "jupyterlab_pygments",
            "specs": [
                [
                    "==",
                    "0.3.0"
                ]
            ]
        },
        {
            "name": "keyring",
            "specs": [
                [
                    "==",
                    "23.13.1"
                ]
            ]
        },
        {
            "name": "kubernetes",
            "specs": [
                [
                    "==",
                    "29.0.0"
                ]
            ]
        },
        {
            "name": "langchain",
            "specs": [
                [
                    "==",
                    "0.1.9"
                ]
            ]
        },
        {
            "name": "langchain-community",
            "specs": [
                [
                    "==",
                    "0.0.24"
                ]
            ]
        },
        {
            "name": "langchain-core",
            "specs": [
                [
                    "==",
                    "0.1.27"
                ]
            ]
        },
        {
            "name": "langchain-experimental",
            "specs": [
                [
                    "==",
                    "0.0.52"
                ]
            ]
        },
        {
            "name": "langchain-openai",
            "specs": [
                [
                    "==",
                    "0.0.2.post1"
                ]
            ]
        },
        {
            "name": "langsmith",
            "specs": [
                [
                    "==",
                    "0.1.9"
                ]
            ]
        },
        {
            "name": "llama-index",
            "specs": [
                [
                    "==",
                    "0.10.13.post1"
                ]
            ]
        },
        {
            "name": "llama-index-agent-openai",
            "specs": [
                [
                    "==",
                    "0.1.5"
                ]
            ]
        },
        {
            "name": "llama-index-cli",
            "specs": [
                [
                    "==",
                    "0.1.5"
                ]
            ]
        },
        {
            "name": "llama-index-core",
            "specs": [
                [
                    "==",
                    "0.10.13"
                ]
            ]
        },
        {
            "name": "llama-index-embeddings-azure-openai",
            "specs": [
                [
                    "==",
                    "0.1.6"
                ]
            ]
        },
        {
            "name": "llama-index-embeddings-openai",
            "specs": [
                [
                    "==",
                    "0.1.6"
                ]
            ]
        },
        {
            "name": "llama-index-indices-managed-llama-cloud",
            "specs": [
                [
                    "==",
                    "0.1.3"
                ]
            ]
        },
        {
            "name": "llama-index-legacy",
            "specs": [
                [
                    "==",
                    "0.9.48"
                ]
            ]
        },
        {
            "name": "llama-index-llms-azure-openai",
            "specs": [
                [
                    "==",
                    "0.1.5"
                ]
            ]
        },
        {
            "name": "llama-index-llms-openai",
            "specs": [
                [
                    "==",
                    "0.1.6"
                ]
            ]
        },
        {
            "name": "llama-index-multi-modal-llms-openai",
            "specs": [
                [
                    "==",
                    "0.1.4"
                ]
            ]
        },
        {
            "name": "llama-index-program-openai",
            "specs": [
                [
                    "==",
                    "0.1.4"
                ]
            ]
        },
        {
            "name": "llama-index-question-gen-openai",
            "specs": [
                [
                    "==",
                    "0.1.3"
                ]
            ]
        },
        {
            "name": "llama-index-readers-chatgpt-plugin",
            "specs": [
                [
                    "==",
                    "0.1.3"
                ]
            ]
        },
        {
            "name": "llama-index-readers-database",
            "specs": [
                [
                    "==",
                    "0.1.3"
                ]
            ]
        },
        {
            "name": "llama-index-readers-file",
            "specs": [
                [
                    "==",
                    "0.1.6"
                ]
            ]
        },
        {
            "name": "llama-index-readers-github",
            "specs": [
                [
                    "==",
                    "0.1.7"
                ]
            ]
        },
        {
            "name": "llama-index-readers-google",
            "specs": [
                [
                    "==",
                    "0.1.4"
                ]
            ]
        },
        {
            "name": "llama-index-readers-llama-parse",
            "specs": [
                [
                    "==",
                    "0.1.3"
                ]
            ]
        },
        {
            "name": "llama-index-readers-microsoft-onedrive",
            "specs": [
                [
                    "==",
                    "0.1.3"
                ]
            ]
        },
        {
            "name": "llama-index-readers-twitter",
            "specs": [
                [
                    "==",
                    "0.1.3"
                ]
            ]
        },
        {
            "name": "llama-index-readers-web",
            "specs": [
                [
                    "==",
                    "0.1.6"
                ]
            ]
        },
        {
            "name": "llama-index-vector-stores-chroma",
            "specs": [
                [
                    "==",
                    "0.1.4"
                ]
            ]
        },
        {
            "name": "llama-index-vector-stores-elasticsearch",
            "specs": [
                [
                    "==",
                    "0.1.4"
                ]
            ]
        },
        {
            "name": "llama-index-vector-stores-pinecone",
            "specs": [
                [
                    "==",
                    "0.1.3"
                ]
            ]
        },
        {
            "name": "llama-index-vector-stores-redis",
            "specs": [
                [
                    "==",
                    "0.1.2"
                ]
            ]
        },
        {
            "name": "llama-parse",
            "specs": [
                [
                    "==",
                    "0.3.4"
                ]
            ]
        },
        {
            "name": "llamaindex-py-client",
            "specs": [
                [
                    "==",
                    "0.1.13"
                ]
            ]
        },
        {
            "name": "lxml",
            "specs": [
                [
                    "==",
                    "5.1.0"
                ]
            ]
        },
        {
            "name": "Markdown",
            "specs": [
                [
                    "==",
                    "3.5.2"
                ]
            ]
        },
        {
            "name": "markdown-it-py",
            "specs": [
                [
                    "==",
                    "2.2.0"
                ]
            ]
        },
        {
            "name": "MarkupSafe",
            "specs": [
                [
                    "==",
                    "2.1.5"
                ]
            ]
        },
        {
            "name": "marshmallow",
            "specs": [
                [
                    "==",
                    "3.20.2"
                ]
            ]
        },
        {
            "name": "mdurl",
            "specs": [
                [
                    "==",
                    "0.1.2"
                ]
            ]
        },
        {
            "name": "mistune",
            "specs": [
                [
                    "==",
                    "3.0.2"
                ]
            ]
        },
        {
            "name": "mmh3",
            "specs": [
                [
                    "==",
                    "4.1.0"
                ]
            ]
        },
        {
            "name": "monotonic",
            "specs": [
                [
                    "==",
                    "1.6"
                ]
            ]
        },
        {
            "name": "more-itertools",
            "specs": [
                [
                    "==",
                    "9.1.0"
                ]
            ]
        },
        {
            "name": "mpmath",
            "specs": [
                [
                    "==",
                    "1.3.0"
                ]
            ]
        },
        {
            "name": "msal",
            "specs": [
                [
                    "==",
                    "1.27.0"
                ]
            ]
        },
        {
            "name": "msal-extensions",
            "specs": [
                [
                    "==",
                    "1.1.0"
                ]
            ]
        },
        {
            "name": "multidict",
            "specs": [
                [
                    "==",
                    "6.0.5"
                ]
            ]
        },
        {
            "name": "mypy-extensions",
            "specs": [
                [
                    "==",
                    "1.0.0"
                ]
            ]
        },
        {
            "name": "nbclient",
            "specs": [
                [
                    "==",
                    "0.9.0"
                ]
            ]
        },
        {
            "name": "nbconvert",
            "specs": [
                [
                    "==",
                    "7.16.1"
                ]
            ]
        },
        {
            "name": "nbformat",
            "specs": [
                [
                    "==",
                    "5.9.2"
                ]
            ]
        },
        {
            "name": "nest-asyncio",
            "specs": [
                [
                    "==",
                    "1.6.0"
                ]
            ]
        },
        {
            "name": "networkx",
            "specs": [
                [
                    "==",
                    "3.2.1"
                ]
            ]
        },
        {
            "name": "newspaper3k",
            "specs": [
                [
                    "==",
                    "0.2.8"
                ]
            ]
        },
        {
            "name": "nltk",
            "specs": [
                [
                    "==",
                    "3.8.1"
                ]
            ]
        },
        {
            "name": "numpy",
            "specs": [
                [
                    "==",
                    "1.26.4"
                ]
            ]
        },
        {
            "name": "oauth2client",
            "specs": [
                [
                    "==",
                    "4.1.3"
                ]
            ]
        },
        {
            "name": "oauthlib",
            "specs": [
                [
                    "==",
                    "3.2.2"
                ]
            ]
        },
        {
            "name": "onnxruntime",
            "specs": [
                [
                    "==",
                    "1.17.0"
                ]
            ]
        },
        {
            "name": "openai",
            "specs": [
                [
                    "==",
                    "1.23.6"
                ]
            ]
        },
        {
            "name": "opencv-python",
            "specs": [
                [
                    "==",
                    "4.9.0.80"
                ]
            ]
        },
        {
            "name": "opentelemetry-api",
            "specs": [
                [
                    "==",
                    "1.22.0"
                ]
            ]
        },
        {
            "name": "opentelemetry-exporter-otlp-proto-common",
            "specs": [
                [
                    "==",
                    "1.22.0"
                ]
            ]
        },
        {
            "name": "opentelemetry-exporter-otlp-proto-grpc",
            "specs": [
                [
                    "==",
                    "1.22.0"
                ]
            ]
        },
        {
            "name": "opentelemetry-instrumentation",
            "specs": [
                [
                    "==",
                    "0.43b0"
                ]
            ]
        },
        {
            "name": "opentelemetry-instrumentation-asgi",
            "specs": [
                [
                    "==",
                    "0.43b0"
                ]
            ]
        },
        {
            "name": "opentelemetry-instrumentation-fastapi",
            "specs": [
                [
                    "==",
                    "0.43b0"
                ]
            ]
        },
        {
            "name": "opentelemetry-proto",
            "specs": [
                [
                    "==",
                    "1.22.0"
                ]
            ]
        },
        {
            "name": "opentelemetry-sdk",
            "specs": [
                [
                    "==",
                    "1.22.0"
                ]
            ]
        },
        {
            "name": "opentelemetry-semantic-conventions",
            "specs": [
                [
                    "==",
                    "0.43b0"
                ]
            ]
        },
        {
            "name": "opentelemetry-util-http",
            "specs": [
                [
                    "==",
                    "0.43b0"
                ]
            ]
        },
        {
            "name": "orjson",
            "specs": [
                [
                    "==",
                    "3.9.15"
                ]
            ]
        },
        {
            "name": "outcome",
            "specs": [
                [
                    "==",
                    "1.3.0.post0"
                ]
            ]
        },
        {
            "name": "overrides",
            "specs": [
                [
                    "==",
                    "7.7.0"
                ]
            ]
        },
        {
            "name": "packaging",
            "specs": [
                [
                    "==",
                    "23.2"
                ]
            ]
        },
        {
            "name": "pandas",
            "specs": [
                [
                    "==",
                    "2.2.0"
                ]
            ]
        },
        {
            "name": "pandocfilters",
            "specs": [
                [
                    "==",
                    "1.5.1"
                ]
            ]
        },
        {
            "name": "pillow",
            "specs": [
                [
                    "==",
                    "10.2.0"
                ]
            ]
        },
        {
            "name": "pinecone-client",
            "specs": [
                [
                    "==",
                    "3.1.0"
                ]
            ]
        },
        {
            "name": "pip-tools",
            "specs": [
                [
                    "==",
                    "7.3.0"
                ]
            ]
        },
        {
            "name": "pkginfo",
            "specs": [
                [
                    "==",
                    "1.9.6"
                ]
            ]
        },
        {
            "name": "platformdirs",
            "specs": [
                [
                    "==",
                    "4.2.0"
                ]
            ]
        },
        {
            "name": "playwright",
            "specs": [
                [
                    "==",
                    "1.41.2"
                ]
            ]
        },
        {
            "name": "pluggy",
            "specs": [
                [
                    "==",
                    "1.3.0"
                ]
            ]
        },
        {
            "name": "plumbum",
            "specs": [
                [
                    "==",
                    "1.8.2"
                ]
            ]
        },
        {
            "name": "ply",
            "specs": [
                [
                    "==",
                    "3.11"
                ]
            ]
        },
        {
            "name": "portalocker",
            "specs": [
                [
                    "==",
                    "2.8.2"
                ]
            ]
        },
        {
            "name": "posthog",
            "specs": [
                [
                    "==",
                    "3.4.2"
                ]
            ]
        },
        {
            "name": "protobuf",
            "specs": [
                [
                    "==",
                    "4.25.3"
                ]
            ]
        },
        {
            "name": "psutil",
            "specs": [
                [
                    "==",
                    "5.9.8"
                ]
            ]
        },
        {
            "name": "pulsar-client",
            "specs": [
                [
                    "==",
                    "3.4.0"
                ]
            ]
        },
        {
            "name": "pyaml",
            "specs": [
                [
                    "==",
                    "23.12.0"
                ]
            ]
        },
        {
            "name": "pyasn1",
            "specs": [
                [
                    "==",
                    "0.5.1"
                ]
            ]
        },
        {
            "name": "pyasn1-modules",
            "specs": [
                [
                    "==",
                    "0.3.0"
                ]
            ]
        },
        {
            "name": "PyAudio",
            "specs": [
                [
                    "==",
                    "0.2.14"
                ]
            ]
        },
        {
            "name": "pycparser",
            "specs": [
                [
                    "==",
                    "2.21"
                ]
            ]
        },
        {
            "name": "pycryptodomex",
            "specs": [
                [
                    "==",
                    "3.20.0"
                ]
            ]
        },
        {
            "name": "pydantic",
            "specs": [
                [
                    "==",
                    "2.6.1"
                ]
            ]
        },
        {
            "name": "pydantic_core",
            "specs": [
                [
                    "==",
                    "2.16.2"
                ]
            ]
        },
        {
            "name": "PyDrive",
            "specs": [
                [
                    "==",
                    "1.3.1"
                ]
            ]
        },
        {
            "name": "pydub",
            "specs": [
                [
                    "==",
                    "0.25.1"
                ]
            ]
        },
        {
            "name": "pyee",
            "specs": [
                [
                    "==",
                    "11.0.1"
                ]
            ]
        },
        {
            "name": "pygame",
            "specs": [
                [
                    "==",
                    "2.5.2"
                ]
            ]
        },
        {
            "name": "Pygments",
            "specs": [
                [
                    "==",
                    "2.15.0"
                ]
            ]
        },
        {
            "name": "pyinstaller",
            "specs": [
                [
                    "==",
                    "6.4.0"
                ]
            ]
        },
        {
            "name": "pyinstaller-hooks-contrib",
            "specs": [
                [
                    "==",
                    "2024.1"
                ]
            ]
        },
        {
            "name": "PyJWT",
            "specs": [
                [
                    "==",
                    "2.8.0"
                ]
            ]
        },
        {
            "name": "PyMuPDF",
            "specs": [
                [
                    "==",
                    "1.23.24"
                ]
            ]
        },
        {
            "name": "PyMuPDFb",
            "specs": [
                [
                    "==",
                    "1.23.22"
                ]
            ]
        },
        {
            "name": "pyparsing",
            "specs": [
                [
                    "==",
                    "3.1.1"
                ]
            ]
        },
        {
            "name": "pypdf",
            "specs": [
                [
                    "==",
                    "4.0.2"
                ]
            ]
        },
        {
            "name": "PyPika",
            "specs": [
                [
                    "==",
                    "0.48.9"
                ]
            ]
        },
        {
            "name": "pyproject_hooks",
            "specs": [
                [
                    "==",
                    "1.0.0"
                ]
            ]
        },
        {
            "name": "pyserial",
            "specs": [
                [
                    "==",
                    "3.5"
                ]
            ]
        },
        {
            "name": "PySide6",
            "specs": [
                [
                    "==",
                    "6.4.2"
                ]
            ]
        },
        {
            "name": "PySide6-Addons",
            "specs": [
                [
                    "==",
                    "6.4.2"
                ]
            ]
        },
        {
            "name": "PySide6-Essentials",
            "specs": [
                [
                    "==",
                    "6.4.2"
                ]
            ]
        },
        {
            "name": "PySocks",
            "specs": [
                [
                    "==",
                    "1.7.1"
                ]
            ]
        },
        {
            "name": "pytest",
            "specs": [
                [
                    "==",
                    "7.4.3"
                ]
            ]
        },
        {
            "name": "python-dateutil",
            "specs": [
                [
                    "==",
                    "2.8.2"
                ]
            ]
        },
        {
            "name": "python-dotenv",
            "specs": [
                [
                    "==",
                    "1.0.1"
                ]
            ]
        },
        {
            "name": "pytz",
            "specs": [
                [
                    "==",
                    "2024.1"
                ]
            ]
        },
        {
            "name": "pyxdg",
            "specs": [
                [
                    "==",
                    "0.28"
                ]
            ]
        },
        {
            "name": "PyYAML",
            "specs": [
                [
                    "==",
                    "6.0.1"
                ]
            ]
        },
        {
            "name": "pyzmq",
            "specs": [
                [
                    "==",
                    "25.1.2"
                ]
            ]
        },
        {
            "name": "qt-material",
            "specs": [
                [
                    "==",
                    "2.14"
                ]
            ]
        },
        {
            "name": "readme-renderer",
            "specs": [
                [
                    "==",
                    "37.3"
                ]
            ]
        },
        {
            "name": "redis",
            "specs": [
                [
                    "==",
                    "5.0.1"
                ]
            ]
        },
        {
            "name": "referencing",
            "specs": [
                [
                    "==",
                    "0.33.0"
                ]
            ]
        },
        {
            "name": "regex",
            "specs": [
                [
                    "==",
                    "2023.12.25"
                ]
            ]
        },
        {
            "name": "requests",
            "specs": [
                [
                    "==",
                    "2.31.0"
                ]
            ]
        },
        {
            "name": "requests-file",
            "specs": [
                [
                    "==",
                    "2.0.0"
                ]
            ]
        },
        {
            "name": "requests-oauthlib",
            "specs": [
                [
                    "==",
                    "1.3.1"
                ]
            ]
        },
        {
            "name": "requests-toolbelt",
            "specs": [
                [
                    "==",
                    "1.0.0"
                ]
            ]
        },
        {
            "name": "retrying",
            "specs": [
                [
                    "==",
                    "1.3.4"
                ]
            ]
        },
        {
            "name": "rfc3986",
            "specs": [
                [
                    "==",
                    "2.0.0"
                ]
            ]
        },
        {
            "name": "rich",
            "specs": [
                [
                    "==",
                    "13.3.4"
                ]
            ]
        },
        {
            "name": "rpds-py",
            "specs": [
                [
                    "==",
                    "0.18.0"
                ]
            ]
        },
        {
            "name": "rsa",
            "specs": [
                [
                    "==",
                    "4.9"
                ]
            ]
        },
        {
            "name": "SecretStorage",
            "specs": [
                [
                    "==",
                    "3.3.3"
                ]
            ]
        },
        {
            "name": "selenium",
            "specs": [
                [
                    "==",
                    "4.18.1"
                ]
            ]
        },
        {
            "name": "sgmllib3k",
            "specs": [
                [
                    "==",
                    "1.0.0"
                ]
            ]
        },
        {
            "name": "shiboken6",
            "specs": [
                [
                    "==",
                    "6.4.2"
                ]
            ]
        },
        {
            "name": "show-in-file-manager",
            "specs": [
                [
                    "==",
                    "1.1.4"
                ]
            ]
        },
        {
            "name": "six",
            "specs": [
                [
                    "==",
                    "1.16.0"
                ]
            ]
        },
        {
            "name": "sniffio",
            "specs": [
                [
                    "==",
                    "1.3.0"
                ]
            ]
        },
        {
            "name": "sortedcontainers",
            "specs": [
                [
                    "==",
                    "2.4.0"
                ]
            ]
        },
        {
            "name": "soupsieve",
            "specs": [
                [
                    "==",
                    "2.5"
                ]
            ]
        },
        {
            "name": "SpeechRecognition",
            "specs": [
                [
                    "==",
                    "3.10.1"
                ]
            ]
        },
        {
            "name": "SQLAlchemy",
            "specs": [
                [
                    "==",
                    "2.0.27"
                ]
            ]
        },
        {
            "name": "starlette",
            "specs": [
                [
                    "==",
                    "0.36.3"
                ]
            ]
        },
        {
            "name": "sympy",
            "specs": [
                [
                    "==",
                    "1.12"
                ]
            ]
        },
        {
            "name": "tenacity",
            "specs": [
                [
                    "==",
                    "8.2.3"
                ]
            ]
        },
        {
            "name": "tiktoken",
            "specs": [
                [
                    "==",
                    "0.5.2"
                ]
            ]
        },
        {
            "name": "tinycss2",
            "specs": [
                [
                    "==",
                    "1.2.1"
                ]
            ]
        },
        {
            "name": "tinysegmenter",
            "specs": [
                [
                    "==",
                    "0.3"
                ]
            ]
        },
        {
            "name": "tldextract",
            "specs": [
                [
                    "==",
                    "5.1.1"
                ]
            ]
        },
        {
            "name": "tokenizers",
            "specs": [
                [
                    "==",
                    "0.15.2"
                ]
            ]
        },
        {
            "name": "tomli",
            "specs": [
                [
                    "==",
                    "2.0.1"
                ]
            ]
        },
        {
            "name": "tornado",
            "specs": [
                [
                    "==",
                    "6.4"
                ]
            ]
        },
        {
            "name": "tqdm",
            "specs": [
                [
                    "==",
                    "4.66.2"
                ]
            ]
        },
        {
            "name": "traitlets",
            "specs": [
                [
                    "==",
                    "5.14.1"
                ]
            ]
        },
        {
            "name": "trio",
            "specs": [
                [
                    "==",
                    "0.24.0"
                ]
            ]
        },
        {
            "name": "trio-websocket",
            "specs": [
                [
                    "==",
                    "0.11.1"
                ]
            ]
        },
        {
            "name": "tweepy",
            "specs": [
                [
                    "==",
                    "4.14.0"
                ]
            ]
        },
        {
            "name": "twine",
            "specs": [
                [
                    "==",
                    "4.0.2"
                ]
            ]
        },
        {
            "name": "typer",
            "specs": [
                [
                    "==",
                    "0.9.0"
                ]
            ]
        },
        {
            "name": "typing-inspect",
            "specs": [
                [
                    "==",
                    "0.9.0"
                ]
            ]
        },
        {
            "name": "typing_extensions",
            "specs": [
                [
                    "==",
                    "4.9.0"
                ]
            ]
        },
        {
            "name": "tzdata",
            "specs": [
                [
                    "==",
                    "2024.1"
                ]
            ]
        },
        {
            "name": "uritemplate",
            "specs": [
                [
                    "==",
                    "4.1.1"
                ]
            ]
        },
        {
            "name": "urllib3",
            "specs": [
                [
                    "==",
                    "1.26.18"
                ]
            ]
        },
        {
            "name": "uvicorn",
            "specs": [
                [
                    "==",
                    "0.27.1"
                ]
            ]
        },
        {
            "name": "watchfiles",
            "specs": [
                [
                    "==",
                    "0.21.0"
                ]
            ]
        },
        {
            "name": "webencodings",
            "specs": [
                [
                    "==",
                    "0.5.1"
                ]
            ]
        },
        {
            "name": "websocket-client",
            "specs": [
                [
                    "==",
                    "1.7.0"
                ]
            ]
        },
        {
            "name": "websockets",
            "specs": [
                [
                    "==",
                    "12.0"
                ]
            ]
        },
        {
            "name": "wikipedia",
            "specs": [
                [
                    "==",
                    "1.4.0"
                ]
            ]
        },
        {
            "name": "wrapt",
            "specs": [
                [
                    "==",
                    "1.16.0"
                ]
            ]
        },
        {
            "name": "wsproto",
            "specs": [
                [
                    "==",
                    "1.2.0"
                ]
            ]
        },
        {
            "name": "yarl",
            "specs": [
                [
                    "==",
                    "1.9.4"
                ]
            ]
        },
        {
            "name": "youtube-transcript-api",
            "specs": [
                [
                    "==",
                    "0.6.2"
                ]
            ]
        },
        {
            "name": "zipp",
            "specs": [
                [
                    "==",
                    "3.17.0"
                ]
            ]
        }
    ],
    "lcname": "pygpt-net"
}
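For reference, the `requirements` array in the record above pairs each dependency name with a list of `[operator, version]` specs. Below is a minimal sketch of flattening that record into `requirements.txt`-style pins; it assumes the JSON shown above has been saved locally as `pygpt-net.json` (a hypothetical filename).

```python
import json

# Load a local copy of the metadata record shown above (hypothetical filename).
with open("pygpt-net.json", encoding="utf-8") as fh:
    meta = json.load(fh)

# Each entry in "requirements" looks like {"name": "aiohttp", "specs": [["==", "3.9.3"]]}.
for req in meta.get("requirements", []):
    spec = ",".join(op + version for op, version in req.get("specs", []))
    print(f"{req['name']}{spec}")  # e.g. "aiohttp==3.9.3"
```

These exact pins mirror the project's requirements.txt referenced in the README above; the ranges declared by the wheel itself may differ.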
        