<!--
SPDX-FileCopyrightText: 2021 Jeff Epler

SPDX-License-Identifier: MIT
-->
[![Test](https://github.com/jepler/chap/actions/workflows/test.yml/badge.svg)](https://github.com/jepler/chap/actions/workflows/test.yml)
[![Release chap](https://github.com/jepler/chap/actions/workflows/release.yml/badge.svg?event=release)](https://github.com/jepler/chap/actions/workflows/release.yml)
[![PyPI](https://img.shields.io/pypi/v/chap)](https://pypi.org/project/chap/)

# chap - A Python interface to chatgpt and other LLMs, including a terminal user interface (tui)

![Chap screencast](https://raw.githubusercontent.com/jepler/chap/main/chap.gif)

## System requirements

Chap is primarily developed on Linux with Python 3.11. Moderate effort will be made to support versions back to Python 3.9 (Debian oldstable).

## Installation

If you want `chap` available as a command, install it with `pipx install chap` or `pip install chap`.

Use a virtual environment unless you want it installed globally.
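
As a minimal sketch (assuming a POSIX shell; the directory name `venv` is just an example):

```shell
# create and activate a fresh virtual environment, then install chap into it
python3 -m venv venv
. venv/bin/activate
pip install chap

# the chap command is now available inside the virtual environment
chap --help
```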

## Installation for development

Use one of the following two methods to run `chap` as a command, with the ability to edit the source files. You are welcome to submit valuable changes as [a pull request](https://github.com/jepler/chap/pulls).

### Via `pip install --editable .`

This is an "editable install", as [recommended by the Python Packaging Authority](https://setuptools.pypa.io/en/latest/userguide/development_mode.html).

Change directory to the root of the `chap` project.

Activate your virtual environment, then install `chap` in development mode:

```shell
pip install --editable .
```

In this mode, the `chap` command-line program is installed, and you can still edit the source files in the `src` directory in place.

### Via `chap-dev.py`

A simple shim script called `chap-dev.py` is included to demonstrate how to load and run the `chap` library without installing `chap` in development mode. This method may be more familiar to some developers.

Change directory to the root of the `chap` project.

Activate your virtual environment, then install requirements:

```shell
pip install -r requirements.txt
```

Run the shim script (with optional command flags as appropriate):

```shell
./chap-dev.py
```

In this mode, you can edit the source files in the `src` directory in place, and the shim script will pick up the changes when it imports the library.
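
For instance, assuming the shim accepts the same subcommands and flags as the installed `chap` command:

```shell
# run chap directly from the source checkout; edits to src/ take effect on the next run
./chap-dev.py ask "What does this shim script do?"
```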

## Contributing

See [CONTRIBUTING.md](CONTRIBUTING.md).

## Code of Conduct

See [CODE\_OF\_CONDUCT.md](CODE_OF_CONDUCT.md).

## Configuration

Put your OpenAI API key in the platform configuration directory for chap, e.g., on linux/unix systems at `~/.config/chap/openai_api_key`.
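
For example, on Linux (a sketch; the key value is a placeholder, and tightening the file's permissions is just a sensible precaution):

```shell
# create chap's configuration directory and store the OpenAI API key
mkdir -p ~/.config/chap
echo "sk-your-api-key-here" > ~/.config/chap/openai_api_key
chmod 600 ~/.config/chap/openai_api_key
```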

## Command-line usage

 * `chap ask "What advice would you give a 20th century human visiting the 21st century for the first time?"`

 * `chap render --last` / `chap cat --last`

 * `chap import chatgpt-style-chatlog.json` (for files from pionxzh/chatgpt-exporter)

 * `chap grep needle`
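
A short example workflow combining these commands:

```shell
# ask a question, then re-render the most recent session
chap ask "Name three things to see in the 21st century"
chap render --last

# search for a term
chap grep century
```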

## Interactive terminal usage

The interactive terminal mode is accessed via `chap tui`.

There are a variety of keyboard shortcuts to be aware of:
 * tab/shift-tab to move between the entry field and the conversation, or between conversation items
 * while in the text box, F9 or (if supported by your terminal) alt+enter to submit multiline text
 * while on a conversation item:
   * ctrl+x to re-draft the message. This
     * saves a copy of the session in an auto-named file in the conversations folder
     * removes the conversation from this message to the end
     * puts the user's message in the text box to edit
   * ctrl+x to re-submit the message. This
     * saves a copy of the session in an auto-named file in the conversations folder
     * removes the conversation from this message to the end
     * puts the user's message in the text box
     * and submits it immediately
   * ctrl+y to yank the message. This places the response part of the current
     interaction in the operating system clipboard to be pasted (e.g., with
     ctrl+v or command+v in other software)
   * ctrl+q to toggle whether this message may be included in the contextual history for a future query.
     The exact way history is submitted is determined by the back-end, often by
     counting messages or tokens, but the ctrl+q toggle ensures that this message (both the
     user and assistant parts) is not considered.

## Sessions & Command-line Parameters

Details of session handling & command-line arguments are in flux.

By default, a new session is created. It is saved to the user's state directory
(e.g., `~/.local/state/chap` on linux/unix systems).

You can specify the session filename for a new session with `-n` or to re-open
an existing session with `-s`. Or, you can continue the last session with
`--last`.

You can set the "system message" with the `-S` flag.
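
For example (a sketch; the system message is illustrative, and the flags are placed after the subcommand, following the `chap render --last` pattern shown above):

```shell
# start a fresh session with a custom system message
chap ask -S "You are a terse history tutor" "What changed most between 1923 and 2023?"

# continue the most recent session with a follow-up question
chap ask --last "And what changed least?"
```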

You can select the text-generating backend with the `-b` flag (an example invocation follows this list):
 * openai-chatgpt: the default; a paid API with the best-quality results
 * llama-cpp: works with [llama.cpp's http server](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md) and can run locally with various models,
 though it is [optimized for models that use llama2-style prompting](https://huggingface.co/blog/llama2#how-to-prompt-llama-2).
 Set the server URL with `-B url:...`.
 * textgen: works with https://github.com/oobabooga/text-generation-webui and can run locally with various models.
 Needs the server URL in *$configuration_directory/textgen\_url*.
 * lorem: a local, non-AI lorem generator for testing
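
For example, a sketch of pointing chap at a locally running llama.cpp server (the host, port, and flag placement are assumptions; adjust to your setup):

```shell
# select the llama-cpp back-end and override its server URL for this invocation
chap ask -b llama-cpp -B url:http://localhost:8080/completion "Explain tokenization in one paragraph"
```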

## Environment variables

The backend can be set with the `CHAP_BACKEND` environment variable.

Backend settings can be set with `CHAP_<backend_name>_<parameter_name>`, with `backend_name` and `parameter_name` all in caps.

For instance, `CHAP_LLAMA_CPP_URL=http://server.local:8080/completion` changes the default server URL for the llama-cpp back-end.
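
As another sketch, selecting the offline test back-end for a single invocation:

```shell
# CHAP_BACKEND overrides the default back-end without any command-line flags
CHAP_BACKEND=lorem chap ask "placeholder text, please"
```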

## Importing from ChatGPT

The userscript https://github.com/pionxzh/chatgpt-exporter can export chat logs from chat.openai.com in a JSON format.

This format is different from chap's, especially since `chap` currently only represents a single branch of conversation in one log.

You can use the `chap import` command to import all the branches of a chatgpt-style chatlog in JSON format into a series of `chap`-style chat logs.

## Plug-ins

Chap supports back-end and command plug-ins.

"Back-ends" add additional text generators.

"Commands" add new ways to interact with text generators, session data, and so forth.

Install a plug-in with `pip install` or `pipx inject` (depending on how you installed chap) and then use it as normal.

[chap-backend-replay](https://pypi.org/project/chap-backend-replay/) is an example back-end plug-in. It replays answers from a previous session.

[chap-command-explain](https://pypi.org/project/chap-command-explain/) is an example command plug-in. It is similar to `chap ask`.
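
For example, to add the replay back-end plug-in (a sketch; pick the variant that matches how you installed chap):

```shell
# if chap was installed with pipx
pipx inject chap chap-backend-replay

# if chap was installed with pip into a virtual environment
pip install chap-backend-replay
```

Once installed, the plug-in's back-end or command should be available just like the built-in ones.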

At this time, there is no stability guarantee for the API of commands or backends.

            
