# Charla: Chat with Language Models in a Terminal

[![PyPI - Version](https://img.shields.io/pypi/v/charla.svg)](https://pypi.org/project/charla)
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/charla.svg)](https://pypi.org/project/charla)

**Charla** is a terminal-based application for chatting with language models. It integrates with Ollama and GitHub Models to exchange messages with model services.

![preview](https://geeksta.net/img/tools/charla-chat-demo.gif)

## Features

* Terminal-based chat system that supports context-aware conversations with language models.
* Support for local models via Ollama and remote models via GitHub Models.
* Chat sessions are saved as Markdown files in the user's documents directory when a chat ends.
* Prompt history is saved and previously entered prompts are auto-suggested.
* Switch between single-line and multi-line input modes without interrupting the chat session.
* Store user preferences in a user config file or in a settings file in the current directory.
* Provide a system prompt for a chat session.
* Load content from local files and web pages to append to prompts.

## Installation

To use Charla with models on your computer, you need a running [Ollama server](https://ollama.com/download) with at least one supported language model installed. For [GitHub Models](https://github.com/marketplace/models) you need access to the service and a GitHub token. Refer to the documentation of the service provider you want to use for installation and setup instructions.

Install Charla using `pipx`:

```console
pipx install charla
```
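
Charla is published on PyPI, so a plain `pip` install also works, ideally inside a virtual environment:

```console
pip install charla
```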

For GitHub Models, set the `GITHUB_TOKEN` environment variable to your token. In Bash:

```console
export GITHUB_TOKEN=YOUR_GITHUB_TOKEN
```
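
To persist the token across shell sessions, you can append the export to your shell profile (`~/.bashrc` is shown as an example; the file depends on your shell):

```console
echo 'export GITHUB_TOKEN=YOUR_GITHUB_TOKEN' >> ~/.bashrc
```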

## Usage

After successful installation and setup, launch the chat console with the `charla` command in your terminal.

If you use Charla with Ollama, the default provider, you only need to specify the model to use, e.g.:

```console
charla -m phi3
```
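
If the model is not installed locally yet, you can fetch it first with Ollama's own CLI, independently of Charla:

```console
ollama pull phi3
```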

If you want to use GitHub Models, you have to set the provider:

```console
charla -m gpt-4o --provider github
```

You can set a default model and change the default provider in your user settings file.
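
For example, a minimal user settings file that makes `phi3` the default could look like the sketch below; the `ollama` provider identifier is an assumption based on the default provider's name:

```json
{
    "model": "phi3",
    "provider": "ollama"
}
```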

## Settings

Settings can be specified as command line arguments and in settings files. Command line arguments take the highest priority. The location of your user config settings file depends on your operating system; use the following command to show it:

```console
charla settings --location
```
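
To inspect the settings currently in effect, run the subcommand without the location flag:

```console
charla settings
```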

You can also store settings in a file named `.charla.json` in the current working directory. The settings in this local file override the user config settings.

Example settings for using OpenAI's GPT-4o model and the GitHub Models service by default:

```json
{
    "model": "gpt-4o",
    "chats_path": "./chats",
    "prompt_history": "./prompt-history.txt",
    "provider": "github",
    "message_limit": 20,
    "multiline": false
}
```
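
Because command line arguments take precedence over settings files, you can override these defaults for a single session, for example to chat with a local model instead (model and provider names are illustrative):

```console
charla -m phi3 --provider ollama
```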

## CLI Help

Output of `charla -h` with information on all available command line options:

<!-- START: DO NOT EDIT -->
```text
usage: charla [-h] [--model MODEL] [--chats-path CHATS_PATH] [--prompt-history PROMPT_HISTORY]
              [--provider PROVIDER] [--message-limit MESSAGE_LIMIT] [--multiline] [--system-prompt SYSTEM_PROMPT]
              [--version]
              {settings} ...

Chat with language models.

positional arguments:
  {settings}            Sub Commands
    settings            Show current settings.

options:
  -h, --help            show this help message and exit
  --model MODEL, -m MODEL
                        Name of language model to chat with.
  --chats-path CHATS_PATH
                        Directory to store chats.
  --prompt-history PROMPT_HISTORY
                        File to store prompt history.
  --provider PROVIDER   Name of the provider to use.
  --message-limit MESSAGE_LIMIT
                        Maximum number of messages to send to GitHub Models service.
  --multiline           Use multiline mode.
  --system-prompt SYSTEM_PROMPT, -sp SYSTEM_PROMPT
                        File that contains system prompt to use.
  --version             show program's version number and exit

```
<!-- END: DO NOT EDIT -->

## Development

Run the command-line interface directly from the project source without installing the package:

```console
python -m charla.cli
```
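
For iterating on the code, one common setup, not prescribed by the project, is an editable install into a virtual environment:

```console
python -m venv .venv && source .venv/bin/activate
pip install -e .
```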

### Ollama API

List installed models:

```console
curl http://localhost:11434/api/tags
```
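
The endpoint returns a single JSON document; piping it through Python's built-in `json.tool` makes the output readable:

```console
curl -s http://localhost:11434/api/tags | python -m json.tool
```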

Show model info:

```console
curl http://localhost:11434/api/show -d '{"name": "phi3"}'
```
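
If the Ollama CLI is installed, roughly the same information is available without curl:

```console
ollama list
ollama show phi3
```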

## License

Charla is distributed under the terms of the [MIT](https://spdx.org/licenses/MIT.html) license.

            
