qprom 0.6.0

Name: qprom
Version: 0.6.0
Home page: https://github.com/MartinWie/qprom
Summary: A Python-based CLI tool to quickly interact with OpenAI's GPT models instead of relying on the web interface.
Upload time: 2024-03-06 00:41:07
Author: MartinWiechmann
License: MIT
Keywords: gpt-4, cli, gpt-3, openai
Requirements: openai, tiktoken, argparse
# qprom a.k.a. Quick Prompt - ChatGPT CLI

[![OS](https://img.shields.io/badge/Runs%20on%3A-Linux%20%7C%20Mac-green)]() [![RunsOn](https://img.shields.io/github/license/MartinWie/AEnv)](https://github.com/MartinWie/AEnv/blob/master/LICENSE) [![Open Source](https://badges.frapsoft.com/os/v1/open-source.svg?v=103)](https://opensource.org/)

![qprom](https://github.com/MartinWie/qprom/blob/main/qprom_logo.png)

A Python-based CLI tool to quickly interact with OpenAI's GPT models instead of relying on the web interface.

## Table of Contents

1. [Description](#description)
2. [Installation](#installation)
3. [Setup](#Setup)
4. [Usage](#Usage)
5. [Todos](#Todos)
6. [License](#License)

## Description

qprom is a small project that lets you interact with OpenAI's GPT-4 and GPT-3.5 chat APIs quickly, without having to use the web UI.
This enables faster response times and better [data privacy](https://openai.com/policies/api-data-usage-policies).

## Installation

```bash
pip install qprom
```

## Setup

Make sure you have your [OpenAI API key](https://platform.openai.com/account/api-keys).

When running qprom, the script tries to fetch the OpenAI API key from a credentials file located in the `.qprom` folder within your home directory.
If no key is found there, you are prompted to provide one, and the key you enter is stored in that credentials file for future use.
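The lookup-then-prompt flow described above can be sketched in a few lines of Python. This is an illustrative sketch, not qprom's actual source; the file name `credentials` and the function name are assumptions.

```python
from pathlib import Path

def load_or_prompt_api_key(config_dir: Path, prompt_fn=input) -> str:
    """Return the stored API key, prompting for and caching it if absent.

    Hypothetical helper: config_dir would be ~/.qprom in qprom's case.
    """
    config_dir.mkdir(parents=True, exist_ok=True)
    credentials = config_dir / "credentials"  # assumed file name
    if credentials.exists():
        key = credentials.read_text().strip()
        if key:
            return key
    # No stored key: ask the user once and persist the answer.
    key = prompt_fn("Enter your OpenAI API key: ").strip()
    credentials.write_text(key)
    return key
```

The second invocation returns the cached key without prompting, which matches the behavior described above.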

## Usage

| Argument | Type    | Default         | Choices                         | Description                                                                                            | Optional |
|----------|---------|-----------------|---------------------------------|--------------------------------------------------------------------------------------------------------|---|
| `-p`     | String  | None            | None                            | Option to directly enter your prompt (Do not use this flag if you intend to have a multi-line prompt.) | yes |
| `-m`     | String  | `gpt-3.5-turbo` | `gpt-3.5-turbo`, `gpt-4`, `...` | Option to select the model                                                                             | yes |
| `-M`     | String  | `gpt-3.5-turbo` | `gpt-3.5-turbo`, `gpt-4`, `...` | Set the default model                                                                                  | yes |
| `-t`     | Float   | `0.3`           | Between `0` and `2`             | Option to configure the temperature                                                                    | yes |
| `-v`     | Boolean | `False`         | None                            | Enable verbose mode                                                                                    | yes |
| `-c`     | Boolean | `False`         | None                            | Enable conversation mode                                                                               | yes |
| `-tk`    | String  | `6500`          | None                            | Set the token limit for the current prompt/conversation                                                | yes |
| `-TK`    | String  | `6500`          | None                            | Set the default token limit                                                                            | yes |

### Basic usage

```bash
qprom -p <prompt> -m <model> -t <temperature> -v -c
```

- `<prompt>`: Replace with your prompt
- `<model>`: Replace with either `gpt-3.5-turbo` or `gpt-4`
- `<temperature>`: Replace with a float value between `0` and `2`
- `-v`: Add this flag to enable verbose mode
- `-c`: Add this flag to enable conversation mode

For example:

```bash
qprom -p "Translate the following English text to French: '{text}'" -m gpt-4 -t 0.7 -v
```

This will run the script with the provided prompt, using the `gpt-4` model, a temperature of `0.7`, and verbose mode enabled.
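Since qprom's pinned requirements include `argparse`, the flag handling above can be sketched with a minimal parser. This is an illustrative sketch mirroring the table, not qprom's actual parser; defaults follow the table (temperature `0.3`, model `gpt-3.5-turbo`).

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """A minimal argparse sketch of the core flags; assumed, not qprom's code."""
    p = argparse.ArgumentParser(prog="qprom")
    p.add_argument("-p", help="prompt text (omit for multi-line input)")
    p.add_argument("-m", default="gpt-3.5-turbo", help="model to use")
    p.add_argument("-t", type=float, default=0.3, help="temperature, 0 to 2")
    p.add_argument("-v", action="store_true", help="enable verbose mode")
    p.add_argument("-c", action="store_true", help="enable conversation mode")
    return p

args = build_parser().parse_args(["-p", "hello", "-m", "gpt-4", "-t", "0.7", "-v"])
```

Parsing the example invocation above yields `args.m == "gpt-4"`, `args.t == 0.7`, and `args.v == True`.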

### Multi-line prompting
To provide a multi-line prompt, invoke qprom without the `-p` flag. You will be prompted for input at runtime and can enter as many lines as needed. To signal the end of your input, enter the string `END` on its own line.

```bash
qprom
```

This runs qprom with the default values (model `gpt-3.5-turbo`, temperature `0.3`) and asks for the prompt at runtime.
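The read-until-`END` loop described above can be sketched like this. This is an assumed implementation for illustration, not qprom's actual code:

```python
import sys

def read_multiline_prompt(stream=sys.stdin, sentinel="END") -> str:
    """Collect lines until the sentinel (or EOF) and join them into one prompt."""
    lines = []
    for line in stream:
        if line.strip() == sentinel:
            break  # user signaled the end of the prompt
        lines.append(line.rstrip("\n"))
    return "\n".join(lines)
```

Passing any file-like stream makes the helper easy to test without a real terminal.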

### Set default model

```bash
qprom -M <model-name>
```

### Set token limit for prompt/conversation

```bash
qprom -tk <token-limit>
```

### Set default token limit

```bash
qprom -TK <token-limit>
```
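A token limit in conversation mode typically means dropping the oldest turns once the history exceeds the budget. The sketch below uses a crude word-count stand-in for token counting; qprom's requirements pin `tiktoken`, which would give exact per-model counts. The function name and trimming strategy are assumptions, not qprom's documented behavior.

```python
def trim_history(messages, token_limit, count_tokens=lambda m: len(m.split())):
    """Drop the oldest messages until the estimated total fits the budget.

    count_tokens defaults to a naive word count; swap in a tiktoken-based
    counter for real token accounting.
    """
    trimmed = list(messages)
    while trimmed and sum(count_tokens(m) for m in trimmed) > token_limit:
        trimmed.pop(0)  # discard the oldest turn first
    return trimmed
```

With a budget of 4 "tokens", `trim_history(["one two three", "four five", "six"], 4)` drops the oldest message and keeps the last two.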


### Piping console input into qprom 
Just pipe the prompt into qprom.

```bash
cat prompt.txt | qprom
```
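Reading a piped prompt usually comes down to checking whether stdin is attached to a terminal. A minimal sketch, with an injectable stream so the logic is testable (this is assumed behavior, not qprom's actual code):

```python
import sys

def prompt_from_pipe(stream=sys.stdin):
    """Return piped input as the prompt, or None when running interactively."""
    if stream.isatty():
        return None  # interactive session: fall back to prompting the user
    return stream.read().strip()
```

An `io.StringIO` reports `isatty()` as `False`, so it behaves like piped input in tests.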

## Todos

* Cleanup project / refactoring
* Add option to set default temperature
* Add option to re-set the API token
* Testing
* Add option to disable streaming and only print the full response


**Bug reports:** please open an issue on [GitHub](https://github.com/MartinWie/qprom/issues).


## License

MIT ([LICENSE](https://github.com/MartinWie/qprom/blob/master/LICENSE))

## Support me :heart: :star: :money_with_wings:
If this project provided value and you want to give something back, you can star the repo or buy me a coffee.

<a href="https://buymeacoffee.com/MartinWie" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-blue.png" alt="Buy Me A Coffee" width="170"></a>

            
