askai

Name: askai
Version: 1.0.5
Home page: https://github.com/maxvfischer/askai
Summary: Your simple terminal helper
Author: Max Fischer
License: MIT
Upload time: 2023-01-18 18:56:37
            <div align="center">
    <img style="display: block;" align="center" src="https://github.com/maxvfischer/askai/blob/main/images/logo.png?raw=True"/>
</div>

`askai` is a CLI integration with OpenAI's GPT-3, enabling you to ask questions and
receive answers straight in your terminal.

![conda](https://github.com/maxvfischer/askai/blob/main/images/question_conda.svg?raw=True)


| ❗ **Other model integrations** ❗                                                                                                                            |
|-------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Currently, `askai` only supports integration with OpenAI. As other NLP API endpoints become available, I will work on integrating them as well. |


## Installation

You can either install it through pip

```bash
pip install askai
```

or directly from the repo

```bash
git clone git@github.com:maxvfischer/askai.git
cd askai
pip install .
```

## Initialize askai

`askai` needs to be initialized to set up the default config and to connect your 
OpenAI API key. Note that the key is only stored locally in `~/.askai/key`.

```bash
askai init
```

![init](https://github.com/maxvfischer/askai/blob/main/images/init.svg?raw=True)

## Create OpenAI API-key

For `askai` to work, you need to add an OpenAI API-key. When creating an OpenAI account, 
they give you $18 to use for free. After that you need to set up a paid account to 
continue using their service. 

During the development and testing of this CLI, I used $0.67.

An OpenAI API-key can be created by:

1. Creating an account on OpenAI: https://openai.com/api/
2. Logging in and clicking on `New API keys`
3. Clicking `Create new secret key`
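
After `askai init` prompts for the key, it is stored under `~/.askai/key` as noted above. A minimal sketch of how such a key file could be written with owner-only permissions (the function name is illustrative, not `askai`'s actual internals):

```python
import os
from pathlib import Path

def save_api_key(key: str, path: Path = Path.home() / ".askai" / "key") -> Path:
    """Write the API key to a local file readable only by its owner."""
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(key.strip() + "\n")
    os.chmod(path, 0o600)  # owner read/write only; keep the key out of other users' reach
    return path
```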

## How to use


### Simple question
Ask a question using your saved config.

```bash
askai "<QUESTION>"
```
![conda](https://github.com/maxvfischer/askai/blob/main/images/question_conda.svg?raw=True)


### Override config
It's possible to override the default config by using arguments:

```bash
askai "<QUESTION>" --num-answers <INT> --model <MODEL_STRING> --temperature <FLOAT> --top-p <FLOAT> --max-tokens <INT> --frequency-penalty <FLOAT> --presence-penalty <FLOAT>
```
![conda](https://github.com/maxvfischer/askai/blob/main/images/haiku.svg?raw=True)

| **Argument**        | **Allowed values**               | **Description**                                                                                                                                                |
|---------------------|----------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------|
| --num-answers or -n | \>0                              | Number of answers to generate. Note that more answers consume more tokens                                                                                      |
| --model or -m       | See list below                   | Which model to use. See list of available models below.                                                                                                        |
| --temperature or -t | 0.0 <= t <= 1.0                  | What sampling temperature to use. Higher value makes the model more  "creative". Do not use at the same time as `top-p`.                                       |
| --top-p             | 0.0 <= top_p <= 1.0              | Nucleus sampling: the model considers only the tokens comprising the top_p probability mass. Do not use at the same time as `temperature`.                      |
| --max-tokens        | \>0                              | Maximum number of tokens used per question (incl. question + answer)                                                                                           |
| --frequency-penalty | -2.0 <= frequency_penalty <= 2.0 | Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.  |
| --presence-penalty  | -2.0 <= presence_penalty <= 2.0  | Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.               |
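
The flags above map directly onto the parameters of an OpenAI completion request. A hedged sketch of how a CLI like this might validate and assemble them (the function is illustrative; the parameter names follow the pre-1.0 `openai.Completion` API, and the request itself is not sent here):

```python
from typing import Optional

def build_completion_kwargs(question: str,
                            num_answers: int = 1,
                            model: str = "text-davinci-003",
                            temperature: Optional[float] = None,
                            top_p: Optional[float] = None,
                            max_tokens: int = 300,
                            frequency_penalty: float = 0.0,
                            presence_penalty: float = 0.0) -> dict:
    """Validate CLI-style arguments and map them to completion parameters."""
    if num_answers < 1:
        raise ValueError("--num-answers must be > 0")
    if temperature is not None and top_p is not None:
        raise ValueError("use either --temperature or --top-p, not both")
    if not -2.0 <= frequency_penalty <= 2.0:
        raise ValueError("--frequency-penalty must be within [-2.0, 2.0]")
    if not -2.0 <= presence_penalty <= 2.0:
        raise ValueError("--presence-penalty must be within [-2.0, 2.0]")
    kwargs = {
        "model": model,
        "prompt": question,
        "n": num_answers,  # number of answers to generate
        "max_tokens": max_tokens,
        "frequency_penalty": frequency_penalty,
        "presence_penalty": presence_penalty,
    }
    # Only pass the sampling knob that was actually set.
    if temperature is not None:
        kwargs["temperature"] = temperature
    if top_p is not None:
        kwargs["top_p"] = top_p
    return kwargs
```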

## Update config
If you find yourself overriding the config a lot when asking questions, you can update the default config instead.

### Update all config values

```bash
askai config update all
```

### Update individual config values

```bash
askai config update num-answers
askai config update model
askai config update temperature
askai config update top-p
askai config update max-tokens
askai config update frequency-penalty
askai config update presence-penalty
```
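
Under the hood, a default config like this presumably lives alongside the key under `~/.askai/`. A sketch of how such a per-key update could work; the file name, JSON format, and default values here are assumptions, not `askai`'s actual layout:

```python
import json
from pathlib import Path

# Assumed defaults for illustration only.
DEFAULT_CONFIG = {
    "num_answers": 1,
    "model": "text-davinci-003",
    "temperature": 0.4,
    "max_tokens": 300,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
}

def update_config(key: str, value, path: Path) -> dict:
    """Load the config (or defaults), set one value, and write it back."""
    if key not in DEFAULT_CONFIG:
        raise KeyError(f"unknown config key: {key}")
    config = dict(DEFAULT_CONFIG)
    if path.exists():
        config.update(json.loads(path.read_text()))
    config[key] = value
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(config, indent=2))
    return config
```

Resetting to defaults then amounts to writing `DEFAULT_CONFIG` back out unchanged.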

### Reset to default config
```bash
askai config reset
```

### See current config
```bash
askai config show
```

## Update API-key

It's possible to update the API-key without re-initializing the CLI.

### Overwrite current API-key
```bash
askai key add
```

### Remove current API-key
```bash
askai key remove
```

## Available models

This list was updated from OpenAI's website on 2022-12-20 and might go out of date at any time. Please
check [here](https://beta.openai.com/docs/models) to see an accurate list.

#### Text-generating models
| --model          | Description                                  | Max tokens |
|------------------|----------------------------------------------|------------|
| text-davinci-003 | The most capable GPT3 model                  | 4000       |
| text-curie-001   | Worse than davinci. Still capable and faster | 2048       |
| text-babbage-001 | Can do straightforward tasks. Very fast      | 2048       |
| text-ada-001     | Capable of very simple tasks. Very fast      | 2048       |

#### Code-generating models
| --model          | Description                               | Max tokens |
|------------------|-------------------------------------------|------------|
| code-davinci-002 | Most capable code-generating model.       | 8000       |
| code-cushman-001 | Almost as capable as davinci, but faster. | 2048       |
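
Because `--max-tokens` covers question plus answer, the per-model limits above determine whether a given request even fits. A small illustrative helper (limits copied from the tables above and just as liable to go out of date; not part of `askai`):

```python
# Max total tokens (prompt + completion) per model, from the tables above (2022-12-20).
MODEL_MAX_TOKENS = {
    "text-davinci-003": 4000,
    "text-curie-001": 2048,
    "text-babbage-001": 2048,
    "text-ada-001": 2048,
    "code-davinci-002": 8000,
    "code-cushman-001": 2048,
}

def fits_model(model: str, prompt_tokens: int, answer_tokens: int) -> bool:
    """Check whether the prompt plus the requested answer fits the model's limit."""
    return prompt_tokens + answer_tokens <= MODEL_MAX_TOKENS[model]
```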

## Important notes

Note that the answers generated by OpenAI and shown by `askai` are by no means guaranteed
to be true. Always stay skeptical. And of course, don't execute generated commands or code
without verifying that they do what you asked for.
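
One habit that helps: never pipe a generated command straight into a shell. A tiny guard like the following (illustrative, not part of `askai`) forces an explicit confirmation before anything runs:

```python
import subprocess

def run_if_confirmed(command: str, confirm=input) -> bool:
    """Show a generated shell command and run it only after an explicit 'yes'."""
    answer = confirm(f"Run `{command}`? Type 'yes' to proceed: ")
    if answer.strip().lower() != "yes":
        return False  # anything other than 'yes' aborts
    subprocess.run(command, shell=True, check=True)
    return True
```

Injecting `confirm` makes the guard easy to test without interactive input.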

            
