gigashell

Name: gigashell
Version: 0.9.4.3
Summary: A command-line productivity tool powered by GigaChat models that will help you accomplish your tasks faster and more efficiently.
Upload time: 2023-10-04 10:38:06
Requires Python: >=3.6
Keywords: cheet-sheet, cli, gigachain, gigachat, gpt, openai, productivity, shell
Requirements: No requirements were recorded.

# GigaShell
A command-line productivity tool powered by large language models. As developers, we can use AI capabilities to generate shell commands, code snippets, comments, and documentation, among other things. Forget about cheat sheets and notes: with this tool you can get accurate answers right in your terminal, and you will probably find yourself cutting down your daily Google searches, saving precious time and effort. GigaShell is cross-platform and supports all major operating systems, including Linux, macOS, and Windows, as well as all major shells, such as PowerShell, CMD, Bash, Zsh, Fish, and many others.

<img src="demo_screen_1.png" width="1060"/>

This project is a fork of the [ShellGPT](https://github.com/TheR1D/shell_gpt) library, adapted for the Russian language and for working with GigaChat.

## Installation
```shell
pip install gigashell
```
<!--
You'll need an OpenAI API key, you can generate one [here](https://beta.openai.com/account/api-keys).

If the `$OPENAI_API_KEY` environment variable is set, it will be used; otherwise, you will be prompted for your key, which will then be stored in `~/.config/gigashell/.gigarc`.
-->

## Usage
`giga` has a variety of use cases, including simple queries, shell queries, and code queries.
### Simple queries
We can use it as a normal search engine, asking about anything:
```shell
giga "nginx default config file location"
# -> The default configuration file for Nginx is located at /etc/nginx/nginx.conf.
```
```shell
giga "mass of sun"
# -> = 1.99 × 10^30 kg
```
```shell
giga "1 hour and 30 minutes to seconds"
# -> 5,400 seconds
```
### Summarization and analyzing
GigaShell accepts a prompt both from stdin and as a command-line argument, so you can choose whichever input method is most convenient. Whether you prefer piping input through the terminal or specifying it directly as an argument, `giga` has you covered. This versatile feature is particularly useful when you need to pass file contents or pipe output from other commands to the LLM for summarization or analysis. For example, you can easily generate a git commit message based on a diff:
```shell
git diff | giga "Generate git commit message, for my changes"
# -> Commit message: Implement Model enum and get_edited_prompt()
```
You can analyze logs from various sources by passing them via stdin or command-line arguments, along with a user-friendly prompt. This enables you to quickly identify errors and get suggestions for possible solutions:
```shell
docker logs -n 20 container_name | giga "check logs, find errors, provide possible solutions"
# ...
```
This powerful feature simplifies the process of managing and understanding data from different sources, making it easier for you to focus on what really matters: improving your projects and applications.
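The same mechanism works for summarizing files piped in from disk; the prompt and response below are illustrative:
```shell
# Pipe a file to giga for a quick summary (output is illustrative):
cat README.md | giga "summarize this document in three sentences"
# -> GigaShell is a command-line assistant powered by GigaChat models ...
```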

### Shell commands
Have you ever found yourself forgetting common shell commands, such as `chmod`, and needing to look up the syntax online? With the `--shell` option, or its `-s` shortcut, you can quickly find and execute the commands you need right in the terminal.
```shell
giga --shell "make all files in current directory read only"
# -> chmod 444 *
# -> [E]xecute, [D]escribe, [A]bort: e
...
```
GigaShell is aware of the OS and `$SHELL` you are using and will provide a shell command tailored to your specific system. For instance, if you ask `giga` to update your system, it will return a command based on your OS. Here's an example using macOS:
```shell
giga -s "update my system"
# -> sudo softwareupdate -i -a
# -> [E]xecute, [D]escribe, [A]bort: e
...
```
The same prompt, when used on Ubuntu, will generate a different suggestion:
```shell
giga -s "update my system"
# -> sudo apt update && sudo apt upgrade -y
# -> [E]xecute, [D]escribe, [A]bort: e
...
```
We can ask the LLM to describe the suggested shell command; it will provide a short description of what the command does:
```shell
giga -s "show all txt files in current folder"
# -> ls *.txt
# -> [E]xecute, [D]escribe, [A]bort: d
# -> List all files with .txt extension in current directory
# -> [E]xecute, [D]escribe, [A]bort: e
...
```
Let's try some Docker containers:
```shell
giga -s "start nginx using docker, forward 443 and 80 port, mount current folder with index.html"
# -> docker run -d -p 443:443 -p 80:80 -v $(pwd):/usr/share/nginx/html nginx
# -> [E]xecute, [D]escribe, [A]bort: e
...
```
We can still use pipes to pass input to `giga` and get shell commands as output:
```shell
cat data.json | giga -s "curl localhost with provided json"
# -> curl -X POST -H "Content-Type: application/json" -d '{"a": 1, "b": 2, "c": 3}' http://localhost
```
We can apply additional shell magic in our prompt, in this example passing file names to ffmpeg:
```shell
ls
# -> 1.mp4 2.mp4 3.mp4
giga -s "using ffmpeg combine multiple videos into one without audio. Video file names: $(ls -m)"
# -> ffmpeg -i 1.mp4 -i 2.mp4 -i 3.mp4 -filter_complex "[0:v] [1:v] [2:v] concat=n=3:v=1 [v]" -map "[v]" out.mp4
# -> [E]xecute, [D]escribe, [A]bort: e
...
```
### Shell integration
Shell integration allows you to use GigaShell in your terminal with hotkeys. It is currently available for Bash and Zsh. It lets you keep `giga` completions in your shell history and edit suggested commands right away.

https://github.com/TheR1D/shell_gpt/assets/16740832/bead0dab-0dd9-436d-88b7-6abfb2c556c1

To install shell integration, run:
```shell
giga --install-integration
# Restart your terminal to apply changes.
```
This will add a few lines to your `.bashrc` or `.zshrc` file. After that, you can use `Ctrl+l` (by default) to invoke GigaShell: pressing it replaces your current input line (buffer) with the suggested command, which you can then edit and run with `Enter`.
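For reference, the Bash half of such an integration boils down to a `bind -x` keybinding that rewrites the readline buffer. The snippet below is a minimal sketch of the idea, not the exact lines the installer writes; a real integration would also need to suppress the interactive execute prompt:
```shell
# Minimal sketch of a Bash hotkey integration (illustrative only).
# Replaces the current readline buffer with giga's suggested command.
_giga_bash() {
    if [[ -n "$READLINE_LINE" ]]; then
        READLINE_LINE=$(giga --shell <<< "$READLINE_LINE")
        READLINE_POINT=${#READLINE_LINE}
    fi
}
bind -x '"\C-l": _giga_bash'
```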

### Generating code
With the `--code` parameter we can request only code as output, for example:
```shell
giga --code "Solve classic fizz buzz problem using Python"
```
```python
for i in range(1, 101):
    if i % 3 == 0 and i % 5 == 0:
        print("FizzBuzz")
    elif i % 3 == 0:
        print("Fizz")
    elif i % 5 == 0:
        print("Buzz")
    else:
        print(i)
```
Since it is valid Python code, we can redirect the output to a file:
```shell
giga --code "solve classic fizz buzz problem using Python" > fizz_buzz.py
python fizz_buzz.py
# 1
# 2
# Fizz
# 4
# Buzz
# Fizz
# ...
```
We can also use pipes to pass input to `giga`:
```shell
cat fizz_buzz.py | giga --code "Generate comments for each line of my code"
```
```python
# Loop through numbers 1 to 100
for i in range(1, 101):
    # Check if number is divisible by both 3 and 5
    if i % 3 == 0 and i % 5 == 0:
        # Print "FizzBuzz" if number is divisible by both 3 and 5
        print("FizzBuzz")
    # Check if number is divisible by 3
    elif i % 3 == 0:
        # Print "Fizz" if number is divisible by 3
        print("Fizz")
    # Check if number is divisible by 5
    elif i % 5 == 0:
        # Print "Buzz" if number is divisible by 5
        print("Buzz")
    # If number is not divisible by 3 or 5, print the number itself
    else:
        print(i)
```

### Conversational Modes - Overview

It is often important to preserve and recall a conversation, and this is tracked locally: `giga` builds a conversational dialogue from each LLM completion requested. The dialogue can develop turn by turn (chat mode) or interactively, in a REPL loop (REPL mode). Both ways rely on the same underlying object, called a chat session. The session is stored at the [configurable](#runtime-configuration-file) `CHAT_CACHE_PATH`.
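Each chat session shows up as an entry under `CHAT_CACHE_PATH`, so you can also inspect the cache directly; assuming the default path from the configuration below (session names here are illustrative):
```shell
# Sessions live under CHAT_CACHE_PATH (default /tmp/gigashell/chat_cache):
ls /tmp/gigashell/chat_cache
# -> number  python_request
```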

### Listing and Showing Chat Sessions 

Dialogues from both REPL and chat mode are saved as chat sessions.

To list all the sessions from either conversational mode, use the `--list-chats` option:
```shell
giga --list-chats
# .../gigashell/chat_cache/number
# .../gigashell/chat_cache/python_request
```
To show all the messages related to a specific conversation, use the `--show-chat` option followed by the session name:
```shell
giga --show-chat number
# user: please remember my favorite number: 4
# assistant: I will remember that your favorite number is 4.
# user: what would be my favorite number + 4?
# assistant: Your favorite number is 4, so if we add 4 to it, the result would be 8.
```

### Chat Mode
To start a chat session, use the `--chat` option followed by a unique session name and a prompt. You can also use "temp" as a session name to start a temporary chat session.
```shell
giga --chat number "please remember my favorite number: 4"
# -> I will remember that your favorite number is 4.
giga --chat number "what would be my favorite number + 4?"
# -> Your favorite number is 4, so if we add 4 to it, the result would be 8.
```
You can also use chat sessions to iteratively improve LLM suggestions by providing additional clues.
```shell
giga --chat python_request --code "make an example request to localhost using Python"
```
```python
import requests

response = requests.get('http://localhost')
print(response.text)
```
Now let's ask the AI to add caching to our request:
```shell
giga --chat python_request --code "add caching"
```
```python
import requests
from cachecontrol import CacheControl

sess = requests.session()
cached_sess = CacheControl(sess)

response = cached_sess.get('http://localhost')
print(response.text)
```
We can combine the `--code` or `--shell` options with `--chat`, so we can keep refining the results:
```shell
giga --chat sh --shell "What are the files in this directory?"
# -> ls
giga --chat sh "Sort them by name"
# -> ls | sort
giga --chat sh "Concatenate them using FFMPEG"
# -> ffmpeg -i "concat:$(ls | sort | tr '\n' '|')" -codec copy output.mp4
giga --chat sh "Convert the resulting file into an MP3"
# -> ffmpeg -i output.mp4 -vn -acodec libmp3lame -ac 2 -ab 160k -ar 48000 final_output.mp3
```

### REPL Mode
There is a very handy REPL (read–eval–print loop) mode, which allows you to chat with LLM models interactively. To start a chat session in REPL mode, use the `--repl` option followed by a unique session name. You can also use "temp" as a session name to start a temporary REPL session. Note that `--chat` and `--repl` use the same chat sessions, so you can start a conversation with `--chat` and then continue it with `--repl` in REPL mode. REPL mode also shows the history of your conversation at the beginning.

<p align="center">
  <img src="https://s10.gifyu.com/images/repl-demo.gif" alt="gif">
</p>

```text
giga --repl temp
Entering REPL mode, press Ctrl+C to exit.
>>> What is REPL?
REPL stands for Read-Eval-Print Loop. It is a programming environment ...
>>> How can I use Python with REPL?
To use Python with REPL, you can simply open a terminal or command prompt ...
```
REPL mode can work with `--shell` and `--code` options, which makes it very handy for interactive shell commands and code generation:
```text
giga --repl temp --shell
Entering shell REPL mode, type [e] to execute commands or press Ctrl+C to exit.
>>> What is in current folder?
ls
>>> Show file sizes
ls -lh
>>> Sort them by file sizes
ls -lhS
>>> e (enter just e to execute commands, or d to describe them)
...
```
Example of using REPL mode to generate code:
```text
giga --repl temp --code
Entering REPL mode, press Ctrl+C to exit.
>>> Using Python request localhost:80
import requests
response = requests.get('http://localhost:80')
print(response.text)
>>> Change port to 443
import requests
response = requests.get('https://localhost:443')
print(response.text)
```

### Picking up on a chat mode conversation with REPL mode

```text
giga --repl number
───── Chat History──────
user: ###
Role name: default
You are Command Line App GigaShell, a programming and system administration assistant.
You are managing Darwin/MacOS 13.3.1 operating system with zsh shell.
Provide only plain text without Markdown formatting.
Do not show any warnings or information regarding your capabilities.
If you need to store any data, assume it will be stored in the chat.

Request: please remember my favorite number: 4
###
assistant: Sure, I have stored your favorite number as 4.
user: what would be my favorite number raised to the power of 4
assistant: Your favorite number raised to the power of 4 would be 256.
────────────────────────────────────────────────────────
Entering REPL mode, press Ctrl+C to exit.
>>> What is the sum of my favorite number and your previous response?
The sum of your favorite number (4) and my previous response (256) would be 260.
```


### Roles
GigaShell allows you to create custom roles, which can be utilized to generate code, shell commands, or to fulfill your specific needs. To create a new role, use the `--create-role` option followed by the role name. You will be prompted to provide a description for the role, along with other details. This will create a JSON file in `~/.config/gigashell/roles` with the role name. Inside this directory, you can also edit default `giga` roles, such as **shell**, **code**, and **default**. Use the `--list-roles` option to list all available roles, and the `--show-role` option to display the details of a specific role. Here's an example of a custom role:
```shell
giga --create-role json
# Enter role description: You are JSON generator, provide only valid json as response.
# Enter expecting result, e.g. answer, code, shell command, etc.: json
giga --role json "random: user, password, email, address"
{
  "user": "JohnDoe",
  "password": "p@ssw0rd",
  "email": "johndoe@example.com",
  "address": {
    "street": "123 Main St",
    "city": "Anytown",
    "state": "CA",
    "zip": "12345"
  }
}
```
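The companion options work the same way. Assuming the `json` role created above, listing and inspecting roles might look like this (paths and output are illustrative):
```shell
# List available roles and show the details of one:
giga --list-roles
# -> ~/.config/gigashell/roles/default.json
# -> ~/.config/gigashell/roles/json.json
giga --show-role json
# -> You are JSON generator, provide only valid json as response.
```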

### Request cache
Control the cache using the `--cache` (default) and `--no-cache` options. Caching applies to all `giga` requests to the API:
```shell
giga "what are the colors of a rainbow"
# -> The colors of a rainbow are red, orange, yellow, green, blue, indigo, and violet.
```
Next time, the exact same query will get its result from the local cache instantly. Note that `giga "what are the colors of a rainbow" --temperature 0.5` will make a new request, since we didn't provide `--temperature` (the same applies to `--top-probability`) on the previous request.
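To force a fresh request for a query that is already cached, disable the cache explicitly:
```shell
# Skip the local cache and send a new request:
giga --no-cache "what are the colors of a rainbow"
```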

These are just a few examples of what we can do with GigaChat models; I'm sure you will find the tool useful for your specific use cases.

### Runtime configuration file
You can set up some parameters in the runtime configuration file `~/.config/gigashell/.gigarc`:
```text
# Credentials to access GigaChat
GIGA_USERNAME=your username
GIGA_PASSWORD=your password
# GigaChat API host, useful if you would like to use a proxy.
GIGACHAT_API_HOST=https://...
# Max number of cached messages per chat session.
CHAT_CACHE_LENGTH=100
# Chat cache folder.
CHAT_CACHE_PATH=/tmp/gigashell/chat_cache
# Request cache length (number of entries).
CACHE_LENGTH=100
# Request cache folder.
CACHE_PATH=/tmp/gigashell/cache
# Request timeout in seconds.
REQUEST_TIMEOUT=60
# Default GigaChat model to use.
DEFAULT_MODEL=GigaChat70:latest
# Default color for completions.
DEFAULT_COLOR=magenta
# Force use system role messages (not recommended).
SYSTEM_ROLES=false
# When in --shell mode, default to "Y" for no input.
DEFAULT_EXECUTE_SHELL_CMD=false
# Disable streaming of responses
DISABLE_STREAMING=false
```
Possible options for `DEFAULT_COLOR`: black, red, green, yellow, blue, magenta, cyan, white, bright_black, bright_red, bright_green, bright_yellow, bright_blue, bright_magenta, bright_cyan, bright_white.

Enable `SYSTEM_ROLES` to force the use of [system role](https://help.openai.com/en/articles/7042661-chatgpt-api-transition-guide) messages; this is not recommended, since it does not perform well with current LLM models.
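Since the file is a plain list of `KEY=VALUE` pairs, you can edit it directly, or append a line if the key is not present yet. For example, to move the chat cache (the target path is illustrative):
```shell
# Append a setting to the runtime configuration file:
echo "CHAT_CACHE_PATH=$HOME/.cache/gigashell/chat" >> ~/.config/gigashell/.gigarc
```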

### Full list of arguments
```text
╭─ Arguments ─────────────────────────────────────────────────────────────────────────────────────────────────╮
│   prompt      [PROMPT]  The prompt to generate completions for.                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Options ───────────────────────────────────────────────────────────────────────────────────────────────────╮
│ --model            TEXT                             GigaChat model to use. [default: GigaChat70:latest]     │
│ --temperature      FLOAT RANGE [0.0<=x<=2.0]        Randomness of generated output. [default: 0.1]          │
│ --top-probability  FLOAT RANGE [0.1<=x<=1.0]        Limits highest probable tokens (words). [default: 1.0]  │
│ --editor                                            Open $EDITOR to provide a prompt. [default: no-editor]  │
│ --cache                                             Cache completion results. [default: cache]              │
│ --help                                              Show this message and exit.                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Assistance Options ────────────────────────────────────────────────────────────────────────────────────────╮
│ --shell  -s                 Generate and execute shell commands.                                            │
│ --describe-shell  -d        Describe a shell command.                                                       │
│ --code       --no-code      Generate only code. [default: no-code]                                          │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Chat Options ──────────────────────────────────────────────────────────────────────────────────────────────╮
│ --chat        TEXT  Follow conversation with id, use "temp" for quick session. [default: None]              │
│ --repl        TEXT  Start a REPL (Read–eval–print loop) session. [default: None]                            │
│ --show-chat   TEXT  Show all messages from provided chat id. [default: None]                                │
│ --list-chats        List all existing chat ids. [default: no-list-chats]                                    │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Role Options ──────────────────────────────────────────────────────────────────────────────────────────────╮
│ --role         TEXT  System role for LLM model. [default: None]                                             │
│ --create-role  TEXT  Create role. [default: None]                                                           │
│ --show-role    TEXT  Show role. [default: None]                                                             │
│ --list-roles         List roles. [default: no-list-roles]                                                   │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
```

<!--
## LocalAI
By default, GigaShell leverages OpenAI's large language models. However, it also provides the flexibility to use locally hosted models, which can be a cost-effective alternative. To use local models, you will need to run your own API server. You can accomplish this by using [LocalAI](https://github.com/go-skynet/LocalAI), a self-hosted, OpenAI-compatible API. Setting up LocalAI allows you to run language models on your own hardware, potentially without the need for an internet connection, depending on your usage. To set up your LocalAI, please follow this comprehensive [guide](https://github.com/TheR1D/shell_gpt/wiki/LocalAI). Remember that the performance of your local models may depend on the specifications of your hardware and the specific language model you choose to deploy.

## Docker
Run the container using the `OPENAI_API_KEY` environment variable, and a docker volume to store cache:
```shell
docker run --rm \
           --env OPENAI_API_KEY="your OPENAI API key" \
           --volume gpt-cache:/tmp/shell_gpt \
       ghcr.io/ther1d/shell_gpt --chat rainbow "what are the colors of a rainbow"
```

Example of a conversation, using an alias and the `OPENAI_API_KEY` environment variable:
```shell
alias giga="docker run --rm --env OPENAI_API_KEY --volume gpt-cache:/tmp/gigashell ghcr.io/ther1d/gigashell"
export OPENAI_API_KEY="your OPENAI API key"
giga --chat rainbow "what are the colors of a rainbow"
giga --chat rainbow "inverse the list of your last answer"
giga --chat rainbow "translate your last answer in french"
```

You also can use the provided `Dockerfile` to build your own image:
```shell
docker build -t giga .
```
-->
            
