lfg-llama


Name: lfg-llama
Version: 2.0.2
Summary: LFG, It Really Whips the Llama's Ass 🦙🦙🦙🦙
Upload time: 2024-05-13 18:51:32
Author: Bjarne Oeverli
Home page: None
Maintainer: None
Docs URL: None
Requires Python: None
License: None
Keywords: None
Requirements: No requirements were recorded.
Travis-CI: No Travis.
Coveralls test coverage: No coveralls.
# LFG

> LFG, It Really Whips the Llama's Ass 🦙🦙🦙🦙

![Demo](example.gif)

LFG is a command-line tool that helps you find the right terminal command for your task. Such sales pitch. It uses GPT-4o as its engine.

## Why?

- Firstly, this was created to test Ollama -> Groq
- I do not like the GitHub Copilot command line
- Quicker than using Gemini/ChatGPT/Google directly in the browser
- Easier to find what you need without opening man pages
- NEW: Switched to the GPT-4o model, which is free

However, never trust the output entirely.

## Installation

```bash
# install pipx
brew install pipx

# add pipx binaries to path
pipx ensurepath

# restart your terminal
# install LFG
pipx install lfg-llama
```
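After installing, the `lfg` entry point should be on your PATH. A quick sanity check in any POSIX shell (hypothetical helper, not part of the package):

```shell
# Sanity check: is the lfg entry point on PATH after pipx install?
if command -v lfg >/dev/null 2>&1; then
  lfg_status="installed"
else
  lfg_status="missing"
fi
echo "lfg is $lfg_status"
```

If it reports `missing`, make sure you restarted your terminal after `pipx ensurepath`.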

## Usage

This executable uses OpenAI, which means you need an [API token](https://platform.openai.com/api-keys).

[GPT-4o](https://platform.openai.com/docs/models/gpt-4o) is free to use.

Add the token to your .bashrc/.zshrc and reload your terminal.

```bash
export OPENAI_API_KEY={replace_me}
```
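A minimal sketch of persisting and reloading the key, using a temp file in place of your real `.bashrc`/`.zshrc` and a dummy `sk-example` value:

```shell
# Sketch: append the export to an rc file and re-source it.
# A temp file stands in for ~/.bashrc or ~/.zshrc; sk-example is a placeholder key.
rc=$(mktemp)
echo 'export OPENAI_API_KEY="sk-example"' >> "$rc"
. "$rc"
echo "OPENAI_API_KEY is ${OPENAI_API_KEY:+set}"
rm -f "$rc"
```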

Now you can use the executable:

```bash
lfg <query>
```

For example:

```bash
lfg "kill port 3000"

# Kill process listening on port 3000
lsof -i :3000 | xargs kill

```
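Since the output should never be trusted blindly, you can preview what a suggestion like the one above would do before running it. A dry-run sketch (assumes `lsof` is available; `-t` prints bare PIDs):

```shell
# Dry run: show the PIDs that `lsof -i :3000 | xargs kill` would signal,
# without actually killing anything.
pids=$(lsof -t -i :3000 2>/dev/null || true)
if [ -n "$pids" ]; then
  echo "would kill: $pids"
else
  echo "nothing is listening on port 3000"
fi
```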

Change the LLM with the `-m` flag:

```bash
$ lfg "list ec2 pipe json jq get name" -m llama370b

# List EC2 instances with name

aws ec2 describe-instances --query 'Reservations[].Instances[]|{Name:Tags[?Key==`Name`]|[0].Value,InstanceId}' --output text | jq '.[] | {"Name", .Name, "InstanceId", .InstanceId}'

This command uses the AWS CLI to describe EC2 instances, and then pipes the output to `jq` to format the output in a JSON-like format, showing the instance name and ID.
```
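This example also illustrates why the output should not be trusted entirely: `--output text` does not emit JSON, so piping it into `jq` would likely fail, and the `jq` filter's object syntax looks invalid. A hedged correction (untested against a live AWS account) would keep JSON end to end; the runnable part below demonstrates only the `jq` half on a canned payload:

```shell
# A plausible corrected pipeline (assumption, not verified against AWS):
#   aws ec2 describe-instances \
#     --query 'Reservations[].Instances[].{Name: Tags[?Key==`Name`] | [0].Value, InstanceId: InstanceId}' \
#     --output json | jq -r '.[] | "\(.Name)\t\(.InstanceId)"'
# Demo of the jq half on sample data (runs only if jq is installed):
sample='[{"Name":"web-1","InstanceId":"i-0abc123"}]'
if command -v jq >/dev/null 2>&1; then
  result=$(echo "$sample" | jq -r '.[] | "\(.Name) \(.InstanceId)"')
else
  result="jq not installed"
fi
echo "$result"
```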

### Development

```bash
pip install --user pipenv
pipenv --python 3.11
pipenv install

pipenv run lfg "kill port 3000"
```

### TODO

- Fix the setup and pyproject files, including a GitHub workflow for releasing the package


Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "lfg-llama",
    "maintainer": null,
    "docs_url": null,
    "requires_python": null,
    "maintainer_email": null,
    "keywords": null,
    "author": "Bjarne Oeverli",
    "author_email": "bjarneocodes@gmail.com",
    "download_url": "https://files.pythonhosted.org/packages/ee/d1/eea0a09ca91cc2cb1d2c4a57f049f803cc03421a255a76ccb03a911ba35f/lfg_llama-2.0.2.tar.gz",
    "platform": null,
    "description": "# LFG\n\n> LFG, It Really Whips the Llama's Ass \ud83e\udd99\ud83e\udd99\ud83e\udd99\ud83e\udd99\n\n![Demo](example.gif)\n\nLFG is a command-line tool that intelligently helps you find the right terminal commands for your tasks. Such sales pitch. This interface is using GPT-4o as an engine.\n\n## Why?\n\n- Firstly, this was created to test Ollama -> Groq\n- I do not like the Github Copilot command-line\n- Quicker than using Gemini/ChatGPT/Google directly via the browser interface\n- Easier to find what needed without opening man pages\n- NEW: Changing to GPT-4o model which is free\n\nHowever, never trust the output entirely.\n\n## Installation\n\n```bash\n# install pipx\nbrew install pipx\n\n# add pipx binaries to path\npipx ensurepath\n\n# restart your terminal\n# install LFG\npipx install lfg-llama\n```\n\n## Usage\n\nThis executable is using OpenAI, that means you need and [API token](https://platform.openai.com/api-keys).\n\n[GPT-4o](https://platform.openai.com/docs/models/gpt-4o) is free to use.\n\nAdd the token to your .bashrc/.zshrc and reload your terminal.\n\n```\nOPENAI_API_KEY={replace_me}\n```\n\n```\n$ lfg query\n```\n\nNow you can use the executable\n\n```bash\nlfg \"kill port 3000\"\n\n# Kill process listening on port 3000\nlsof -i :3000 | xargs kill\n\n```\n\nChange the LLM\n\n```bash\n$ lfg \"list ec2 pipe json jq get name\" -m llama370b\n\n# List EC2 instances with name\n\naws ec2 describe-instances --query 'Reservations[].Instances[]|{Name:Tags[?Key==`Name`]|[0].Value,I\nnstanceId}' --output text | jq '.[] | {\"Name\", .Name, \"InstanceId\", .InstanceId}'\n\nThis command uses the AWS CLI to describe EC2 instances, and then pipes the output to `jq` to format the output in a JSON-like format, showing the instance name and ID.\n```\n\n### Development\n\n```bash\npip install --user pipenv\npipenv --python 3.11\npipenv install\n\npipenv run lfg \"kill port 3000\"\n```\n\n### TODO\n\n- Fix the setup and pyproject file, including 
github workflow for releasing the package\n",
    "bugtrack_url": null,
    "license": null,
    "summary": "LFG, It Really Whips the Llama's Ass \ud83e\udd99\ud83e\udd99\ud83e\udd99\ud83e\udd99",
    "version": "2.0.2",
    "project_urls": null,
    "split_keywords": [],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "6967240524fb04b3d8a260c1404ef878c1fbee22968ea8aa975497fef8585a66",
                "md5": "f67180ba14163545ae6c454117b4335a",
                "sha256": "446a3bda743d7a4d544eb18b83b51ba2c81b17ed5f20f6ae1c33f43cb4f692fe"
            },
            "downloads": -1,
            "filename": "lfg_llama-2.0.2-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "f67180ba14163545ae6c454117b4335a",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": null,
            "size": 3851,
            "upload_time": "2024-05-13T18:51:31",
            "upload_time_iso_8601": "2024-05-13T18:51:31.264497Z",
            "url": "https://files.pythonhosted.org/packages/69/67/240524fb04b3d8a260c1404ef878c1fbee22968ea8aa975497fef8585a66/lfg_llama-2.0.2-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "eed1eea0a09ca91cc2cb1d2c4a57f049f803cc03421a255a76ccb03a911ba35f",
                "md5": "22b2c249ed973075563d69728f07da74",
                "sha256": "008b753a693a3b912961283e4f4902cb2d770a2d3ebe13d277e5ec65c3e78ae7"
            },
            "downloads": -1,
            "filename": "lfg_llama-2.0.2.tar.gz",
            "has_sig": false,
            "md5_digest": "22b2c249ed973075563d69728f07da74",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": null,
            "size": 3495,
            "upload_time": "2024-05-13T18:51:32",
            "upload_time_iso_8601": "2024-05-13T18:51:32.813108Z",
            "url": "https://files.pythonhosted.org/packages/ee/d1/eea0a09ca91cc2cb1d2c4a57f049f803cc03421a255a76ccb03a911ba35f/lfg_llama-2.0.2.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-05-13 18:51:32",
    "github": false,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "lcname": "lfg-llama"
}
        