
# 🐧 Penguin Tamer 🐧
[Python](https://www.python.org/downloads/) | [License: MIT](https://opensource.org/licenses/MIT) | [PyPI](https://pypi.org/project/penguin-tamer/) | [GitHub Stars](https://github.com/Vivatist/penguin-tamer/stargazers)
> **🐧 Tame your Linux terminal with AI power!** Ask questions to ***ChatGPT***, ***DeepSeek***, ***Grok*** and many other large language models (LLMs). Execute scripts and commands suggested by the neural network directly from the command line. Perfect for beginners in Linux and Windows administration.
🌍 **Available in:** [English](README.md) | [Русский](/docs/locales/README_ru.md)

## Table of Contents
- [🐧 Penguin Tamer 🐧](#-penguin-tamer-)
  - [Table of Contents](#table-of-contents)
  - [Install](#install)
  - [Uninstall](#uninstall)
  - [Description](#description)
    - [Features](#features)
    - [Quick Start](#quick-start)
  - [Connecting to Neural Networks](#connecting-to-neural-networks)
    - [Getting a Token (API\_KEY) and Connecting to a Pre-installed Model](#getting-a-token-api_key-and-connecting-to-a-pre-installed-model)
    - [Adding a New Model](#adding-a-new-model)
      - [Connection Example](#connection-example)
  - [Examples](#examples)
    - [Quick Query](#quick-query)
    - [Dialog Mode](#dialog-mode)
    - [Running Code from AI Response](#running-code-from-ai-response)
  - [Security](#security)
    - [Best Practices](#best-practices)
  - [Configuration](#configuration)
    - [Initial Setup](#initial-setup)
    - [Supported AI Providers](#supported-ai-providers)
    - [Configuration File](#configuration-file)
    - [Reset Settings](#reset-settings)
  - [Contributing](#contributing)
    - [Areas for Contribution](#areas-for-contribution)
    - [Development Environment Setup](#development-environment-setup)
    - [Contribution Guidelines](#contribution-guidelines)
  - [License](#license)
  - [Contacts](#contacts)
## Install
```bash
curl -sSL https://raw.githubusercontent.com/Vivatist/penguin-tamer/main/install.sh | bash
```
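If you prefer not to pipe a script straight into `bash`, the project is also published on PyPI. Installing it with pipx (the same tool the Uninstall command below uses) should be an equivalent route, though the install script may perform extra setup that this path skips:
```bash
# Alternative install from PyPI; assumes pipx is already installed
pipx install penguin-tamer
```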
## Uninstall
```bash
pipx uninstall penguin-tamer
```
## Description
### Features
- **Quick AI queries** — Get answers from large language models via the command line
- **No GUI** — Communicate with your chosen AI in natural language, in any locale: `ai how to install Russian fonts?`
- **Interactive dialog mode** — Chat with AI in dialog mode with preserved conversation context
- **Code execution** — Execute scripts and commands suggested by AI in the console
- **Friendly interface** — Formatted output with syntax highlighting — just like you’re used to when working with neural networks
- **Multiple AI providers** — Support for OpenAI, OpenRouter, DeepSeek, Anthropic and other popular providers
- **Multi-language support** — English and Russian are available now; you can [help translate](#contributing) penguin-tamer into other languages.
### Quick Start
Try asking the assistant a question, for example `pt who are you?`. In a couple of seconds, the neural network will respond:

On first launch, the program uses a Microsoft-hosted model — **DeepSeek-R1-Lite-Preview** with a public token. This is not the best option since you may see a quota-exceeded message due to high traffic, but it’s fine for a test run.
**For full operation, you need to [obtain](#getting-a-token-api_key-and-connecting-to-a-pre-installed-model) a personal token and add it to the selected model in the program [settings](#configuration).**
> [!NOTE]
> penguin-tamer can work with any neural network that supports API access. Today this includes almost all large language models (LLMs) on the market. [How to add a new model](#adding-a-new-model).
## Connecting to Neural Networks
penguin-tamer ships with several popular models pre-configured, such as **DeepSeek**, **Grok 4 Fast** and **Qwen3 Coder**. However, provider policies don’t allow full operation without authorization: you must obtain a personal token (API_KEY) from the provider’s website.
### Getting a Token (API_KEY) and Connecting to a Pre-installed Model
We recommend the provider [OpenRouter](https://openrouter.ai/models?max_price=0) — simple registration and dozens of popular models available for free with a single token.
- Register on the [website](https://openrouter.ai/)
- Get a token by clicking **[Create API key](https://openrouter.ai/settings/keys)**. Save it — OpenRouter will show it only once!
- Add the token to penguin-tamer in the [settings](#configuration) of the selected model
- Make this model the current one

**Done! Now the selected model will answer you in the console. You can connect any other model from this website in the same way.**
> [!NOTE]
> One OpenRouter token is valid for **all** models available from this provider.

A similar procedure applies to other providers, although with **OpenRouter** available, you may not need it.
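If you also want to try the token from a plain shell (there is a `curl` sketch later in the Connection Example), one way to keep it out of your shell history is to read it into an environment variable first. This is only for ad-hoc testing; penguin-tamer itself stores the key in its own settings:
```bash
# Paste the key at the prompt; nothing is echoed and nothing lands in shell history
read -rs -p "OpenRouter API key: " OPENROUTER_API_KEY && export OPENROUTER_API_KEY
```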
### Adding a New Model
To add a **new** model to penguin-tamer, including a local model or one from major providers, simply enter in penguin-tamer [settings](#configuration):
- API_KEY (your personal token)
- API_URL (API base URL)
- model (model name)

You can find this information on the provider’s website in the *API* section.
#### Connection Example
Let’s take the free **Meta: Llama 3.1** model, listed on [OpenRouter](https://openrouter.ai/models?max_price=0) among dozens of other free models, as an example.
Open the model’s page and find the [API](https://openrouter.ai/meta-llama/llama-3.1-405b-instruct:free/api) section.
Among the connection examples, look for information similar to:
- **API_URL** — for OpenRouter, this parameter is called ***base_url***
- **model** — listed as ***model***

How to get **API_KEY** is described [above](#getting-a-token-api_key-and-connecting-to-a-pre-installed-model).
Enter these values (***without quotes***) in penguin-tamer settings and set this model as current. Now ***Meta: Llama 3.1*** will answer your questions.
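If you like, you can also sanity-check the token and the model identifier from the shell first. This is only a sketch against OpenRouter's OpenAI-compatible API, assuming the key sits in the `OPENROUTER_API_KEY` environment variable; copy the exact model string from the model's API page:
```bash
# Verify the token and model ID respond before saving them in penguin-tamer settings
curl -s https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "meta-llama/llama-3.1-405b-instruct:free",
        "messages": [{"role": "user", "content": "Say hello in one word."}]
      }'
```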
## Examples
### Quick Query
```bash
# Ask for a kernel update script
ai kernel update script
```
### Dialog Mode
Penguin Tamer always works in dialog mode, preserving the conversation context throughout the session.
You can start a dialog with an initial question:
```bash
pt what python version is installed?
```
Or without a question to begin an interactive session:
```bash
pt  # no arguments, just press Enter
```
### Running Code from AI Response
If the response contains code blocks, they are numbered. To run a block, simply enter its number in the console.

## Security
> [!WARNING]
> Never execute code suggested by the neural network if you’re not sure what it does!
### Best Practices
1. **Review code before execution**
   ```bash
   # Always check what AI suggests
   ai Delete all files from /tmp  # Don’t run this blindly!
   ```
2. **Use safe commands**
   ```bash
   # Prefer these over destructive operations
   ai Show disk usage
   ai Show running processes
   ```
## Configuration
### Initial Setup
Run the setup mode to configure your AI provider:
```bash
pt -s
```
### Supported AI Providers
- **OpenAI** (GPT-3.5, GPT-4)
- **Anthropic** (Claude)
- **OpenRouter** (Multiple models)
- **Local models** (Ollama, LM Studio)

And many others that support API access.
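For local models, the same three values from [Adding a New Model](#adding-a-new-model) apply. As a rough sketch that is not taken from the penguin-tamer docs: Ollama exposes an OpenAI-compatible endpoint on its default port, so API_URL would be `http://localhost:11434/v1`, `model` the name of a model you have pulled (for example `llama3`), and API_KEY any placeholder, since Ollama does not check it. You can confirm the endpoint responds before pointing penguin-tamer at it:
```bash
# Assumes a local Ollama server is running and the "llama3" model is already pulled
curl -s http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3", "messages": [{"role": "user", "content": "Hello"}]}'
```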
### Configuration File
Settings are stored in:
- **Linux:** `~/.config/penguin-tamer/config.yaml`
- **Windows:** `%APPDATA%\penguin-tamer\config.yaml`
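If you plan to edit this file by hand, it can be worth backing it up first, for example on Linux:
```bash
# Keep a copy of the current settings before experimenting
cp ~/.config/penguin-tamer/config.yaml ~/.config/penguin-tamer/config.yaml.bak
```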
### Reset Settings
To restore the defaults, delete the configuration file:
```bash
# For Linux
rm ~/.config/penguin-tamer/config.yaml
```
```powershell
# For Windows (PowerShell)
Remove-Item "$env:APPDATA\penguin-tamer\config.yaml"
```
## Contributing
I’ll be glad for any help!
### Areas for Contribution
- 🌍 **Localization** — Adding support for new languages ([template](https://github.com/Vivatist/penguin-tamer/blob/main/src/penguin_tamer/locales/template_locale.json)), including [README.md](https://github.com/Vivatist/penguin-tamer/blob/main/README.md)
- 🤖 **AI Providers** — Integrating new AI providers
- 🎨 **UI/UX** — Improving the configuration manager interface (yes, it’s not perfect)
- 🔧 **Tools** — Creating additional utilities
- 💡 **Ideas** — I welcome any ideas to improve and develop penguin-tamer. [Join the discussion](https://github.com/Vivatist/penguin-tamer/discussions/10#discussion-8924293)

Here’s how to get started:
### Development Environment Setup
1. **Fork the repository**
2. **Clone your fork**:
   ```bash
   git clone https://github.com/your-username/penguin-tamer.git
   cd penguin-tamer
   ```
3. **Set up the development environment**:
   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   pip install -r requirements.txt
   pip install -e .
   ```
4. **Install git hooks (optional but recommended)**:
   ```bash
   make install-hooks      # Linux/Mac
   make.bat install-hooks  # Windows
   ```
   This will automatically run tests before commits and pushes.
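You can also run the tests by hand at any point. The command below is the one mentioned in the guidelines that follow, and is presumably what the hooks invoke as well:
```bash
# Run the test suite from the repository root
python run_tests.py
```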
### Contribution Guidelines
- 📝 **Code Style**: Follow PEP 8
- 🧪 **Testing**: Add tests for new features (run `python run_tests.py`)
- 🔍 **Pre-commit**: Tests run automatically before commits (or use `git commit --no-verify` to skip)
- 📚 **Documentation**: Update README for new features
- 🔄 **Pull Requests**: Use clear commit messages

For detailed information about testing and git hooks, see:
- [tests/README.md](tests/README.md) - Testing documentation
- [docs/GIT_HOOKS.md](docs/GIT_HOOKS.md) - Git hooks setup and usage
## License
This project is licensed under the MIT License.
## Contacts
- **Author**: Andrey Bochkarev
- **GitHub Issues**: [🐛 Report issues](https://github.com/Vivatist/penguin-tamer/issues)
- **Discussions**: [💬 Join](https://github.com/Vivatist/penguin-tamer/discussions)
---
<div align="center">
**Created with ❤️ for the Linux community**
[⭐ Star on GitHub](https://github.com/Vivatist/penguin-tamer)
</div>