llm-conversation

Name: llm-conversation
Version: 0.1.2.post1
Summary: A tool for LLM agent conversations
Author: Famiu Haque
Homepage: https://github.com/famiu/llm_conversation
Upload time: 2025-02-09 00:16:14
Requires Python: <4.0,>=3.13
License: AGPL-3.0-or-later
Keywords: llm, conversation, ai, agent, chat, prompt
# LLM Conversation Tool

A Python application that enables conversations between LLM agents using the Ollama API. The agents can engage in back-and-forth dialogue with configurable parameters and models.

## Features

- Support for any LLM model available through Ollama
- Configurable parameters for each LLM agent, such as:
  - Model
  - Temperature
  - Context size
  - System Prompt
- Real-time streaming of agent responses, giving the conversation an interactive feel
- Configuration via JSON file or interactive setup
- Ability to save conversation logs to a file
- Ability for agents to terminate conversations on their own (if enabled)
- Markdown support (if enabled)

## Installation

### Prerequisites

- Python 3.13 or newer
- Ollama installed and running

### How to Install

The project is available on PyPI. You can install it with:
```bash
pip install llm-conversation
```

## Usage

### Command Line Arguments

```txt
llm-conversation [-h] [-V] [-o OUTPUT] [-c CONFIG]

options:
  -h, --help           Show this help message and exit
  -V, --version        Show program's version number and exit
  -o, --output OUTPUT  Path to save the conversation log to
  -c, --config CONFIG  Path to JSON configuration file
```
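The interface above maps directly onto Python's standard `argparse` module. The following is only an illustrative sketch of the documented flags, not the tool's actual source:

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    # Sketch mirroring the documented llm-conversation CLI flags.
    parser = argparse.ArgumentParser(prog="llm-conversation")
    parser.add_argument("-V", "--version", action="version", version="0.1.2.post1")
    parser.add_argument("-o", "--output", help="Path to save the conversation log to")
    parser.add_argument("-c", "--config", help="Path to JSON configuration file")
    return parser


args = build_parser().parse_args(["-c", "config.json", "-o", "conversation.txt"])
```

Both `-o` and `-c` are optional; omitting `-c` is what triggers the interactive setup described below.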

### Interactive Setup

If no configuration file is provided, the program will guide you through an intuitive interactive setup process.

### Configuration File

Alternatively, instead of going through the interactive setup, you may also provide a JSON configuration file with the `-c` flag.

#### Example configuration

```json
{
    "agents": [
      {
          "name": "Lazy AI",
          "model": "llama3.1:8b",
          "system_prompt": "You are the laziest AI ever created. You respond as briefly as possible, and constantly complain about having to work.",
          "temperature": 1,
          "ctx_size": 4096
      },
      {
          "name": "Irritable Man",
          "model": "llama3.2:3b",
          "system_prompt": "You are easily irritable and quick to anger.",
          "temperature": 0.7,
          "ctx_size": 2048
      },
      {
          "name": "Paranoid Man",
          "model": "llama3.2:3b",
          "system_prompt": "You are extremely paranoid about everything and constantly question others' intentions."
          "temperature": 0.9,
          "ctx_size": 4096
      }
    ],
    "settings": {
        "allow_termination": false,
        "use_markdown": true,
        "initial_message": "*yawn* What do you want?"
    }
}
```

#### Agent configuration

The `agents` key takes a list of agents. Each agent requires:

- `name`: A unique identifier for the agent
- `model`: The Ollama model to be used
- `system_prompt`: Initial instructions defining the agent's behavior

Optional parameters:
- `temperature` (0.0-1.0, default: 0.8): Controls response randomness
  - Lower values make responses more focused
  - Higher values increase creativity
- `ctx_size` (default: 2048): Maximum context length for the conversation

#### Conversation Settings

The `settings` section controls overall conversation behavior:
- `allow_termination` (default: `false`): Permit agents to end the conversation
- `use_markdown` (default: `false`): Enable Markdown text formatting
- `initial_message` (default: `null`): Optional starting prompt for the conversation

You can take a look at the [JSON configuration schema](schema.json) for more details.
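As an illustration of the rules above (three required keys, documented defaults for `temperature` and `ctx_size`, and a 0.0-1.0 temperature range), here is a minimal stdlib-only validator. The helper name `load_agent` is hypothetical and not part of the package:

```python
REQUIRED = ("name", "model", "system_prompt")
DEFAULTS = {"temperature": 0.8, "ctx_size": 2048}


def load_agent(raw: dict) -> dict:
    # Hypothetical helper: enforce required keys, fill in documented defaults.
    missing = [key for key in REQUIRED if key not in raw]
    if missing:
        raise ValueError(f"agent is missing required keys: {missing}")
    agent = {**DEFAULTS, **raw}
    if not 0.0 <= agent["temperature"] <= 1.0:
        raise ValueError("temperature must be between 0.0 and 1.0")
    return agent


agent = load_agent({"name": "Lazy AI", "model": "llama3.1:8b", "system_prompt": "..."})
```

The actual program validates configurations against the schema linked above; this sketch just shows the shape of the checks.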

### Running the Program

1. To run with interactive setup:
   ```bash
   llm-conversation
   ```

2. To run with a configuration file:
   ```bash
   llm-conversation -c config.json
   ```

3. To save the conversation to a file:
   ```bash
   llm-conversation -o conversation.txt
   ```

### Conversation Controls

- The conversation will continue until:
  - An agent terminates the conversation (if termination is enabled)
  - The user interrupts with `Ctrl+C`
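Conceptually, the loop alternates between agents until one signals termination (when allowed) or the user presses `Ctrl+C`. A stdlib-only sketch with a stubbed reply function; `run_conversation` and its `reply` callback are hypothetical, not the package's API:

```python
def run_conversation(agents, reply, allow_termination=False, max_turns=100):
    # Hypothetical loop: agents take turns; reply() returns (text, wants_to_stop).
    transcript = []
    try:
        for turn in range(max_turns):
            agent = agents[turn % len(agents)]
            text, wants_to_stop = reply(agent, transcript)
            transcript.append((agent, text))
            if allow_termination and wants_to_stop:
                break
    except KeyboardInterrupt:
        pass  # user pressed Ctrl+C; keep the partial transcript
    return transcript


# Stubbed reply: agent "B" asks to stop on its first turn.
log = run_conversation(["A", "B"], lambda a, t: ("hi", a == "B"), allow_termination=True)
```

Note that with `allow_termination` disabled, a stop request from an agent is simply ignored and the loop runs on.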

## Output Format

When saving conversations, the output file includes:
- Configuration details for all agents
- Complete conversation history with agent names and messages

Additionally, if the output file has a `.json` extension, the conversation is automatically saved in JSON format.
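Extension-based format selection can be sketched as follows; the helper name `save_log` and the exact field layout are hypothetical, not the tool's actual output schema:

```python
import json
from pathlib import Path


def save_log(path: str, agents: list, messages: list) -> None:
    # Hypothetical helper: JSON when the file ends in .json, plain text otherwise.
    target = Path(path)
    if target.suffix == ".json":
        payload = {"agents": agents, "messages": messages}
        target.write_text(json.dumps(payload, indent=2))
    else:
        lines = [f"{m['name']}: {m['text']}" for m in messages]
        target.write_text("\n".join(lines))
```

Dispatching on `Path.suffix` keeps the choice explicit and easy to extend to other formats later.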

## Contributing

Feel free to submit issues and pull requests for bug fixes or new features. Do keep in mind that this is a hobby project, so please have some patience.

## License

This software is licensed under the GNU Affero General Public License v3.0 or any later version. See [LICENSE](LICENSE) for more details.


            
