lollms-client


Name: lollms-client
Version: 1.4.7
Home page: None
Summary: A client library for LoLLMs generate endpoint
Upload time: 2025-09-10 13:26:42
Maintainer: None
Docs URL: None
Author: None
Requires Python: >=3.7
License: Apache License, Version 2.0 (http://www.apache.org/licenses/LICENSE-2.0)
Requirements: requests, ascii-colors, pillow, pipmaster, pyyaml, tiktoken, pydantic, numpy
# LoLLMs Client Library

[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![PyPI version](https://badge.fury.io/py/lollms_client.svg)](https://badge.fury.io/py/lollms_client)
[![Python Versions](https://img.shields.io/pypi/pyversions/lollms_client.svg)](https://pypi.org/project/lollms-client/)
[![Downloads](https://static.pepy.tech/personalized-badge/lollms-client?period=total&units=international_system&left_color=grey&right_color=green&left_text=Downloads)](https://pepy.tech/project/lollms-client)
[![Documentation - Usage](https://img.shields.io/badge/docs-Usage%20Guide-brightgreen)](DOC_USE.md)
[![Documentation - Developer](https://img.shields.io/badge/docs-Developer%20Guide-blue)](DOC_DEV.md)
[![GitHub stars](https://img.shields.io/github/stars/ParisNeo/lollms_client.svg?style=social&label=Star&maxAge=2592000)](https://github.com/ParisNeo/lollms_client/stargazers/)
[![GitHub issues](https://img.shields.io/github/issues/ParisNeo/lollms_client.svg)](https://github.com/ParisNeo/lollms_client/issues)

**`lollms_client`** is a powerful and flexible Python library designed to simplify interactions with the **LoLLMs (Lord of Large Language Models)** ecosystem and various other Large Language Model (LLM) backends. It provides a unified API for text generation, multimodal operations (text-to-image, text-to-speech, etc.), and robust function calling through the Model Context Protocol (MCP).

Whether you're connecting to a remote LoLLMs server, an Ollama instance, the OpenAI API, or running models locally using GGUF (via `llama-cpp-python` or a managed `llama.cpp` server), Hugging Face Transformers, or vLLM, `lollms-client` offers a consistent and developer-friendly experience.

## Key Features

*   🔌 **Versatile Binding System:** Seamlessly switch between different LLM backends (LoLLMs, Ollama, OpenAI, Llama.cpp, Transformers, vLLM, OpenLLM, Gemini, Claude, Groq, OpenRouter, Hugging Face Inference API) using a unified `llm_binding_config` dictionary for all parameters.
*   🗣️ **Multimodal Support:** Interact with models capable of processing images and generate various outputs like speech (TTS), images (TTI), video (TTV), and music (TTM).
*   🖼️ **Selective Image Activation:** Control which images in a message are active and sent to the model, allowing for fine-grained multimodal context management without deleting the original data.
*   🤖 **Agentic Workflows with MCP:** Empower LLMs to act as sophisticated agents, breaking down complex tasks, selecting and executing external tools (e.g., internet search, code interpreter, file I/O, image generation) through the Model Context Protocol (MCP) using a robust "observe-think-act" loop.
*   🎭 **Personalities as Agents:** Personalities can now define their own set of required tools (MCPs) and have access to static or dynamic knowledge bases (`data_source`), turning them into self-contained, ready-to-use agents.
*   🚀 **Streaming & Callbacks:** Efficiently handle real-time text generation with customizable callback functions across all generation methods, including during agentic (MCP) interactions.
*   📑 **Long Context Processing:** The `long_context_processing` method (formerly `sequential_summarize`) intelligently chunks and synthesizes texts that exceed the model's context window, suitable for summarization or deep analysis (see the sketch after this list).
*   📝 **Advanced Structured Content Generation:** Reliably generate structured JSON output from natural language prompts using the `generate_structured_content` helper method, enforcing a specific schema.
*   💬 **Advanced Discussion Management:** Robustly manage conversation histories with `LollmsDiscussion`, featuring branching, context exporting, and automatic pruning.
*   🧠 **Persistent Memory & Data Zones:** `LollmsDiscussion` now supports multiple, distinct data zones (`user_data_zone`, `discussion_data_zone`, `personality_data_zone`) and a long-term `memory` field. This allows for sophisticated context layering and state management, enabling agents to learn and remember over time.
*   ✍️ **Structured Memorization:** The `memorize()` method analyzes a conversation to extract its essence (e.g., a problem and its solution), creating a structured "memory" with a title and content. These memories are stored and can be explicitly loaded into the AI's context, providing a more robust and manageable long-term memory system.
*   📊 **Detailed Context Analysis:** The `get_context_status()` method provides a rich, detailed breakdown of the prompt context, showing the content and token count for each individual component (system prompt, data zones, message history).
*   ⚙️ **Standardized Configuration Management:** A unified dictionary-based system (`llm_binding_config`) to configure any binding in a consistent manner.
*   🧩 **Extensible:** Designed to easily incorporate new LLM backends and modality services, including custom MCP toolsets.
*   📝 **High-Level Operations:** Includes convenience methods for complex tasks like sequential summarization and deep text analysis directly within `LollmsClient`.
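
The `long_context_processing` helper mentioned above can be used to condense a document that would not fit in the model's context window. Below is a minimal sketch; the call form shown (passing just the raw text) is an assumption, so check the [Usage Guide](DOC_USE.md) for the exact signature and optional parameters.

```python
from lollms_client import LollmsClient
from pathlib import Path

lc = LollmsClient(llm_binding_name="ollama", llm_binding_config={"model_name": "llama3"})

# Load a document that is far larger than the model's context window
long_text = Path("big_report.txt").read_text(encoding="utf-8")

# Assumed call form: chunk the text, process each chunk, and synthesize a final result.
# The exact parameters (e.g., a guiding instruction or chunk size) may differ; see DOC_USE.md.
summary = lc.long_context_processing(long_text)
print(summary)
```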

## Installation

You can install `lollms_client` directly from PyPI:

```bash
pip install lollms-client
```

This will install the core library. Some bindings may require additional dependencies (e.g., `llama-cpp-python`, `torch`, `transformers`, `ollama`, `vllm`, `Pillow` for image utilities, `docling` for document parsing). The library attempts to manage these using `pipmaster`, but for complex dependencies (especially those requiring compilation like `llama-cpp-python` with GPU support), manual installation might be preferred.
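
If you want to check up front which optional backends are usable in your environment, a quick scan like the one below can save a failed binding initialization later. This is a minimal sketch; the module names are simply the usual import names of the optional packages mentioned above.

```python
import importlib.util

# Optional dependencies and the features that typically rely on them
optional_deps = {
    "llama_cpp": "pythonllamacpp binding (local GGUF models)",
    "transformers": "Hugging Face Transformers binding",
    "vllm": "vLLM binding",
    "PIL": "image utilities (Pillow)",
    "docling": "document parsing",
}

for module_name, used_for in optional_deps.items():
    status = "available" if importlib.util.find_spec(module_name) else "missing"
    print(f"{module_name:<13} {status:<10} -> {used_for}")
```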

## Core Generation Methods

The `LollmsClient` provides several methods for generating text, catering to different use cases.

### Basic Text Generation (`generate_text`)

This is the most straightforward method for generating a response based on a simple prompt.

```python
from lollms_client import LollmsClient, MSG_TYPE
from ascii_colors import ASCIIColors
import os

# Callback for streaming output
def simple_streaming_callback(chunk: str, msg_type: MSG_TYPE, params=None, metadata=None) -> bool:
    if msg_type == MSG_TYPE.MSG_TYPE_CHUNK:
        print(chunk, end="", flush=True)
    elif msg_type == MSG_TYPE.MSG_TYPE_EXCEPTION:
        ASCIIColors.error(f"\nStreaming Error: {chunk}")
    return True # True to continue streaming

try:
    # Initialize client to connect to a LoLLMs server.
    # All binding-specific parameters now go into the 'llm_binding_config' dictionary.
    lc = LollmsClient(
        llm_binding_name="lollms", # This is the default binding
        llm_binding_config={
            "host_address": "http://localhost:9642", # Default port for LoLLMs server
            # "service_key": "your_lollms_api_key_here" # Get key from LoLLMs UI -> User Settings if security is enabled
        }
    )

    prompt = "Tell me a fun fact about space."
    ASCIIColors.yellow(f"Prompt: {prompt}")

    # Generate text with streaming
    ASCIIColors.green("Streaming Response:")
    response_text = lc.generate_text(
        prompt,
        n_predict=100,
        stream=True,
        streaming_callback=simple_streaming_callback
    )
    print("\n--- End of Stream ---")

    # The 'response_text' variable will contain the full concatenated text
    # if streaming_callback returns True throughout.
    if isinstance(response_text, str):
        ASCIIColors.cyan(f"\nFull streamed text collected: {response_text[:100]}...")
    elif isinstance(response_text, dict) and "error" in response_text:
        ASCIIColors.error(f"Error during generation: {response_text['error']}")

except ValueError as ve:
    ASCIIColors.error(f"Initialization Error: {ve}")
    ASCIIColors.info("Ensure a LoLLMs server is running or configure another binding.")
except ConnectionRefusedError:
    ASCIIColors.error("Connection refused. Is the LoLLMs server running at http://localhost:9642?")
except Exception as e:
    ASCIIColors.error(f"An unexpected error occurred: {e}")

```

### Generating from Message Lists (`generate_from_messages`)

For more complex conversational interactions, you can provide the LLM with a list of messages, similar to the OpenAI Chat Completion API. This allows you to define roles (system, user, assistant) and build multi-turn conversations programmatically.

```python
from lollms_client import LollmsClient, MSG_TYPE
from ascii_colors import ASCIIColors
import os

def streaming_callback_for_messages(chunk: str, msg_type: MSG_TYPE, params=None, metadata=None) -> bool:
    if msg_type == MSG_TYPE.MSG_TYPE_CHUNK:
        print(chunk, end="", flush=True)
    return True

try:
    # Example for an Ollama binding
    # Ensure you have Ollama installed and model 'llama3' pulled (e.g., ollama pull llama3)
    lc = LollmsClient(
        llm_binding_name="ollama", 
        llm_binding_config={
            "model_name": "llama3",
            "host_address": "http://localhost:11434" # Default Ollama address
        }
    )

    # Define the conversation history as a list of messages
    messages = [
        {"role": "system", "content": "You are a helpful assistant that specializes in programming."},
        {"role": "user", "content": "Hello, what's your name?"},
        {"role": "assistant", "content": "I am an AI assistant created by Google."},
        {"role": "user", "content": "Can you explain recursion in Python?"}
    ]

    ASCIIColors.yellow("\nGenerating response from messages:")
    response_text = lc.generate_from_messages(
        messages=messages,
        n_predict=200,
        stream=True,
        streaming_callback=streaming_callback_for_messages
    )
    print("\n--- End of Message Stream ---")
    ASCIIColors.cyan(f"\nFull collected response: {response_text[:150]}...")

except Exception as e:
    ASCIIColors.error(f"Error during message generation: {e}")

```

### Advanced Structured Content Generation (`generate_structured_content`)

The `generate_structured_content` method is a powerful utility for forcing an LLM's output into a specific JSON format. It's ideal for extracting information, getting consistent tool parameters, or any task requiring reliable, machine-readable output.

```python
from lollms_client import LollmsClient
from ascii_colors import ASCIIColors
import json
import os

try:
    # Using Ollama as an example binding
    lc = LollmsClient(llm_binding_name="ollama", llm_binding_config={"model_name": "llama3"})

    text_block = "John Doe is a 34-year-old software engineer from New York. He loves hiking and Python programming."

    # Define the exact JSON structure you want
    output_template = {
        "full_name": "string",
        "age": "integer",
        "profession": "string",
        "city": "string",
        "hobbies": ["list", "of", "strings"] # Example of a list in schema
    }

    ASCIIColors.yellow(f"\nExtracting structured data from: '{text_block}'")
    ASCIIColors.yellow(f"Using schema: {json.dumps(output_template)}")

    # Generate the structured data
    extracted_data = lc.generate_structured_content(
        prompt=f"Extract the relevant information from the following text:\n\n{text_block}",
        schema=output_template, # Note: parameter is 'schema'
        temperature=0.0 # Use low temperature for deterministic structured output
    )

    if extracted_data:
        ASCIIColors.green("\nExtracted Data (JSON):")
        print(json.dumps(extracted_data, indent=2))
    else:
        ASCIIColors.error("\nFailed to extract structured data.")

except Exception as e:
    ASCIIColors.error(f"An error occurred during structured content generation: {e}")
```

## Advanced Discussion Management

The `LollmsDiscussion` class is a core component for managing conversational state, including message history, long-term memory, and various context zones.

### Basic Chat with `LollmsDiscussion`

For general conversational agents that need to maintain context across turns, `LollmsDiscussion` simplifies the process. It automatically handles message formatting, history management, and context window limitations.

```python
from lollms_client import LollmsClient, LollmsDiscussion, MSG_TYPE, LollmsDataManager
from ascii_colors import ASCIIColors
import os
import tempfile
from pathlib import Path

# Initialize LollmsClient
try:
    lc = LollmsClient(
        llm_binding_name="ollama", 
        llm_binding_config={
            "model_name": "llama3",
            "host_address": "http://localhost:11434"
        }
    )
except Exception as e:
    ASCIIColors.error(f"Failed to initialize LollmsClient for discussion: {e}")
    exit()

# Create a new discussion. For persistent discussions, pass a db_manager.
# Using a temporary directory for the database for this example's simplicity
with tempfile.TemporaryDirectory() as tmpdir:
    db_path = Path(tmpdir) / "discussion_db.sqlite"
    db_manager = LollmsDataManager(f"sqlite:///{db_path}")

    discussion_id = "basic_chat_example"
    discussion = db_manager.get_discussion(lc, discussion_id)
    if not discussion:
        ASCIIColors.yellow(f"\nCreating new discussion '{discussion_id}'...")
        discussion = LollmsDiscussion.create_new(
            lollms_client=lc,
            db_manager=db_manager,
            id=discussion_id,
            autosave=True # Important for persistence
        )
        discussion.system_prompt = "You are a friendly and helpful AI."
        discussion.commit()
    else:
        ASCIIColors.green(f"\nLoaded existing discussion '{discussion_id}'.")


    # Define a simple callback for streaming
    def chat_callback(chunk: str, msg_type: MSG_TYPE, **kwargs) -> bool:
        if msg_type == MSG_TYPE.MSG_TYPE_CHUNK:
            print(chunk, end="", flush=True)
        return True

    try:
        ASCIIColors.cyan("> User: Hello, how are you today?")
        response = discussion.chat(
            user_message="Hello, how are you today?",
            streaming_callback=chat_callback
        )
        print("\n") # Newline after stream finishes

        ai_message = response['ai_message']
        user_message = response['user_message']

        ASCIIColors.green(f"< Assistant (Full): {ai_message.content[:100]}...")

        # Now, continue the conversation
        ASCIIColors.cyan("\n> User: Can you recommend a good book?")
        response = discussion.chat(
            user_message="Can you recommend a good book?",
            streaming_callback=chat_callback
        )
        print("\n")

        # You can inspect the full message history
        ASCIIColors.magenta("\n--- Discussion History (last 3 messages) ---")
        for msg in discussion.get_messages()[-3:]:
            print(f"[{msg.sender.capitalize()}]: {msg.content[:50]}...")

    except Exception as e:
        ASCIIColors.error(f"An error occurred during discussion chat: {e}")
```

### Building Stateful Agents with Memory and Data Zones

The `LollmsDiscussion` class provides a sophisticated system for creating stateful agents that can remember information across conversations. This is achieved through a layered system of "context zones" that are automatically combined into the AI's system prompt.

#### Understanding the Context Zones

The AI's context is more than just chat history. It's built from several distinct components, each with a specific purpose:

*   **`system_prompt`**: The foundational layer defining the AI's core identity, persona, and primary instructions.
*   **`memory`**: The AI's long-term, persistent memory. It stores key facts about the user or topics, built up over time using the `memorize()` method.
*   **`user_data_zone`**: Holds session-specific information about the user's current state or goals (e.g., "User is currently working on 'file.py'").
*   **`discussion_data_zone`**: Contains state or meta-information about the current conversational task (e.g., "Step 1 of the plan is complete").
*   **`personality_data_zone`**: A knowledge base or set of rules automatically injected from a `LollmsPersonality`'s `data_source`.
*   **`pruning_summary`**: An automatic, AI-generated summary of the oldest messages in a very long chat, used to conserve tokens without losing the gist of the early conversation.

The `get_context_status()` method is your window into this system, showing you exactly how these zones are combined and how many tokens they consume.

Let's see this in action with a "Personal Assistant" agent that learns about the user over time.

```python
from lollms_client import LollmsClient, LollmsDataManager, LollmsDiscussion, MSG_TYPE
from ascii_colors import ASCIIColors
import json
import tempfile
import os
from pathlib import Path

# --- 1. Setup a persistent database for our discussion ---
with tempfile.TemporaryDirectory() as tmpdir:
    db_path = Path(tmpdir) / "my_assistant.db"
    db_manager = LollmsDataManager(f"sqlite:///{db_path}")

    try:
        lc = LollmsClient(llm_binding_name="ollama", llm_binding_config={"model_name": "llama3"})
    except Exception as e:
        ASCIIColors.error(f"Failed to initialize LollmsClient for stateful agent: {e}")
        exit()

    # Try to load an existing discussion or create a new one
    discussion_id = "user_assistant_chat_1"
    discussion = db_manager.get_discussion(lc, discussion_id)
    if not discussion:
        ASCIIColors.yellow("Creating a new discussion for stateful agent...")
        discussion = LollmsDiscussion.create_new(
            lollms_client=lc,
            db_manager=db_manager,
            id=discussion_id,
            autosave=True # Important for persistence
        )
        # Let's preset some data in different zones
        discussion.system_prompt = "You are a helpful Personal Assistant."
        discussion.user_data_zone = "User's Name: Alex\nUser's Goal: Learn about AI development."
        discussion.commit()
    else:
        ASCIIColors.green("Loaded existing discussion for stateful agent.")


    def run_chat_turn(prompt: str):
        """Helper function to run a single chat turn and print details."""
        ASCIIColors.cyan(f"\n> User: {prompt}")

        # --- A. Check context status BEFORE the turn using get_context_status() ---
        ASCIIColors.magenta("\n--- Context Status (Before Generation) ---")
        status = discussion.get_context_status()
        print(f"Max Tokens: {status.get('max_tokens')}, Current Tokens: {status.get('current_tokens')}")
        
        # Print the system context details
        if 'system_context' in status['zones']:
            sys_ctx = status['zones']['system_context']
            print(f"  - System Context Tokens: {sys_ctx['tokens']}")
            # The 'breakdown' shows the individual zones that were combined
            for name, content in sys_ctx.get('breakdown', {}).items():
                # For brevity, show only the first line of each zone's content
                first_line = content.splitlines()[0] if content else ""
                print(f"    -> Contains '{name}': {first_line}...")

        # Print the message history details
        if 'message_history' in status['zones']:
            msg_hist = status['zones']['message_history']
            print(f"  - Message History Tokens: {msg_hist['tokens']} ({msg_hist['message_count']} messages)")

        print("------------------------------------------")

        # --- B. Run the chat ---
        ASCIIColors.green("\n< Assistant:")
        response = discussion.chat(
            user_message=prompt,
            streaming_callback=lambda chunk, type, **k: print(chunk, end="", flush=True) if type==MSG_TYPE.MSG_TYPE_CHUNK else None
        )
        print() # Newline after stream

        # --- C. Trigger memorization to update the 'memory' zone ---
        ASCIIColors.yellow("\nTriggering memorization process...")
        discussion.memorize()
        discussion.commit() # Save the new memory to the DB
        ASCIIColors.yellow("Memorization complete.")

    # --- Run a few turns ---
    run_chat_turn("Hi there! Can you recommend a good Python library for building web APIs?")
    run_chat_turn("That sounds great. By the way, my favorite programming language is Rust, I find its safety features amazing.")
    run_chat_turn("What was my favorite programming language again?")

    # --- Final Inspection of Memory ---
    ASCIIColors.magenta("\n--- Final Context Status ---")
    status = discussion.get_context_status()
    print(f"Max Tokens: {status.get('max_tokens')}, Current Tokens: {status.get('current_tokens')}")
    if 'system_context' in status['zones']:
        sys_ctx = status['zones']['system_context']
        print(f"  - System Context Tokens: {sys_ctx['tokens']}")
        for name, content in sys_ctx.get('breakdown', {}).items():
            # Print the full content of the memory zone to verify it was updated
            if name == 'memory':
                ASCIIColors.yellow(f"    -> Full '{name}' content:\n{content}")
            else:
                print(f"    -> Contains '{name}': {content.split(os.linesep)}...")
    print("------------------------------------------")

```

#### How it Works:

1.  **Persistence & Initialization:** The `LollmsDataManager` saves and loads the discussion. We initialize the `system_prompt` and `user_data_zone` to provide initial context.
2.  **`get_context_status()`:** Before each generation, we call this method. The output shows a `system_context` block with a token count for all combined zones and a `breakdown` field that lets us see the content of each individual zone that contributed to it. (An illustrative return shape is sketched after this list.)
3.  **`memorize()`:** After the user mentions their favorite language, `memorize()` is called. The LLM analyzes the last turn, identifies this new, important fact, and appends it to the `discussion.memory` zone.
4.  **Recall:** In the final turn, when asked to recall the favorite language, the AI has access to the updated `memory` content within its system context and can correctly answer "Rust". This demonstrates true long-term, stateful memory.
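
For reference, the dictionary returned by `get_context_status()` has roughly the shape sketched below. The keys mirror those accessed in the example above; the values and token counts are purely illustrative and depend on your model, binding, and discussion state.

```python
# Illustrative only: values are made up and additional keys may be present.
example_status = {
    "max_tokens": 8192,
    "current_tokens": 1432,
    "zones": {
        "system_context": {
            "tokens": 611,
            "breakdown": {
                "system_prompt": "You are a helpful Personal Assistant.",
                "user_data_zone": "User's Name: Alex\nUser's Goal: Learn about AI development.",
                "memory": "Favorite programming language: Rust.",
            },
        },
        "message_history": {
            "tokens": 821,
            "message_count": 6,
            "content": "Rendered text of the messages that will be sent to the model.",
        },
    },
}
```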

### Managing Multimodal Context: Activating and Deactivating Images

When working with multimodal models, you can now control which images in a message are active and sent to the model. This is useful for focusing the AI's attention, saving tokens on expensive vision models, or allowing a user to correct which images are relevant.

This is managed at the `LollmsMessage` level using the `toggle_image_activation()` method.

```python
from lollms_client import LollmsClient, LollmsDiscussion, LollmsDataManager, MSG_TYPE
from ascii_colors import ASCIIColors
import base64
from pathlib import Path
import os
import tempfile

# Helper to create a dummy image b64 string
def create_dummy_image(text, output_dir):
    try:
        from PIL import Image, ImageDraw, ImageFont
    except ImportError:
        ASCIIColors.warning("Pillow not installed. Skipping image example.")
        return None
    
    # Try to find a common font, otherwise use default
    font_path = Path("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf") # Common Linux path
    if not font_path.exists():
        font_path = Path("/Library/Fonts/Arial.ttf") # Common macOS path
    if not font_path.exists():
        font_path = Path("C:/Windows/Fonts/arial.ttf") # Common Windows path
    
    try:
        font = ImageFont.truetype(str(font_path), 15)
    except (IOError, OSError):
        font = ImageFont.load_default() # Fallback to default if font not found

    img = Image.new('RGB', (200, 50), color = (73, 109, 137))
    d = ImageDraw.Draw(img)
    d.text((10,10), text, fill=(255,255,0), font=font)
    
    temp_file = Path(output_dir) / f"temp_img_{text.replace(' ', '_')}.png"
    img.save(temp_file, "PNG")
    b64 = base64.b64encode(temp_file.read_bytes()).decode('utf-8')
    temp_file.unlink() # Clean up temporary file
    return b64

# --- 1. Setup ---
try:
    # Llava is a good multi-modal model for Ollama
    # Ensure Ollama is running and 'llava' model is pulled (e.g., ollama pull llava)
    lc = LollmsClient(llm_binding_name="ollama", llm_binding_config={"model_name": "llava"})
except Exception as e:
    ASCIIColors.warning(f"Failed to initialize LollmsClient for image example: {e}")
    ASCIIColors.warning("Skipping image activation example. Ensure Ollama is running and 'llava' model is pulled.")
    exit()

with tempfile.TemporaryDirectory() as tmpdir:
    db_path = Path(tmpdir) / "image_discussion_db.sqlite"
    db_manager = LollmsDataManager(f"sqlite:///{db_path}")
    discussion = LollmsDiscussion.create_new(lollms_client=lc, db_manager=db_manager)

    # --- 2. Add a message with multiple images ---
    # Ensure Pillow is installed: pip install Pillow
    img1_b64 = create_dummy_image("Image 1: Apple", tmpdir)
    img2_b64 = create_dummy_image("Image 2: Cat", tmpdir)
    img3_b64 = create_dummy_image("Image 3: Dog", tmpdir)

    if not img1_b64 or not img2_b64 or not img3_b64:
        ASCIIColors.warning("Skipping image activation example due to image creation failure (likely missing Pillow or font).")
        exit()

    discussion.add_message(
        sender="user", 
        content="What is in the second image?", 
        images=[img1_b64, img2_b64, img3_b64]
    )
    user_message = discussion.get_messages()[-1]

    # --- 3. Check the initial state ---
    ASCIIColors.magenta("--- Initial State (All 3 Images Active) ---")
    status_before = discussion.get_context_status()
    # The 'content' field for message history will indicate the number of images if present
    print(f"Message History Text (showing active images):\n{status_before['zones']['message_history']['content']}")

    # --- 4. Deactivate irrelevant images ---
    ASCIIColors.magenta("\n--- Deactivating images 1 and 3 ---")
    user_message.toggle_image_activation(index=0, active=False) # Deactivate first image (Apple)
    user_message.toggle_image_activation(index=2, active=False) # Deactivate third image (Dog)
    discussion.commit() # Save changes to the message

    # --- 5. Check the new state ---
    ASCIIColors.magenta("\n--- New State (Only Image 2 is Active) ---")
    status_after = discussion.get_context_status()
    print(f"Message History Text (showing active images):\n{status_after['zones']['message_history']['content']}")

    ASCIIColors.green("\nNotice the message now says '(1 image(s) attached)' instead of 3, and only the active image will be sent to the multimodal LLM.")
    ASCIIColors.green("To confirm, let's ask the model what it sees:")

    # This will send only the activated image
    response = discussion.chat(
        user_message="What do you see in the image(s) attached to my last message?",
        # Use a streaming callback to see the response
        streaming_callback=lambda chunk, type, **k: print(chunk, end="", flush=True) if type==MSG_TYPE.MSG_TYPE_CHUNK else None
    )
    print("\n")
    ASCIIColors.green(f"Assistant's response after toggling images: {response['ai_message'].content}")

```
**Note:** The image generation helper in the example requires `Pillow` (`pip install Pillow`). It also attempts to find common system fonts; if none is found, it falls back to Pillow's default bitmap font, and you can always provide a specific font path instead.

### Putting It All Together: An Advanced Agentic Example

Let's create a **Python Coder Agent**. This agent will use a set of coding rules from a local file as its knowledge base and will be equipped with a tool to execute the code it writes. This demonstrates the synergy between `LollmsPersonality` (with `data_source` and `active_mcps`), `LollmsDiscussion`, and the MCP system.

#### Step 1: Create the Knowledge Base (`coding_rules.txt`)

Create a simple text file with the rules our agent must follow.

```text
# File: coding_rules.txt

1.  All Python functions must include a Google-style docstring.
2.  Use type hints for all function parameters and return values.
3.  The main execution block should be protected by `if __name__ == "__main__":`.
4.  After defining a function, add a simple example of its usage inside the main block.
5.  Print the output of the example usage to the console.
```

#### Step 2: The Main Script (`agent_example.py`)

This script will define the personality, initialize the client, and run the agent.

```python
from pathlib import Path
from lollms_client import LollmsClient, LollmsPersonality, LollmsDiscussion, LollmsDataManager, MSG_TYPE
from ascii_colors import ASCIIColors, trace_exception
import json
import tempfile
import os

# A detailed callback to visualize the agent's process
def agent_callback(chunk: str, msg_type: MSG_TYPE, params: dict = None, **kwargs) -> bool:
    if not params: params = {}
    
    if msg_type == MSG_TYPE.MSG_TYPE_STEP:
        ASCIIColors.yellow(f"\n>> Agent Step: {chunk}")
    elif msg_type == MSG_TYPE.MSG_TYPE_STEP_START:
        ASCIIColors.yellow(f"\n>> Agent Step Start: {chunk}")
    elif msg_type == MSG_TYPE.MSG_TYPE_STEP_END:
        result = params.get('result', '')
        # Only print a snippet of result to avoid overwhelming console for large outputs
        if isinstance(result, dict):
            result_str = json.dumps(result)[:150] + ("..." if len(json.dumps(result)) > 150 else "")
        else:
            result_str = str(result)[:150] + ("..." if len(str(result)) > 150 else "")
        ASCIIColors.green(f"<< Agent Step End: {chunk} -> Result: {result_str}")
    elif msg_type == MSG_TYPE.MSG_TYPE_THOUGHT_CONTENT:
        ASCIIColors.magenta(f"🤔 Agent Thought: {chunk}")
    elif msg_type == MSG_TYPE.MSG_TYPE_TOOL_CALL:
        tool_name = params.get('name', 'unknown_tool')
        tool_params = params.get('parameters', {})
        ASCIIColors.blue(f"🛠️  Agent Action: Called '{tool_name}' with {tool_params}")
    elif msg_type == MSG_TYPE.MSG_TYPE_TOOL_OUTPUT:
        ASCIIColors.cyan(f"👀 Agent Observation (Tool Output): {params.get('result', 'No result')}")
    elif msg_type == MSG_TYPE.MSG_TYPE_CHUNK:
        print(chunk, end="", flush=True) # Final answer stream
    return True

# Create a temporary directory for the discussion DB and coding rules file
with tempfile.TemporaryDirectory() as tmpdir:
    db_path = Path(tmpdir) / "agent_discussion.db"
    
    # Create the coding rules file
    rules_path = Path(tmpdir) / "coding_rules.txt"
    rules_content = """
1.  All Python functions must include a Google-style docstring.
2.  Use type hints for all function parameters and return values.
3.  The main execution block should be protected by `if __name__ == "__main__":`.
4.  After defining a function, add a simple example of its usage inside the main block.
5.  Print the output of the example usage to the console.
"""
    rules_path.write_text(rules_content.strip())
    ASCIIColors.yellow(f"Created temporary coding rules file at: {rules_path}")

    try:
        # --- 1. Load the knowledge base from the file ---
        coding_rules = rules_path.read_text()

        # --- 2. Define the Coder Agent Personality ---
        coder_personality = LollmsPersonality(
            name="Python Coder Agent",
            author="lollms-client",
            category="Coding",
            description="An agent that writes and executes Python code according to specific rules.",
            system_prompt=(
                "You are an expert Python programmer. Your task is to write clean, executable Python code based on the user's request. "
                "You MUST strictly follow all rules provided in the 'Personality Static Data' section. "
                "First, think about the plan. Then, use the `python_code_interpreter` tool to write and execute the code. "
                "Finally, present the code and its output to the user."
            ),
            # A) Attach the static knowledge base
            data_source=coding_rules,
            # B) Equip the agent with a code execution tool
            active_mcps=["python_code_interpreter"]
        )

        # --- 3. Initialize the Client and Discussion ---
        # A code-specialized model is recommended (e.g., codellama, deepseek-coder)
        # Ensure Ollama is running and 'codellama' model is pulled (e.g., ollama pull codellama)
        lc = LollmsClient(
            llm_binding_name="ollama",          
            llm_binding_config={
                "model_name": "codellama",
                "host_address": "http://localhost:11434"
            },
            mcp_binding_name="local_mcp"    # Enable the local tool execution engine
        )
        # For agentic workflows, it's often good to have a persistent discussion
        db_manager = LollmsDataManager(f"sqlite:///{db_path}")
        discussion = LollmsDiscussion.create_new(lollms_client=lc, db_manager=db_manager)
        
        # --- 4. The User's Request ---
        user_prompt = "Write a Python function that takes two numbers and returns their sum."

        ASCIIColors.yellow(f"User Prompt: {user_prompt}")
        print("\n" + "="*50 + "\nAgent is now running...\n" + "="*50)

        # --- 5. Run the Agentic Chat Turn ---
        response = discussion.chat(
            user_message=user_prompt,
            personality=coder_personality,
            streaming_callback=agent_callback,
            max_llm_iterations=5, # Limit iterations for faster demo
            tool_call_decision_temperature=0.0 # Make decision more deterministic
        )

        print("\n\n" + "="*50 + "\nAgent finished.\n" + "="*50)
        
        # --- 6. Inspect the results ---
        ai_message = response['ai_message']
        ASCIIColors.green("\n--- Final Answer from Agent ---")
        print(ai_message.content)
        
        ASCIIColors.magenta("\n--- Tool Calls Made (from metadata) ---")
        if "tool_calls" in ai_message.metadata:
            print(json.dumps(ai_message.metadata["tool_calls"], indent=2))
        else:
            print("No tool calls recorded in message metadata.")

    except Exception as e:
        ASCIIColors.error(f"An error occurred during agent execution: {e}")
        ASCIIColors.warning("Please ensure Ollama is running, 'codellama' model is pulled, and 'local_mcp' binding is available.")
        trace_exception(e) # Provide detailed traceback
```

#### Step 3: What Happens Under the Hood

When you run `agent_example.py`, a sophisticated process unfolds:

1.  **Initialization:** The `LollmsDiscussion.chat()` method is called with the `coder_personality`.
2.  **Knowledge Injection:** The `chat` method sees that `personality.data_source` is a string. It automatically takes the content of `coding_rules.txt` and injects it into the discussion's data zones.
3.  **Tool Activation:** The method also sees `personality.active_mcps`. It enables the `python_code_interpreter` tool for this turn.
4.  **Context Assembly:** The `LollmsClient` assembles a rich prompt for the LLM that includes:
    *   The personality's `system_prompt`.
    *   The content of `coding_rules.txt` (from the data zones).
    *   The list of available tools (including `python_code_interpreter`).
    *   The user's request ("Write a function...").
5.  **Reason and Act:** The LLM, now fully briefed, reasons that it needs to use the `python_code_interpreter` tool. It formulates the Python code *according to the rules it was given*.
6.  **Tool Execution:** The `local_mcp` binding receives the code and executes it in a secure local environment. It captures any output (`stdout`, `stderr`) and results.
7.  **Observation:** The execution results are sent back to the LLM as an "observation."
8.  **Final Synthesis:** The LLM now has the user's request, the rules, the code it wrote, and the code's output. It synthesizes all of this into a final, comprehensive answer for the user.

This example showcases how `lollms-client` allows you to build powerful, knowledgeable, and capable agents by simply composing personalities with data and tools.

## Using LoLLMs Client with Different Bindings

`lollms-client` supports a wide range of LLM backends through its binding system. This section provides practical examples of how to initialize `LollmsClient` for each of the major supported bindings.

### A New Configuration Model

Configuration for all bindings has been unified. Instead of passing parameters like `host_address` or `model_name` directly to the `LollmsClient` constructor, you now pass them inside a single dictionary: `llm_binding_config`.

This approach provides a clean, consistent, and extensible way to manage settings for any backend. Each binding defines its own set of required and optional parameters (e.g., `host_address`, `model_name`, `service_key`, `n_gpu_layers`).

```python
# General configuration pattern
from lollms_client import LollmsClient
# ... other imports as needed

# lc = LollmsClient(
#     llm_binding_name="your_binding_name",
#     llm_binding_config={
#         "parameter_1_for_this_binding": "value_1",
#         "parameter_2_for_this_binding": "value_2",
#         # ... and so on
#     }
# )
```

---

### 1. Core and Local Server Bindings

These bindings connect to servers running on your local network, including the core LoLLMs server itself.

#### **LoLLMs (Default Binding)**

This connects to a running LoLLMs service, which acts as a powerful backend providing access to models, personalities, and tools. This is the default and most feature-rich way to use `lollms-client`.

**Prerequisites:**
*   A LoLLMs server instance installed and running (e.g., `lollms-webui`).
*   An API key can be generated from the LoLLMs web UI (under User Settings -> Security) if security is enabled.

**Usage:**

```python
from lollms_client import LollmsClient
from ascii_colors import ASCIIColors
import os

try:
    # The default port for a LoLLMs server is 9642 (a nod to The Hitchhiker's Guide to the Galaxy).
    # The API key can also be set via the LOLLMS_API_KEY environment variable.
    config = {
        "host_address": "http://localhost:9642",
        # "service_key": "your_lollms_api_key_here" # Uncomment and replace if security is enabled
    }

    lc = LollmsClient(
        llm_binding_name="lollms", # This is the default, so specifying it is optional
        llm_binding_config=config
    )

    response = lc.generate_text("What is the answer to life, the universe, and everything?")
    ASCIIColors.green(f"\nResponse from LoLLMs: {response}")

except ConnectionRefusedError:
    ASCIIColors.error("Connection refused. Is the LoLLMs server running at http://localhost:9642?")
except ValueError as ve:
    ASCIIColors.error(f"Initialization Error: {ve}")
except Exception as e:
    ASCIIColors.error(f"An unexpected error occurred: {e}")
```

#### **Ollama**

The `ollama` binding connects to a running Ollama server instance on your machine or network.

**Prerequisites:**
*   [Ollama installed and running](https://ollama.com/).
*   Models pulled, e.g., `ollama pull llama3`.

**Usage:**

```python
from lollms_client import LollmsClient
from ascii_colors import ASCIIColors
import os

try:
    # Configuration for a local Ollama server
    lc = LollmsClient(
        llm_binding_name="ollama",
        llm_binding_config={
            "model_name": "llama3",  # Or any other model you have pulled
            "host_address": "http://localhost:11434" # Default Ollama address
        }
    )

    # Now you can use lc.generate_text(), lc.chat(), etc.
    response = lc.generate_text("Why is the sky blue?")
    ASCIIColors.green(f"\nResponse from Ollama: {response}")

except Exception as e:
    ASCIIColors.error(f"Error initializing Ollama binding: {e}")
    ASCIIColors.info("Please ensure Ollama is installed, running, and the specified model is pulled.")
```

#### **PythonLlamaCpp (Local GGUF Models)**

The `pythonllamacpp` binding loads and runs GGUF model files directly using the powerful `llama-cpp-python` library. This is ideal for high-performance, local inference on CPU or GPU.

**Prerequisites:**
*   A GGUF model file downloaded to your machine.
*   `llama-cpp-python` installed. For GPU support, it must be compiled with the correct flags (e.g., `CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python`).

**Usage:**

```python
from lollms_client import LollmsClient
from ascii_colors import ASCIIColors
import os
from pathlib import Path

# Path to your GGUF model file
# IMPORTANT: Replace this with the actual path to your model file
# Example: MODEL_PATH = Path.home() / "models" / "your_model_name.gguf"
MODEL_PATH = Path("./path/to/your/model.gguf") 

# Binding-specific configuration
config = {
    "model_path": str(MODEL_PATH), # The path to the GGUF file
    "n_gpu_layers": -1,       # -1 for all layers to GPU, 0 for CPU
    "n_ctx": 4096,            # Context size
    "seed": -1,               # -1 for random seed
    "chat_format": "chatml"   # Or another format like 'llama-2' or 'mistral'
}

if not MODEL_PATH.exists():
    ASCIIColors.warning(f"Model file not found at: {MODEL_PATH}")
    ASCIIColors.warning("Skipping PythonLlamaCpp example. Please download a GGUF model and update MODEL_PATH.")
else:
    try:
        lc = LollmsClient(
            llm_binding_name="pythonllamacpp",
            llm_binding_config=config
        )

        response = lc.generate_text("Write a recipe for a great day.")
        ASCIIColors.green(f"\nResponse from PythonLlamaCpp: {response}")

    except ImportError:
        ASCIIColors.error("`llama-cpp-python` not installed. Please install it (`pip install llama-cpp-python`) to run this example.")
    except Exception as e:
        ASCIIColors.error(f"Error initializing PythonLlamaCpp binding: {e}")
        ASCIIColors.info("Please ensure the model path is correct and `llama-cpp-python` is correctly installed (with GPU support if desired).")

```

---

### 2. Cloud Service Bindings

These bindings connect to hosted LLM APIs from major providers.

#### **OpenAI**

Connects to the official OpenAI API to use models like GPT-4o, GPT-4, and GPT-3.5.

**Prerequisites:**
*   An OpenAI API key (starts with `sk-...`). It's recommended to set this as an environment variable `OPENAI_API_KEY`.

**Usage:**

```python
from lollms_client import LollmsClient
from ascii_colors import ASCIIColors
import os

# Set your API key as an environment variable or directly in the config
# os.environ["OPENAI_API_KEY"] = "your_openai_api_key_here"

try:
    if "OPENAI_API_KEY" not in os.environ and "your_openai_api_key_here" in "your_openai_api_key_here":
        ASCIIColors.warning("OPENAI_API_KEY not set in environment or hardcoded. Skipping OpenAI example.")
    else:
        lc = LollmsClient(
            llm_binding_name="openai",
            llm_binding_config={
                "model_name": "gpt-4o", # Or "gpt-3.5-turbo"
                # "service_key": os.environ.get("OPENAI_API_KEY", "your_openai_api_key_here") 
                # ^ service_key is optional if OPENAI_API_KEY env var is set
            }
        )

        response = lc.generate_text("What is the difference between AI and machine learning?")
        ASCIIColors.green(f"\nResponse from OpenAI: {response}")

except Exception as e:
    ASCIIColors.error(f"Error initializing OpenAI binding: {e}")
    ASCIIColors.info("Please ensure your OpenAI API key is correctly set and you have access to the specified model.")
```

#### **Google Gemini**

Connects to Google's Gemini family of models via the Google AI Studio API.

**Prerequisites:**
*   A Google AI Studio API key. It's recommended to set this as an environment variable `GEMINI_API_KEY`.

**Usage:**

```python
from lollms_client import LollmsClient
from ascii_colors import ASCIIColors
import os

# Set your API key as an environment variable or directly in the config
# os.environ["GEMINI_API_KEY"] = "your_google_api_key_here"

try:
    if "GEMINI_API_KEY" not in os.environ and "your_google_api_key_here" in "your_google_api_key_here":
        ASCIIColors.warning("GEMINI_API_KEY not set in environment or hardcoded. Skipping Gemini example.")
    else:
        lc = LollmsClient(
            llm_binding_name="gemini",
            llm_binding_config={
                "model_name": "gemini-1.5-pro-latest",
                # "service_key": os.environ.get("GEMINI_API_KEY", "your_google_api_key_here")
            }
        )

        response = lc.generate_text("Summarize the plot of 'Dune' in three sentences.")
        ASCIIColors.green(f"\nResponse from Gemini: {response}")

except Exception as e:
    ASCIIColors.error(f"Error initializing Gemini binding: {e}")
    ASCIIColors.info("Please ensure your Google AI Studio API key is correctly set and you have access to the specified model.")
```

#### **Anthropic Claude**

Connects to Anthropic's API to use the Claude family of models, including Claude 3.5 Sonnet, Opus, and Haiku.

**Prerequisites:**
*   An Anthropic API key. It's recommended to set this as an environment variable `ANTHROPIC_API_KEY`.

**Usage:**

```python
from lollms_client import LollmsClient
from ascii_colors import ASCIIColors
import os

# Set your API key as an environment variable or directly in the config
# os.environ["ANTHROPIC_API_KEY"] = "your_anthropic_api_key_here"

try:
    if "ANTHROPIC_API_KEY" not in os.environ and "your_anthropic_api_key_here" in "your_anthropic_api_key_here":
        ASCIIColors.warning("ANTHROPIC_API_KEY not set in environment or hardcoded. Skipping Claude example.")
    else:
        lc = LollmsClient(
            llm_binding_name="claude",
            llm_binding_config={
                "model_name": "claude-3-5-sonnet-20240620",
                # "service_key": os.environ.get("ANTHROPIC_API_KEY", "your_anthropic_api_key_here")
            }
        )

        response = lc.generate_text("What are the core principles of constitutional AI?")
        ASCIIColors.green(f"\nResponse from Claude: {response}")

except Exception as e:
    ASCIIColors.error(f"Error initializing Claude binding: {e}")
    ASCIIColors.info("Please ensure your Anthropic API key is correctly set and you have access to the specified model.")
```

---

### 3. API Aggregator Bindings

These bindings connect to services that provide access to many different models through a single API.

#### **OpenRouter**

OpenRouter provides a unified, OpenAI-compatible interface to access models from dozens of providers (Google, Anthropic, Mistral, Groq, etc.) with one API key.

**Prerequisites:**
*   An OpenRouter API key (starts with `sk-or-...`). It's recommended to set this as an environment variable `OPENROUTER_API_KEY`.

**Usage:**
Model names must be specified in the format `provider/model-name`.

```python
from lollms_client import LollmsClient
from ascii_colors import ASCIIColors
import os

# Set your API key as an environment variable or directly in the config
# os.environ["OPENROUTER_API_KEY"] = "your_openrouter_api_key_here"

try:
    if "OPENROUTER_API_KEY" not in os.environ and "your_openrouter_api_key_here" in "your_openrouter_api_key_here":
        ASCIIColors.warning("OPENROUTER_API_KEY not set in environment or hardcoded. Skipping OpenRouter example.")
    else:
        lc = LollmsClient(
            llm_binding_name="open_router",
            llm_binding_config={
                "model_name": "anthropic/claude-3-haiku-20240307",
                # "open_router_api_key": os.environ.get("OPENROUTER_API_KEY", "your_openrouter_api_key_here")
            }
        )

        response = lc.generate_text("Explain what an API aggregator is, as if to a beginner.")
        ASCIIColors.green(f"\nResponse from OpenRouter: {response}")

except Exception as e:
    ASCIIColors.error(f"Error initializing OpenRouter binding: {e}")
    ASCIIColors.info("Please ensure your OpenRouter API key is correctly set and you have access to the specified model.")
```

#### **Groq**

Groq is technically a direct provider rather than an aggregator, but it is listed here because it serves many popular open-source models through a single API, running them on custom LPU hardware for exceptionally fast inference.

**Prerequisites:**
*   A Groq API key. It's recommended to set this as an environment variable `GROQ_API_KEY`.

**Usage:**

```python
from lollms_client import LollmsClient
from ascii_colors import ASCIIColors
import os

# Set your API key as an environment variable or directly in the config
# os.environ["GROQ_API_KEY"] = "your_groq_api_key_here"

try:
    if "GROQ_API_KEY" not in os.environ and "your_groq_api_key_here" in "your_groq_api_key_here":
        ASCIIColors.warning("GROQ_API_KEY not set in environment or hardcoded. Skipping Groq example.")
    else:
        lc = LollmsClient(
            llm_binding_name="groq",
            llm_binding_config={
                "model_name": "llama3-8b-8192", # Or "mixtral-8x7b-32768"
                # "groq_api_key": os.environ.get("GROQ_API_KEY", "your_groq_api_key_here")
            }
        )

        response = lc.generate_text("Write a 3-line poem about incredible speed.")
        ASCIIColors.green(f"\nResponse from Groq: {response}")

except Exception as e:
    ASCIIColors.error(f"Error initializing Groq binding: {e}")
    ASCIIColors.info("Please ensure your Groq API key is correctly set and you have access to the specified model.")
```

#### **Hugging Face Inference API**

This connects to the serverless Hugging Face Inference API, allowing experimentation with thousands of open-source models without local hardware.

**Note:** This API can have "cold starts," so the first request might be slow.

**Prerequisites:**
*   A Hugging Face User Access Token (starts with `hf_...`). It's recommended to set this as an environment variable `HF_API_KEY`.

**Usage:**

```python
from lollms_client import LollmsClient
from ascii_colors import ASCIIColors
import os

# Set your API key as an environment variable or directly in the config
# os.environ["HF_API_KEY"] = "your_hugging_face_token_here"

try:
    if "HF_API_KEY" not in os.environ and "your_hugging_face_token_here" in "your_hugging_face_token_here":
        ASCIIColors.warning("HF_API_KEY not set in environment or hardcoded. Skipping Hugging Face Inference API example.")
    else:
        lc = LollmsClient(
            llm_binding_name="hugging_face_inference_api",
            llm_binding_config={
                "model_name": "google/gemma-1.1-7b-it", # Or other suitable models from HF
                # "hf_api_key": os.environ.get("HF_API_KEY", "your_hugging_face_token_here")
            }
        )

        response = lc.generate_text("Write a short story about a robot who discovers music.")
        ASCIIColors.green(f"\nResponse from Hugging Face: {response}")

except Exception as e:
    ASCIIColors.error(f"Error initializing Hugging Face Inference API binding: {e}")
    ASCIIColors.info("Please ensure your Hugging Face API token is correctly set and you have access to the specified model.")
```
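
Because of cold starts, the very first request to a given model can take a long time or fail with a timeout. The snippet below is a minimal sketch of a retry wrapper around `generate_text`; the retry count, delay, and the broad `Exception` catch are illustrative assumptions rather than part of the `lollms-client` API.

```python
import time

from lollms_client import LollmsClient
from ascii_colors import ASCIIColors

def generate_with_retry(lc: LollmsClient, prompt: str, retries: int = 3, delay_s: float = 20.0) -> str:
    """Retry a generation a few times to ride out Inference API cold starts."""
    last_error = None
    for attempt in range(1, retries + 1):
        try:
            return lc.generate_text(prompt)
        except Exception as e:  # the exact exception type depends on the binding
            last_error = e
            ASCIIColors.warning(f"Attempt {attempt}/{retries} failed ({e}); retrying in {delay_s}s...")
            time.sleep(delay_s)
    raise RuntimeError(f"Generation still failing after {retries} attempts (model may still be loading): {last_error}")

# Usage, assuming 'lc' is the Hugging Face client from the example above:
# response = generate_with_retry(lc, "Write a short story about a robot who discovers music.")
```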

### Listing Available Models

You can query the active LLM binding to get a list of models it supports or has available. The exact information returned depends on the binding (e.g., Ollama lists local models, OpenAI lists all its API models).

```python
from lollms_client import LollmsClient
from ascii_colors import ASCIIColors
import os

try:
    # Initialize client for Ollama (or any other binding)
    lc = LollmsClient(
        llm_binding_name="ollama",
        llm_binding_config={
            "host_address": "http://localhost:11434"
            # model_name is not needed just to list models
        }
    )

    ASCIIColors.yellow("\nListing available models for the current binding:")
    available_models = lc.listModels()

    if isinstance(available_models, list):
        for model in available_models:
            # The entry structure varies by binding; 'name' is a common field
            model_name = model.get('name', 'N/A') if isinstance(model, dict) else str(model)
            model_size = model.get('size', 'N/A') if isinstance(model, dict) else 'N/A'  # 'size' is common for Ollama
            print(f"- {model_name} (Size: {model_size})")
    elif isinstance(available_models, dict) and "error" in available_models:
        ASCIIColors.error(f"Error listing models: {available_models['error']}")
    else:
        print("Could not retrieve model list or unexpected format.")

except Exception as e:
    ASCIIColors.error(f"An error occurred: {e}")

```
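
Because the structure of each entry differs from binding to binding, it can help to normalize the result into plain model names before picking one. This is a small, hedged sketch: the fallback field names (such as `model_name`) and the re-initialization pattern are assumptions used to illustrate the idea, not documented behavior.

```python
from lollms_client import LollmsClient

def extract_model_names(models) -> list:
    """Flatten listModels() output into a list of model name strings."""
    names = []
    for entry in models or []:
        if isinstance(entry, dict):
            # Different bindings may use different keys for the model identifier
            names.append(entry.get("name") or entry.get("model_name") or "unknown")
        else:
            names.append(str(entry))
    return names

# Example: pick the first available Ollama model and use it for generation
# names = extract_model_names(lc.listModels())
# if names:
#     lc = LollmsClient(
#         llm_binding_name="ollama",
#         llm_binding_config={"host_address": "http://localhost:11434", "model_name": names[0]},
#     )
#     print(lc.generate_text("Say hello!"))
```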

### Long Context Processing (`long_context_processing`)

When a document, article, or transcript is too large to fit into a model's context window, use the `long_context_processing` method. It chunks the text, summarizes or processes each chunk, and then synthesizes the partial results into a final, coherent output.

```python
from lollms_client import LollmsClient, MSG_TYPE
from ascii_colors import ASCIIColors
import os

# --- A very long text (imagine this is 10,000+ tokens) ---
long_text = """
The history of computing is a fascinating journey from mechanical contraptions to the powerful devices we use today. 
It began with devices like the abacus, used for arithmetic tasks. In the 19th century, Charles Babbage conceived 
the Analytical Engine, a mechanical computer that was never fully built but laid the groundwork for modern computing. 
Ada Lovelace, daughter of Lord Byron, is often credited as the first computer programmer for her work on Babbage's Engine.
The 20th century saw the rise of electronic computers, starting with vacuum tubes and progressing to transistors and integrated circuits. 
Early computers like ENIAC were massive machines, but technological advancements rapidly led to smaller, more powerful, and more accessible devices.
The invention of the microprocessor in 1971 by Intel's Ted Hoff was a pivotal moment, leading to the personal computer revolution. 
Companies like Apple and Microsoft brought computing to the masses. The internet, initially ARPANET, transformed communication and information access globally.
In recent decades, cloud computing, big data, and artificial intelligence have become dominant themes. AI, particularly machine learning and deep learning, 
has enabled breakthroughs in areas like image recognition, natural language processing, and autonomous systems.
Today, a new revolution is on the horizon with quantum computing, which promises to solve problems that are currently intractable 
for even the most powerful supercomputers. Researchers are exploring qubits and quantum entanglement to create 
machines that will redefine what is computationally possible, impacting fields from medicine to materials science.
This continuous evolution demonstrates humanity's relentless pursuit of greater computational power and intelligence.
""" * 10 # Simulate a very long text (repeated 10 times)

# --- Callback to see the process in action ---
def lcp_callback(chunk: str, msg_type: MSG_TYPE, params: dict = None, **kwargs):
    if msg_type in [MSG_TYPE.MSG_TYPE_STEP_START, MSG_TYPE.MSG_TYPE_STEP_END]:
        ASCIIColors.yellow(f">> {chunk}")
    elif msg_type == MSG_TYPE.MSG_TYPE_STEP:
        ASCIIColors.cyan(f"   {chunk}")
    elif msg_type == MSG_TYPE.MSG_TYPE_CHUNK:
        # Suppress intermediate chunk output to keep the console readable;
        # the final synthesized result is returned by long_context_processing()
        pass
    return True

try:
    lc = LollmsClient(llm_binding_name="ollama", llm_binding_config={"model_name": "llama3"})

    # The contextual prompt guides the focus of the processing
    context_prompt = "Summarize the text, focusing on the key technological milestones, notable figures, and future directions in computing history."

    ASCIIColors.blue("--- Starting Long Context Processing (Summarization) ---")
    
    final_summary = lc.long_context_processing(
        text_to_process=long_text,
        contextual_prompt=context_prompt,
        chunk_size_tokens=1000, # Adjust based on your model's context size
        overlap_tokens=200,
        streaming_callback=lcp_callback,
        temperature=0.1 # Good for factual summarization
    )
    
    ASCIIColors.blue("\n--- Final Comprehensive Summary ---")
    ASCIIColors.green(final_summary)

except Exception as e:
    ASCIIColors.error(f"An error occurred during long context processing: {e}")
```
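
Choosing `chunk_size_tokens` and `overlap_tokens` depends on your model's context window: each chunk plus the contextual prompt and the per-chunk output must fit within it. The helper below is a rough, hypothetical heuristic; the reserved token counts and the 20% overlap are assumptions you should tune for your model and task.

```python
def suggest_chunk_sizes(context_size: int,
                        reserved_for_prompt: int = 512,
                        reserved_for_output: int = 512,
                        overlap_ratio: float = 0.2) -> tuple:
    """Rough heuristic for chunk_size_tokens / overlap_tokens given a context window size."""
    # Leave room for the contextual prompt and the generated summary of each chunk
    usable = max(context_size - reserved_for_prompt - reserved_for_output, 256)
    overlap = int(usable * overlap_ratio)
    return usable, overlap

# Example: for a model with an 8192-token context window
# chunk_size_tokens, overlap_tokens = suggest_chunk_sizes(8192)  # -> (7168, 1433)
```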

## Contributing

Contributions are welcome! Whether it's bug reports, feature suggestions, documentation improvements, or new bindings, please feel free to open an issue or submit a pull request on our [GitHub repository](https://github.com/ParisNeo/lollms_client).

## License

This project is licensed under the **Apache 2.0 License**. See the [LICENSE](LICENSE) file for details.

## Changelog

For a list of changes and updates, please refer to the [CHANGELOG.md](CHANGELOG.md) file.
The exact information returned depends on the binding (e.g., Ollama lists local models, OpenAI lists all its API models).\r\n\r\n```python\r\nfrom lollms_client import LollmsClient\r\nfrom ascii_colors import ASCIIColors\r\nimport os\r\n\r\ntry:\r\n    # Initialize client for Ollama (or any other binding)\r\n    lc = LollmsClient(\r\n        llm_binding_name=\"ollama\",\r\n        llm_binding_config={\r\n            \"host_address\": \"http://localhost:11434\"\r\n            # model_name is not needed just to list models\r\n        }\r\n    )\r\n\r\n    ASCIIColors.yellow(\"\\nListing available models for the current binding:\")\r\n    available_models = lc.listModels()\r\n\r\n    if isinstance(available_models, list):\r\n        for model in available_models:\r\n            # Model structure varies by binding, common fields are 'name'\r\n            model_name = model.get('name', 'N/A')\r\n            model_size = model.get('size', 'N/A') # Common for Ollama\r\n            print(f\"- {model_name} (Size: {model_size})\")\r\n    elif isinstance(available_models, dict) and \"error\" in available_models:\r\n        ASCIIColors.error(f\"Error listing models: {available_models['error']}\")\r\n    else:\r\n        print(\"Could not retrieve model list or unexpected format.\")\r\n\r\nexcept Exception as e:\r\n    ASCIIColors.error(f\"An error occurred: {e}\")\r\n\r\n```\r\n\r\n### Long Context Processing for Long Texts (`long_context_processing`)\r\n\r\nWhen dealing with a document, article, or transcript that is too large to fit into a model's context window, the `long_context_processing` method is the solution. It intelligently chunks the text, summarizes or processes each piece, and then synthesizes those into a final, coherent output.\r\n\r\n```python\r\nfrom lollms_client import LollmsClient, MSG_TYPE\r\nfrom ascii_colors import ASCIIColors\r\nimport os\r\n\r\n# --- A very long text (imagine this is 10,000+ tokens) ---\r\nlong_text = \"\"\"\r\nThe history of computing is a fascinating journey from mechanical contraptions to the powerful devices we use today. \r\nIt began with devices like the abacus, used for arithmetic tasks. In the 19th century, Charles Babbage conceived \r\nthe Analytical Engine, a mechanical computer that was never fully built but laid the groundwork for modern computing. \r\nAda Lovelace, daughter of Lord Byron, is often credited as the first computer programmer for her work on Babbage's Engine.\r\nThe 20th century saw the rise of electronic computers, starting with vacuum tubes and progressing to transistors and integrated circuits. \r\nEarly computers like ENIAC were massive machines, but technological advancements rapidly led to smaller, more powerful, and more accessible devices.\r\nThe invention of the microprocessor in 1971 by Intel's Ted Hoff was a pivotal moment, leading to the personal computer revolution. \r\nCompanies like Apple and Microsoft brought computing to the masses. The internet, initially ARPANET, transformed communication and information access globally.\r\nIn recent decades, cloud computing, big data, and artificial intelligence have become dominant themes. AI, particularly machine learning and deep learning, \r\nhas enabled breakthroughs in areas like image recognition, natural language processing, and autonomous systems.\r\nToday, a new revolution is on the horizon with quantum computing, which promises to solve problems that are currently intractable \r\nfor even the most powerful supercomputers. 
Researchers are exploring qubits and quantum entanglement to create \r\nmachines that will redefine what is computationally possible, impacting fields from medicine to materials science.\r\nThis continuous evolution demonstrates humanity's relentless pursuit of greater computational power and intelligence.\r\n\"\"\" * 10 # Simulate a very long text (repeated 10 times)\r\n\r\n# --- Callback to see the process in action ---\r\ndef lcp_callback(chunk: str, msg_type: MSG_TYPE, params: dict = None, **kwargs):\r\n    if msg_type in [MSG_TYPE.MSG_TYPE_STEP_START, MSG_TYPE.MSG_TYPE_STEP_END]:\r\n        ASCIIColors.yellow(f\">> {chunk}\")\r\n    elif msg_type == MSG_TYPE.MSG_TYPE_STEP:\r\n        ASCIIColors.cyan(f\"   {chunk}\")\r\n    elif msg_type == MSG_TYPE.MSG_TYPE_CHUNK:\r\n        # Only print final answer chunks, not internal step chunks\r\n        pass\r\n    return True\r\n\r\ntry:\r\n    lc = LollmsClient(llm_binding_name=\"ollama\", llm_binding_config={\"model_name\": \"llama3\"})\r\n\r\n    # The contextual prompt guides the focus of the processing\r\n    context_prompt = \"Summarize the text, focusing on the key technological milestones, notable figures, and future directions in computing history.\"\r\n\r\n    ASCIIColors.blue(\"--- Starting Long Context Processing (Summarization) ---\")\r\n    \r\n    final_summary = lc.long_context_processing(\r\n        text_to_process=long_text,\r\n        contextual_prompt=context_prompt,\r\n        chunk_size_tokens=1000, # Adjust based on your model's context size\r\n        overlap_tokens=200,\r\n        streaming_callback=lcp_callback,\r\n        temperature=0.1 # Good for factual summarization\r\n    )\r\n    \r\n    ASCIIColors.blue(\"\\n--- Final Comprehensive Summary ---\")\r\n    ASCIIColors.green(final_summary)\r\n\r\nexcept Exception as e:\r\n    ASCIIColors.error(f\"An error occurred during long context processing: {e}\")\r\n```\r\n\r\n## Contributing\r\n\r\nContributions are welcome! Whether it's bug reports, feature suggestions, documentation improvements, or new bindings, please feel free to open an issue or submit a pull request on our [GitHub repository](https://github.com/ParisNeo/lollms_client).\r\n\r\n## License\r\n\r\nThis project is licensed under the **Apache 2.0 License**. See the [LICENSE](LICENSE) file for details.\r\n\r\n## Changelog\r\n\r\nFor a list of changes and updates, please refer to the [CHANGELOG.md](CHANGELOG.md) file.\r\n",
    "bugtrack_url": null,
    "license": "Apache License\r\n                                   Version 2.0, January 2004\r\n                                http://www.apache.org/licenses/\r\n        \r\n           TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\r\n        \r\n           1. Definitions.\r\n        \r\n              \"License\" shall mean the terms and conditions for use, reproduction,\r\n              and distribution as defined by Sections 1 through 9 of this document.\r\n        \r\n              \"Licensor\" shall mean the copyright owner or entity authorized by\r\n              the copyright owner that is granting the License.\r\n        \r\n              \"Legal Entity\" shall mean the union of the acting entity and all\r\n              other entities that control, are controlled by, or are under common\r\n              control with that entity. For the purposes of this definition,\r\n              \"control\" means (i) the power, direct or indirect, to cause the\r\n              direction or management of such entity, whether by contract or\r\n              otherwise, or (ii) ownership of fifty percent (50%) or more of the\r\n              outstanding shares, or (iii) beneficial ownership of such entity.\r\n        \r\n              \"You\" (or \"Your\") shall mean an individual or Legal Entity\r\n              exercising permissions granted by this License.\r\n        \r\n              \"Source\" form shall mean the preferred form for making modifications,\r\n              including but not limited to software source code, documentation\r\n              source, and configuration files.\r\n        \r\n              \"Object\" form shall mean any form resulting from mechanical\r\n              transformation or translation of a Source form, including but\r\n              not limited to compiled object code, generated documentation,\r\n              and conversions to other media types.\r\n        \r\n              \"Work\" shall mean the work of authorship, whether in Source or\r\n              Object form, made available under the License, as indicated by a\r\n              copyright notice that is included in or attached to the work\r\n              (an example is provided in the Appendix below).\r\n        \r\n              \"Derivative Works\" shall mean any work, whether in Source or Object\r\n              form, that is based on (or derived from) the Work and for which the\r\n              editorial revisions, annotations, elaborations, or other modifications\r\n              represent, as a whole, an original work of authorship. For the purposes\r\n              of this License, Derivative Works shall not include works that remain\r\n              separable from, or merely link (or bind by name) to the interfaces of,\r\n              the Work and Derivative Works thereof.\r\n        \r\n              \"Contribution\" shall mean any work of authorship, including\r\n              the original version of the Work and any modifications or additions\r\n              to that Work or Derivative Works thereof, that is intentionally\r\n              submitted to Licensor for inclusion in the Work by the copyright owner\r\n              or by an individual or Legal Entity authorized to submit on behalf of\r\n              the copyright owner. 
For the purposes of this definition, \"submitted\"\r\n              means any form of electronic, verbal, or written communication sent\r\n              to the Licensor or its representatives, including but not limited to\r\n              communication on electronic mailing lists, source code control systems,\r\n              and issue tracking systems that are managed by, or on behalf of, the\r\n              Licensor for the purpose of discussing and improving the Work, but\r\n              excluding communication that is conspicuously marked or otherwise\r\n              designated in writing by the copyright owner as \"Not a Contribution.\"\r\n        \r\n              \"Contributor\" shall mean Licensor and any individual or Legal Entity\r\n              on behalf of whom a Contribution has been received by Licensor and\r\n              subsequently incorporated within the Work.\r\n        \r\n           2. Grant of Copyright License. Subject to the terms and conditions of\r\n              this License, each Contributor hereby grants to You a perpetual,\r\n              worldwide, non-exclusive, no-charge, royalty-free, irrevocable\r\n              copyright license to reproduce, prepare Derivative Works of,\r\n              publicly display, publicly perform, sublicense, and distribute the\r\n              Work and such Derivative Works in Source or Object form.\r\n        \r\n           3. Grant of Patent License. Subject to the terms and conditions of\r\n              this License, each Contributor hereby grants to You a perpetual,\r\n              worldwide, non-exclusive, no-charge, royalty-free, irrevocable\r\n              (except as stated in this section) patent license to make, have made,\r\n              use, offer to sell, sell, import, and otherwise transfer the Work,\r\n              where such license applies only to those patent claims licensable\r\n              by such Contributor that are necessarily infringed by their\r\n              Contribution(s) alone or by combination of their Contribution(s)\r\n              with the Work to which such Contribution(s) was submitted. If You\r\n              institute patent litigation against any entity (including a\r\n              cross-claim or counterclaim in a lawsuit) alleging that the Work\r\n              or a Contribution incorporated within the Work constitutes direct\r\n              or contributory patent infringement, then any patent licenses\r\n              granted to You under this License for that Work shall terminate\r\n              as of the date such litigation is filed.\r\n        \r\n           4. Redistribution. 
You may reproduce and distribute copies of the\r\n              Work or Derivative Works thereof in any medium, with or without\r\n              modifications, and in Source or Object form, provided that You\r\n              meet the following conditions:\r\n        \r\n              (a) You must give any other recipients of the Work or\r\n                  Derivative Works a copy of this License; and\r\n        \r\n              (b) You must cause any modified files to carry prominent notices\r\n                  stating that You changed the files; and\r\n        \r\n              (c) You must retain, in the Source form of any Derivative Works\r\n                  that You distribute, all copyright, patent, trademark, and\r\n                  attribution notices from the Source form of the Work,\r\n                  excluding those notices that do not pertain to any part of\r\n                  the Derivative Works; and\r\n        \r\n              (d) If the Work includes a \"NOTICE\" text file as part of its\r\n                  distribution, then any Derivative Works that You distribute must\r\n                  include a readable copy of the attribution notices contained\r\n                  within such NOTICE file, excluding those notices that do not\r\n                  pertain to any part of the Derivative Works, in at least one\r\n                  of the following places: within a NOTICE text file distributed\r\n                  as part of the Derivative Works; within the Source form or\r\n                  documentation, if provided along with the Derivative Works; or,\r\n                  within a display generated by the Derivative Works, if and\r\n                  wherever such third-party notices normally appear. The contents\r\n                  of the NOTICE file are for informational purposes only and\r\n                  do not modify the License. You may add Your own attribution\r\n                  notices within Derivative Works that You distribute, alongside\r\n                  or as an addendum to the NOTICE text from the Work, provided\r\n                  that such additional attribution notices cannot be construed\r\n                  as modifying the License.\r\n        \r\n              You may add Your own copyright statement to Your modifications and\r\n              may provide additional or different license terms and conditions\r\n              for use, reproduction, or distribution of Your modifications, or\r\n              for any such Derivative Works as a whole, provided Your use,\r\n              reproduction, and distribution of the Work otherwise complies with\r\n              the conditions stated in this License.\r\n        \r\n           5. Submission of Contributions. Unless You explicitly state otherwise,\r\n              any Contribution intentionally submitted for inclusion in the Work\r\n              by You to the Licensor shall be under the terms and conditions of\r\n              this License, without any additional terms or conditions.\r\n              Notwithstanding the above, nothing herein shall supersede or modify\r\n              the terms of any separate license agreement you may have executed\r\n              with Licensor regarding such Contributions.\r\n        \r\n           6. Trademarks. 
This License does not grant permission to use the trade\r\n              names, trademarks, service marks, or product names of the Licensor,\r\n              except as required for reasonable and customary use in describing the\r\n              origin of the Work and reproducing the content of the NOTICE file.\r\n        \r\n           7. Disclaimer of Warranty. Unless required by applicable law or\r\n              agreed to in writing, Licensor provides the Work (and each\r\n              Contributor provides its Contributions) on an \"AS IS\" BASIS,\r\n              WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\r\n              implied, including, without limitation, any warranties or conditions\r\n              of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\r\n              PARTICULAR PURPOSE. You are solely responsible for determining the\r\n              appropriateness of using or redistributing the Work and assume any\r\n              risks associated with Your exercise of permissions under this License.\r\n        \r\n           8. Limitation of Liability. In no event and under no legal theory,\r\n              whether in tort (including negligence), contract, or otherwise,\r\n              unless required by applicable law (such as deliberate and grossly\r\n              negligent acts) or agreed to in writing, shall any Contributor be\r\n              liable to You for damages, including any direct, indirect, special,\r\n              incidental, or consequential damages of any character arising as a\r\n              result of this License or out of the use or inability to use the\r\n              Work (including but not limited to damages for loss of goodwill,\r\n              work stoppage, computer failure or malfunction, or any and all\r\n              other commercial damages or losses), even if such Contributor\r\n              has been advised of the possibility of such damages.\r\n        \r\n           9. Accepting Warranty or Additional Liability. While redistributing\r\n              the Work or Derivative Works thereof, You may choose to offer,\r\n              and charge a fee for, acceptance of support, warranty, indemnity,\r\n              or other liability obligations and/or rights consistent with this\r\n              License. However, in accepting such obligations, You may act only\r\n              on Your own behalf and on Your sole responsibility, not on behalf\r\n              of any other Contributor, and only if You agree to indemnify,\r\n              defend, and hold each Contributor harmless for any liability\r\n              incurred by, or claims asserted against, such Contributor by reason\r\n              of your accepting any such warranty or additional liability.\r\n        \r\n           END OF TERMS AND CONDITIONS\r\n        \r\n           APPENDIX: How to apply the Apache License to your work.\r\n        \r\n              To apply the Apache License to your work, attach the following\r\n              boilerplate notice, with the fields enclosed by brackets \"[]\"\r\n              replaced with your own identifying information. (Don't include\r\n              the brackets!)  The text should be enclosed in the appropriate\r\n              comment syntax for the file format. 
We also recommend that a\r\n              file or class name and description of purpose be included on the\r\n              same \"printed page\" as the copyright notice for easier\r\n              identification within third-party archives.\r\n        \r\n           Copyright [yyyy] [name of copyright owner]\r\n        \r\n           Licensed under the Apache License, Version 2.0 (the \"License\");\r\n           you may not use this file except in compliance with the License.\r\n           You may obtain a copy of the License at\r\n        \r\n               http://www.apache.org/licenses/LICENSE-2.0\r\n        \r\n           Unless required by applicable law or agreed to in writing, software\r\n           distributed under the License is distributed on an \"AS IS\" BASIS,\r\n           WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r\n           See the License for the specific language governing permissions and\r\n           limitations under the License.\r\n        ",
    "summary": "A client library for LoLLMs generate endpoint",
    "version": "1.4.7",
    "project_urls": {
        "Homepage": "https://github.com/ParisNeo/lollms_client"
    },
    "split_keywords": [],
    "urls": [
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "84b8dd80a9cbb763684bd17eac2108d307ccbf451dc5ee7e264c641b1530b8ef",
                "md5": "f8618c5fc089081fa94a1876eb3a4d71",
                "sha256": "e1d55cf66ef2b61fc47d84615356ea87c89c7c9018929be935d69e1d85f88e87"
            },
            "downloads": -1,
            "filename": "lollms_client-1.4.7-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "f8618c5fc089081fa94a1876eb3a4d71",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.7",
            "size": 400065,
            "upload_time": "2025-09-10T13:26:41",
            "upload_time_iso_8601": "2025-09-10T13:26:41.119402Z",
            "url": "https://files.pythonhosted.org/packages/84/b8/dd80a9cbb763684bd17eac2108d307ccbf451dc5ee7e264c641b1530b8ef/lollms_client-1.4.7-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "242530bc0b781dcad981f646e7b27410793f1d76e95520499838598795c5b772",
                "md5": "3fa2804a52803d6bc02a71969e5388dd",
                "sha256": "0f48f0f99ff7c92a2ffffa0ef1f18436a6f3e7bfb91938e0333667a1def119cf"
            },
            "downloads": -1,
            "filename": "lollms_client-1.4.7.tar.gz",
            "has_sig": false,
            "md5_digest": "3fa2804a52803d6bc02a71969e5388dd",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.7",
            "size": 369896,
            "upload_time": "2025-09-10T13:26:42",
            "upload_time_iso_8601": "2025-09-10T13:26:42.964364Z",
            "url": "https://files.pythonhosted.org/packages/24/25/30bc0b781dcad981f646e7b27410793f1d76e95520499838598795c5b772/lollms_client-1.4.7.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-09-10 13:26:42",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "ParisNeo",
    "github_project": "lollms_client",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": false,
    "requirements": [
        {
            "name": "requests",
            "specs": [
                [
                    ">=",
                    "2.25.1"
                ]
            ]
        },
        {
            "name": "ascii-colors",
            "specs": []
        },
        {
            "name": "pillow",
            "specs": []
        },
        {
            "name": "pipmaster",
            "specs": []
        },
        {
            "name": "pyyaml",
            "specs": []
        },
        {
            "name": "tiktoken",
            "specs": []
        },
        {
            "name": "pydantic",
            "specs": []
        },
        {
            "name": "numpy",
            "specs": []
        }
    ],
    "lcname": "lollms-client"
}
        