
# 🗯️ Chatnificent

### LLM chat app framework
### Minimally complete. Maximally hackable.

Build production-ready, full-stack chat applications in minutes. Customize everything in hours.


Chatnificent is a Python framework built on [Plotly's Dash](https://dash.plotly.com/) designed to get your LLM chat applications up and running instantly, while providing a robust, decoupled architecture for unlimited customization.

Stop wrestling with UI components, state management, and backend integrations. Start building magnificent chat apps.


[![PyPI version](https://img.shields.io/pypi/v/chatnificent.svg)](https://pypi.python.org/pypi/chatnificent) [![PyPI Downloads](https://static.pepy.tech/badge/chatnificent)](http://pepy.tech/project/chatnificent) [![DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/eliasdabbas/chatnificent)

## The Ethos

Frameworks should get out of your way.

  * **Minimally Complete:** Out of the box, `Chatnificent` provides a fully functional, stateful, multi-user chat application with sensible defaults.
  * **Maximally Hackable:** Every core pillar—the UI, the LLM provider, the database, the authentication, the RAG pipeline, and the core orchestration—is swappable. Customize or replace any part without fighting the framework.

## Features

  * **LLM Agnostic:** Built-in support for OpenAI, Anthropic, Gemini, Ollama, OpenRouter, DeepSeek, and any other LLM API.
  * **Flexible UI:** Default Bootstrap layout, with built-in Mantine and Minimal (pure HTML) layouts. Easily customizable with any Dash components.
  * **Pluggable Storage:** InMemory, File-system, and SQLite included. Easily extendable to Redis, Postgres, etc.
  * **Agentic Engine:** The core engine manages multi-turn conversations and standardized tool calling across providers.
  * **Auth Ready:** Abstracted authentication layer for easy integration. No-login anonymous user auth enabled by default.
  * **RTL Support:** Automatic detection and rendering of Right-to-Left languages.
  * **Dash Native:** Leverage the full power of Plotly's Dash to integrate complex data visualizations and analytics.

## Installation

To get started quickly with the default UI (Bootstrap) and the default LLM provider (OpenAI):

```bash
pip install "chatnificent[default]"

export OPENAI_API_KEY="YOUR_API_KEY"
```

For a minimal installation (no UI libraries or LLM SDKs included):

```bash
pip install chatnificent
```



## Quickstart: Hello World (3 Lines)

This is a complete, working chat application.

Create a file `app.py`:

```python
from chatnificent import Chatnificent

app = Chatnificent()

if __name__ == "__main__":
    app.run(debug=True)
```

Run it:

```bash
python app.py
```

Open your browser to [`http://127.0.0.1:8050`](http://127.0.0.1:8050). That's it. You have a fully functional chat UI with conversation history, mobile responsiveness, and URL-based session management.
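
For production, skip the built-in development server. Assuming Chatnificent behaves like a standard Dash app and exposes the underlying Flask instance as `app.server` (an assumption, not documented above), you can hand that object to any WSGI server, e.g. `gunicorn app:server`:

```python
from chatnificent import Chatnificent

app = Chatnificent()

# Assumption: like a plain Dash app, Chatnificent exposes the Flask
# instance as `app.server`; verify this attribute in your version.
server = app.server  # WSGI entry point, e.g. `gunicorn app:server`

if __name__ == "__main__":
    # Development only; use a WSGI server in production.
    app.run(debug=True)
```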

## The Pillars of Hackability

Chatnificent's architecture is built around extensible Pillars. Every major function is handled by a dedicated component adhering to a strict interface.

| Pillar | Description | Defaults | Included Implementations |
| :--- | :--- | :--- | :--- |
| **`LLM`** | The brain (API calls, parsing). | `OpenAI` (or `Echo`) | OpenAI, Anthropic, Gemini, OpenRouter, DeepSeek, Ollama, Echo |
| **`Layout`** | The look and feel (UI components). | `Bootstrap` (or `Minimal`) | Bootstrap, Mantine, Minimal (HTML) |
| **`Store`** | The memory (Persistence). | `InMemory` | InMemory, File, SQLite |
| **`Auth`** | The gatekeeper (User identification). | `Anonymous` | Anonymous, SingleUser |
| **`Engine`** | The orchestrator (Request lifecycle). | `Synchronous` | Synchronous |
| **`Tools`** | Tool/function calling capabilities. | `NoTool` | PythonTool, NoTool |
| **`Retrieval`** | RAG knowledge retrieval. | `NoRetrieval` | NoRetrieval |
| **`URL`** | URL parsing and routing. | `PathBased` | PathBased, QueryParams |

You customize the app by injecting the implementations you need during initialization:

```python
from chatnificent import Chatnificent
import chatnificent as chat

app = Chatnificent(
    llm=chat.llm.Anthropic(),
    store=chat.store.SQLite(db_path="conversations.db"),
    layout=chat.layout.Mantine()
)
```

## Progressive Power: Swapping the Pillars

Let's evolve the "Hello World" example by swapping pillars.

### Level 1: Swapping the LLM 🧠

Want to use Anthropic's Claude 3.5 Sonnet? Just swap the `llm` pillar.

*(Requires `pip install anthropic` and setting `ANTHROPIC_API_KEY`)*

```python
from chatnificent import Chatnificent
import chatnificent as chat


app = Chatnificent(
    llm=chat.llm.Anthropic(default_model="claude-3-5-sonnet-20240620")
)

# Or try Gemini: app = Chatnificent(llm=chat.llm.Gemini())
# Or local Ollama: app = Chatnificent(llm=chat.llm.Ollama(default_model="llama3.1"))
```

Chatnificent handles the translation of message formats and tool-calling protocols automatically.
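
Because the provider is just a constructor argument, a few lines of glue (not part of chatnificent; the class names below follow the pillars table above, so confirm they match your installed version) let the same app switch backends from an environment variable:

```python
import os

import chatnificent as chat
from chatnificent import Chatnificent

# Provider registry. Each class still needs its SDK installed and the
# corresponding API key set, as noted above.
PROVIDERS = {
    "openai": chat.llm.OpenAI,
    "anthropic": chat.llm.Anthropic,
    "gemini": chat.llm.Gemini,
    "ollama": chat.llm.Ollama,
}

provider_name = os.environ.get("CHAT_PROVIDER", "openai")
app = Chatnificent(llm=PROVIDERS[provider_name]())

if __name__ == "__main__":
    app.run(debug=True)
```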

### Level 2: Adding Persistent Storage

The default `InMemory` store is ephemeral. Let's use `SQLite` for persistence.

```python
from chatnificent import Chatnificent
import chatnificent as chat

app = Chatnificent(
    store=chat.store.SQLite(db_path="conversations.db")
)
# Or use the filesystem: store=chat.store.File(base_dir="./chat_data")
```

Conversations are now persisted across server restarts, and the sidebar automatically loads your history.

### Level 3: Changing the Look and Feel 🎨

Don't want Bootstrap? Let's try the Mantine layout.

*(Requires `pip install dash-mantine-components`)*

```python
from chatnificent import Chatnificent
import chatnificent as chat

app = Chatnificent(layout=chat.layout.Mantine())

# Or use the barebones HTML layout: layout=chat.layout.Minimal()
```

Want a completely custom design? Implement the `layout.Layout` abstract base class. The framework ensures your custom layout integrates seamlessly, provided you include the required component IDs (e.g., `input_textarea`, `messages_container`, etc.).
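
As a rough sketch only (the `build_layout` method name and the `submit_button` ID are hypothetical; check `chatnificent.layout.Layout` for the actual abstract methods and required IDs), a custom layout boils down to returning a Dash component tree that contains the expected IDs:

```python
from dash import dcc, html

import chatnificent as chat
from chatnificent import Chatnificent


class CardLayout(chat.layout.Layout):
    # Hypothetical method name: the Layout base class defines the real
    # hooks to implement. The key idea is returning plain Dash components
    # while keeping the component IDs the framework's callbacks expect.
    def build_layout(self):
        return html.Div(
            [
                html.Div(id="messages_container"),        # required ID (see above)
                dcc.Textarea(id="input_textarea"),        # required ID (see above)
                html.Button("Send", id="submit_button"),  # ID is hypothetical
            ]
        )


# app = Chatnificent(layout=CardLayout())
```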

### Level 4: Custom Authentication

The default `Anonymous` auth isolates users by assigning each a random user ID. You can easily implement custom logic.

```python
from chatnificent import Chatnificent, auth

class HeaderAuth(auth.Auth):
    def get_current_user_id(self, **kwargs) -> str:
        from flask import request
        # Identify user based on a header (e.g., provided by an auth proxy)
        return request.headers.get("X-User-Id", "unknown_user")

app = Chatnificent(auth=HeaderAuth())
```

### Level 5: The Engine (Advanced Orchestration)

The `Engine` orchestrates the entire request lifecycle: resolving the conversation, RAG retrieval, the agentic loop (Tools + LLM calls), and persistence.

The default `Synchronous` engine provides "hooks" (empty methods called at specific points) and "seams" (core logic methods) that you can override to deeply customize behavior without rewriting the core logic.

```python
from chatnificent import Chatnificent
import chatnificent as chat
from typing import Any, Optional

# Create a custom engine by inheriting from the default
class CustomEngine(chat.engine.Synchronous):

    # 1. Override a HOOK to add monitoring/logging
    def _after_llm_call(self, llm_response: Any) -> None:
        # Example: Extract token usage if the LLM response object has a 'usage' attribute
        tokens = getattr(llm_response, 'usage', 'N/A')
        print(f"[MONITORING] LLM call complete. Tokens: {tokens}")

    # 2. Override a SEAM to modify core logic (e.g., prompt engineering)
    def _prepare_llm_payload(self, conversation, retrieval_context: Optional[str]):
        # Get the default payload (which already includes the context if present)
        payload = super()._prepare_llm_payload(conversation, retrieval_context)

        # Inject a custom system prompt if none exists
        if not any(m['role'] == 'system' for m in payload):
            payload.insert(0, {"role": "system", "content": "Be brief and professional."})
        return payload


# Initialize the app, passing the engine instance.
# Chatnificent's constructor will automatically bind the app reference to the engine.
app = Chatnificent(engine=CustomEngine())
```

## Architecture Overview

How the pillars work together during a request:

1.  **User Input**: The user submits a message via the `Layout`.
2.  **Callback Trigger**: A Dash callback delegates the input to the `Engine`.
3.  **Context Resolution**: The `Engine` uses `Auth`, `URL`, and `Store` to identify the user and load the conversation history.
4.  **Agentic Loop**:
      * The `Engine` calls `Retrieval` to gather context (RAG).
      * The `Engine` sends the history and context to the `LLM`.
      * If the `LLM` requests a tool call, the `Engine` executes it via `Tools` and loops back.
      * If the `LLM` returns a final response, the loop exits.
5.  **Persistence**: The `Engine` saves the updated conversation via the `Store`.
6.  **Rendering**: The `Engine` formats the messages using the `Layout` and updates the client UI.
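
Expressed as heavily simplified pseudocode (only `get_current_user_id`, `load_conversation`, and `save_conversation` appear elsewhere in this README; every other pillar method below is an invented stand-in, not chatnificent's actual API):

```python
# Illustrative pseudocode of the lifecycle above, not the real Synchronous
# engine. Pillar methods not documented in this README are hypothetical.
def handle_message(app, pathname: str, user_message: dict):
    user_id = app.auth.get_current_user_id()                       # Auth (documented above)
    convo_id = app.url.parse(pathname)                             # URL pillar (hypothetical)
    conversation = app.store.load_conversation(user_id, convo_id)  # Store (documented above)

    context = app.retrieval.retrieve(user_message)                 # Retrieval / RAG (hypothetical)
    while True:                                                    # the agentic loop
        response = app.llm.generate(conversation, context)         # LLM call (hypothetical)
        tool_calls = app.llm.get_tool_calls(response)              # hypothetical
        if not tool_calls:
            break                                                  # final answer: exit the loop
        for call in tool_calls:
            conversation.append(app.tools.execute(call))           # Tools (hypothetical)

    app.store.save_conversation(user_id, conversation)             # Store (documented above)
    return app.layout.render_messages(conversation)                # Layout (hypothetical)
```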

## Building Your Own Pillars

The ultimate hackability comes from implementing your own pillars. Want to use MongoDB? Just implement the `store.Store` interface.

### Example: Custom Storage Implementation

```python
from chatnificent import Chatnificent
import chatnificent as chat
from typing import Optional, List

# NOTE: the import path for the Conversation model is an assumption here;
# check where your chatnificent version defines it.
from chatnificent.models import Conversation

class MongoDBStore(chat.store.Store):
    def __init__(self, connection_string):
        # Initialize the MongoDB client here (e.g., pymongo.MongoClient)...
        print(f"Connecting to MongoDB at {connection_string}...")

    def load_conversation(self, user_id: str, convo_id: str) -> Optional[Conversation]:
        # Implement loading logic...
        return None

    # Implement the other required methods...
    def save_conversation(self, user_id: str, conversation: Conversation):
        pass
    def list_conversations(self, user_id: str) -> List[str]:
        return []
    def get_next_conversation_id(self, user_id: str) -> str:
        return "1"

# Use your custom implementation
# app = Chatnificent(store=MongoDBStore(connection_string="mongodb://..."))
```

            
