llmunchies


Name: llmunchies
Version: 0.2.0
Home page: None
Summary: A lightweight, model-agnostic context engine for LLMs. Feed your model tasty context.
Upload time: 2025-07-28 00:40:02
Maintainer: None
Docs URL: None
Author: LLMunchies Team
Requires Python: >=3.8
License: None
Keywords: llm, context, memory, prompt, gpt, claude, ai, middleware
Requirements: No requirements were recorded.
Travis-CI: No Travis.
Coveralls test coverage: No coveralls.
# 🧠 LLMunchies
*Feeding your model tasty context.*

LLMunchies is a lightweight, model-agnostic context engine for developers. It lets you store conversation history, format it for different models like GPT and Claude, and swap models on the fly without rewriting your prompt logic.

It is **NOT** a chatbot framework or a full AI agent platform. It's the simple, pluggable memory layer that sits between your app and the model API.
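
To make the idea concrete, here is a minimal, self-contained sketch of the kind of memory layer described above: store conversation turns once, then render them as an OpenAI-style message list or an Anthropic-style text transcript. The names used here (`ToyMemory`, `add`, `as_openai`, `as_anthropic_text`) are illustrative assumptions, not the actual llmunchies API.

```python
# Illustrative only: a minimal, self-contained memory layer of the kind
# LLMunchies describes. It does NOT use the llmunchies API; the names
# below are hypothetical.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Turn:
    role: str       # "system", "user", or "assistant"
    content: str


@dataclass
class ToyMemory:
    turns: List[Turn] = field(default_factory=list)

    def add(self, role: str, content: str) -> None:
        """Store one conversation turn."""
        self.turns.append(Turn(role, content))

    def as_openai(self) -> List[Dict[str, str]]:
        """Chat-completions style: a list of {"role", "content"} dicts."""
        return [{"role": t.role, "content": t.content} for t in self.turns]

    def as_anthropic_text(self) -> str:
        """Plain-text transcript in the legacy Human:/Assistant: style."""
        label = {"user": "Human", "assistant": "Assistant", "system": "System"}
        lines = [f"{label.get(t.role, t.role)}: {t.content}" for t in self.turns]
        return "\n\n".join(lines) + "\n\nAssistant:"


memory = ToyMemory()
memory.add("system", "You are a terse assistant.")
memory.add("user", "What does a context engine do?")

# The same stored history, rendered two different ways:
print(memory.as_openai())
print(memory.as_anthropic_text())
```

Swapping models then amounts to choosing a different rendering of the same stored history, which is the property the library advertises.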

## Mission
Let developers format, swap, and manage LLM context without prompt PTSD.

## Installation
LLMunchies is published on PyPI as of version 0.2.0, so you can `pip install llmunchies`; alternatively, place the `llmunchies` folder directly into your project and import the `MemoryManager`. You will also need `tiktoken` for V2 functionality.

```bash
pip install tiktoken
```
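
Since `tiktoken` is listed as a dependency for V2, here is a short sketch of how it is typically used to count tokens before deciding how much history fits in a context window. The texts and the budget mentioned in the comments are arbitrary examples; this is not code from llmunchies itself.

```python
# Illustrative use of tiktoken for token counting; not llmunchies code.
import tiktoken

# "cl100k_base" is the encoding used by recent OpenAI chat models.
encoding = tiktoken.get_encoding("cl100k_base")

history = [
    "You are a terse assistant.",
    "What does a context engine do?",
    "It stores and formats conversation history for a model.",
]

# Count tokens per message and in total.
token_counts = [len(encoding.encode(text)) for text in history]
print(token_counts, sum(token_counts))

# A memory layer can use counts like these to trim the oldest turns
# once the total exceeds a chosen budget (e.g. 4096 tokens).
```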

Raw data

```json
{
    "_id": null,
    "home_page": null,
    "name": "llmunchies",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.8",
    "maintainer_email": null,
    "keywords": "llm, context, memory, prompt, gpt, claude, ai, middleware",
    "author": "LLMunchies Team",
    "author_email": null,
    "download_url": "https://files.pythonhosted.org/packages/9e/aa/f287991725685bdcbf4b92895ef42b50eadf06a01c27dbd0ad5494478fa7/llmunchies-0.2.0.tar.gz",
    "platform": null,
    "description": "# \ud83e\udde0 LLMunchies\r\n*Feeding your model tasty context.*\r\n\r\nLLMunchies is a lightweight, model-agnostic context engine for developers. It lets you store conversation history, format it for different models like GPT and Claude, and swap models on the fly without rewriting your prompt logic.\r\n\r\nIt is **NOT** a chatbot framework or a full AI agent platform. It's the simple, pluggable memory layer that sits between your app and the model API.\r\n\r\n## Mission\r\nLet developers format, swap, and manage LLM context without prompt PTSD.\r\n\r\n## Installation\r\nCurrently, LLMunchies is not on PyPI. To use it, simply place the `llmunchies` folder into your project and import the `MemoryManager`. You will also need `tiktoken` for V2 functionality.\r\n\r\n```bash\r\npip install tiktoken\r\n",
    "bugtrack_url": null,
    "license": null,
    "summary": "A lightweight, model-agnostic context engine for LLMs. Feed your model tasty context.",
    "version": "0.2.0",
    "project_urls": {
        "Homepage": "https://github.com/kabiru-js/llmunchies.git",
        "Issues": "https://github.com/your-username/llmunchies/issues"
    },
    "split_keywords": [
        "llm",
        " context",
        " memory",
        " prompt",
        " gpt",
        " claude",
        " ai",
        " middleware"
    ],
    "urls": [
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "9e1bffe888d041bb064991c1e95a879b4b471f73b6a85c6d30913748184f006f",
                "md5": "988fa98bd44713e7f528fbfbf2cf31f8",
                "sha256": "1ef43a7cef76e24178b8e0975128a606a26f7f7d8f189cb32596c18f7f82ee38"
            },
            "downloads": -1,
            "filename": "llmunchies-0.2.0-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "988fa98bd44713e7f528fbfbf2cf31f8",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.8",
            "size": 6552,
            "upload_time": "2025-07-28T00:40:01",
            "upload_time_iso_8601": "2025-07-28T00:40:01.454530Z",
            "url": "https://files.pythonhosted.org/packages/9e/1b/ffe888d041bb064991c1e95a879b4b471f73b6a85c6d30913748184f006f/llmunchies-0.2.0-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "9eaaf287991725685bdcbf4b92895ef42b50eadf06a01c27dbd0ad5494478fa7",
                "md5": "7ce65c315dda2f30700274f3cbcef2a7",
                "sha256": "08cf594a215a2c30a2a931aa03dc7d88cf41fc15c6180242a350c144cc87ef8d"
            },
            "downloads": -1,
            "filename": "llmunchies-0.2.0.tar.gz",
            "has_sig": false,
            "md5_digest": "7ce65c315dda2f30700274f3cbcef2a7",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.8",
            "size": 5496,
            "upload_time": "2025-07-28T00:40:02",
            "upload_time_iso_8601": "2025-07-28T00:40:02.675141Z",
            "url": "https://files.pythonhosted.org/packages/9e/aa/f287991725685bdcbf4b92895ef42b50eadf06a01c27dbd0ad5494478fa7/llmunchies-0.2.0.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-07-28 00:40:02",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "kabiru-js",
    "github_project": "llmunchies",
    "github_not_found": true,
    "lcname": "llmunchies"
}
```