<!-- HEADER -->
<div align="center">
<img src="./assets/paige_mascot.png" alt="Logo" width="200" height="200">
<h1 align="center">Incontinuity</h1>
<p align="center">Context control for LLM inference on a per-token basis</p>
</div>
<!-- BADGES -->
<p align="center">
<a href="https://pypi.org/project/incontinuity/"><img src="https://img.shields.io/pypi/v/incontinuity?logo=pypi&logoColor=white"/></a>
</p>
<!-- DESCRIPTION -->
Incontinuity allows developers to modify the context prompt after each token is generated by an LLM. Instead of waiting for the entire output to finish, developers can inspect and control both the prompt and the output at any point during generation.
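Here's a minimal sketch of the idea. The `Generation` container, the `on_token` hook, the `model.next_token` call, and the end-of-sequence sentinel are all illustrative assumptions for this sketch, not the package's confirmed API:

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Generation:
    """Mutable state the controller is allowed to edit mid-generation."""
    prompt: str
    tokens: list[str] = field(default_factory=list)

def generate(model, controller, prompt: str, max_tokens: int = 256) -> str:
    gen = Generation(prompt=prompt)
    for _ in range(max_tokens):
        # Hypothetical model interface: returns the next token given
        # the current prompt plus everything generated so far.
        token = model.next_token(gen.prompt + "".join(gen.tokens))
        gen.tokens.append(token)
        # The controller runs after *every* token and may rewrite the
        # prompt or truncate the output before generation continues.
        controller.on_token(gen)
        if token == "<|end|>":  # assumed end-of-sequence sentinel
            break
    return "".join(gen.tokens)
```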
For example, suppose you want the LLM to emit typed XML. Instead of validating the output only after generation finishes, you can write a `ContextController` that backtracks and adds corrective context to the prompt the moment the LLM makes a mistake.
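A sketch of what such a controller could look like, reusing the hypothetical `Generation`/`on_token` interface from above (the real `ContextController` API may differ, and the allowed-tag schema here is made up):

```python
import re

class XmlTagController:
    """Illustrative controller: if the model opens a tag outside the
    allowed schema, backtrack past it and hint at the valid tags."""

    ALLOWED_TAGS = {"list", "item", "name"}

    def on_token(self, gen: Generation) -> None:
        text = "".join(gen.tokens)
        # Matches an opening tag the model just finished emitting.
        match = re.search(r"<(\w+)>$", text)
        if match and match.group(1) not in self.ALLOWED_TAGS:
            bad = match.group(0)
            # Backtrack: pop tokens until the offending tag is gone.
            while gen.tokens and bad in "".join(gen.tokens):
                gen.tokens.pop()
            # Steer future tokens by adding context to the prompt.
            gen.prompt += (
                f"\n(Only these XML tags are valid: {sorted(self.ALLOWED_TAGS)}.)"
            )
```

A production controller would also want to cap the number of backtracks so the model can't loop on the same mistake forever.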
Think of it like using an IDE. You're writing code and you make a typo, so you go back and fix it. Oh no, you used the wrong type, so you go back and fix that too. Incontinuity lets you do the same thing with LLM output.
---
I've been working on LLM agents since the earliest LLMs and I've always wanted this level of control. I haven't seen any packages for this, so I decided to create one. I hope you find it useful!