[CI](https://github.com/lioarce01/agentic-context-toolkit/actions/workflows/ci.yml) · [Docs](https://github.com/lioarce01/agentic-context-toolkit/actions/workflows/docs.yml) · [Downloads](https://pepy.tech/projects/acet)
# Agentic Context Engineering Toolkit
ACET is a research-oriented framework for Agentic Context Engineering. It captures, ranks, and reuses "context deltas" from LLM interactions so agents adapt without retraining, following the methodology described in the [Agentic Context Engineering Framework](https://www.arxiv.org/abs/2510.04618) paper.
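The loop is simple enough to sketch. The snippet below is illustrative only, with hypothetical names (`ContextDelta`, `DeltaStore`, `llm`) rather than the ACET API; it shows the shape of the cycle: retrieve the best-ranked deltas, run the task, reflect on the outcome, and curate the resulting insight for reuse.
```python
# Illustrative sketch of the context-delta loop described above.
# All names here (ContextDelta, DeltaStore, llm) are hypothetical,
# not the ACET API; they only show the shape of the loop.
from dataclasses import dataclass, field


@dataclass
class ContextDelta:
    insight: str          # a small, reusable lesson from one interaction
    score: float = 1.0    # ranking signal used at retrieval time


@dataclass
class DeltaStore:
    deltas: list[ContextDelta] = field(default_factory=list)

    def top_k(self, k: int = 5) -> list[ContextDelta]:
        return sorted(self.deltas, key=lambda d: d.score, reverse=True)[:k]


def llm(prompt: str) -> str:
    # Stand-in for any provider call (OpenAI, Anthropic, Ollama, ...).
    return f"(model output for: {prompt[:40]}...)"


def run_task(task: str, store: DeltaStore) -> str:
    # Retrieve: prepend the best-ranked deltas to the prompt.
    context = "\n".join(d.insight for d in store.top_k())
    answer = llm(f"{context}\n\nTask: {task}")
    # Reflect: extract a candidate lesson from this interaction.
    lesson = llm(f"State one reusable lesson from solving: {task}")
    # Curate: keep only non-duplicate lessons for future tasks.
    if lesson not in {d.insight for d in store.deltas}:
        store.deltas.append(ContextDelta(insight=lesson))
    return answer
```
In ACET the same roles are played by the retrieval & ranking, reflection, and curation pipelines listed under Features below.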
## Features
- LLM provider agnostic (OpenAI, Anthropic, LiteLLM, Ollama, custom wrappers)
- Storage backend agnostic (memory, SQLite, Postgres/pgvector, extensible interfaces)
- Token budget management, retrieval & ranking, reflection, and curation pipelines (budget packing is sketched below)
- Built for Python 3.12+ with strict typing, async workflows, and modern tooling
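As a concrete illustration of the token-budget item above, here is a minimal sketch (assuming already-ranked deltas; `pack_deltas` is a hypothetical helper, not the ACET API) that fills a prompt prefix up to a fixed token budget using `tiktoken`, one of the project's dependencies:
```python
# Minimal sketch of budget-aware delta packing; pack_deltas is hypothetical,
# not part of the ACET API. Deltas are assumed to arrive highest-ranked first.
import tiktoken


def pack_deltas(ranked_deltas: list[str], budget_tokens: int = 512) -> str:
    enc = tiktoken.get_encoding("cl100k_base")
    picked: list[str] = []
    used = 0
    for delta in ranked_deltas:
        cost = len(enc.encode(delta))
        if used + cost > budget_tokens:  # stop once the budget would overflow
            break
        picked.append(delta)
        used += cost
    return "\n".join(picked)
```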
## Getting Started
```bash
python -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
pip install -r requirements.txt
```
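Installing from `requirements.txt` sets up the full development environment. If you only need the library, it is also published on PyPI as `acet` (requires Python 3.12+):
```bash
pip install acet
```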
## Project Layout
```
.
acet/          # Library source (packages added per phase)
benchmarks/    # Performance and benchmark suites
docs/          # Documentation site sources
examples/      # Usage examples and sample apps
tests/         # Unit, integration, and benchmark tests
```
## Development Workflow
1. Create/activate the local virtual environment.
2. Install dependencies with `pip install -r requirements.txt`.
3. Run format and lint checks: `black .` and `ruff check`.
4. Run type checks: `mypy --strict .`.
5. Run tests: `pytest --cov=acet` (the full sequence is consolidated below).
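For convenience, the same sequence as a single shell run (POSIX shell assumed; use the Windows activation command noted above otherwise):
```bash
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
black .
ruff check
mypy --strict .
pytest --cov=acet
```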
## Performance Snapshot
- **Delta retrieval (250 active deltas)**: ~2 ms mean latency (`tests/benchmarks/test_delta_retrieval.py`)
- **SQLite save/query (300 staged deltas)**: ~23 ms mean latency (`tests/benchmarks/test_storage_throughput.py`)
- **Curator dedup (300 proposed insights, 30% duplicates)**: ~140 ms mean latency (`tests/benchmarks/test_curator_throughput.py`)
All benchmarks are reproducible via the CLI harnesses under `benchmarks/`. For example:
```bash
python benchmarks/delta_retrieval.py --iterations 30 --plot benchmarks/artifacts/delta_latency.png
python benchmarks/storage_throughput.py --backend all --iterations 30 --plot benchmarks/artifacts/storage_latency.png
python benchmarks/curator_throughput.py --proposals 300 --duplicate-ratio 0.3 --iterations 20 --plot benchmarks/artifacts/curator_latency.png
```
Adjust the parameters or swap in your production embeddings/backends to profile your deployment.