| Field | Value |
|-------|-------|
| Name | cogniweave |
| Version | 0.1.7 |
| Summary | Experimental agent framework built on top of LangChain |
| Upload time | 2025-07-14 06:49:15 |
| Requires Python | <4,>=3.11 |
| License | Apache License 2.0 |
| Keywords | agent |
| Requirements | none recorded |
# CogniWeave
CogniWeave is an experimental agent framework built on top of [LangChain](https://github.com/langchain-ai/langchain). The repository showcases how to combine short‑term memory, persistent chat history and a long‑term vector store with end‑of‑conversation detection. The code base mainly serves as a set of runnable components used by the demonstration scripts and tests.
<p align="left">
<img src="https://github.com/Inexplicable-YL/CogniWeave/blob/main/docs/flow.png" width="600px">
</p>
## Features
- **Extensible chat agent** – defaults to OpenAI models but can be switched to other providers via environment variables.
- **Persistent chat history** – messages are stored in a SQLite database for later analysis and memory generation.
- **Vectorised long‑term memory** – FAISS indexes store tagged long‑term memory and allow retrieval as the conversation evolves.
- **Automatic memory creation** – short and long‑term memories are generated when a session ends and merged into the history.
- **Interactive CLI** – run `python -m cogniweave demo` to try the full pipeline from the terminal.
Additional helper functions for building the pipeline are available in the `cogniweave.quickstart` module.
## Installation
Install CogniWeave from PyPI:
```bash
pip install cogniweave
```
## Environment variables
The agent relies on several environment variables. Reasonable defaults are used when a variable is not provided.
| Variable | Purpose | Default |
|----------|---------|---------|
| `CHAT_MODEL` | Chat model in the form `provider/model` | `openai/gpt-4.1` |
| `AGENT_MODEL` | Agent model in the form `provider/model` | `openai/gpt-4.1` |
| `EMBEDDINGS_MODEL` | Embedding model in the form `provider/model` | `openai/text-embedding-ada-002` |
| `SHORT_MEMORY_MODEL` | Model used to summarise recent messages | `openai/gpt-4.1-mini` |
| `LONG_MEMORY_MODEL` | Model used for long‑term memory extraction | `openai/o3` |
| `END_DETECTOR_MODEL` | Model that decides when a conversation is over | `openai/gpt-4.1-mini` |
Model providers usually require credentials such as `*_API_KEY` and `*_API_BASE`. These can be supplied via a `.env` file in the project root.
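As an illustration, a minimal `.env` for the default OpenAI provider might look like the sketch below; the exact variable names (`OPENAI_API_KEY`, `OPENAI_API_BASE`) follow the usual OpenAI client conventions and are an assumption here, not taken from this README.
```bash
# Hypothetical .env for the default OpenAI provider.
OPENAI_API_KEY="sk-..."                      # credential (*_API_KEY pattern)
OPENAI_API_BASE="https://api.openai.com/v1"  # optional custom endpoint (*_API_BASE pattern)
CHAT_MODEL="openai/gpt-4.1"                  # optional: override the default chat model
```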
Environment variables are **case-insensitive** and override any value defined in
`config.toml`. All settings can be provided entirely through environment
variables. Nested options use `__` to separate levels, for example:
```bash
PROMPT_VALUES__CHAT__EN="You are a helpful assistant."
```
is equivalent to the configuration file section:
```toml
[prompt_values.chat]
en = "You are a helpful assistant."
```
## Configuration file
In addition to environment variables, settings can be defined in a `config.toml` (or
JSON/YAML) file. The CLI automatically loads this file when present, or you can
explicitly provide a path with `--config-file` or by calling
`cogniweave.init_config(_config_file=...)` in your own code.
```toml
index_name = "demo"
folder_path = "./.cache"
language = "en"
chat_model = "openai/gpt-4.1"
chat_temperature = 0.8
```
Any fields matching the keys of the `cogniweave.config.Config` model are
accepted, including nested `prompt_values` sections for overriding system
prompts.
All `prompt_values` strings support the f-string placeholder `{default}`. The
placeholder is replaced with CogniWeave's built-in prompt so you can extend it
easily:
```toml
[prompt_values.end_detector]
en = "The agent's name is CogniWeave. {default}"
```
which becomes `"The agent's name is CogniWeave. You are a message completeness detector. ..."`.
If you supply a configuration file or define nested options via environment
variables, make sure to call `cogniweave.init_config()` before invoking
`build_pipeline()` so the settings take effect.
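For example, a minimal sketch that loads `config.toml` explicitly via the `_config_file` argument mentioned above and then builds the pipeline:
```python
from cogniweave import init_config, build_pipeline

# Apply the file-based settings before the pipeline is constructed.
init_config(_config_file="config.toml")
pipeline = build_pipeline(index_name="demo")
```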
## Multi-language support
The built-in prompt templates only include Chinese and English text. To use
another language, define the prompt in the `prompt_values` section and set the
`language` key to match. For Japanese using a TOML config:
```toml
language = "jp"
[prompt_values.chat]
jp = "あなたは役に立つアシスタントです。"
```
When a configuration file or environment variables include nested values like
this, remember to call `cogniweave.init_config()` before creating the
pipeline so the custom prompts are applied.
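The same Japanese override can also be supplied entirely through environment variables, using the `__` nesting convention described earlier (a sketch):
```bash
LANGUAGE="jp"
PROMPT_VALUES__CHAT__JP="あなたは役に立つアシスタントです。"
```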
## CLI usage
After installing the dependencies (see `pyproject.toml`), start the interactive demo with:
```bash
python -m cogniweave demo
```
You can specify a session identifier to keep conversations separate:
```bash
python -m cogniweave demo my_session
```
Additional options control where history and vector data are stored:
```bash
python -m cogniweave demo my_session --index my_index --folder /tmp/cache
```
You can load custom configuration from a file using the `--config-file` argument:
```bash
python -m cogniweave demo my_session --config-file config.toml
```
The `--index` argument sets the file names for the SQLite database and FAISS index, while `--folder` chooses the directory used to store them. The optional `--config-file` points to a TOML, JSON, or YAML file containing all the settings needed for the demo.
## Quick build
The `quickstart.py` module assembles the entire pipeline for you:
```python
from cogniweave import init_config, build_pipeline

init_config()
pipeline = build_pipeline(index_name="demo")
```
The pipeline object exposes a LangChain `Runnable` that contains the agent, history store and vector store ready to use.
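Because the result is a standard LangChain `Runnable`, it can also be invoked once per message rather than streamed; a minimal sketch (the input and `session_id` keys mirror the streaming example in the manual build section below):
```python
# One-shot invocation; the session_id keeps conversations separate.
result = pipeline.invoke(
    {"input": "Hello"},
    config={"configurable": {"session_id": "demo"}},
)
print(result)
```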
## Manual build
For full control you can construct the components step by step.
1. **Create embeddings**
   ```python
   from cogniweave.quickstart import create_embeddings

   embeddings = create_embeddings()
   ```
2. **Create history store**
   ```python
   from cogniweave.quickstart import create_history_store

   history_store = create_history_store(index_name="demo")
   ```
3. **Create vector store**
   ```python
   from cogniweave.quickstart import create_vector_store

   vector_store = create_vector_store(embeddings, index_name="demo")
   ```
4. **Create chat agent**
   ```python
   from cogniweave.quickstart import create_agent

   agent = create_agent()
   ```
5. **Wire up memory and end detection**
   ```python
   from cogniweave.core.runnables.memory_maker import RunnableWithMemoryMaker
   from cogniweave.core.runnables.end_detector import RunnableWithEndDetector
   from cogniweave.core.runnables.history_store import RunnableWithHistoryStore
   from cogniweave.core.end_detector import EndDetector
   from cogniweave.core.time_splitter import TimeSplitter

   # Successively wrap the agent: memory creation, then end detection,
   # then history recording.
   pipeline = RunnableWithMemoryMaker(
       agent,
       history_store=history_store,
       vector_store=vector_store,
       input_messages_key="input",
       history_messages_key="history",
       short_memory_key="short_memory",
       long_memory_key="long_memory",
   )
   pipeline = RunnableWithEndDetector(
       pipeline,
       end_detector=EndDetector(),
       default={"output": []},
       history_messages_key="history",
   )
   pipeline = RunnableWithHistoryStore(
       pipeline,
       history_store=history_store,
       time_splitter=TimeSplitter(),
       input_messages_key="input",
       history_messages_key="history",
   )
   ```
6. **Stream messages**
   ```python
   for chunk in pipeline.stream({"input": "Hello"}, config={"configurable": {"session_id": "demo"}}):
       print(chunk, end="")
   ```
With these steps you can tailor the pipeline to your own requirements.
## Thanks
- **[LangChain](https://github.com/langchain-ai/langchain)**: our project is built entirely on LangChain.
- **[NoneBot](https://github.com/nonebot/nonebot2)**: the configuration extraction module in our project was developed with reference to parts of the NoneBot codebase.
## Raw data

```json
{
  "_id": null,
  "home_page": null,
  "name": "cogniweave",
  "maintainer": null,
  "docs_url": null,
  "requires_python": "<4,>=3.11",
  "maintainer_email": null,
  "keywords": "agent",
  "author": null,
  "author_email": "Kotodama <2682064633@qq.com>",
  "download_url": "https://files.pythonhosted.org/packages/13/d5/c348fd593e1b11843b6b1616aa821024d92a3666a9dda9b34e5753c0c966/cogniweave-0.1.7.tar.gz",
  "platform": null,
"description": "# CogniWeave\n\nCogniWeave is an experimental agent framework built on top of [LangChain](https://github.com/langchain-ai/langchain). The repository showcases how to combine short\u2011term memory, persistent chat history and a long\u2011term vector store with end\u2011of\u2011conversation detection. The code base mainly serves as a set of runnable components used by the demonstration scripts and tests.\n\n<p align=\"left\">\n<img src=\"https://github.com/Inexplicable-YL/CogniWeave/blob/main/docs/flow.png\" width=\"600px\">\n</p>\n\n## Features\n\n- **Extensible chat agent** \u2013 defaults to OpenAI models but can be switched to other providers via environment variables.\n- **Persistent chat history** \u2013 messages are stored in a SQLite database for later analysis and memory generation.\n- **Vectorised long\u2011term memory** \u2013 FAISS indexes store tagged long\u2011term memory and allow retrieval as the conversation evolves.\n- **Automatic memory creation** \u2013 short and long\u2011term memories are generated when a session ends and merged into the history.\n- **Interactive CLI** \u2013 run `python -m cogniweave demo` to try the full pipeline from the terminal.\n\nAdditional helper functions for building the pipeline are available in the `cogniweave.quickstart` module.\n\n## Installation\n\nInstall CogniWeave from PyPI:\n\n```bash\npip install cogniweave\n```\n\n## Environment variables\n\nThe agent relies on several environment variables. Reasonable defaults are used when a variable is not provided.\n\n| Variable | Purpose | Default |\n|----------|---------|---------|\n| `CHAT_MODEL` | Chat model in the form `provider/model` | `openai/gpt-4.1` |\n| `AGENT_MODEL` | Agent model in the form `provider/model` | `openai/gpt-4.1` |\n| `EMBEDDINGS_MODEL` | Embedding model in the form `provider/model` | `openai/text-embedding-ada-002` |\n| `SHORT_MEMORY_MODEL` | Model used to summarise recent messages | `openai/gpt-4.1-mini` |\n| `LONG_MEMORY_MODEL` | Model used for long\u2011term memory extraction | `openai/o3` |\n| `END_DETECTOR_MODEL` | Model that decides when a conversation is over | `openai/gpt-4.1-mini` |\n\nModel providers usually require credentials such as `*_API_KEY` and `*_API_BASE`. These can be supplied via a `.env` file in the project root.\n\nEnvironment variables are **case-insensitive** and override any value defined in\n`config.toml`. All settings can be provided entirely through environment\nvariables. Nested options use `__` to separate levels, for example:\n\n```bash\nPROMPT_VALUES__CHAT__EN=\"You are a helpful assistant.\"\n```\n\nis equivalent to the configuration file section:\n\n```toml\n[prompt_values.chat]\nen = \"You are a helpful assistant.\"\n```\n\n## Configuration file\n\nIn addition to environment variables, settings can be defined in a `config.toml` (or\nJSON/YAML) file. The CLI automatically loads this file when present, or you can\nexplicitly provide a path with `--config-file` or by calling\n`cogniweave.init_config(_config_file=...)` in your own code.\n\n```toml\nindex_name = \"demo\"\nfolder_path = \"./.cache\"\nlanguage = \"en\"\nchat_model = \"openai/gpt-4.1\"\nchat_temperature = 0.8\n```\n\nAny fields matching the keys of the :class:`cogniweave.config.Config` model are\naccepted, including nested `prompt_values` sections for overriding system\nprompts.\n\nAll `prompt_values` strings support the f-string placeholder `{default}`. 
The\nplaceholder is replaced with CogniWeave's built-in prompt so you can extend it\neasily:\n\n```toml\n[prompt_values.end_detector]\nen = \"The agent's name is CogniWeave. {default}\"\n```\n\nwhich becomes `\"The agent's name is CogniWeave. You are a \"message completeness detector. ...\"`.\n\nIf you supply a configuration file or define nested options via environment\nvariables, make sure to call `cogniweave.init_config()` before invoking\n`build_pipeline()` so the settings take effect.\n\n## Multi-language support\n\nThe built-in prompt templates only include Chinese and English text. To use\nanother language, define the prompt in the `prompt_values` section and set the\n`language` key to match. For Japanese using a TOML config:\n\n```toml\nlanguage = \"jp\"\n\n[prompt_values.chat]\njp = \"\u3042\u306a\u305f\u306f\u5f79\u306b\u7acb\u3064\u30a2\u30b7\u30b9\u30bf\u30f3\u30c8\u3067\u3059\u3002\"\n```\n\nWhen a configuration file or environment variables include nested values like\nthis, remember to call `cogniweave.init_config()` before creating the\npipeline so the custom prompts are applied.\n\n## CLI usage\n\nAfter installing the dependencies (see `pyproject.toml`), start the interactive demo with:\n\n```bash\npython -m cogniweave demo\n```\n\nYou can specify a session identifier to keep conversations separate:\n\n```bash\npython -m cogniweave demo my_session\n```\n\nAdditional options control where history and vector data are stored:\n\n```bash\npython -m cogniweave demo my_session --index my_index --folder /tmp/cache\n```\n\nYou can load custom configuration from a file using the --config-file argument:\n\n```bash\npython -m cogniweave demo my_session --config-file config.toml\n```\n\nThe `--index` argument sets the file names for the SQLite database and FAISS index, while `--folder` chooses the directory used to store them. The optional `--config-file` points to a toml, json or yaml file that contains all the necessary settings for the demo.\n\n## Quick build\n\nThe `quickstart.py` module assembles the entire pipeline for you:\n\n```python\nfrom cogniweave import init_config, build_pipeline\n\ninit_config()\npipeline = build_pipeline(index_name=\"demo\")\n```\n\nThe pipeline object exposes a LangChain `Runnable` that contains the agent, history store and vector store ready to use.\n\n## Manual build\n\nFor full control you can construct the components step by step.\n\n1. **Create embeddings**\n\n ```python\n from cogniweave.quickstart import create_embeddings\n\n embeddings = create_embeddings()\n ```\n\n2. **Create history store**\n\n ```python\n from cogniweave.quickstart import create_history_store\n\n history_store = create_history_store(index_name=\"demo\")\n ```\n\n3. **Create vector store**\n\n ```python\n from cogniweave.quickstart import create_vector_store\n\n vector_store = create_vector_store(embeddings, index_name=\"demo\")\n ```\n\n4. **Create chat agent**\n\n ```python\n from cogniweave.quickstart import create_agent\n\n agent = create_agent()\n ```\n\n5. 
**Wire up memory and end detection**\n\n ```python\n from cogniweave.core.runnables.memory_maker import RunnableWithMemoryMaker\n from cogniweave.core.runnables.end_detector import RunnableWithEndDetector\n from cogniweave.core.runnables.history_store import RunnableWithHistoryStore\n from cogniweave.core.end_detector import EndDetector\n from cogniweave.core.time_splitter import TimeSplitter\n\n pipeline = RunnableWithMemoryMaker(\n agent,\n history_store=history_store,\n vector_store=vector_store,\n input_messages_key=\"input\",\n history_messages_key=\"history\",\n short_memory_key=\"short_memory\",\n long_memory_key=\"long_memory\",\n )\n pipeline = RunnableWithEndDetector(\n pipeline,\n end_detector=EndDetector(),\n default={\"output\": []},\n history_messages_key=\"history\",\n )\n pipeline = RunnableWithHistoryStore(\n pipeline,\n history_store=history_store,\n time_splitter=TimeSplitter(),\n input_messages_key=\"input\",\n history_messages_key=\"history\",\n )\n ```\n\n6. **Stream messages**\n\n ```python\n for chunk in pipeline.stream({\"input\": \"Hello\"}, config={\"configurable\": {\"session_id\": \"demo\"}}):\n print(chunk, end=\"\")\n ```\n\nWith these steps you can tailor the pipeline to your own requirements.\n\n## Thanks\n- **[LangChain](https://github.com/langchain-ai/langchain)** : Our project is developed entirely based on Langchain.\n- **[NoneBot](https://github.com/nonebot/nonebot2)** : The configuration extraction module in our project was developed with reference to certain parts of the NoneBot codebase.\n",
"bugtrack_url": null,
"license": "Apache License 2.0",
"summary": "Experimental agent framework built on top of LangChain",
"version": "0.1.7",
"project_urls": {
"Repository": "https://github.com/Inexplicable-YL/CogniWeaveAgent"
},
"split_keywords": [
"agent"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "374f135b2e62175b5fdec988c591182934f298829ae5524027b24b87c0811ed3",
"md5": "6a5246ef0aac60e6225d29152111f1ed",
"sha256": "a86b5a8815b33ef2596d125455e8c79ebe47003331ca3fc09434971cf51f40cf"
},
"downloads": -1,
"filename": "cogniweave-0.1.7-py3-none-any.whl",
"has_sig": false,
"md5_digest": "6a5246ef0aac60e6225d29152111f1ed",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": "<4,>=3.11",
"size": 93954,
"upload_time": "2025-07-14T06:49:13",
"upload_time_iso_8601": "2025-07-14T06:49:13.540458Z",
"url": "https://files.pythonhosted.org/packages/37/4f/135b2e62175b5fdec988c591182934f298829ae5524027b24b87c0811ed3/cogniweave-0.1.7-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "13d5c348fd593e1b11843b6b1616aa821024d92a3666a9dda9b34e5753c0c966",
"md5": "eb89754abc9fb3c652163060a6a68ab6",
"sha256": "866709d39383e6459eaff3d62e8249acaf6253d416706213f0dd6bffff1d8d1c"
},
"downloads": -1,
"filename": "cogniweave-0.1.7.tar.gz",
"has_sig": false,
"md5_digest": "eb89754abc9fb3c652163060a6a68ab6",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "<4,>=3.11",
"size": 73061,
"upload_time": "2025-07-14T06:49:15",
"upload_time_iso_8601": "2025-07-14T06:49:15.591788Z",
"url": "https://files.pythonhosted.org/packages/13/d5/c348fd593e1b11843b6b1616aa821024d92a3666a9dda9b34e5753c0c966/cogniweave-0.1.7.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-07-14 06:49:15",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "Inexplicable-YL",
"github_project": "CogniWeaveAgent",
"travis_ci": false,
"coveralls": false,
"github_actions": false,
"lcname": "cogniweave"
}