# memory-agent
[Repository](https://github.com/gzileni/memory-agent) · [Stars](https://github.com/gzileni/memory-agent/stargazers) · [Forks](https://github.com/gzileni/memory-agent/network)
The library manages both [**persistence**](https://langchain-ai.github.io/langgraph/how-tos/persistence/) and [**memory**](https://langchain-ai.github.io/langgraph/concepts/memory/#what-is-memory) for a **LangGraph** agent.
**memory-agent** uses **Redis** as the backend for **short‑term memory** and **Qdrant** for **long‑term persistence** and **semantic search**.

---
## 🔑 Key Features
- **Dual-layer memory system**
  - **Short-term memory with Redis** → fast, volatile storage with TTL for active sessions.
  - **Long-term persistence with Qdrant** → semantic search, embeddings, and cross‑session retrieval.
- **Integration with LangGraph** → stateful LLM agents with checkpoints and memory tools.
- **Multi-LLM**
  - OpenAI (via `AgentOpenAI`)
  - Ollama (via `AgentOllama`) for local inference
- **Flexible embeddings**
  - OpenAI embeddings (default)
  - Ollama embeddings (e.g., `nomic-embed-text`)
- **Automatic memory management**
  - Summarization and reflection to compress context
- **Observability**
  - Structured logging, compatible with **Grafana/Loki**
- **Easy installation & deployment**
  - `pip install memory-agent`
  - [Docker‑ready](./docker/README.md)
---
## 🧠 Memory vs 🗃️ Persistence
| Function | Database | Why |
|-----------------|----------|-----|
| **Memory** | Redis | Performance, TTL, fast session context |
| **Persistence** | Qdrant | Vector search, long‑term storage |
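The division of labor above can be sketched with a minimal TTL store for the short-term layer. This is an illustrative stdlib model of Redis-style expiry, not the library's implementation:

```python
import time

class ShortTermMemory:
    """Illustrative TTL cache mimicking Redis-style expiry (not memory-agent's API)."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._data = {}  # key -> (value, expiry timestamp)

    def set(self, key, value):
        self._data[key] = (value, time.monotonic() + self.ttl)

    def get(self, key, default=None):
        item = self._data.get(key)
        if item is None:
            return default
        value, expiry = item
        if time.monotonic() > expiry:  # entry expired: evict and report a miss
            del self._data[key]
            return default
        return value

mem = ShortTermMemory(ttl_seconds=0.05)
mem.set("name", "Giuseppe")
assert mem.get("name") == "Giuseppe"   # fresh entry is visible
time.sleep(0.06)
assert mem.get("name") is None         # entry gone after the TTL, as with Redis EXPIRE
```

The long-term layer differs in kind, not just lifetime: Qdrant retrieves by vector similarity over embeddings rather than by exact key.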
---
## 📦 Installation
```bash
pip install memory-agent
```
For local use with **Ollama** or local embeddings:
- Install Ollama: https://ollama.ai
---
## ▶️ Usage examples (repository root)
The examples show how to configure the agent, send messages (including **streaming**) and share memory between different agents.
### 1) [`demo.py`](./demo.py) — Quick start with Ollama + memory
What it does:
1. Saves to context: `"My name is Giuseppe. Remember that."`
2. Asks a factoid: `"What is the capital of France?"` (streaming)
3. Retrieves from **short‑term memory**: `"What is my name?"` (streaming)
Essential snippet (simplified):
```python
from memory_agent.agent.ollama import AgentOllama
from demo_config import (
    thread_id, user_id, session_id, model_ollama, redis_config,
    model_embedding_vs_config, model_embedding_config, qdrant_config,
    collection_config,
)

agent = AgentOllama(
    thread_id=thread_id,
    user_id=user_id,
    session_id=session_id,
    model_config=model_ollama,
    redis_config=redis_config,
    qdrant_config=qdrant_config,
    collection_config=collection_config,
    embedding_store_config=model_embedding_vs_config,
    embedding_model_config=model_embedding_config,
)

# Non-streaming call
text = await agent.invoke("My name is Giuseppe. Remember that.")

# Streaming call
async for token in agent.invoke_stream("What is the capital of France?"):
    print(token, end="")

# Retrieve from context
async for token in agent.invoke_stream("What is my name?"):
    print(token, end="")
```
Run:
```bash
python demo.py
```
What to expect:
- On the first request the agent stores the information (“Giuseppe”).
- On the third request the agent should answer with the previously provided name.
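Because the snippet above uses `await` at top level, a runnable script needs an async entry point driven by `asyncio.run`. A minimal sketch of that control flow, using a stand-in agent (the `EchoAgent` here is hypothetical, just so the pattern runs without Redis/Ollama):

```python
import asyncio

class EchoAgent:
    """Hypothetical stand-in exposing the same invoke/invoke_stream shape."""

    async def invoke(self, prompt: str) -> str:
        return f"ok: {prompt}"

    async def invoke_stream(self, prompt: str):
        for token in prompt.split():  # yield word by word, like token streaming
            yield token + " "

async def main():
    agent = EchoAgent()
    text = await agent.invoke("My name is Giuseppe. Remember that.")
    chunks = []
    async for token in agent.invoke_stream("What is my name?"):
        chunks.append(token)
    return text, "".join(chunks)

text, streamed = asyncio.run(main())
print(text)      # prints "ok: My name is Giuseppe. Remember that."
print(streamed)  # prints "What is my name? "
```

Swapping `EchoAgent` for the configured `AgentOllama` instance gives the structure of `demo.py`.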
---
### 2) [`demo_config.py`](./demo_config.py) — Centralized configuration
This file defines **all parameters** used by the examples:
- **Session identifiers**:

  ```python
  thread_id = "thread_demo"
  user_id = "user_demo"
  session_id = "session_demo"
  ```

- **LLM model (Ollama)**:

  ```python
  model_ollama = {
      "model": "llama3.1",
      "model_provider": "ollama",
      "api_key": None,
      "base_url": "http://localhost:11434",
      "temperature": 0.5,
  }
  ```

- **Qdrant**:

  ```python
  qdrant_config = {
      "url": "http://localhost:6333",
  }
  ```

- **Embeddings (via Ollama)**:

  ```python
  model_embedding_config = {
      "name": "nomic-embed-text",
      "url": "http://localhost:11434",
  }
  ```

- **Vector Store / Collection** (example): COSINE distance via `qdrant_client.http.models.Distance.COSINE`.
- **Redis**: connection/TTL parameters for short‑term memory.
> Modify these values to point to your Redis/Qdrant/Ollama instances. Other examples import directly from `demo_config.py`.
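The Redis block is the one configuration the list above does not spell out. A plausible shape, assuming standard Redis client settings (the exact keys are an assumption; check `demo_config.py` in the repository for the real ones):

```python
# Hypothetical shape for the short-term memory backend configuration;
# the field names mirror common Redis client options and are NOT
# verified against memory-agent -- consult demo_config.py for the truth.
redis_config = {
    "host": "localhost",
    "port": 6379,   # default Redis port
    "db": 0,        # logical database index
}
```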
---
### 3) [`demo_mem_shared.py`](./demo_mem_shared.py) — Shared memory between two agents (LangGraph)
This example shows how **two distinct agents** can **share the same memory**.
The idea is to create two `AgentOllama` instances (e.g., `agent_1` and `agent_2`) that use **the same backends** (Redis + Qdrant) and **the same relevant identifiers** (e.g., collection, user, thread), so that what the first agent stores is available to the second.
Flow:
1. `agent_1` receives: `"My name is Giuseppe. Remember that."` and stores it.
2. `agent_2` receives: `"What is my name?"` and retrieves the answer from shared memory.
Essential snippet (simplified):
```python
agent_1 = AgentOllama(... shared ...)
agent_2 = AgentOllama(... shared ...)
await agent_1.invoke("My name is Giuseppe. Remember that.")
# The other agent pulls from the same memory
answer = await agent_2.invoke("What is my name?")
print(answer) # → "Your name is Giuseppe" (expected)
```
Run:
```bash
python demo_mem_shared.py
```
> This pattern is useful when multiple services/workers collaborate on the same user or conversation, leveraging **Redis** for short‑term state and **Qdrant** for persistence/semantic search across sessions.
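Conceptually, sharing works because both agents resolve the same identifiers against the same backend. A stdlib sketch of that pattern (an illustration of the idea, not the library's implementation):

```python
class SharedStoreAgent:
    """Two agents pointed at the same store and ids see each other's writes."""

    def __init__(self, store: dict, user_id: str):
        self.store = store      # stands in for Redis/Qdrant reachable by both
        self.user_id = user_id  # part of the key, like thread/session ids

    def remember(self, key, value):
        self.store[(self.user_id, key)] = value

    def recall(self, key):
        return self.store.get((self.user_id, key))

backend = {}  # shared backend: both agents hold a reference to it
agent_1 = SharedStoreAgent(backend, user_id="user_demo")
agent_2 = SharedStoreAgent(backend, user_id="user_demo")

agent_1.remember("name", "Giuseppe")
assert agent_2.recall("name") == "Giuseppe"  # same backend + same ids
```

Note the converse: an agent constructed with a different `user_id` (or, in memory-agent, a different collection/thread) would miss the key, which is exactly how memories are kept separate.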
---
## ⚙️ Prerequisites
- **Redis** running (used for short‑term memory)
- **Ollama** running (LLM and optionally embeddings)
- **OpenAI** API key (only needed when using `AgentOpenAI`)
- Correct variables/URLs in `demo_config.py`
---
## [Docker](./docker/README.md)
---
## 🧪 Tips
- For **multi‑worker** environments, ensure `thread_id`, `user_id`, `session_id`, and collection keys are consistent across processes that need to share memory.
- To separate memories of different agents, use **distinct session/thread IDs** or different collections in Qdrant.
- Tune model `temperature` and pruning/summarization parameters to balance cost/quality/context.
---
## 🛠️ Troubleshooting
- **Memory not retrieved** → check that Redis is reachable and that the IDs (thread/user/session) are consistent between requests.
- **Semantic search not effective** → verify embeddings are enabled (e.g., `nomic-embed-text`) and that Qdrant has the correct collection.
- **Streaming prints nothing** → ensure you iterate the `invoke_stream(...)` generator and do `print(token, end="")`.
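The streaming pitfall is worth a concrete illustration: an async generator produces nothing until iterated. A self-contained sketch (the local `invoke_stream` is a stand-in for the agent method):

```python
import asyncio

async def invoke_stream(prompt):
    """Stand-in for agent.invoke_stream: yields tokens one at a time."""
    for tok in ("Par", "is"):
        yield tok

async def main():
    # Calling invoke_stream(...) alone only creates the generator object;
    # tokens appear only when you iterate it with `async for`.
    out = []
    async for token in invoke_stream("What is the capital of France?"):
        out.append(token)
        print(token, end="")  # end="" keeps tokens on one line
    return "".join(out)

result = asyncio.run(main())  # prints "Paris"
```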
---
## [📄 License MIT](./LICENSE.md)