# autogen-vertexai-memory
VertexAI Memory integration for Autogen agents. Store and retrieve agent memories using Google Cloud's VertexAI Memory service with semantic search capabilities.
## Features
- 🧠 **Persistent Memory Storage** - Store agent memories in Google Cloud VertexAI
- 🔍 **Semantic Search** - Find relevant memories using natural language queries
- 🔄 **Automatic Context Updates** - Seamlessly inject memories into chat contexts
- ⚡ **Async/Await Support** - Full async API compatible with Autogen's runtime
- 🎯 **User-Scoped Isolation** - Multi-tenant memory management
- 🛠️ **Tool Integration** - Ready-to-use tools for agent workflows
## Installation
```bash
pip install autogen-vertexai-memory
```
## Prerequisites
1. **Google Cloud Project** with VertexAI API enabled
2. **Authentication** configured (Application Default Credentials)
3. **VertexAI Memory Resource** created in your project
```bash
# Set up authentication
gcloud auth application-default login
# Enable VertexAI API
gcloud services enable aiplatform.googleapis.com
```
## Quick Start
### Basic Memory Usage
```python
from autogen_vertexai_memory import VertexaiMemory, VertexaiMemoryConfig
from autogen_core.memory import MemoryContent, MemoryMimeType
# Configure memory
config = VertexaiMemoryConfig(
    api_resource_name="projects/my-project/locations/us-central1/......./",
    project_id="my-project",
    location="us-central1",
    user_id="user123"
)
memory = VertexaiMemory(config=config)
# Store a memory
await memory.add(
    content=MemoryContent(
        content="User prefers concise responses and uses Python",
        mime_type=MemoryMimeType.TEXT
    )
)
# Semantic search for relevant memories
results = await memory.query(query="programming preferences")
for mem in results.results:
    print(mem.content)
# Output: User prefers concise responses and uses Python
# Retrieve all memories
all_memories = await memory.query(query="")
```
### Using with Autogen Agents
```python
from autogen_core.model_context import UnboundedChatCompletionContext
from autogen_core.models import UserMessage
# Create chat context (ChatCompletionContext itself is abstract; use a concrete subclass)
context = UnboundedChatCompletionContext()
# Add user message (UserMessage requires a source)
await context.add_message(
    UserMessage(content="What programming language should I use?", source="user")
)
# Inject relevant memories into context
result = await memory.update_context(context)
print(f"Added {len(result.memories.results)} memories to context")
# Now the agent has access to stored preferences
```
### Environment Variables
You can also configure using environment variables:
```bash
export VERTEX_PROJECT_ID="my-project"
export VERTEX_LOCATION="us-central1"
export VERTEX_USER_ID="user123"
export VERTEX_API_RESOURCE_NAME="projects/my-project/locations/us-central1/memories/agent-memory"
```
```python
# Auto-loads from environment
config = VertexaiMemoryConfig()
memory = VertexaiMemory(config=config)
```
## Memory Tools for Agents
Integrate memory capabilities directly into your Autogen agents:
```python
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_vertexai_memory.tools import (
    SearchVertexaiMemoryTool,
    UpdateVertexaiMemoryTool,
    VertexaiMemoryToolConfig
)
# Configure memory tools
memory_config = VertexaiMemoryToolConfig(
    project_id="my-project",
    location="us-central1",
    user_id="user123",
    api_resource_name="projects/my-project/locations/us-central1/memories/agent-memory"
)
# Create memory tools
search_tool = SearchVertexaiMemoryTool(config=memory_config)
update_tool = UpdateVertexaiMemoryTool(config=memory_config)
# Create agent with memory tools
agent = AssistantAgent(
    name="memory_assistant",
    model_client=OpenAIChatCompletionClient(model="gpt-4"),
    tools=[search_tool, update_tool],
    system_message="""You are a helpful assistant with memory capabilities.

Use search_vertexai_memory_tool to retrieve relevant information about the user.
Use update_vertexai_memory_tool to store important facts you learn during conversations.
"""
)
# Now the agent can search and store memories automatically!
# Example conversation:
# User: "I prefer Python for data analysis"
# Agent uses update_vertexai_memory_tool to store this preference
#
# Later...
# User: "What language should I use for my data project?"
# Agent uses search_vertexai_memory_tool, retrieves the preference, and responds accordingly
```
## API Reference
### VertexaiMemoryConfig
Configuration model for VertexAI Memory.
```python
VertexaiMemoryConfig(
    api_resource_name: str,  # Full resource name: "projects/{project}/locations/{location}/memories/{memory}"
    project_id: str,         # Google Cloud project ID
    location: str,           # GCP region (e.g., "us-central1", "europe-west1")
    user_id: str             # Unique user identifier for memory isolation
)
```
**Environment Variables:**
- `VERTEX_API_RESOURCE_NAME`
- `VERTEX_PROJECT_ID`
- `VERTEX_LOCATION`
- `VERTEX_USER_ID`
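The env-var fallback behavior can be pictured with a small stand-in model. The sketch below is illustrative only (`EnvBackedConfig` is a hypothetical name, not the package's actual class): each field defaults to its `VERTEX_*` environment variable when not passed explicitly.

```python
import os
from dataclasses import dataclass, field

# Illustrative stand-in (NOT the package's real class) showing how fields
# can default to their VERTEX_* environment variables at construction time.
@dataclass
class EnvBackedConfig:
    api_resource_name: str = field(
        default_factory=lambda: os.environ["VERTEX_API_RESOURCE_NAME"])
    project_id: str = field(
        default_factory=lambda: os.environ["VERTEX_PROJECT_ID"])
    location: str = field(
        default_factory=lambda: os.environ["VERTEX_LOCATION"])
    user_id: str = field(
        default_factory=lambda: os.environ["VERTEX_USER_ID"])
```

An explicit keyword argument always wins over the environment, since `default_factory` only runs for fields that were omitted.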
### VertexaiMemory
Main memory interface implementing Autogen's Memory protocol.
```python
VertexaiMemory(
    config: Optional[VertexaiMemoryConfig] = None,
    client: Optional[Client] = None
)
```
**Methods:**
#### `add(content, cancellation_token=None)`
Store a new memory.
```python
await memory.add(
    content=MemoryContent(
        content="Important fact to remember",
        mime_type=MemoryMimeType.TEXT
    )
)
```
#### `query(query="", cancellation_token=None, **kwargs)`
Search memories or retrieve all.
```python
# Semantic search (top 3 results)
results = await memory.query(query="user preferences")
# Get all memories
all_results = await memory.query(query="")
```
**Returns:** `MemoryQueryResult` with list of `MemoryContent` objects
#### `update_context(model_context)`
Inject memories into the chat context as a system message.
```python
context = UnboundedChatCompletionContext()
result = await memory.update_context(context)
# Context now includes relevant memories
```
**Returns:** `UpdateContextResult` with retrieved memories
#### `clear()`
⚠️ **Permanently delete all memories** (irreversible).
```python
await memory.clear() # Use with caution!
```
#### `close()`
Cleanup resources (currently a no-op but provided for protocol compliance).
```python
await memory.close()
```
### Memory Tools
#### VertexaiMemoryToolConfig
Shared configuration for memory tools.
```python
VertexaiMemoryToolConfig(
    project_id: str,
    location: str,
    user_id: str,
    api_resource_name: str
)
```
#### SearchVertexaiMemoryTool
Tool for semantic memory search. Automatically used by agents to retrieve relevant memories.
```python
SearchVertexaiMemoryTool(config: Optional[VertexaiMemoryToolConfig] = None, **kwargs)
```
**Tool Name:** `search_vertexai_memory_tool`
**Description:** Perform a search with given parameters using vertexai memory bank
**Parameters:**
- `query` (str): Semantic search query to retrieve information about user
- `top_k` (int, default=5): Maximum number of relevant memories to retrieve
**Returns:** List of matching memory strings
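The `top_k` cutoff simply means "keep the k best-scoring matches". The toy function below illustrates only that semantics; the actual scoring and ranking happen server-side in VertexAI, and `top_k_memories` is a hypothetical name, not part of this package.

```python
def top_k_memories(scored_memories, k=5):
    """Toy illustration of top_k selection: keep the k highest-scoring facts.

    `scored_memories` is a list of (text, similarity_score) pairs.
    """
    ranked = sorted(scored_memories, key=lambda pair: pair[1], reverse=True)
    return [text for text, _score in ranked[:k]]
```

For example, with three scored facts and `k=2`, only the two strongest matches survive the cutoff.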
#### UpdateVertexaiMemoryTool
Tool for storing new memories. Automatically used by agents to save important information.
```python
UpdateVertexaiMemoryTool(config: Optional[VertexaiMemoryToolConfig] = None, **kwargs)
```
**Tool Name:** `update_vertexai_memory_tool`
**Description:** Store a new memory fact in the VertexAI memory bank for the user
**Parameters:**
- `content` (str): The memory content to store as a fact in the memory bank
**Returns:** Success status and message
## Advanced Examples
### Custom Client Configuration
```python
from vertexai import Client
# Create custom client with specific settings
client = Client(
    project="my-project",
    location="us-central1"
)
memory = VertexaiMemory(config=config, client=client)
```
### Async Context Manager
```python
async with VertexaiMemory(config=config) as memory:
    await memory.add(content)
    results = await memory.query("query")
# Automatic cleanup on exit
```
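This behavior presumably rests on Python's standard `__aenter__`/`__aexit__` protocol, with `close()` as the exit hook. A minimal self-contained sketch of that pattern (`ClosingMemory` is a hypothetical stand-in, not the real class):

```python
import asyncio

class ClosingMemory:
    """Hypothetical stand-in showing the async context-manager pattern."""

    def __init__(self) -> None:
        self.closed = False

    async def close(self) -> None:
        # A real implementation would release network resources here.
        self.closed = True

    async def __aenter__(self) -> "ClosingMemory":
        return self

    async def __aexit__(self, exc_type, exc, tb) -> None:
        # Runs even if the body raised, guaranteeing cleanup.
        await self.close()

async def demo() -> bool:
    async with ClosingMemory() as mem:
        pass  # use the memory here
    return mem.closed

print(asyncio.run(demo()))  # prints True
```

Because `__aexit__` runs even on exceptions, `close()` is guaranteed whether or not the body succeeds.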
### Multi-User Isolation
```python
# User 1's memories
user1_config = VertexaiMemoryConfig(
    api_resource_name="projects/my-project/locations/us-central1/memories/app-memory",
    project_id="my-project",
    location="us-central1",
    user_id="user1"
)
user1_memory = VertexaiMemory(config=user1_config)

# User 2's memories (isolated from User 1)
user2_config = VertexaiMemoryConfig(
    api_resource_name="projects/my-project/locations/us-central1/memories/app-memory",
    project_id="my-project",
    location="us-central1",
    user_id="user2"
)
user2_memory = VertexaiMemory(config=user2_config)
```
### Sharing Config Across Tools
```python
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_vertexai_memory.tools import (
    SearchVertexaiMemoryTool,
    UpdateVertexaiMemoryTool,
    VertexaiMemoryToolConfig
)

# Create config once
config = VertexaiMemoryToolConfig(
    project_id="my-project",
    location="us-central1",
    user_id="user123",
    api_resource_name="projects/my-project/locations/us-central1/memories/agent-memory"
)
# Share across multiple tools
search_tool = SearchVertexaiMemoryTool(config=config)
update_tool = UpdateVertexaiMemoryTool(config=config)
# Use in multiple agents
agent1 = AssistantAgent(
    name="agent1",
    model_client=OpenAIChatCompletionClient(model="gpt-4"),
    tools=[search_tool, update_tool]
)

agent2 = AssistantAgent(
    name="agent2",
    model_client=OpenAIChatCompletionClient(model="gpt-4"),
    tools=[search_tool]  # This agent can only search, not update
)
# Both agents use the same VertexAI client and configuration
```
## Development
### Setup
```bash
# Clone repository
git clone https://github.com/thelaycon/autogen-vertexai-memory.git
cd autogen-vertexai-memory
# Install dependencies with Poetry
poetry install
# Run tests
poetry run pytest
# Run tests with coverage
poetry run pytest --cov=autogen_vertexai_memory --cov-report=html
# Type checking
poetry run mypy src/autogen_vertexai_memory
# Linting
poetry run ruff check src/
```
### Project Structure
```
autogen-vertexai-memory/
├── src/
│   └── autogen_vertexai_memory/
│       ├── __init__.py
│       ├── memory/
│       │   ├── __init__.py
│       │   └── _vertexai_memory.py        # Main memory implementation
│       └── tools/
│           ├── __init__.py
│           └── _vertexai_memory_tools.py  # Tool implementations
├── tests/
│   ├── conftest.py
│   └── test_vertexai_memory.py
├── pyproject.toml
└── README.md
```
### Running Tests
The test suite uses mocking to avoid real VertexAI API calls:
```bash
# Run all tests
poetry run pytest
# Run with verbose output
poetry run pytest -v
# Run specific test class
poetry run pytest tests/test_vertexai_memory.py::TestVertexaiMemoryConfig
# Run with coverage report
poetry run pytest --cov=autogen_vertexai_memory --cov-report=term-missing
```
## Troubleshooting
### Authentication Issues
```bash
# Verify authentication
gcloud auth application-default print-access-token
# Set explicit credentials
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account-key.json"
```
### Memory Resource Not Found
Ensure your `api_resource_name` is correct:
```python
# Format: projects/{project_id}/locations/{location}/memories/{memory_id}
api_resource_name = "projects/my-project/locations/us-central1/memories/my-memory"
```
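A quick sanity check of the string before making any API calls can catch typos early. The helper below is hypothetical (not part of this package) and only verifies the shape of the name, not that the resource actually exists:

```python
import re

# Matches projects/{project_id}/locations/{location}/memories/{memory_id}
_MEMORY_RESOURCE_RE = re.compile(r"^projects/[^/]+/locations/[^/]+/memories/[^/]+$")

def looks_like_memory_resource(name: str) -> bool:
    """Hypothetical validator: True if `name` has the expected resource shape."""
    return _MEMORY_RESOURCE_RE.fullmatch(name) is not None
```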
### Empty Query Results
```python
# Check if memories exist
all_memories = await memory.query(query="")
print(f"Total memories: {len(all_memories.results)}")
# Verify user_id matches
print(f"Using user_id: {memory.user_id}")
```
## Contributing
Contributions are welcome! Please follow these steps:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Make your changes with tests
4. Run tests (`poetry run pytest`)
5. Commit your changes (`git commit -m 'Add amazing feature'`)
6. Push to the branch (`git push origin feature/amazing-feature`)
7. Open a Pull Request
### Development Guidelines
- Write tests for new features
- Follow existing code style
- Update documentation for API changes
- Ensure all tests pass before submitting PR
## License
MIT License - see [LICENSE](LICENSE) file for details.
## Support
- 📫 [GitHub Issues](https://github.com/thelaycon/autogen-vertexai-memory/issues) - Bug reports and feature requests
- 💬 [GitHub Discussions](https://github.com/thelaycon/autogen-vertexai-memory/discussions) - Questions and community support
- 📚 [VertexAI Documentation](https://cloud.google.com/vertex-ai/docs) - Official VertexAI docs
- 🤖 [Autogen Documentation](https://microsoft.github.io/autogen/) - Autogen framework docs
## Acknowledgments
- Built for the [Autogen](https://github.com/microsoft/autogen) framework
- Powered by [Google Cloud VertexAI](https://cloud.google.com/vertex-ai)
---
Made with ❤️ for the Autogen community