> [!IMPORTANT]  
> _`teams_memory` is in alpha. We are still internally validating and testing!_

# Teams Memory Module

The Teams Memory Module is a simple yet powerful library designed to help manage memories for Teams AI Agents. By offloading the responsibility of tracking user-related facts, it enables developers to create more personable and efficient agents.

# Features

- **Seamless Integration with Teams AI SDK**:  
  The memory module integrates directly with the Teams AI SDK via middleware, tracking both incoming and outgoing messages.

- **Automatic Memory Extraction**:  
  Define a set of topics (or use default ones) relevant to your application, and the memory module will automatically extract and store related memories.

- **Simple Short-Term Memory Retrieval**:  
  Easily retrieve working memory using paradigms like "last N minutes" or "last M messages."

- **Query-Based or Topic-Based Memory Retrieval**:  
  Search for existing memories using natural language queries or predefined topics.

# Integration

Integrating the Memory Module into your Teams AI SDK application (or Bot Framework) is straightforward.

## Prerequisites

- **Azure OpenAI or OpenAI Keys**:  
  The LLM layer is built using [LiteLLM](https://docs.litellm.ai/), which supports multiple [providers](https://docs.litellm.ai/docs/providers). However, only Azure OpenAI (AOAI) and OpenAI (OAI) have been tested.
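
A minimal sketch of the `memory_llm_config` dictionary used later in `LLMConfig(**memory_llm_config)`, assuming Azure OpenAI. The field names follow LiteLLM conventions and are assumptions; check the `LLMConfig` definition in your installed version:

```python
import os

# Assumed field names (LiteLLM-style); verify against LLMConfig before use.
memory_llm_config = {
    "model": "azure/gpt-4o",  # LiteLLM "provider/model" string
    "api_key": os.environ["AZURE_OPENAI_API_KEY"],
    "api_base": "https://<your-resource>.openai.azure.com/",
    "api_version": "2024-06-01",
    "embedding_model": "azure/text-embedding-3-small",  # assumed field for embeddings
}
```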

## Integrating into a Teams AI SDK Application

### Adding Messages

#### Incoming / Outgoing Messages

Memory extraction needs access to your application's incoming and outgoing messages. The easiest way to capture them is to register middleware that records them automatically.

After building your bot `Application`, create a `MemoryMiddleware` with the following configurations:

- **`llm`**: Configuration for the LLM (required).
- **`storage`**: Configuration for the storage layer. Defaults to `InMemoryStorage` if not provided.
- **`buffer_size`**: Number of messages the buffer must reach before memory extraction is triggered.
- **`timeout_seconds`**: How long to wait after the first message enters the buffer before extraction is triggered.
  - **Note**: Extraction occurs when either the `buffer_size` is reached or the `timeout_seconds` elapses, whichever happens first.
- **`topics`**: Topics relevant to your application. These help the LLM focus on important information and avoid unnecessary extractions.

```python
memory_middleware = MemoryMiddleware(
    config=MemoryModuleConfig(
        llm=LLMConfig(**memory_llm_config),
        storage=StorageConfig(
            db_path=os.path.join(os.path.dirname(__file__), "data", "memory.db")
        ),  # Uses SQLite if `db_path` is provided
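        buffer_size=10,  # Illustrative value: extraction also triggers once 10 messages are buffered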
        timeout_seconds=60,  # Extraction occurs 60 seconds after the first message
        enable_logging=True,  # Helpful for debugging
        topics=[
            Topic(name="Device Type", description="The type of device the user has"),
            Topic(name="Operating System", description="The operating system for the user's device"),
            Topic(name="Device Year", description="The year of the user's device"),
        ],  # Example topics for a tech-assistant agent
    )
)
bot_app.adapter.use(memory_middleware)
```

At this point, the application automatically listens to all incoming and outgoing messages.

> [!TIP]  
> This integration augments the `TurnContext` with a `memory_module` property scoped to the current request's conversation. Access it via:
>
> ```python
> memory_module: BaseScopedMemoryModule = context.get("memory_module")
> ```

#### [Optional] Internal Messages

The previous step stores only incoming and outgoing messages. You can also store `InternalMessage` objects (e.g., for additional context or for tracking internal conversation state):

```python
async def add_internal_message(self, context: TurnContext, tool_call_name: str, tool_call_result: str):
    conversation_ref_dict = TurnContext.get_conversation_reference(context.activity)
    memory_module: BaseScopedMemoryModule = context.get("memory_module")
    await memory_module.add_message(
        InternalMessageInput(
            content=json.dumps({"tool_call_name": tool_call_name, "result": tool_call_result}),
            author_id=conversation_ref_dict.bot.id,
            conversation_ref=memory_module.conversation_ref,
        )
    )
    return True
```

### Extracting Memories

> [!NOTE]  
> The memory module currently supports extracting **semantic memories** about a user. Future updates will include support for conversation-level memories. See [Future Work](#future-work) for details.

There are two ways to extract memories:

1. **Automatic Extraction**: Memories are extracted when the `buffer_size` is reached or the `timeout_seconds` elapses.
2. **On-Demand Extraction**: Manually trigger extraction by calling `memory_module.process_messages()`.

#### Automatic Extraction

Enable automatic extraction by calling `memory_middleware.memory_module.listen()` when your application starts. This listens to messages and triggers extraction based on the configured conditions.

```python
async def initialize_memory_module(_app: web.Application):
    await memory_middleware.memory_module.listen()

async def shutdown_memory_module(_app: web.Application):
    await memory_middleware.memory_module.shutdown()

app.on_startup.append(initialize_memory_module)
app.on_shutdown.append(shutdown_memory_module)

web.run_app(app, host="localhost", port=Config.PORT)
```

> [!IMPORTANT]  
> When using automatic extraction via `listen()`, make sure your application also calls `shutdown()` when it stops, so that the module's resources are cleaned up.

#### On-Demand Extraction

Use on-demand extraction to trigger memory extraction at specific points, such as after a `tool_call` or a particular message.

```python
async def extract_memories_after_tool_call(context: TurnContext):
    memory_module: ScopedMemoryModule = context.get('memory_module')
    await memory_module.process_messages()  # Extracts memories from the buffer
```

> [!NOTE]  
> `memory_module.process_messages()` can be called at any time, even if automatic extraction is enabled.

### Using Short-Term Memories (Working Memory)

The memory module simplifies the retrieval of recent messages for use as context in your LLM.

```python
async def build_llm_messages(self, context: TurnContext, system_message: str):
    memory_module: BaseScopedMemoryModule = context.get("memory_module")
    assert memory_module
    messages = await memory_module.retrieve_chat_history(
        ShortTermMemoryRetrievalConfig(last_minutes=1)
    )
    llm_messages: list = [
        {"role": "system", "content": system_message},
        *[
            {"role": "user" if message.type == "user" else "assistant", "content": message.content}
            for message in messages
        ],  # UserMessages have a `role` of `user`; others are `assistant`
    ]
    return llm_messages
```
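
The features list also mentions retrieval by "last M messages." A minimal sketch of that variant, with a hypothetical function name and a hypothetical field name (verify against `ShortTermMemoryRetrievalConfig` in your installed version):

```python
async def build_chat_history_by_count(context: TurnContext):
    memory_module: BaseScopedMemoryModule = context.get("memory_module")
    # Hypothetical field name -- check ShortTermMemoryRetrievalConfig before use.
    messages = await memory_module.retrieve_chat_history(
        ShortTermMemoryRetrievalConfig(last_messages=10)  # last 10 messages instead of a time window
    )
    return messages
```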

### Using Extracted Semantic Memory

Access extracted memories via the `ScopedMemoryModule` available in the `TurnContext`:

```python
async def retrieve_device_type_memories(context: TurnContext):
    memory_module: ScopedMemoryModule = context.get('memory_module')
    device_type_memories = await memory_module.search_memories(
        topic="Device Type",  # This name must match the topic name in the config
        query="What device does the user own?"
    )
```

You can search for memories using a topic, a natural language query, or both.
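
For example, using the same scoped module as above, a topic-only search and a query-only search look like this (the handler name and queries are illustrative):

```python
async def retrieve_memories_examples(context: TurnContext):
    memory_module: ScopedMemoryModule = context.get("memory_module")
    # Topic-only: everything filed under a configured topic.
    os_memories = await memory_module.search_memories(topic="Operating System")
    # Query-only: natural language search across all topics.
    laptop_memories = await memory_module.search_memories(
        query="Does the user own a laptop?"
    )
    return os_memories, laptop_memories
```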

## Logging

Enable logging in the memory module configuration:

```python
config = MemoryModuleConfig()
config.enable_logging = True
```

The module uses Python's [logging](https://docs.python.org/3.12/library/logging.html) library. By default, it logs debug messages (and higher severity) to the console. Customize the logging behavior as follows:

```python
import logging

from teams_memory import configure_logging

configure_logging(logging_level=logging.INFO)
```

# Model Performance

| Model  | Embedding Model        | Tested | Notes                                   |
| ------ | ---------------------- | ------ | --------------------------------------- |
| gpt-4o | text-embedding-3-small | ✅     | Tested via both OpenAI and Azure OpenAI |

# Future Work

The Teams Memory Module is in active development. Planned features include:

- **Evals and Performance Testing**: Support for additional models.
- **More Storage Providers**: Integration with PostgreSQL, CosmosDB, etc.
- **Automatic Message Expiration**: Delete messages older than a specified duration (e.g., 1 day).
- **Episodic Memory Extraction**: Memories about conversations, not just users.
- **Sophisticated Memory Access Patterns**: Secure sharing of memories across multiple groups.

            
