call-context-lib

Name: call-context-lib
Version: 0.2.5
Summary: A context management library for Python applications with callback support
Upload time: 2025-08-08 07:45:33
Requires Python: >=3.9
License: MIT
Keywords: callback, context, context-management, library, python
# Call Context Lib

[![CI](https://github.com/jitokim/call-context-lib/actions/workflows/ci.yml/badge.svg)](https://github.com/jitokim/call-context-lib/actions/workflows/ci.yml)
[![PyPI version](https://badge.fury.io/py/call-context-lib.svg)](https://badge.fury.io/py/call-context-lib)
[![Python](https://img.shields.io/pypi/pyversions/call-context-lib.svg)](https://pypi.org/project/call-context-lib/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

A Python context management library designed for LLM applications with LangChain callback integration. Manage execution context, metadata, and experiment logging seamlessly across your AI application stack using standard LangChain callback patterns.

## Features

- **Context Management**: Track user sessions, turns, and metadata across function calls
- **LangChain Integration**: Native support for LangChain's BaseCallbackHandler pattern
- **Async Support**: Full support for async/await patterns and async generators
- **Callback System**: Execute callbacks on context completion with standard LangChain interface
- **Metadata Handling**: Store and retrieve metadata with support for multiple values per key
- **Streaming Support**: Built-in support for streaming responses with context preservation
- **Type Safety**: Fully typed with Python type hints

## Installation

```bash
pip install call-context-lib
```

For development:

```bash
pip install "call-context-lib[dev]"
```

## Quick Start

### Basic Usage with LangChain Integration

```python
import asyncio

from call_context_lib import CallContext, CallContextCallbackHandler
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI


async def main():
    # Create a context
    ctx = CallContext(user_id="user123", turn_id="turn456")

    # Set metadata
    ctx.set_meta("request_type", "chat")
    ctx.set_meta("model", "gpt-4")

    # Create a LangChain callback bound to the context
    callback = CallContextCallbackHandler(ctx)

    # Use with a LangChain LLM
    llm = ChatOpenAI(model="gpt-4", callbacks=[callback])
    result = await llm.ainvoke([HumanMessage(content="Hello")])

    # Run the context's completion callbacks
    await ctx.on_complete()
    return result


asyncio.run(main())
```

### Streaming Support with LangChain

```python
import asyncio

from call_context_lib import CallContext, CallContextCallbackHandler
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI


async def stream_with_context(input_text: str):
    ctx = CallContext(user_id="user123", turn_id="turn456")
    ctx.set_meta("model", "gpt-4")

    # Create a callback bound to the context
    callback = CallContextCallbackHandler(ctx)

    # Stream with LangChain
    llm = ChatOpenAI(model="gpt-4", streaming=True, callbacks=[callback])
    async for chunk in llm.astream([HumanMessage(content=input_text)]):
        if chunk.content:
            yield chunk.content

    # Run the context's completion callbacks
    await ctx.on_complete()


# Usage
async def main():
    async for token in stream_with_context("Tell me about Python"):
        print(token, end="")


asyncio.run(main())
```

### Multiple Callback Pattern

```python
import asyncio

from call_context_lib import CallContext, CallContextCallbackHandler
from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI


# Custom experiment logging callback
class ExperimentLogger(BaseCallbackHandler):
    def __init__(self, ctx: CallContext):
        self.ctx = ctx

    def on_llm_end(self, response, **kwargs):
        print(f"Experiment completed for user {self.ctx.get_user_id()}")
        print(f"Model: {self.ctx.get_meta('model')}")


async def multi_callback_example():
    ctx = CallContext(user_id="user123", turn_id="turn456")
    ctx.set_meta("model", "gpt-4")

    # Combine multiple callbacks
    callbacks = [
        CallContextCallbackHandler(ctx),
        ExperimentLogger(ctx),
    ]

    llm = ChatOpenAI(model="gpt-4", callbacks=callbacks)
    result = await llm.ainvoke([HumanMessage(content="Hello")])

    await ctx.on_complete()
    return result


asyncio.run(multi_callback_example())
```

### Multiple Values for Same Key

```python
from call_context_lib import CallContext

ctx = CallContext(user_id="user123", turn_id="turn456")

ctx.set_meta("tag", "python")
ctx.set_meta("tag", "async")
ctx.set_meta("tag", "context")

# Get the most recent value
latest_tag = ctx.get_meta("tag")  # Returns "context"

# Get all values
all_tags = ctx.get_meta("tag", all_values=True)  # Returns ["python", "async", "context"]
```

## API Reference

### CallContext

The main context class that manages execution state and metadata.

#### Constructor

```python
CallContext(user_id: str, turn_id: str, meta: dict = None, callbacks: list = None)
```

#### Methods

- `get_user_id() -> str`: Get the user ID
- `get_turn_id() -> str`: Get the turn ID  
- `get_meta(key: str, all_values: bool = False) -> Any`: Get metadata value(s)
- `set_meta(key: str, value: Any) -> None`: Set metadata value
- `set_error(error: Exception) -> None`: Set error state
- `on_complete() -> None`: Execute all registered callbacks
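
A minimal end-to-end sketch tying these methods together. This assumes `meta` takes a plain key-to-value mapping, as the signature above suggests; the `try`/`except` around `set_error` is one plausible usage pattern, not prescribed by the library, and `handle_turn` is a hypothetical name:

```python
import asyncio

from call_context_lib import CallContext


async def handle_turn():
    # Initial metadata can be passed via the constructor's `meta` dict
    ctx = CallContext(user_id="user123", turn_id="turn456", meta={"request_type": "chat"})
    try:
        ctx.set_meta("model", "gpt-4")
        assert ctx.get_user_id() == "user123"
        assert ctx.get_meta("model") == "gpt-4"
    except Exception as exc:
        ctx.set_error(exc)  # record the error state on the context
    finally:
        await ctx.on_complete()  # run all registered callbacks


asyncio.run(handle_turn())
```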

### CallContextCallbackHandler

LangChain BaseCallbackHandler integration for context management.

```python
from call_context_lib import CallContext, CallContextCallbackHandler
from langchain_openai import ChatOpenAI

# Create a callback handler bound to a context
ctx = CallContext(user_id="user123", turn_id="turn456")
callback = CallContextCallbackHandler(ctx)

# Use with any LangChain component
llm = ChatOpenAI(callbacks=[callback])
```

#### Callback Methods

- `on_llm_start(*args, **kwargs)`: Called when LLM starts
- `on_llm_end(response, **kwargs)`: Called when LLM completes
- `on_llm_error(error, **kwargs)`: Called when LLM encounters error
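
These hooks follow LangChain's standard `BaseCallbackHandler` interface. For orientation, a minimal custom handler implementing the same three hooks might look like the sketch below; the print statements are illustrative and are not what `CallContextCallbackHandler` does internally:

```python
from langchain_core.callbacks import BaseCallbackHandler


class PrintingHandler(BaseCallbackHandler):
    """Illustrative handler showing when each hook fires."""

    def on_llm_start(self, serialized, prompts, **kwargs):
        print(f"LLM starting with {len(prompts)} prompt(s)")

    def on_llm_end(self, response, **kwargs):
        print("LLM finished")

    def on_llm_error(self, error, **kwargs):
        print(f"LLM failed: {error}")
```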

## Examples

The `examples/` directory contains practical examples:

- **FastAPI Integration**: How to use the library with FastAPI streaming applications (sketched below)
- **LangChain Callback Integration**: Examples using CallContextCallbackHandler with LangChain LLMs
- **Custom Experiment Logging**: Implementing custom BaseCallbackHandler for experiment tracking
- **Multiple Callback Patterns**: Combining context callbacks with other LangChain callbacks
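
As an illustration of the FastAPI pattern, here is a hypothetical endpoint (not the repository's actual example code) that wraps the `stream_with_context` generator from the Quick Start in a `StreamingResponse`; the `payload["input"]` key is assumed for illustration:

```python
# Hypothetical endpoint; the real examples live under examples/ and may differ.
from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()


@app.post("/openai-stream-example")
async def openai_stream_example(payload: dict):
    # stream_with_context is the async generator defined in the Quick Start above
    return StreamingResponse(
        stream_with_context(payload["input"]),
        media_type="text/plain",
    )
```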

### Running Examples

```bash
# Install dependencies
cd examples
uv sync

# Set your OpenAI API key
export OPENAI_API_KEY="your-api-key"

# Run the FastAPI server
python -m uvicorn main:app --reload --port 8001
```

### Example API Endpoints

- `POST /openai-stream-example`: Streaming LLM response with context
- `POST /openai-invoke-example`: Single LLM response with context
- `POST /llm-module-stream-example`: Custom LLM module streaming
- `POST /llm-module-invoke-example`: Custom LLM module invoke

## Development

### Setting up development environment

```bash
# Clone the repository
git clone https://github.com/jitokim/call-context-lib.git
cd call-context-lib

# Install development dependencies
make install-dev

# Run tests
make test

# Run linting
make lint

# Format code
make format
```

### Available Make Commands

- `make install` - Install package
- `make install-dev` - Install with development dependencies  
- `make test` - Run tests
- `make test-cov` - Run tests with coverage
- `make lint` - Run linting
- `make format` - Format code
- `make build` - Build package
- `make publish` - Publish to PyPI
- `make clean` - Clean build artifacts

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

## License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## Changelog

See [CHANGELOG.md](CHANGELOG.md) for a list of changes and version history.

## Support

If you encounter any problems or have questions, please [open an issue](https://github.com/jitokim/call-context-lib/issues) on GitHub.
            
