# 🔄 Revenium Core Middleware
[PyPI](https://pypi.org/project/revenium-middleware-core/)
[License: LGPL-3.0](https://www.gnu.org/licenses/lgpl-3.0)
A foundational library that provides the core metering functionality shared by all Revenium AI provider-specific middleware implementations (OpenAI, Anthropic, Ollama, etc.). 🐍✨
## ✨ Features
- **🧠 Shared Core Functionality**: Provides the essential metering infrastructure used by all Revenium middleware implementations
- **🔄 Asynchronous Processing**: Background thread management for non-blocking metering operations
- **🛑 Graceful Shutdown**: Ensures all metering data is properly sent even during application shutdown
- **🔌 Provider Agnostic**: Designed to work with any AI provider through specific middleware implementations
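The background-thread pattern behind the asynchronous processing feature can be sketched with the standard library. This is a hypothetical illustration of the approach, not the library's actual implementation — the real `run_async_in_thread` in `revenium_middleware_core` may differ:

```python
import asyncio
import threading

def run_async_in_thread(coro):
    """Run a coroutine to completion on a fresh event loop in a daemon
    thread, so the caller is never blocked. Hypothetical sketch of the
    pattern; the real helper may differ."""
    result = {}

    def runner():
        result["value"] = asyncio.run(coro)

    thread = threading.Thread(target=runner, daemon=True)
    thread.start()
    return thread, result

async def fake_metering_call(tokens):
    await asyncio.sleep(0)  # stand-in for an HTTP POST to a metering API
    return {"recorded_tokens": tokens}

thread, result = run_async_in_thread(fake_metering_call(700))
thread.join()  # a graceful-shutdown hook would join outstanding threads here
print(result["value"])  # {'recorded_tokens': 700}
```

Returning the thread handle lets a shutdown hook join any in-flight metering work before the process exits.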
## 📥 Installation
```bash
pip install revenium-middleware-core
```
## 🔧 Usage
### 🔄 Direct Usage
While this package is primarily intended as a dependency for provider-specific middleware, you can use it directly:
```python
from revenium_middleware_core import client, run_async_in_thread, shutdown_event

# Record usage directly
client.record_usage(
    model="gpt-4",
    prompt_tokens=500,
    completion_tokens=200,
    user_id="user123",
    session_id="session456"
)

# Run async metering tasks in background threads
async def async_metering_task():
    await client.async_record_usage(
        model="gpt-3.5-turbo",
        prompt_tokens=300,
        completion_tokens=150,
        user_id="user789"
    )

thread = run_async_in_thread(async_metering_task())

# Application continues while metering happens in background
```
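The `shutdown_event` imported above coordinates graceful shutdown: it lets a background worker finish flushing queued metering records before the process exits. The stand-ins below (`shutdown_event`, `pending`, `metering_worker`) are hypothetical, built from the standard library to illustrate the idea; the library's internals may differ:

```python
import queue
import threading

# Hypothetical stand-ins for the library's shutdown event and background
# worker; the real internals of revenium_middleware_core may differ.
shutdown_event = threading.Event()
pending = queue.Queue()
processed = []

def metering_worker():
    # Drain queued usage records until shutdown is signalled AND the
    # backlog is empty, so no metering data is lost on exit.
    while not (shutdown_event.is_set() and pending.empty()):
        try:
            record = pending.get(timeout=0.05)
        except queue.Empty:
            continue
        processed.append(record)  # the real worker would POST to Revenium

worker = threading.Thread(target=metering_worker)
worker.start()

pending.put({"model": "gpt-4", "prompt_tokens": 500})
pending.put({"model": "gpt-4", "completion_tokens": 200})

shutdown_event.set()  # signal shutdown...
worker.join()         # ...and wait for the backlog to flush
print(len(processed))  # 2
```

Checking both the event and the queue in the loop condition is what guarantees the "all metering data is properly sent" property from the feature list.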
### 🏗️ Building Provider-Specific Middleware
This library is designed to be extended by provider-specific middleware implementations:
```python
from revenium_middleware_core import client, run_async_in_thread

# Example of how a provider-specific middleware might use the core
def record_provider_usage(response_data, metadata):
    # Extract token counts from the provider-specific response format
    prompt_tokens = response_data.usage.prompt_tokens
    completion_tokens = response_data.usage.completion_tokens

    # Use the core client to record the usage
    run_async_in_thread(
        client.async_record_usage(
            model=response_data.model,
            prompt_tokens=prompt_tokens,
            completion_tokens=completion_tokens,
            **metadata
        )
    )
```
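The extraction step is the part each provider middleware customizes. A self-contained sketch of that normalization, using a mocked response object in place of a real provider SDK response (the `extract_usage` helper and the mock are illustrative, not part of the library):

```python
from types import SimpleNamespace

def extract_usage(response_data, metadata):
    """Normalize a provider response into the flat kwargs a metering
    client expects. Hypothetical helper; real middleware would receive
    the provider SDK's own response object."""
    return {
        "model": response_data.model,
        "prompt_tokens": response_data.usage.prompt_tokens,
        "completion_tokens": response_data.usage.completion_tokens,
        **metadata,
    }

# Mock of an OpenAI-style response carrying a nested `usage` object
mock_response = SimpleNamespace(
    model="gpt-4",
    usage=SimpleNamespace(prompt_tokens=500, completion_tokens=200),
)
payload = extract_usage(mock_response, {"user_id": "user123"})
print(payload["completion_tokens"], payload["user_id"])  # 200 user123
```

Keeping extraction separate from submission means the provider-specific layer stays thin: only this function changes between, say, an OpenAI and an Anthropic middleware.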
## 🔄 Compatibility
- 🐍 Python 3.8+
- 🤝 Compatible with all Revenium provider-specific middleware implementations
## 📚 Documentation
For more detailed documentation, please refer to the docstrings in the code or visit our GitHub repository.
## 👥 Contributing
Contributions are welcome! Please check out our contributing guidelines for details.
1. 🍴 Fork the repository
2. 🌿 Create your feature branch (`git checkout -b feature/amazing-feature`)
3. 💾 Commit your changes (`git commit -m 'Add some amazing feature'`)
4. 🚀 Push to the branch (`git push origin feature/amazing-feature`)
5. 🔍 Open a Pull Request
## 📄 License
This project is licensed under the GNU Lesser General Public License v3.0 (LGPL-3.0) - see the LICENSE file for details.
## 🙏 Acknowledgments
- 💖 Built with ❤️ by the Revenium team