| Field | Value |
| --- | --- |
| Name | cnoe-agent-utils |
| Version | 0.3.5 |
| Summary | Core utilities for CNOE agents including LLM factory and tracing |
| home_page | None |
| author | None |
| maintainer | None |
| docs_url | None |
| requires_python | <4.0,>=3.13 |
| license | None |
| keywords | agents, cnoe, llm, observability, tracing |
| upload_time | 2025-10-30 22:12:17 |
# 🤖 cnoe-agent-utils
[](https://pypi.org/project/cnoe-agent-utils/)
[](https://github.com/cnoe-io/cnoe-agent-utils/actions/workflows/unit-tests.yml)
[](https://github.com/cnoe-io/cnoe-agent-utils/actions/workflows/pypi.yml)
[](https://github.com/cnoe-io/cnoe-agent-utils/actions/workflows/unit-tests.yml)
[](https://github.com/cnoe-io/cnoe-agent-utils/actions/workflows/test-aws-bedrock.yml)
[](https://github.com/cnoe-io/cnoe-agent-utils/actions/workflows/test-azure-openai.yml)
[](https://github.com/cnoe-io/cnoe-agent-utils/actions/workflows/test-openai.yml)
[](https://github.com/cnoe-io/cnoe-agent-utils/actions/workflows/test-gcp-vertex.yml)
[](https://github.com/cnoe-io/cnoe-agent-utils/actions/workflows/test-google-gemini.yml)
* **Reusable utilities and abstractions** for building agent-based (LLM-powered) systems.
* **Centralized LLM Factory** supporting major providers (AWS, Azure, GCP, OpenAI, Gemini, Anthropic).
* **Centralized Tracing Utilities** (since v0.2.0) to eliminate duplicated tracing code across CNOE agents.
## Key Features
### **Core Utilities**
* 🏭 **LLM Factory:** a unified interface for seamless LLM instantiation across multiple clouds and vendors:
  - ☁️ AWS Bedrock
  - ☁️ Azure OpenAI
  - ☁️ GCP Vertex AI
  - 🤖 Google Gemini
  - 🤖 Anthropic Claude
  - 🤖 OpenAI
* Simple, environment-variable-driven configuration.
* Example scripts for each LLM provider with setup instructions.
### **Agent Tracing (since v0.2.0)**
* **Centralized tracing logic:** Removes 350+ lines of repeated code per agent.
* **Single import/decorator:** No more copy-pasting tracing logic.
* **Environment-based toggling:** Use `ENABLE_TRACING` env var to control all tracing.
* **A2A Tracing Disabling:** Single method to monkey-patch/disable agent-to-agent tracing everywhere.
* **Graceful fallback:** Works with or without Langfuse; tracing is zero-overhead when disabled.
---
**Note:** Check out the [Tracing](TRACING.md) tutorial for details.
## 🚀 LLM Factory Getting Started
### 🛡️ Create and Activate a Virtual Environment
It is recommended to use a virtual environment to manage dependencies:
```bash
python3 -m venv .venv
source .venv/bin/activate
```
### ⚡ Prerequisite: Install `uv`
Before running the examples, install [`uv`](https://github.com/astral-sh/uv):
```bash
pip install uv
```
### 📦 Installation
#### Installation Options
**Default Installation (recommended for most users):**
```bash
pip install cnoe-agent-utils
```
This installs all dependencies and provides full functionality. It's equivalent to `pip install 'cnoe-agent-utils[all]'`.
**Minimal Installation (specific functionality only):**
Use these when you only need specific functionality or want to minimize package size:
```bash
# Anthropic Claude support only
pip install "cnoe-agent-utils[anthropic]"
# OpenAI support (openai.com GPT models) only
pip install "cnoe-agent-utils[openai]"
# Azure OpenAI support (Azure-hosted GPT models) only
pip install "cnoe-agent-utils[azure]"
# AWS support (Bedrock, etc.) only
pip install "cnoe-agent-utils[aws]"
# Google Cloud support (Vertex AI, Gemini) only
pip install "cnoe-agent-utils[gcp]"
# Advanced tracing and observability (Langfuse, OpenTelemetry) only
pip install "cnoe-agent-utils[tracing]"
# Development dependencies (testing, linting, etc.)
pip install "cnoe-agent-utils[dev]"
```
#### Using uv
```bash
# Default installation (all dependencies)
uv add cnoe-agent-utils
# Minimal installation (specific functionality only)
uv add "cnoe-agent-utils[anthropic]"
uv add "cnoe-agent-utils[openai]"
uv add "cnoe-agent-utils[azure]"
uv add "cnoe-agent-utils[aws]"
uv add "cnoe-agent-utils[gcp]"
uv add "cnoe-agent-utils[tracing]"
```
#### Local Development
If you are developing locally:
```bash
git clone https://github.com/cnoe-io/cnoe-agent-utils.git
cd cnoe-agent-utils
uv sync
```
---
## 🧑‍💻 Usage
To test integration with different LLM providers, configure the required environment variables for each provider as shown below. Then, run the corresponding example script using `uv`.
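Before diving into the provider-specific sections, here is a minimal sketch of the factory pattern itself, based on the API used by the AWS Bedrock caching example further down (`LLMFactory(...)`, `get_llm()`, `invoke()`). Only the `"aws-bedrock"` provider string is shown in this README; treat other identifiers as assumptions and check the library if unsure.

```python
from cnoe_agent_utils.llm_factory import LLMFactory

# Configure provider credentials via environment variables first (see the
# provider-specific sections below), e.g. AWS_PROFILE / AWS_REGION /
# AWS_BEDROCK_MODEL_ID / AWS_BEDROCK_PROVIDER for AWS Bedrock.

# "aws-bedrock" is the provider string used by the caching example later in this
# README; identifiers for other providers may differ.
llm = LLMFactory("aws-bedrock").get_llm()

# The factory returns a LangChain chat model, so the standard invoke() call applies.
response = llm.invoke("In one sentence, what does this library provide?")
print(response.content)
```

Each provider section below lists the exact environment variables its backend expects.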
---
### 🤖 Anthropic
Set the following environment variables:
```bash
export ANTHROPIC_API_KEY=<your_anthropic_api_key>
export ANTHROPIC_MODEL_NAME=<model_name>
# Optional: Enable extended thinking for Claude 4+ models
export ANTHROPIC_THINKING_ENABLED=true
export ANTHROPIC_THINKING_BUDGET=1024 # Default: 1024, Min: 1024
```
Run the example:
```bash
uv run examples/test_anthropic.py
```
---
### ☁️ AWS Bedrock (Anthropic Claude)
Set the following environment variables:
```bash
export AWS_PROFILE=<your_aws_profile>
export AWS_REGION=<your_aws_region>
export AWS_BEDROCK_MODEL_ID="us.anthropic.claude-3-7-sonnet-20250219-v1:0"
export AWS_BEDROCK_PROVIDER="anthropic"
# Optional: Enable extended thinking for Claude 4+ models
export AWS_BEDROCK_THINKING_ENABLED=true
export AWS_BEDROCK_THINKING_BUDGET=1024 # Default: 1024, Min: 1024
```
Run the example:
```bash
uv run examples/test_aws_bedrock_claude.py
```
#### AWS Bedrock Prompt Caching
AWS Bedrock supports **prompt caching** to reduce latency and costs by caching repeated context across requests. This feature is particularly beneficial for:
- Multi-turn conversations with long system prompts
- Repeated use of large context documents
- Agent systems with consistent instructions
**Enable prompt caching:**
```bash
export AWS_BEDROCK_ENABLE_PROMPT_CACHE=true
```
**Supported Models:**
For the latest list of models that support prompt caching and their minimum token requirements, see the [AWS Bedrock Prompt Caching documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-caching.html).
**Implementation Note:** When `AWS_BEDROCK_ENABLE_PROMPT_CACHE=true`, the library uses `ChatBedrockConverse`, which has native prompt caching support. If your model doesn't support caching, AWS Bedrock will return a clear error message. There's no need to validate model compatibility in advance; AWS handles this automatically.
**Note:** Model IDs may include regional prefixes (`us.`, `eu.`, `ap.`, etc.) depending on your AWS account configuration. Pass the full model ID as provided by AWS:
- Example: `us.anthropic.claude-3-7-sonnet-20250219-v1:0`
- Example: `anthropic.claude-opus-4-1-20250805-v1:0`
**Benefits:**
- Up to **85% reduction in latency** for cached content
- Up to **90% reduction in costs** for cached tokens
- **5-minute cache TTL** (automatically managed by AWS)
- Maximum **4 cache checkpoints** per request
**Usage Example:**
```python
import os
from cnoe_agent_utils.llm_factory import LLMFactory
from langchain_core.messages import SystemMessage, HumanMessage

# Enable caching
os.environ["AWS_BEDROCK_ENABLE_PROMPT_CACHE"] = "true"

# Initialize LLM
llm = LLMFactory("aws-bedrock").get_llm()

# Create cache point for system message
cache_point = llm.create_cache_point()

# Build messages with cache control
messages = [
    SystemMessage(content=[
        {"text": "You are a helpful AI assistant with expertise in..."},
        cache_point,  # Marks a cache checkpoint
    ]),
    HumanMessage(content="What is your primary function?"),
]

# Invoke with caching
response = llm.invoke(messages)

# Check cache statistics in response metadata
if hasattr(response, 'response_metadata'):
    usage = response.response_metadata.get('usage', {})
    print(f"Cache read tokens: {usage.get('cacheReadInputTokens', 0)}")
    print(f"Cache creation tokens: {usage.get('cacheCreationInputTokens', 0)}")
```
**Run the caching example:**
```bash
uv run examples/aws_bedrock_cache_example.py
```
**Monitoring Cache Performance:**
Cache hit/miss statistics are available in:
1. **Response metadata** - `cacheReadInputTokens` and `cacheCreationInputTokens`
2. **CloudWatch metrics** - Track cache performance across all requests
3. **Application logs** - Enable via `AWS_CREDENTIALS_DEBUG=true`
**Best Practices** (illustrated in the sketch after this list):
- Use cache for system prompts and context that remain consistent across requests
- Ensure cached content meets minimum token requirements (see AWS documentation for model-specific limits)
- Place cache points strategically (after system messages, large context documents, or tool definitions)
- Monitor cache hit rates to optimize placement
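The following hedged sketch reuses the cache-point API from the usage example above to show the first two practices in action: a long, stable system prompt is cached once (assuming it meets the model's minimum token count), while only the user turns change across requests.

```python
import os
from cnoe_agent_utils.llm_factory import LLMFactory
from langchain_core.messages import SystemMessage, HumanMessage

os.environ["AWS_BEDROCK_ENABLE_PROMPT_CACHE"] = "true"

llm = LLMFactory("aws-bedrock").get_llm()
cache_point = llm.create_cache_point()

# Stable prefix: must meet the model's minimum token count to be cached (see AWS docs).
system = SystemMessage(content=[
    {"text": "You are a platform-engineering assistant. <long, unchanging instructions>"},
    cache_point,  # checkpoint placed right after the stable prefix
])

# Only the human messages vary; later requests should read the cached prefix back.
for question in ["Summarize the CNOE project.", "List three GitOps best practices."]:
    response = llm.invoke([system, HumanMessage(content=question)])
    usage = getattr(response, "response_metadata", {}).get("usage", {})
    print(f"{question} -> cacheReadInputTokens={usage.get('cacheReadInputTokens', 0)}")
```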
---
### ☁️ Azure OpenAI
Set the following environment variables:
```bash
export AZURE_OPENAI_API_KEY=<your_azure_openai_api_key>
export AZURE_OPENAI_API_VERSION=<api_version>
export AZURE_OPENAI_DEPLOYMENT=gpt-4.1
export AZURE_OPENAI_ENDPOINT=<your_azure_openai_endpoint>
```
Run the example:
```bash
uv run examples/test_azure_openai.py
```
---
### 🤖 OpenAI
Set the following environment variables:
```bash
export OPENAI_API_KEY=<your_openai_api_key>
export OPENAI_ENDPOINT=https://api.openai.com/v1
export OPENAI_MODEL_NAME=gpt-4.1
```
Optional configuration:
```bash
export OPENAI_DEFAULT_HEADERS='{"my-header-key":"my-value"}'
export OPENAI_USER=user-identifier
```
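If you prefer to set these from Python rather than the shell, a small hedged sketch (the header key and user identifier are placeholders): building the JSON value with `json.dumps` avoids shell-quoting mistakes.

```python
import json
import os

# Same optional settings as above, set programmatically; values are placeholders.
os.environ["OPENAI_DEFAULT_HEADERS"] = json.dumps({"my-header-key": "my-value"})
os.environ["OPENAI_USER"] = "user-identifier"
```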
Run the example:
```bash
uv run examples/test_openai.py
```
---
### 🤖 Google Gemini
Set the following environment variable:
```bash
export GOOGLE_API_KEY=<your_google_api_key>
```
Run the example:
```bash
uv run examples/test_google_gemini.py
```
---
### ☁️ GCP Vertex AI
Set the following environment variables:
```bash
export GOOGLE_APPLICATION_CREDENTIALS=~/.config/gcp.json
export VERTEXAI_MODEL_NAME="gemini-2.0-flash-001"
# Optional: Enable extended thinking for Claude 4+ models on Vertex AI
export VERTEXAI_THINKING_ENABLED=true
export VERTEXAI_THINKING_BUDGET=1024 # Default: 1024, Min: 1024
```
Run the example:
```bash
uv run examples/test_gcp_vertexai.py
```
These examples demonstrate how to use the LLM Factory and the other utilities provided by the library.
---
## 📜 License
Apache 2.0 (see [LICENSE](./LICENSE))
---
## 👥 Maintainers
See [MAINTAINERS.md](MAINTAINERS.md)
Contributions are welcome via PR or issue!