# agnosmodel2 - Universal GenAI Provider Abstraction
[Python 3.11+](https://www.python.org/downloads/)
[License: PolyForm Noncommercial 1.0.0](https://polyformproject.org/licenses/noncommercial/1.0.0)
> **⚠️ Alpha Release** - This is the debut release of agnosmodel2. While functional, the API may change in future versions.
**Universal Generative AI Provider Interface** - Switch between any GenAI provider (OpenAI, Anthropic, local models) without changing your code. Define providers once, switch seamlessly at runtime.
Part of the Agnosweaver™ project suite.
## 🔥 Why agnosmodel2?
Tired of AI vendor lock-in? Frustrated with rewriting code every time you want to try a different model? **agnosmodel2** solves this by providing a unified interface across all GenAI providers.
```python
# Define once, switch anytime
manager = GenManager()
manager.add_provider('gpt', 'openai', {'model': 'gpt-4', 'api_key': '...'})
manager.add_provider('claude', 'anthropic', {'model': 'claude-3', 'api_key': '...'})
response = manager.generate("Hello world") # Uses current provider
manager.switch_provider('claude')
response = manager.generate("Hello world") # Now uses Claude - same code!
```
## ✨ Features
- **🔄 Provider Agnostic**: Switch between OpenAI, Anthropic, local models, or any GenAI provider
- **📊 Datatype Agnostic**: Handle JSON, XML, binary, protobuf, or any data format seamlessly
- **🏗️ Extensible Architecture**: Bring-your-own provider implementations
- **⚡ Sync & Async**: Full support for both synchronous and asynchronous operations
- **🌊 Streaming Support**: Real-time streaming responses when supported by providers
- **📦 Minimal Core**: Lightweight interface without bloat
- **🎯 Standardized Responses**: Consistent response format across all providers
- **🔌 Plugin System**: Registry-based provider discovery and registration
## 📦 Installation
```bash
# PyPI
pip install agnosmodel2
# GitHub
pip install git+https://github.com/agnosweaver/agnosmodel2.git
```
**Requirements**: Python 3.11 or higher
## 🚀 Quick Start
### 1. Basic Setup
```python
from agnosmodel2 import GenManager, ProviderRegistry, BaseGenProvider
# Create manager
manager = GenManager()
```
### 2. Implement a Provider
Since agnosmodel2 provides the core interface, you need to implement providers for your specific needs:
```python
from datetime import datetime

from agnosmodel2 import GenResponse  # response dataclass (see Data Structures below)

class OpenAIProvider(BaseGenProvider):
    def generate(self, prompt: str, **kwargs):
        # Your OpenAI implementation
        response = self.call_openai_api(prompt)
        return GenResponse(
            content=response.text,
            provider=self.name,
            model=self.model,
            timestamp=datetime.now().isoformat(),
            content_type="text/plain"
        )

    async def agenerate(self, prompt: str, **kwargs):
        # Your async OpenAI implementation
        pass

    def validate(self):
        return 'api_key' in self.config

# Register your provider
ProviderRegistry.register('openai', OpenAIProvider)
```
### 3. Use It!
```python
# Add providers
manager.add_provider('gpt4', 'openai', {
    'model': 'gpt-4',
    'api_key': 'your-key'
})
manager.add_provider('claude', 'anthropic', {
    'model': 'claude-3-sonnet',
    'api_key': 'your-key'
})
# Generate responses
response = manager.generate("Explain quantum computing")
print(f"Response from {response.provider}: {response.content}")
# Switch providers seamlessly
manager.switch_provider('claude')
response = manager.generate("Explain quantum computing") # Same call, different provider!
```
## 🎯 Common Use Cases
### Model A/B Testing
```python
# Test different models with the same prompt
results = {}
for provider in ['gpt4', 'claude', 'local-llama']:
    manager.switch_provider(provider)
    response = manager.generate("Write a haiku about AI")
    results[provider] = response.content
```
### Fallback Strategy
```python
def generate_with_fallback(prompt):
    providers = ['premium-model', 'standard-model', 'local-backup']
    for provider in providers:
        try:
            manager.switch_provider(provider)
            return manager.generate(prompt)
        except Exception as e:
            print(f"{provider} failed: {e}")
            continue
    raise Exception("All providers failed")
```
### Cost Optimization
```python
# Route simple queries to cheaper models
def smart_generate(prompt):
    if len(prompt) < 100:
        manager.switch_provider('gpt-3.5')
    else:
        manager.switch_provider('gpt-4')
    return manager.generate(prompt)
```
### Streaming Responses
```python
# Stream from any provider that supports it
for chunk in manager.generate_stream("Tell me a long story"):
    print(chunk.content, end='', flush=True)
    if chunk.is_final:
        break
```
### Multi-Modal Generation
```python
# Handle different data types seamlessly
audio_response = manager.generate("Say hello", output_type="audio")
video_response = manager.generate("Create a short clip", output_type="video")
# Content can be any type: str, bytes, custom objects
if isinstance(audio_response.content, bytes):
    with open("output.wav", "wb") as f:
        f.write(audio_response.content)
```
## 📚 API Reference
### Core Classes
#### `GenManager`
Main interface for managing and switching between providers.
**Methods:**
- `add_provider(name, provider_type, config)` - Add a new provider
- `switch_provider(name)` - Switch to a different provider
- `generate(prompt, **kwargs)` - Generate content synchronously
- `agenerate(prompt, **kwargs)` - Generate content asynchronously
- `generate_stream(prompt, **kwargs)` - Stream content synchronously
- `agenerate_stream(prompt, **kwargs)` - Stream content asynchronously
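A minimal async sketch, assuming `agenerate` is awaitable and `agenerate_stream` yields `GenStreamChunk` objects as an async iterator (the exact return types are not pinned down here; provider names are the ones registered in the Quick Start):
```python
import asyncio

async def main():
    # Assumes 'gpt4' was added via manager.add_provider() as in the Quick Start
    manager.switch_provider('gpt4')

    # One-shot async generation
    response = await manager.agenerate("Summarize the plot of Hamlet")
    print(response.content)

    # Async streaming (assumed to be an async iterator of GenStreamChunk)
    async for chunk in manager.agenerate_stream("Tell me a long story"):
        print(chunk.content, end='', flush=True)
        if chunk.is_final:
            break

asyncio.run(main())
```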
#### `BaseGenProvider`
Abstract base class for all providers.
**Must Implement:**
- `generate(prompt, **kwargs)` - Sync generation
- `agenerate(prompt, **kwargs)` - Async generation
- `validate()` - Configuration validation
**Optional Override:**
- `generate_stream(prompt, **kwargs)` - Sync streaming
- `agenerate_stream(prompt, **kwargs)` - Async streaming
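A minimal sketch of a complete provider that also overrides the optional sync streaming hook. The `EchoProvider` class, its word-by-word chunking, and the `GenStreamChunk` import path are illustrative assumptions, not part of the library:
```python
from datetime import datetime

from agnosmodel2 import BaseGenProvider, GenResponse, GenStreamChunk  # assumed exports

class EchoProvider(BaseGenProvider):
    """Toy provider that echoes the prompt back; handy for wiring tests."""

    def generate(self, prompt: str, **kwargs):
        return GenResponse(
            content=prompt,
            provider=self.name,
            model=self.model,
            timestamp=datetime.now().isoformat(),
            content_type="text/plain",
        )

    async def agenerate(self, prompt: str, **kwargs):
        # Nothing to await for an echo; reuse the sync path
        return self.generate(prompt, **kwargs)

    def validate(self):
        # No credentials needed for a local echo
        return True

    def generate_stream(self, prompt: str, **kwargs):
        # Emit one word per chunk and flag the last one as final
        words = prompt.split()
        for i, word in enumerate(words):
            yield GenStreamChunk(
                content=word + ('' if i == len(words) - 1 else ' '),
                provider=self.name,
                model=self.model,
                timestamp=datetime.now().isoformat(),
                content_type="text/plain",
                is_final=(i == len(words) - 1),
            )
```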
#### `ProviderRegistry`
Registry for provider discovery and instantiation.
**Methods:**
- `register(provider_type, provider_class)` - Register a provider class
- `create(provider_type, name, config)` - Create provider instance
- `list_types()` - List registered provider types
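A short sketch of the registry in use, reusing the hypothetical `EchoProvider` from the previous sketch:
```python
# Register the class under a short type key
ProviderRegistry.register('echo', EchoProvider)

# See which provider types are known
print(ProviderRegistry.list_types())

# Instantiate directly, bypassing GenManager
echo = ProviderRegistry.create('echo', 'echo-dev', {'model': 'echo-1'})
print(echo.generate("ping pong").content)
```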
### Data Structures
#### `GenResponse`
```python
@dataclass
class GenResponse:
    content: Union[str, bytes, Any]
    provider: str
    model: str
    timestamp: str
    content_type: Optional[str] = None
    metadata: Optional[Dict[str, Any]] = None
```
#### `GenStreamChunk`
```python
@dataclass
class GenStreamChunk:
    content: Union[str, bytes, Any]
    provider: str
    model: str
    timestamp: str
    content_type: Optional[str] = None
    metadata: Optional[Dict[str, Any]] = None
    is_final: bool = False
```
## 🏗️ Architecture
agnosmodel2 follows a clean, extensible architecture:
```
┌─────────────────┐    ┌──────────────────┐    ┌─────────────────┐
│   GenManager    │────│ ProviderRegistry │────│ BaseGenProvider │
│    (Router)     │    │   (Discovery)    │    │ (Implementation)│
└─────────────────┘    └──────────────────┘    └─────────────────┘
         │                      │                       │
         │             ┌────────┴────────┐              │
         │             │                 │              │
         ▼             ▼                 ▼              ▼
┌─────────────────┐ ┌──────────┐  ┌──────────────┐ ┌──────────────┐
│  Your App Code  │ │  OpenAI  │  │  Anthropic   │ │ Local Model  │
│                 │ │ Provider │  │   Provider   │ │   Provider   │
└─────────────────┘ └──────────┘  └──────────────┘ └──────────────┘
```
**Key Principles:**
- **Separation of Concerns**: Core routing logic separate from provider implementations
- **Plugin Architecture**: Providers register themselves via the registry
- **Consistent Interface**: All providers expose the same methods
- **Transport Agnostic**: Supports HTTP, gRPC, local calls, anything
## ⚡ Advanced Features
### Custom Transport Layer
```python
class HTTPTransport(BaseModelTransport):
    def send(self, request_data, **kwargs):
        # Custom HTTP implementation
        pass

class gRPCTransport(BaseModelTransport):
    def send(self, request_data, **kwargs):
        # Custom gRPC implementation
        pass
```
### Response Parsing
```python
import json

class JSONResponseParser(BaseResponseParser):
    def parse_response(self, raw_response):
        return json.loads(raw_response)['content']

    def get_content_type(self, raw_response):
        return "application/json"
```
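How these pieces plug into a provider is up to you. One possible arrangement, a sketch only (the transport constructor, the `endpoint`/`api_key` config keys, and the direct use of `requests` are assumptions), composes a transport and a parser inside `generate`:
```python
from datetime import datetime

import requests  # agnosmodel2 already depends on requests

class SimpleHTTPTransport(BaseModelTransport):
    """Illustrative transport: POST a JSON payload, return the raw body."""

    def __init__(self, endpoint: str, api_key: str):
        self.endpoint = endpoint
        self.api_key = api_key

    def send(self, request_data, **kwargs):
        resp = requests.post(
            self.endpoint,
            headers={'Authorization': f'Bearer {self.api_key}'},
            json=request_data,
            timeout=30,
        )
        resp.raise_for_status()
        return resp.text

class HTTPJSONProvider(BaseGenProvider):
    """Illustrative provider composing the transport and parser above."""

    def generate(self, prompt: str, **kwargs):
        # Hypothetical config keys: 'endpoint' and 'api_key'
        transport = SimpleHTTPTransport(self.config['endpoint'], self.config['api_key'])
        parser = JSONResponseParser()
        raw = transport.send({'model': self.model, 'prompt': prompt})
        return GenResponse(
            content=parser.parse_response(raw),
            provider=self.name,
            model=self.model,
            timestamp=datetime.now().isoformat(),
            content_type=parser.get_content_type(raw),
        )

    async def agenerate(self, prompt: str, **kwargs):
        raise NotImplementedError("Async path omitted in this sketch")

    def validate(self):
        return {'endpoint', 'api_key'} <= set(self.config)
```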
## 🤝 Contributing
**Coming Soon**: A dedicated repository for community contributions and ideas is being set up. Stay tuned!
For now, if you have suggestions or find issues, please reach out via email.
## 📄 License
Licensed under the [PolyForm Noncommercial License 1.0.0](LICENSE).
**⚠️ Important**: This license **prohibits commercial use**. If you need to use agnosmodel2 in a commercial project, please contact [agnosweaver@gmail.com](mailto:agnosweaver@gmail.com) for a commercial license.
## 📞 Contact
- **Email**: [agnosweaver@gmail.com](mailto:agnosweaver@gmail.com)
- **Project**: Part of the Agnosweaver™ project suite
- **Repository**: [https://github.com/agnosweaver/agnosmodel2](https://github.com/agnosweaver/agnosmodel2)
---
**Built for developers who hate being boxed in.**
*Pick, switch, and combine GenAI providers freely.*