| Name | erdo |
| Version | 0.1.13 |
| Summary | Python SDK for building workflow automation agents with Erdo |
| Upload time | 2025-11-05 16:31:17 |
| Home page | None |
| Author | None |
| Maintainer | None |
| Docs URL | None |
| Requires Python | >=3.9 |
| License | Proprietary |
| Keywords | agents, ai, automation, data, workflow |
| Requirements | None recorded |
# Erdo Agent SDK
Build AI agents and workflows with Python. The Erdo Agent SDK provides a declarative way to create agents that can be executed by the [Erdo platform](https://erdo.ai).
## Installation
```bash
pip install erdo
```
## Quick Start
### Creating Agents
Create agents using the `Agent` class and define steps with actions:
```python
from erdo import Agent, state
from erdo.actions import memory, llm
from erdo.conditions import IsSuccess, GreaterThan

# Create an agent
data_analyzer = Agent(
    name="data analyzer",
    description="Analyzes data files and provides insights",
    running_message="Analyzing data...",
    finished_message="Analysis complete",
)

# Step 1: Search for relevant context
search_step = data_analyzer.step(
    memory.search(
        query=state.query,
        organization_scope="specific",
        limit=5,
        max_distance=0.8,
    )
)

# Step 2: Analyze the data with AI
analyze_step = data_analyzer.step(
    llm.message(
        model="claude-sonnet-4-20250514",
        system_prompt="You are a data analyst. Analyze the data and provide insights.",
        query=state.query,
        context=search_step.output.memories,
        response_format={
            "Type": "json_schema",
            "Schema": {
                "type": "object",
                "required": ["insights", "confidence", "recommendations"],
                "properties": {
                    "insights": {"type": "string", "description": "Key insights found"},
                    "confidence": {"type": "number", "description": "Confidence 0-1"},
                    "recommendations": {"type": "array", "items": {"type": "string"}},
                },
            },
        },
    ),
    depends_on=search_step,
)
```
### Code Execution with External Files
Use the `@agent.exec` decorator to execute code with external Python files:
```python
from erdo.types import PythonFile

@data_analyzer.exec(
    code_files=[
        PythonFile(filename="analysis_files/analyze.py"),
        PythonFile(filename="analysis_files/utils.py"),
    ]
)
def execute_analysis():
    """Execute detailed analysis using external code files."""
    # Note: `context` is not defined in this snippet; it is presumably
    # supplied to the function by the Erdo execution environment at runtime.
    from analysis_files.analyze import analyze_data
    from analysis_files.utils import prepare_data

    # Prepare and analyze data
    prepared_data = prepare_data(context.parameters.get("dataset", {}))
    results = analyze_data(context)

    return results
```
### Conditional Step Execution
Handle step results with conditions:
```python
from erdo.conditions import IsSuccess, GreaterThan

# Store high-confidence results
analyze_step.on(
    IsSuccess() & GreaterThan("confidence", "0.8"),
    memory.store(
        memory={
            "content": analyze_step.output.insights,
            "description": "High-confidence data analysis results",
            "type": "analysis",
            "tags": ["analysis", "high-confidence"],
        }
    ),
)

# Execute detailed analysis for high-confidence results
analyze_step.on(
    IsSuccess() & GreaterThan("confidence", "0.8"),
    execute_analysis,
)
```
### Complex Execution Modes
Use execution modes for advanced workflows:
```python
from erdo import ExecutionMode, ExecutionModeType
from erdo.actions import bot
from erdo.conditions import And, IsAny
from erdo.template import TemplateString

# Iterate over resources
analyze_files = agent.step(
    action=bot.invoke(
        bot_name="file analyzer",
        parameters={"resource": TemplateString("{{resources}}")},
    ),
    key="analyze_files",
    execution_mode=ExecutionMode(
        mode=ExecutionModeType.ITERATE_OVER,
        data="parameters.resource",
        if_condition=And(
            IsAny(key="dataset.analysis_summary", value=["", None]),
            IsAny(key="dataset.type", value=["FILE"]),
        ),
    ),
)
```
### Loading Prompts
Use the `Prompt` class to load prompts from files:
```python
from erdo import Prompt, state
from erdo.actions import llm

# Load prompts from a directory
prompts = Prompt.load_from_directory("prompts")

# Use in your agent steps
step = agent.step(
    llm.message(
        system_prompt=prompts.system_prompt,
        query=state.query,
    )
)
```
### State and Templating
Access dynamic data using the `state` object and template strings:
```python
from erdo import state
from erdo.template import TemplateString
# Access input parameters
query = state.query
dataset = state.dataset
# Use in template strings
template = TemplateString("Analyzing: {{query}} for dataset {{dataset.id}}")
```
## Core Concepts
### Actions
Actions are the building blocks of your agents. Available action modules (a short usage sketch follows the list):
- `erdo.actions.memory` - Memory storage and search
- `erdo.actions.llm` - Large language model interactions
- `erdo.actions.bot` - Bot invocation and orchestration
- `erdo.actions.codeexec` - Code execution
- `erdo.actions.utils` - Utility functions
- `erdo.actions.resource_definitions` - Resource management
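
A minimal sketch combining two of these modules, reusing only calls that appear elsewhere in this README (the agent name and wiring are illustrative placeholders):

```python
from erdo import Agent, state
from erdo.actions import memory, utils
from erdo.conditions import IsError

# Illustrative agent; name and description are placeholders
agent = Agent(name="example agent", description="Shows memory and utils actions together")

# A memory search step, plus a status message if the search fails
search_step = agent.step(memory.search(query=state.query, limit=5))
search_step.on(
    IsError(),
    utils.send_status(message="Search failed", status="error"),
)
```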
### Conditions
Conditions control when steps execute (a combined example follows the list):
- `IsSuccess()`, `IsError()` - Check step status
- `GreaterThan()`, `LessThan()` - Numeric comparisons
- `TextEquals()`, `TextContains()` - Text matching
- `And()`, `Or()`, `Not()` - Logical operators
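
As a sketch, conditions compose either with `&` (as in Conditional Step Execution above) or with the explicit `And`/`Or`/`Not` wrappers; `Or` and `Not` are assumed here to take conditions the same way `And` does, and the `TextContains(key, value)` signature is an assumption rather than something this README confirms:

```python
from erdo.conditions import Or, Not, IsSuccess, IsError, GreaterThan, TextContains

# Combine with the & operator, as shown in Conditional Step Execution
high_confidence = IsSuccess() & GreaterThan("confidence", "0.8")

# Explicit composition with Or / Not (assumed to mirror And's calling convention)
needs_retry = Or(IsError(), Not(TextContains("insights", "revenue")))
```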
### Types
Key types for agent development:
- `Agent` - Main agent class
- `ExecutionMode` - Control step execution behavior
- `PythonFile` - Reference external Python files
- `TemplateString` - Dynamic string templates
- `Prompt` - Prompt management
## Advanced Features
### Multi-Step Dependencies
Create complex workflows with step dependencies:
```python
step1 = agent.step(memory.search(...))
step2 = agent.step(llm.message(...), depends_on=step1)
step3 = agent.step(utils.send_status(...), depends_on=[step1, step2])
```
### Dynamic Data Access
Use the state object to access runtime data:
```python
# Access nested data
user_id = state.user.id
dataset_config = state.dataset.config.type

# Use in actions
step = agent.step(
    memory.search(query=f"data for user {state.user.id}")
)
```
### Error Handling
Handle errors with conditions and fallback steps:
```python
from erdo.actions import utils
from erdo.conditions import IsError

main_step = agent.step(llm.message(...))

# Handle errors
main_step.on(
    IsError(),
    utils.send_status(
        message="Analysis failed, please try again",
        status="error",
    ),
)
```
## Invoking Agents
Use the `invoke()` function to execute agents programmatically:
```python
from erdo import invoke

# Invoke an agent
response = invoke(
    "data-question-answerer",
    messages=[{"role": "user", "content": "What were Q4 sales?"}],
    datasets=["sales-2024"],
    parameters={"time_period": "Q4"},
)

if response.success:
    print(response.result)
else:
    print(f"Error: {response.error}")
```
### Invocation Modes
Control how bot actions are executed for testing and development:
| Mode | Description | Cost |
|------|-------------|------|
| **live** | Execute with real API calls | $$$ per run |
| **replay** | Cache responses, replay on subsequent runs | $$$ first run, FREE after |
| **manual** | Use developer-provided mock responses | FREE always |
```python
# Live mode (default) - real API calls
response = invoke("my-agent", messages=[...], mode="live")

# Replay mode - cache after first run (recommended for testing!)
response = invoke("my-agent", messages=[...], mode="replay")

# Replay with refresh - bypass cache, get fresh response
response = invoke("my-agent", messages=[...], mode={"mode": "replay", "refresh": True})

# Manual mode - use mock responses
response = invoke(
    "my-agent",
    messages=[...],
    mode="manual",
    manual_mocks={
        "llm.message": {
            "status": "success",
            "output": {"content": "Mocked response"},
        }
    },
)
```
**Replay Mode Refresh:**
The `refresh` parameter forces a fresh API call while staying in replay mode:
- First run: Executes and caches the response
- Subsequent runs: Uses cached response (free!)
- With refresh: Bypasses cache, gets fresh response, updates cache
Perfect for updating cached responses after bot changes without switching modes.
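
For instance, after changing a bot you might force a single fresh run and then return to plain replay, which reuses the refreshed cache (the agent name here is a placeholder; the call shape mirrors the invoke examples above):

```python
from erdo import invoke

# One-off refresh after a bot change: re-executes and updates the cache
response = invoke(
    "my-agent",
    messages=[{"role": "user", "content": "What were Q4 sales?"}],
    mode={"mode": "replay", "refresh": True},
)

# Later runs go back to plain replay and reuse the refreshed cache for free
response = invoke(
    "my-agent",
    messages=[{"role": "user", "content": "What were Q4 sales?"}],
    mode="replay",
)
```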
## Testing Agents
Write fast, parallel agent tests using `agent_test_*` functions:
```python
from erdo import invoke
from erdo.test import text_contains

def agent_test_csv_sales():
    """Test CSV sales analysis."""
    response = invoke(
        "data-question-answerer",
        messages=[{"role": "user", "content": "What were Q4 sales?"}],
        datasets=["sales-q4-2024"],
        mode="replay",  # Free after first run!
    )

    assert response.success
    result_text = str(response.result)
    assert text_contains(result_text, "sales", case_sensitive=False)
```
Run tests in parallel with the CLI:
```bash
# Run all tests
erdo agent-test tests/test_my_agent.py
# Verbose output
erdo agent-test tests/test_my_agent.py --verbose
# Refresh cached responses (bypass cache for all replay mode tests)
erdo agent-test tests/test_my_agent.py --refresh
```
**Refresh Flag:**
The `--refresh` flag forces all tests using `mode="replay"` to bypass cache and get fresh responses. Perfect for:
- Updating cached responses after bot changes
- Verifying tests with current LLM behavior
- No code changes needed - just add the flag!
### Test Helpers
The `erdo.test` module provides assertion helpers; a usage sketch follows the import list:
```python
from erdo.test import (
    text_contains,      # Check if text contains substring
    text_equals,        # Check exact match
    text_matches,       # Check regex pattern
    json_path_equals,   # Check JSON path value
    json_path_exists,   # Check if JSON path exists
    has_dataset,        # Check if dataset is present
)
```
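
A sketch of these helpers inside an `agent_test_*` function; only the `text_contains` signature appears elsewhere in this README, so the argument order for `text_matches` and `json_path_exists` below is an assumption:

```python
from erdo import invoke
from erdo.test import text_contains, text_matches, json_path_exists

def agent_test_helper_usage():
    """Illustrative assertions using the erdo.test helpers."""
    response = invoke(
        "data-question-answerer",
        messages=[{"role": "user", "content": "What were Q4 sales?"}],
        mode="replay",
    )
    assert response.success

    result_text = str(response.result)
    assert text_contains(result_text, "sales", case_sensitive=False)
    assert text_matches(result_text, r"Q4")               # assumed signature: (text, pattern)
    assert json_path_exists(response.result, "$.result")  # assumed signature: (data, path)
```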
## CLI Integration
Deploy and manage your agents using the Erdo CLI:
```bash
# Login to your account
erdo login
# Sync your agents to the platform
erdo sync-agent my_agent.py
# Invoke an agent
erdo invoke my-agent --message "Hello!"
# Run agent tests
erdo agent-test tests/test_my_agent.py
```
## Examples
See the `examples/` directory for complete examples:
- `agent_centric_example.py` - Comprehensive agent with multiple steps
- `state_example.py` - State management and templating
- `invoke_example.py` - Agent invocation patterns
- `agent_test_example.py` - Agent testing examples
## API Reference
### Core Classes
- **Agent**: Main agent class for creating workflows
- **ExecutionMode**: Control step execution (iterate, conditional, etc.)
- **Prompt**: Load and manage prompt templates
### Actions
- **memory**: Store and search memories
- **llm**: Interact with language models
- **bot**: Invoke other bots and agents
- **codeexec**: Execute Python code
- **utils**: Utility functions (status, notifications, etc.)
### Conditions
- **Comparison**: `GreaterThan`, `LessThan`, `TextEquals`, etc.
- **Status**: `IsSuccess`, `IsError`, `IsNull`, etc.
- **Logical**: `And`, `Or`, `Not`
### State & Templating
- **state**: Access runtime parameters and data
- **TemplateString**: Dynamic string templates with `{{variable}}` syntax
## License
Commercial License - see LICENSE file for details.