# FastADK
FastADK is an open‑source Python framework that makes building LLM-powered agents simple, efficient, and production-ready. It offers declarative APIs, comprehensive observability, and powerful scaling capabilities that enable developers to go from prototype to production with the same codebase.
## Features
### Core Features
- **Declarative Agent Development**: Build agents with `@Agent` and `@tool` decorators
- **Multi-Provider Support**: Easily switch between OpenAI, Anthropic, Google Gemini, and custom providers
- **Token & Cost Tracking**: Built-in visibility into token usage and cost estimation
- **Memory Management**: Sliding window, summarization, and vector store memory backends
- **Async & Parallelism**: True async execution for high performance and concurrency
- **Plugin Architecture**: Extensible system for custom integrations and tools
- **Context Policies**: Advanced context management with customizable strategies
- **Configuration System**: Powerful YAML/environment-based configuration
### Developer Experience
- **CLI Tools**: Interactive REPL, configuration validation, and project scaffolding
- **Debugging & Observability**: Structured logs, metrics, traces, and verbose mode
- **Testing Utilities**: Mock LLMs, simulation tools, and test scenario decorators
- **IDE Support**: VSCode snippets and type hints for better autocompletion
- **Hot Reload**: Development mode with auto-reload for rapid iteration
### Integration & Extension
- **HTTP API**: Auto-generated FastAPI endpoints for all agents
- **Workflow Orchestration**: Build complex multi-agent systems with sequential and parallel execution
- **System Adapters**: Ready-made Discord and Slack integrations
- **Fine-tuning Helpers**: Utilities for model customization and training
- **Batch Processing**: Tooling for high-volume processing
## Quick Start
```python
from fastadk import Agent, BaseAgent, tool

@Agent(model="gemini-1.5-pro", description="Weather assistant")
class WeatherAgent(BaseAgent):
    @tool
    def get_weather(self, city: str) -> dict:
        """Fetch current weather for a city."""
        # This would typically come from an actual weather API
        return {
            "city": city,
            "current": {
                "temp_c": 22.5,
                "condition": "Partly cloudy",
                "humidity": 65,
                "wind_kph": 15.3,
            },
            "forecast": {
                "tomorrow": {"temp_c": 24.0, "condition": "Sunny"},
                "day_after": {"temp_c": 20.0, "condition": "Light rain"},
            },
        }

# Run the agent
if __name__ == "__main__":
    import asyncio

    async def main():
        agent = WeatherAgent()
        response = await agent.run("What's the weather in London?")
        print(response)

    asyncio.run(main())
```
## Installation
```bash
pip install fastadk
```
For development, we recommend using [UV](https://github.com/astral-sh/uv) for faster package management:
```bash
# Install uv
pip install uv
# Install FastADK with uv
uv pip install fastadk
# Run examples with uv
uv run -m examples.basic.weather_agent
```
## Workflow Examples
### Context Management with Memory
```python
from fastadk import Agent, BaseAgent
from fastadk.memory import VectorMemoryBackend
from fastadk.core.context_policy import SummarizeOlderPolicy

@Agent(model="gemini-1.5-pro")
class MemoryAgent(BaseAgent):
    def __init__(self):
        super().__init__()
        # Set up vector-based memory
        self.memory = VectorMemoryBackend()
        # Summarize older messages when context gets too large
        self.context_policy = SummarizeOlderPolicy(
            threshold_tokens=3000,
            summarizer=self.model,
        )
```
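The "summarize older" idea can be sketched without the framework: once the history exceeds a token threshold, older messages collapse into a single summary entry. The names below (`rough_token_count`, `summarize_older`, `keep_recent`) are illustrative, not FastADK's API, and the summary itself is stubbed where a real policy would call the model:

```python
def rough_token_count(text: str) -> int:
    # Crude heuristic: roughly one token per whitespace-separated word
    return len(text.split())

def summarize_older(history: list[str], threshold_tokens: int, keep_recent: int = 2) -> list[str]:
    total = sum(rough_token_count(message) for message in history)
    if total <= threshold_tokens or len(history) <= keep_recent:
        return history
    older, recent = history[:-keep_recent], history[-keep_recent:]
    # A real policy would ask the model for a summary; we stub it here
    summary = f"[summary of {len(older)} earlier messages]"
    return [summary] + recent

history = [f"message {i} " * 50 for i in range(10)]  # ~1000 "tokens"
compact = summarize_older(history, threshold_tokens=200)
print(len(compact))  # summary entry plus the 2 most recent messages
```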
### Parallel Tool Execution with Workflow
```python
from fastadk.core.workflow import Workflow, step

@step
async def fetch_weather(city: str):
    # Implementation details...
    return {"city": city, "weather": "sunny"}

@step
async def fetch_news(city: str):
    # Implementation details...
    return {"city": city, "headlines": ["Local event", "Sports update"]}

async def get_city_info(city: str):
    # Run steps in parallel
    results = await Workflow.parallel(
        fetch_weather(city),
        fetch_news(city),
    ).execute()

    return {
        "city": city,
        "weather": results[0]["weather"],
        "news": results[1]["headlines"],
    }
```
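Conceptually, parallel step execution is the same pattern as `asyncio.gather`: launch the coroutines concurrently and collect results in argument order. This self-contained sketch uses plain coroutines with no FastADK dependency (whether FastADK uses `gather` internally is an assumption):

```python
import asyncio

async def fetch_weather(city: str) -> dict:
    await asyncio.sleep(0)  # stand-in for a real API call
    return {"city": city, "weather": "sunny"}

async def fetch_news(city: str) -> dict:
    await asyncio.sleep(0)
    return {"city": city, "headlines": ["Local event", "Sports update"]}

async def get_city_info(city: str) -> dict:
    # Both coroutines run concurrently; results arrive in argument order
    weather, news = await asyncio.gather(fetch_weather(city), fetch_news(city))
    return {"city": city, "weather": weather["weather"], "news": news["headlines"]}

info = asyncio.run(get_city_info("London"))
print(info["weather"])  # sunny
```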
### Token Usage and Cost Tracking
```python
from fastadk import Agent, BaseAgent
from fastadk.tokens import TokenBudget

@Agent(model="gpt-4")
class BudgetAwareAgent(BaseAgent):
    def __init__(self):
        super().__init__()
        # Set budget constraints
        self.token_budget = TokenBudget(
            max_tokens_per_session=100000,
            max_cost_per_session=5.0,  # $5.00 USD
            on_exceed="warn",  # Other options: "error", "log"
        )

    async def run(self, prompt: str):
        response = await super().run(prompt)
        # Check usage after the run
        usage = self.last_run_token_usage
        print(f"Used {usage.total_tokens} tokens (${usage.estimated_cost:.4f})")
        return response
```
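The budget enforcement logic itself is simple accounting. This framework-free sketch shows the idea; the class and parameter names are illustrative, not FastADK's `TokenBudget` implementation:

```python
class SessionBudget:
    def __init__(self, max_tokens: int, max_cost: float, on_exceed: str = "warn"):
        self.max_tokens = max_tokens
        self.max_cost = max_cost
        self.on_exceed = on_exceed
        self.tokens_used = 0
        self.cost_used = 0.0
        self.warnings: list[str] = []

    def record(self, tokens: int, cost: float) -> None:
        # Accumulate usage, then check both limits
        self.tokens_used += tokens
        self.cost_used += cost
        if self.tokens_used > self.max_tokens or self.cost_used > self.max_cost:
            message = f"budget exceeded: {self.tokens_used} tokens, ${self.cost_used:.2f}"
            if self.on_exceed == "error":
                raise RuntimeError(message)
            self.warnings.append(message)

budget = SessionBudget(max_tokens=100_000, max_cost=5.0)
budget.record(tokens=90_000, cost=4.50)   # within budget, no warning
budget.record(tokens=20_000, cost=1.00)   # pushes past both limits
print(len(budget.warnings))  # 1
```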
### HTTP API with FastAPI
```python
# api.py
from fastapi import FastAPI

from fastadk import Agent, BaseAgent, tool, registry
from fastadk.api.router import get_router

@Agent(model="gemini-1.5-pro")
class CalculatorAgent(BaseAgent):
    @tool
    def add(self, a: float, b: float) -> float:
        """Add two numbers."""
        return a + b

    @tool
    def multiply(self, a: float, b: float) -> float:
        """Multiply two numbers."""
        return a * b

# Register the agent and create the FastAPI app
registry.register(CalculatorAgent)
app = FastAPI()

# Include the auto-generated FastADK router
app.include_router(get_router(), prefix="/api/agents")

# Run with: uv run -m uvicorn api:app --reload
```
### Discord or Slack Integration
```python
from fastadk import Agent, BaseAgent, tool
from fastadk.adapters.discord import DiscordAdapter

@Agent(model="gemini-1.5-pro")
class HelpfulAssistant(BaseAgent):
    @tool
    def search_knowledge_base(self, query: str) -> str:
        """Search the internal knowledge base for information."""
        # Implementation details...
        return "Here's what I found about your question..."

# Connect to Discord
adapter = DiscordAdapter(
    agent=HelpfulAssistant(),
    bot_token="YOUR_DISCORD_BOT_TOKEN",
    channels=["general", "help-desk"],
    prefix="!assist",
)

# Start the bot
if __name__ == "__main__":
    import asyncio

    asyncio.run(adapter.start())
```
## Advanced Features
### Custom Context Policies
```python
from typing import Any, List

from fastadk.core.context_policy import ContextPolicy

class CustomContextPolicy(ContextPolicy):
    """Custom policy that prioritizes questions and important information."""

    def __init__(self, max_tokens: int = 3000):
        self.max_tokens = max_tokens
        self.important_keywords = ["urgent", "critical", "important"]

    async def apply(self, history: List[Any]) -> List[Any]:
        # Sketch: keep messages containing important keywords; a full
        # implementation would also measure token counts and only trim
        # when the context exceeds max_tokens
        filtered_history = [
            message
            for message in history
            if any(keyword in str(message).lower() for keyword in self.important_keywords)
        ]
        return filtered_history
```
### Pluggable Provider System
```python
from fastadk import Agent, BaseAgent, registry
from fastadk.providers.base import ModelProviderABC

class MyCustomProvider(ModelProviderABC):
    """Custom LLM provider implementation."""

    def __init__(self, api_key: str, model: str):
        self.api_key = api_key
        self.model = model
        # Other initialization...

    async def generate(self, prompt: str, **kwargs):
        # Call your model backend (a helper you implement) and return the text
        response = await self._call_backend(prompt, **kwargs)
        return response

    async def stream(self, prompt: str, **kwargs):
        # Yield response chunks as your backend produces them
        async for chunk in self._stream_backend(prompt, **kwargs):
            yield chunk

# Register the custom provider
registry.register_provider("my_provider", MyCustomProvider)

# Use the custom provider
@Agent(model="my-model", provider="my_provider")
class CustomAgent(BaseAgent):
    """Agent backed by the custom provider."""
```
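A provider registry is, at its core, a name-to-class mapping with instantiation on demand. This framework-free sketch shows that pattern; `EchoProvider` and the function names are illustrative, not FastADK internals:

```python
import asyncio

_PROVIDERS: dict[str, type] = {}

def register_provider(name: str, cls: type) -> None:
    # Map a provider name to its class
    _PROVIDERS[name] = cls

def create_provider(name: str, **kwargs):
    # Look up the class and instantiate it with the given config
    return _PROVIDERS[name](**kwargs)

class EchoProvider:
    """Toy provider that echoes the prompt in upper case."""

    def __init__(self, model: str = "echo"):
        self.model = model

    async def generate(self, prompt: str, **kwargs) -> str:
        return prompt.upper()

register_provider("echo", EchoProvider)
provider = create_provider("echo", model="echo-1")
reply = asyncio.run(provider.generate("hello"))
print(reply)  # HELLO
```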
### Telemetry and Observability
```python
from fastadk.observability import configure_logging, configure_metrics
from fastadk.observability.metrics import counter, gauge, histogram

# Configure structured JSON logging
configure_logging(
    level="INFO",
    format="json",
    redact_sensitive=True,
    log_file="agent.log",
)

# Configure Prometheus metrics
configure_metrics(
    enable=True,
    port=9090,
    labels={"environment": "production", "service": "agent-api"},
)

# Increment a counter each time the agent is used
counter("agent_calls_total", "Total number of agent calls").inc()

# Record operation latency (await requires an async context)
async def measured_run(agent, prompt):
    with histogram("agent_latency_seconds", "Latency of agent operations").time():
        return await agent.run(prompt)
```
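The `.time()` helper above is a context manager that records elapsed wall-clock time. A minimal stdlib-only version of that pattern looks like this (a sketch, not FastADK's implementation):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(samples: list[float]):
    # Measure the elapsed time of the `with` body and record it
    start = time.perf_counter()
    try:
        yield
    finally:
        samples.append(time.perf_counter() - start)

latencies: list[float] = []
with timed(latencies):
    sum(range(1000))  # the operation being measured
print(len(latencies))  # 1
```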
## Configuration
FastADK supports configuration through YAML files and environment variables:
```yaml
# fastadk.yaml
environment: production

model:
  provider: gemini
  model_name: gemini-1.5-pro
  api_key_env_var: GEMINI_API_KEY
  timeout_seconds: 30
  retry_attempts: 3

memory:
  backend_type: redis
  connection_string: ${REDIS_URL}
  ttl_seconds: 3600
  namespace: "my-agent"

context:
  policy: "summarize_older"
  max_tokens: 8000
  window_size: 10

observability:
  log_level: info
  metrics_enabled: true
  tracing_enabled: true
  redact_patterns:
    - "api_key=([a-zA-Z0-9-_]+)"
    - "password=([^&]+)"
```
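The `${REDIS_URL}` placeholder above is environment-variable interpolation. How FastADK expands it internally is not documented here, but the mechanism can be sketched in a few lines of stdlib Python:

```python
import os
import re

def expand_env(value: str) -> str:
    # Replace each ${VAR} with the value of environment variable VAR
    # (missing variables expand to the empty string in this sketch)
    return re.sub(r"\$\{(\w+)\}", lambda m: os.environ.get(m.group(1), ""), value)

os.environ["REDIS_URL"] = "redis://localhost:6379/0"
expanded = expand_env("${REDIS_URL}")
print(expanded)  # redis://localhost:6379/0
```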
## Testing
FastADK provides comprehensive testing tools:
```python
from fastadk.testing import AgentTest, test_scenario, MockModel

class TestWeatherAgent(AgentTest):
    agent = WeatherAgent()

    def setup_method(self):
        # Replace the real model with a mock for testing
        self.agent.model = MockModel(responses=[
            "The weather in London is currently sunny with a temperature of 22°C."
        ])

    @test_scenario("basic_weather_query")
    async def test_basic_weather_query(self):
        response = await self.agent.run("What's the weather in London?")

        # Assertions
        assert "sunny" in response.lower()
        assert "22°c" in response.lower()
        assert self.agent.tools_used == ["get_weather"]
        assert self.agent.total_tokens < 1000
```
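The essence of a mock model is a queue of canned responses returned in order. This stdlib-only sketch shows that pattern (`CannedModel` is an illustrative name, not FastADK's `MockModel`):

```python
import asyncio

class CannedModel:
    """Returns pre-scripted responses in order, ignoring the prompt."""

    def __init__(self, responses: list[str]):
        self._responses = iter(responses)

    async def generate(self, prompt: str, **kwargs) -> str:
        # Raises StopIteration if more calls are made than responses given
        return next(self._responses)

model = CannedModel([
    "The weather in London is currently sunny with a temperature of 22°C."
])
reply = asyncio.run(model.generate("What's the weather in London?"))
print("sunny" in reply.lower())  # True
```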
## CLI Commands
FastADK includes a powerful CLI for development and testing:
```bash
# Start interactive REPL with an agent
fastadk repl agent_file.py
# Validate configuration
fastadk config validate
# Initialize a new agent project
fastadk init my-new-agent
# Run an agent with a prompt
fastadk run agent_file.py "What's the weather in London?"
# Start development server with hot reload
fastadk serve agent_api.py --reload
```
## Documentation
- [System Overview](docs/system-overview.md): Detailed architecture and design
- [Getting Started](docs/getting-started/quick-start.md): Build your first agent
- [Examples](examples/): Real-world agent examples
- [API Reference](docs/api/): Detailed API documentation
- [Cookbook](docs/cookbook.md): Common patterns and recipes
## License
FastADK is released under the MIT License.