# Wake AI
An LLM orchestration framework that wraps terminal-based AI agents (such as Claude Code) in structured, multi-step workflows for smart contract security analysis and beyond. Validation between steps and progressive task decomposition make agentic execution more predictable and reliable.

## The Problem
Traditional approaches to AI-powered code analysis suffer from unpredictability. You write a prompt, cross your fingers, and hope the AI completes all the work correctly in one go. When working with complex security audits or vulnerability detection, this approach is fundamentally flawed. AI agents are powerful but inherently non-deterministic – they might miss steps, produce inconsistent outputs, or fail partway through execution.
## Our Solution
Wake AI bridges the gap between AI's generalization capabilities and the need for reliable, reproducible results. By breaking complex tasks into discrete steps with validation between each, we achieve:
- **Predictable Execution**: Each step has clear inputs, outputs, and success criteria
- **Progressive Validation**: Verify the AI completed necessary work before proceeding
- **Multi-Agent Collaboration**: Different steps can use specialized agents
- **Rapid Prototyping**: Test new ideas without building from scratch
## Installation
```bash
pip install wake-ai
```
## Quick Start
```bash
# Run a comprehensive security audit
wake-ai audit
# Detect reentrancy vulnerabilities
wake-ai reentrancy
```
## Why Wake AI?
### 1. **Beyond Static Analysis**
We're entering a new era of vulnerability detection. While traditional detectors rely on pattern matching and static analysis, Wake AI leverages LLMs to understand context, reason about code behavior, and find complex vulnerabilities that rule-based systems miss.
### 2. **Structured Yet Flexible**
The framework provides structure without sacrificing flexibility:
- Define workflows as a series of steps
- Each step can use different tools and prompts
- Validation ensures quality before proceeding
- Context flows seamlessly between steps
### 3. **Cost-Controlled Execution**
Real-world AI usage requires cost management:
- Set per-step cost limits
- Automatic retry with feedback on failures
- Efficient prompt optimization when approaching limits
- Track total workflow costs
### 4. **Progress Tracking**
Visual feedback for long-running workflows:
- Rich progress bars with percentage completion
- Status messages during retries and validation
- External progress hooks for app integration
### 5. **Security Workflows Included**
- Pre-built audit and detector workflows to get you started
- Standardized detection output format with JSON export
- Build any workflow you need - security, testing, analysis, or beyond
- Easy to extend with your own custom workflows
## Core Concepts
### Working Directory
Multi-step AI workflows face a fundamental problem: context sharing. Whether you're using a single agent across all steps or multiple specialized agents, data needs to flow between operations. How does an exploit developer agent access vulnerabilities found by an auditor agent? How does step three know what step one discovered? Traditional approaches would require complex state management or force the AI to re-analyze everything.
Wake AI takes a straightforward approach: each workflow gets an isolated working directory where all agents and steps operate. This shared workspace becomes the workflow's persistent memory:
```
.wake/ai/<YYYYMMDD_HHMMSS_random>/
├── state/          # Workflow state metadata
├── findings.yaml   # Discovered vulnerabilities
├── analysis.md     # Detailed investigation notes
└── report.json     # Final structured output
```
Communication happens through files. One agent writes findings, another reads and validates them. A security auditor documents vulnerabilities, an exploit developer tests them, a report writer consolidates everything. No state passing, no variable juggling - just a shared filesystem that persists across the entire workflow execution. Your project directory remains clean while each agent has full freedom to create, modify, and reference files in this sandbox.
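For example, two steps can hand off findings purely through the shared workspace. A minimal sketch using the `add_step` API documented later in this README (the prompts and file names are illustrative):

```python
# Step 1: the auditor agent writes its findings to the shared workspace
self.add_step(
    name="find_vulnerabilities",
    prompt_template=(
        "Audit the contracts and write each finding to "
        "{{working_dir}}/findings.yaml with name, severity, and location."
    ),
)

# Step 2: a fresh agent session reads the same file back in
self.add_step(
    name="verify_findings",
    prompt_template=(
        "Read {{working_dir}}/findings.yaml and verify each finding "
        "by re-examining the referenced code."
    ),
    continue_session=False,  # new agent; shared state lives only in the files
)
```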
#### Post-Workflow Cleanup
By default, workflows are configured to automatically clean up their working directories after successful completion. This behavior can be overridden either within the `Workflow` class for specific workflows, or on the command line:
```bash
wake-ai --working-dir .audit/ --no-cleanup audit # Preserve working directory
```
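Overriding the default inside a workflow class might look like this minimal sketch; the attribute name is an assumption, since this README does not document the exact flag:

```python
class MyAuditWorkflow(AIWorkflow):
    # Hypothetical attribute: assumes the base Workflow class exposes a
    # cleanup toggle. The real name may differ; check the API reference.
    cleanup_working_dir = False
```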
Some workflows might not require structured outputs and instead provide results within the working directory. An example of this could be a specialized `audit` workflow, where the output is written to markdown files in the working directory, which a human auditor can then review after the workflow has finished.
### Validation
To ensure correct output from the entire workflow, each step must produce correct intermediate results. Without validation, AI responses might be unparseable, missing required fields, or contain errors that cascade through subsequent steps.
Each workflow step can include validators to ensure outputs are correct before proceeding. When validation fails, the step automatically retries with error correction prompts until outputs meet requirements.
Example validator:
```python
import yaml

def validate_findings(self, response):
    """Return (success, errors); error messages are fed back to the AI on retry."""
    # Check if the required file was created
    findings_path = self.working_dir / "findings.yaml"
    if not findings_path.exists():
        return False, ["No findings file created"]

    # Validate YAML structure (safe_load avoids executing arbitrary tags)
    with findings_path.open() as f:
        findings = yaml.safe_load(f)
    if not all(k in findings for k in ["vulnerabilities", "severity"]):
        return False, ["Invalid findings structure"]

    return True, []
```
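The validator is attached through the `validator` parameter of `add_step`, the same parameter used in the Architecture example below:

```python
self.add_step(
    name="find_vulnerabilities",
    prompt_template="Audit the contracts and record findings in {{working_dir}}/findings.yaml...",
    validator=self.validate_findings,
)
```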
_Benefits:_
- **Self-correcting**: AI fixes validation errors automatically
- **Quality control**: Outputs always meet specifications
- **No cascading errors**: Invalid outputs won't propagate to subsequent steps
- **Cost-effective**: Long-running workflows can be terminated early if validation fails
- **Schema-based output**: Validators enforce specific output formats, enabling parsing of AI responses into structured data for export and integration with other tools
## Architecture
### Workflows and Steps
At its heart, Wake AI treats complex tasks as workflows composed of individual steps:
```python
from wake_ai import AIWorkflow

class MyAuditWorkflow(AIWorkflow):
    def _setup_steps(self):
        # Step 1: Map the codebase structure
        self.add_step(
            name="map_contracts",
            prompt_template="Find all Solidity contracts and identify their relationships...",
            validator=self.validate_mapping,
            max_cost=5.0
        )

        # Step 2: Focus on critical contracts (continues session)
        self.add_step(
            name="analyze_critical",
            prompt_template="Based on the contracts you just mapped, analyze the 3 most critical ones for vulnerabilities...",
            max_cost=10.0,
            continue_session=True  # Needs to remember which contracts were identified
        )
```
_Features:_
- **Cost control**: If `max_cost` is set, the agent is prompted to wrap up quickly once the limit is reached, keeping cost-intensive steps in check.
- **Session continuation**: `continue_session` controls whether the agent session continues from the previous step or a new one is created, letting you choose between single- and multi-agent workflows.
- **Validator**: Each step can take a validator function that checks whether the step completed successfully.
- **Conditional execution**: `condition` can skip a step based on a boolean predicate evaluated at runtime.
**Note**: Steps are executed sequentially. Wake AI does not currently support parallel step execution.
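Executing a workflow programmatically might then look like the sketch below; the `execute()` entry point is an assumption, as this README only documents the CLI:

```python
# Hypothetical usage: the constructor arguments and execute() method
# are assumptions, not confirmed by this README.
workflow = MyAuditWorkflow()
results = workflow.execute()  # runs the steps sequentially, validating each one
```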
### Context Sharing
Wake AI's primary approach to context sharing is through the _working directory_. Additionally, workflows provide context management methods and the `add_extraction_step` helper function to extract structured data.
```python
from typing import List

from pydantic import BaseModel

class Vulnerability(BaseModel):
    name: str
    description: str
    severity: str
    file: str
    line: int

class VulnerabilitiesList(BaseModel):
    vulnerabilities: List[Vulnerability]

self.add_step(
    name="analyze_critical",
    prompt_template="Analyze the codebase for critical vulnerabilities...",
)

self.add_extraction_step(
    after_step="analyze_critical",
    output_schema=VulnerabilitiesList,
    context_key="vulnerabilities",
)
```
The extracted data will be stored in the `context` state under the key specified in `context_key` (defaults to `<step_name>_data`).
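The extracted, schema-validated data can then be read back with `get_context` or interpolated into a later prompt template:

```python
# Programmatic access to the parsed data
vulns = self.get_context("vulnerabilities")

# Or reference it from a follow-up step's prompt
self.add_step(
    name="write_report",
    prompt_template="Write a report covering these findings: {{vulnerabilities}}",
)
```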
### Context Management
Wake AI workflows provide methods to manage context that flows between steps:
```python
# Add data to context
workflow.add_context("project_name", "MyDeFiProtocol")
workflow.add_context("audit_scope", ["Token.sol", "Vault.sol"])
# Retrieve context data
project = workflow.get_context("project_name") # "MyDeFiProtocol"
scope = workflow.get_context("audit_scope") # ["Token.sol", "Vault.sol"]
# Get all available context keys
keys = workflow.get_context_keys() # ["project_name", "audit_scope", ...]
```
Context includes:
- User-defined values via `add_context()`
- Step outputs as `{{<step_name>_output}}` (see the example after this list)
- Extracted data from `add_extraction_step()` (defaults to `<step_name>_data`, but can be overridden with the `context_key` parameter)
- Built-in values like `{{working_dir}}` and `{{execution_dir}}`
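For instance, a later step's prompt can combine an earlier step's raw output with the built-in paths (the step name here is illustrative):

```python
self.add_step(
    name="summarize",
    prompt_template=(
        "Summarize the contract mapping below and save it to "
        "{{working_dir}}/summary.md:\n\n{{map_contracts_output}}"
    ),
)
```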
### Dynamic Prompt Templates
To create dynamic prompt templates, Wake AI utilizes Jinja2 templates, which allow you to pass in context variables in the `{{ context_key }}` format. Workflow classes keep track of their `context` state, where you can store any data you want to pass to the prompt template.
```python
from pydantic import BaseModel

class ContractList(BaseModel):
    contracts: str

class AuditWorkflow(AIWorkflow):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # Add files to context for use in prompts
        self.add_context("files", list(self.working_dir.glob("**/*.sol")))

    def _setup_steps(self):
        self.add_step(
            name="map_contracts",
            prompt_template="Find all Solidity contracts and identify their relationships. Here are the files in the codebase: {{files}}",
        )

        self.add_step(
            name="determine_focus",
            prompt_template="Determine the top 3 core contracts to focus on for analysis.",
            continue_session=True  # Needs to remember which contracts were identified
        )

        self.add_extraction_step(
            after_step="determine_focus",
            output_schema=ContractList,
            context_key="files_to_focus",
        )

        self.add_step(
            name="analyze_focus",
            prompt_template="Conduct a thorough analysis of the following contracts: {{files_to_focus}}",
            max_cost=10.0,
        )
```
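Because templates are plain Jinja2, standard control structures such as loops and conditionals also work:

```python
self.add_step(
    name="review_scope",
    prompt_template=(
        "Review each in-scope file:\n"
        "{% for f in files %}- {{ f }}\n{% endfor %}\n"
        "{% if files_to_focus %}Prioritize: {{ files_to_focus }}{% endif %}"
    ),
)
```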
## Creating Custom Detectors
Wake AI makes it trivial to prototype new vulnerability detectors with the `SimpleDetector` helper class:
```python
from typing import Optional

import rich_click as click
from wake_ai import workflow
from wake_ai.templates import SimpleDetector

@workflow.command("flashloan")
@click.option("--focus-area", "-f", help="Specific area to focus analysis")
def factory(focus_area):
    """Detect flash loan attack vectors."""
    detector = FlashLoanDetector()
    detector.focus_area = focus_area
    return detector

class FlashLoanDetector(SimpleDetector):
    """Flash loan vulnerability detector."""

    focus_area: Optional[str] = None

    def get_detector_prompt(self) -> str:
        focus_context = f"Focus specifically on: {self.focus_area}" if self.focus_area else ""

        return f"""
        Analyze this codebase for flash loan attack vectors.
        {focus_context}

        Focus on:
        1. Price manipulation opportunities
        2. Unprotected external calls in loan callbacks
        3. Missing reentrancy guards
        4. Incorrect balance assumptions

        For each issue, explain the attack scenario and impact.
        """
```
The helper class automatically parses detector output into a standardized format for CLI display or JSON export.
## Advanced Features
### Dynamic Step Generation
Generate steps based on runtime discoveries:
```python
def generate_file_reviews(response, context):
    # Parse discovered contracts from the previous step's response
    contracts = parse_contracts(response.content)

    # Create a review step for each contract
    return [
        WorkflowStep(
            name=f"review_{contract.name}",
            prompt_template=f"Audit {contract.path} for vulnerabilities...",
            max_cost=3.0
        )
        for contract in contracts
    ]

self.add_dynamic_steps("reviews", generate_file_reviews, after_step="scan")
```
### Conditional Execution
Skip expensive steps based on runtime conditions:
```python
self.add_step(
    name="deep_analysis",
    prompt_template="Perform computational analysis...",
    condition=lambda ctx: len(ctx.get("critical_findings", [])) > 0,
    max_cost=20.0
)
```
### Tool Restrictions
Fine-grained control over AI capabilities:
```python
# Only allow specific Wake commands
allowed_tools = ["Read", "Write", "Bash(wake detect *)", "Bash(wake init)"]

# Prevent any modifications
self.add_step(
    name="review_only",
    prompt_template="Review the code...",
    disallowed_tools=["Write", "Edit", "Bash"]
)
```
## Real-World Example: Security Audit Workflow
Our production audit workflow demonstrates the framework's power:
1. **Initial Analysis** - Map attack surface and identify focus areas
2. **Vulnerability Hunting** - Deep dive with specialized prompts
3. **Report Generation** - Professional audit documentation
Each step validates outputs, ensuring no critical checks are missed.
## Documentation
See [docs/README.md](docs/README.md) for:
- Complete API reference
- Step-by-step tutorials
- Advanced workflow patterns
- Integration guides
## License
ISC