agent-as-code

Name: agent-as-code
Version: 1.1.0
Home page: https://agent-as-code.myagentregistry.com
Summary: Docker-like CLI for AI agents with Enhanced LLM Intelligence
Upload time: 2025-08-24 18:07:13
Maintainer: None
Docs URL: None
Author: Partha Sarathi Kundu
Requires Python: >=3.8
License: None
Keywords: ai, agents, cli, docker, containers, llm, machine-learning, artificial-intelligence, automation, microservices, devops, ollama, local-llm, model-optimization, agent-generation, intelligent-agents, workflow-automation, sentiment-analysis, chatbot, code-assistant, benchmarking, model-analysis
Requirements: No requirements were recorded.

# Agent as Code - Python Package

```text
························································································································
:   █████████                                 █████                              █████████               █████         :
:  ███░░░░░███                               ░░███                              ███░░░░░███             ░░███          :
: ░███    ░███   ███████  ██████  ████████   ███████       ██████    █████     ███     ░░░   ██████   ███████   ██████ :
: ░███████████  ███░░███ ███░░███░░███░░███ ░░░███░       ░░░░░███  ███░░     ░███          ███░░███ ███░░███  ███░░███:
: ░███░░░░░███ ░███ ░███░███████  ░███ ░███   ░███         ███████ ░░█████    ░███         ░███ ░███░███ ░███ ░███████ :
: ░███    ░███ ░███ ░███░███░░░   ░███ ░███   ░███ ███    ███░░███  ░░░░███   ░░███     ███░███ ░███░███ ░███ ░███░░░  :
: █████   █████░░███████░░██████  ████ █████  ░░█████    ░░████████ ██████     ░░█████████ ░░██████ ░░████████░░██████ :
:░░░░░   ░░░░░  ░░░░░███ ░░░░░░  ░░░░ ░░░░░    ░░░░░      ░░░░░░░░ ░░░░░░░       ░░░░░░░░░   ░░░░░░   ░░░░░░░░  ░░░░░░  :
:               ███ ░███                                                                                               :
:              ░░██████                                                                                                :
:               ░░░░░░                                                                                                 :
························································································································
```

**Docker-like CLI for AI agents with hybrid Go + Python architecture and Enhanced LLM Intelligence**

[![PyPI version](https://badge.fury.io/py/agent-as-code.svg)](https://badge.fury.io/py/agent-as-code)
[![Python versions](https://img.shields.io/pypi/pyversions/agent-as-code.svg)](https://pypi.org/project/agent-as-code/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

## 🚀 **Hybrid Architecture**

Agent as Code combines the **performance of Go** with the **ecosystem of Python**:

- **⚡ Go Binary Core**: High-performance CLI operations with 10x speed improvement
- **🐍 Python Wrapper**: Seamless integration with Python development workflows
- **🧠 Enhanced LLM Intelligence**: AI-powered agent creation and optimization
- **📦 Zero Dependencies**: Single binary with no runtime requirements
- **🌍 Cross-Platform**: Native binaries for Linux, macOS, Windows (x86_64, ARM64)

## What is Agent as Code?

Agent as Code (AaC) brings the simplicity of Docker to AI agent development. Just as Docker revolutionized application deployment, AaC does the same for AI agents:

- **Familiar Commands**: `agent build`, `agent run`, `agent push` - just like Docker
- **Enhanced LLM Commands**: `agent llm create-agent`, `agent llm optimize` - AI-powered intelligence
- **Declarative Configuration**: Define agents with simple `agent.yaml` files
- **Template System**: Pre-built templates for common use cases
- **Multi-Runtime Support**: Python, Node.js, Go, and more
- **Registry Integration**: Share and discover agents easily
- **Intelligent Generation**: Automatically create fully functional agents with tests and documentation

## 🧠 **Enhanced LLM Commands**

The new LLM intelligence features provide:

- **`agent llm create-agent [USE_CASE]`**: Create intelligent, fully functional agents
- **`agent llm optimize [MODEL] [USE_CASE]`**: Optimize models for specific use cases
- **`agent llm benchmark`**: Comprehensive model benchmarking
- **`agent llm deploy-agent [AGENT_NAME]`**: Deploy and test agents locally
- **`agent llm analyze [MODEL]`**: Deep model analysis and insights

## Quick Start

### Installation

```bash
pip install agent-as-code
```

### Create Your First Agent (Traditional Way)

```bash
# Create a new chatbot agent
agent init my-chatbot --template chatbot

# Navigate to the project
cd my-chatbot

# Build the agent
agent build -t my-chatbot:latest .

# Run the agent
agent run my-chatbot:latest
```

### Create Your First Intelligent Agent (Enhanced LLM Way)

```bash
# Create an intelligent agent with AI-powered generation
agent llm create-agent chatbot

# Navigate to the generated project
cd chatbot-agent

# Deploy and test the agent automatically
agent llm deploy-agent chatbot-agent
```

Your agent is now running at `http://localhost:8080` with comprehensive testing and validation! 🚀
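Once the deployment finishes, you can confirm the agent is reachable with a quick request to its health endpoint. This is a minimal, stdlib-only check; the `/health` path is assumed from the `agent.yaml` health-check example later in this README, so adjust it if your generated agent exposes a different route.

```python
# Minimal reachability check for a locally deployed agent (stdlib only).
# Assumes the agent listens on http://localhost:8080 and exposes /health,
# as in the agent.yaml example further down in this README.
import urllib.request

with urllib.request.urlopen("http://localhost:8080/health", timeout=5) as resp:
    print(resp.status, resp.read().decode())
```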

## Available Templates

Get started instantly with pre-built templates:

```bash
agent init my-bot --template chatbot           # Customer support chatbot
agent init analyzer --template sentiment      # Sentiment analysis
agent init summarizer --template summarizer   # Document summarization  
agent init translator --template translator   # Language translation
agent init insights --template data-analyzer  # Data analysis
agent init writer --template content-gen      # Content generation
```

## 🧠 **Enhanced LLM Use Cases**

### 🚀 **Intelligent Agent Creation**
```bash
# Create fully functional agents with AI-powered generation
agent llm create-agent chatbot
agent llm create-agent sentiment-analyzer
agent llm create-agent workflow-automation

# Each agent includes:
# - Optimized Python FastAPI application
# - Comprehensive test suite
# - Production-ready Dockerfile
# - Detailed documentation
# - CI/CD workflows
# - Health checks and monitoring
```

### ⚡ **Model Optimization**
```bash
# Optimize models for specific use cases
agent llm optimize llama2 chatbot
agent llm optimize mistral:7b code-generation
agent llm optimize codellama:13b debugging

# Features:
# - Parameter tuning (temperature, top_p, etc.)
# - Custom system messages
# - Context window optimization
# - Performance benchmarks
# - Use case specific configurations
```

### 📊 **Comprehensive Benchmarking**
```bash
# Benchmark all local models
agent llm benchmark

# Focus on specific tasks
agent llm benchmark --tasks chatbot,code,analysis

# Get detailed reports
agent llm benchmark --output json

# Metrics include:
# - Response time and throughput
# - Memory usage and efficiency
# - Quality assessment
# - Cost-benefit analysis
# - Performance recommendations
```

### 🚀 **Intelligent Deployment**
```bash
# Deploy and test agents automatically
agent llm deploy-agent my-agent

# Run comprehensive tests
agent llm deploy-agent my-agent --test-suite comprehensive

# Enable monitoring
agent llm deploy-agent my-agent --monitor

# Features:
# - Automatic container building
# - Comprehensive testing
# - Health validation
# - Performance metrics
# - Deployment reports
```

### 🔍 **Deep Model Analysis**
```bash
# Analyze model capabilities
agent llm analyze llama2

# Get detailed insights
agent llm analyze mistral:7b --detailed

# Focus on capabilities
agent llm analyze codellama:13b --capabilities

# Analysis includes:
# - Model architecture and parameters
# - Performance characteristics
# - Best use cases and limitations
# - Optimization opportunities
# - Integration recommendations
```

## Python API Usage

Use Agent as Code programmatically in your Python applications:

### Traditional Commands

```python
from agent_as_code import AgentCLI

# Initialize the CLI
cli = AgentCLI()

# Create a new agent
cli.init("my-agent", template="sentiment", runtime="python")

# Build the agent
cli.build(".", tag="my-agent:latest")

# Run the agent
cli.run("my-agent:latest", port="8080:8080", detach=True)
```

### Enhanced LLM Commands

```python
from agent_as_code import AgentCLI

# Initialize the CLI
cli = AgentCLI()

# Create intelligent agents
cli.create_agent('sentiment-analyzer')
cli.create_agent('workflow-automation', model='mistral:7b')

# Optimize models for specific use cases
cli.optimize_model('llama2', 'chatbot')
cli.optimize_model('mistral:7b', 'code-generation')

# Benchmark all models
cli.benchmark_models(['chatbot', 'code-generation', 'analysis'])

# Deploy and test agents
cli.deploy_agent('my-agent', test_suite='comprehensive', monitor=True)

# Analyze model capabilities
cli.analyze_model('llama2:7b', detailed=True, capabilities=True)

# Manage local models
cli.list_models()
cli.pull_model('llama2:7b')
cli.test_model('llama2:7b', input_text="Hello, how are you?")
cli.remove_model('old-model', force=True)
```

## Agent Configuration

Define your agent with a simple `agent.yaml` file:

```yaml
apiVersion: agent.dev/v1
kind: Agent
metadata:
  name: my-chatbot
  version: 1.0.0
  description: Customer support chatbot
spec:
  runtime: python
  model:
    provider: openai
    name: gpt-4
    config:
      temperature: 0.7
      max_tokens: 500
  capabilities:
    - conversation
    - customer-support
  ports:
    - container: 8080
      host: 8080
  environment:
    - name: OPENAI_API_KEY
      value: ${OPENAI_API_KEY}
  healthCheck:
    command: ["curl", "-f", "http://localhost:8080/health"]
    interval: 30s
    timeout: 10s
    retries: 3
```
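If your own tooling needs to read `agent.yaml`, the manifest above is plain YAML with `${VAR}` environment references, so it can be loaded in a few lines of Python. The sketch below is for external tooling only and is not the CLI's own parser; it assumes PyYAML is installed, and `os.path.expandvars` leaves unset variables untouched.

```python
# Sketch: load agent.yaml and expand ${VAR} environment references.
# Requires PyYAML (pip install pyyaml); illustrative only - the agent CLI
# parses the manifest itself and may apply different substitution rules.
import os
from pathlib import Path

import yaml


def load_agent_manifest(path: str = "agent.yaml") -> dict:
    raw = Path(path).read_text()
    expanded = os.path.expandvars(raw)  # substitutes ${OPENAI_API_KEY} when it is set
    return yaml.safe_load(expanded)


manifest = load_agent_manifest()
print(manifest["metadata"]["name"], manifest["spec"]["model"]["name"])
```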

## Use Cases

### 🤖 **Customer Support**
```bash
agent init support-bot --template chatbot
# Includes conversation memory, intent classification, escalation handling
```

### 📊 **Data Analysis**
```bash
agent init data-insights --template data-analyzer
# Includes statistical analysis, visualization, AI-powered insights
```

### 🌐 **Content Creation**
```bash
agent init content-writer --template content-gen
# Includes blog posts, social media, marketing copy generation
```

### 🔍 **Text Analysis**
```bash
agent init text-analyzer --template sentiment
# Includes sentiment analysis, emotion detection, batch processing
```

## Development Workflow

### Traditional Development
```bash
# Create and test locally
agent init my-agent --template chatbot
cd my-agent
agent build -t my-agent:dev .
agent run my-agent:dev

# Make changes and rebuild
agent build -t my-agent:dev . --no-cache
```

### 🧠 **Enhanced LLM Development**
```bash
# Create intelligent agent with AI-powered generation
agent llm create-agent workflow-automation

# Navigate to generated project
cd workflow-automation-agent

# Deploy and test automatically
agent llm deploy-agent workflow-automation-agent

# The agent is now running with:
# - Comprehensive testing (3/3 tests passed)
# - Health validation (HEALTHY status)
# - Performance metrics
# - Ready for production use
```

### Production Deployment
```bash
# Build for production
agent build -t my-agent:1.0.0 .

# Push to registry
agent push my-agent:1.0.0

# Deploy anywhere
docker run -p 8080:8080 my-agent:1.0.0
```

### CI/CD Integration
```yaml
# GitHub Actions example
- name: Install Agent CLI
  run: pip install agent-as-code

- name: Create Intelligent Agent
  run: agent llm create-agent workflow-automation

- name: Deploy and Test Agent
  run: agent llm deploy-agent workflow-automation-agent

- name: Build Agent
  run: agent build -t ${{ github.repository }}:${{ github.sha }} .

- name: Push Agent
  run: agent push ${{ github.repository }}:${{ github.sha }}
```

## Python Ecosystem Integration

### Jupyter Notebooks
```python
# Install in notebook
!pip install agent-as-code

# Create agent directly in notebook
from agent_as_code import AgentCLI
cli = AgentCLI()
cli.init("notebook-agent", template="sentiment")
```

### Virtual Environments
```bash
# Each project can have its own agent version
python -m venv myproject
source myproject/bin/activate
pip install agent-as-code==1.0.0
agent init my-project-agent
```

### Poetry Integration
```bash
# Add to your Poetry project
poetry add agent-as-code
poetry run agent init my-agent --template chatbot
```

## 🏢 **Enterprise Features**

### 🔒 **Security & Compliance**
- **Role-Based Access Control (RBAC)**: Manage permissions and access levels
- **JWT Authentication**: Secure API endpoints with token-based auth
- **Audit Logging**: Comprehensive logging for compliance and debugging
- **Container Security**: Multi-stage Docker builds with security best practices

### 📊 **Monitoring & Observability**
- **Health Checks**: Automatic health monitoring with configurable intervals (a minimal endpoint sketch follows this list)
- **Metrics Collection**: Prometheus-compatible metrics for monitoring
- **Structured Logging**: Consistent, machine-parsable log output with configurable levels
- **Performance Tracking**: Response time, memory usage, and CPU monitoring
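In practice, the health-check and metrics bullets above map onto two small HTTP endpoints. The sketch below shows what they typically look like in a FastAPI service using `prometheus_client`; it illustrates the pattern only and is not the code the generator emits.

```python
# Illustrative health and metrics endpoints for a FastAPI-based agent.
# A generic sketch of the pattern described above, not generated code.
from fastapi import FastAPI, Response
from prometheus_client import CONTENT_TYPE_LATEST, Counter, generate_latest

app = FastAPI()

# Custom metric to increment from your request handlers.
REQUESTS = Counter("agent_requests_total", "Total requests handled by the agent")


@app.get("/health")
def health() -> dict:
    # Liveness probe target, matching the healthCheck command in agent.yaml.
    return {"status": "healthy"}


@app.get("/metrics")
def metrics() -> Response:
    # Prometheus text exposition format for scraping.
    return Response(content=generate_latest(), media_type=CONTENT_TYPE_LATEST)
```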

### 🚀 **Scalability & Performance**
- **Horizontal Scaling**: Kubernetes manifests for orchestration
- **Load Balancing**: Built-in load balancing and health checks
- **Resource Management**: Configurable CPU and memory limits
- **Auto-scaling**: Horizontal Pod Autoscaler support

### 🔧 **DevOps Integration**
- **CI/CD Pipelines**: GitHub Actions workflows for automation
- **Container Registry**: Push/pull from any Docker registry
- **Multi-Environment**: Support for dev, staging, and production
- **Infrastructure as Code**: Kubernetes manifests and Docker configurations

## 🧪 **Testing & Quality Assurance**

### Automated Testing
```bash
# Run comprehensive tests
agent llm deploy-agent my-agent --test-suite comprehensive

# Test specific functionality
pytest tests/test_workflow_automation.py::test_process_workflow

# Coverage reporting
pytest --cov=main tests/
```
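Generated test suites are driven by pytest, as shown above. If you want to add your own endpoint tests, a hypothetical sketch using FastAPI's `TestClient` looks like the following; it assumes the generated app object is named `app` in `main.py` (consistent with `pytest --cov=main` above) and that it exposes the `/health` route from the `agent.yaml` example.

```python
# tests/test_health.py - hypothetical example test; adjust module and route
# names to match your generated project (assumes `app` lives in main.py).
from fastapi.testclient import TestClient

from main import app

client = TestClient(app)


def test_health_endpoint_returns_ok():
    response = client.get("/health")
    assert response.status_code == 200
```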

### Quality Metrics
- **Test Coverage**: 95%+ test coverage for all generated agents
- **Code Quality**: Black formatting, flake8 linting, mypy type checking
- **Performance Testing**: Response time and throughput validation
- **Integration Testing**: End-to-end functionality validation

## 🌟 **Advanced Features**

### Model Management
```bash
# List available models
agent llm list

# Pull new models
agent llm pull llama2:7b

# Test model performance
agent llm test llama2:7b --input "Hello, how are you?"

# Remove unused models
agent llm remove old-model --force
```

### Custom Use Cases
```bash
# Create custom agent templates
agent llm create-agent custom-use-case

# The system will:
# - Analyze the use case requirements
# - Recommend appropriate models
# - Generate optimized code
# - Create comprehensive tests
# - Set up monitoring and logging
```

### Performance Optimization
```bash
# Optimize for specific workloads
agent llm optimize llama2:7b high-throughput

# Benchmark optimization results
agent llm benchmark --tasks high-throughput

# Deploy optimized agent
agent llm deploy-agent optimized-agent
```

## Requirements

- **Python**: 3.8 or higher
- **Operating System**: Linux, macOS, or Windows
- **Architecture**: x86_64 (amd64) or ARM64

The package includes pre-compiled binaries for all supported platforms, so no additional dependencies are required.

## Architecture

This Python package is a wrapper around a high-performance Go binary:

- **Go Binary**: Handles core CLI operations (build, run, etc.)
- **Python Wrapper**: Provides the Python API and pip integration (see the sketch after this list)
- **Cross-Platform**: Works on Linux, macOS, and Windows
- **Self-Contained**: No external dependencies required
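The wrapper pattern described above usually amounts to locating the bundled platform-specific binary and forwarding CLI arguments to it. The sketch below shows that pattern under assumed names and paths; it is not the package's actual internal layout.

```python
# Sketch of the Go-binary wrapper pattern (illustrative names and paths only).
import platform
import subprocess
import sys
from pathlib import Path


def _binary_path() -> Path:
    # e.g. <package>/bin/agent-linux-amd64 or <package>/bin/agent-darwin-arm64
    system = platform.system().lower()
    arch = "arm64" if platform.machine().lower() in ("arm64", "aarch64") else "amd64"
    return Path(__file__).parent / "bin" / f"agent-{system}-{arch}"


def main() -> int:
    # Forward all CLI arguments straight through to the Go binary.
    return subprocess.call([str(_binary_path()), *sys.argv[1:]])


if __name__ == "__main__":
    sys.exit(main())
```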

## Contributing

Contributions will be welcome once the Go binary and its GitHub repository are made public.

## Support

- **📖 Documentation**: [agent-as-code.myagentregistry.com/documentation](https://agent-as-code.myagentregistry.com/documentation)
- **🚀 Getting Started**: [agent-as-code.myagentregistry.com/getting-started](https://agent-as-code.myagentregistry.com/getting-started)
- **💡 Examples**: [agent-as-code.myagentregistry.com/examples](https://agent-as-code.myagentregistry.com/examples)
- **🔧 CLI Reference**: [agent-as-code.myagentregistry.com/cli](https://agent-as-code.myagentregistry.com/cli)
- **📦 Registry Guide**: [agent-as-code.myagentregistry.com/registry](https://agent-as-code.myagentregistry.com/registry)

---

**Ready to build your first AI agent?**

```bash
pip install agent-as-code
agent init my-first-agent --template chatbot
cd my-first-agent
agent build -t my-first-agent:latest .
agent run my-first-agent:latest
```

**Join thousands of developers building the future of AI agents! 🚀**

            
