dsat-ai

Name: dsat-ai
Version: 0.3.4
Summary: Dan's Simple Agent Toolkit - Multi-provider LLM agents and experiment tracking
Upload time: 2025-08-30 16:28:53
Requires Python: >=3.12
License: MIT
Keywords: agents, ai, anthropic, claude, experiments, google, llm, ollama, vertex-ai
Homepage: https://github.com/ecodan/dsat
# Dan's Simple Agent Toolkit (DSAT)

DSAT is a comprehensive Python toolkit for building LLM applications and running experiments. It provides three core components that work independently or together:

## 💬 [Chat CLI](readme-chat.md)

An interactive terminal-based chat interface for testing prompts and having conversations with LLM agents.

**Key Features:**
- **Zero-config mode**: Auto-detects providers via environment variables
- **Real-time streaming**: Token-by-token streaming support for all providers
- **Multiple usage patterns**: Config files, inline creation, or auto-discovery
- **Interactive commands**: `/help`, `/agents`, `/switch`, `/stream`, `/memory`, `/compact`, and more
- **Memory management**: Configurable conversation limits, auto-compaction, and persistent storage
- **Flexible prompts**: Multiple directory search strategies and per-agent overrides
- **Plugin system**: Entry points for custom LLM provider extensions
- **Session management**: History tracking and conversation export

**Quick Start:**
```bash
# Zero-config (with API key in environment)
dsat chat

# Enable real-time streaming
dsat chat --stream

# Use existing agent configuration
dsat chat --config agents.json --agent my_assistant

# Create agent inline
dsat chat --provider anthropic --model claude-3-5-haiku-latest
```
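For the `--config agents.json` pattern above, the CLI reads agent definitions from a JSON file. The sketch below is a minimal example; the field names mirror the `AgentConfig` example in the Agents Framework section, but the exact schema the CLI expects (including whether agents are keyed by name) is an assumption, so check the Chat CLI docs for the authoritative layout.

```json
{
  "my_assistant": {
    "agent_name": "my_assistant",
    "model_provider": "anthropic",
    "model_family": "claude",
    "model_version": "claude-3-5-haiku-latest",
    "prompt": "assistant:v1",
    "provider_auth": {"api_key": "your-api-key"},
    "stream": true
  }
}
```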

## 🤖 [Agents Framework](readme-agents.md)

A unified interface for working with multiple LLM providers through configuration-driven agents.

**Key Features:**
- **Multi-provider support**: Anthropic Claude, Google Vertex AI, Ollama (local models)
- **Async streaming support**: Real-time token streaming with `invoke_async()` method
- **Configuration-driven**: JSON configs + TOML prompt templates
- **Comprehensive logging**: Standard Python logging, JSONL files, or custom callbacks
- **Prompt versioning**: Versioned prompt management with TOML templates
- **Factory patterns**: Easy agent creation and management

**Quick Example:**
```python
import asyncio

from agents.agent import Agent, AgentConfig

config = AgentConfig(
    agent_name="my_assistant",
    model_provider="anthropic",  # or "google", "ollama"
    model_family="claude", 
    model_version="claude-3-5-haiku-latest",
    prompt="assistant:v1",
    provider_auth={"api_key": "your-api-key"},
    stream=True,  # Enable streaming support
    memory_enabled=True,  # Enable conversation memory
    max_memory_tokens=8000  # Configure memory limit
)

agent = Agent.create(config)

# Traditional response
response = agent.invoke("Hello, how are you?")

# Streaming response (wrapped in an async function so the example runs as a script)
async def stream_story():
    async for chunk in agent.invoke_async("Tell me a story"):
        print(chunk, end='', flush=True)

asyncio.run(stream_story())
```
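The `prompt="assistant:v1"` value above refers to a versioned prompt stored in a TOML template. The layout below is only a hypothetical sketch of such a file; the file name, table names, and keys are assumptions, so consult the prompt-versioning section of the agents docs for the real schema.

```toml
# prompts/assistant.toml — hypothetical structure for the "assistant:v1" prompt
[v1]
system = "You are a concise, helpful assistant."
```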

## 📊 [Scryptorum Framework](readme-scryptorum.md)

A modern, annotation-driven framework for running and tracking LLM experiments.

**Key Features:**
- **Dual run types**: Trial runs (logs only) vs Milestone runs (full versioning) 
- **Annotation-driven**: `@experiment`, `@metric`, `@timer`, `@llm_call` decorators
- **CLI-configurable**: Same code runs as trial or milestone based on CLI flags
- **Thread-safe logging**: JSONL format for metrics, timings, and LLM calls
- **Project integration**: Seamlessly integrates with existing Python projects

**Quick Example:**
```python
from scryptorum import experiment, metric, timer

@experiment(name="sentiment_analysis")
def main():
    reviews = load_reviews()
    results = []

    for review in reviews:
        sentiment = analyze_sentiment(review)
        results.append(sentiment)

    accuracy = calculate_accuracy(results)
    return accuracy

def analyze_sentiment(review):
    # Placeholder classifier so the example runs end-to-end;
    # swap in a real model or DSAT agent call here.
    return "positive" if "!" in review else "negative"

@timer("data_loading")
def load_reviews():
    return ["Great product!", "Terrible service", "Love it!"]

@metric(name="accuracy", metric_type="accuracy")
def calculate_accuracy(results):
    return 0.85
```
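The feature list above also names an `@llm_call` decorator for capturing individual model calls in the JSONL logs. The snippet below is only a sketch of how it might be applied; both the import path and the decorator's signature (bare vs. parameterized) are assumptions, since neither is shown in this README.

```python
from scryptorum import llm_call  # import path assumed to match the other decorators

# Hypothetical usage: decorate the function that actually calls the model so the
# prompt/response pair is logged alongside metrics and timings.
@llm_call
def classify_review(agent, review: str) -> str:
    return agent.invoke(f"Classify the sentiment of this review: {review}")
```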

## 🔧 Framework Integration

When used together, DSAT provides `AgentExperiment` and `AgentRun` classes that extend Scryptorum's base classes with agent-specific capabilities:

```python
from agents.agent import Agent
from agents.agent_experiment import AgentExperiment
from scryptorum import experiment

@experiment(name="agent_evaluation")
def evaluate_agents():
    # Load agents from configs (config1/config2 are AgentConfig objects as shown above)
    agent1 = Agent.create(config1)
    agent2 = Agent.create(config2)

    # Run evaluation with automatic LLM call logging
    score1 = evaluate_agent(agent1)
    score2 = evaluate_agent(agent2)

    return {"agent1": score1, "agent2": score2}
```
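The `evaluate_agent` helper used above is assumed to be defined elsewhere in your project. A minimal, purely illustrative version that scores an agent against a couple of fixed probes (using the `invoke()` method shown earlier) could look like this:

```python
def evaluate_agent(agent) -> float:
    # Illustrative scoring only: ask fixed questions and count non-empty answers.
    probes = [
        "Summarize the plot of Moby-Dick in one sentence.",
        "What is 2 + 2?",
    ]
    answered = sum(1 for p in probes if agent.invoke(p).strip())
    return answered / len(probes)
```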

## 🚀 Quick Start

### Installation
```bash
# Basic installation
git clone <repository-url>
cd dsat
uv sync

# With optional dependencies
uv sync --extra dev      # Development tools
uv sync --extra server   # HTTP server support
```
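Because the package is published on PyPI as `dsat-ai` (this page documents version 0.3.4), it can also be installed directly from PyPI. Only the base install is shown, since availability of the optional extras on PyPI is not stated here:

```bash
# Install the released package from PyPI
pip install dsat-ai
```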

### Initialize a Project
```bash
# Initialize scryptorum in your Python project
scryptorum init

# Create your first experiment
scryptorum create-experiment my_experiment
```

### Run Examples
```bash
# Interactive chat interface
dsat chat --config examples/config/agents.json --agent pirate

# Agent conversation demo
python examples/agents/conversation.py

# Agent logging examples  
python examples/agents/agent_logging_examples.py

# Complete experiment with agent evaluation
python examples/scryptorum/literary_evaluation.py
```

## 📁 Examples

The [`examples/`](examples/) directory contains comprehensive demonstrations:

- **[`examples/agents/`](examples/agents/)**: Agent framework examples including logging patterns and character conversations
- **[`examples/scryptorum/`](examples/scryptorum/)**: Experiment tracking examples with literary agent evaluation
- **[`examples/config/`](examples/config/)**: Shared configurations and prompt templates
- **[`examples/flexible-prompts/`](examples/flexible-prompts/)**: Chat CLI examples with flexible prompts directory management

## 🏗️ Architecture

```
your_project/                    ← Your Python Package
├── src/your_package/
│   ├── experiments/             ← Your experiment code
│   └── agents/                  ← Your agent code  
├── .scryptorum                  ← Scryptorum config
└── pyproject.toml              ← Dependencies

~/experiments/                   ← Scryptorum Project (separate location)
├── your_package/               ← Project tracking
│   ├── experiments/            ← Experiment data & results
│   │   └── my_experiment/
│   │       ├── runs/           ← Trial & milestone runs
│   │       ├── config/         ← Agent configs
│   │       └── prompts/        ← Prompt templates
│   └── data/                   ← Shared data
```

## 📖 Documentation

- **[Chat CLI](readme-chat.md)**: Interactive terminal chat interface for agent testing
- **[Agents Framework](readme-agents.md)**: Multi-provider LLM agent system
- **[Scryptorum Framework](readme-scryptorum.md)**: Experiment tracking and management
- **[Examples Documentation](examples/README.md)**: Comprehensive examples and tutorials

## 🛠️ Development

```bash
# Install development dependencies
uv sync --extra dev

# Run tests
python -m pytest test/ -v

# Format code
black src/

# Lint code  
ruff check src/
```

## 📄 License

MIT License - see LICENSE file for details.

---

*DSAT simplifies LLM application development by providing unified agent abstractions and comprehensive experiment tracking with minimal boilerplate.*
            
