# Dan's Simple Agent Toolkit (DSAT)
DSAT is a comprehensive Python toolkit for building LLM applications and running experiments. It consists of two main frameworks that work independently or together:
## 🤖 [Agents Framework](readme-agents.md)
A unified interface for working with multiple LLM providers through configuration-driven agents.
**Key Features:**
- **Multi-provider support**: Anthropic Claude, Google Vertex AI, Ollama (local models)
- **Configuration-driven**: JSON configs + TOML prompt templates
- **Comprehensive logging**: Standard Python logging, JSONL files, or custom callbacks
- **Prompt versioning**: Versioned prompt management with TOML templates
- **Factory patterns**: Easy agent creation and management
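
Prompt references like `assistant:v1` pair a template name with a version. As a rough sketch, a versioned TOML prompt file might look like the following (the file path and keys shown here are hypothetical illustrations, not DSAT's documented schema; see the [Agents Framework docs](readme-agents.md) for the real format):

```toml
# prompts/assistant.toml (hypothetical layout)
[v1]
system = "You are a helpful assistant."

[v2]
system = "You are a concise, friendly assistant."
```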
**Quick Example:**
```python
from agents.agent import Agent, AgentConfig
config = AgentConfig(
    agent_name="my_assistant",
    model_provider="anthropic",  # or "google", "ollama"
    model_family="claude",
    model_version="claude-3-5-haiku-latest",
    prompt="assistant:v1",
    provider_auth={"api_key": "your-api-key"}
)
agent = Agent.create(config)
response = agent.invoke("Hello, how are you?")
```
## 📊 [Scryptorum Framework](readme-scryptorum.md)
A modern, annotation-driven framework for running and tracking LLM experiments.
**Key Features:**
- **Dual run types**: Trial runs (logs only) vs Milestone runs (full versioning)
- **Annotation-driven**: `@experiment`, `@metric`, `@timer`, `@llm_call` decorators
- **CLI-configurable**: Same code runs as trial or milestone based on CLI flags
- **Thread-safe logging**: JSONL format for metrics, timings, and LLM calls
- **Project integration**: Seamlessly integrates with existing Python projects
**Quick Example:**
```python
from scryptorum import experiment, metric, timer
@experiment(name="sentiment_analysis")
def main():
    reviews = load_reviews()
    results = []

    for review in reviews:
        sentiment = analyze_sentiment(review)
        results.append(sentiment)

    accuracy = calculate_accuracy(results)
    return accuracy

@timer("data_loading")
def load_reviews():
    return ["Great product!", "Terrible service", "Love it!"]

@metric(name="accuracy", metric_type="accuracy")
def calculate_accuracy(results):
    return 0.85
```
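
Conceptually, decorators like `@timer` wrap the target function to record a measurement as a structured log line. A minimal illustrative sketch of that general pattern in plain Python (this is *not* scryptorum's actual implementation; a real run would append records to a JSONL file rather than print them):

```python
import functools
import json
import time

def timer(name: str):
    """Illustrative decorator: measure wall time and emit one JSONL record."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            record = {
                "event": "timing",
                "name": name,
                "seconds": round(time.perf_counter() - start, 6),
            }
            # scryptorum would write this to a thread-safe .jsonl log instead
            print(json.dumps(record))
            return result
        return wrapper
    return decorate

@timer("data_loading")
def load_reviews():
    return ["Great product!", "Terrible service", "Love it!"]
```

Because the wrapper returns the original result unchanged, decorated functions stay drop-in compatible with undecorated ones.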
## 🔧 Framework Integration
When used together, DSAT provides `AgentExperiment` and `AgentRun` classes that extend Scryptorum's base classes with agent-specific capabilities:
```python
from agents.agent import Agent
from agents.agent_experiment import AgentExperiment
from scryptorum import experiment, metric

@experiment(name="agent_evaluation")
def evaluate_agents():
    # Load agents from configs
    agent1 = Agent.create(config1)
    agent2 = Agent.create(config2)

    # Run evaluation with automatic LLM call logging
    score1 = evaluate_agent(agent1)
    score2 = evaluate_agent(agent2)

    return {"agent1": score1, "agent2": score2}
```
## 🚀 Quick Start
### Installation
```bash
# Basic installation
git clone <repository-url>
cd dsat
uv sync
# With optional dependencies
uv sync --extra dev # Development tools
uv sync --extra server # HTTP server support
```
### Initialize a Project
```bash
# Initialize scryptorum in your Python project
scryptorum init
# Create your first experiment
scryptorum create-experiment my_experiment
```
### Run Examples
```bash
# Agent conversation demo
python examples/agents/conversation.py
# Agent logging examples
python examples/agents/agent_logging_examples.py
# Complete experiment with agent evaluation
python examples/scryptorum/literary_evaluation.py
```
## 📁 Examples
The [`examples/`](examples/) directory contains comprehensive demonstrations:
- **[`examples/agents/`](examples/agents/)**: Agent framework examples including logging patterns and character conversations
- **[`examples/scryptorum/`](examples/scryptorum/)**: Experiment tracking examples with literary agent evaluation
- **[`examples/config/`](examples/config/)**: Shared configurations and prompt templates
## 🏗️ Architecture
```
your_project/                    ← Your Python Package
├── src/your_package/
│   ├── experiments/             ← Your experiment code
│   └── agents/                  ← Your agent code
├── .scryptorum                  ← Scryptorum config
└── pyproject.toml               ← Dependencies

~/experiments/                   ← Scryptorum Project (separate location)
├── your_package/                ← Project tracking
│   ├── experiments/             ← Experiment data & results
│   │   └── my_experiment/
│   │       ├── runs/            ← Trial & milestone runs
│   │       ├── config/          ← Agent configs
│   │       └── prompts/         ← Prompt templates
│   └── data/                    ← Shared data
```
## 📖 Documentation
- **[Agents Framework](readme-agents.md)**: Multi-provider LLM agent system
- **[Scryptorum Framework](readme-scryptorum.md)**: Experiment tracking and management
- **[Examples Documentation](examples/README.md)**: Comprehensive examples and tutorials
## 🛠️ Development
```bash
# Install development dependencies
uv sync --extra dev
# Run tests
python -m pytest test/ -v
# Format code
black src/
# Lint code
ruff check src/
```
## 📄 License
MIT License - see LICENSE file for details.
---
*DSAT simplifies LLM application development by providing unified agent abstractions and comprehensive experiment tracking with minimal boilerplate.*