Name | keywordsai |
Version | 1.0.3 |
Summary | A package that helps interacting with Keywords AI monitoring, evaluation & user analytics APIs |
home_page | None |
upload_time | 2025-08-09 21:31:13 |
maintainer | None |
docs_url | None |
author | Raymond Huang |
requires_python | <4.0,>3.9 |
license | Apache 2.0 |
keywords | None |
VCS | None |
bugtrack_url | None |
requirements | No requirements were recorded. |
# Keywords AI Python SDK
A comprehensive Python SDK for Keywords AI monitoring, evaluation, and analytics APIs. Build, test, and evaluate your AI applications with ease.
## 🚀 Features
- **📊 Dataset Management** - Create, manage, and analyze datasets from your AI logs
- **🔬 Experiment Framework** - Run A/B tests with different prompts and model configurations
- **📈 AI Evaluation** - Evaluate model outputs with built-in and custom evaluators
- **📝 Log Management** - Comprehensive logging and monitoring for AI applications
## 📦 Installation
```bash
pip install keywordsai
```
Or with Poetry:
```bash
poetry add keywordsai
```
## 🔑 Quick Start
### 1. Set up your API key
```bash
export KEYWORDSAI_API_KEY="your-api-key-here"
```
Or create a `.env` file:
```env
KEYWORDSAI_API_KEY=your-api-key-here
KEYWORDSAI_BASE_URL=https://api.keywordsai.co # optional
```
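If you use a `.env` file, load it before constructing any client. A minimal sketch, assuming the `python-dotenv` package is installed (it is not a declared dependency of this SDK):
```python
import os

from dotenv import load_dotenv  # assumption: python-dotenv installed separately
from keywordsai import DatasetAPI

load_dotenv()  # reads KEYWORDSAI_API_KEY / KEYWORDSAI_BASE_URL from .env into the environment

client = DatasetAPI(api_key=os.getenv("KEYWORDSAI_API_KEY"))
```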
### 2. Basic Usage
```python
from keywordsai import DatasetAPI, ExperimentAPI, EvaluatorAPI
# Initialize clients
dataset_client = DatasetAPI(api_key="your-api-key")
experiment_client = ExperimentAPI(api_key="your-api-key")
evaluator_client = EvaluatorAPI(api_key="your-api-key")
# Create a dataset from logs
dataset = dataset_client.create({
    "name": "My Dataset",
    "description": "Dataset for evaluation",
    "type": "sampling",
    "sampling": 100
})
# List available evaluators
evaluators = evaluator_client.list()
print(f"Available evaluators: {len(evaluators.results)}")
# Run evaluation
evaluation = dataset_client.run_dataset_evaluation(
    dataset_id=dataset.id,
    evaluator_slugs=["accuracy-evaluator", "relevance-evaluator"]
)
```
## 🏗️ Core APIs
### Dataset API
Manage datasets and run evaluations on your AI model outputs:
```python
from keywordsai import DatasetAPI, DatasetCreate
client = DatasetAPI(api_key="your-api-key")
# Create dataset
dataset = client.create(DatasetCreate(
    name="Production Logs",
    type="sampling",
    sampling=1000
))
# Add logs to dataset
client.add_logs_to_dataset(
    dataset_id=dataset.id,
    start_time="2024-01-01T00:00:00Z",
    end_time="2024-01-02T00:00:00Z"
)
# Run evaluations
evaluation = client.run_dataset_evaluation(
    dataset_id=dataset.id,
    evaluator_slugs=["accuracy-evaluator"]
)
```
### Experiment API
Run A/B tests with different model configurations:
```python
from keywordsai import ExperimentAPI, ExperimentCreate, ExperimentColumnType
client = ExperimentAPI(api_key="your-api-key")
# Create experiment
experiment = client.create(ExperimentCreate(
    name="Prompt A/B Test",
    description="Testing different system prompts",
    columns=[
        ExperimentColumnType(
            name="Version A",
            model="gpt-4",
            temperature=0.7,
            prompt_messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": "{{user_input}}"}
            ]
        ),
        ExperimentColumnType(
            name="Version B",
            model="gpt-4",
            temperature=0.3,
            prompt_messages=[
                {"role": "system", "content": "You are a concise assistant."},
                {"role": "user", "content": "{{user_input}}"}
            ]
        )
    ]
))
# Run experiment
results = client.run_experiment(experiment_id=experiment.id)
```
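The `{{user_input}}` placeholders above are template variables filled in when the experiment runs. To preview a column's messages locally first, a small stand-alone helper like the hypothetical `render_messages` below can substitute values; it is not part of the SDK:
```python
import re

def render_messages(messages: list[dict], variables: dict) -> list[dict]:
    """Return copies of the messages with {{name}} placeholders replaced from `variables`."""
    def fill(text: str) -> str:
        return re.sub(r"\{\{(\w+)\}\}", lambda m: str(variables.get(m.group(1), m.group(0))), text)
    return [{**msg, "content": fill(msg["content"])} for msg in messages]

preview = render_messages(
    [{"role": "user", "content": "{{user_input}}"}],
    {"user_input": "Summarize this ticket."},
)
print(preview)  # [{'role': 'user', 'content': 'Summarize this ticket.'}]
```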
### Evaluator API
Discover and use AI evaluators:
```python
from keywordsai import EvaluatorAPI
client = EvaluatorAPI(api_key="your-api-key")
# List all evaluators
evaluators = client.list()
# Get specific evaluator details
evaluator = client.get("accuracy-evaluator")
print(f"Evaluator: {evaluator.name}")
print(f"Description: {evaluator.description}")
```
### Prompt API
Manage prompts and their versions:
```python
from keywordsai import PromptAPI
from keywordsai_sdk.keywordsai_types.prompt_types import Prompt, PromptVersion
client = PromptAPI(api_key="your-api-key")
# Create a prompt
prompt = client.create()
# Create a version for the prompt
version = client.create_version(prompt.id, PromptVersion(
    prompt_version_id="v1",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ],
    model="gpt-4o-mini",
    temperature=0.7
))
# List all prompts
prompts = client.list()
# Get specific prompt with versions
prompt_details = client.get(prompt.id)
```
### Log API
Create and manage AI application logs:
```python
from keywordsai import LogAPI, KeywordsAILogParams
client = LogAPI(api_key="your-api-key")
# Create log entry
log = client.create(KeywordsAILogParams(
    model="gpt-4",
    input="What is machine learning?",
    output="Machine learning is a subset of AI...",
    status_code=200,
    prompt_tokens=10,
    completion_tokens=50
))
```
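Error-handling behavior is not documented here; since the SDK is built on `httpx` (see Requirements), one hedged assumption is that failed requests surface as `httpx` exceptions. A sketch under that assumption:
```python
import httpx

from keywordsai import LogAPI, KeywordsAILogParams

client = LogAPI(api_key="your-api-key")

try:
    log = client.create(KeywordsAILogParams(
        model="gpt-4",
        input="What is machine learning?",
        output="Machine learning is a subset of AI...",
        status_code=200,
    ))
except httpx.HTTPError as exc:  # assumption: transport/HTTP errors propagate from httpx
    print(f"Failed to create log: {exc}")
```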
## 🔄 Async Support
All APIs support both synchronous and asynchronous operations:
```python
import asyncio
from keywordsai import DatasetAPI
async def main():
    client = DatasetAPI(api_key="your-api-key")

    # Use 'await' with the 'a'-prefixed methods for async calls
    datasets = await client.alist()
    dataset = await client.aget(dataset_id="123")

    print(f"Found {datasets.count} datasets")

asyncio.run(main())
```
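Because the `a`-prefixed methods return coroutines, independent calls can run concurrently with `asyncio.gather`. A minimal sketch, assuming each client exposes the same `a`-prefixed variants of its methods:
```python
import asyncio

from keywordsai import DatasetAPI, EvaluatorAPI

async def main():
    dataset_client = DatasetAPI(api_key="your-api-key")
    evaluator_client = EvaluatorAPI(api_key="your-api-key")

    # Issue both list calls concurrently instead of awaiting them one by one
    datasets, evaluators = await asyncio.gather(
        dataset_client.alist(),
        evaluator_client.alist(),
    )
    print(f"{datasets.count} datasets, {len(evaluators.results)} evaluators")

asyncio.run(main())
```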
## 📚 Examples
Check out the [`examples/`](https://github.com/Keywords-AI/keywordsai/tree/main/python-sdks/keywordsai/examples) directory for complete workflows:
- **[Simple Evaluator Example](https://github.com/Keywords-AI/keywordsai/blob/main/python-sdks/keywordsai/examples/simple_evaluator_example.py)** - Basic evaluator operations
- **[Dataset Workflow](https://github.com/Keywords-AI/keywordsai/blob/main/python-sdks/keywordsai/examples/dataset_workflow_example.py)** - Complete dataset management
- **[Experiment Workflow](https://github.com/Keywords-AI/keywordsai/blob/main/python-sdks/keywordsai/examples/experiment_workflow_example.py)** - A/B testing with experiments
- **[Prompt Workflow](https://github.com/Keywords-AI/keywordsai/blob/main/python-sdks/keywordsai/examples/prompt_workflow_example.py)** - Prompt and version management
```bash
# Run examples
python examples/simple_evaluator_example.py
python examples/dataset_workflow_example.py
python examples/experiment_workflow_example.py
python examples/prompt_workflow_example.py
```
## 🧪 Testing
The SDK includes comprehensive tests covering both unit testing and real API integration:
```bash
# Install development dependencies
poetry install
# Run all tests
python -m pytest tests/ -v
# Run specific test suites
python -m pytest tests/test_dataset_api_real.py -v
python -m pytest tests/test_experiment_api_real.py -v
```
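The `*_real.py` suites above hit the live API, so they need a valid key (e.g. via `KEYWORDSAI_API_KEY`). A hedged sketch of a similar integration test (a hypothetical file, not one of the SDK's own) that skips itself when no key is configured:
```python
# test_evaluator_smoke.py -- hypothetical example, not part of the SDK's test suite
import os

import pytest

from keywordsai import EvaluatorAPI

pytestmark = pytest.mark.skipif(
    not os.getenv("KEYWORDSAI_API_KEY"),
    reason="KEYWORDSAI_API_KEY not set; skipping real-API test",
)

def test_list_evaluators_returns_results():
    client = EvaluatorAPI(api_key=os.environ["KEYWORDSAI_API_KEY"])
    evaluators = client.list()
    assert evaluators.results is not None
```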
## 📖 API Reference
### Core Classes
- **`DatasetAPI`** - Dataset management and evaluation
- **`ExperimentAPI`** - A/B testing and experimentation
- **`EvaluatorAPI`** - AI model evaluation tools
- **`LogAPI`** - Application logging and monitoring
- **`PromptAPI`** - Prompt and version management
### Type Safety
All APIs include comprehensive type definitions:
```python
from keywordsai import (
    Dataset, DatasetCreate, DatasetUpdate,
    Experiment, ExperimentCreate, ExperimentUpdate,
    Evaluator, EvaluatorList,
    KeywordsAILogParams, LogList,
    PromptAPI
)
from keywordsai_sdk.keywordsai_types.prompt_types import (
    Prompt, PromptVersion
)
```
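Both request styles appear in the examples above: typed objects (Dataset API section) and plain dicts (Quick Start). A short side-by-side, assuming `create` keeps accepting either form:
```python
from keywordsai import DatasetAPI, DatasetCreate

client = DatasetAPI()  # API key read from KEYWORDSAI_API_KEY

# Typed request object, as in the Dataset API section
dataset_a = client.create(DatasetCreate(name="Typed Dataset", type="sampling", sampling=50))

# Equivalent plain dict, as in the Quick Start
dataset_b = client.create({"name": "Dict Dataset", "type": "sampling", "sampling": 50})
```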
## 🔧 Configuration
### Environment Variables
```bash
KEYWORDSAI_API_KEY=your-api-key-here # Required
KEYWORDSAI_BASE_URL=https://api.keywordsai.co # Optional
```
### Client Initialization
```python
# Using environment variables
dataset_client = DatasetAPI() # Reads from KEYWORDSAI_API_KEY
prompt_client = PromptAPI() # Reads from KEYWORDSAI_API_KEY
# Explicit configuration
client = DatasetAPI(
    api_key="your-api-key",
    base_url="https://api.keywordsai.co"
)
```
## 📋 Requirements
- Python 3.9+
- httpx >= 0.25.0
- keywordsai-sdk >= 0.4.63
## 📄 License
Apache 2.0 - see [LICENSE](https://github.com/Keywords-AI/keywordsai/blob/main/LICENSE) for details.
## 🤝 Contributing
We welcome contributions! Please see our [Contributing Guide](https://github.com/Keywords-AI/keywordsai/blob/main/CONTRIBUTING.md) for details.
## 📞 Support
- 📧 Email: [team@keywordsai.co](mailto:team@keywordsai.co)
- 📖 Documentation: [https://docs.keywordsai.co](https://docs.keywordsai.co)
- 🐛 Issues: [GitHub Issues](https://github.com/Keywords-AI/keywordsai/issues)
---
Built with ❤️ by the Keywords AI team
## Raw data

```json
{
"_id": null,
"home_page": null,
"name": "keywordsai",
"maintainer": null,
"docs_url": null,
"requires_python": "<4.0,>3.9",
"maintainer_email": null,
"keywords": null,
"author": "Raymond Huang",
"author_email": "raymond@keywordsai.co",
"download_url": "https://files.pythonhosted.org/packages/00/c9/18982caa0c3d4b59e57a5bd2a62f97beb7829a4f7b260c5e376007c38003/keywordsai-1.0.3.tar.gz",
"platform": null,
"description": "# Keywords AI Python SDK\n\nA comprehensive Python SDK for Keywords AI monitoring, evaluation, and analytics APIs. Build, test, and evaluate your AI applications with ease.\n\n## \ud83d\ude80 Features\n\n- **\ud83d\udcca Dataset Management** - Create, manage, and analyze datasets from your AI logs\n- **\ud83d\udd2c Experiment Framework** - Run A/B tests with different prompts and model configurations\n- **\ud83d\udcc8 AI Evaluation** - Evaluate model outputs with built-in and custom evaluators\n- **\ud83d\udcdd Log Management** - Comprehensive logging and monitoring for AI applications\n\n## \ud83d\udce6 Installation\n\n```bash\npip install keywordsai\n```\n\nOr with Poetry:\n\n```bash\npoetry add keywordsai\n```\n\n## \ud83d\udd11 Quick Start\n\n### 1. Set up your API key\n\n```bash\nexport KEYWORDSAI_API_KEY=\"your-api-key-here\"\n```\n\nOr create a `.env` file:\n\n```env\nKEYWORDSAI_API_KEY=your-api-key-here\nKEYWORDSAI_BASE_URL=https://api.keywordsai.co # optional\n```\n\n### 2. Basic Usage\n\n```python\nfrom keywordsai import DatasetAPI, ExperimentAPI, EvaluatorAPI\n\n# Initialize clients\ndataset_client = DatasetAPI(api_key=\"your-api-key\")\nexperiment_client = ExperimentAPI(api_key=\"your-api-key\")\nevaluator_client = EvaluatorAPI(api_key=\"your-api-key\")\n\n# Create a dataset from logs\ndataset = dataset_client.create({\n \"name\": \"My Dataset\",\n \"description\": \"Dataset for evaluation\",\n \"type\": \"sampling\",\n \"sampling\": 100\n})\n\n# List available evaluators\nevaluators = evaluator_client.list()\nprint(f\"Available evaluators: {len(evaluators.results)}\")\n\n# Run evaluation\nevaluation = dataset_client.run_dataset_evaluation(\n dataset_id=dataset.id,\n evaluator_slugs=[\"accuracy-evaluator\", \"relevance-evaluator\"]\n)\n```\n\n## \ud83c\udfd7\ufe0f Core APIs\n\n### Dataset API\nManage datasets and run evaluations on your AI model outputs:\n\n```python\nfrom keywordsai import DatasetAPI, DatasetCreate\n\nclient = DatasetAPI(api_key=\"your-api-key\")\n\n# Create dataset\ndataset = client.create(DatasetCreate(\n name=\"Production Logs\",\n type=\"sampling\",\n sampling=1000\n))\n\n# Add logs to dataset\nclient.add_logs_to_dataset(\n dataset_id=dataset.id,\n start_time=\"2024-01-01T00:00:00Z\",\n end_time=\"2024-01-02T00:00:00Z\"\n)\n\n# Run evaluations\nevaluation = client.run_dataset_evaluation(\n dataset_id=dataset.id,\n evaluator_slugs=[\"accuracy-evaluator\"]\n)\n```\n\n### Experiment API\nRun A/B tests with different model configurations:\n\n```python\nfrom keywordsai import ExperimentAPI, ExperimentCreate, ExperimentColumnType\n\nclient = ExperimentAPI(api_key=\"your-api-key\")\n\n# Create experiment\nexperiment = client.create(ExperimentCreate(\n name=\"Prompt A/B Test\",\n description=\"Testing different system prompts\",\n columns=[\n ExperimentColumnType(\n name=\"Version A\",\n model=\"gpt-4\",\n temperature=0.7,\n prompt_messages=[\n {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n {\"role\": \"user\", \"content\": \"{{user_input}}\"}\n ]\n ),\n ExperimentColumnType(\n name=\"Version B\", \n model=\"gpt-4\",\n temperature=0.3,\n prompt_messages=[\n {\"role\": \"system\", \"content\": \"You are a concise assistant.\"},\n {\"role\": \"user\", \"content\": \"{{user_input}}\"}\n ]\n )\n ]\n))\n\n# Run experiment\nresults = client.run_experiment(experiment_id=experiment.id)\n```\n\n### Evaluator API\nDiscover and use AI evaluators:\n\n```python\nfrom keywordsai import EvaluatorAPI\n\nclient = 
EvaluatorAPI(api_key=\"your-api-key\")\n\n# List all evaluators\nevaluators = client.list()\n\n# Get specific evaluator details\nevaluator = client.get(\"accuracy-evaluator\")\nprint(f\"Evaluator: {evaluator.name}\")\nprint(f\"Description: {evaluator.description}\")\n```\n\n### Prompt API\nManage prompts and their versions:\n\n```python\nfrom keywordsai import PromptAPI\nfrom keywordsai_sdk.keywordsai_types.prompt_types import Prompt, PromptVersion\n\nclient = PromptAPI(api_key=\"your-api-key\")\n\n# Create a prompt\nprompt = client.create()\n\n# Create a version for the prompt\nversion = client.create_version(prompt.id, PromptVersion(\n prompt_version_id=\"v1\",\n messages=[\n {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n {\"role\": \"user\", \"content\": \"Hello!\"}\n ],\n model=\"gpt-4o-mini\",\n temperature=0.7\n))\n\n# List all prompts\nprompts = client.list()\n\n# Get specific prompt with versions\nprompt_details = client.get(prompt.id)\n```\n\n### Log API\nCreate and manage AI application logs:\n\n```python\nfrom keywordsai import LogAPI, KeywordsAILogParams\n\nclient = LogAPI(api_key=\"your-api-key\")\n\n# Create log entry\nlog = client.create(KeywordsAILogParams(\n model=\"gpt-4\",\n input=\"What is machine learning?\",\n output=\"Machine learning is a subset of AI...\",\n status_code=200,\n prompt_tokens=10,\n completion_tokens=50\n))\n```\n\n## \ud83d\udd04 Async Support\n\nAll APIs support both synchronous and asynchronous operations:\n\n```python\nimport asyncio\nfrom keywordsai import DatasetAPI\n\nasync def main():\n client = DatasetAPI(api_key=\"your-api-key\")\n \n # Use 'await' with 'a' prefixed methods for async\n datasets = await client.alist()\n dataset = await client.aget(dataset_id=\"123\")\n \n print(f\"Found {datasets.count} datasets\")\n\nasyncio.run(main())\n```\n\n## \ud83d\udcda Examples\n\nCheck out the [`examples/`](https://github.com/Keywords-AI/keywordsai/tree/main/python-sdks/keywordsai/examples) directory for complete workflows:\n\n- **[Simple Evaluator Example](https://github.com/Keywords-AI/keywordsai/blob/main/python-sdks/keywordsai/examples/simple_evaluator_example.py)** - Basic evaluator operations\n- **[Dataset Workflow](https://github.com/Keywords-AI/keywordsai/blob/main/python-sdks/keywordsai/examples/dataset_workflow_example.py)** - Complete dataset management\n- **[Experiment Workflow](https://github.com/Keywords-AI/keywordsai/blob/main/python-sdks/keywordsai/examples/experiment_workflow_example.py)** - A/B testing with experiments\n- **[Prompt Workflow](https://github.com/Keywords-AI/keywordsai/blob/main/python-sdks/keywordsai/examples/prompt_workflow_example.py)** - Prompt and version management\n\n```bash\n# Run examples\npython examples/simple_evaluator_example.py\npython examples/dataset_workflow_example.py\npython examples/experiment_workflow_example.py\npython examples/prompt_workflow_example.py\n```\n\n## \ud83e\uddea Testing\n\nThe SDK includes comprehensive tests for both unit testing and real API integration:\n\n```bash\n# Install development dependencies\npoetry install\n\n# Run all tests\npython -m pytest tests/ -v\n\n# Run specific test suites\npython -m pytest tests/test_dataset_api_real.py -v\npython -m pytest tests/test_experiment_api_real.py -v\n```\n\n## \ud83d\udcd6 API Reference\n\n### Core Classes\n\n- **`DatasetAPI`** - Dataset management and evaluation\n- **`ExperimentAPI`** - A/B testing and experimentation \n- **`EvaluatorAPI`** - AI model evaluation tools\n- **`LogAPI`** - Application logging 
and monitoring\n- **`PromptAPI`** - Prompt and version management\n\n### Type Safety\n\nAll APIs include comprehensive type definitions:\n\n```python\nfrom keywordsai import (\n Dataset, DatasetCreate, DatasetUpdate,\n Experiment, ExperimentCreate, ExperimentUpdate,\n Evaluator, EvaluatorList,\n KeywordsAILogParams, LogList,\n PromptAPI\n)\nfrom keywordsai_sdk.keywordsai_types.prompt_types import (\n Prompt, PromptVersion\n)\n```\n\n## \ud83d\udd27 Configuration\n\n### Environment Variables\n\n```bash\nKEYWORDSAI_API_KEY=your-api-key-here # Required\nKEYWORDSAI_BASE_URL=https://api.keywordsai.co # Optional\n```\n\n### Client Initialization\n\n```python\n# Using environment variables\ndataset_client = DatasetAPI() # Reads from KEYWORDSAI_API_KEY\nprompt_client = PromptAPI() # Reads from KEYWORDSAI_API_KEY\n\n# Explicit configuration\nclient = DatasetAPI(\n api_key=\"your-api-key\",\n base_url=\"https://api.keywordsai.co\"\n)\n```\n\n## \ud83d\udccb Requirements\n\n- Python 3.9+\n- httpx >= 0.25.0\n- keywordsai-sdk >= 0.4.63\n\n## \ud83d\udcc4 License\n\nApache 2.0 - see [LICENSE](https://github.com/Keywords-AI/keywordsai/blob/main/LICENSE) for details.\n\n## \ud83e\udd1d Contributing\n\nWe welcome contributions! Please see our [Contributing Guide](https://github.com/Keywords-AI/keywordsai/blob/main/CONTRIBUTING.md) for details.\n\n## \ud83d\udcde Support\n\n- \ud83d\udce7 Email: [team@keywordsai.co](mailto:team@keywordsai.co)\n- \ud83d\udcd6 Documentation: [https://docs.keywordsai.co](https://docs.keywordsai.co)\n- \ud83d\udc1b Issues: [GitHub Issues](https://github.com/Keywords-AI/keywordsai/issues)\n\n---\n\nBuilt with \u2764\ufe0f by the Keywords AI team\n",
"bugtrack_url": null,
"license": "Apache 2.0",
"summary": "A package that helps interacting with Keywords AI monitoring, evaluation & user analytics APIs",
"version": "1.0.3",
"project_urls": null,
"split_keywords": [],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "cd1c180ce7a41f70ff2afce8d2fab49d41471e46f89b2dd81158c20863e8060a",
"md5": "dac538946e113234542e43b656fcddf4",
"sha256": "306c89005eb3bccdfc32d7701578b70a698efdf029eb485d5853f0c6a45b65cf"
},
"downloads": -1,
"filename": "keywordsai-1.0.3-py3-none-any.whl",
"has_sig": false,
"md5_digest": "dac538946e113234542e43b656fcddf4",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": "<4.0,>3.9",
"size": 45189,
"upload_time": "2025-08-09T21:31:12",
"upload_time_iso_8601": "2025-08-09T21:31:12.174662Z",
"url": "https://files.pythonhosted.org/packages/cd/1c/180ce7a41f70ff2afce8d2fab49d41471e46f89b2dd81158c20863e8060a/keywordsai-1.0.3-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "00c918982caa0c3d4b59e57a5bd2a62f97beb7829a4f7b260c5e376007c38003",
"md5": "a8b6ffc81f9f2a6cb690cd15f5ac5598",
"sha256": "5c0e8741c4053e01a61133d3ba822450b2bf74e82bd01908885ddfb03cc0dd13"
},
"downloads": -1,
"filename": "keywordsai-1.0.3.tar.gz",
"has_sig": false,
"md5_digest": "a8b6ffc81f9f2a6cb690cd15f5ac5598",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "<4.0,>3.9",
"size": 34881,
"upload_time": "2025-08-09T21:31:13",
"upload_time_iso_8601": "2025-08-09T21:31:13.522192Z",
"url": "https://files.pythonhosted.org/packages/00/c9/18982caa0c3d4b59e57a5bd2a62f97beb7829a4f7b260c5e376007c38003/keywordsai-1.0.3.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-08-09 21:31:13",
"github": false,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"lcname": "keywordsai"
}
```