Name | tora |
Version | 0.0.6 |
home_page | None |
Summary | Python SDK for Tora ML experiment tracking platform |
upload_time | 2025-07-11 18:13:12 |
maintainer | None |
docs_url | None |
author | Tora Team |
requires_python | >=3.11 |
license | MIT License
Copyright (c) [year] [fullname]
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. |
keywords | data-science, experiment-tracking, machine-learning, mlops |
VCS | |
bugtrack_url | |
requirements | No requirements were recorded. |
Travis-CI | No Travis. |
coveralls test coverage | No coveralls. |
# Tora Python SDK
A Python SDK for the Tora ML experiment tracking platform.
## Features
- 🚀 **Easy to use**: Simple API for logging metrics and managing experiments
- 📊 **Comprehensive tracking**: Log metrics, hyperparameters, tags, and metadata
- 🔄 **Buffered logging**: Efficient batched metric logging for better performance
- 🛡️ **Type safe**: Full type hints and validation for better development experience
- 🌐 **Web dashboard**: Beautiful web interface for visualizing experiments
- 🔧 **Flexible**: Works with any ML framework (PyTorch, TensorFlow, scikit-learn, etc.)
## Installation
```bash
pip install tora
```
## Quick Start
### 1. Set up your environment
```bash
export TORA_API_KEY="your-api-key"
```
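If exporting a shell variable is awkward (for example, in a notebook), the same key can be set from Python before any client is created. A minimal sketch; the `api_key` argument shown in later examples can also be passed explicitly:

```python
import os

# Equivalent to the shell export above: set the key from Python
# before creating a Tora client.
os.environ["TORA_API_KEY"] = "your-api-key"
```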
### 2. Basic usage
```python
import tora
# Create an experiment
client = tora.Tora.create_experiment(
    name="my-ml-experiment",
    workspace_id="your-workspace-id",
    description="Testing the new model architecture",
    hyperparams={
        "learning_rate": 0.001,
        "batch_size": 32,
        "epochs": 100,
    },
    tags=["pytorch", "cnn", "image-classification"]
)

# Log metrics during training
for epoch in range(100):
    # ... your training code ...

    client.log("train_loss", train_loss, step=epoch)
    client.log("train_accuracy", train_acc, step=epoch)
    client.log("val_loss", val_loss, step=epoch)
    client.log("val_accuracy", val_acc, step=epoch)

# Ensure all metrics are sent
client.shutdown()
```
### 3. Using the global API (simpler for single experiments)
```python
import tora
# Set up global experiment
tora.setup(
    name="my-experiment",
    workspace_id="your-workspace-id",
    hyperparams={"lr": 0.001, "batch_size": 32}
)

# Log metrics anywhere in your code
tora.tlog("accuracy", 0.95, step=100)
tora.tlog("loss", 0.05, step=100)
# Cleanup (optional - happens automatically)
tora.shutdown()
```
## Advanced Usage
### Context Manager
```python
import tora
with tora.Tora.create_experiment("my-experiment", workspace_id="ws-123") as client:
    client.log("metric", 1.0)
    # Automatically flushes and closes on exit
```
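For reference, the context manager is roughly equivalent to wrapping the client in try/finally around the documented `shutdown()` call; a sketch:

```python
import tora

client = tora.Tora.create_experiment("my-experiment", workspace_id="ws-123")
try:
    client.log("metric", 1.0)
finally:
    # Roughly what exiting the context manager does: flush any buffered
    # metrics and close the client.
    client.shutdown()
```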
### Custom Configuration
```python
import tora
client = tora.Tora.create_experiment(
    name="custom-experiment",
    workspace_id="ws-123",
    max_buffer_len=50,  # Buffer up to 50 metrics before sending
    api_key="custom-api-key",
    server_url="https://custom-tora-instance.com/api"
)
```
### Loading Existing Experiments
```python
import tora
# Load an existing experiment
client = tora.Tora.load_experiment(
    experiment_id="exp-123",
    api_key="your-api-key"
)

# Continue logging to the existing experiment
client.log("new_metric", 42.0)
```
### Error Handling
```python
import tora
from tora import ToraError, ToraValidationError, ToraNetworkError
try:
    client = tora.Tora.create_experiment("test", workspace_id="ws-123")
    client.log("metric", 1.0)
except ToraValidationError as e:
    print(f"Validation error: {e}")
except ToraNetworkError as e:
    print(f"Network error: {e}")
except ToraError as e:
    print(f"General Tora error: {e}")
```
## Framework Integration
### PyTorch
```python
import torch
import torch.nn as nn
import tora
# Set up experiment
client = tora.Tora.create_experiment(
    name="pytorch-training",
    workspace_id="ws-123",
    hyperparams={
        "learning_rate": 0.001,
        "batch_size": 32,
        "model": "ResNet18"
    }
)

model = nn.Sequential(...)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
for epoch in range(num_epochs):
    for batch_idx, (data, target) in enumerate(train_loader):
        optimizer.zero_grad()
        output = model(data)
        loss = nn.functional.cross_entropy(output, target)
        loss.backward()
        optimizer.step()

        # Log every 100 batches
        if batch_idx % 100 == 0:
            step = epoch * len(train_loader) + batch_idx
            client.log("train_loss", loss.item(), step=step)

client.shutdown()
```
### Hugging Face Transformers
```python
from transformers import Trainer, TrainingArguments, TrainerCallback
import tora

class ToraCallback(TrainerCallback):
    def __init__(self, tora_client):
        self.tora = tora_client

    def on_log(self, args, state, control, logs=None, **kwargs):
        # Forward numeric values from the Trainer's log dict to Tora.
        if logs:
            for key, value in logs.items():
                if isinstance(value, (int, float)):
                    self.tora.log(key, value, step=state.global_step)

# Set up experiment
client = tora.Tora.create_experiment("transformer-training", workspace_id="ws-123")
# Add to trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    callbacks=[ToraCallback(client)]
)

trainer.train()
client.shutdown()
```
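### scikit-learn

Because logging is a plain method call, the SDK also fits frameworks with no callback system. The following is a sketch rather than an official integration, using `SGDClassifier.partial_fit` to simulate epochs; the workspace ID is a placeholder:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split
import tora

client = tora.Tora.create_experiment(
    name="sklearn-training",
    workspace_id="ws-123",
    hyperparams={"model": "SGDClassifier", "epochs": 20},
)

X, y = load_digits(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
classes = np.unique(y_train)

clf = SGDClassifier(random_state=0)
for epoch in range(20):
    # One pass over the training data per "epoch".
    clf.partial_fit(X_train, y_train, classes=classes)
    client.log("val_accuracy", clf.score(X_val, y_val), step=epoch)

client.shutdown()
```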
## API Reference
### Main Classes
#### `Tora`
The main client class for experiment tracking.
**Methods:**
- `create_experiment(name, workspace_id=None, ...)` - Create a new experiment
- `load_experiment(experiment_id, ...)` - Load an existing experiment
- `log(name, value, step=None, metadata=None)` - Log a metric
- `flush()` - Send all buffered metrics immediately
- `shutdown()` - Flush metrics and close the client
**Properties:**
- `experiment_id` - The experiment ID
- `max_buffer_len` - Maximum metrics to buffer before sending
- `buffer_size` - Current number of buffered metrics (see the sketch below)
- `is_closed` - Whether the client is closed
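A short sketch of how the buffer-related members might be used together (the workspace ID and buffer size are placeholders):

```python
import tora

client = tora.Tora.create_experiment(
    "buffering-demo", workspace_id="ws-123", max_buffer_len=10
)

for step in range(25):
    client.log("loss", 1.0 / (step + 1), step=step)

print(client.buffer_size)  # metrics still waiting to be sent
client.flush()             # send whatever is buffered right away
print(client.buffer_size)  # expected to be 0 once the buffer has been sent
client.shutdown()
```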
#### Global Functions
- `setup(name, workspace_id=None, ...)` - Set up global experiment
- `tlog(name, value, step=None, metadata=None)` - Log metric globally
- `flush()` - Flush global client
- `shutdown()` - Shutdown global client
- `is_initialized()` - Check if global client is initialized (example below)
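For example, a guard built on `is_initialized()` lets utility code log only when a global experiment has been configured; a sketch (the helper name is illustrative):

```python
import tora

def log_safely(name: str, value: float, step: int | None = None) -> None:
    # Only forward the metric when tora.setup() has already been called.
    if tora.is_initialized():
        tora.tlog(name, value, step=step)

log_safely("debug/grad_norm", 0.12, step=7)
```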
### Exception Classes
- `ToraError` - Base exception class
- `ToraValidationError` - Input validation errors
- `ToraNetworkError` - Network-related errors
- `ToraAPIError` - API response errors
- `ToraAuthenticationError` - Authentication errors
- `ToraConfigurationError` - Configuration errors
- `ToraExperimentError` - Experiment-related errors
- `ToraMetricError` - Metric logging errors
## Configuration
### Environment Variables
- `TORA_API_KEY` - Your Tora API key
- `TORA_BASE_URL` - Base URL for the Tora API (default: https://tora-1030250455947.us-central1.run.app/api); see the sketch below
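A sketch of pointing the SDK at a self-hosted deployment through these variables (the URL below is a placeholder, and this assumes the variables are read when a client is created):

```python
import os

# Configure the environment before creating any client.
os.environ["TORA_API_KEY"] = "your-api-key"
os.environ["TORA_BASE_URL"] = "https://tora.internal.example.com/api"

import tora  # imported after the environment is configured

client = tora.Tora.create_experiment("env-configured", workspace_id="ws-123")
```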
### Workspace Management
```python
import tora
# Create a new workspace
workspace = tora.create_workspace(
    name="My ML Project",
    description="Experiments for the new model",
    api_key="your-api-key"
)

print(f"Created workspace: {workspace['id']}")
```
Raw data
{
"_id": null,
"home_page": null,
"name": "tora",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.11",
"maintainer_email": null,
"keywords": "data-science, experiment-tracking, machine-learning, mlops",
"author": "Tora Team",
"author_email": null,
"download_url": null,
"platform": null,
"description": "# Tora Python SDK\n\nA Python SDK for the Tora ML experiment tracking platform.\n\n## Features\n\n- \ud83d\ude80 **Easy to use**: Simple API for logging metrics and managing experiments\n- \ud83d\udcca **Comprehensive tracking**: Log metrics, hyperparameters, tags, and metadata\n- \ud83d\udd04 **Buffered logging**: Efficient batched metric logging for better performance\n- \ud83d\udee1\ufe0f **Type safe**: Full type hints and validation for better development experience\n- \ud83c\udf10 **Web dashboard**: Beautiful web interface for visualizing experiments\n- \ud83d\udd27 **Flexible**: Works with any ML framework (PyTorch, TensorFlow, scikit-learn, etc.)\n\n## Installation\n\n```bash\npip install tora\n```\n\n## Quick Start\n\n### 1. Set up your environment\n\n```bash\nexport TORA_API_KEY=\"your-api-key\"\n```\n\n### 2. Basic usage\n\n```python\nimport tora\n\n# Create an experiment\nclient = tora.Tora.create_experiment(\n name=\"my-ml-experiment\",\n workspace_id=\"your-workspace-id\",\n description=\"Testing the new model architecture\",\n hyperparams={\n \"learning_rate\": 0.001,\n \"batch_size\": 32,\n \"epochs\": 100,\n },\n tags=[\"pytorch\", \"cnn\", \"image-classification\"]\n)\n\n# Log metrics during training\nfor epoch in range(100):\n # ... your training code ...\n\n client.log(\"train_loss\", train_loss, step=epoch)\n client.log(\"train_accuracy\", train_acc, step=epoch)\n client.log(\"val_loss\", val_loss, step=epoch)\n client.log(\"val_accuracy\", val_acc, step=epoch)\n\n# Ensure all metrics are sent\nclient.shutdown()\n```\n\n### 3. Using the global API (simpler for single experiments)\n\n```python\nimport tora\n\n# Set up global experiment\ntora.setup(\n name=\"my-experiment\",\n workspace_id=\"your-workspace-id\",\n hyperparams={\"lr\": 0.001, \"batch_size\": 32}\n)\n\n# Log metrics anywhere in your code\ntora.tlog(\"accuracy\", 0.95, step=100)\ntora.tlog(\"loss\", 0.05, step=100)\n\n# Cleanup (optional - happens automatically)\ntora.shutdown()\n```\n\n## Advanced Usage\n\n### Context Manager\n\n```python\nimport tora\n\nwith tora.Tora.create_experiment(\"my-experiment\", workspace_id=\"ws-123\") as client:\n client.log(\"metric\", 1.0)\n # Automatically flushes and closes on exit\n```\n\n### Custom Configuration\n\n```python\nimport tora\n\nclient = tora.Tora.create_experiment(\n name=\"custom-experiment\",\n workspace_id=\"ws-123\",\n max_buffer_len=50, # Buffer up to 50 metrics before sending\n api_key=\"custom-api-key\",\n server_url=\"https://custom-tora-instance.com/api\"\n)\n```\n\n### Loading Existing Experiments\n\n```python\nimport tora\n\n# Load an existing experiment\nclient = tora.Tora.load_experiment(\n experiment_id=\"exp-123\",\n api_key=\"your-api-key\"\n)\n\n# Continue logging to the existing experiment\nclient.log(\"new_metric\", 42.0)\n```\n\n### Error Handling\n\n```python\nimport tora\nfrom tora import ToraError, ToraValidationError, ToraNetworkError\n\ntry:\n client = tora.Tora.create_experiment(\"test\", workspace_id=\"ws-123\")\n client.log(\"metric\", 1.0)\nexcept ToraValidationError as e:\n print(f\"Validation error: {e}\")\nexcept ToraNetworkError as e:\n print(f\"Network error: {e}\")\nexcept ToraError as e:\n print(f\"General Tora error: {e}\")\n```\n\n## Framework Integration\n\n### PyTorch\n\n```python\nimport torch\nimport torch.nn as nn\nimport tora\n\n# Set up experiment\nclient = tora.Tora.create_experiment(\n name=\"pytorch-training\",\n workspace_id=\"ws-123\",\n hyperparams={\n \"learning_rate\": 0.001,\n 
\"batch_size\": 32,\n \"model\": \"ResNet18\"\n }\n)\n\nmodel = nn.Sequential(...)\noptimizer = torch.optim.Adam(model.parameters(), lr=0.001)\n\nfor epoch in range(num_epochs):\n for batch_idx, (data, target) in enumerate(train_loader):\n optimizer.zero_grad()\n output = model(data)\n loss = nn.functional.cross_entropy(output, target)\n loss.backward()\n optimizer.step()\n\n # Log every 100 batches\n if batch_idx % 100 == 0:\n step = epoch * len(train_loader) + batch_idx\n client.log(\"train_loss\", loss.item(), step=step)\n\nclient.shutdown()\n```\n\n### Hugging Face Transformers\n\n```python\nfrom transformers import Trainer, TrainingArguments\nimport tora\n\nclass ToraCallback:\n def __init__(self, tora_client):\n self.tora = tora_client\n\n def on_log(self, args, state, control, logs=None, **kwargs):\n if logs:\n for key, value in logs.items():\n if isinstance(value, (int, float)):\n self.tora.log(key, value, step=state.global_step)\n\n# Set up experiment\nclient = tora.Tora.create_experiment(\"transformer-training\", workspace_id=\"ws-123\")\n\n# Add to trainer\ntrainer = Trainer(\n model=model,\n args=training_args,\n train_dataset=train_dataset,\n callbacks=[ToraCallback(client)]\n)\n\ntrainer.train()\nclient.shutdown()\n```\n\n## API Reference\n\n### Main Classes\n\n#### `Tora`\n\nThe main client class for experiment tracking.\n\n**Methods:**\n- `create_experiment(name, workspace_id=None, ...)` - Create a new experiment\n- `load_experiment(experiment_id, ...)` - Load an existing experiment\n- `log(name, value, step=None, metadata=None)` - Log a metric\n- `flush()` - Send all buffered metrics immediately\n- `shutdown()` - Flush metrics and close the client\n\n**Properties:**\n- `experiment_id` - The experiment ID\n- `max_buffer_len` - Maximum metrics to buffer before sending\n- `buffer_size` - Current number of buffered metrics\n- `is_closed` - Whether the client is closed\n\n#### Global Functions\n\n- `setup(name, workspace_id=None, ...)` - Set up global experiment\n- `tlog(name, value, step=None, metadata=None)` - Log metric globally\n- `flush()` - Flush global client\n- `shutdown()` - Shutdown global client\n- `is_initialized()` - Check if global client is initialized\n\n### Exception Classes\n\n- `ToraError` - Base exception class\n- `ToraValidationError` - Input validation errors\n- `ToraNetworkError` - Network-related errors\n- `ToraAPIError` - API response errors\n- `ToraAuthenticationError` - Authentication errors\n- `ToraConfigurationError` - Configuration errors\n- `ToraExperimentError` - Experiment-related errors\n- `ToraMetricError` - Metric logging errors\n\n## Configuration\n\n### Environment Variables\n\n- `TORA_API_KEY` - Your Tora API key\n- `TORA_BASE_URL` - Base URL for the Tora API (default: https://tora-1030250455947.us-central1.run.app/api)\n\n### Workspace Management\n\n```python\nimport tora\n\n# Create a new workspace\nworkspace = tora.create_workspace(\n name=\"My ML Project\",\n description=\"Experiments for the new model\",\n api_key=\"your-api-key\"\n)\n\nprint(f\"Created workspace: {workspace['id']}\")\n```\n",
"bugtrack_url": null,
"license": "MIT License\n \n Copyright (c) [year] [fullname]\n \n Permission is hereby granted, free of charge, to any person obtaining a copy\n of this software and associated documentation files (the \"Software\"), to deal\n in the Software without restriction, including without limitation the rights\n to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n copies of the Software, and to permit persons to whom the Software is\n furnished to do so, subject to the following conditions:\n \n The above copyright notice and this permission notice shall be included in all\n copies or substantial portions of the Software.\n \n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n SOFTWARE.",
"summary": "Python SDK for Tora ML experiment tracking platform",
"version": "0.0.6",
"project_urls": {
"Homepage": "https://pypi.org/project/tora/",
"Issues": "https://github.com/taigaishida/tora/issues",
"Repository": "https://github.com/taigaishida/tora"
},
"split_keywords": [
"data-science",
" experiment-tracking",
" machine-learning",
" mlops"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "29828a8ae832540a47b063a9b7742ccfd0962129fee056fd5dc5ebacc3be9a28",
"md5": "55ba80bc3a49bce1be6af9c27ff3bff5",
"sha256": "19b29eb98405406bfb026cb537b70fee0dd8b7abb7d42880edb8d2ecfd9e9182"
},
"downloads": -1,
"filename": "tora-0.0.6-py3-none-any.whl",
"has_sig": false,
"md5_digest": "55ba80bc3a49bce1be6af9c27ff3bff5",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.11",
"size": 17759,
"upload_time": "2025-07-11T18:13:12",
"upload_time_iso_8601": "2025-07-11T18:13:12.924265Z",
"url": "https://files.pythonhosted.org/packages/29/82/8a8ae832540a47b063a9b7742ccfd0962129fee056fd5dc5ebacc3be9a28/tora-0.0.6-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-07-11 18:13:12",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "taigaishida",
"github_project": "tora",
"github_not_found": true,
"lcname": "tora"
}