# Frontier Signal Python SDK
Official Python SDK for the Frontier Signal training API - fine-tune language models with LoRA using simple, powerful primitives.
## Installation
```bash
pip install rewardsignal
```
For development:
```bash
pip install "rewardsignal[dev]"
```
## Quick Start
### Synchronous Client
```python
from rewardsignal import SignalClient
# Initialize client
client = SignalClient(
    api_key="sk-...",  # Your API key
    base_url="https://signal-production-d2d8.up.railway.app"
)
# List available models
models = client.list_models()
print(f"Available models: {models}")
# Create a training run
run = client.create_run(
    base_model="meta-llama/Llama-3.2-3B",
    lora_r=32,
    lora_alpha=64,
    learning_rate=3e-4,
)
# Prepare training data
batch = [
    {"text": "The quick brown fox jumps over the lazy dog."},
    {"text": "Machine learning is transforming technology."},
]
# Training loop
for step in range(10):
    # Forward-backward pass
    result = run.forward_backward(batch=batch)
    print(f"Step {step}: Loss = {result['loss']:.4f}")

    # Optimizer step
    run.optim_step()

    # Sample from the model every 5 steps
    if step % 5 == 0:
        samples = run.sample(
            prompts=["The meaning of life is"],
            temperature=0.7,
        )
        print(f"Sample: {samples['outputs'][0]}")
# Save final model
artifact = run.save_state(mode="adapter", push_to_hub=False)
print(f"Saved to: {artifact['checkpoint_path']}")
```
### Asynchronous Client
```python
import asyncio
from rewardsignal import AsyncSignalClient
async def train():
    # Use async context manager for automatic cleanup
    async with AsyncSignalClient(
        api_key="sk-...",
        base_url="https://signal-production-d2d8.up.railway.app"
    ) as client:
        # Create run
        run = await client.create_run(
            base_model="meta-llama/Llama-3.2-3B",
            lora_r=32,
        )

        # Training data
        batch = [
            {"text": "The quick brown fox jumps over the lazy dog."},
            {"text": "Machine learning is transforming technology."},
        ]

        # Training loop
        for step in range(10):
            result = await run.forward_backward(batch=batch)
            print(f"Step {step}: Loss = {result['loss']:.4f}")

            await run.optim_step()

        # Save model
        artifact = await run.save_state(mode="adapter")
        print(f"Saved to: {artifact['checkpoint_path']}")
# Run the async function
asyncio.run(train())
```
### Context Manager (Sync)
```python
from rewardsignal import SignalClient
with SignalClient(api_key="sk-...") as client:
    run = client.create_run(base_model="meta-llama/Llama-3.2-3B")
    # Training code here...
```
## Features
- **Sync & Async Support** - Use `SignalClient` for synchronous code or `AsyncSignalClient` for async/await
- **Progressive API** - Simple API for beginners, advanced specialized clients for production
- **Type Hints** - Full type annotations for better IDE support and type checking
- **Custom Exceptions** - Specific exceptions for different error types (auth, rate limits, etc.)
- **Context Managers** - Automatic resource cleanup with context managers
- **Pydantic Models** - Request/response validation with Pydantic schemas
- **Specialized Clients** - Separate TrainingClient and InferenceClient for advanced use cases
## API Reference
### Client Initialization
```python
SignalClient(
    api_key: str,
    base_url: str = "https://signal-production-d2d8.up.railway.app",
    timeout: int = 300,
)
```
**Parameters:**
- `api_key` - Your Frontier Signal API key (starts with `sk-`)
- `base_url` - API server URL (default: production)
- `timeout` - Request timeout in seconds (default: 300)
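For example, reading the key from an environment variable and raising the timeout for long-running requests (the variable name here is just an illustration, not an SDK convention):

```python
import os

from rewardsignal import SignalClient

# FRONTIER_SIGNAL_API_KEY is a hypothetical variable name chosen for this example.
client = SignalClient(
    api_key=os.environ["FRONTIER_SIGNAL_API_KEY"],
    timeout=600,  # seconds; default is 300
)
```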
### Creating a Run
```python
run = client.create_run(
    base_model: str,
    lora_r: int = 32,
    lora_alpha: int = 64,
    lora_dropout: float = 0.0,
    lora_target_modules: Optional[List[str]] = None,
    optimizer: str = "adamw_8bit",
    learning_rate: float = 3e-4,
    weight_decay: float = 0.01,
    max_seq_length: int = 2048,
    bf16: bool = True,
    gradient_checkpointing: bool = True,
)
```
**Returns:** `SignalRun` object with methods for training
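The run object carries its `run_id` and exposes the training methods documented below; a minimal sketch:

```python
run = client.create_run(base_model="meta-llama/Llama-3.2-3B")
print(run.run_id)  # identifier used by the lower-level client methods

result = run.forward_backward(batch=[{"text": "Hello, world."}])
run.optim_step()
```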
### Training Methods
#### Forward-Backward Pass
```python
result = run.forward_backward(
    batch: List[Dict[str, Any]],
    accumulate: bool = False,
)
```
**Batch format:**
- Text: `{"text": "Your text here"}`
- Chat: `{"messages": [{"role": "user", "content": "Hello"}]}`
**Returns:**
```python
{
"loss": 0.5,
"step": 1,
"grad_norm": 0.25,
"grad_stats": {...}
}
```
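A chat-formatted batch is passed the same way as a plain-text one; a short sketch using the fields shown above:

```python
text_batch = [{"text": "The quick brown fox jumps over the lazy dog."}]
chat_batch = [{"messages": [{"role": "user", "content": "Hello"}]}]

result = run.forward_backward(batch=text_batch)
print(result["loss"], result["grad_norm"])

result = run.forward_backward(batch=chat_batch)
```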
#### Optimizer Step
```python
result = run.optim_step(
    learning_rate: Optional[float] = None,
)
```
**Returns:**
```python
{
"step": 1,
"learning_rate": 0.0003,
"metrics": {...}
}
```
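Passing `learning_rate` overrides the run's configured value for that step, which makes simple client-side schedules possible. A sketch with an illustrative linear decay (the schedule itself is not an SDK feature):

```python
base_lr = 3e-4
total_steps = 100

for step in range(total_steps):
    run.forward_backward(batch=batch)
    lr = base_lr * (1 - step / total_steps)  # linear decay, purely illustrative
    run.optim_step(learning_rate=lr)
```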
#### Sample/Generate
```python
result = run.sample(
    prompts: List[str],
    max_tokens: int = 512,
    temperature: float = 0.7,
    top_p: float = 0.9,
    return_logprobs: bool = False,
)
```
**Returns:**
```python
{
"outputs": ["Generated text..."],
"logprobs": [...] # if return_logprobs=True
}
```
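For example, requesting log-probabilities alongside the generated text (the exact structure of each `logprobs` entry is not documented here, so treat it as opaque):

```python
result = run.sample(
    prompts=["The meaning of life is"],
    max_tokens=32,
    return_logprobs=True,
)

for text, logprobs in zip(result["outputs"], result["logprobs"]):
    print(text)
    print(logprobs)
```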
#### Save State
```python
result = run.save_state(
    mode: Literal["adapter", "merged"] = "adapter",
    push_to_hub: bool = False,
    hub_model_id: Optional[str] = None,
)
```
**Returns:**
```python
{
"artifact_uri": "s3://...",
"checkpoint_path": "/data/...",
"pushed_to_hub": False,
"hub_model_id": None
}
```
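For example, saving both an adapter-only and a merged checkpoint locally and inspecting the returned fields:

```python
adapter = run.save_state(mode="adapter")
merged = run.save_state(mode="merged")

print(adapter["checkpoint_path"])
print(merged["artifact_uri"])
```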
### Run Information
```python
# Get current status
status = run.get_status()
# Get metrics history
metrics = run.get_metrics()
```
### List Operations
```python
# List all models
models = client.list_models()
# List all runs
runs = client.list_runs()
```
## API Levels
The SDK provides **three levels of API** to match your expertise and needs:
### Level 1: Simple API (Recommended for Beginners)
The simple API provides direct methods on the client for common operations:
```python
from rewardsignal import SignalClient
client = SignalClient(api_key="sk-...")
run = client.create_run(base_model="Qwen/Qwen2.5-3B")
# Example inputs
batch = [{"text": "The quick brown fox jumps over the lazy dog."}]
prompts = ["The meaning of life is"]

# Direct training calls
client.forward_backward(run.run_id, batch)
client.optim_step(run.run_id)
client.sample(run.run_id, prompts)
```
### Level 2: Advanced Training API
For production training, use the specialized `TrainingClient`:
```python
from rewardsignal import SignalClient
client = SignalClient(api_key="sk-...")
run = client.create_run(base_model="Qwen/Qwen2.5-3B")
# Get specialized training client
training = client.training(
    run_id=run.run_id,
    timeout=7200,  # 2 hours for long training
    max_retries=3,
)
# Fine-grained control
for batch in dataloader:
    result = training.forward_backward(batch)

    # Conditional optimizer step (e.g., skip the update when gradients explode)
    if result['grad_norm'] < 10.0:
        training.optim_step()
# Convenience methods
training.train_batch(batch) # forward_backward + optim_step
training.train_epoch(dataloader, progress=True) # Full epoch with progress bar
# State tracking
metrics = training.get_metrics()
print(f"Average loss: {metrics['avg_loss']:.4f}")
# Save checkpoint
training.save_checkpoint(mode="adapter")
```
**Features:**
- Training-optimized defaults (1 hour timeout, exponential backoff)
- State tracking (loss history, gradient norms)
- Convenience methods (train_batch, train_epoch)
- Context manager support (see the sketch below)
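A minimal sketch of the context-manager form, assuming `TrainingClient` supports `with` as the feature list states (`dataloader` is any iterable of batches):

```python
with client.training(run_id=run.run_id) as training:
    for batch in dataloader:
        training.train_batch(batch)  # forward_backward + optim_step
    training.save_checkpoint(mode="adapter")
```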
### Level 3: Advanced Inference API
For production inference, use the specialized `InferenceClient`:
```python
from rewardsignal import SignalClient
client = SignalClient(api_key="sk-...")
# Get specialized inference client
inference = client.inference(
run_id="run_123",
step=100, # Use specific checkpoint
batch_size=32, # Batch size for inference
timeout=30,
)
# Batched generation (automatic chunking)
prompts = ["Hello"] * 100
outputs = inference.batch_sample(
    prompts=prompts,
    max_tokens=50,
)
# Enable caching for repeated prompts
inference.enable_cache()
outputs = inference.sample(["Same prompt"], max_tokens=50) # Cached on repeat
# Compare different checkpoints
inference_early = client.inference(run_id="run_123", step=10)
inference_late = client.inference(run_id="run_123", step=1000)
```
**Features:**
- Inference-optimized defaults (30s timeout, immediate retry)
- Automatic batching for efficiency
- Response caching
- Future: streaming, embeddings
**📚 For detailed examples and comparisons, see [HYBRID_CLIENT_GUIDE.md](HYBRID_CLIENT_GUIDE.md)**
## Advanced Usage
### Chat Templates
For instruction-tuned models:
```python
batch = [
    {
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "What is the capital of France?"},
            {"role": "assistant", "content": "The capital of France is Paris."}
        ]
    }
]
run.forward_backward(batch=batch)
```
### Gradient Accumulation
Accumulate gradients across multiple batches:
```python
# Accumulate gradients
run.forward_backward(batch=batch1, accumulate=False) # Reset
run.forward_backward(batch=batch2, accumulate=True) # Accumulate
run.forward_backward(batch=batch3, accumulate=True) # Accumulate
# Apply accumulated gradients
run.optim_step()
```
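The same pattern extends to an iterable of batches; a short sketch assuming gradients are applied every four batches (the step count is illustrative):

```python
accumulation_steps = 4

for i, batch in enumerate(dataloader):
    # Reset on the first batch of each group, accumulate on the rest
    run.forward_backward(batch=batch, accumulate=(i % accumulation_steps != 0))
    if (i + 1) % accumulation_steps == 0:
        run.optim_step()
```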
### Custom LoRA Configuration
```python
run = client.create_run(
base_model="meta-llama/Llama-3.1-8B",
lora_r=64, # Higher rank = more capacity
lora_alpha=128, # Usually 2x lora_r
lora_dropout=0.05,
lora_target_modules=["q_proj", "v_proj", "k_proj", "o_proj"],
)
```
### Export to HuggingFace Hub
```python
artifact = run.save_state(
mode="merged",
push_to_hub=True,
hub_model_id="your-username/your-model-name",
)
```
## Exception Handling
The SDK provides specific exceptions for different error types:
```python
from rewardsignal import (
    SignalAPIError,
    AuthenticationError,
    NotFoundError,
    RateLimitError,
    ValidationError,
)
try:
    run = client.create_run(base_model="invalid/model")
except AuthenticationError:
    print("Invalid API key")
except ValidationError as e:
    print(f"Invalid parameters: {e.message}")
except RateLimitError:
    print("Rate limit exceeded, try again later")
except NotFoundError:
    print("Resource not found")
except SignalAPIError as e:
    print(f"API error: {e.message} (status: {e.status_code})")
```
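A common follow-up is retrying on rate limits; a naive sketch with exponential backoff (the retry budget and delays are illustrative, not SDK defaults):

```python
import time

for attempt in range(5):
    try:
        result = run.forward_backward(batch=batch)
        break
    except RateLimitError:
        time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s, ...
```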
## Type Hints
The SDK includes full type annotations. Import schemas for type hints:
```python
from typing import List

from rewardsignal import RunConfig, TrainingExample

def prepare_batch(texts: List[str]) -> List[TrainingExample]:
    return [TrainingExample(text=t) for t in texts]

config = RunConfig(
    base_model="meta-llama/Llama-3.2-3B",
    lora_r=32,
)
```
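If `forward_backward` expects plain dicts rather than schema objects, the examples can be converted first; a sketch assuming `TrainingExample` is a Pydantic v2 model, as the feature list states:

```python
from typing import List

from rewardsignal import TrainingExample

examples: List[TrainingExample] = [TrainingExample(text=t) for t in ["a", "b"]]
batch = [example.model_dump() for example in examples]  # Pydantic v2 serialization
run.forward_backward(batch=batch)
```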
## Development
### Local Installation
Clone the monorepo and install in editable mode:
```bash
git clone https://github.com/yourusername/frontier-signal.git
cd frontier-signal/client
pip install -e .
```
### Running Tests
```bash
cd client
pytest
```
### Building the Package
```bash
cd client
python -m build
```
### Publishing to PyPI
```bash
cd client
# Test on TestPyPI first
python -m twine upload --repository testpypi dist/*
# Then publish to PyPI
python -m twine upload dist/*
```
## Examples
See the [examples/](examples/) directory for more usage examples:
- `basic_sync.py` - Synchronous client example
- `basic_async.py` - Asynchronous client example
- `advanced_training.py` - Advanced training with TrainingClient
- `advanced_inference.py` - Advanced inference with InferenceClient
## Guides
- **[Hybrid Client Guide](HYBRID_CLIENT_GUIDE.md)** - Complete guide to all three API levels with examples
## Support
- **Documentation**: https://docs.frontier-signal.com
- **GitHub Issues**: https://github.com/yourusername/frontier-signal/issues
- **Email**: support@frontier-signal.com
## License
MIT License - see [LICENSE](../LICENSE) file for details.