# Pingala Shunya
A comprehensive speech transcription package by **Shunya Labs** supporting **ct2 (CTranslate2)** and **transformers** backends. Get superior transcription quality through a unified API and advanced features.
## Overview
Pingala Shunya provides a unified interface for transcribing audio files using state-of-the-art backends optimized by Shunya Labs. Whether you want the high-performance CTranslate2 optimization or the flexibility of Hugging Face transformers, Pingala Shunya delivers exceptional results with the `shunyalabs/pingala-v1-en-verbatim` model.
## Features
- **Shunya Labs Optimized**: Built by Shunya Labs for superior performance
- **CT2 Backend**: High-performance CTranslate2 optimization (default)
- **Transformers Backend**: Hugging Face models and latest research
- **Auto-Detection**: Automatically selects the best backend for your model
- **Unified API**: Same interface across all backends
- **Word-Level Timestamps**: Precise timing for individual words
- **Confidence Scores**: Quality metrics for transcription segments and words
- **Voice Activity Detection (VAD)**: Filter out silence and background noise
- **Language Detection**: Automatic language identification
- **Multiple Output Formats**: Text, SRT subtitles, and WebVTT
- **Streaming Support**: Process segments as they are generated
- **Advanced Parameters**: Full control over all backend features
- **Rich CLI**: Command-line tool with comprehensive options
- **Error Handling**: Comprehensive error handling and validation
## Installation
### Basic Installation (ct2 backend)
```bash
pip install pingala-shunya
```
### Backend-Specific Installations
```bash
# For Hugging Face transformers support
pip install "pingala-shunya[transformers]"
# For all backends
pip install "pingala-shunya[all]"
# Complete installation with development tools
pip install "pingala-shunya[complete]"
```
### Requirements
- Python 3.8 or higher
- CUDA-compatible GPU (recommended for optimal performance)
- PyTorch and torchaudio
## Supported Backends
### ct2 (CTranslate2) - Default
- **Performance**: Fastest inference with CTranslate2 optimization
- **Features**: Full parameter control, VAD, streaming, GPU acceleration
- **Models**: All compatible models, optimized for Shunya Labs models
- **Best for**: Production use, real-time applications
### transformers
- **Performance**: Good performance within the Hugging Face ecosystem
- **Features**: Access to latest models, easy fine-tuning integration
- **Models**: Any Seq2Seq model on Hugging Face Hub
- **Best for**: Research, latest models, custom transformer models
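
If you are unsure which backend auto-detection settled on, the `get_model_info()` call shown later in this README can be checked right after construction. A minimal sketch (the printed values are examples, not guaranteed output):

```python
from pingala_shunya import PingalaTranscriber

# Let the package pick the backend, then inspect what was chosen.
transcriber = PingalaTranscriber()          # defaults to the Shunya Labs model
info = transcriber.get_model_info()         # same call used in "Advanced Usage" below
print(info["backend"], info["model_name"])  # e.g. "ct2", "shunyalabs/pingala-v1-en-verbatim"
```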
## Supported Models
### Default Model
- `shunyalabs/pingala-v1-en-verbatim` - High-quality English transcription model by Shunya Labs
### Shunya Labs Models
- `shunyalabs/pingala-v1-en-verbatim` - Optimized for English verbatim transcription
- More Shunya Labs models coming soon!
### Custom Models (Advanced Users)
- Any Hugging Face Seq2Seq model compatible with the automatic-speech-recognition pipeline
- Local model paths supported
### Local Models
- `/path/to/local/model` - Local model directory or file
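
Local paths are passed the same way as Hub model IDs. A minimal sketch (the path below is a placeholder, not a real model):

```python
from pingala_shunya import PingalaTranscriber

# Point model_name at a local model directory instead of a Hub ID.
transcriber = PingalaTranscriber(model_name="/path/to/local/model", backend="ct2")
segments = transcriber.transcribe_file_simple("audio.wav")
```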
## Quick Start
### Basic Usage with Auto-Detection
```python
from pingala_shunya import PingalaTranscriber
# Initialize with default Shunya Labs model and auto-detected backend
transcriber = PingalaTranscriber()
# Simple transcription
segments = transcriber.transcribe_file_simple("audio.wav")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```
### Backend Selection
```python
from pingala_shunya import PingalaTranscriber
# Explicitly choose backends with Shunya Labs model
transcriber_ct2 = PingalaTranscriber(model_name="shunyalabs/pingala-v1-en-verbatim", backend="ct2")
transcriber_tf = PingalaTranscriber(model_name="shunyalabs/pingala-v1-en-verbatim", backend="transformers")
# Auto-detection (recommended)
transcriber_auto = PingalaTranscriber() # Uses default Shunya Labs model with ct2
```
### Advanced Usage with All Features
```python
from pingala_shunya import PingalaTranscriber
# Initialize with specific backend and settings
transcriber = PingalaTranscriber(
    model_name="shunyalabs/pingala-v1-en-verbatim",
    backend="ct2",
    device="cuda",
    compute_type="float16"
)
# Advanced transcription with full metadata
segments, info = transcriber.transcribe_file(
    "audio.wav",
    beam_size=10,                      # Higher beam size for better accuracy
    word_timestamps=True,              # Enable word-level timestamps
    temperature=0.0,                   # Deterministic output
    compression_ratio_threshold=2.4,   # Filter out low-quality segments
    log_prob_threshold=-1.0,           # Filter by probability
    no_speech_threshold=0.6,           # Silence detection threshold
    initial_prompt="High quality audio recording",  # Guide the model
    hotwords="Python, machine learning, AI",        # Boost specific words
    vad_filter=True,                   # Enable voice activity detection
    task="transcribe"                  # or "translate" for translation
)
# Print transcription info
model_info = transcriber.get_model_info()
print(f"Backend: {model_info['backend']}")
print(f"Model: {model_info['model_name']}")
print(f"Language: {info.language} (confidence: {info.language_probability:.3f})")
print(f"Duration: {info.duration:.2f} seconds")
# Process segments with all metadata
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
    if segment.confidence:
        print(f"Confidence: {segment.confidence:.3f}")

    # Word-level details
    for word in segment.words:
        print(f"  '{word.word}' [{word.start:.2f}-{word.end:.2f}s] (conf: {word.probability:.3f})")
```
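
The CLI can write SRT directly (`--format srt`). For the Python API, the segment fields shown above (`start`, `end`, `text`) are enough to build subtitles yourself; a minimal sketch, where the helper name `to_srt` is ours and not part of the package:

```python
def to_srt(segments) -> str:
    """Render segments (objects with .start, .end, .text) as an SRT string."""
    def ts(seconds: float) -> str:
        # SRT timestamps use HH:MM:SS,mmm
        ms = int(round(seconds * 1000))
        h, ms = divmod(ms, 3_600_000)
        m, ms = divmod(ms, 60_000)
        s, ms = divmod(ms, 1000)
        return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

    blocks = []
    for i, seg in enumerate(segments, start=1):
        blocks.append(f"{i}\n{ts(seg.start)} --> {ts(seg.end)}\n{seg.text.strip()}\n")
    return "\n".join(blocks)

# Write subtitles from the segments produced above
with open("subtitles.srt", "w", encoding="utf-8") as f:
    f.write(to_srt(segments))
```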
### Using Transformers Backend
```python
# Use Shunya Labs model with transformers backend
transcriber = PingalaTranscriber(
    model_name="shunyalabs/pingala-v1-en-verbatim",
    backend="transformers"
)
segments = transcriber.transcribe_file_simple("audio.wav")
# Auto-detection will use ct2 by default for Shunya Labs models
transcriber = PingalaTranscriber() # Uses ct2 backend (recommended)
```
## Command-Line Interface
The package includes a comprehensive CLI supporting both backends:
### Basic CLI Usage
```bash
# Basic transcription with auto-detected backend
pingala audio.wav
# Specify backend explicitly
pingala audio.wav --backend ct2
pingala audio.wav --backend transformers
# Use Shunya Labs model with different backends
pingala audio.wav --model shunyalabs/pingala-v1-en-verbatim --backend ct2
pingala audio.wav --model shunyalabs/pingala-v1-en-verbatim --backend transformers
# Save to file
pingala audio.wav --model shunyalabs/pingala-v1-en-verbatim -o transcript.txt
# Use CPU for processing
pingala audio.wav --device cpu
```
### Advanced CLI Features
```bash
# Word-level timestamps with confidence scores (ct2)
pingala audio.wav --model shunyalabs/pingala-v1-en-verbatim --word-timestamps --show-confidence --show-words
# Voice Activity Detection (ct2 only)
pingala audio.wav --model shunyalabs/pingala-v1-en-verbatim --vad --verbose
# Language detection with different backends
pingala audio.wav --model shunyalabs/pingala-v1-en-verbatim --detect-language --backend ct2
# SRT subtitles with word-level timing
pingala audio.wav --model shunyalabs/pingala-v1-en-verbatim --format srt --word-timestamps -o subtitles.srt
# Transformers backend with Shunya Labs model
pingala audio.wav --model shunyalabs/pingala-v1-en-verbatim --backend transformers --verbose
# Advanced parameters (ct2)
pingala audio.wav --model shunyalabs/pingala-v1-en-verbatim \
    --beam-size 10 \
    --temperature 0.2 \
    --compression-ratio-threshold 2.4 \
    --log-prob-threshold -1.0 \
    --initial-prompt "This is a technical presentation" \
    --hotwords "Python,AI,machine learning"
```
### CLI Options Reference
| Option | Description | Backends | Default |
|--------|-------------|----------|---------|
| `--model` | Model name or path | All | shunyalabs/pingala-v1-en-verbatim |
| `--backend` | Backend selection | All | auto-detect |
| `--device` | Device: cuda, cpu, auto | All | cuda |
| `--compute-type` | Precision: float16, float32, int8 | All | float16 |
| `--beam-size` | Beam size for decoding | All | 5 |
| `--language` | Language code (e.g., 'en') | All | auto-detect |
| `--word-timestamps` | Enable word-level timestamps | ct2 | False |
| `--show-confidence` | Show confidence scores | All | False |
| `--show-words` | Show word-level details | All | False |
| `--vad` | Enable VAD filtering | ct2 | False |
| `--detect-language` | Language detection only | All | False |
| `--format` | Output format: text, srt, vtt | All | text |
| `--temperature` | Sampling temperature | All | 0.0 |
| `--compression-ratio-threshold` | Compression ratio filter | ct2 | 2.4 |
| `--log-prob-threshold` | Log probability filter | ct2 | -1.0 |
| `--no-speech-threshold` | No speech threshold | All | 0.6 |
| `--initial-prompt` | Initial prompt text | All | None |
| `--hotwords` | Hotwords to boost | ct2 | None |
| `--task` | Task: transcribe, translate | All | transcribe |
## Backend Comparison
| Feature | ct2 | transformers |
|---------|-----|--------------|
| **Performance** | Fastest | Good |
| **GPU Acceleration** | Optimized | Standard |
| **Memory Usage** | Lowest | Moderate |
| **Model Support** | Any model | Any HF model |
| **Word Timestamps** | Full support | Limited |
| **VAD Filtering** | Built-in | No |
| **Streaming** | True streaming | Batch only |
| **Advanced Params** | All features | Basic |
| **Latest Models** | Updated | Latest |
| **Custom Models** | CTranslate2 | Any format |
### Recommendations
- **Production/Performance**: Use `ct2` with Shunya Labs models
- **Latest Research Models**: Use `transformers`
- **Real-time Applications**: Use `ct2` with VAD
- **Custom Transformer Models**: Use `transformers`
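
For real-time pipelines, segments can be consumed as they are produced. A minimal sketch, assuming the ct2 backend yields segments lazily (per the "True streaming" row above); if a backend instead returns a fully materialized list, the same loop still works but only prints once transcription finishes:

```python
import time

from pingala_shunya import PingalaTranscriber

# Consume segments incrementally and report wall-clock arrival time.
transcriber = PingalaTranscriber(backend="ct2")
start = time.monotonic()

segments, info = transcriber.transcribe_file("audio.wav", vad_filter=True)
for segment in segments:
    elapsed = time.monotonic() - start
    print(f"+{elapsed:6.1f}s  [{segment.start:.2f} -> {segment.end:.2f}] {segment.text}")
```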
## Performance Optimization
### Backend Selection Tips
```python
# Real-time, production, or maximum accuracy: use ct2 with the Shunya Labs model
transcriber = PingalaTranscriber(model_name="shunyalabs/pingala-v1-en-verbatim", backend="ct2")

# Research, latest models, or custom transformer models: use the transformers backend
transcriber = PingalaTranscriber(model_name="shunyalabs/pingala-v1-en-verbatim", backend="transformers")
```
### Hardware Recommendations
| Use Case | Model | Backend | Hardware |
|----------|-------|---------|----------|
| Real-time | shunyalabs/pingala-v1-en-verbatim | ct2 | GPU 4GB+ |
| Production | shunyalabs/pingala-v1-en-verbatim | ct2 | GPU 6GB+ |
| Maximum Quality | shunyalabs/pingala-v1-en-verbatim | ct2 | GPU 8GB+ |
| Alternative | shunyalabs/pingala-v1-en-verbatim | transformers | GPU 4GB+ |
| CPU-only | shunyalabs/pingala-v1-en-verbatim | any | 8GB+ RAM |
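
For the CPU-only row, a minimal configuration sketch using the documented `device` and `compute_type` parameters; whether `int8` is available in a given CPU build is an assumption:

```python
from pingala_shunya import PingalaTranscriber

# CPU-only setup: int8 precision keeps memory within the 8GB+ RAM guideline above.
transcriber = PingalaTranscriber(
    model_name="shunyalabs/pingala-v1-en-verbatim",
    backend="ct2",
    device="cpu",
    compute_type="int8",
)
segments = transcriber.transcribe_file_simple("audio.wav")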
## Examples
See `example.py` for comprehensive examples:
```bash
# Run with default backend (auto-detected)
python example.py audio.wav
# Test specific backends with Shunya Labs model
python example.py audio.wav --backend ct2
python example.py audio.wav --backend transformers
# Pass the Shunya Labs model explicitly
python example.py audio.wav shunyalabs/pingala-v1-en-verbatim
```
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments
- Built by [Shunya Labs](https://shunyalabs.ai) for superior transcription quality
- Powered by CTranslate2 for optimized inference
- Supports [Hugging Face transformers](https://github.com/huggingface/transformers)
- Uses the Pingala model from [Shunya Labs](https://shunyalabs.ai)
## About Shunya Labs
Visit [Shunya Labs](https://shunyalabs.ai) to learn more about our AI research and products.
Contact us at [0@shunyalabs.ai](mailto:0@shunyalabs.ai) for questions or collaboration opportunities.