# Cascade - Production-Ready, High-Performance, Asynchronous VAD Library
[中文](./README_zh.md)
**Cascade** is a **production-ready**, **high-performance**, **low-latency** audio stream processing library for Voice Activity Detection (VAD). Built on the excellent [Silero VAD](https://github.com/snakers4/silero-vad) model, Cascade significantly reduces VAD processing latency while maintaining high accuracy through its **1:1:1:1 independent binding architecture** and **asynchronous streaming** design.
## 📊 Performance Benchmarks
Based on our latest streaming VAD performance tests with different chunk sizes:
### Streaming Performance by Chunk Size
| Chunk Size (bytes) | Processing Time (ms/chunk) | Throughput (chunks/sec) | Total Test Time (s) | Speech Segments |
|-------------------|---------------------|------------------------|-------------------|-----------------|
| **1024** | **0.66** | **92.2** | 3.15 | 2 |
| **4096** | 1.66 | 82.4 | 0.89 | 2 |
| **8192** | 2.95 | 72.7 | 0.51 | 2 |
### Key Performance Metrics
| Metric | Value | Description |
|-------------------------|---------------|-----------------------------------------|
| **Best Processing Speed** | 0.66ms/chunk | Optimal performance with 1024-byte chunks |
| **Peak Throughput** | 92.2 chunks/sec | Maximum processing throughput |
| **Success Rate** | 100% | Processing success rate across all tests |
| **Accuracy** | High | Provided by the upstream Silero VAD model |
| **Architecture** | 1:1:1:1 | Independent model per processor instance |
### Performance Characteristics
- **Consistent performance across chunk sizes**: high throughput and low latency from 1024-byte to 8192-byte chunks
- **Real-time capability**: sub-millisecond per-chunk processing (1024-byte chunks) supports real-time applications; see the real-time-factor sketch after this list
- **Scalability**: Linear performance scaling with independent processor instances
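For context, the throughput figures can be related to real-time playback. The benchmark's audio format is not stated above, so the short calculation below assumes 16 kHz, 16-bit mono PCM (the format Silero VAD typically expects) and should be treated as illustrative only:

```python
# Rough real-time-factor estimate for the 1024-byte case.
# Assumption (not stated in the benchmark): 16 kHz, 16-bit (2-byte) mono PCM.
SAMPLE_RATE_HZ = 16_000
BYTES_PER_SAMPLE = 2

chunk_bytes = 1024
chunk_ms = chunk_bytes / BYTES_PER_SAMPLE / SAMPLE_RATE_HZ * 1000  # 32.0 ms of audio per chunk
throughput = 92.2  # chunks/sec from the table above
audio_seconds_per_second = throughput * chunk_ms / 1000            # ~2.95x real time

print(f"{chunk_ms:.1f} ms of audio per chunk, ~{audio_seconds_per_second:.1f}x real time")
```

Under that assumption, a 1024-byte chunk is exactly one 512-sample frame, which is also why the smallest chunk size lines up well with the frame-aligned buffer described below.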
## ✨ Core Features
### 🚀 High-Performance Engineering
- **Lock-Free Design**: The 1:1:1:1 binding architecture eliminates lock contention, boosting performance.
- **Frame-Aligned Buffer**: A highly efficient buffer optimized for 512-sample frames (see the sketch after this list).
- **Asynchronous Streaming**: Non-blocking audio stream processing based on `asyncio`.
- **Memory Optimization**: Zero-copy design, object pooling, and cache alignment.
- **Concurrency Optimization**: Dedicated threads, asynchronous queues, and batch processing.
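The frame-aligned buffer accumulates incoming bytes of arbitrary size and only releases complete 512-sample frames to the VAD model. The snippet below is a minimal illustrative sketch of that idea, not Cascade's actual `FrameAlignedBuffer` implementation; it assumes 16-bit mono PCM input.

```python
FRAME_SAMPLES = 512              # Silero VAD frame size at 16 kHz
FRAME_BYTES = FRAME_SAMPLES * 2  # 16-bit (2-byte) mono PCM

class SimpleFrameBuffer:
    """Illustrative frame-aligned buffer: releases only whole 512-sample frames."""

    def __init__(self) -> None:
        self._buf = bytearray()

    def write(self, chunk: bytes) -> None:
        # Append arbitrary-sized chunks; alignment is resolved on read.
        self._buf.extend(chunk)

    def pop_frames(self):
        # Yield complete frames; any trailing partial frame stays buffered.
        while len(self._buf) >= FRAME_BYTES:
            frame = bytes(self._buf[:FRAME_BYTES])
            del self._buf[:FRAME_BYTES]
            yield frame
```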
### 🔧 Robust Software Engineering
- **Modular Design**: A component architecture with high cohesion and low coupling.
- **Interface Abstraction**: Dependency inversion through interface-based design.
- **Type System**: Data validation and type checking using Pydantic (see the sketch after this list).
- **Comprehensive Testing**: Unit, integration, and performance tests.
- **Code Standards**: Adherence to PEP 8 style guidelines.
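As an example of the Pydantic-based type system, the sketch below shows how a VAD configuration model might validate its fields. The field names mirror the `create_default_config()` parameters used later in this README, but the model itself is hypothetical, not Cascade's actual schema.

```python
from pydantic import BaseModel, Field

class VADConfigSketch(BaseModel):
    """Hypothetical config model illustrating Pydantic validation."""
    vad_threshold: float = Field(0.5, ge=0.0, le=1.0)  # speech probability threshold
    max_instances: int = Field(1, ge=1)                # concurrent processor instances
    buffer_size_frames: int = Field(64, ge=1)          # frame-aligned buffer capacity

# Out-of-range values raise pydantic.ValidationError at construction time:
config = VADConfigSketch(vad_threshold=0.7, max_instances=3)
```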
### 🛡️ Production-Ready Reliability
- **Error Handling**: Robust error handling and recovery mechanisms.
- **Resource Management**: Automatic cleanup and graceful shutdown (see the shutdown sketch after this list).
- **Monitoring Metrics**: Real-time performance monitoring and statistics.
- **Scalability**: Horizontal scaling by increasing the number of instances.
- **Stability Assurance**: Handles boundary conditions and exceptional cases gracefully.
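One way to exercise the graceful-shutdown behaviour from your own service code is to cancel the processing task on SIGINT/SIGTERM so the `async with` block can clean up. This is a generic asyncio pattern (Unix signal handlers), not a Cascade-specific API:

```python
import asyncio
import signal

async def run_service(process_coro):
    """Cancel the processing task on SIGINT/SIGTERM so context-manager cleanup runs."""
    task = asyncio.create_task(process_coro)
    loop = asyncio.get_running_loop()
    for sig in (signal.SIGINT, signal.SIGTERM):
        loop.add_signal_handler(sig, task.cancel)
    try:
        await task
    except asyncio.CancelledError:
        pass  # shutdown requested; resources are released by the async context manager
```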
## 🏗️ Architecture
Cascade employs a **1:1:1:1 independent architecture** to ensure optimal performance and thread safety.
```mermaid
graph TD
    Client --> StreamProcessor

    subgraph "1:1:1:1 Independent Architecture"
        StreamProcessor --> |per connection| IndependentProcessor[Independent Processor Instance]
        IndependentProcessor --> |independent loading| VADModel[Silero VAD Model]
        IndependentProcessor --> |independent management| VADIterator[VAD Iterator]
        IndependentProcessor --> |independent buffering| FrameBuffer[Frame-Aligned Buffer]
        IndependentProcessor --> |independent state| StateMachine[State Machine]
    end

    subgraph "Asynchronous Processing Flow"
        VADModel --> |asyncio.to_thread| VADInference[VAD Inference]
        VADInference --> StateMachine
        StateMachine --> |None| SingleFrame[Single Frame Output]
        StateMachine --> |start| Collecting[Start Collecting]
        StateMachine --> |end| SpeechSegment[Speech Segment Output]
    end
```
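The asynchronous flow in the diagram offloads the blocking model call to a worker thread and routes the iterator's result (`None`, a `start` marker, or an `end` marker) through the state machine. Below is a minimal sketch of that pattern; `vad_iterator` stands in for Silero's `VADIterator`, and the surrounding names are illustrative rather than Cascade's internal API.

```python
import asyncio

async def process_frame(vad_iterator, frame, collected):
    """Illustrative per-frame flow: thread-offloaded inference plus simple state routing."""
    # The blocking Silero inference runs in a worker thread so the event loop stays free.
    result = await asyncio.to_thread(vad_iterator, frame)

    if result is None:
        if collected:                 # inside a speech segment: keep collecting
            collected.append(frame)
            return None
        return ("frame", frame)       # silence: emit the single frame
    if "start" in result:             # speech started: begin a new segment
        collected.append(frame)
        return None
    if "end" in result:               # speech ended: emit the whole segment
        collected.append(frame)
        segment = list(collected)
        collected.clear()
        return ("segment", segment)
```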
## 🚀 Quick Start
### Installation
```bash
pip install cascade-vad
```

Or, for a full environment setup:
```bash
# Using uv is recommended
uv venv -p 3.12
source .venv/bin/activate
# Install from PyPI (recommended)
pip install cascade-vad
# Or install from source
git clone https://github.com/xucailiang/cascade.git
cd cascade
pip install -e .
```
### Basic Usage
```python
import cascade
import asyncio

async def basic_example():
    """A basic usage example."""

    # Method 1: Simple file processing
    async for result in cascade.process_audio_file("audio.wav"):
        if result.result_type == "segment":
            segment = result.segment
            print(f"🎤 Speech Segment: {segment.start_timestamp_ms:.0f}ms - {segment.end_timestamp_ms:.0f}ms")
        else:
            frame = result.frame
            print(f"🔇 Single Frame: {frame.timestamp_ms:.0f}ms")

    # Method 2: Stream processing
    async with cascade.StreamProcessor() as processor:
        async for result in processor.process_stream(audio_stream):
            if result.result_type == "segment":
                segment = result.segment
                print(f"🎤 Speech Segment: {segment.start_timestamp_ms:.0f}ms - {segment.end_timestamp_ms:.0f}ms")
            else:
                frame = result.frame
                print(f"🔇 Single Frame: {frame.timestamp_ms:.0f}ms")

asyncio.run(basic_example())
```
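Both examples iterate over an `audio_stream`, which is left undefined above. The sketch below shows one minimal way to build such a stream, assuming raw 16 kHz, 16-bit mono PCM read from a file in 1024-byte chunks; check the API docs for the exact chunk type `StreamProcessor` accepts (bytes vs. numpy arrays).

```python
import asyncio

async def audio_stream_from_file(path: str, chunk_bytes: int = 1024):
    """Yield raw PCM chunks as an async iterator (illustrative helper, not part of Cascade)."""
    with open(path, "rb") as f:
        while chunk := f.read(chunk_bytes):
            yield chunk
            await asyncio.sleep(0)  # yield control to the event loop between chunks

# Usage (hypothetical file path):
# audio_stream = audio_stream_from_file("speech_16k_mono.pcm")
```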
### Advanced Configuration
```python
from cascade.stream import StreamProcessor, create_default_config

async def advanced_example():
    """An advanced configuration example."""

    # Custom configuration
    config = create_default_config(
        vad_threshold=0.7,       # Higher detection threshold
        max_instances=3,         # Max 3 concurrent instances
        buffer_size_frames=128   # Larger buffer
    )

    # Use the custom config
    async with StreamProcessor(config) as processor:
        # Process the audio stream
        async for result in processor.process_stream(audio_stream, "my-stream"):
            # Process results...
            pass

        # Get performance statistics
        stats = processor.get_stats()
        print(f"Processing Stats: {stats.summary()}")
        print(f"Throughput: {stats.throughput_chunks_per_second:.1f} chunks/sec")

asyncio.run(advanced_example())
```
## 🧪 Testing
```bash
# Run basic integration tests
python tests/test_simple_vad.py -v
# Run simulated audio stream tests
python tests/test_stream_vad.py -v
# Run performance benchmark tests
python tests/benchmark_performance.py
```
Test Coverage:
- ✅ Basic API Usage
- ✅ Stream Processing
- ✅ File Processing
- ✅ Real Audio VAD
- ✅ Automatic Speech Segment Saving
- ✅ 1:1:1:1 Architecture Validation
- ✅ Performance Benchmarks
- ✅ FrameAlignedBuffer Tests
## 🌐 Web Demo
We provide a complete WebSocket-based web demonstration that showcases Cascade's real-time VAD capabilities with multiple client support.

### Features
- **Real-time Audio Processing**: Capture audio from browser microphone and process with VAD
- **Live VAD Visualization**: Real-time display of VAD detection results
- **Speech Segment Management**: Display detected speech segments with playback support
- **Dynamic VAD Configuration**: Adjust VAD parameters in real-time
- **Multi-client Support**: Independent Cascade instances for each WebSocket connection
### Quick Start
```bash
# Start backend server
cd web_demo
python server.py
# Start frontend (in another terminal)
cd web_demo/frontend
pnpm install && pnpm dev
```
For detailed setup instructions, see [Web Demo Documentation](web_demo/README.md).
## 🔧 Production Deployment
### Best Practices
1. **Resource Allocation**
   - Each instance uses approximately 50MB of memory.
   - Recommended: 2-3 instances per CPU core.
   - Monitor memory usage to prevent Out-of-Memory (OOM) errors.
2. **Performance Tuning** (see the sketch after this list)
   - Adjust `max_instances` to match server CPU cores.
   - Increase `buffer_size_frames` for higher throughput.
   - Tune `vad_threshold` to balance accuracy and sensitivity.
3. **Error Handling**
   - Implement retry mechanisms for transient errors.
   - Use health checks to monitor service status.
   - Log detailed information for troubleshooting.
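Following the tuning guidance above, one way to derive `max_instances` from the host's CPU count is sketched below, using the `create_default_config()` helper from the Advanced Configuration example. The 2-instances-per-core ratio is the recommendation above, not a hard requirement.

```python
import os
from cascade.stream import create_default_config

cores = os.cpu_count() or 1
config = create_default_config(
    vad_threshold=0.5,         # tune for your accuracy/sensitivity trade-off
    max_instances=2 * cores,   # ~2 instances per CPU core, per the guidance above
    buffer_size_frames=128,    # larger buffer for higher throughput
)
```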
### Monitoring Metrics
```python
# Get performance monitoring metrics
stats = processor.get_stats()
# Key monitoring metrics
print(f"Active Instances: {stats.active_instances}/{stats.total_instances}")
print(f"Average Processing Time: {stats.average_processing_time_ms}ms")
print(f"Success Rate: {stats.success_rate:.2%}")
print(f"Memory Usage: {stats.memory_usage_mb:.1f}MB")
```
## 🔧 Requirements
### Core Dependencies
- **Python**: 3.11+ (3.12 recommended)
- **pydantic**: 2.4.0+ (Data validation)
- **numpy**: 1.24.0+ (Numerical computation)
- **scipy**: 1.11.0+ (Signal processing)
- **silero-vad**: 5.1.2+ (VAD model)
- **onnxruntime**: 1.22.1+ (ONNX inference)
- **torchaudio**: 2.7.1+ (Audio processing)
### Development Dependencies
- **pytest**: Testing framework
- **black**: Code formatter
- **ruff**: Linter
- **mypy**: Type checker
- **pre-commit**: Git hooks
## 🤝 Contribution Guide
We welcome community contributions! Please follow these steps:
1. **Fork the project** and create a feature branch.
2. **Install development dependencies**: `pip install -e .[dev]`
3. **Run tests**: `pytest`
4. **Lint your code**: `ruff check . && black --check .`
5. **Type check**: `mypy cascade`
6. **Submit a Pull Request** with a clear description of your changes.
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 🙏 Acknowledgments
- **Silero Team**: For their excellent VAD model.
- **PyTorch Team**: For the deep learning framework.
- **Pydantic Team**: For the type validation system.
- **Python Community**: For the rich ecosystem.
## 📞 Contact
- **Author**: Xucailiang
- **Email**: xucailiang.ai@gmail.com
- **Project Homepage**: https://github.com/xucailiang/cascade
- **Issue Tracker**: https://github.com/xucailiang/cascade/issues
- **Documentation**: https://cascade-vad.readthedocs.io/
## 🗺️ Roadmap
### v0.2.0 (Planned)
- [ ] Support for more audio formats (MP3, FLAC)
- [ ] Real-time microphone input support
- [ ] WebSocket API interface
- [ ] Performance optimization and memory reduction
### v0.3.0 (Planned)
- [ ] Multi-language VAD model support
- [ ] Speech separation and enhancement
- [ ] Cloud deployment support
- [ ] Visual monitoring dashboard
---
**⭐ If you find this project helpful, please give it a star!**