funasr-python


Namefunasr-python JSON
Version 0.1.4 PyPI version JSON
download
home_pageNone
SummaryA high-performance Python client for FunASR WebSocket speech recognition service
upload_time2025-10-29 07:35:09
maintainerFunASR Team
docs_urlNone
authorFunASR Team
requires_python>=3.8
licenseMIT License Copyright (c) 2025 FunASR Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
keywords asr funasr real-time speech-recognition streaming websocket
VCS
bugtrack_url
requirements No requirements were recorded.
Travis-CI No Travis.
coveralls test coverage No coveralls.
            # FunASR Python Client

[![PyPI version](https://badge.fury.io/py/funasr-python.svg)](https://badge.fury.io/py/funasr-python)
[![Python versions](https://img.shields.io/pypi/pyversions/funasr-python.svg)](https://pypi.org/project/funasr-python)
[![License](https://img.shields.io/pypi/l/funasr-python.svg)](https://pypi.org/project/funasr-python)
[![Tests](https://github.com/your-org/funasr-python/workflows/Tests/badge.svg)](https://github.com/your-org/funasr-python/actions)
[![Code style: ruff](https://img.shields.io/badge/code%20style-ruff-000000.svg)](https://github.com/astral-sh/ruff)

A high-performance, enterprise-grade Python client for FunASR WebSocket speech recognition service. Built for production use with comprehensive error handling, automatic reconnection, and extensive customization options.

## Features

### πŸš€ **High Performance**
- **Asynchronous I/O**: Built on asyncio for maximum concurrency
- **Connection Pooling**: Efficient WebSocket connection management
- **Streaming Recognition**: Real-time speech recognition with minimal latency
- **Memory Efficient**: Optimized audio processing with configurable buffering

### πŸ”§ **Production Ready**
- **Robust Error Handling**: Comprehensive exception handling and recovery
- **Automatic Reconnection**: Smart reconnection with exponential backoff
- **Health Monitoring**: Built-in connection health checks
- **Resource Management**: Automatic cleanup and resource deallocation

### πŸ“Š **Recognition Modes for Different Scenarios**
- **Offline Mode**: Best for complete audio files, highest accuracy
- **Online Mode**: Ultra-low latency streaming, suitable for interactive applications
- **Two-Pass Mode** ⭐: **Recommended for real-time scenarios** - combines streaming speed with offline accuracy

### 🎯 **Enterprise Features**
- **Configuration Management**: Flexible configuration with .env support
- **Comprehensive Logging**: Structured logging with configurable levels
- **Metrics & Monitoring**: Built-in performance metrics
- **Type Safety**: Full type hints for better IDE support

### 🎡 **Audio Processing**
- **Multiple Formats**: Support for WAV, FLAC, MP3, and more
- **Automatic Resampling**: Smart audio format conversion
- **Voice Activity Detection**: Optional VAD for improved efficiency
- **Microphone Integration**: Real-time microphone recording support

## Installation

### Basic Installation

```bash
pip install funasr-python
```

### With Optional Dependencies

```bash
# Audio processing capabilities
pip install funasr-python[audio]

# Performance optimizations
pip install funasr-python[performance]

# Development tools
pip install funasr-python[dev]

# Everything
pip install funasr-python[all]
```

### From Source

```bash
git clone https://github.com/alibaba-damo-academy/FunASR.git
cd FunASR/clients/funasr-python
pip install -e .
```

## Quick Start

### Basic Usage

```python
import asyncio
from funasr_client import AsyncFunASRClient

async def main():
    client = AsyncFunASRClient()

    # Recognize an audio file
    result = await client.recognize_file("examples/audio/asr_example.wav")
    print(f"Recognition result: {result.text}")

    await client.close()

if __name__ == "__main__":
    asyncio.run(main())
```

### Stream Recognition (Async Iterator)

Use `recognize_stream` to process custom audio streams from any source:

```python
import asyncio
from funasr_client import AsyncFunASRClient
from funasr_client.callbacks import SimpleCallback

async def stream_recognition_demo():
    """Recognize audio from custom async stream."""
    client = AsyncFunASRClient(
        server_url="ws://localhost:10095",
        mode="2pass"  # Two-Pass mode for best results
    )

    def on_partial_result(result):
        print(f"Partial: {result.text}")

    def on_final_result(result):
        print(f"Final: {result.text} (confidence: {result.confidence:.2f})")

    callback = SimpleCallback(
        on_partial=on_partial_result,
        on_final=on_final_result
    )

    await client.start()

    # Example 1: Stream from file in chunks
    async def audio_stream_from_file(file_path, chunk_size=3200):
        """Read audio file and yield chunks."""
        with open(file_path, 'rb') as f:
            # Skip WAV header (44 bytes)
            f.read(44)
            while True:
                chunk = f.read(chunk_size)
                if not chunk:
                    break
                yield chunk
                await asyncio.sleep(0.01)  # Simulate real-time streaming

    # Start streaming recognition
    await client.recognize_stream(
        audio_stream_from_file("examples/audio/asr_example.wav"),
        callback
    )

    await client.close()

# Example 2: Stream from network source
async def stream_from_network():
    """Stream audio from network source (e.g., RTP, RTSP)."""
    import aiohttp

    client = AsyncFunASRClient(server_url="ws://localhost:10095")

    async def network_audio_stream(url):
        """Stream audio from HTTP/network source."""
        async with aiohttp.ClientSession() as session:
            async with session.get(url) as response:
                async for chunk in response.content.iter_chunked(3200):
                    yield chunk

    def on_result(result):
        if result.is_final:
            print(f"Transcription: {result.text}")

    from funasr_client.callbacks import SimpleCallback
    callback = SimpleCallback(on_final=on_result)

    await client.start()
    await client.recognize_stream(
        network_audio_stream("http://example.com/audio.pcm"),
        callback
    )
    await client.close()

# Example 3: Stream from microphone (PyAudio)
async def stream_from_microphone():
    """Real-time recognition from microphone using PyAudio."""
    import pyaudio
    import asyncio

    client = AsyncFunASRClient(server_url="ws://localhost:10095")

    async def microphone_stream():
        """Capture audio from microphone and yield chunks."""
        CHUNK = 1600  # 100ms at 16kHz
        FORMAT = pyaudio.paInt16
        CHANNELS = 1
        RATE = 16000

        p = pyaudio.PyAudio()
        stream = p.open(
            format=FORMAT,
            channels=CHANNELS,
            rate=RATE,
            input=True,
            frames_per_buffer=CHUNK
        )

        print("🎀 Recording... Press Ctrl+C to stop")
        try:
            while True:
                data = await asyncio.get_event_loop().run_in_executor(
                    None, stream.read, CHUNK
                )
                yield data
        except KeyboardInterrupt:
            pass
        finally:
            stream.stop_stream()
            stream.close()
            p.terminate()

    def on_result(result):
        if result.is_final:
            print(f"You said: {result.text}")
        else:
            print(f"Hearing: {result.text}")

    from funasr_client.callbacks import SimpleCallback
    callback = SimpleCallback(
        on_partial=on_result,
        on_final=on_result
    )

    await client.start()
    await client.recognize_stream(microphone_stream(), callback)
    await client.close()

if __name__ == "__main__":
    # Run different examples
    asyncio.run(stream_recognition_demo())
    # asyncio.run(stream_from_network())
    # asyncio.run(stream_from_microphone())
```

### Real-time Recognition (Microphone)

For real-time applications, we recommend **Two-Pass Mode** which provides the best balance of speed and accuracy:

```python
import asyncio
from funasr_client import AsyncFunASRClient
from funasr_client.models import RecognitionMode, ClientConfig

async def realtime_recognition():
    # Two-Pass Mode: Optimal for real-time scenarios
    config = ClientConfig(
        server_url="ws://localhost:10095",
        mode=RecognitionMode.TWO_PASS,  # Recommended for real-time
        enable_vad=True,  # Voice activity detection
        chunk_interval=10  # Balanced latency/accuracy
    )

    client = AsyncFunASRClient(config=config)

    def on_partial_result(result):
        print(f"Partial: {result.text}")

    def on_final_result(result):
        print(f"Final: {result.text} (confidence: {result.confidence:.2f})")

    from funasr_client.callbacks import SimpleCallback
    callback = SimpleCallback(
        on_partial=on_partial_result,
        on_final=on_final_result
    )

    await client.start()

    # Start real-time session
    session = await client.start_realtime(callback)

    # Your audio streaming logic here
    # In practice, you would stream from microphone or audio source

    await client.close()

if __name__ == "__main__":
    asyncio.run(realtime_recognition())
```

### Ultra-Low Latency (Interactive Applications)

For scenarios requiring minimal latency (e.g., voice assistants):

```python
async def ultra_low_latency():
    config = ClientConfig(
        mode=RecognitionMode.ONLINE,  # Ultra-low latency
        chunk_interval=5,  # Faster processing
        enable_vad=True
    )

    client = AsyncFunASRClient(config=config)
    # Implementation similar to above
```

### Configuration with Environment Variables

Create a `.env` file:

```env
FUNASR_WS_URL=ws://localhost:10095
FUNASR_MODE=2pass  # Recommended: Two-Pass Mode for optimal real-time performance
FUNASR_SAMPLE_RATE=16000
FUNASR_ENABLE_ITN=true
FUNASR_ENABLE_VAD=true  # Recommended for real-time scenarios
```

```python
from funasr_client import create_async_client

# Configuration loaded automatically from .env
client = await create_async_client()
result = await client.recognize_file("examples/audio/asr_example.wav")
print(result.text)
```

## Advanced Usage

### Custom Configuration

```python
from funasr_client import AsyncFunASRClient, ClientConfig, AudioConfig
from funasr_client.models import RecognitionMode, AudioFormat

config = ClientConfig(
    server_url="ws://your-server:10095",
    mode=RecognitionMode.TWO_PASS,
    timeout=30.0,
    max_retries=3,
    audio=AudioConfig(
        sample_rate=16000,
        format=AudioFormat.PCM,
        channels=1
    )
)

client = AsyncFunASRClient(config=config)
```

### Callback Handlers

```python
from funasr_client.callbacks import SimpleCallback

def on_result(result):
    print(f"Received: {result.text}")

def on_error(error):
    print(f"Error: {error}")

callback = SimpleCallback(
    on_result=on_result,
    on_error=on_error
)

client = AsyncFunASRClient(callback=callback)
```

### Multiple Recognition Sessions

```python
async def recognize_multiple():
    # Use Two-Pass Mode for optimal performance
    client = AsyncFunASRClient(
        mode=RecognitionMode.TWO_PASS  # ⭐ Recommended
    )

    # Process multiple files concurrently
    tasks = [
        client.recognize_file("examples/audio/asr_example.wav"),
        client.recognize_file("examples/audio/61-70970-0001.wav"),
        client.recognize_file("examples/audio/61-70970-0016.wav")
    ]

    results = await asyncio.gather(*tasks)
    for i, result in enumerate(results, 1):
        print(f"File {i}: {result.text}")
```

### Real-time Applications Examples

#### Live Streaming Transcription

```python
async def live_transcription():
    """Real-time transcription for live streams."""
    config = ClientConfig(
        mode=RecognitionMode.TWO_PASS,  # ⭐ Optimal for live streaming
        enable_vad=True,                # Filter silence
        chunk_interval=8,               # Balanced performance
        auto_reconnect=True             # Handle network issues
    )

    client = AsyncFunASRClient(config=config)

    def on_result(result):
        if result.is_final:
            # Send to subtitle system
            send_subtitle(result.text, result.confidence)
        else:
            # Show live preview
            show_live_text(result.text)

    from funasr_client.callbacks import SimpleCallback
    callback = SimpleCallback(on_final=on_result, on_partial=on_result)

    await client.start()
    session = await client.start_realtime(callback)

    # Your audio streaming implementation here
    await stream_audio_to_session(session)
```

#### Voice Assistant Integration

```python
async def voice_assistant():
    """Voice assistant with Two-Pass optimization."""
    config = ClientConfig(
        mode=RecognitionMode.TWO_PASS,  # ⭐ Best for voice assistants
        enable_vad=True,                # Automatic speech detection
        chunk_interval=10               # Good responsiveness
    )

    client = AsyncFunASRClient(config=config)

    async def process_command(result):
        if result.is_final and result.confidence > 0.8:
            # Process voice command
            response = await process_voice_command(result.text)
            await speak_response(response)

    from funasr_client.callbacks import AsyncSimpleCallback
    callback = AsyncSimpleCallback(on_final=process_command)

    await client.start()
    session = await client.start_realtime(callback)

    print("🎀 Voice assistant ready. Speak now...")
    # Your microphone streaming logic here
```

## Command Line Interface

The package includes a full-featured CLI:

```bash
# Basic recognition
funasr-client recognize examples/audio/asr_example.wav

# Real-time recognition from microphone
funasr-client stream --source microphone

# Batch processing
funasr-client batch examples/audio/*.wav --output results.jsonl

# Server configuration
funasr-client configure --server-url ws://localhost:10095

# Test connection
funasr-client test-connection
```

## Recognition Mode Selection Guide

Choose the optimal recognition mode for your use case:

| Mode | Latency | Accuracy | Best For | Use Cases |
|------|---------|----------|----------|-----------|
| **Two-Pass** ⭐ | Medium | **High** | **Real-time applications** | Live streaming, real-time subtitles, voice assistants |
| **Online** | **Low** | Medium | Interactive apps | Voice commands, quick responses |
| **Offline** | High | **Highest** | File processing | Transcription services, post-processing |

### Two-Pass Mode Advantages ⭐

**Recommended for real-time scenarios** because it:

- βœ… **Fast partial results** for immediate user feedback (Phase 1: Online)
- βœ… **High-accuracy final results** using 2-pass optimization (Phase 2: Offline)
- βœ… **Balanced resource usage** with smart buffering
- βœ… **Production-ready** with robust error handling

```python
# Recommended configuration for real-time applications
config = ClientConfig(
    mode=RecognitionMode.TWO_PASS,  # Best balance
    enable_vad=True,                # Improves efficiency
    chunk_interval=10,              # Optimal for most cases
    auto_reconnect=True             # Production reliability
)
```

> ⚠️ **Important**: To ensure you receive **both** partial (online) and final (offline) results in Two-Pass mode:
> - βœ… Use `recognize_file()` for complete audio files (handles end-of-speech automatically)
> - βœ… Call `end_realtime_session()` after each utterance in streaming scenarios
> - βœ… Enable VAD (`enable_vad=True`) for better speech boundary detection
> - βœ… Include sufficient silence (0.5-1s) at the end of speech segments
> 
> πŸ“– **See detailed guide**: [Two-Pass Best Practices](docs/TWO_PASS_BEST_PRACTICES_zh.md)

## Configuration

### Environment Variables

| Variable | Description | Default |
|----------|-------------|---------|
| `FUNASR_WS_URL` | WebSocket server URL | `ws://localhost:10095` |
| `FUNASR_MODE` | Recognition mode (`offline`, `online`, `2pass`) | `2pass` ⭐ |
| `FUNASR_TIMEOUT` | Connection timeout | `30.0` |
| `FUNASR_MAX_RETRIES` | Max retry attempts | `3` |
| `FUNASR_SAMPLE_RATE` | Audio sample rate | `16000` |
| `FUNASR_ENABLE_ITN` | Enable inverse text normalization | `true` |
| `FUNASR_ENABLE_VAD` | Enable voice activity detection | `true` |
| `FUNASR_DEBUG` | Enable debug logging | `false` |

> πŸ’‘ **Tip**: Two-Pass Mode (`2pass`) is recommended for most real-time applications as it provides the best balance between latency and accuracy.

### Configuration File

```python
from funasr_client import ConfigManager

# Load from custom config file
config = ConfigManager.from_file("my_config.json")
client = AsyncFunASRClient(config=config.client_config)
```

## Error Handling

```python
from funasr_client.errors import (
    FunASRError,
    ConnectionError,
    AudioError,
    TimeoutError
)

try:
    result = await client.recognize_file("examples/audio/asr_example.wav")
except ConnectionError:
    print("Failed to connect to server")
except AudioError:
    print("Audio processing failed")
except TimeoutError:
    print("Request timed out")
except FunASRError as e:
    print(f"Recognition error: {e}")
```

## Performance Optimization

### Real-time Performance Best Practices

For optimal real-time performance, follow these recommendations:

```python
from funasr_client import AsyncFunASRClient, ClientConfig
from funasr_client.models import RecognitionMode, AudioConfig

# Optimized configuration for real-time scenarios
config = ClientConfig(
    # Core settings
    mode=RecognitionMode.TWO_PASS,  # ⭐ Best balance for real-time
    enable_vad=True,                # Reduces processing load
    chunk_interval=10,              # Optimal latency/accuracy trade-off

    # Performance settings
    auto_reconnect=True,            # Production reliability
    connection_pool_size=5,         # Connection reuse
    buffer_size=8192,               # Optimal buffer size

    # Audio optimization
    audio=AudioConfig(
        sample_rate=16000,          # Standard ASR rate
        channels=1,                 # Mono for efficiency
        sample_width=2              # 16-bit PCM
    )
)

client = AsyncFunASRClient(config=config)
```

### Performance Tuning Guidelines

| Parameter | Recommended Value | Impact |
|-----------|------------------|---------|
| `mode` | `TWO_PASS` ⭐ | Best accuracy/latency balance |
| `chunk_interval` | `10` | Standard real-time performance |
| `chunk_interval` | `5` | Lower latency, higher CPU usage |
| `chunk_interval` | `20` | Higher latency, lower CPU usage |
| `enable_vad` | `True` | Reduces unnecessary processing |
| `sample_rate` | `16000` | Optimal for most ASR models |

### Connection Pooling

```python
from funasr_client import ConnectionManager, ClientConfig

# Create configuration with custom pool size
config = ClientConfig(connection_pool_size=10)
manager = ConnectionManager(config)

# Start the connection manager
await manager.start()

# Use connection manager for multiple clients
client1 = AsyncFunASRClient(connection_manager=manager)
client2 = AsyncFunASRClient(connection_manager=manager)
```

### Audio Processing

```python
from funasr_client import AudioProcessor

# Pre-process audio for better performance
processor = AudioProcessor(
    target_sample_rate=16000,
    enable_vad=True,
    chunk_size=1024
)

processed_audio = processor.process_file("examples/audio/asr_example.wav")
result = await client.recognize_audio(processed_audio)
```

## Testing

Run the test suite:

```bash
# Install test dependencies
pip install funasr-python[test]

# Run all tests
pytest

# Run with coverage
pytest --cov=funasr_client

# Run specific test categories
pytest -m unit
pytest -m integration
```

## Development

### Setup Development Environment

```bash
git clone https://github.com/alibaba-damo-academy/FunASR.git
cd FunASR/clients/funasr-python

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install in development mode
pip install -e .[dev]

# Install pre-commit hooks
pre-commit install
```

### Code Quality

```bash
# Format code
ruff format src/ tests/

# Lint code
ruff check src/ tests/

# Type check
mypy src/

# Run all quality checks
pre-commit run --all-files
```

## API Reference

### Core Classes

- **`AsyncFunASRClient`**: Main asynchronous client
- **`FunASRClient`**: Synchronous client wrapper
- **`ClientConfig`**: Client configuration
- **`AudioConfig`**: Audio processing configuration
- **`RecognitionResult`**: Recognition result container

### Callback System

- **`RecognitionCallback`**: Abstract callback interface
- **`SimpleCallback`**: Basic callback implementation
- **`LoggingCallback`**: Logging-based callback
- **`MultiCallback`**: Combines multiple callbacks

### Audio Processing

- **`AudioProcessor`**: Audio processing utilities
- **`AudioRecorder`**: Microphone recording
- **`AudioFileStreamer`**: File-based audio streaming

### Utilities

- **`ConfigManager`**: Configuration management
- **`ConnectionManager`**: Connection pooling
- **`Timer`**: Performance timing utilities

## Documentation & Guides

### Quick References ⚑
- [Two-Pass Quick Reference](docs/TWO_PASS_QUICK_REFERENCE.md) - Fast solutions for common Two-Pass mode issues
- [Examples Directory](examples/) - Comprehensive usage examples

### Detailed Guides πŸ“–
- [Two-Pass Best Practices (δΈ­ζ–‡)](docs/TWO_PASS_BEST_PRACTICES_zh.md) - Complete guide to avoid empty Phase 2 results
- API Reference (Coming soon)
- Configuration Guide (Coming soon)
- Performance Optimization (Coming soon)

### Architecture Documentation
- [FunASR WebSocket Protocol](../../runtime/docs/websocket_protocol.md)
- [Two-Pass Architecture](../../runtime/docs/funasr-wss-server-2pass-architecture.puml)

## Contributing

We welcome contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for details.

### Development Process

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests for new functionality
5. Run the test suite
6. Submit a pull request

## License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## Changelog

See [CHANGELOG.md](CHANGELOG.md) for version history.

## Support

- **Documentation**: [FunASR Documentation](https://github.com/alibaba-damo-academy/FunASR)
- **Issues**: [GitHub Issues](https://github.com/alibaba-damo-academy/FunASR/issues)
- **Discussions**: [GitHub Discussions](https://github.com/alibaba-damo-academy/FunASR/discussions)

## Acknowledgments

- Built on the excellent [FunASR](https://github.com/alibaba-damo-academy/FunASR) speech recognition toolkit
- Inspired by best practices from the Python asyncio ecosystem
- Thanks to all contributors and users for feedback and improvements
            

Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "funasr-python",
    "maintainer": "FunASR Team",
    "docs_url": null,
    "requires_python": ">=3.8",
    "maintainer_email": null,
    "keywords": "asr, funasr, real-time, speech-recognition, streaming, websocket",
    "author": "FunASR Team",
    "author_email": null,
    "download_url": "https://files.pythonhosted.org/packages/66/74/761803495a036b93f6d7f4aadaee04f0eb109a47773a073124fd8088dcfd/funasr_python-0.1.4.tar.gz",
    "platform": null,
    "description": "# FunASR Python Client\n\n[![PyPI version](https://badge.fury.io/py/funasr-python.svg)](https://badge.fury.io/py/funasr-python)\n[![Python versions](https://img.shields.io/pypi/pyversions/funasr-python.svg)](https://pypi.org/project/funasr-python)\n[![License](https://img.shields.io/pypi/l/funasr-python.svg)](https://pypi.org/project/funasr-python)\n[![Tests](https://github.com/your-org/funasr-python/workflows/Tests/badge.svg)](https://github.com/your-org/funasr-python/actions)\n[![Code style: ruff](https://img.shields.io/badge/code%20style-ruff-000000.svg)](https://github.com/astral-sh/ruff)\n\nA high-performance, enterprise-grade Python client for FunASR WebSocket speech recognition service. Built for production use with comprehensive error handling, automatic reconnection, and extensive customization options.\n\n## Features\n\n### \ud83d\ude80 **High Performance**\n- **Asynchronous I/O**: Built on asyncio for maximum concurrency\n- **Connection Pooling**: Efficient WebSocket connection management\n- **Streaming Recognition**: Real-time speech recognition with minimal latency\n- **Memory Efficient**: Optimized audio processing with configurable buffering\n\n### \ud83d\udd27 **Production Ready**\n- **Robust Error Handling**: Comprehensive exception handling and recovery\n- **Automatic Reconnection**: Smart reconnection with exponential backoff\n- **Health Monitoring**: Built-in connection health checks\n- **Resource Management**: Automatic cleanup and resource deallocation\n\n### \ud83d\udcca **Recognition Modes for Different Scenarios**\n- **Offline Mode**: Best for complete audio files, highest accuracy\n- **Online Mode**: Ultra-low latency streaming, suitable for interactive applications\n- **Two-Pass Mode** \u2b50: **Recommended for real-time scenarios** - combines streaming speed with offline accuracy\n\n### \ud83c\udfaf **Enterprise Features**\n- **Configuration Management**: Flexible configuration with .env support\n- **Comprehensive Logging**: Structured logging with configurable levels\n- **Metrics & Monitoring**: Built-in performance metrics\n- **Type Safety**: Full type hints for better IDE support\n\n### \ud83c\udfb5 **Audio Processing**\n- **Multiple Formats**: Support for WAV, FLAC, MP3, and more\n- **Automatic Resampling**: Smart audio format conversion\n- **Voice Activity Detection**: Optional VAD for improved efficiency\n- **Microphone Integration**: Real-time microphone recording support\n\n## Installation\n\n### Basic Installation\n\n```bash\npip install funasr-python\n```\n\n### With Optional Dependencies\n\n```bash\n# Audio processing capabilities\npip install funasr-python[audio]\n\n# Performance optimizations\npip install funasr-python[performance]\n\n# Development tools\npip install funasr-python[dev]\n\n# Everything\npip install funasr-python[all]\n```\n\n### From Source\n\n```bash\ngit clone https://github.com/alibaba-damo-academy/FunASR.git\ncd FunASR/clients/funasr-python\npip install -e .\n```\n\n## Quick Start\n\n### Basic Usage\n\n```python\nimport asyncio\nfrom funasr_client import AsyncFunASRClient\n\nasync def main():\n    client = AsyncFunASRClient()\n\n    # Recognize an audio file\n    result = await client.recognize_file(\"examples/audio/asr_example.wav\")\n    print(f\"Recognition result: {result.text}\")\n\n    await client.close()\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n```\n\n### Stream Recognition (Async Iterator)\n\nUse `recognize_stream` to process custom audio streams from any source:\n\n```python\nimport asyncio\nfrom funasr_client import AsyncFunASRClient\nfrom funasr_client.callbacks import SimpleCallback\n\nasync def stream_recognition_demo():\n    \"\"\"Recognize audio from custom async stream.\"\"\"\n    client = AsyncFunASRClient(\n        server_url=\"ws://localhost:10095\",\n        mode=\"2pass\"  # Two-Pass mode for best results\n    )\n\n    def on_partial_result(result):\n        print(f\"Partial: {result.text}\")\n\n    def on_final_result(result):\n        print(f\"Final: {result.text} (confidence: {result.confidence:.2f})\")\n\n    callback = SimpleCallback(\n        on_partial=on_partial_result,\n        on_final=on_final_result\n    )\n\n    await client.start()\n\n    # Example 1: Stream from file in chunks\n    async def audio_stream_from_file(file_path, chunk_size=3200):\n        \"\"\"Read audio file and yield chunks.\"\"\"\n        with open(file_path, 'rb') as f:\n            # Skip WAV header (44 bytes)\n            f.read(44)\n            while True:\n                chunk = f.read(chunk_size)\n                if not chunk:\n                    break\n                yield chunk\n                await asyncio.sleep(0.01)  # Simulate real-time streaming\n\n    # Start streaming recognition\n    await client.recognize_stream(\n        audio_stream_from_file(\"examples/audio/asr_example.wav\"),\n        callback\n    )\n\n    await client.close()\n\n# Example 2: Stream from network source\nasync def stream_from_network():\n    \"\"\"Stream audio from network source (e.g., RTP, RTSP).\"\"\"\n    import aiohttp\n\n    client = AsyncFunASRClient(server_url=\"ws://localhost:10095\")\n\n    async def network_audio_stream(url):\n        \"\"\"Stream audio from HTTP/network source.\"\"\"\n        async with aiohttp.ClientSession() as session:\n            async with session.get(url) as response:\n                async for chunk in response.content.iter_chunked(3200):\n                    yield chunk\n\n    def on_result(result):\n        if result.is_final:\n            print(f\"Transcription: {result.text}\")\n\n    from funasr_client.callbacks import SimpleCallback\n    callback = SimpleCallback(on_final=on_result)\n\n    await client.start()\n    await client.recognize_stream(\n        network_audio_stream(\"http://example.com/audio.pcm\"),\n        callback\n    )\n    await client.close()\n\n# Example 3: Stream from microphone (PyAudio)\nasync def stream_from_microphone():\n    \"\"\"Real-time recognition from microphone using PyAudio.\"\"\"\n    import pyaudio\n    import asyncio\n\n    client = AsyncFunASRClient(server_url=\"ws://localhost:10095\")\n\n    async def microphone_stream():\n        \"\"\"Capture audio from microphone and yield chunks.\"\"\"\n        CHUNK = 1600  # 100ms at 16kHz\n        FORMAT = pyaudio.paInt16\n        CHANNELS = 1\n        RATE = 16000\n\n        p = pyaudio.PyAudio()\n        stream = p.open(\n            format=FORMAT,\n            channels=CHANNELS,\n            rate=RATE,\n            input=True,\n            frames_per_buffer=CHUNK\n        )\n\n        print(\"\ud83c\udfa4 Recording... Press Ctrl+C to stop\")\n        try:\n            while True:\n                data = await asyncio.get_event_loop().run_in_executor(\n                    None, stream.read, CHUNK\n                )\n                yield data\n        except KeyboardInterrupt:\n            pass\n        finally:\n            stream.stop_stream()\n            stream.close()\n            p.terminate()\n\n    def on_result(result):\n        if result.is_final:\n            print(f\"You said: {result.text}\")\n        else:\n            print(f\"Hearing: {result.text}\")\n\n    from funasr_client.callbacks import SimpleCallback\n    callback = SimpleCallback(\n        on_partial=on_result,\n        on_final=on_result\n    )\n\n    await client.start()\n    await client.recognize_stream(microphone_stream(), callback)\n    await client.close()\n\nif __name__ == \"__main__\":\n    # Run different examples\n    asyncio.run(stream_recognition_demo())\n    # asyncio.run(stream_from_network())\n    # asyncio.run(stream_from_microphone())\n```\n\n### Real-time Recognition (Microphone)\n\nFor real-time applications, we recommend **Two-Pass Mode** which provides the best balance of speed and accuracy:\n\n```python\nimport asyncio\nfrom funasr_client import AsyncFunASRClient\nfrom funasr_client.models import RecognitionMode, ClientConfig\n\nasync def realtime_recognition():\n    # Two-Pass Mode: Optimal for real-time scenarios\n    config = ClientConfig(\n        server_url=\"ws://localhost:10095\",\n        mode=RecognitionMode.TWO_PASS,  # Recommended for real-time\n        enable_vad=True,  # Voice activity detection\n        chunk_interval=10  # Balanced latency/accuracy\n    )\n\n    client = AsyncFunASRClient(config=config)\n\n    def on_partial_result(result):\n        print(f\"Partial: {result.text}\")\n\n    def on_final_result(result):\n        print(f\"Final: {result.text} (confidence: {result.confidence:.2f})\")\n\n    from funasr_client.callbacks import SimpleCallback\n    callback = SimpleCallback(\n        on_partial=on_partial_result,\n        on_final=on_final_result\n    )\n\n    await client.start()\n\n    # Start real-time session\n    session = await client.start_realtime(callback)\n\n    # Your audio streaming logic here\n    # In practice, you would stream from microphone or audio source\n\n    await client.close()\n\nif __name__ == \"__main__\":\n    asyncio.run(realtime_recognition())\n```\n\n### Ultra-Low Latency (Interactive Applications)\n\nFor scenarios requiring minimal latency (e.g., voice assistants):\n\n```python\nasync def ultra_low_latency():\n    config = ClientConfig(\n        mode=RecognitionMode.ONLINE,  # Ultra-low latency\n        chunk_interval=5,  # Faster processing\n        enable_vad=True\n    )\n\n    client = AsyncFunASRClient(config=config)\n    # Implementation similar to above\n```\n\n### Configuration with Environment Variables\n\nCreate a `.env` file:\n\n```env\nFUNASR_WS_URL=ws://localhost:10095\nFUNASR_MODE=2pass  # Recommended: Two-Pass Mode for optimal real-time performance\nFUNASR_SAMPLE_RATE=16000\nFUNASR_ENABLE_ITN=true\nFUNASR_ENABLE_VAD=true  # Recommended for real-time scenarios\n```\n\n```python\nfrom funasr_client import create_async_client\n\n# Configuration loaded automatically from .env\nclient = await create_async_client()\nresult = await client.recognize_file(\"examples/audio/asr_example.wav\")\nprint(result.text)\n```\n\n## Advanced Usage\n\n### Custom Configuration\n\n```python\nfrom funasr_client import AsyncFunASRClient, ClientConfig, AudioConfig\nfrom funasr_client.models import RecognitionMode, AudioFormat\n\nconfig = ClientConfig(\n    server_url=\"ws://your-server:10095\",\n    mode=RecognitionMode.TWO_PASS,\n    timeout=30.0,\n    max_retries=3,\n    audio=AudioConfig(\n        sample_rate=16000,\n        format=AudioFormat.PCM,\n        channels=1\n    )\n)\n\nclient = AsyncFunASRClient(config=config)\n```\n\n### Callback Handlers\n\n```python\nfrom funasr_client.callbacks import SimpleCallback\n\ndef on_result(result):\n    print(f\"Received: {result.text}\")\n\ndef on_error(error):\n    print(f\"Error: {error}\")\n\ncallback = SimpleCallback(\n    on_result=on_result,\n    on_error=on_error\n)\n\nclient = AsyncFunASRClient(callback=callback)\n```\n\n### Multiple Recognition Sessions\n\n```python\nasync def recognize_multiple():\n    # Use Two-Pass Mode for optimal performance\n    client = AsyncFunASRClient(\n        mode=RecognitionMode.TWO_PASS  # \u2b50 Recommended\n    )\n\n    # Process multiple files concurrently\n    tasks = [\n        client.recognize_file(\"examples/audio/asr_example.wav\"),\n        client.recognize_file(\"examples/audio/61-70970-0001.wav\"),\n        client.recognize_file(\"examples/audio/61-70970-0016.wav\")\n    ]\n\n    results = await asyncio.gather(*tasks)\n    for i, result in enumerate(results, 1):\n        print(f\"File {i}: {result.text}\")\n```\n\n### Real-time Applications Examples\n\n#### Live Streaming Transcription\n\n```python\nasync def live_transcription():\n    \"\"\"Real-time transcription for live streams.\"\"\"\n    config = ClientConfig(\n        mode=RecognitionMode.TWO_PASS,  # \u2b50 Optimal for live streaming\n        enable_vad=True,                # Filter silence\n        chunk_interval=8,               # Balanced performance\n        auto_reconnect=True             # Handle network issues\n    )\n\n    client = AsyncFunASRClient(config=config)\n\n    def on_result(result):\n        if result.is_final:\n            # Send to subtitle system\n            send_subtitle(result.text, result.confidence)\n        else:\n            # Show live preview\n            show_live_text(result.text)\n\n    from funasr_client.callbacks import SimpleCallback\n    callback = SimpleCallback(on_final=on_result, on_partial=on_result)\n\n    await client.start()\n    session = await client.start_realtime(callback)\n\n    # Your audio streaming implementation here\n    await stream_audio_to_session(session)\n```\n\n#### Voice Assistant Integration\n\n```python\nasync def voice_assistant():\n    \"\"\"Voice assistant with Two-Pass optimization.\"\"\"\n    config = ClientConfig(\n        mode=RecognitionMode.TWO_PASS,  # \u2b50 Best for voice assistants\n        enable_vad=True,                # Automatic speech detection\n        chunk_interval=10               # Good responsiveness\n    )\n\n    client = AsyncFunASRClient(config=config)\n\n    async def process_command(result):\n        if result.is_final and result.confidence > 0.8:\n            # Process voice command\n            response = await process_voice_command(result.text)\n            await speak_response(response)\n\n    from funasr_client.callbacks import AsyncSimpleCallback\n    callback = AsyncSimpleCallback(on_final=process_command)\n\n    await client.start()\n    session = await client.start_realtime(callback)\n\n    print(\"\ud83c\udfa4 Voice assistant ready. Speak now...\")\n    # Your microphone streaming logic here\n```\n\n## Command Line Interface\n\nThe package includes a full-featured CLI:\n\n```bash\n# Basic recognition\nfunasr-client recognize examples/audio/asr_example.wav\n\n# Real-time recognition from microphone\nfunasr-client stream --source microphone\n\n# Batch processing\nfunasr-client batch examples/audio/*.wav --output results.jsonl\n\n# Server configuration\nfunasr-client configure --server-url ws://localhost:10095\n\n# Test connection\nfunasr-client test-connection\n```\n\n## Recognition Mode Selection Guide\n\nChoose the optimal recognition mode for your use case:\n\n| Mode | Latency | Accuracy | Best For | Use Cases |\n|------|---------|----------|----------|-----------|\n| **Two-Pass** \u2b50 | Medium | **High** | **Real-time applications** | Live streaming, real-time subtitles, voice assistants |\n| **Online** | **Low** | Medium | Interactive apps | Voice commands, quick responses |\n| **Offline** | High | **Highest** | File processing | Transcription services, post-processing |\n\n### Two-Pass Mode Advantages \u2b50\n\n**Recommended for real-time scenarios** because it:\n\n- \u2705 **Fast partial results** for immediate user feedback (Phase 1: Online)\n- \u2705 **High-accuracy final results** using 2-pass optimization (Phase 2: Offline)\n- \u2705 **Balanced resource usage** with smart buffering\n- \u2705 **Production-ready** with robust error handling\n\n```python\n# Recommended configuration for real-time applications\nconfig = ClientConfig(\n    mode=RecognitionMode.TWO_PASS,  # Best balance\n    enable_vad=True,                # Improves efficiency\n    chunk_interval=10,              # Optimal for most cases\n    auto_reconnect=True             # Production reliability\n)\n```\n\n> \u26a0\ufe0f **Important**: To ensure you receive **both** partial (online) and final (offline) results in Two-Pass mode:\n> - \u2705 Use `recognize_file()` for complete audio files (handles end-of-speech automatically)\n> - \u2705 Call `end_realtime_session()` after each utterance in streaming scenarios\n> - \u2705 Enable VAD (`enable_vad=True`) for better speech boundary detection\n> - \u2705 Include sufficient silence (0.5-1s) at the end of speech segments\n> \n> \ud83d\udcd6 **See detailed guide**: [Two-Pass Best Practices](docs/TWO_PASS_BEST_PRACTICES_zh.md)\n\n## Configuration\n\n### Environment Variables\n\n| Variable | Description | Default |\n|----------|-------------|---------|\n| `FUNASR_WS_URL` | WebSocket server URL | `ws://localhost:10095` |\n| `FUNASR_MODE` | Recognition mode (`offline`, `online`, `2pass`) | `2pass` \u2b50 |\n| `FUNASR_TIMEOUT` | Connection timeout | `30.0` |\n| `FUNASR_MAX_RETRIES` | Max retry attempts | `3` |\n| `FUNASR_SAMPLE_RATE` | Audio sample rate | `16000` |\n| `FUNASR_ENABLE_ITN` | Enable inverse text normalization | `true` |\n| `FUNASR_ENABLE_VAD` | Enable voice activity detection | `true` |\n| `FUNASR_DEBUG` | Enable debug logging | `false` |\n\n> \ud83d\udca1 **Tip**: Two-Pass Mode (`2pass`) is recommended for most real-time applications as it provides the best balance between latency and accuracy.\n\n### Configuration File\n\n```python\nfrom funasr_client import ConfigManager\n\n# Load from custom config file\nconfig = ConfigManager.from_file(\"my_config.json\")\nclient = AsyncFunASRClient(config=config.client_config)\n```\n\n## Error Handling\n\n```python\nfrom funasr_client.errors import (\n    FunASRError,\n    ConnectionError,\n    AudioError,\n    TimeoutError\n)\n\ntry:\n    result = await client.recognize_file(\"examples/audio/asr_example.wav\")\nexcept ConnectionError:\n    print(\"Failed to connect to server\")\nexcept AudioError:\n    print(\"Audio processing failed\")\nexcept TimeoutError:\n    print(\"Request timed out\")\nexcept FunASRError as e:\n    print(f\"Recognition error: {e}\")\n```\n\n## Performance Optimization\n\n### Real-time Performance Best Practices\n\nFor optimal real-time performance, follow these recommendations:\n\n```python\nfrom funasr_client import AsyncFunASRClient, ClientConfig\nfrom funasr_client.models import RecognitionMode, AudioConfig\n\n# Optimized configuration for real-time scenarios\nconfig = ClientConfig(\n    # Core settings\n    mode=RecognitionMode.TWO_PASS,  # \u2b50 Best balance for real-time\n    enable_vad=True,                # Reduces processing load\n    chunk_interval=10,              # Optimal latency/accuracy trade-off\n\n    # Performance settings\n    auto_reconnect=True,            # Production reliability\n    connection_pool_size=5,         # Connection reuse\n    buffer_size=8192,               # Optimal buffer size\n\n    # Audio optimization\n    audio=AudioConfig(\n        sample_rate=16000,          # Standard ASR rate\n        channels=1,                 # Mono for efficiency\n        sample_width=2              # 16-bit PCM\n    )\n)\n\nclient = AsyncFunASRClient(config=config)\n```\n\n### Performance Tuning Guidelines\n\n| Parameter | Recommended Value | Impact |\n|-----------|------------------|---------|\n| `mode` | `TWO_PASS` \u2b50 | Best accuracy/latency balance |\n| `chunk_interval` | `10` | Standard real-time performance |\n| `chunk_interval` | `5` | Lower latency, higher CPU usage |\n| `chunk_interval` | `20` | Higher latency, lower CPU usage |\n| `enable_vad` | `True` | Reduces unnecessary processing |\n| `sample_rate` | `16000` | Optimal for most ASR models |\n\n### Connection Pooling\n\n```python\nfrom funasr_client import ConnectionManager, ClientConfig\n\n# Create configuration with custom pool size\nconfig = ClientConfig(connection_pool_size=10)\nmanager = ConnectionManager(config)\n\n# Start the connection manager\nawait manager.start()\n\n# Use connection manager for multiple clients\nclient1 = AsyncFunASRClient(connection_manager=manager)\nclient2 = AsyncFunASRClient(connection_manager=manager)\n```\n\n### Audio Processing\n\n```python\nfrom funasr_client import AudioProcessor\n\n# Pre-process audio for better performance\nprocessor = AudioProcessor(\n    target_sample_rate=16000,\n    enable_vad=True,\n    chunk_size=1024\n)\n\nprocessed_audio = processor.process_file(\"examples/audio/asr_example.wav\")\nresult = await client.recognize_audio(processed_audio)\n```\n\n## Testing\n\nRun the test suite:\n\n```bash\n# Install test dependencies\npip install funasr-python[test]\n\n# Run all tests\npytest\n\n# Run with coverage\npytest --cov=funasr_client\n\n# Run specific test categories\npytest -m unit\npytest -m integration\n```\n\n## Development\n\n### Setup Development Environment\n\n```bash\ngit clone https://github.com/alibaba-damo-academy/FunASR.git\ncd FunASR/clients/funasr-python\n\n# Create virtual environment\npython -m venv venv\nsource venv/bin/activate  # On Windows: venv\\Scripts\\activate\n\n# Install in development mode\npip install -e .[dev]\n\n# Install pre-commit hooks\npre-commit install\n```\n\n### Code Quality\n\n```bash\n# Format code\nruff format src/ tests/\n\n# Lint code\nruff check src/ tests/\n\n# Type check\nmypy src/\n\n# Run all quality checks\npre-commit run --all-files\n```\n\n## API Reference\n\n### Core Classes\n\n- **`AsyncFunASRClient`**: Main asynchronous client\n- **`FunASRClient`**: Synchronous client wrapper\n- **`ClientConfig`**: Client configuration\n- **`AudioConfig`**: Audio processing configuration\n- **`RecognitionResult`**: Recognition result container\n\n### Callback System\n\n- **`RecognitionCallback`**: Abstract callback interface\n- **`SimpleCallback`**: Basic callback implementation\n- **`LoggingCallback`**: Logging-based callback\n- **`MultiCallback`**: Combines multiple callbacks\n\n### Audio Processing\n\n- **`AudioProcessor`**: Audio processing utilities\n- **`AudioRecorder`**: Microphone recording\n- **`AudioFileStreamer`**: File-based audio streaming\n\n### Utilities\n\n- **`ConfigManager`**: Configuration management\n- **`ConnectionManager`**: Connection pooling\n- **`Timer`**: Performance timing utilities\n\n## Documentation & Guides\n\n### Quick References \u26a1\n- [Two-Pass Quick Reference](docs/TWO_PASS_QUICK_REFERENCE.md) - Fast solutions for common Two-Pass mode issues\n- [Examples Directory](examples/) - Comprehensive usage examples\n\n### Detailed Guides \ud83d\udcd6\n- [Two-Pass Best Practices (\u4e2d\u6587)](docs/TWO_PASS_BEST_PRACTICES_zh.md) - Complete guide to avoid empty Phase 2 results\n- API Reference (Coming soon)\n- Configuration Guide (Coming soon)\n- Performance Optimization (Coming soon)\n\n### Architecture Documentation\n- [FunASR WebSocket Protocol](../../runtime/docs/websocket_protocol.md)\n- [Two-Pass Architecture](../../runtime/docs/funasr-wss-server-2pass-architecture.puml)\n\n## Contributing\n\nWe welcome contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for details.\n\n### Development Process\n\n1. Fork the repository\n2. Create a feature branch\n3. Make your changes\n4. Add tests for new functionality\n5. Run the test suite\n6. Submit a pull request\n\n## License\n\nThis project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.\n\n## Changelog\n\nSee [CHANGELOG.md](CHANGELOG.md) for version history.\n\n## Support\n\n- **Documentation**: [FunASR Documentation](https://github.com/alibaba-damo-academy/FunASR)\n- **Issues**: [GitHub Issues](https://github.com/alibaba-damo-academy/FunASR/issues)\n- **Discussions**: [GitHub Discussions](https://github.com/alibaba-damo-academy/FunASR/discussions)\n\n## Acknowledgments\n\n- Built on the excellent [FunASR](https://github.com/alibaba-damo-academy/FunASR) speech recognition toolkit\n- Inspired by best practices from the Python asyncio ecosystem\n- Thanks to all contributors and users for feedback and improvements",
    "bugtrack_url": null,
    "license": "MIT License\n        Copyright (c) 2025 FunASR\n        \n        Permission is hereby granted, free of charge, to any person obtaining a copy\n        of this software and associated documentation files (the \"Software\"), to deal\n        in the Software without restriction, including without limitation the rights\n        to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n        copies of the Software, and to permit persons to whom the Software is\n        furnished to do so, subject to the following conditions:\n        \n        The above copyright notice and this permission notice shall be included in\n        all copies or substantial portions of the Software.\n        \n        THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n        IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n        FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n        AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n        LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n        OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n        THE SOFTWARE.",
    "summary": "A high-performance Python client for FunASR WebSocket speech recognition service",
    "version": "0.1.4",
    "project_urls": {
        "Bug Reports": "https://github.com/alibaba-damo-academy/FunASR/issues",
        "Changelog": "https://github.com/alibaba-damo-academy/FunASR/blob/main/CHANGELOG.md",
        "Documentation": "https://github.com/alibaba-damo-academy/FunASR",
        "Repository": "https://github.com/alibaba-damo-academy/FunASR"
    },
    "split_keywords": [
        "asr",
        " funasr",
        " real-time",
        " speech-recognition",
        " streaming",
        " websocket"
    ],
    "urls": [
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "8090c98a142c5917bcf0ce7558570b733c1ac0baba3a51be61e60210f23b15c8",
                "md5": "fe8b411bed38621787517aa00172f6b4",
                "sha256": "2b4d310c9e770af3259b1bc5bc8252e26322cb9eb5cf4c3375577161362f1b1b"
            },
            "downloads": -1,
            "filename": "funasr_python-0.1.4-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "fe8b411bed38621787517aa00172f6b4",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.8",
            "size": 52133,
            "upload_time": "2025-10-29T07:35:07",
            "upload_time_iso_8601": "2025-10-29T07:35:07.751393Z",
            "url": "https://files.pythonhosted.org/packages/80/90/c98a142c5917bcf0ce7558570b733c1ac0baba3a51be61e60210f23b15c8/funasr_python-0.1.4-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "6674761803495a036b93f6d7f4aadaee04f0eb109a47773a073124fd8088dcfd",
                "md5": "0e0c5a454f9e6200d1df2bd573045425",
                "sha256": "876ab6f8c6f99a4970940b7875fdf00b7ce87223da40b6dc9655f2b4e05eadb4"
            },
            "downloads": -1,
            "filename": "funasr_python-0.1.4.tar.gz",
            "has_sig": false,
            "md5_digest": "0e0c5a454f9e6200d1df2bd573045425",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.8",
            "size": 332339,
            "upload_time": "2025-10-29T07:35:09",
            "upload_time_iso_8601": "2025-10-29T07:35:09.457765Z",
            "url": "https://files.pythonhosted.org/packages/66/74/761803495a036b93f6d7f4aadaee04f0eb109a47773a073124fd8088dcfd/funasr_python-0.1.4.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-10-29 07:35:09",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "alibaba-damo-academy",
    "github_project": "FunASR",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": false,
    "lcname": "funasr-python"
}
        
Elapsed time: 0.62001s